infoscale-sles12_x86_64-Patch-7.4.2.5100

 Basic information
Release type: Patch
Release date: 2024-03-12
OS update support: None
Technote: None
Documentation: None
Popularity: 1103 viewed
Download size: 523.6 MB
Checksum: 1291812950

 Applies to one or more of the following products:
InfoScale Availability 7.4.2 On SLES12 x86-64
InfoScale Enterprise 7.4.2 On SLES12 x86-64
InfoScale Foundation 7.4.2 On SLES12 x86-64
InfoScale Storage 7.4.2 On SLES12 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
4006619, 4006982, 4007372, 4007374, 4007375, 4007376, 4007677, 4008502, 4010025, 4011781, 4011971, 4012032, 4012176, 4012397, 4012765, 4013034, 4013169, 4013420, 4013446, 4014719, 4014720, 4014920, 4014985, 4015287, 4015834, 4015835, 4016078, 4016721, 4017282, 4017818, 4017820, 4018173, 4018178, 4018182, 4019535, 4019536, 4019877, 4020055, 4020056, 4020090, 4020207, 4020438, 4020528, 4020912, 4021235, 4021238, 4021240, 4021346, 4021366, 4021517, 4021946, 4022492, 4022942, 4023095, 4023553, 4023556, 4027741, 4031342, 4033243, 4033688, 4034357, 4037283, 4037288, 4038945, 4039475, 4040238, 4040608, 4040612, 4040618, 4040836, 4041770, 4042038, 4042590, 4042686, 4042890, 4043366, 4043372, 4043892, 4044184, 4045605, 4045606, 4045881, 4046196, 4046200, 4046265, 4046266, 4046267, 4046271, 4046272, 4046415, 4046419, 4046420, 4046423, 4046515, 4046520, 4046521, 4046524, 4046525, 4046526, 4046829, 4046906, 4046907, 4046908, 4047510, 4047568, 4047592, 4047695, 4047722, 4048120, 4048981, 4049091, 4049097, 4049268, 4049416, 4049440, 4049572, 4050467, 4050870, 4051703, 4052119, 4052860, 4052867, 4053752, 4053875, 4053876, 4054311, 4054322, 4054435, 4054857, 4054913, 4055055, 4056329, 4056919, 4057309, 4057311, 4057313, 4057424, 4057429, 4057596, 4058873, 4059899, 4059901, 4060549, 4060566, 4060584, 4060585, 4060805, 4060839, 4060962, 4060966, 4061004, 4061036, 4061055, 4061057, 4061203, 4061298, 4061317, 4061509, 4061527, 4061646, 4062461, 4062577, 4062746, 4062747, 4062751, 4062755, 4063374, 4064523, 4066237, 4066721, 4066930, 4067422, 4067433, 4067460, 4067464, 4067706, 4067710, 4067712, 4067713, 4067715, 4067717, 4067914, 4067915, 4069522, 4069523, 4069524, 4069525, 4070099, 4070186, 4070253, 4070366, 4070908, 4071007, 4071090, 4071105, 4071131, 4072874, 4074298, 4075873, 4075875, 4076495, 4077735, 4079500, 4079532, 4079828, 4079916, 4080026, 4080100, 4080777, 4081964, 4083792, 4083948, 4084881, 4084977, 4085612, 4086043, 4086047, 4086570, 4086624, 4087148, 4087157, 4087809, 4088025, 4088078, 4088159, 4089394, 4089657, 4089815, 4090282, 4090311, 4090411, 4090415, 4090442, 4090541, 4090573, 4090591, 4090599, 4090600, 4090601, 4090604, 4090617, 4090621, 4090639, 4090745, 4090932, 4090946, 4090960, 4090970, 4090986, 4091000, 4091036, 4091248, 4091580, 4091588, 4091819, 4091910, 4091911, 4091912, 4091963, 4091989, 4092002, 4092407, 4092589, 4092597, 4093193, 4093306, 4093943, 4094433, 4094664, 4098108, 4099550, 4100721, 4102403, 4102424, 4103494, 4105253, 4105278, 4105296, 4105305, 4105309, 4105318, 4105323, 4105325, 4105330, 4105334, 4106001, 4106702, 4107084, 4110666, 4110765, 4110766, 4111010, 4112305, 4112578, 4113062, 4113327, 4113616, 4113661, 4113663, 4113664, 4113666, 4114018, 4114033, 4114040, 4114251, 4115231, 4115943, 4116214, 4116348, 4116422, 4116427, 4116429, 4116435, 4116437, 4116576, 4117482, 4117899, 4117989, 4118256, 4118838, 4119951, 4120529, 4120531, 4120540, 4120545, 4120547, 4120653, 4120720, 4120722, 4120724, 4120728, 4120769, 4120783, 4120876, 4120899, 4120903, 4120916, 4120940, 4121068, 4121071, 4121075, 4121081, 4121083, 4121222, 4121243, 4121254, 4121625, 4121681, 4121763, 4121767, 4121790, 4121875, 4122629, 4122632, 4123313, 4124324, 4125096, 4125931, 4126041, 4126254, 4126360, 4127473, 4127475, 4128868, 4128876, 4128885, 4129502, 4130256, 4131718, 4133298, 4133718, 4134361, 4134638, 4134659, 4134662, 4134665, 4134673, 4134702, 4134887, 4134888, 4134889, 4135000, 4135005, 4135008, 4135017, 4135018, 4135022, 4135027, 4135028, 4135038, 4135040, 4135042, 4135057, 4135102, 4135105, 
4135142, 4135149, 4135150, 4135184, 4135222, 4135248, 4135270, 4135325, 4135826, 4136002, 4136095, 4136238, 4136239, 4136240, 4136316, 4136360, 4136482, 4137008, 4137137, 4137139, 4137266, 4139975, 4140140, 4140562, 4140572, 4140587, 4140589, 4140594, 4140599, 4140690, 4140691, 4140692, 4140693, 4140694, 4140706, 4140782, 4141124, 4141125, 4142044, 4146957, 4149099, 4149222, 4149660, 4149888, 4149891, 4149898, 4149904, 4149906, 4150574, 4150577, 4150589, 4150621, 4151832, 4151834, 4151837, 4151838, 4152117, 4152119, 4152180, 4152549, 4152550, 4152553, 4152554, 4152732, 4152963, 4153768, 4154451, 4155169, 4155832, 4156008

 Patch ID:
VRTSvlic-4.01.742.300-SLES
VRTSvcswiz-7.4.2.2100-SLES12
VRTSperl-5.30.0.5-SLES12
VRTSsfcpi-7.4.2.3000-GENERIC
VRTSdbed-7.4.2.2300-SLES
VRTSgab-7.4.2.3100-SLES12
VRTScps-7.4.2.3100-SLES12
VRTSvcsea-7.4.2.3100-SLES12
VRTSdbac-7.4.2.2500-SLES12
VRTSvxfen-7.4.2.3100-SLES12
VRTSpython-3.9.2.0_1-SLES12
VRTSspt-7.4.2.1500-0029_SLES12
VRTSvxvm-7.4.2.4900-SLES12
VRTSvcsag-7.4.2.3700-SLES12
VRTSaslapm-7.4.2.4900-SLES12
VRTSvcs-7.4.2.3700-SLES12
VRTSamf-7.4.2.3700-SLES12
VRTSvxfs-7.4.2.4900-SLES12
VRTSllt-7.4.2.3700-SLES12
VRTSglm-7.4.2.4900-SLES12
VRTScavf-7.4.2.4900-SLES12
VRTSodm-7.4.2.4900-SLES12
VRTSveki-7.4.2.4900-SLES12
VRTSfsadv-7.4.2.4900-SLES12
VRTSgms-7.4.2.4900-SLES12
VRTSsfmh-7.4.2.1101_Linux.rpm

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.2 * * *
                         * * * Patch 5100 * * *
                         Patch Date: 2024-02-28


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 7.4.2 Patch 5100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSperl
VRTSpython
VRTSsfcpi
VRTSsfmh
VRTSspt
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSvcswiz
VRTSveki
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.2
   * InfoScale Enterprise 7.4.2
   * InfoScale Foundation 7.4.2
   * InfoScale Storage 7.4.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.2.4900
* 4093193 (4090032) System might panic in vx_dev_strategy() during Sybase or Oracle configuration.
* 4114018 (4067505) invalid VX_AF_OVERLAY aflags error in fsck
* 4114033 (4101634) Directory inode getting incorrect file-type error in fsck.
* 4114040 (4083056) Hang observed while punching the smaller hole over the bigger hole.
* 4115943 (4077506) Support to set/get attribute by using fd
* 4118838 (4116329) While checking FS sanity with the "fsck -o full -n" command, fsck tried to correct the FS flag value (WORM/SoftWORM) but failed because the -n (read-only) option was given.
* 4120529 (4042168) Running FULLFSCK on the filesystem reports error regarding incorrect state file.
* 4120531 (4096561) Running FULLFSCK on the filesystem reports error regarding incorrect state file.
* 4121068 (4100021) Running setfacl followed by getfacl results in a "No such device or address" error.
* 4121071 (4117342) System might panic due to a hard lockup detected on a CPU.
* 4128876 (4104103) File system unmount operation hangs due to a missing rele of the vnode.
* 4134659 (4103045) Veritas File Replication failover (promote) might fail during disaster recovery or upgrade scenarios.
* 4134662 (4134661) Hang seen in the cp command in case of checkpoint promote in cluster filesystem environment.
* 4134665 (4130230) vx_prefault_uio_readable() goes beyond the intended boundaries of the uio->uio_iov structure, potentially accessing memory addresses that are not valid.
* 4135000 (4070819) Handle the case where an error is returned while getting the inode from the last clone, which is marked as overlay and is about to be removed.
* 4135005 (4068548) File system log replay fails with "inode marked bad, allocation flags (0x0001)"
* 4135008 (4126957) System crashes with VxFS stack.
* 4135017 (4119961) We hit the assert "xted_irwunlock:2" while doing in-house testing of WORM/Aulog features.
* 4135018 (4068201) File system corruption can happen in cases where the node that committed the transaction crashed after sending the reply and before flushing to the log.
* 4135022 (4101075) During in-house CFS testing we hit the assert "vx_extfindbig:4" in the extent lookup path.
* 4135027 (4084239) Machine hit a panic because of the assert "f:xted_irwunlock:2".
* 4135028 (4058153) FSCK hangs while clearing VX_EXHASH_CLASS attribute in 512 byte FS.
* 4135038 (4090088) Force unmounting might cause system crash, specifically when "panic_on_warn" is set.
* 4135040 (4092440) FSPPADM gives return code 0 (success) even though policy enforcement fails.
* 4135042 (4068953) FSCK detected error on 512 byte node FS, in 1 fset ilist while verifying the FS after doing log replay and upgrading the FS to 17 DLV.
* 4135102 (4099740) UX:vxfs mount: ERROR: V-3-21264: <device> is already mounted, <mount-point> is busy,
                 or the allowable number of mount points has been exceeded.
* 4135105 (4112056) Hitting assert "f:vx_vnode_deinit:1" during in-house FS testing.
* 4135149 (4129680) Generate and add changelog in VxFS rpm
* 4136095 (4134194) vxfs/glm worker thread panic with kernel NULL pointer dereference
* 4136238 (4134884) Unable to deport Diskgroup. Volume or plex device is open or attached
* 4137137 (4136110) The "umount -l" command unmounts mount points even after adding mntlock on SLES12 and SLES15.
* 4137139 (4126943) Create lost+found directory in VxFS file system with default ACL permissions as 700.
* 4140587 (4136235) Includes module parameter for changing pnlct merge frequency.
* 4140594 (4116887) Running fsck -y on a large metasave with lots of hardlinks consumes a huge amount of system memory.
* 4140599 (4132435) Failures seen in FSQA cmds->fsck tests, panic in get_dotdotlst
* 4140782 (4137040) System hang observed.
* 4141124 (4034246) In case of race condition in cluster filesystem, link count table in VxFS might miss some metadata.
* 4141125 (4008980) In cluster filesystem, due to mismatch between size in Linux inode and VxFS inode, wrong file size may be reported.
* 4149891 (4145203) Invoking veki through systemctl inside vxfs-startup script.
* 4149898 (4095890) In Solaris, panic seen with changes related to delegation of FREE EAU to primary.
* 4149904 (4111385) Export of an FS from AIX (big-endian) to Linux (little-endian) using the fscdsconv command disables the FS.
* 4149906 (4085768) Unable to adjust the record of the RCT inode at the indirect level.
* 4150621 (4103398) IOs on the file system hang.
* 4155169 (4106777) Enabling TED parameters during testing may cause test failures or assertions.
* 4155832 (4028534) Add VX_HOLE check while reading next extent after allocating partial requested length
Patch ID: VRTSvxfs-7.4.2.4200
* 4110765 (4110764) Security vulnerability observed in Zlib, a third-party component used by VxFS.
Patch ID: VRTSvxfs-7.4.2.4100
* 4106702 (4106701) A security vulnerability exists in the third-party component sqlite.
Patch ID: VRTSvxfs-7.4.2.3900
* 4050870 (3987720) The vxms test has failures.
* 4071105 (4067393) Panic "BUG: unable to handle kernel NULL pointer dereference at 00000000000009e0."
* 4074298 (4069116) fsck got stuck in pass1 inode validation.
* 4075873 (4075871) Utility to find possible pending stuck messages.
* 4075875 (4018783) Metasave collection and restore takes significant amount of time.
* 4084881 (4084542) Enhance fsadm defrag report to display if FS is badly fragmented.
* 4088078 (4087036) The fsck binary has been updated to fix a failure while running with the "-o metasave" option on a shared volume.
* 4090573 (4056648) Metasave collection can be executed on a mounted filesystem.
* 4090600 (4090598) Utility to detect culprit nodes while cfs hang is observed.
* 4090601 (4068143) The fsck->misc tests have failures.
* 4090617 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4090639 (4086084) VxFS mount operation causes system panic.
* 4091580 (4056420) A VFR hardlink file is not replicated after modification in incremental sync.
* 4093306 (4090127) CFS hang in vx_searchau().
Patch ID: VRTSvxfs-7.4.2.3600
* 4089394 (4089392) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSvxfs-7.4.2.3500
* 4083948 (4070814) Security Vulnerability in VxFS third party component Zlib
Patch ID: VRTSvxfs-7.4.2.3400
* 4079532 (4079869) Security Vulnerability in VxFS third party components
Patch ID: VRTSvxfs-7.4.2.2600
* 4015834 (3988752) Use the ldi_strategy() routine instead of bdev_strategy() for IOs in Solaris.
* 4040612 (4033664) Multiple different issues occur with hardlink replication using VFR.
* 4040618 (4040617) Veritas File Replicator is not performing as expected.
* 4060549 (4047921) Replication job gets into a hung state when pause/resume operations are performed repeatedly.
* 4060566 (4052449) Cluster goes into an 'unresponsive' mode while invalidating pages due to duplicate page entries in the iowr structure.
* 4060585 (4042925) Intermittent performance issue on commands like df and ls.
* 4060805 (4042254) A new feature has been added in vxupgrade which fails disk-layout upgrade if sufficient space is not available in the filesystem.
* 4061203 (4005620) Internal counter of inodes from Inode Allocation Unit (IAU) can be negative if IAU is marked bad.
* 4061527 (4054386) If systemd service fails to load vxfs module, the service still shows status as active instead of failed.
Patch ID: VRTSvxfs-7.4.2.2200
* 4013420 (4013139) The abort operation on an ongoing online migration from the native file system to VxFS on RHEL 8.x systems.
* 4040238 (4035040) The vfradmin stats command fails to show all fields in the command output in case the job is paused and resumed.
* 4040608 (4008616) The fsck command hangs.
* 4042686 (4042684) ODM resize fails for size 8192.
* 4044184 (3993140) Compclock was not giving accurate results.
* 4046265 (4037035) Added new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.
* 4046266 (4043084) panic in vx_cbdnlc_lookup
* 4046267 (4034910) Asynchronous access/update of the global list large_dirinfo can corrupt its values in multi-threaded execution.
* 4046271 (3993822) fsck stops running on a file system
* 4046272 (4017104) Deleting a lot of files can cause resource starvation, causing panic or momentary hangs.
* 4046829 (3993943) The fsck utility dumped core due to a segmentation fault in get_dotdotlst().
* 4047568 (4046169) On RHEL8, while doing a directory move from one FS (ext4 or vxfs) to a migration VxFS, the migration can fail and the FS is disabled.
* 4049091 (4035057) On RHEL8, IOs done on an FS while another FS-to-VxFS migration is in progress can cause a panic.
* 4049097 (4049096) Dalloc changes ctime in the background during extent allocation.
Patch ID: VRTSvxfs-7.4.2.1600
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4014720 (4011596) Multiple issues were observed during glmdump using hacli for communication
* 4015287 (4010255) "vfradmin promote" fails to promote target FS with selinux enabled.
* 4015835 (4015278) System panics during vx_uiomove_by_hand.
* 4016721 (4016927) In a multi-cloud-tier scenario, the system panics with a NULL pointer dereference when trying to remove the second cloud tier.
* 4017282 (4016801) File system marked for full fsck.
* 4017818 (4017817) VFR performance enhancement changes.
* 4017820 (4017819) Adding cloud tier operation fails while trying to add AWS GovCloud.
* 4019877 (4019876) Remove license library dependency from vxfsmisc.so library
* 4020055 (4012049) Documented "metasave" option and added one new option in fsck binary.
* 4020056 (4012049) Documented "metasave" option and added one new option in fsck binary.
* 4020912 (4020758) File system mount or fsck with -y may hang during log replay.
Patch ID: VRTSdbac-7.4.2.2500
* 4091000 (4090485) Installation of Oracle 12c GRID and database fails on RHEL8.*/OL8.* with GLIBC package error
Patch ID: VRTSsfcpi-7.4.2.3000
* 4006619 (4015976) On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.
* 4008502 (4008744) Rolling upgrade using response file fails if one or more operating system packages are missing on the cluster nodes.
* 4010025 (4010024) While upgrading from InfoScale 7.4.2 to 7.4.2.xxx, CPI installs the packages from the 7.4.2.xxx patch only and not the base packages of 7.4.2 GA.
* 4012032 (4012031) Installer does not upgrade VRTSvxfs and VRTSodm inside
non-global zones
* 4013446 (4008578) Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.
* 4014920 (4015139) Product installer fails to install InfoScale on RHEL 8 systems if IPv6 addresses are provided for the system list.
* 4014985 (4014983) The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.
* 4016078 (4007633) The product installer fails to synchronize the system clocks with the NTP server.
* 4020090 (4022920) The product installer fails to install InfoScale 7.4.2 on SLES 15 SP2.
* 4021517 (4021515) On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.
* 4022492 (4022640) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package.
* 4027741 (4027759) The product installer installs lower versions packages if multiple patch bundles are specified using the patch path options in the incorrect order.
* 4033243 (4033242) When a responsefile is used, the product installer fails to add the required VCS users.
* 4033688 (4033687) The InfoScale product installer deletes any existing cluster configuration files during uninstallation.
* 4034357 (4033988) The product installer does not allow the installation of an Infoscale patch bundle if a more recent version of any package in the bundle is already installed on the system.
* 4038945 (4033957) The VRTSveki and the VRTSvxfs RPMs fail to upgrade when using yum.
* 4040836 (4040833) After an InfoScale upgrade to version 7.4.2 Update 2 on Solaris, the latest vxfs module is not loaded.
* 4041770 (4041816) On RHEL 8.4, the system panics after the InfoScale stack starts.
* 4042590 (4042591) On RHEL 8.4, the installer disables IMF for the CFSMount and Mount agents.
* 4042890 (4043075) After performing a phased upgrade of InfoScale, the product installer fails to update the types.cf file.
* 4043366 (4042674) The product installer does not honor the single-node mode of a cluster and restarts it in the multi-mode if 'vcs_allowcomms = 1'.
* 4043372 (4043371) If SecureBoot is enabled on the system, the product installer fails to install some InfoScale RPMs (VRTSvxvm, VRTSaslapm, VRTScavf).
* 4043892 (4043890) The product installer incorrectly prompts users to install deprecated OS RPMs for LLT over RDMA configurations.
* 4045881 (4043751) The VRTScps RPM installation may fail on SLES systems.
* 4046196 (4067426) Package uninstallation during a rolling upgrade fails if non-global zones are under VCS service group control
* 4050467 (4050465) The InfoScale product installer fails to create VCS users for non-secure clusters.
* 4052860 (4052859) The InfoScale product installer needs to install the VRTSpython package on AIX and Solaris.
* 4052867 (4052866) The InfoScale product installer needs to run the CollectorService process during a fresh configuration of VCS.
* 4053752 (4053753) The licensing service is upgraded to allow an InfoScale server to be registered with a Veritas Usage Insights server.
* 4053875 (4053635) The vxconfigd service fails to start during the add node operation when the InfoScale product installer is used to perform a fresh configuration on Linux.
* 4053876 (4053638) The installer prompt to mount shared volumes during an add node operation does not advise that the corresponding CFSMount entries will be updated in main.cf.
* 4054322 (4054460) InfoScale installer fails to start the GAB service with the -start option on Solaris.
* 4054913 (4054912) During upgrade, the product installer fails to stop the vxfs service.
* 4055055 (4055242) InfoScale installer fails to install a patch on Solaris.
* 4066237 (4057908) The product installer fails to configure passwordless SSH communication for remote Solaris systems.
* 4067433 (4067432) While upgrading to 7.4.2 Update 2 , VRTSvlic patch package installation fails.
* 4070908 (4071690) The installer performs an Infoscale configuration without registering the Infoscale server to an edge server.
* 4079500 (4079853) Patch installer flashes a false error message with -precheck option.
* 4079916 (4079922) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package.
* 4080100 (4080098) Installer fails to complete the CP server configuration.
* 4081964 (4081963) VRTSvxfs patch fails to install on Linux platforms.
* 4084977 (4084975) Installer fails to complete the CP server configuration.
* 4085612 (4087319) On RHEL 7.4.2, Installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.
* 4086047 (4086045) When Infoscale cluster is reconfigured, LLT, GAB, VXFEN services fail to start after reboot.
* 4086570 (4076583) On a Solaris system, the InfoScale installer runs set/unset publisher several times slowing down deployment.
* 4086624 (4086623) Installer fails to complete the CP server configuration.
* 4087148 (4088698) CPI installer tries to download a must-have patch whose version is lower than the version specified in media path.
* 4087809 (4086533) VRTSfsadv pkg fails to upgrade from 7.4.2 U4 to 8.0 U1 while using yum upgrade.
* 4089657 (4089934) Installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.
* 4089815 (4089867) On Linux, Installer fails to start fsdedupschd service.
* 4092407 (4092408) On a Linux platform, CPI installer fails to correctly identify status of vxfs_replication service.
Patch ID: VRTSsfmh-vom-HF07421101
* 4156008 (4156005) sfmh for IS 7.4.2 U7
Patch ID: VRTSgab-7.4.2.3100
* 4105325 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSgab-7.4.2.2500
* 4057313 (4057312) After an InfoScale upgrade, the updated values of GAB tunables that are used when loading the corresponding modules fail to persist.
Patch ID: VRTSgab-7.4.2.2100
* 4046415 (4046413) gab node count/fencing quorum not getting updated properly
* 4046419 (4046418) gab startup does not fail even if llt is not configured
Patch ID: VRTSgab-7.4.2.1300
* 4013034 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
Patch ID: VRTScps-7.4.2.3100
* 4090986 (4072151) The message 'Error executing update nodes set is_reachable...' might intermittently appear in the syslogs of the Coordination Point (CP) servers.
Patch ID: VRTScps-7.4.2.2800
* 4088159 (4088158) Security vulnerabilities exist in the SQLite third-party component used by VCS.
Patch ID: VRTScps-7.4.2.2500
* 4054435 (4018218) Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2
* 4067464 (4056666) The Error writing to database message may intermittently appear in syslogs on CP servers.
Patch ID: VRTSperl-5.30.0.5
* 4079828 (4079827) Security vulnerabilities detected in OpenSSL packaged with VRTSperl/VRTSpython for Infoscale 7.4.2 and its update release.
Patch ID: VRTSvcsea-7.4.2.3100
* 4091036 (4088595) hapdbmigrate utility fails to online the oracle service group
Patch ID: VRTSvcsea-7.4.2.1100
* 4020528 (4001565) On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.
Patch ID: VRTSvcswiz-7.4.2.2100
* 4049572 (4049573) Veritas High Availability Configuration Wizard (HA-Plugin) is not supported on VMWare vCenter HTML based UI.
Patch ID: VRTSvxfen-7.4.2.3100
* 4105330 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSvxfen-7.4.2.2500
* 4057309 (4057308) After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.
* 4067460 (4004248) vxfend sometimes generates a core during a vxfen race in a CPS-based fencing configuration.
Patch ID: VRTSvxfen-7.4.2.2100
* 4046423 (4043619) OCPR failed from SCSI3 fencing to Customized mode
Patch ID: VRTSvxfen-7.4.2.1300
* 4006982 (3988184) The vxfen process cannot complete due to incomplete vxfentab file.
* 4007375 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4007376 (3996218) In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.
* 4007677 (3970753) Freeing uninitialized/garbage memory causes panic in vxfen.
Patch ID: VRTSspt-7.4.2.1500
* 4139975 (4149462) A new script, list_missing_incidents.py, is provided, which compares the changelogs of two rpm versions and lists the incidents missing in the new version.
* 4146957 (4149448) A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in the changelog.
Patch ID: VRTSdbed-7.4.2.2300
* 4092589 (4092588) SFAE failed to start with systemd.
Patch ID: VRTSvlic-4.01.742.300
* 4049416 (4049416) Migrate Telemetry Collector from Java to Python.
Patch ID: VRTSpython-3.9.2.0_1
* 4140140 (4140139) Upgrading Python programming language and vulnerable module under VRTSpython to address open exploitable security vulnerabilities.
Patch ID: VRTSpython-3.7.4.40
* 4133298 (4133297) Fixed some open CVEs in VRTSpython.
Patch ID: VRTSpython-3.7.4.39
* 4117482 (4117483) Fixed some open CVEs in VRTSpython.
Patch ID: VRTSamf-7.4.2.3700
* 4136002 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.
Patch ID: VRTSamf-7.4.2.3100
* 4105318 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSamf-7.4.2.2500
* 4066721 (4066719) Remove multiple AMF module support on Solaris 11.4
Patch ID: VRTSamf-7.4.2.2100
* 4046524 (4041596) A cluster node panics when the arguments passed to a process that is registered with AMF exceeds 8K characters.
Patch ID: VRTSvcs-7.4.2.3700
* 4080026 (4080024) VxODM goes into a 'maintenance' state if cluster node is rebooted or VCS/ODM is restarted.
* 4090591 (4090590) Security vulnerabilities exist in the OpenSSL third-party components used by VCS.
* 4100721 (4100720) GCO fails to configure for the latest RHEL/SLES platforms.
* 4129502 (4129493) Tenable security scan kills the Notifier resource.
* 4136360 (4136359) When upgrading InfoScale with latest Public Patch Bundle, types.cf is updated and HTC types definition removed.
Patch ID: VRTSvcs-7.4.2.3100
* 4090591 (4090590) Security vulnerabilities exist in the OpenSSL third-party components used by VCS.
Patch ID: VRTSvcs-7.4.2.2500
* 4059899 (4059897) In some cases, core files get generated after executing the hacf -verify command.
* 4059901 (4059900) On a VCS node, hastop -local is unresponsive.
* 4071007 (4070999) Processes registered under VCS control get killed after running the 'hastop -local -force' command.
Patch ID: VRTSvcs-7.4.2.2100
* 4046515 (4040705) hacli hangs indefinitely when command exceeds character limit of 4096
* 4046520 (4040656) Gracefully restart HAD on occurrence of an ENOMEM error.
* 4046526 (4043700) While an online operation is in progress and the PreOnline trigger is already executing, multiple PreOnline triggers can be executed on the same or different nodes in the cluster for failover/parallel/hybrid service groups.
Patch ID: VRTSvcsag-7.4.2.3700
* 4112578 (4113151) The VMwareDisks agent reports a resource online before the VMware disk being brought online is present in the vxvm/dmp database.
* 4113062 (4113056) ReuseMntPt is not honored when the same mountpoint is used for two resources with different FSType.
* 4120653 (4118454) Process agent fails to come online when root user shell is set to /sbin/nologin.
* 4134361 (4122001) NIC resource remains online after unplugging the network cable on the ESXi server.
* 4134638 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
* 4142044 (4142040) While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs.conf/config/types.cf' file on Veritas Cluster Server(VCS) might be incorrectly updated.
* 4149222 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
Patch ID: VRTSvcsag-7.4.2.3100
* 4077735 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4090282 (4056397) In a rare case, a NIC resource may go into FAILED state even though it is active at the OS level.
* 4090621 (4083099) AzureIP resource fails to go offline when OverlayIP is configured.
* 4090745 (4094539) Agent resource monitor not parsing process name correctly.
* 4091819 (4090381) The VMware disk agent does not support more than 15 disk IDs.
Patch ID: VRTSvcsag-7.4.2.2100
* 4045605 (4038906) In case of ESXi 6.7, the VMwareDisks agent fails to perform a failover on a peer node.
* 4045606 (4042944) In a hardware replicated environment, a disk group resource may fail to import when the HARDWARE_MIRROR flag is set
* 4046521 (4030215) Azure agents now support azure-identity based credential methods
* 4046525 (4046286) Azure Cloud agents do not handle generic exceptions.
* 4048981 (4048164) Cloud agents may report incorrect resource state in case cloud API hangs.
Patch ID: VRTSvcsag-7.4.2.1400
* 4007372 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
* 4007374 (1837967) Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.
* 4012397 (4012396) AzureDisk agent fails to work with latest Azure Storage SDK.
* 4019536 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
Patch ID: VRTSllt-7.4.2.3700
* 4102403 (4100288) LLT timer handler triggers a panic after the OS is upgraded to AIX 7.2 TL5 and errchecknormal(7) is enabled.
* 4120940 (4087662) During memory fragmentation LLT module may fail to allocate large memory leading to the node eviction or a node not being able to join.
* 4121625 (4081574) LLT unnecessarily replies to an explicit heartbeat request after 'sendhbcap' is over.
* 4126360 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4135222 (4065484) Update the default LLT configuration to make clusters more resilient to transient issues.
* 4135826 (4135825) If the root file system is full during LLT start, the LLT module keeps failing to load.
* 4137266 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.
* 4149099 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.
* 4152180 (4087543) Node panic observed at llt_rdma_process_ack+189
Patch ID: VRTSllt-7.4.2.3100
* 4105323 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSllt-7.4.2.2500
* 4046420 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4057311 (4057310) After an InfoScale upgrade, the updated values of LLT tunables that are used when loading the corresponding modules fail to persist.
* 4067422 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
Patch ID: VRTSllt-7.4.2.2100
* 4039475 (4045607) Performance improvement of the UDP multiport feature of LLT on 1500 MTU-based networks.
* 4046200 (4046199) llt over udp configuration now accepts any link tag name
* 4046420 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
Patch ID: VRTSllt-7.4.2.1300
* 4019535 (4018581) The LLT module fails to start and the system log messages indicate missing IP address.
Patch ID: VRTSvxvm-7.4.2.4900
* 4069525 (4065490) VxVM udev rules consume more CPU and appear in "top" output when the system has thousands of storage devices attached.
* 4092002 (4081740) The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
* 4094433 (4098391) Continuous system crash is observed during VxVM installation.
* 4105253 (4087628) CVM goes into a faulted state when a slave node of the primary is rebooted.
* 4107084 (4107083) In case of EMC BCV NR LUNs, vxconfigd takes a long time to start post reboot.
* 4111010 (4108475) vxfentsthdw script failed with "Expect no writes for disks ... "
* 4113327 (4102439) Volume Manager Encryption EKM Key Rotation (vxencrypt rekey) Operation Fails on IS 7.4.2/rhel7
* 4113661 (4091076) SRL gets into pass-thru mode because of head error.
* 4113663 (4095163) System panic due to a race while freeing a VVR update.
* 4113664 (4091390) The vradmind service dumped core and stopped on a few nodes.
* 4113666 (4064772) After enabling slub debug, system could hang with IO load.
* 4114251 (4114257) IO hang and high system load average observed after the master is rebooted and one slave node rejoins the cluster.
* 4115231 (4090772) vxconfigd/vx commands hang if fdisk has opened the secondary volume and the secondary logowner panics.
* 4116214 (4117350) Import operation on a disk group created on Hitachi ShadowImage (SI) disks is failing.
* 4116422 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4116427 (4108913) Vradmind dumps core because of memory corruption.
* 4116429 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4116435 (4034741) The current fix limits IO load on the secondary, causing a deadlock situation.
* 4116437 (4072862) Cluster stop hangs because the RVGLogowner and CVMClus resources fail to go offline.
* 4116576 (3972344) vxrecover returns an error: 'ERROR V-5-1-11150 Volume <vol_name> not found'.
* 4117899 (4055159) vxdisk list shows an incorrect LUN_SIZE value for NVMe disks.
* 4117989 (4085145) System with NVME devices can crash due to memory corruption.
* 4118256 (4028439) Updating mediatype tags through the disk online event.
* 4120540 (4102532) /etc/default/vxsf file gets world write permission when "vxtune storage_connectivity asymmetric" is run.
* 4120545 (4090826) system panic at vol_page_offsetlist_sort
* 4120547 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4120720 (4086063) semodule policy is installed in %post stage during vxvm upgrade and then gets removed in %preun stage.
* 4120722 (4021816) semodule of upgraded VxVM package gets removed in %preun stage of install script during package upgrade.
* 4120724 (3995831) System hung: A large number of SIOs got queued in FMR.
* 4120728 (4090476) SRL is not draining to secondary.
* 4120769 (4014894) Disk attach is done one by one, creating a transaction for each disk.
* 4120783 (4087294) [NBFS-3.1 RHEL8 Upgrade] Upgrade from 3.0 to 3.1 failed when upgrading VRTSvxvm.
* 4120876 (4081434) VVR kernel panic while processing the ACK message on the VVR primary side.
* 4120899 (4116024) Machine panic due to access to an illegal address.
* 4120903 (4100775) vxconfigd was hung as VxDMP doesn't support chained BIO on rhel7.
* 4120916 (4112687) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.
* 4121075 (4100069) One of standard disk groups fails to auto-import with 'Disk for disk group not found' error when those disk groups co-exist with the cloned disk group.
* 4121081 (4098965) Crash at memset function due to invalid memory access.
* 4121083 (4105953) System panic because VVR accessed a NULL pointer.
* 4121222 (4095718) Some tasks kept waiting for IO drain, causing a system IO hang.
* 4121243 (4101588) vxtune shows incorrect values for vol_rvio_maxpool_sz and some other tunables when they are over 4 GB.
* 4121254 (4115078) vxconfigd hang was observed when rebooting all nodes of the primary site.
* 4121681 (3995731) vxconfigd dumping core due to NULL pointer.
* 4121763 (3995308) vxtask status hang due to incorrect values getting copied into task status information.
* 4121767 (4117568) vradmind dumps core due to invalid memory access.
* 4121790 (4116496) System panic at dmp_process_errbp+47.
* 4121875 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4122629 (4118809) System panic at dmp_process_errbp.
* 4122632 (4121564) Memory leak for volcred_t could be observed in vxio.
* 4123313 (4114927) Failed to mount /boot on dmp device after enabling dmp_native_support.
* 4124324 (4098582) Permissions of ddl.log change to 644 after log rotation; they need to be 640 as per security compliance.
* 4126041 (4124223) Core dump is generated for vxconfigd in TC execution.
* 4127473 (4089626) Create XFS on VxDMP devices hang as VxDMP doesn't support chained BIO.
* 4127475 (4114601) Panic: in dmp_process_errbp() for disk pull scenario.
* 4128868 (4128867) Security vulnerabilities exist in the third-party component OpenSSL.
* 4128885 (4115193) Data corruption observed after the node fault and cluster restart in DR environment
* 4131718 (4088941) Panic observed at scsi_queue_rq in SLES15SP3.
* 4134702 (4122396) When using KillMode=control-group, stopping the vxvm-recover.service results in a failed state.
* 4134887 (4020942) Data corruption/loss on erasure code (EC) volumes post rebalance/disk movement operations while active application IO in progress.
* 4134888 (4105204) Node not able to join the cluster after iLO "press and hold" scenario in loop
* 4134889 (4107401) Replication stopped after VVR logowner reboot
* 4135142 (4040043) Warnings in dmesg/kernel logs for violating memory usage/handling protocols.
* 4135150 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4135248 (4129663) Generate and add changelog in vxvm and aslapm rpm
* 4136239 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4136240 (4040695) vxencryptd dumps core because of a static buffer size.
* 4136316 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume
* 4136482 (4132799) No detailed error messages when a CVM join fails.
* 4137008 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4140562 (4134305) Collecting ilock stats for admin SIO causes buffer overrun.
* 4140572 (4080124) Data corruption on mirrored volume in shared-nothing (Flexible Shared Storage) environment during failure of VxVM configuration update.
* 4140589 (4120068) A standard disk was added to a cloned diskgroup successfully, which is not expected.
* 4140690 (4100547) Full volume resync happens (~9 hrs) after the last node reboot at the secondary site in an NBFS DR cluster.
* 4140691 (4114962) [NBFS-3.1][DL]:MASTER and CAT_FS got corrupted while performing multiple NVMEs failure
* 4140692 (4074002) In an FSS environment, after 4 nodes in the cluster are rebooted, VOLD IO hangs during node join.
* 4140693 (4122061) Hang observed after a resync operation; vxconfigd was waiting for the slaves' response.
* 4140694 (4128351) System hung observed when switching log owner.
* 4140706 (4130393) vxencryptd crashed repeatedly due to segfault.
* 4149660 (4106254) Nodes crashed in shared-nothing (Flexible Shared Storage) environment if node reboot followed by NVME disk failure is executed
* 4150574 (4077944) In VVR environment, application I/O operation may get hung.
* 4150577 (4019380) The vxcloudd daemon dumps core.
* 4150589 (4085477) Settag operation fails due to an incorrect disk getting picked up for operation.
* 4151832 (4005719) For encrypted volumes, the disk reclaim operation gets hung.
* 4151834 (3989340) EC: Volume state Tutil flag not getting cleared for cascaded disk fail / cluster reboot
* 4151837 (4024140) In VVR environments, in case of disabled volumes, the DCM read operation does not complete, resulting in application IO hang.
* 4151838 (4046560) vxconfigd aborts on Solaris if device's hardware path is too long.
* 4152117 (4142054) Primary master panicked with a TED assert during the run.
* 4152119 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again.
* 4152549 (4089801) Cluster went into a hung state after rebooting 6 slave nodes.
* 4152550 (3972770) Longevity:RHEL7.6:DV_adaptive_sync:Pri master node got panic during hastop -all/hastart, "voldco_get_mapid+0x5b/0xd0 [vxio]"
* 4152553 (4011582) Display minimum and maximum read/write time it takes for the I/O under VxVM layer using vxstat utility.
* 4152554 (4058266) Add an option to ignore 0 stats entries for objects.
* 4152732 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.
* 4152963 (4100037) Error in vxstat statistics display.
* 4153768 (4120878) After enabling the dmp_native_support, system failed to boot.
* 4154451 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
Patch ID: VRTSaslapm 7.4.2.4900
* 4011781 (4011780) Add support for DELL EMC PowerStore plus PP
* 4103494 (4101807) VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.
Patch ID: VRTSvxvm-7.4.2.4300
* 4119951 (4119950) Security vulnerabilities exist in third-party components [curl and libxml].
Patch ID: VRTSvxvm-7.4.2.4100
* 4116348 (4112433) Security vulnerabilities exist in third-party components [openssl, curl and libxml].
Patch ID: VRTSvxvm-7.4.2.3800
* 4110666 (4110665) A security vulnerability exists in the third-party component libcurl.
* 4110766 (4112033) A security vulnerability exists in the third-party component libxml2.
Patch ID: VRTSvxvm-7.4.2.3700
* 4106001 (4102501) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-7.4.2.3600
* 4052119 (4045871) vxconfigd crashed at ddl_get_disk_given_path.
* 4086043 (4072241) vxdiskadm functionality is failing due to changes in dmpdr script
* 4090311 (4039690) Change the logger file size and gzip the logger files.
* 4090411 (4054685) In a CVR environment, RVG recovery hangs on Linux platforms.
* 4090415 (4071345) Unplanned fallback synchronisation is unresponsive
* 4090442 (4078537) Connection to s3-fips bucket is failing
* 4090541 (4058166) Increase DCM log size based on volume size without exceeding the region size limit of 4 MB.
* 4090599 (4080897) Performance drop on raw VxVM volume in RHEL 8.x compared to RHEL7.X
* 4090604 (4044529) DMP is unable to display PWWN details for some LUNs by "vxdmpadm getportids".
* 4090932 (3996634) System boots slowly because the Linux lsblk command takes a long time to return.
* 4090946 (4023297) Smartmove functionality was not being used after VVR Rlink was paused and resumed during VVR initial sync or DCM resync operation.
* 4090960 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4090970 (4017036) After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device when Linux is booting with systemd.
* 4091248 (4040808) df command hung in clustered environment
* 4091588 (3966157) SRL batching feature is broken
* 4091910 (4090321) Increase timeout for vxvm-boot systemd service
* 4091911 (4090192) Increase number of DDL threads for faster discovery
* 4091912 (4090234) Volume Manager boot service fails after a system reboot.
* 4091963 (4067191) In CVR environment after rebooting Slave node, Master node may panic
* 4091989 (4090930) [NBFS-3.1]: MASTER FS corruption is seen in loop reboot (-f) test
* 4092002 (4081740) The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
* 4099550 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4102424 (4103350) vxvm-encrypted.service going into failed state on secondary site on performing "vradmind -g <dg> -encrypted addsec <rvg> <prim_ip> <sec_ip>" command.
Patch ID: VRTSaslapm 7.4.2.3600
* 4012176 (3996206) Update Lun Serial Number for 3par disks
* 4076495 (4076320) AVID, reclaim_cmd_nv, extattr_nv, old_udid_nv are not generated for HPE 3PAR/Primera/Alletra 9000 ALUA array.
* 4094664 (4093396) Fail to recognize more than one EMC PowerStore array.
Patch ID: VRTSvxvm-7.4.2.3300
* 4083792 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-7.4.2.3200
* 4011971 (3991668) In a Veritas Volume Replicator (VVR) configuration where secondary logging is enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.
* 4013169 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4037288 (4034857) VxVM support on SLES 15 SP2
* 4048120 (4031452) vxesd core dump in esd_write_fc()
* 4051703 (4010794) When storage activity was going on, Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster.
* 4052119 (4045871) vxconfigd crashed at ddl_get_disk_given_path.
* 4054311 (4040701) Some warnings are observed while installing vxvm package.
* 4056329 (4056156) VxVM Support for SLES15 Sp3
* 4056919 (4056917) Import of disk group in Flexible Storage Sharing (FSS) with missing disks can lead to data corruption.
* 4058873 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.
* 4060839 (3975667) Softlock in vol_ioship_sender kernel thread
* 4060962 (3915202) Reporting repeated disk failures & DCPA events for other internal disks
* 4060966 (3959716) System may panic with sync replication with VVR configuration, when the RVG is in DCM mode.
* 4061004 (3993242) vxsnap prepare command when run on vset sometimes fails.
* 4061036 (4031064) Master switch operation is hung in VVR secondary environment.
* 4061055 (3999073) The file system corrupts when the cfsmount group goes into offline state.
* 4061057 (3931583) Node may panic while unloading the vxio module due to race condition.
* 4061298 (3982103) I/O hang is observed in VVR.
* 4061317 (3925277) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.
* 4061509 (4043337) logging fixes for VVR
* 4062461 (4066785) Create a new option, usereplicatedev=only, to import the replicated LUNs only.
* 4062577 (4062576) hastop -local never finishes on Rhel8.4 and RHEL8.5 servers with latest minor kernels due to hang in vxdg deport command.
* 4062746 (3992053) Data corruption may happen with layered volumes due to some data not re-synced while attaching a plex.
* 4062747 (3943707) vxconfigd reconfig hang when joining a cluster.
* 4062751 (3989185) In a Veritas Volume Replicator (VVR) environment, the vxrecover command can hang.
* 4062755 (3978453) Reconfig hang during master takeover
* 4063374 (4005121) Application IOPS drop in DCM mode with DCO-integrated DCM
* 4064523 (4049082) I/O read error is displayed when a remote FSS node is rebooting.
* 4066930 (3951527) Data loss on DR site seen while upgrading from Infoscale 7.3.1 or before to 7.4.x or later versions.
* 4067706 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4067710 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4067712 (3868140) VVR primary site node might panic if the rlink disconnects while some data is getting replicated to secondary.
* 4067713 (3997531) Fail to start the VVR replication as vxnetd threads are not running
* 4067715 (4008740) Access to freed memory
* 4067717 (4009151) Auto-import of diskgroup on system reboot fails with error 'Disk for diskgroup not found'.
* 4067914 (4037757) Add a tunable to control auto start VVR services on boot up.
* 4067915 (4059134) Resync takes too long on raid-5 volume
* 4069522 (4043276) vxattachd is onlining previously offlined disks.
* 4069523 (4056751) Importing a read-only cloned disk corrupts the private region.
* 4069524 (4056954) Vradmin addsec failures when encryption is enabled over wire
* 4070099 (3159650) Implemented vol_vvr_use_nat tunable support for vxtune.
* 4070186 (4041822) In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.
* 4070253 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED flag on the device instead of using exclude and include commands.
* 4071131 (4071605) A security vulnerability exists in the third-party component libxml2.
* 4072874 (4046786) FS becomes NOT MOUNTED after powerloss/poweron on all nodes.
Patch ID: VRTSaslapm 7.4.2.3200
* 4070186 (4041822) In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.
Patch ID: VRTSvxvm-7.4.2.2200
* 4018173 (3852146) A shared disk group (DG) fails to be imported when "-c" and "-o noreonline" are specified together.
* 4018178 (3906534) After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should be mounted on the DMP device.
* 4031342 (4031452) vxesd core dump in esd_write_fc()
* 4037283 (4021301) Data corruption issue observed in VxVM on RHEL8.
* 4042038 (4040897) Add support for HPE MSA 2060 arrays in the current ASL.
* 4046906 (3956607) A core dump occurs when you run the vxdisk reclaim command.
* 4046907 (4041001) In a VxVM environment, a system hangs when some nodes are rebooted.
* 4046908 (4038865) System panic at the vxdmp module in the IRQ stack.
* 4047592 (3992040) bi_error - bi_status conversion map added for proper interpretation of errors at FS side.
* 4047695 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED flag on the device instead of using exclude and include commands.
* 4047722 (4023390) vxconfigd keeps dumping core due to an invalid private region offset on a disk.
* 4049268 (4044583) A system goes into the maintenance mode when DMP is enabled to manage native devices.
Patch ID: VRTSaslapm 7.4.2.2200
* 4047510 (4042420) APM modules creation fails as vxvm-startup tries to make hardlink on different partition.
Patch ID: VRTSvxvm-7.4.2.1500
* 4018182 (4008664) System panic when signaling a vxlogger daemon that has ended.
* 4020207 (4018086) System hang was observed when the RVG was in DCM resync with SmartMove ON.
* 4020438 (4020046) DRL log plex gets detached unexpectedly.
* 4021238 (4008075) Observed with ASL changes for NVMe in a reboot scenario: the machine panicked on every reboot, in a loop.
* 4021240 (4010612) This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on); consequently, every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
* 4021346 (4010207) System panicked due to hard-lockup due to a spinlock not released properly during the vxstat collection.
* 4021366 (4008741) VxVM device files are not correctly labeled to prevent unauthorized modification - device_t
* 4023095 (4007920) Control auto snapshot deletion when cache obj is full.
Patch ID: VRTSaslapm 7.4.2.1500
* 4021235 (4010667) NVMe devices are not detected by Veritas Volume Manager(VxVM) on RHEL 8.
* 4021946 (4017905) Modifying current ASL to support VSPEx series array.
* 4022942 (4017656) Add support for XP8 arrays in the current ASL.
Patch ID: VRTScavf-7.4.2.4900
* 4113616 (4027640) A resource of type ApplicationNone does not come online.
* 4133718 (4079285) CVMVolDg resource takes many minutes to online with CPS fencing.
Patch ID: VRTScavf-7.4.2.3700
* 4092597 (4092596) Support for unmounting all the file systems when the CFSMount agent goes offline on Solaris and Linux.
Patch ID: VRTScavf-7.4.2.1400
* 4054857 (4035066) Improving logging capabilities in case of monitor and EP timeout events.
Patch ID: VRTSfsadv-7.4.2.4900
* 4130256 (4130255) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSfsadv-7.4.2.3900
* 4105309 (4103002) Replication failures observed in internal testing
Patch ID: VRTSfsadv-7.4.2.3600
* 4088025 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSfsadv-7.4.2.2600
* 4070366 (4070367) After upgrade, the "/var/VRTS/fsadv" directory gets deleted.
* 4071090 (4040281) Security vulnerabilities exist in some third-party components used by VxFS.
Patch ID: VRTSveki-7.4.2.4900
* 4125096 (4110457) VEKI packaging was failing due to a dependency issue.
* 4135057 (4130815) Generate and add changelog in VEKI rpm
Patch ID: VRTSveki-7.4.2.3700
* 4105334 (4105335) Failed to load VEKI module
Patch ID: VRTSveki-7.4.2.2600
* 4057596 (4055072) Upgrading VRTSveki package using yum reports error
Patch ID: VRTSgms-7.4.2.4900
* 4125931 (4125932) A "no symbol version" warning for ki_get_boot appears in dmesg after SFCFSHA configuration.
* 4135270 (4129707) Generate and add changelog in GMS rpm
Patch ID: VRTSgms-7.4.2.3800
* 4105296 (4105297) GMS module failed to load on SLES12
Patch ID: VRTSgms-7.4.2.2600
* 4057424 (4057176) Rebooting the system results into emergency mode due to corruption of module dependency files.
* 4061646 (4061644) GMS failed to start after the kernel upgrade.
Patch ID: VRTSgms-7.4.2.1200
* 4023553 (4023552) Unable to load the vxgms module on linux.
Patch ID: VRTSglm-7.4.2.4900
* 4098108 (4087259) System panics while upgrading CFS protocol from 90 to 135 (latest).
* 4134673 (4126298) System may panic due to an unhandled kernel paging request, and memory corruption could happen.
* 4135184 (4129714) Generate and add changelog in GLM rpm
Patch ID: VRTSglm-7.4.2.3800
* 4105278 (4105277) GLM module failed to load on SLES12.
Patch ID: VRTSglm-7.4.2.1500
* 4014719 (4011596) man page changes for glmdump
Patch ID: VRTSodm-7.4.2.4900
* 4093943 (4076185) VxODM goes into maintenance mode after reboot.
* 4126254 (4126256) A "no symbol version" warning for VEKI's symbol appears in dmesg after SFCFSHA configuration.
* 4135325 (4129837) Generate and add changelog in ODM rpm
* 4149888 (4118154) System may panic in simple_unlock_mem() when errcheckdetail is enabled.
Patch ID: VRTSodm-7.4.2.4200
* 4112305 (4112304) VRTSodm driver will not load with VRTSvxfs 7.4.2.4200 patch.
Patch ID: VRTSodm-7.4.2.3900
* 4105305 (4105306) VRTSodm driver will not load with VRTSvxfs patch.
Patch ID: VRTSodm-7.4.2.3500
* 4087157 (4087155) VRTSodm driver will not load with VRTSvxfs patch.
Patch ID: VRTSodm-7.4.2.3400
* 4080777 (4080776) VRTSodm driver will not load with 7.4.1.3400 VRTSvxfs patch.
Patch ID: VRTSodm-7.4.2.2600
* 4057429 (4056673) Rebooting the system results into emergency mode due to corruption of module dependency files. Incorrect vxgms dependency in odm service file.
* 4060584 (3868609) High CPU usage by vxfs thread.
Patch ID: VRTSodm-7.4.2.2200
* 4049440 (4049438) VRTSodm driver will not load with 7.4.2.2200 VRTSvxfs patch.
Patch ID: VRTSodm-7.4.2.1500
* 4023556 (4023555) Unable to load the vxodm module on linux.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.2.4900

* 4093193 (Tracking ID: 4090032)

SYMPTOM:
System might panic in vx_dev_strategy() during Sybase or Oracle configuration; the panic stack looks like the following:
vx_dev_strategy
vx_dio_physio
vx_dio_rdwri
vx_write_direct
vx_write1
vx_write_common_slow
vx_write_common
vx_write
fop_write
pwrite

DESCRIPTION:
While allocating a different buffer, vx_dev_strategy() is unable to find the LDI handle.

RESOLUTION:
Code is modified to fix this issue.

* 4114018 (Tracking ID: 4067505)

SYMPTOM:
Fsck reports the error: invalid VX_AF_OVERLAY aflags

DESCRIPTION:
If the inode does not have push linkage (inode not allocated, or inode and data already pushed), we skip pushing the data blocks when the inode is removed. The inode will have overlay data blocks, its gen bumped up, and IEREMOVE set. During extop processing, the size is set to 0 and the bmap is cleared. This is a valid scenario.

Fsck, while validating inodes with the overlay flag set, expects that gen can differ only if the overlay inode has IEREMOVE set and it is the last clone in the chain.

RESOLUTION:
If the push inode is not present, allow gen to differ even if the clone is not last in the chain.

* 4114033 (Tracking ID: 4101634)

SYMPTOM:
Fsck reports the errors: directory block containing inode has incorrect file-type, and directory contains invalid directory blocks.

DESCRIPTION:
During directory sanity checks, fsck skipped updating the new directory type on disk in case of a file-type error; hence fsck kept reporting the incorrect file-type error and that the directory contains invalid directory blocks.

RESOLUTION:
During directory sanity checks, fsck now updates the new directory type on disk in case of a file-type error.

* 4114040 (Tracking ID: 4083056)

SYMPTOM:
Hang observed while punching the smaller hole over the bigger hole.

DESCRIPTION:
A hang was observed while punching a smaller hole over a bigger hole in a file, due to a tight race between processing the hole punch on the file and flushing it to disk.

RESOLUTION:
Code changes checked in.

* 4115943 (Tracking ID: 4077506)

SYMPTOM:
Using a file name in a multi-threaded application may lead to a race condition

DESCRIPTION:
Using a file name in a multi-threaded application creates a race. An example of such a race follows:
1. Thread 1 opens a file (file1) to send it to a client (instance 1 of the file); assume the file is long and will take time to transfer.
2. Thread 2 renames a temp file to file1; this temp file has different data and attributes (instance 2 of file1).
3. At some point, thread 1 queries attributes by file name; since the path was used, these attributes belong to instance 2, not instance 1.

RESOLUTION:
Provided support for new APIs to set/get the WORM attribute using a file descriptor (fd).
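
The following is a minimal POSIX sketch of this race, and of why fd-based queries avoid it (illustrative only; it does not show the new VxFS WORM APIs themselves):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat by_fd, by_path;
        int fd = open("file1", O_RDONLY);    /* binds to instance 1 of file1 */
        if (fd < 0)
            return 1;

        /* ... another thread may now rename a different file to "file1" ... */

        fstat(fd, &by_fd);                   /* attributes of the open file (instance 1) */
        stat("file1", &by_path);             /* attributes of whatever the path names now */

        if (by_fd.st_ino != by_path.st_ino)
            printf("path-based lookup raced with a rename\n");
        return 0;
    }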

* 4118838 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, while correcting the file system WORM/SoftWORM flags, we did not check whether the user wanted to correct the pflags or only wanted to validate whether the flag value is missing. Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION:
Code was added so that fsck does not try to fix the problem if it was run with the -n option. The SOFTWORM scenario is also handled now.

* 4120529 (Tracking ID: 4042168)

SYMPTOM:
Running FULLFSCK on the filesystem reports error regarding incorrect state file. 

au <au number> state file incorrect - fix? (ynq)

DESCRIPTION:
When we allocate a Zero Fill on Demand (ZFOD) extent larger than an AU, it is split into smaller-sized chunks. After splitting, the Allocation Unit (AU) state must be changed from ALLOCATED to EXPANDED. This state change was missing in the code, which leads to the incorrect state file scenario.

RESOLUTION:
Code changes have been done to update Extent allocation unit state correctly.

* 4120531 (Tracking ID: 4096561)

SYMPTOM:
Running FULLFSCK on the filesystem reports error regarding incorrect state file. 

au <au number> state file incorrect - fix? (ynq)

DESCRIPTION:
When we allocate a Zero Fill on Demand (ZFOD) extent larger than an AU, it is split into smaller-sized chunks. After splitting, the Allocation Unit (AU) state must be changed from ALLOCATED to EXPANDED. This state change was missing in the code, which leads to the incorrect state file scenario.

RESOLUTION:
Code changes have been done to update Extent allocation unit state correctly.

* 4121068 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl results in a "No such device or address" error.

DESCRIPTION:
When the setfacl command is run on some directories that have the VX_ATTR_INDIRECT type of ACL attribute, it does not remove the existing ACL attribute before adding a new one, which ideally should not happen. This results in the failure of getfacl with the "No such device or address" error.

RESOLUTION:
Code changes have been done to remove the VX_ATTR_INDIRECT type ACL in the setfacl code path.

* 4121071 (Tracking ID: 4117342)

SYMPTOM:
System might panic due to a hard lockup detected on a CPU

DESCRIPTION:
When purging the dentries, there is a possible race which can lead to a corrupted vnode flag. Because of this corrupted flag, VxFS tries to purge the dentry again and gets stuck waiting for the vnode lock that was already taken in the current thread context, which leads to a deadlock/soft lockup.

RESOLUTION:
Code is modified to protect vnode flag with vnode lock.

* 4128876 (Tracking ID: 4104103)

SYMPTOM:
File system unmount hangs

DESCRIPTION:
In an error case, the release (rele) on an inode was missing, leaking a vnode count. Unmount on the node was stuck waiting for the vnode count to become 1.

RESOLUTION:
Release the hold on vnode in case of error.

* 4134659 (Tracking ID: 4103045)

SYMPTOM:
Veritas File Replication failover (promote) might fail during disaster recovery or upgrade scenarios.

DESCRIPTION:
Veritas File Replication failover is used to swap the roles of the source and target sites during disaster recovery. As part of failover, the filesystem is unmounted and mounted again to update the state and other replication configurations. The failover might fail because the unmount (offline of the filesystem) operation is not successful. After a certain number of retries to offline the filesystem, as a final step the process holding the mount point is killed, and the failover exits.

RESOLUTION:
The fix is to open the replication config file with O_CLOEXEC, which ensures that processes spawned from the replication context do not inherit the open file descriptor on the filesystem.
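
A minimal POSIX sketch of the O_CLOEXEC pattern described above (the config file path is hypothetical):

    #include <fcntl.h>

    /* Without O_CLOEXEC, a child spawned via fork()/exec() would inherit this
     * descriptor, keep the filesystem busy, and block the unmount during promote. */
    int fd = open("/mnt/replication.conf", O_RDWR | O_CLOEXEC);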

* 4134662 (Tracking ID: 4134661)

SYMPTOM:
Hang seen in the cp command in case of checkpoint promote in a cluster filesystem environment.

DESCRIPTION:
The hang is seen in the cp command because inode blocks marked as overlay could not be pulled.

RESOLUTION:
Made code changes to pull the inode blocks marked as overlay.

* 4134665 (Tracking ID: 4130230)

SYMPTOM:
vx_prefault_uio_readable() function is going beyond intended boundaries of the uio->uio_iov structure, potentially causing it to access memory addresses that are not valid.

DESCRIPTION:
The maximum length we can pre-fault is fixed (8K). But the amount of user IO in some situations is less than the fixed value that we pre-fault. This leads the code in vx_prefault_uio_readable() to run off the end of the uio->uio_iov structure and access an invalid memory address.

RESOLUTION:
To fix this, a check is introduced that stops the code from accessing an invalid memory location once all the requested user-space IO pages have been processed (pre-faulted).
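
A generic sketch of the bounded pre-fault loop (names such as MAX_PREFAULT and the page size constant are illustrative, not the VxFS source):

    #include <stddef.h>
    #include <sys/uio.h>

    #define MAX_PREFAULT (8 * 1024)   /* fixed pre-fault ceiling (8K) */
    #define PAGE_SZ      4096

    /* Touch at most MAX_PREFAULT bytes, and never walk past the end of the
     * iovec array even when the user IO is smaller than the ceiling. */
    static void prefault_readable(const struct iovec *iov, int iovcnt)
    {
        size_t done = 0;
        volatile char sink = 0;

        for (int i = 0; i < iovcnt && done < MAX_PREFAULT; i++) {
            const char *base = iov[i].iov_base;
            for (size_t off = 0; off < iov[i].iov_len && done < MAX_PREFAULT;
                 off += PAGE_SZ, done += PAGE_SZ)
                sink = base[off];     /* fault the page in */
        }
        (void)sink;
    }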

* 4135000 (Tracking ID: 4070819)

SYMPTOM:
An error may be returned while fetching the inode from the last clone, which is marked as overlay and is about to be removed.

DESCRIPTION:
The inode marked as overlay is fetched from the last clone, which is marked for deletion. Code changes have been made to handle the scenario where an error is returned while fetching the inode in this case.

RESOLUTION:
Handled this scenario through code changes.

* 4135005 (Tracking ID: 4068548)

SYMPTOM:
The fullfsck flag is set on the file system, and the message "WARNING: msgcnt 222 mesg 017: V-2-17: vx_nattr_dirremove_1 - <mntpt> file system inode <ino> marked bad incore" is logged in dmesg.

DESCRIPTION:
If the file system is full, allocation fails with ENOSPC. ENOSPC processing is done through inactive processing. If the file was created by a non-root user and this thread itself starts doing the worklist processing, it may fail with EACCES while processing IEREMOVE on files with root ownership.

RESOLUTION:
Set the root credentials while doing enospc processing and restore the old credentials after it is done.

* 4135008 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel, the system may crash with the following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations. By the time the fsadm thread tries to access the FS data structures, the umount operation may have already freed them, which leads to the panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.
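
A userspace-style sketch of that ordering check (structure and names are illustrative, not the VxFS source):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>

    struct fs_info {
        pthread_mutex_t lock;
        bool unmounting;            /* set by the umount path before freeing anything */
    };

    /* fsadm path: re-check under the lock before touching fs structures. */
    static int fsadm_op(struct fs_info *fs)
    {
        pthread_mutex_lock(&fs->lock);
        if (fs->unmounting) {
            pthread_mutex_unlock(&fs->lock);
            return -EBUSY;          /* fail instead of racing with the unmount */
        }
        /* ... safe to use the fs data structures here ... */
        pthread_mutex_unlock(&fs->lock);
        return 0;
    }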

* 4135017 (Tracking ID: 4119961)

SYMPTOM:
Machine hit a kernel panic and generated a core dump.

DESCRIPTION:
During a read, the code tried to release a lock that was never taken.

RESOLUTION:
Fixed the issue with code changes.

* 4135018 (Tracking ID: 4068201)

SYMPTOM:
File system corruption

DESCRIPTION:
In certain cases, the intent log was not flushed for transactions committed by a message handler thread before replying to the message.

RESOLUTION:
Flush the transactions done during ilist pull and push in case of error before sending the response.

* 4135022 (Tracking ID: 4101075)

SYMPTOM:
A core dump may be hit with debug bits while searching for a big extent.

DESCRIPTION:
During the search for a big extent (32K) with delegation, it is okay for the smap to be NULL; if it is not NULL, it must be unlocked.

RESOLUTION:
Relaxed the assert, as it was unnecessarily complaining; also added code to free the smap if it is not NULL.

* 4135027 (Tracking ID: 4084239)

SYMPTOM:
In an OOM (Out of Memory) situation, the issue may be hit if an IOCTL fails to copy the data.

DESCRIPTION:
In case of an error while copying the data (here, OOM), the code tried to release a lock that was never taken because of the error.

RESOLUTION:
Fixed the bug with code changes.

* 4135028 (Tracking ID: 4058153)

SYMPTOM:
# mkfs.vxfs -o inosize=512 /dev/vx/dsk/testdg/testvol
# mount.vxfs /dev/vx/dsk/testdg/testvol /mnt1
# mkdir /mnt1/dir1
# nxattrset -n ab -v 012345678901234567890123456789012345678901234567890123456789012345678901 /mnt1/dir1 >>>>>> creating an 88-byte nxattr
# ./create_20k_file.sh             >>>>>>>>>>>> creating 20k files inside /mnt1/dir1/ to create the LDH attribute.

Now if we remove the LDH attribute with some free inode, fsck will go into an infinite loop.

DESCRIPTION:
A calculation error was made while clearing the LDH attribute from the inode.

RESOLUTION:
Fixed the bug with code changes; now fsck will not hang and will clear the LDH attribute.

* 4135038 (Tracking ID: 4090088)

SYMPTOM:
While panic_on_warn is set, a forced umount might crash the server with the following stack trace:

PID: 627188  TASK: ffff901777e89c00  CPU: 7   COMMAND: "vxumount"
 #0  machine_kexec at ffffffffb54653c8
 #1  __crash_kexec at ffffffffb55af0cd
 #2  panic at ffffffffb5e3359f
 #3  __warn.cold at ffffffffb5e337c4
 #4  report_bug at ffffffffb594501a
 #5  handle_bug at ffffffffb5e7be9c
 #6  exc_invalid_op at ffffffffb5e7c024
 #7  asm_exc_invalid_op at ffffffffb6000a62
    [exception RIP: blkdev_flush_mapping+252]
 #8  blkdev_put at ffffffffb58b41b0
 #9  vx_force_umount at ffffffffc1675e8f [vxfs]
#10  vx_aioctl_common at ffffffffc11dd80e [vxfs]
#11  vx_aioctl at ffffffffc11d57db [vxfs]
#12  vx_admin_ioctl at ffffffffc1550b2e [vxfs]
#13  vxportalunlockedkioctl at ffffffffc0a61399 [vxportal]
#14  __x64_sys_ioctl at ffffffffb578aca2
#15  do_syscall_64 at ffffffffb5e7bb2b

DESCRIPTION:
The mode was not being properly set with the FMODE_EXCL flag.

RESOLUTION:
Code changes have been checked-in to properly set the flag value.

* 4135040 (Tracking ID: 4092440)

SYMPTOM:
# /opt/VRTS/bin/fsppadm enforce /mnt4
UX:vxfs fsppadm: ERROR: V-3-27988: Placement policy file does not exist for mount point /mnt4: No such file or directory
# echo $?
0

DESCRIPTION:
The fsppadm command was returning exit code 0 even when an error occurred during policy enforcement.

RESOLUTION:
Fixed the issue with code changes.

* 4135042 (Tracking ID: 4068953)

SYMPTOM:
# /opt/VRTS/bin/fsck /dev/vx/rdsk/testdg/testvol
# mount.vxfs /dev/vx/dsk/testdg/testvol /testfsck
# vxupgrade -n 17 /testfsck
# umount /testfsck 
# /opt/VRTS/bin/fsck -o full -n /dev/vx/rdsk/testdg/testvol
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
fileset 1 au 0 imap incorrect - fix (ynq)n                  >>>>>>> NOT EXPECTED
fileset 1 iau 0 summary incorrect - fix? (ynq)n       >>>>>>> NOT EXPECTED
OK to clear log? (ynq)n

DESCRIPTION:
In case of a HOLE in the ilist file, the issue may be hit because of an incorrect calculation of the available space.

RESOLUTION:
Corrected the space calculation with code changes.

* 4135102 (Tracking ID: 4099740)

SYMPTOM:
While mounting a file system, it fails with an EBUSY error even though the same device is not seen as mounted on the setup.

DESCRIPTION:
While mounting a filesystem, if an error is encountered in kernel space, a hold count on the block device is leaked. This falsely implies to any future mount that the block device is still open. Because of that, when the user retries the mount, it fails with EBUSY. It also causes a memory leak for the same reason.

RESOLUTION:
Code changes are done to release the hold count on the block device properly.
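
A generic sketch of the balanced hold/release pattern on the block device (Linux 4.x/5.x-era kernel calls; mount_setup() is a hypothetical placeholder for the failing step):

    #include <linux/blkdev.h>
    #include <linux/err.h>

    struct block_device *bdev;

    bdev = blkdev_get_by_path(devpath, FMODE_READ | FMODE_WRITE | FMODE_EXCL, fs);
    if (IS_ERR(bdev))
            return PTR_ERR(bdev);

    if (mount_setup(fs, bdev)) {
            /* The error path must drop the hold, or every later mount of the
             * same device sees EBUSY and the reference is leaked. */
            blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
            return -EIO;
    }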

* 4135105 (Tracking ID: 4112056)

SYMPTOM:
The inode fields i_acl and i_default_acl have the incorrect value 0, whereas the expected value is ACL_NOT_CACHED (-1).

DESCRIPTION:
VxFS does not set the get_acl() callback in inode_operations (i_op); hence whenever the kernel (version 4.x and above) checks for the presence of this callback and does not find it, it sets the i_acl and i_default_acl fields to 0.

RESOLUTION:
Corrected the bug with code changes.
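
A minimal sketch of wiring that callback (kernel 4.x-style inode_operations; the handler and its internals are hypothetical):

    #include <linux/fs.h>
    #include <linux/posix_acl.h>

    static struct posix_acl *myfs_get_acl(struct inode *inode, int type)
    {
            /* Return the cached or on-disk ACL for 'type' (stubbed here). */
            return NULL;
    }

    static const struct inode_operations myfs_file_iops = {
            /* With .get_acl present, the kernel leaves i_acl/i_default_acl
             * as ACL_NOT_CACHED instead of zeroing them. */
            .get_acl = myfs_get_acl,
    };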

* 4135149 (Tracking ID: 4129680)

SYMPTOM:
VxFS rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to VxFS rpm.

* 4136095 (Tracking ID: 4134194)

SYMPTOM:
vxfs/glm worker thread panic with kernel NULL pointer dereference

DESCRIPTION:
In vx_dir_realloc(), when a directory block is full, it is reallocated into a larger extent to fit a new file entry. As the new extent gets allocated, the old cbuf becomes part of the new extent. But the old cbuf was not invalidated during dir_realloc, which left a stale cbuf in the cache. This stale buffer can cause a buffer overflow issue.

RESOLUTION:
Code changes are done to invalidate the cbuf immediately after the realloc.

* 4136238 (Tracking ID: 4134884)

SYMPTOM:
After unmounting the FS, when the diskgroup deport is initiated, it gives below error: 
vxvm:vxconfigd: V-5-1-16251 Disk group deport of testdg failed with error 70 - Volume or plex device is open or attached

DESCRIPTION:
During the mount of a dirty file system, the VxVM device open count was leaked; consequently, the deport of the VxVM DG failed. During a VxFS mount operation, the corresponding VxVM device is opened. If the FS is not clean, mount performs a log replay; after the log replay completes, the mount succeeds. But the device open count leak causes the diskgroup deport to fail.

RESOLUTION:
Code changes are done to address the device open count leak.

* 4137137 (Tracking ID: 4136110)

SYMPTOM:
The "umount -l" command unmounts mount points even after a mntlock is added, on SLES12 and SLES15.

DESCRIPTION:
The "umount -l" command unmounts mount points even after a mntlock is added, because the umount.vxfs binary is missing from the /sbin directory on SLES12 and SLES15.

RESOLUTION:
Code changes have been done to add the umount.vxfs binary to the /sbin directory on SLES12 and SLES15.

* 4137139 (Tracking ID: 4126943)

SYMPTOM:
The lost+found directory in a VxFS file system should be created with default ACL permissions 700.

DESCRIPTION:
For security reasons, there was an ask to create the lost+found directory in a VxFS file system with default ACL permissions 700, so that, except for root, no other users can access files under the lost+found directory.

RESOLUTION:
VxFS filesystem creation with mkfs command will now result in creation of lost+found directory with default ACL permissions as 700.
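
A quick verification after creating a fresh file system (device and mount point follow the examples used elsewhere in this document; output abridged):

    # mkfs.vxfs /dev/vx/dsk/testdg/testvol
    # mount.vxfs /dev/vx/dsk/testdg/testvol /mnt1
    # ls -ld /mnt1/lost+found
    drwx------ 2 root root ... /mnt1/lost+found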

* 4140587 (Tracking ID: 4136235)

SYMPTOM:
Systems with a higher number of attribute inodes and pnlct inodes may see a higher number of IOs on an idle system.

DESCRIPTION:
Systems with a higher number of attribute inodes and pnlct inodes may see a higher number of IOs on an idle CFS. Hence, reducing the pnlct merge frequency may show some performance improvement.

RESOLUTION:
Added a module parameter to change the pnlct merge frequency.

* 4140594 (Tracking ID: 4116887)

SYMPTOM:
Running fsck -y on a large metasave with lots of hardlinks consumes a huge amount of system memory.

DESCRIPTION:
An FS with a lot of hardlinks requires a lot of memory for storing dotdot information. Pass1d populates this dotdot linked list but never frees the space. During the whole fsck run, whenever some change in the structural files is required, fsck rebuilds; every rebuild adds to the already-consumed memory, so the total memory consumption becomes huge.

RESOLUTION:
Code changes are done to free the dotdot list.

* 4140599 (Tracking ID: 4132435)

SYMPTOM:
Failures seen in FSQA cmds->fsck tests, panic in get_dotdotlst

DESCRIPTION:
The inode being processed in pass_unload->clean_dotdotlst() was not in the incore imap table, so its related dotdot list was never created. Because the dotdot list was not initialized, a NULL pointer dereference was hit in clean_dotdotlst(), hence the panic.

RESOLUTION:
Code changes are done to check for inode allocation status 
in incore imap table while cleaning the dotdot list.

* 4140782 (Tracking ID: 4137040)

SYMPTOM:
The system hung due to a missing unlock on a file directory. This issue could be hit if a lot of mv(1) operations happen against one large VxFS directory.

DESCRIPTION:
In a large VxFS directory, LDH (alternate indexing) is activated once the number of directory entries crosses the large directory threshold (vx_dexh_sz); LDH creates a hash attribute inode in the main directory. The exclusive lock of this LDH inode is required during file system rename operations (mv(1)). In case of multiple rename operations happening against one large directory, the trylock of the LDH inode may fail due to contention, and VX_EDIRLOCK is returned.

In case of VX_EDIRLOCK, VxFS should release the exclusive lock of the source directory, update the locker list, and then retry the operation. However, VxFS wrongly released the exclusive lock of the target directory instead of the source and did not update the locker list during the retry. Although it happens to release the right lock when the target equals the source (a rename within the same directory), the locker list is not updated, so the stale locker record remains in the list; consequently, the same lock will not get released due to this extra record.

RESOLUTION:
Release the source dir lock instead of target, and update locker list accordingly.

* 4141124 (Tracking ID: 4034246)

SYMPTOM:
In case of a race condition in a cluster filesystem, the link count table in VxFS might miss some metadata.

DESCRIPTION:
Due to a race between multiple threads working on the link count table in VxFS, the flag related to flushing the buffers for the link count table might get reset even if there are some pending buffers.

RESOLUTION:
Code is modified to reset the flag related to flushing on the buffers with appropriate locking protection.

* 4141125 (Tracking ID: 4008980)

SYMPTOM:
In a cluster filesystem, due to a mismatch between the size in the Linux inode and the VxFS inode, a wrong file size may be reported.

DESCRIPTION:
In a cluster filesystem, when inode ownership is transferred between cluster nodes, a race condition may cause a mismatch between the size field in the Linux inode and the VxFS inode. This results in a garbage value being reported for the file size.

RESOLUTION:
After every ownership change between cluster nodes, synchronize size field between Linux inode and VxFS inode.

* 4149891 (Tracking ID: 4145203)

SYMPTOM:
VxFS startup scripts fail to invoke Veki for kernel versions higher than 3.x.

DESCRIPTION:
The VxFS startup script failed to start Veki because it was calling the System V init script instead of the systemctl interface.

RESOLUTION:
The code now checks whether the kernel version is greater than 3.x; if systemd is present, it uses the systemctl interface, otherwise the System V interface.
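
A minimal shell sketch of that selection logic (the "veki" service/script name is assumed from context):

    #!/bin/sh
    # Pick the init interface based on kernel major version and systemd presence.
    kmajor=$(uname -r | cut -d. -f1)
    if [ "$kmajor" -gt 3 ] && [ -d /run/systemd/system ]; then
        systemctl start veki          # systemd-managed hosts
    else
        /etc/init.d/veki start        # legacy System V hosts
    fi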

* 4149898 (Tracking ID: 4095890)

SYMPTOM:
Machine is going into the panic state.

DESCRIPTION:
On Solaris, a panic was seen with changes related to delegation of a FREE EAU to the primary. A code walkthrough showed that the error-handling code was not initialized properly.

RESOLUTION:
Updated the code to initialize the error scenario correctly.

* 4149904 (Tracking ID: 4111385)

SYMPTOM:
The FS will be disabled with non-debug bits, and a PANIC will happen with debug bits.

DESCRIPTION:
When the FS is migrated from big-endian to little-endian using fscdsconv, essential VxFS structures are converted based on the destination machine's endianness. However, the secure clock fields require special handling during conversion, as they contain a hash value whose salt is endianness-based.

RESOLUTION:
The fscdsconv code now treats the secure clock value as an exception and re-initializes it during migration.

* 4149906 (Tracking ID: 4085768)

SYMPTOM:
Machine is going into the panic state.

DESCRIPTION:
The bmap was being adjusted at a lower level of the RCT inode, which does not require adjustment.

RESOLUTION:
Changes made in code to handle the issue.

* 4150621 (Tracking ID: 4103398)

SYMPTOM:
The VxVM IO threads could hang with the following stack trace.

__schedule
schedule
schedule_timeout
io_schedule_timeout
io_schedule
get_request
blk_queue_bio
vxvm_gen_strategy
generic_make_request
submit_bio
vx_dev_strategy
vx_snap_strategy
vx_logbuf_write
vx_logbuf_io
vx_logbuf_flush
vx_logflush_disabled
vx_disable
vx_dataioerr_disable
vx_dataioerr
vx_pageiodone
vx_end_io_v2
bio_endio
volkiodone
volsiodone
vol_mv_write_done
voliod_iohandle
voliod_loop
kthread
ret_from_fork_nospec_begin

DESCRIPTION:
In response to an IO error returned by the VxVM IO thread, VxFS initiated another IO to mark the FS as DISABLED. This new IO got scheduled in the same VxVM IO thread. Since the VxVM IO thread was waiting in the VxFS function stack, it created a deadlock.

RESOLUTION:
Instead of issuing the IO to mark the FS as DISABLED in the VxVM thread context, the VxFS code now delegates the task to a different VxFS thread and returns control back to VxVM immediately.

* 4155169 (Tracking ID: 4106777)

SYMPTOM:
After enabling the ted_hypochondriac and ted_call_back parameters, a conformance test failed.

DESCRIPTION:
An assert in vx_dataioerr() was seen for the conform:ioerror test; the fs can be disabled asynchronously in a different thread context. A check was added to handle this case.

RESOLUTION:
Added a fix to check fs_disabled only when VX_IS_DISABLE_ASYNC is false.

* 4155832 (Tracking ID: 4028534)

SYMPTOM:
The reorg optimisation for extents sandwiched between holes does not consider changes made to the file after the reorg request is decided.

DESCRIPTION:
An additional check is needed in the reorg optimisation to consider changes made after the reorg request is decided.

RESOLUTION:
During extent reorg optimisation, recheck whether a hole was punched within the reorg length.

Patch ID: VRTSvxfs-7.4.2.4200

* 4110765 (Tracking ID: 4110764)

SYMPTOM:
Security vulnerabilities were observed in Zlib, a third-party component used by VxFS.

DESCRIPTION:
Vulnerabilities in Zlib were found in internal security scans.

RESOLUTION:
Upgraded the third-party component Zlib to address these vulnerabilities.

Patch ID: VRTSvxfs-7.4.2.4100

* 4106702 (Tracking ID: 4106701)

SYMPTOM:
A security vulnerability exists in the third-party component sqlite.

DESCRIPTION:
VxFS uses a third-party component named sqlite in which a security vulnerability exists.

RESOLUTION:
VxFS is updated to use a newer version of sqlite in which the security vulnerability has been addressed.

Patch ID: VRTSvxfs-7.4.2.3900

* 4050870 (Tracking ID: 3987720)

SYMPTOM:
The vxms test was failing.

DESCRIPTION:
The vxms test was failing.

RESOLUTION:
Updated vxms.

* 4071105 (Tracking ID: 4067393)

SYMPTOM:
System panicked with the following stack trace:

page_fault 
[exception RIP: vx_ckptdir_nmspc_match+29]
vx_nmspc_resolve
vx_drevalidate 
lookup_dcache 
do_last 
path_openat 
do_filp_open 
do_sys_open 
sys_open

DESCRIPTION:
A negative path lookup on a force-unmounted file system was not handled; hence a NULL pointer dereference occurred due to accessing the already-freed fs struct of the force-unmounted fs.

RESOLUTION:
Handled the force-unmounted case before the vx_nmspc_resolve() call, so that the NULL pointer dereference is avoided.

* 4074298 (Tracking ID: 4069116)

SYMPTOM:
fsck got stuck in pass1 inode validation.

DESCRIPTION:
fsck could land in an infinite retry loop during inode validation, with the following stack trace:

pthread_mutex_unlock()
bc_getfreebuf()
sl_getblk()
bc_rgetblk()
fs_getblk()
bmap_bread()
fs_bmap_typ()
fs_callback_bmap()
fsck_callback_bmap()
bmap_check_overlay()
ivalidate()
pass1()
iproc_do_work()
start_thread()

This is because the inode is completely corrupted in such a way that it matches a known inode type in ivalidate() and goes ahead to verify the inode bmap. While trying to do so it requests for a buffer size larger than maximum fsck buffer cache memory and hence gets stuck in a loop.

RESOLUTION:
Added code changes to skip bmap validation if the inode mode bits are corrupted

* 4075873 (Tracking ID: 4075871)

SYMPTOM:
A utility was needed to find possible pending stuck messages.

DESCRIPTION:
A utility was needed to find possible pending stuck messages.

RESOLUTION:
Added a utility to find possible pending stuck messages.

* 4075875 (Tracking ID: 4018783)

SYMPTOM:
Metasave collection and restore take a significant amount of time.

DESCRIPTION:
Metasave collection and restore take a significant amount of time.

RESOLUTION:
Code changes have been done in the metasave code base to improve metasave collection and restore performance by 30-40%.

* 4084881 (Tracking ID: 4084542)

SYMPTOM:
The fsadm defrag report needed an enhancement to display whether the FS is badly fragmented.

DESCRIPTION:
The fsadm defrag report needed an enhancement to display whether the FS is badly fragmented.

RESOLUTION:
Added a method to identify whether the FS needs defragmentation.

* 4088078 (Tracking ID: 4087036)

SYMPTOM:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume.

DESCRIPTION:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume. Besides this, while running this utility with "-n" and either "-o metasave" or "-o dumplog", it silently ignores the latter option(s).

RESOLUTION:
Code changes have been done to resolve the above-mentioned failure. Warning messages have also been added to inform users about the mutually exclusive behavior of "-n" and either of the "metasave" and "dumplog" options, instead of silently ignoring them.

* 4090573 (Tracking ID: 4056648)

SYMPTOM:
Metasave collection can be executed on a mounted filesystem.

DESCRIPTION:
If a metasave image is collected from a mounted filesystem, it might capture an inconsistent state of the filesystem, as there could be ongoing changes happening on it.

RESOLUTION:
Code changes have been done to fail metasave collection for a mounted filesystem by default. If a metasave needs to be collected from a mounted filesystem, this can still be achieved with the option "-o inconsistent".

* 4090600 (Tracking ID: 4090598)

SYMPTOM:
A utility was needed to detect culprit nodes when a CFS hang is observed.

DESCRIPTION:
A utility was needed to detect culprit nodes when a CFS hang is observed. Customers can reboot and collect crash dumps from those nodes to get the application up and running. The msgdump and glmdump utilities were integrated with cfshang_check.

RESOLUTION:
Integrated the msgdump and glmdump utilities with cfshang_check.

* 4090601 (Tracking ID: 4068143)

SYMPTOM:
The fsck->misc tests were failing.

DESCRIPTION:
The fsck->misc tests were failing.

RESOLUTION:
Updated fsck->misc.

* 4090617 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, release of cluster reservation might fail during unmount operation resulting in a  failure of command fsck with 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to release cluster reservation in unmount operation properly even for cluster-mounted filesystem.

* 4090639 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
VxFS mount operation supports context option to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. System panic observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.

* 4091580 (Tracking ID: 4056420)

SYMPTOM:
A VFR hardlink file is not getting replicated after modification in incremental sync.

DESCRIPTION:
A VFR hardlink file is not getting replicated after modification in incremental sync.

RESOLUTION:
Updated the code so that a VFR hardlink file is replicated after modification in incremental sync.

* 4093306 (Tracking ID: 4090127)

SYMPTOM:
CFS hang in vx_searchau().

DESCRIPTION:
As part of the SMAP transaction changes, the allocator changed its logic to always call mdele_tryhold when getting the emap for a particular EAU, passing nogetdele as 1, which tells mdele_tryhold not to ask for delegation when it detects a free EAU without delegation. In this case, the allocator finds such an EAU in the device summary tree but without delegation, and it keeps retrying without asking for delegation, hence the hang.

RESOLUTION:
In case a FREE EAU is found without delegation, delegate it back to Primary.

Patch ID: VRTSvxfs-7.4.2.3600

* 4089394 (Tracking ID: 4089392)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.

DESCRIPTION:
VxFS uses the OpenSSL third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1q) of this third-party component in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-7.4.2.3500

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security vulnerabilities were found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans, we found some vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
Upgraded the third-party component Zlib to resolve these vulnerabilities.

Patch ID: VRTSvxfs-7.4.2.3400

* 4079532 (Tracking ID: 4079869)

SYMPTOM:
Security vulnerabilities were found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans, we found some vulnerabilities in VxFS third-party components. Attackers can exploit these security vulnerabilities to attack the system.

RESOLUTION:
Upgraded the third-party components to resolve these vulnerabilities.

Patch ID: VRTSvxfs-7.4.2.2600

* 4015834 (Tracking ID: 3988752)

SYMPTOM:
Use the ldi_strategy() routine instead of bdev_strategy() for IOs on Solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and was causing performance issues when used for IOs. Solaris recommends using the LDI framework for all IOs.

RESOLUTION:
Code is modified to use the LDI framework for all IOs on Solaris.

* 4040612 (Tracking ID: 4033664)

SYMPTOM:
Multiple issues occur with hardlink replication using VFR.

DESCRIPTION:
Multiple different issues occur with hardlink replication using Veritas File Replicator (VFR).

RESOLUTION:
VFR is updated to fix issues with hardlink replication in the following cases:
1. Files with multiple links
2. Data inconsistency after hardlink file replication
3. Rename and move operations dumping core in multiple different scenarios
4. WORM feature support

* 4040618 (Tracking ID: 4040617)

SYMPTOM:
Veritas File Replicator is not performing as per expectations.

DESCRIPTION:
Veritas File Replicator had some bottlenecks at the networking layer as well as at the data transfer level. This was causing additional throttling in the replication.

RESOLUTION:
Performance optimisations were done at multiple places to make proper use of the available resources so that Veritas File Replicator performs as expected.

* 4060549 (Tracking ID: 4047921)

SYMPTOM:
The replication job was getting into a hung state because of a deadlock involving the threads below:

Thread : 1  

#0  0x00007f160581854d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f1605813e9b in _L_lock_883 () from /lib64/libpthread.so.0
#2  0x00007f1605813d68 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000000043be1f in replnet_sess_bulk_free ()
#4  0x000000000043b1e3 in replnet_server_dropchan ()
#5  0x000000000043ca07 in replnet_client_connstate ()
#6  0x00000000004374e3 in replnet_conn_changestate ()
#7  0x0000000000437c18 in replnet_conn_evalpoll ()
#8  0x000000000044ac39 in vxev_loop ()
#9  0x0000000000405ab2 in main ()

Thread 2 :

#0  0x00007f1605815a35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x000000000043902b in replnet_msgq_waitempty ()
#2  0x0000000000439082 in replnet_bulk_recv_func ()
#3  0x00007f1605811ea5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f1603ef29fd in clone () from /lib64/libc.so.6

DESCRIPTION:
When a replication job is paused/resumed in quick succession multiple times, a race condition may lead to a deadlock involving two threads.

RESOLUTION:
Fixed the locking sequence and added additional holds on resources to avoid the race leading to the deadlock.

* 4060566 (Tracking ID: 4052449)

SYMPTOM:
The cluster goes into an 'unresponsive' mode while invalidating pages, due to duplicate page entries in the iowr structure.

DESCRIPTION:
While finding pages for invalidation of inodes, VxFS traverses the radix tree by taking an RCU lock and fills the IO structure with an array of dirty/writeback pages that need to be invalidated. This lock is efficient for reads but does not protect against parallel creation/deletion of nodes. Hence, when VxFS finds a page, its consistency is checked through radix_tree_exception()/radix_tree_deref_retry(); if the check fails, VxFS restarts the page finding from the start offset. But VxFS does not reset the array index, leading to incorrect filling of the IO structure's array with duplicate page entries. While trying to destroy these pages, VxFS takes a page lock on each page. Because of the duplicate entries, VxFS tries to take the page lock more than once on the same page, leading to a self-deadlock.

RESOLUTION:
Code is modified to reset the array index correctly in case of failure to find pages.
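
A generic userspace-style sketch of the retry fix (lookup_slot() and needs_retry() are hypothetical stand-ins for the RCU radix-tree helpers):

    #include <stddef.h>

    extern void *lookup_slot(size_t i);     /* stand-in: read one tree slot */
    extern int   needs_retry(void *p);      /* stand-in: stale slot seen under RCU */

    static size_t collect_pages(void *pages[], size_t max)
    {
        size_t nr;
    restart:
        nr = 0;                 /* the fix: reset the fill index on every retry */
        for (size_t i = 0; i < max; i++) {
            void *p = lookup_slot(i);
            if (needs_retry(p))
                goto restart;   /* without nr = 0 above, pages[] keeps stale
                                 * entries and accumulates duplicates */
            if (p)
                pages[nr++] = p;
        }
        return nr;
    }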

* 4060585 (Tracking ID: 4042925)

SYMPTOM:
Intermittent performance issues with commands like df and ls.

DESCRIPTION:
Commands like "df" and "ls" issue the stat system call on a node to calculate the statistics of the file system. In a CFS, when the stat system call is issued, it compiles statistics from all nodes. When multiple df or ls commands are fired within a specified time limit, VxFS is optimized to return the cached statistics instead of recalculating them from all nodes. If multiple such commands are fired in succession and one of the old callers of the stat system call takes time, this optimization fails and VxFS recompiles statistics from all nodes. This can lead to bad performance of the stat system call, leading to unresponsive df and ls commands.

RESOLUTION:
Code is modified to protect last modified time of stat system call with a sleep lock.

* 4060805 (Tracking ID: 4042254)

SYMPTOM:
vxupgrade sets fullfsck flag in the filesystem if it is unable to upgrade the disk layout version because of ENOSPC.

DESCRIPTION:
If the filesystem is 100% full and its disk layout version is upgraded by using vxupgrade, the utility starts the upgrade, later fails with ENOSPC, and ends up setting the fullfsck flag in the filesystem.

RESOLUTION:
Code changes were introduced to first calculate the space required to perform the disk layout upgrade. If the required space is not available, the upgrade fails gracefully without setting the fullfsck flag.

* 4061203 (Tracking ID: 4005620)

SYMPTOM:
Inode count maintained in the inode allocation unit (IAU) can be negative when an IAU is marked bad. An error such as the following is logged.

V-2-4: vx_mapbad - vx_inoauchk - /fs1 file system free inode bitmap in au 264 marked bad

Due to the negative inode count, errors like the following might be observed and processes might be stuck at inode allocation with a stack trace as shown.

V-2-14: vx_iget - inode table overflow

	vx_inoauchk 
	vx_inofindau 
	vx_findino 
	vx_ialloc 
	vx_dirmakeinode 
	vx_dircreate 
	vx_dircreate_tran 
	vx_pd_create 
	vx_create1_pd 
	vx_do_create 
	vx_create1 
	vx_create0 
	vx_create 
	vn_open 
	open

DESCRIPTION:
The inode count can be negative if VxFS somehow tries to allocate an inode from an IAU where the counter for regular file and directory inodes is zero. In such a situation, the inode allocation fails and the IAU map is marked bad. But the code tries to further reduce the already-zero counters, resulting in negative counts that can cause subsequent unresponsive situations.

RESOLUTION:
Code is modified to not reduce inode counters in vx_mapbad code path if the result is negative. A diagnostic message like the following flashes.
"vxfs: Error: Incorrect values of ias->ifree and Aus rifree detected."

* 4061527 (Tracking ID: 4054386)

SYMPTOM:
VxFS systemd service may show active status despite the module not being loaded.

DESCRIPTION:
If systemd service fails to load vxfs module, the service still shows status as active instead of failed.

RESOLUTION:
The script is modified to show the correct status in case of such failures.

Patch ID: VRTSvxfs-7.4.2.2200

* 4013420 (Tracking ID: 4013139)

SYMPTOM:
The abort operation fails on an ongoing online migration from the native file system to VxFS on RHEL 8.x systems.

DESCRIPTION:
The following error messages are logged when the abort operation fails:
umount: /mnt1/lost+found/srcfs: not mounted
UX:vxfs fsmigadm: ERROR: V-3-26835:  umount of source device: /dev/vx/dsk/testdg/vol1 failed, with error: 32

RESOLUTION:
The fsmigadm utility is updated to address the issue with the abort operation on an ongoing online migration.

* 4040238 (Tracking ID: 4035040)

SYMPTOM:
After a replication job is paused and resumed, some fields go missing from the stats command output, and the missing fields never show up on subsequent runs.

DESCRIPTION:
rs_start for the current stat is initialized to the start time of the replication, and the default value of rs_start is zero. Stats do not show some fields in case rs_start is zero, as seen in the snippet below:

        if (rs->rs_start && dis_type == VX_DIS_CURRENT) {
                if (!rs->rs_done) {
                        diff = rs->rs_update - rs->rs_start;
                }
                else {
                        diff = rs->rs_done - rs->rs_start;
                }

                /*
                 * The unit of time is in seconds, hence
                 * assigning 1 if the amount of data
                 * was too small
                 */

                diff = diff ? diff : 1;
                rate = rs->rs_file_bytes_synced /
                        (diff - rs->rs_paused_duration);
                printf("\t\tTransfer Rate: %s/sec\n", fmt_bytes(h, rate));
        }

In replication, we initialize rs_start to zero and later update it with the start time, but we do not save the stats to disk. That small window leaves a case where, if we pause the replication and start it again, we always see rs_start as zero.

Now, after initializing rs_start, we write it to disk in the same function. In the resume case, if rs_start is found to be zero, we re-initialize the rs_start field to the current replication start time.

RESOLUTION:
Write rs_start to disk, and add a check in the resume case to initialize the rs_start value if it is found to be 0.

* 4040608 (Tracking ID: 4008616)

SYMPTOM:
The fsck command hung.

DESCRIPTION:
fsck got stuck in a deadlock: a thread that marked a buffer aliased was waiting on itself for the reference drain, while the get-block code was called with the NOBLOCK flag.

RESOLUTION:
Honour the NOBLOCK flag.

* 4042686 (Tracking ID: 4042684)

SYMPTOM:
Command fails to resize the file.

DESCRIPTION:
There is a window where a parallel thread can clear the IDELXWRI flag when it should not.

RESOLUTION:
Set the delayed extending write flag again in case any parallel thread has cleared it.

* 4044184 (Tracking ID: 3993140)

SYMPTOM:
Every 60 seconds, compclock was lagging behind the actual elapsed time by approximately 1.44 seconds.

DESCRIPTION:
Every 60 seconds, compclock was lagging behind the actual elapsed time by approximately 1.44 seconds.

RESOLUTION:
Made adjustments to the logic responsible for calculating and updating the compclock timer.

* 4046265 (Tracking ID: 4037035)

SYMPTOM:
Added new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.

DESCRIPTION:
On high end servers, heavy lock contention was seen during inactive removal processing, which was caused by the large number of inactive worker threads spawned by VxFS. To avoid the contention, new tunable "vx_ninact_proc_threads" was added so that customer can adjust the number of inactive processing threads based on their server config and workload.

RESOLUTION:
Added new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.

* 4046266 (Tracking ID: 4043084)

SYMPTOM:
panic in vx_cbdnlc_lookup

DESCRIPTION:
Panic observed in the following stack trace:
vx_cbdnlc_lookup+000140 ()
vx_int_lookup+0002C0 ()
vx_do_lookup2+000328 ()
vx_do_lookup+0000E0 ()
vx_lookup+0000A0 ()
vnop_lookup+0001D4 (??, ??, ??, ??, ??, ??)
getFullPath+00022C (??, ??, ??, ??)
getPathComponents+0003E8 (??, ??, ??, ??, ??, ??, ??)
svcNameCheck+0002EC (??, ??, ??, ??, ??, ??, ??)
kopen+000180 (??, ??, ??)
syscall+00024C ()

RESOLUTION:
Code changes to handle memory pressure while changing FC connectivity

* 4046267 (Tracking ID: 4034910)

SYMPTOM:
Garbage values inside global list large_dirinfo.

DESCRIPTION:
Garbage values inside the global list large_dirinfo lead to fsck failure.

RESOLUTION:
Made access/updates to the global list large_dirinfo synchronous throughout the fsck binary, so that garbage values due to the race condition are avoided.

* 4046271 (Tracking ID: 3993822)

SYMPTOM:
running fsck on a file system core dumps

DESCRIPTION:
buffer was marked as busy without taking buffer lock while getting buffer from freelist in 1 thread and there was another thread 
that was accessing this buffer through its local variable

RESOLUTION:
Mark the buffer busy while holding the buffer lock when getting a free buffer.

* 4046272 (Tracking ID: 4017104)

SYMPTOM:
Deleting a huge number of inodes can consume a lot of system resources during inactivation, which causes hangs or even a panic.

DESCRIPTION:
Delicache inactivation dumps all the inodes in its inventory for inactivation, all at once. This causes a surge in resource consumption, due to which other processes can starve.

RESOLUTION:
Gradually process the inode inactivation.

* 4046829 (Tracking ID: 3993943)

SYMPTOM:
The fsck utility hit the coredump due to segmentation fault in get_dotdotlst().

Below is stack trace of the issue.

get_dotdotlst 
check_dotdot_tbl 
iproc_do_work
start_thread 
clone ()

DESCRIPTION:
Due to a bug in the fsck utility, a coredump was generated while running fsck on the filesystem, and the fsck operation aborted.

RESOLUTION:
Code changes are done to fix this issue

* 4047568 (Tracking ID: 4046169)

SYMPTOM:
On RHEL8, while moving a directory from one FS (ext4 or VxFS) to a migration VxFS, the migration can fail and the FS will be disabled. In debug testing, the issue was caught by an internal assert, with the following stack trace.

panic
ted_call_demon
ted_assert
vx_msgprint
vx_mig_badfile
vx_mig_linux_removexattr_int
__vfs_removexattr
__vfs_removexattr_locked
vfs_removexattr
removexattr
path_removexattr
__x64_sys_removexattr
do_syscall_64

DESCRIPTION:
Due to a different implementation of the "mv" operation in RHEL8 (as compared to RHEL7), there is a removexattr call on the target FS, which in the migration case is the migration VxFS. In this removexattr call, the kernel asks for the "system.posix_acl_default" attribute to be removed from the directory to be moved. But since the directory is not present on the target side yet (and hence has no extended attributes), the code returns ENODATA. When the code in vx_mig_linux_removexattr_int() encounters this error, it disables the FS and, in the debug package, calls assert.

RESOLUTION:
The fix is to ignore ENODATA error and not assert or disable the FS.

* 4049091 (Tracking ID: 4035057)

SYMPTOM:
On RHEL8, IOs done on the FS while another FS-to-VxFS migration is in progress can cause a panic, with the following stack trace.
 machine_kexec
 __crash_kexec
 crash_kexec
 oops_end
 no_context
 do_page_fault
 page_fault
 [exception RIP: memcpy+18]
 _copy_to_iter
 copy_page_to_iter
 generic_file_buffered_read
 new_sync_read
 vfs_read
 kernel_read
 vx_mig_read
 vfs_read
 ksys_read
 do_syscall_64

DESCRIPTION:
- As part of the RHEL8 support changes, the vfs_read and vfs_write calls were replaced with kernel_read and kernel_write, as the vfs_ calls are no longer exported. The kernel_read and kernel_write calls internally set the memory segment of the thread to KERNEL_DS and expect the passed buffer to have been allocated in kernel space.
- In the migration code, if the read/write operation cannot be completed using the target FS (VxFS), the IO is redirected to the source FS. In doing so, the code passes the same buffer, a user buffer, to the kernel call. This worked well with the vfs_read and vfs_write calls, but it does not work with kernel_read and kernel_write, causing a panic.

RESOLUTION:
- The fix is to use the vfs_iter_read and vfs_iter_write calls, which work with a user buffer. To use these methods, the user buffer needs to be passed via the iov_base field of a struct iovec, as sketched below.
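
A minimal sketch of the redirected read using these calls (kernel 4.18-era signatures; error handling trimmed):

    #include <linux/fs.h>
    #include <linux/uio.h>

    static ssize_t redirect_read(struct file *src, char __user *ubuf,
                                 size_t len, loff_t *pos)
    {
            struct iovec iov = {
                    .iov_base = ubuf,       /* the user buffer goes in iov_base */
                    .iov_len  = len,
            };
            struct iov_iter iter;

            /* Unlike kernel_read(), this honours the user-space mapping. */
            iov_iter_init(&iter, READ, &iov, 1, len);
            return vfs_iter_read(src, &iter, pos, 0);
    }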

* 4049097 (Tracking ID: 4049096)

SYMPTOM:
The tar command errors out with exit code 1, throwing warnings.

DESCRIPTION:
This happens due to dalloc changing the ctime of the file after allocating the extents (`(worklist thread)->vx_dalloc_flush -> vx_dalloc_off`) in between the two fsstat calls in tar.

RESOLUTION:
Avoid changing the ctime while allocating delayed extents in the background.

Patch ID: VRTSvxfs-7.4.2.1600

* 4012765 (Tracking ID: 4011570)

SYMPTOM:
WORM attribute replication support in VxFS.

DESCRIPTION:
WORM attribute replication is not supported in VFR. The code was modified to replicate the WORM attribute during attribute processing in VFR.

RESOLUTION:
Code is modified to replicate WORM attributes in VFR.

* 4014720 (Tracking ID: 4011596)

SYMPTOM:
It throws an error saying "No such file or directory present".

DESCRIPTION:
A bug was observed during parallel communication between all the nodes; some required temp files were not present on other nodes.

RESOLUTION:
Fixed to maintain consistency during parallel node communication; hacp is now used for transferring temp files.

* 4015287 (Tracking ID: 4010255)

SYMPTOM:
"vfradmin promote" fails to promote target FS with selinux enabled.

DESCRIPTION:
During the promote operation, VxFS remounts the FS at the target. When remounting the FS to remove the "protected on" flag from the target, VxFS first fetches the current mount options. With SELinux enabled (either permissive or enforcing), the OS adds the default "seclabel" option to the mount. When VxFS fetched the current mount options, "seclabel" was not recognized by VxFS; hence it failed to mount the FS.

RESOLUTION:
Code is modified to remove "seclabel" mount option during mount processing on target.

* 4015835 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand

DESCRIPTION:
During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate huge pages. Under low memory conditions, it is possible that get_user_pages() returns compound pages to VxFS. In a compound page, only the head page has a valid mapping set, and all other pages are marked TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages of this compound page. This can result in dereferencing an illegal address (TAIL_MAPPING), which was causing the panic. VxFS does not support huge pages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().

RESOLUTION:
Code is modified to get the head page when VxFS checks the writable mapping and encounters tail pages of a compound page.
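
A minimal sketch of the head-page pattern (Linux kernel; the wrapper function is illustrative):

    #include <linux/mm.h>
    #include <linux/types.h>

    static bool mapping_writable_check(struct page *page)
    {
            /* Tail pages of a compound (huge) page carry no valid mapping;
             * always inspect the head page instead. */
            struct page *head = compound_head(page);

            return page_mapping(head) != NULL;
    }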

* 4016721 (Tracking ID: 4016927)

SYMPTOM:
The remove tier command panics the system; the crash has the panic reason "BUG: unable to handle kernel NULL pointer dereference at 0000000000000150"

DESCRIPTION:
When fsvoladm removes a device, not all devices are moved, and the device count also remains the same unless it is the last device in the array. So a check for a free slot is needed before trying to access a device.

RESOLUTION:
In the device list, check for a free slot before accessing the device in that slot.

* 4017282 (Tracking ID: 4016801)

SYMPTOM:
Filesystem marked for fullfsck

DESCRIPTION:
In a cluster environment, some operations can be performed on the primary node only. When such operations are executed from a secondary node, a message is passed to the primary node. During this, it is possible that the sender node has a transaction that has not yet reached the disk. In such a scenario, if the sender node is rebooted, the primary node can see stale data.

RESOLUTION:
Code is modified to make sure transactions are flushed to the log disk before sending the message to the primary.

* 4017818 (Tracking ID: 4017817)

SYMPTOM:
NA

DESCRIPTION:
In order to increase the overall throughput of VFR, code changes have been done to replicate files in parallel.

RESOLUTION:
Code changes have been done to replicate a file's data and metadata in parallel over multiple socket connections.

* 4017820 (Tracking ID: 4017819)

SYMPTOM:
Cloud tier add operation fails when user is trying to add the AWS GovCloud.

DESCRIPTION:
Adding AWS GovCloud as a cloud tier was not supported in InfoScale. With these changes, users are able to add the AWS GovCloud type of cloud.

RESOLUTION:
Added support for AWS GovCloud

* 4019877 (Tracking ID: 4019876)

SYMPTOM:
vxfsmisc.so is a publicly shared library for Samba and does not require an InfoScale license for its usage.

DESCRIPTION:
vxfsmisc.so is a publicly shared library for Samba and does not require an InfoScale license for its usage.

RESOLUTION:
Removed the license dependency in the vxfsmisc library.

* 4020055 (Tracking ID: 4012049)

SYMPTOM:
"fsck" supports the "metasave" option but it was not documented anywhere.

DESCRIPTION:
"fsck" supports the "metasave" option while executing with the "-y" option. but it is not documented anywhere. Also, it tries to store metasave in a particular location. The user doesn't have the option to specify the location. If that location doesn't have enough space, "fsck" fails to take the metasave and it continues to change filesystem state.

RESOLUTION:
Code changes have been done to add a new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the usage message of the "fsck" binary.

* 4020056 (Tracking ID: 4012049)

SYMPTOM:
"fsck" supports the "metasave" option but it was not documented anywhere.

DESCRIPTION:
"fsck" supports the "metasave" option while executing with the "-y" option. but it is not documented anywhere. Also, it tries to store metasave in a particular location. The user doesn't have the option to specify the location. If that location doesn't have enough space, "fsck" fails to take the metasave and it continues to change filesystem state.

RESOLUTION:
Code changes have been done to add a new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the usage message of the "fsck" binary.

* 4020912 (Tracking ID: 4020758)

SYMPTOM:
Filesystem mount or fsck with -y may hang during log replay.

DESCRIPTION:
The fsck utility is used to perform the log replay. This log replay is performed during the mount operation or during a filesystem check with the -y option, if needed. In certain cases, if there are a lot of logs to be replayed, it ends up consuming the entire buffer cache. This results in an out-of-buffer scenario and a hang.

RESOLUTION:
Code is modified to make sure enough buffers are always available.

Patch ID: VRTSdbac-7.4.2.2500

* 4091000 (Tracking ID: 4090485)

SYMPTOM:
Installation of Oracle 12c GRID and database fails on RHEL8.*/OL8.* with GLIBC package error

DESCRIPTION:
On RHEL8/OL8 with GLIBC version 2.2.5, VCSMM lib uses the available default version and hence fails to build with the following error message:

INFO: /u03/app/12201/dbbase/dbhome/lib//libskgxn2.so: undefined reference to `memcpy@GLIBC_2.14' INFO: make: *** [/u03/app/12201/dbbase/dbhome/rdbms/lib/ins_rdbms.mk:1013: /u03/app/12201/dbbase/dbhome/rdbms/lib/orapwd] Error 1

RESOLUTION:
RHEL8/OL8 VCSMM module is built with GLIBC 2.2.5.

Patch ID: VRTSsfcpi-7.4.2.3000

* 4006619 (Tracking ID: 4015976)

SYMPTOM:
On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.

DESCRIPTION:
InfoScale does not support patch upgrade in alternate boot environments. Therefore, when you provide the "-rootpath" argument to the installer during a patch upgrade, the patch upgrade operation fails with the following error message: CPI ERROR V-9-0-0 The -rootpath option works only with upgrade tasks.

RESOLUTION:
The installer is enhanced to support patch upgrades in alternate boot environments by using the -rootpath option.

* 4008502 (Tracking ID: 4008744)

SYMPTOM:
Rolling upgrade using response file fails if one or more operating system packages are missing on the cluster nodes.

DESCRIPTION:
When a rolling upgrade is performed using a response file, the installer script checks for the missing operating system packages and installs them. After installing the missing packages on the cluster nodes, the installer script fails and exits. The following error message is logged:
CPI ERROR V-9-40-1153 Response file error, no configuration for rolling upgrade phase 1.
This issue occurs because the installer script fails to check the sub-cluster numbers in the response file.

RESOLUTION:
The installer script is enhanced to continue with the rolling upgrade after installing the missing OS packages on the cluster nodes.

* 4010025 (Tracking ID: 4010024)

SYMPTOM:
CPI assumes that the third digit in an InfoScale 7.4.2 version indicates a patch version, and not a GA version. Therefore, it upgrades the packages from the patch only and does not upgrade the base packages.

DESCRIPTION:
To compare product versions and to set the type of installation, CPI compares the currently installed version with the target version to be installed. However, instead of comparing all the digits in a version, it incorrectly compares only the first two digits. In this case, CPI compares 7.4 with 7.4.2.xxx, and finds that the first 2 digits match exactly. Therefore, it assumes that the base version is already installed and then installs the patch packages only.

RESOLUTION:
This hotfix updates the CPI to recognize InfoScale 7.4.2 as a base version and 7.4.2.xxx (for example) as a patch version. After you apply this patch, CPI can properly upgrade the base packages first, and then proceed to upgrade the packages that are in the patch.

* 4012032 (Tracking ID: 4012031)

SYMPTOM:
If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, they are not upgraded when you upgrade to a newer version of the product.

DESCRIPTION:
If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, you must uninstall them before you perform a product upgrade. After you upgrade those packages in global zones, you must then install the VRTSvxfs and VRTSodm packages manually in the non-global zones.

RESOLUTION:
The CPI now handles the VRTSodm and VRTSvxfs packages in non-global zones in the same manner as it does in global zones.

* 4013446 (Tracking ID: 4008578)

SYMPTOM:
Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.

DESCRIPTION:
The name of a cluster node may be set to a fully qualified hostname, for example, somehost.example.com. However, by default, the product installer trims this value and uses the shorter hostname (for example, somehost) for the cluster configuration.

RESOLUTION:
This hotfix updates the installer to allow the use of the new "-fqdn" option. If this option is specified, the installer uses the fully qualified hostname for cluster configuration. Otherwise, the installer continues with the default behavior.
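
For illustration only, a configuration run that retains fully qualified hostnames might look like this (a sketch; '-configure' is shown as an example task and your invocation may differ):

   # ./installer -configure -fqdn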

* 4014920 (Tracking ID: 4015139)

SYMPTOM:
If IPv6 addresses are provided for the system list on a RHEL 8 system, the product installer fails to verify the network communication with the remote systems and cannot proceed with the installation. The following error is logged:
CPI ERROR V-9-20-1104 Cannot ping <IPv6_address>. Please make sure that:
        - Provided hostname is correct
        - System <IPv6_address> is in same network and reachable
        - 'ping' command is available to use (provided by 'iputils' package)

DESCRIPTION:
This issue occurs because the installer uses the ping6 command to verify the communication with the remote systems if IPv6 addresses are provided for the system list. For RHEL 8 and its minor versions, the path for ping6 has changed from /bin/ping6 to /sbin/ping6, but the installer uses the old path.

RESOLUTION:
This hotfix updates the installer to use the correct path for the ping6 command.
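
For illustration only, the ping6 path can be confirmed manually on a given release (a sketch; <IPv6_address> is a placeholder):

   # command -v ping6
   /sbin/ping6
   # /sbin/ping6 -c 1 <IPv6_address>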

* 4014985 (Tracking ID: 4014983)

SYMPTOM:
The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.

DESCRIPTION:
The product installer prompts you to provide the telemetry details of cluster nodes after upgrading the InfoScale packages but before starting the services. If you cancel the installation at this stage, the Cluster Server resources cannot be brought online. Therefore, a warning message is required during the pre-upgrade checks to remind you to keep these details ready.

RESOLUTION:
The product installer is updated to notify you at the time of the pre-upgrade check, that if the cluster nodes are not registered with TES or VCR, you will need to provide these telemetry details later on.

* 4016078 (Tracking ID: 4007633)

SYMPTOM:
The product installer fails to synchronize the system clocks with the NTP server.

DESCRIPTION:
This issue occurs when the /usr/sbin/ntpdate file is missing on the systems where the clocks need to be synchronized.

RESOLUTION:
Updated the installer to include a dependency on the ntpdate package, which helps the system clocks to be synchronized with the NTP server.

* 4020090 (Tracking ID: 4022920)

SYMPTOM:
The product installer fails to install InfoScale 7.4.2 on SLES 15 SP2 and displays the following error message: 
CPI ERROR V-9-0-0
0 No padv object defined for padv SLESx8664 for system <system_name>

DESCRIPTION:
This issue occurs because the format of the kernel version that is required by SLES 15 SP2 is not included.

RESOLUTION:
The installer is updated to include the format of the kernel version that is required to support installation on SLES 15 SP2.

* 4021517 (Tracking ID: 4021515)

SYMPTOM:
On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.

DESCRIPTION:
The path of the 'ethtool' command is changed in SLES 12 SP4. Therefore, on SLES 12 SP4 and later systems, the installer does not recognize the changed path and uses an incorrect path '/sbin/ethtool' instead of '/usr/sbin/ethtool' for the 'ethtool' command. Consequently, while configuring the product, the installer fails to fetch the media speed of the network interfaces and displays its value as "Unknown".

RESOLUTION:
This hotfix updates the installer to use the correct path for the 'ethtool' command.
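
For illustration only, the media speed can be checked manually at the updated path (a sketch; eth0 is a placeholder interface name):

   # /usr/sbin/ethtool eth0 | grep Speed
           Speed: 10000Mb/s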

* 4022492 (Tracking ID: 4022640)

SYMPTOM:
The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package. The following error is logged:
CPI ERROR V-9-0-0
0 No pkg object defined for pkg VRTSvlic401742 and padv <<PADV>>

DESCRIPTION:
During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSvlic package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the base version of the VRTSvlic package on a system.

* 4027741 (Tracking ID: 4027759)

SYMPTOM:
The product installer installs lower-version packages if multiple patch bundles are specified in the incorrect order using the patch path options.

DESCRIPTION:
The product installer expects that the patch bundles, if any, are specified in lower-to-higher order. Consequently, the installer always overrides the package version with the available package in the last patch bundle in which it exists. If the patch bundles are not specified in the expected order, the installer installs the last available version of a component package.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the higher package version before installing the patch bundles. It does so regardless of the order in which the patch bundles are specified.

* 4033243 (Tracking ID: 4033242)

SYMPTOM:
When a responsefile is used, the product installer fails to add the required VCS users.

DESCRIPTION:
The product installer fails to add the required VCS users during configuration, if a responsefile is used to provide the corresponding input.

RESOLUTION:
This patch updates the installer so that it adds the required VCS users while using a responsefile for configuration.

* 4033688 (Tracking ID: 4033687)

SYMPTOM:
The InfoScale product installer deletes any existing cluster configuration files during uninstallation.

DESCRIPTION:
During uninstallation, the product installer deletes cluster configuration files like /etc/llthosts, /etc/llttab, and so on. Consequently, when you reinstall the InfoScale product, you need to perform all the cluster configuration procedures again.

RESOLUTION:
This patch updates the product installer so that it no longer deletes any existing cluster configuration files. Consequently, you can reconfigure the clusters quickly by using the existing configuration files.

* 4034357 (Tracking ID: 4033988)

SYMPTOM:
The product installer displays the following error message after the precheck and does not allow you to proceed with the installation:
The higher version of <package_name> is already installed on <system_name>

DESCRIPTION:
The product installer compares the versions of the packages in an InfoScale patch bundle with those of the packages that are installed on a system. If a more recent version of any of the packages in the bundle is found to be already installed on the system, the installer displays an error. It does not allow you to proceed further with the installation.

RESOLUTION:
The product installer is updated to allow the installation of an InfoScale patch bundle that may contain older versions of some packages. Instead of an error message, the installer now displays a warning message and lets you proceed with the installation.

* 4038945 (Tracking ID: 4033957)

SYMPTOM:
The VRTSveki and the VRTSvxfs RPMs fail to upgrade when using yum.

DESCRIPTION:
The product installer assumes that services are stopped if the corresponding modules are unloaded. However, the veki and the vxfs services remain active even after the modules are unloaded, which causes the RPMs to fail during the upgrade.

RESOLUTION:
The installer is enhanced to stop the veki and the vxfs services when the modules are unloaded but the services remain active.
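
For illustration only, a minimal sketch of the kind of check now performed (module and service names as described above; this is not the installer's exact code):

   for svc in veki vxfs; do
       # If the module is unloaded but its service is still active, stop the service.
       if ! lsmod | grep -qw "$svc" && systemctl is-active --quiet "$svc"; then
           systemctl stop "$svc"
       fi
   done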

* 4040836 (Tracking ID: 4040833)

SYMPTOM:
After an InfoScale upgrade to version 7.4.2 Update 2 on Solaris, the latest vxfs module is not loaded.

DESCRIPTION:
This issue occurs when the patch_path option of the product installer is used to perform the upgrade. When patch_path is used, the older versions of the packages in the patch are not uninstalled. Thus, the older version of the vxfs package gets loaded after the installation.

RESOLUTION:
This hotfix updates the product installer to address this issue with the patch_path option. When patch_path is used, the newer versions of the packages in the patch are installed only after the corresponding older packages are uninstalled.

* 4041770 (Tracking ID: 4041816)

SYMPTOM:
On RHEL 8.4, the system panics after the InfoScale stack starts.

DESCRIPTION:
The system panics when the CFSMount agent or the Mount agent attempts to register for IMF. This issue occurs because these agents are IMF-enabled by default.

RESOLUTION:
This hotfix updates the product installer to disable IMF for the CFSMount and the Mount agents on RHEL 8.4 systems.

* 4042590 (Tracking ID: 4042591)

SYMPTOM:
On RHEL 8.4, the installer disables IMF for the CFSMount and the Mount agents.

DESCRIPTION:
By default, IMF is enabled for the CFSMount and the Mount agents. On RHEL 8.4, the installer was earlier updated to disable IMF for the CFSMount and the Mount agents before starting the agents.

RESOLUTION:
This hotfix updates the product installer so that it no longer disables IMF for the CFSMount and the Mount agents on RHEL 8.4 systems.

* 4042890 (Tracking ID: 4043075)

SYMPTOM:
After performing a phased upgrade of InfoScale, the product installer fails to update the types.cf file.

DESCRIPTION:
You can rename the /etc/llttab file before an OS upgrade and revert to the original configuration after the OS upgrade and before you start the InfoScale stack upgrade. However, if you do not revert the renamed /etc/llttab file, the product installer fails to identify that VCS is configured on the mentioned systems and proceeds with upgrade. Consequently, the installer does not update the .../config/types.cf file.

RESOLUTION:
This hotfix updates the product installer to avoid such a situation. It displays an error and exits after the precheck tasks if the /etc/llttab file is missing and the other VCS configuration files are available on the mentioned systems.

* 4043366 (Tracking ID: 4042674)

SYMPTOM:
The product installer does not honor the single-node mode of a cluster and restarts it in the multi-node mode if 'vcs_allowcomms = 1'.

DESCRIPTION:
On a system where a single-node cluster is running, if you upgrade InfoScale using a response file with 'vcs_allowcomms = 1', the existing cluster configuration is not restored. The product installer restarts the cluster in the multi-node mode. When 'vcs_allowcomms = 1', the installer does not consider the value of the ONENODE parameter in the /etc/sysconfig/vcs file. It fails to identify that VCS is configured on the systems mentioned in the response file and proceeds with upgrade. Consequently, the installer neither updates the .../config/types.cf file nor restores the /etc/sysconfig/vcs file.

RESOLUTION:
This hotfix updates the product installer to honor the single-node mode of an existing cluster configuration on a system.
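
For illustration only, the single-node setting that the installer now honors can be inspected as follows (ONENODE=yes indicates single-node mode):

   # grep ONENODE /etc/sysconfig/vcs
   ONENODE=yes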

* 4043372 (Tracking ID: 4043371)

SYMPTOM:
If SecureBoot is enabled on the system, the product installer fails to install some InfoScale RPMs (VRTSvxvm, VRTSaslapm, VRTScavf).

DESCRIPTION:
InfoScale installations are not supported on systems where SecureBoot is enabled. However, the product installer does not check whether SecureBoot is enabled on the system. Consequently, it fails to install some InfoScale RPMs even though it proceeds with the installation.

RESOLUTION:
This hotfix updates the product installer to check whether SecureBoot is enabled, and if so, display an appropriate error message and exit.
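
For illustration only, one common way to check the SecureBoot state on a Linux system before attempting an installation (the installer's internal check may differ):

   # mokutil --sb-state
   SecureBoot enabled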

* 4043892 (Tracking ID: 4043890)

SYMPTOM:
The product installer incorrectly prompts users to install deprecated OS RPMs for LLT over RDMA configurations.

DESCRIPTION:
The following RPMs that are used in LLT RDMA configurations have been deprecated: (1) the libmthca and the libmlx4 OS RPMs have been replaced by libibverbs, and (2) the rdma RPM has been replaced by rdma-core. However, even when the libibverbs and the rdma-core RPMs are installed on a system, the InfoScale installer prompts users to install the corresponding deprecated RPMs. Furthermore, if these deprecated RPMs are not installed, the installer does not allow users to proceed with the InfoScale cluster configuration.

RESOLUTION:
This hotfix updates the product installer to use the correct OS RPMs dependency list for LLT over RDMA configurations.

* 4045881 (Tracking ID: 4043751)

SYMPTOM:
The VRTScps RPM installation may fail on SLES systems.

DESCRIPTION:
The VRTScps RPM has a dependency on the "/etc/SUSE-brand" file that is available in the "branding-SLE" package. If the brand file is not present on the system, the InfoScale product installer may fail to install the VRTScps RPM on SLES systems.

RESOLUTION:
This hotfix addresses the issue by updating the product installer to include the "branding-SLE" package in the OS dependency list during the installation pre-check.

* 4046196 (Tracking ID: 4067426)

SYMPTOM:
Package uninstallation during a rolling upgrade fails if non-global zones are under VCS service group control.

DESCRIPTION:
During a rolling upgrade, VCS failover service groups are switched to another machine in the cluster before the upgrade starts. The installer checks whether any zone has Veritas packages installed on it and tries to uninstall the packages from the non-global zone before upgrading. The uninstallation fails because the non-global zone is no longer on the current machine; it has failed over to a machine that is not being upgraded.

RESOLUTION:
The installer checks whether any zone has Veritas packages installed and is under VCS service group control. In such a case, the installer does not upgrade the zone on the current machine, because the non-global zone switches to the other machine in the cluster.

* 4050467 (Tracking ID: 4050465)

SYMPTOM:
The InfoScale product installer fails to create VCS users for non-secure clusters.

DESCRIPTION:
In case of non-secure clusters, the product installer fails to locate the binary that is required for password encryption. Consequently, the addition of VCS users fails.

RESOLUTION:
The product installer is updated to be able to successfully create VCS users in case of non-secure clusters.

* 4052860 (Tracking ID: 4052859)

SYMPTOM:
The InfoScale licensing component on AIX and Solaris, and the InfoScale agents for Pure Storage replication on AIX, do not work if the VRTSpython package is not installed.

DESCRIPTION:
On AIX and Solaris, the VRTSpython package is required to support the InfoScale licensing component. On AIX, the VRTSpython package is required to support the InfoScale agents for Pure Storage replication. These components do not work because the InfoScale product installer does not install the VRTSpython package on AIX and Solaris.

RESOLUTION:
The product installer is enhanced to install the VRTSpython package on AIX and Solaris.

* 4052867 (Tracking ID: 4052866)

SYMPTOM:
After a fresh configuration of VCS, if the InfoScale cluster node is not registered with the Usage Insights service, the expected nagging messages do not get logged in the telemetry and the system logs.

DESCRIPTION:
The CollectorService process needs to be running during an installation or a configuration operation so that the InfoScale node can register itself with the Usage Insights service. This issue occurs because the product installer does not start the CollectorService process during a fresh configuration of VCS.

RESOLUTION:
The product installer is updated to start the CollectorService process during a fresh configuration of VCS.

* 4053752 (Tracking ID: 4053753)

SYMPTOM:
The InfoScale product installer prompts you to enter the host name or the IP address and the port number of a Usage Insights server instance.

DESCRIPTION:
The licensing service is upgraded to help manage your InfoScale licenses more effectively. As part of this upgrade, you can register an InfoScale server with an on-premises Usage Insights server in your data center. The product installer prompts you to enter the host name or the IP address and the port number of the Usage Insights server instance. Using this information, the installer registers the InfoScale server with the Usage Insights server.

RESOLUTION:
The product installer is enhanced to register InfoScale servers with a Veritas Usage Insights server.

* 4053875 (Tracking ID: 4053635)

SYMPTOM:
If a system is restarted immediately after the product installation, the vxconfigd service fails to start when an add node operation is initiated during a fresh configuration.

DESCRIPTION:
When a system is restarted immediately after product installation, the VxVM agent attempts to start the vxvm-boot service. The service does not start properly due to the presence of the install-db file on the system. During a product configuration (or the add node operation), the product installer removes the install-db file and attempts to start the vxvm-boot service. However, because the service was already in the active state, systemd does not execute the vxvm-startup scripts again. Consequently, both the 'vxdctl init' and the 'vxdctl enable' commands fail with the exit code '4'.

RESOLUTION:
The product installer is updated to first restart the vxvm-boot service and then start the vxconfigd service when a system is restarted.
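
For illustration only, the equivalent manual recovery on an affected system would be along these lines (a sketch, not the installer's exact sequence):

   # systemctl restart vxvm-boot
   # vxdctl enable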

* 4053876 (Tracking ID: 4053638)

SYMPTOM:
The installer prompt to mount shared volumes during an add node operation does not advise that the corresponding CFSMount entries will be updated in main.cf.

DESCRIPTION:
If any shared volumes are mounted on the existing cluster nodes, during the add node operation, the installer provides you an option to mount them on new nodes as well. If you choose to mount the shared volumes on the new nodes, the installer updates the corresponding CFSMount entries in the main.cf file. However, the installer prompt does not advise that main.cf will also be updated.

RESOLUTION:
The installer prompt is updated to advise that the CFSMount entries in main.cf will be updated if you choose to mount shared volumes on new nodes.

* 4054322 (Tracking ID: 4054460)

SYMPTOM:
InfoScale installer fails to start the GAB service with the -start option on Solaris.

DESCRIPTION:
The product installer removes the GAB driver when it stops the service with the -stop option. However, it does not add the GAB driver while starting the service with the -start option, so the GAB service fails to start.

RESOLUTION:
The product installer is updated to load the GAB driver before it attempts to start the GAB service.

* 4054913 (Tracking ID: 4054912)

SYMPTOM:
During upgrade, the product installer fails to stop the vxfs service.

DESCRIPTION:
If upgrading from a version earlier than InfoScale 7.3, the product installer assumes that services are stopped if the corresponding modules are unloaded. However, the vxfs services remain active even after the modules are unloaded, which causes the RPMs to fail during the upgrade.

RESOLUTION:
The installer is enhanced to stop the vxfs services when the modules are unloaded but the services remain active.

* 4055055 (Tracking ID: 4055242)

SYMPTOM:
InfoScale installer fails to install a patch on Solaris and displays the following error:
CPI ERROR V-9-0-0
0 Cannot find VRTSpython for padv Sol11sparc in <<media path>>

DESCRIPTION:
The product installer expects the VRTSpython patch to be available in the media path that is provided. When the patch is not available at the expected location, the installer fails to proceed and exits with the aforementioned error message.

RESOLUTION:
The product installer is updated to handle such a scenario and proceed to install the other available patches.

* 4066237 (Tracking ID: 4057908)

SYMPTOM:
The InfoScale product installer fails to configure passwordless SSH communication for remote Solaris systems that have one of these SRUs installed: 11.4.36.101.2 and 11.4.37.101.1.

DESCRIPTION:
To establish passwordless SSH communication with a remote system, the installer tries to fetch the home directory details of the remote system. This task fails due to an issue with the function that fetches those details. The product installation fails because the passwordless SSH communication is not established.

RESOLUTION:
The product installer is updated to address this issue so that the installation does not fail when the aforementioned SRUs are installed on a system.

* 4067433 (Tracking ID: 4067432)

SYMPTOM:
While upgrading to 7.4.2 Update 2, the VRTSvlic patch package fails to install.

DESCRIPTION:
The VRTSvlic patch package has a package-level dependency on VRTSpython. When upgrading the VRTSvlic patch package, only the VRTSvlic publisher is set. The VRTSvlic publisher fails to resolve the VRTSpython dependency, and the installation is unsuccessful.

RESOLUTION:
The installer checks whether VRTSvlic and VRTSpython are in the patch list. If both are present, the installer sets the publishers for VRTSvlic and VRTSpython before running the command to install the VRTSvlic patch. The package-level dependency is thus resolved, and the VRTSvlic and VRTSpython patches get installed.

* 4070908 (Tracking ID: 4071690)

SYMPTOM:
The InfoScale product installer prompts you to enter the host name or the IP address and the port number of an edge server.

DESCRIPTION:
If the edge server details are not provided, the InfoScale server is not registered with the edge server. Consequently, the installer does not perform InfoScale configurations on the InfoScale server.

RESOLUTION:
The installer is updated to perform InfoScale configurations without registering the InfoScale server with an edge server. It no longer prompts you for the edge server details.

* 4079500 (Tracking ID: 4079853)

SYMPTOM:
The patch installer displays a false error message when the -precheck option is used.

DESCRIPTION:
While installing a patch with the -precheck option, the installer displays a false error message: 'CPI ERROR V-9-0-0 A more recent version of InfoScale Enterprise, 7.4.2.1100, is already installed on server'.

RESOLUTION:
A check is added to handle both the hotfix upgrade and the precheck scenarios before performing further tasks.

* 4079916 (Tracking ID: 4079922)

SYMPTOM:
The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package. The following error is logged:
CPI ERROR V-9-0-0
0 No pkg object defined for pkg VRTSperl530 and padv <<PADV>>

DESCRIPTION:
During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSperl package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the base version of the VRTSperl package on a system.

* 4080100 (Tracking ID: 4080098)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
The installer uses the 'openssl req' command to create the CA certificate and the csr file. From VRTSperl 5.34.0.2 onwards, the openssl version is updated. The updated version requires a configuration file to be passed to the 'openssl req' command by using the -config parameter. Consequently, the installer fails to create the CA certificate and the csr file, causing the CP server configuration to fail.

RESOLUTION:
The product installer is updated to pass the configuration file with the 'openssl req' command only.

* 4081964 (Tracking ID: 4081963)

SYMPTOM:
The VRTSvxfs patch fails to install on Linux platforms while applying the security patch for 7.4.2 SP1.

DESCRIPTION:
The VRTSvxfs patch needs the veki module to be loaded before it loads its own modules. The VRTSvxfs patch RPM verifies whether the veki module is loaded. If it is not loaded, the VRTSvxfs patch does not get installed and an error message appears.

RESOLUTION:
A new preinstall check is added to the CPI installer for VRTSvxfs that verifies whether veki is loaded and loads it before the VRTSvxfs patch is applied.
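
For illustration only, a minimal sketch of such a preinstall check (assuming the veki module is loadable with modprobe; the CPI check itself is internal):

   # lsmod | grep -qw veki || modprobe veki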

* 4084977 (Tracking ID: 4084975)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
To create the CA certificate and the csr file, the installer uses the 'openssl req' command and passes the openssl configuration file '/opt/VRTSperl/non-perl-libs/bin/openssl.cnf' by using the -config parameter. OpenSSL version 1.0.2 does not have an openssl configuration file. Hence, the installer fails to create the CA certificate and the csr file, and the CP server configuration fails.

RESOLUTION:
The installer is updated to check for the openssl configuration file and pass it only if the file is present on the system.
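
For illustration only, a sketch of the conditional invocation (the key and output file names are placeholders; the installer's actual command line may differ):

   CNF=/opt/VRTSperl/non-perl-libs/bin/openssl.cnf
   CONFIG_OPT=""
   # Pass -config only when the configuration file actually exists.
   [ -f "$CNF" ] && CONFIG_OPT="-config $CNF"
   openssl req -new -key client.key -out client.csr $CONFIG_OPT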

* 4085612 (Tracking ID: 4087319)

SYMPTOM:
Installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.

DESCRIPTION:
While upgrading, the uninstallation of previous RPMs fails if the vxvm semodule is not loaded. The installer fails to uninstall VxVM if the semodule is not loaded before uninstallation.

RESOLUTION:
The installer is enhanced to check and take appropriate action if the vxvm semodule is not loaded before uninstallation.

* 4086047 (Tracking ID: 4086045)

SYMPTOM:
When an InfoScale cluster is reconfigured, the LLT, GAB, and VXFEN services fail to start after a reboot.

DESCRIPTION:
The installer updates the /etc/sysconfig/<<service>> file and incorrectly sets the START_<<service>> and STOP_<<service>> values to '0' in the pre_configure task even when VCS is not set for reconfiguration. These services thus fail to start after a reboot.

RESOLUTION:
The installer is enhanced so that it does not update the /etc/sysconfig/<<service>> files when VCS is not set for reconfiguration.

* 4086570 (Tracking ID: 4076583)

SYMPTOM:
On a Solaris system, the InfoScale installer sets and unsets publishers several times, slowing down deployment.

DESCRIPTION:
On a Solaris system, while installing the packages during an upgrade, the publishers were set and unset several times. The upgrade slows down as a result.

RESOLUTION:
The installer now sets all the publishers together. Subsequently, the higher version of the package/patch available in the publishers is selected.

* 4086624 (Tracking ID: 4086623)

SYMPTOM:
Installer fails to complete the CP server configuration. The following error is logged:
CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>
CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>

DESCRIPTION:
The installer checks for the openssl_conf file on the client nodes instead of on the CP server. Consequently, even if the openssl_conf file is not present on the CP server, the installer tries to utilize it and fails to generate the CA certificate and csr files.

RESOLUTION:
The product installer is updated to check for and pass the configuration file from the CP server.

* 4087148 (Tracking ID: 4088698)

SYMPTOM:
The CPI installer tries to download a must-have patch whose version is lower than the version specified in the media path. If the installer is unable to download it, the following error message is displayed:
CPI ERROR V-9-30-1114 Failed to connect to SORT (https://sort.veritas.com), the patch <<patchname>> is required to deploy this product.

DESCRIPTION:
The CPI installer tries to download a lower-version must-have patch even if patch bundles containing equal or higher versions of all the patches from the must-have patch are provided in the media path.

RESOLUTION:
The CPI installer no longer downloads the required must-have patch if equal or higher version patches are supplied in the media path.

* 4087809 (Tracking ID: 4086533)

SYMPTOM:
The VRTSfsadv package fails to upgrade from 7.4.2 U4 to 8.0 U1 when using yum upgrade. The following error is observed:
The fsdedupschd.service is running.  Please stop fsdedupschd.service before upgrading.
error: %prein(VRTSfsadv-8.0.0.1700-RHEL8.x86_64) scriptlet failed, exit status 1

DESCRIPTION:
The fsdedupschd service is started as a post-installation task of the VRTSfsadv 7.4.2.2600 package. Before the yum upgrade to 8.0 U1, the installer does not stop the fsdedupschd service, and the VRTSfsadv package fails to upgrade.

RESOLUTION:
The installer is enhanced to handle the start and stop of the VRTSfsadv-related services, that is, fsdedupschd and vxfs_replication.

* 4089657 (Tracking ID: 4089934)

SYMPTOM:
The installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.

DESCRIPTION:
If the '../conf/types.cf' file is changed with a VRTSvcs patch, the installer does not update the '../conf/config/types.cf' file during the patch upgrade. It needs to be updated manually to avoid unexpected issues.

RESOLUTION:
The product installer is enhanced to correctly populate the '../conf/config/types.cf' file if the '../conf/types.cf' file is changed with a VRTSvcs patch.

* 4089815 (Tracking ID: 4089867)

SYMPTOM:
On Linux, the installer fails to start the fsdedupschd service if VRTSfsadv 7.4.2.0000 is installed on the system.

DESCRIPTION:
VRTSfsadv 7.4.2.0000 does not have a systemctl wrapper for the fsdedupschd start script, and the shebang line is missing in the bash script. Because of these two issues, the installer fails to start the fsdedupschd service.

RESOLUTION:
The installer is enhanced to handle the start and stop of the VRTSfsadv-related services (fsdedupschd and vxfs_replication) only if VRTSfsadv 7.4.2.2600 or higher is installed.

* 4092407 (Tracking ID: 4092408)

SYMPTOM:
The CPI installer fails to correctly identify the status of the vxfs_replication service.

DESCRIPTION:
The installer parses the 'ps -ef' output to determine whether the vxfs_replication service has started. Because of an intermediate 'pidof' process, the installer incorrectly identifies the status of the vxfs_replication service as started, and the poststart check later fails because vxfs_replication had not actually started.

RESOLUTION:
The product installer is updated to skip the 'pidof' process while determining the vxfs_replication service status.
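
For illustration only, a status check that ignores the transient 'pidof' helper process might look like this (a sketch of the idea, not the installer's code):

   # ps -ef | grep vxfs_replication | grep -v grep | grep -v pidof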

Patch ID: VRTSsfmh-vom-HF07421101

* 4156008 (Tracking ID: 4156005)

SYMPTOM:
NA

DESCRIPTION:
NA

RESOLUTION:
NA

Patch ID: VRTSgab-7.4.2.3100

* 4105325 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSgab-7.4.2.2500

* 4057313 (Tracking ID: 4057312)

SYMPTOM:
After an InfoScale upgrade, the updated values of GAB tunables that are used when loading the corresponding modules fail to persist.

DESCRIPTION:
If the value of a tunable in /etc/sysconfig/gab is changed before an RPM upgrade, the change does not persist after the upgrade, and the tunable gets reset to the default value.

RESOLUTION:
The GAB module is updated so that its tunable parameters in /etc/sysconfig/gab can retain the existing values even after an RPM upgrade.
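
For illustration only, a representative entry in /etc/sysconfig/gab whose customized value is now retained across an RPM upgrade (GAB_START is a standard entry in this file; other tunables behave the same way):

   GAB_START=1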

Patch ID: VRTSgab-7.4.2.2100

* 4046415 (Tracking ID: 4046413)

SYMPTOM:
After a node addition or deletion, the GAB node count is not updated properly.

DESCRIPTION:
The 'gabconfig -m <node count>' command displays an error even when a correct node count is provided.

RESOLUTION:
A parsing issue caused this behavior; it has been resolved by this fix.

* 4046419 (Tracking ID: 4046418)

SYMPTOM:
GAB startup does not fail even if LLT is not configured.

DESCRIPTION:
Since the GAB service depends on the LLT service, GAB should not start if the LLT service fails to start or is not configured.

RESOLUTION:
This fix prevents GAB from starting if LLT is not configured.

Patch ID: VRTSgab-7.4.2.1300

* 4013034 (Tracking ID: 4011683)

SYMPTOM:
The GAB module failed to start and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, all their major device numbers get passed on to the mknod command. Instead, the command must contain the major device number for the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.
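
For illustration only, a sketch of deriving the major number for the GAB driver alone by matching the driver name exactly (the device path and minor number here are illustrative):

   major=$(awk '$2 == "gab" {print $1}' /proc/devices)
   mknod /dev/gab c "$major" 0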

Patch ID: VRTScps-7.4.2.3100

* 4090986 (Tracking ID: 4072151)

SYMPTOM:
The message 'Error executing update nodes set is_reachable' might intermittently appear in the syslogs of an InfoScale CP server if it is shared between multiple InfoScale clusters.

DESCRIPTION:
Typically, when a Coordination Point server (CP server) is shared among multiple InfoScale clusters, the following message might intermittently appear in the syslogs of the CP server in the context of the CP server timer thread:
'Error executing update nodes set is_reachable = case when last_ping_delay + 2 > 20 then 0 else 1 end, last_ping_delay = last_ping_delay + 2 where last_ping_delay < 1000 and reg=1 sqlite statement'.

RESOLUTION:
The CP server is updated to synchronize the database writes of the CP server timer thread in addition to its other database write operations.

Patch ID: VRTScps-7.4.2.2800

* 4088159 (Tracking ID: 4088158)

SYMPTOM:
Security vulnerabilities exist in the SQLite third-party components used by VCS.

DESCRIPTION:
VCS uses SQLite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VCS is updated to use newer versions of the SQLite third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTScps-7.4.2.2500

* 4054435 (Tracking ID: 4018218)

SYMPTOM:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2

DESCRIPTION:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

RESOLUTION:
This hotfix updates the VRTScps module so that InfoScale CP Client can establish secure communication with a CP server using TLSv1.2. However, to enable TLSv1.2 communication between the CP client and CP server after installing this hotfix, you must perform the following steps:

To configure TLSv1.2 for CP server
1. Stop the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -offline <vxcpserv> -sys <sysname> 
2. Check that the vxcpserv daemon is stopped using the following command:
   # ps -eaf | grep "/opt/VRTScps/bin/vxcpserv"
3. When the vxcpserv daemon is stopped, edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
4. Start the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -online <vxcpserv> -sys <sysname>

To configure TLSv1.2 for CP Client
Edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
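
For reference, after steps (a) and (b) the relevant lines in the /etc/vxcps_ssl.properties file would read:

   # openSSL.server.requireTLSv1 = true
   openSSL.server.requireTLSv1.2 = true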

* 4067464 (Tracking ID: 4056666)

SYMPTOM:
The Error writing to database message may appear in syslogs intermittently on InfoScale CP servers.

DESCRIPTION:
Typically, when a coordination point server (CP server) is shared among multiple InfoScale clusters, the following messages may intermittently appear in syslogs:
CPS CRITICAL V-97-1400-501 Error writing to database! :database is locked.
These messages appear in the context of the CP server protocol handshake between the clients and the server.

RESOLUTION:
The CP server is updated so that, in addition to its other database write operations, all the ones for the CP server protocol handshake action are also synchronized.

Patch ID: -5.30.0.5

* 4079828 (Tracking ID: 4079827)

SYMPTOM:
Security vulnerabilities detected in OpenSSL packaged with VRTSperl/VRTSpython for Infoscale 7.4.2 and its update releases.

DESCRIPTION:
Security vulnerabilities were detected in OpenSSL.

RESOLUTION:
Upgraded the OpenSSL version and re-created the VRTSperl/VRTSpython packages to fix the vulnerabilities.

Patch ID: VRTSvcsea-7.4.2.3100

* 4091036 (Tracking ID: 4088595)

SYMPTOM:
The hapdbmigrate utility fails to bring the Oracle service group online due to a timing issue.

DESCRIPTION:
The hapdbmigrate utility fails to bring the Oracle service group online due to a timing issue. For example:
./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
Cluster prechecks and validation                                 Done
Taking PDB resource [pdb1_res] offline                           Done
Modification of cluster configuration                            Done
VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%

VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster

Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res]Done

For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
The hapdbmigrate utility is modified to ensure that enough time elapses between the probe of the PDB resource and the online of the CDB group.

Patch ID: VRTSvcsea-7.4.2.1100

* 4020528 (Tracking ID: 4001565)

SYMPTOM:
On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.

DESCRIPTION:
On Solaris 11.4, when Oracle processes stop, IMF provides a notification to the Oracle agent, but the monitor is not scheduled. As a result, the agent fails intelligent monitoring.

RESOLUTION:
The Oracle agent now correctly processes notifications when Oracle processes stop.

Patch ID: VRTSvcswiz-7.4.2.2100

* 4049572 (Tracking ID: 4049573)

SYMPTOM:
The Veritas High Availability Configuration Wizard (HA-Plugin) is not supported on the VMware vCenter HTML-based UI.

DESCRIPTION:
The Veritas HA-Plugin was based on Adobe Flex. The HA-Plugin fails to work because Flex is now deprecated.

RESOLUTION:
The Veritas HA-Plugin now supports the VMware vCenter HTML-based UI.

Patch ID: VRTSvxfen-7.4.2.3100

* 4105330 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSvxfen-7.4.2.2500

* 4057309 (Tracking ID: 4057308)

SYMPTOM:
After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.

DESCRIPTION:
When the value of a tunable in /etc/sysconfig/vxfen is changed before an RPM upgrade, the existing value gets reset to the default value.

RESOLUTION:
The vxfen module is updated so that its existing tunable values in /etc/sysconfig/vxfen can be retained even after an RPM upgrade.

* 4067460 (Tracking ID: 4004248)

SYMPTOM:
The vxfend process segfaults and dumps core.

DESCRIPTION:
During a fencing race, vxfend sometimes crashes and generates a core dump.

RESOLUTION:
vxfend internally uses fork and exec to execute subtasks. The new child process was using the same file descriptors for logging. Simultaneous reads of the same file using a single file descriptor resulted in incorrect reads and hence the process crash and core dump. This fix creates a new file descriptor for the child process to prevent the crash.

Patch ID: VRTSvxfen-7.4.2.2100

* 4046423 (Tracking ID: 4043619)

SYMPTOM:
OCPR fails when moving from SCSI3 fencing to Customized mode.

DESCRIPTION:
Online Coordination Point Replacement (OCPR) was broken for switching from SCSI3-based to Customized mode fencing. This regression was caused by a change in the vxfend invocation.

RESOLUTION:
With this fix, OCPR from SCSI3 to Customized mode works again.

Patch ID: VRTSvxfen-7.4.2.1300

* 4006982 (Tracking ID: 3988184)

SYMPTOM:
The vxfen process cannot complete due to incomplete vxfentab file.

DESCRIPTION:
When I/O fencing starts, the vxfen startup script creates the /etc/vxfentab file on each node. If the coordination disk discovery is slow, the vxfen startup script fails to include all the coordination points in the vxfentab file. As a result, the vxfen startup script gets stuck in a loop.

RESOLUTION:
The vxfen startup process is modified to exit from the loop if it gets stuck while configuring 'vxfenconfig -c'. On exiting from the loop, systemctl starts vxfen again and tries to use the updated vxfentab file.

* 4007375 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
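
For example, to allow up to 300 seconds for the VxFEN disk group discovery, add the following entry to the /etc/vxfenmode file:

   getdisks_timeout=300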

* 4007376 (Tracking ID: 3996218)

SYMPTOM:
In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.

DESCRIPTION:
When you configure fencing in the customized mode and run the 'vxfenconfig -c' command, the vxfenconfig utility reports the 'VXFEN ERROR V-11-1-6 vxfen already configured...' error. Moreover, it also creates a new vxfend process even if VxFen is already configured. Such redundant processes may impact the performance of the system.

RESOLUTION:
The vxfenconfig utility is modified so that it does not create a new vxfend process when VxFen is already configured.

* 4007677 (Tracking ID: 3970753)

SYMPTOM:
Freeing uninitialized/garbage memory causes panic in vxfen.

DESCRIPTION:
Freeing uninitialized/garbage memory causes panic in vxfen.

RESOLUTION:
Veritas has modified the VxFen kernel module to fix the issue by initializing the object before attempting to free it.

Patch ID: VRTSspt-7.4.2.1500

* 4139975 (Tracking ID: 4149462)

SYMPTOM:
A new script, list_missing_incidents.py, is provided, which compares the changelogs of RPMs and lists the incidents missing in the new version.

DESCRIPTION:
list_missing_incidents.py compares the changelog of the old-version RPM with that of the new-version RPM and lists the incidents missing in the new-version RPM, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.

RESOLUTION:
list_missing_incidents.py compares the changelog of the old-version RPM with that of the new-version RPM and lists the incidents missing in the new-version RPM, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.

* 4146957 (Tracking ID: 4149448)

SYMPTOM:
A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in the changelog.

DESCRIPTION:
If a changelog is present in an RPM or installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.

RESOLUTION:
If a changelog is present in an RPM or installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.

Patch ID: VRTSdbed-7.4.2.2300

* 4092589 (Tracking ID: 4092588)

SYMPTOM:
SFAE failed to start with systemd.

DESCRIPTION:
SFAE failed to start with systemd because the SFAE service currently runs in backward-compatibility mode using an init script.

RESOLUTION:
Added systemd support for SFAE; the systemctl commands stop/start/status/restart/enable/disable are now supported.

Patch ID: -4.01.742.300

* 4049416 (Tracking ID: 4049416)

SYMPTOM:
Security vulnerabilities are frequently reported in JRE.

DESCRIPTION:
Many vulnerabilities are reported in JRE every quarter. To overcome this issue, the Telemetry Collector is migrated from Java to Python. All other behavior of the Telemetry Collector remains the same.

RESOLUTION:
Migrated the Telemetry Collector from Java to Python.

Patch ID: VRTSpython-3.9.2.0_1

* 4140140 (Tracking ID: 4140139)

SYMPTOM:
There are open exploitable CVEs with High/Critical CVSS scores in the current PPL (Python Programming Language) and other modules under VRTSpython.

DESCRIPTION:
There are open exploitable CVEs with High/Critical CVSS scores in the current Python Programming Language and other modules under VRTSpython.

RESOLUTION:
Upgraded the Python programming language and the vulnerable modules under VRTSpython to address the open exploitable security vulnerabilities.

Patch ID: VRTSpython-3.7.4.40

* 4133298 (Tracking ID: 4133297)

SYMPTOM:
Security vulnerabilities were detected in OpenSSL and the Python certifi module.

DESCRIPTION:
Some open CVEs are exploitable in VRTSpython for InfoScale 7.4.2.

RESOLUTION:
Upgraded the OpenSSL version and fixed the certificate under the certifi module to address the vulnerabilities.

Patch ID: VRTSpython-3.7.4.39

* 4117482 (Tracking ID: 4117483)

SYMPTOM:
Open CVEs were detected in the Python programming language and other Python modules used in VRTSpython.

DESCRIPTION:
Some open CVEs are exploitable in VRTSpython for InfoScale 7.4.2.

RESOLUTION:
VRTSpython is patched for all the open CVEs that impact InfoScale 7.4.2.

Patch ID: VRTSamf-7.4.2.3700

* 4136002 (Tracking ID: 4136003)

SYMPTOM:
A cluster node panics when the VCS-enabled AMF module monitors process online/offline events.

DESCRIPTION:
The panic indicates that the AMF module overruns into a user-space buffer while it is analyzing an argument of 8K size. The AMF module cannot load data of that length into its internal buffer, which eventually leads to an access into the user buffer; such access is not allowed when kernel SMAP is in effect.

RESOLUTION:
The AMF module is constrained to ignore arguments of 8K or larger to avoid the internal buffer overrun.

Patch ID: VRTSamf-7.4.2.3100

* 4105318 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSamf-7.4.2.2500

* 4066721 (Tracking ID: 4066719)

SYMPTOM:
InfoScale stack upgrade issues are observed on Solaris 11.4.

DESCRIPTION:
The InfoScale stack upgrade showed multiple issues because separate AMF modules existed for Solaris 11.3, 11.4, and later versions.

RESOLUTION:
AMF now provides a single module for Solaris 11.4 and later versions. Support for Solaris 11.3 is removed.

Patch ID: VRTSamf-7.4.2.2100

* 4046524 (Tracking ID: 4041596)

SYMPTOM:
A cluster node panics when the arguments passed to a process that is registered with AMF exceeds 8K characters.

DESCRIPTION:
This issue occurs due to improper parsing and handling of argument lists that are passed to processes registered with AMF.

RESOLUTION:
AMF is updated to correctly parse and handle argument lists for processes.

Patch ID: VRTSvcs-7.4.2.3700

* 4080026 (Tracking ID: 4080024)

SYMPTOM:
Due to boot sequence changes for the Solaris zones services, VxODM goes into a 'maintenance' state if a cluster node is rebooted or VCS/ODM is restarted.

DESCRIPTION:
Oracle introduced boot sequence changes for the Solaris zones services in the Solaris 11.4 platform release. ODM has a VCS dependency, but VCS has no zone dependency. At the time of reboot, zones do not get detached. After a restart, zones get started before ODM, and ODM goes into a 'maintenance' state.

RESOLUTION:
Added ODM dependency in VCS.

* 4090591 (Tracking ID: 4090590)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VCS.

DESCRIPTION:
VCS uses the OpenSSL third-party components in which some security vulnerability exist.

RESOLUTION:
VCS package is updated to use newer versions of openssl in which the security vulnerabilities have been addressed.

* 4100721 (Tracking ID: 4100720)

SYMPTOM:
GCO fails to configure for the latest RHEL/SLES platform due to an incorrect CIDR value.

DESCRIPTION:
GCO failed to configure because the altname defined for the latest RHEL/SLES kernels supplies an incorrect CIDR value.

RESOLUTION:
The code is updated to pick the correct CIDR value in the GCO configuration if an altname is defined.

* 4129502 (Tracking ID: 4129493)

SYMPTOM:
Tenable security scan kills the Notifier resource.

DESCRIPTION:
When an nmap port scan is performed on port 14144 (on which the notifier process is listening), the notifier gets killed because of the connection request.

RESOLUTION:
The required code changes have been made to prevent the Notifier agent from crashing when an nmap port scan is performed on notifier port 14144.

* 4136360 (Tracking ID: 4136359)

SYMPTOM:
When upgrading InfoScale with the latest Public Patch Bundle, types.cf is updated and the HTC types definition is removed.

DESCRIPTION:
When upgrading InfoScale with the latest Public Patch Bundle, types.cf is updated and the HTC types definition is removed, because /etc/VRTSvcs/conf/types.cf replaces /etc/VRTSvcs/conf/config/types.cf.

RESOLUTION:
Implemented a new external trigger to manually update types.cf after the VRTSvcsag RPM installation. Execute the following command:

"hatrigger -user_trigger_update_types <triggertype>"

Patch ID: VRTSvcs-7.4.2.3100

* 4090591 (Tracking ID: 4090590)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VCS.

DESCRIPTION:
VCS uses the OpenSSL third-party components in which some security vulnerability exist.

RESOLUTION:
VCS package is updated to use newer versions of openssl in which the security vulnerabilities have been addressed.

Patch ID: VRTSvcs-7.4.2.2500

* 4059899 (Tracking ID: 4059897)

SYMPTOM:
Executing the hacf -verify command generates core files if the main.cf file contains a non-ASCII character.

DESCRIPTION:
When main.cf contains a non-ASCII character, in some cases core files get generated for 'hacf -verify'. 
An example snippet of main.cf :
group test1_fail (
        SystemList = { iar730-05vm21 = 0, iar730-05vm23 = 1 }
        %AutoStartList = { iar730-05vm21, iar730-05vm23 }
        )

Computing the length of a NULL buffer causes the segmentation fault.

RESOLUTION:
The buffer is now checked for NULL before its length is computed.

* 4059901 (Tracking ID: 4059900)

SYMPTOM:
On a VCS node, hastop -local is unresponsive.

DESCRIPTION:
When a resource level online is initiated, VCS does not set any flags on a service group state to mark the group as STARTING. When a VCS stop is initiated on a system, VCS checks all the groups which are either ONLINE or in the process of coming ONLINE. Thus, even though the resources were being brought online, there was no such indication set on the group level. VCS does not mark the CVM group for offline.

A group that comes ONLINE after VCS has entered the LEAVING state is missed for offlining, causing 'hastop -local' to be unresponsive.

RESOLUTION:
Resource active state is checked to determine if a group is in the process of coming ONLINE.

* 4071007 (Tracking ID: 4070999)

SYMPTOM:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

DESCRIPTION:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

RESOLUTION:
'KillMode' value changed from 'control-group' to 'process'.

Patch ID: VRTSvcs-7.4.2.2100

* 4046515 (Tracking ID: 4040705)

SYMPTOM:
hacli hangs indefinitely when the command exceeds the character limit of 4096.

DESCRIPTION:
hacli hangs indefinitely when the '-cmd' option value exceeds the character limit of 4096. Instead of returning a proper error message, hacli indefinitely waits for a reply from the VCS engine.

RESOLUTION:
The character limit of the hacli '-cmd' option value is increased to 7680. Validations of the different hacli options are also handled, so when the '-cmd' option value exceeds this new limit, a proper error message is given instead of hanging.

* 4046520 (Tracking ID: 4040656)

SYMPTOM:
As a result of an ENOMEM error, HAD restarts with the '-restart' option.

DESCRIPTION:
When an ENOMEM error occurs, HAD retries up to a maximum limit; if the ENOMEM error persists, HAD exits. The hashadow daemon then restarts HAD with the '-restart' option. Consequently, AutoStart of a failover service group in the cluster is not allowed, because the node is considered to be in restarting mode.

RESOLUTION:
On occurrence of the ENOMEM error, HAD now exits gracefully, and the hashadow daemon restarts HAD without the '-restart' option. The node is thus not considered as restarted, and AutoStart of the failover service group is triggered.

* 4046526 (Tracking ID: 4043700)

SYMPTOM:
While an online operation is in progress and the PreOnline trigger is already executing, multiple PreOnline triggers can be executed on the same or different nodes in the cluster for failover/parallel/hybrid service groups.

DESCRIPTION:
The in-progress execution of the PreOnline trigger was not accounted for. Thus, subsequent online operations could be accepted while a PreOnline trigger was already executing. Hence, multiple PreOnline trigger instances were executed.

RESOLUTION:
While validating an online operation, in-progress PreOnline triggers are now also considered, and subsequent online operations are rejected. This fix ensures only one execution of the PreOnline trigger for failover groups.

Patch ID: VRTSvcsag-7.4.2.3700

* 4112578 (Tracking ID: 4113151)

SYMPTOM:
A dependent DiskGroup agent fails to bring its resource online due to a disk group import failure.

DESCRIPTION:
VMwareDisksAgent reports its resource as online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to increase the retry count to work around this problem, but the same value cannot be applied to every environment.

RESOLUTION:
Added a finite wait period for the VMware disk to be present in the vxdmp database before the online operation completes.

* 4113062 (Tracking ID: 4113056)

SYMPTOM:
The ReuseMntPt attribute is not honored when the same mount point is used for two resources with different FSType values.

DESCRIPTION:
The ReuseMntPt attribute is set to 1 if the same mount point needs to be specified in more than one Mount resource. However, when the two resources have different FSType values, the state of the offline resource shows as offline|unknown instead of offline. There are no errors on the online resource.

RESOLUTION:
The required code changes have been made to indicate the correct state of the resource.

* 4120653 (Tracking ID: 4118454)

SYMPTOM:
When the root user's login shell is set to /sbin/nologin in the /etc/passwd file, a Process agent resource fails to come online.

DESCRIPTION:
The following errors are logged in engine_A.log:
2023/05/31 11:34:52 VCS NOTICE V-16-10031-20704 Process:Process:imf_getnotification:Received notification for vxamf-group sendmail
2023/05/31 11:35:38 VCS ERROR V-16-10031-9502 Process:sendmail:online:Could not online the resource, make sure user-name is correct.
2023/05/31 11:35:39 VCS INFO V-16-2-13716 Thread(140147853162240) Resource(sendmail): Output of the completed operation (online)
==============================================
This account is currently not available.
==============================================

RESOLUTION:
The Process agent is enhanced to support the nologin shell for the root user. If the user's shell is set to /sbin/nologin, the agent starts the process using the /bin/bash shell.
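
The affected configuration can be identified from /etc/passwd, for example:

# grep '^root:' /etc/passwd
root:x:0:0:root:/root:/sbin/nologin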

* 4134361 (Tracking ID: 4122001)

SYMPTOM:
A NIC resource remains online after the network cable is unplugged on an ESXi server.

DESCRIPTION:
Previously, MII checking used network statistics and a ping test. Now the agent directly marks the NIC state as ONLINE based on the NIC status in the operstate file, with no ping check beforehand; the ping test is performed only if the operstate file cannot be read. In an ESXi server environment, the NIC is marked ONLINE because the operstate file is available with state UP and the carrier bit set. So the NIC resource is marked ONLINE even when no NetworkHosts are reachable.

RESOLUTION:
The NIC agent already has a PingOptimize attribute. A new value (2) is introduced for PingOptimize to control whether the ping test is performed: when PingOptimize = 2, the agent performs the ping test; otherwise it works as per the previous design.
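
For example, assuming a NIC resource named nic_res (a hypothetical name), the new behavior can be enabled with the standard VCS commands:

# haconf -makerw
# hares -modify nic_res PingOptimize 2
# haconf -dump -makero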

* 4134638 (Tracking ID: 4127320)

SYMPTOM:
The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.

DESCRIPTION:
The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.

RESOLUTION:
The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as shell to start the process.

* 4142044 (Tracking ID: 4142040)

SYMPTOM:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.

DESCRIPTION:
While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.
In some instances, the user might be instructed to manually copy '/etc/VRTSvcs/conf/types.cf' over the existing '/etc/VRTSvcs/conf/config/types.cf' file. The message "Implement /etc/VRTSvcs/conf/types.cf to utilize resource type updates" needed to be removed when updating the VRTSvcsag rpm.

RESOLUTION:
To ensure that the '/etc/VRTSvcs/conf/config/types.cf' file is updated correctly after VRTSvcsag updates, the user_trigger_update_types script can be triggered manually by the user. The following message is displayed:
Leaving existing /etc/VRTSvcs/conf/config/types.cf configuration file unmodified
Copy /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/user_trigger_update_types to /opt/VRTSvcs/bin/triggers
To manually update the types.cf, execute command "hatrigger -user_trigger_update_types 0"

* 4149222 (Tracking ID: 4121270)

SYMPTOM:
EBSvol agent error in attach disk: RHEL 7.9 + InfoScale 8.0 on AWS instance type c6i.large with NVMe devices.

DESCRIPTION:
After a volume is attached to an instance, it takes some time for its device mapping to be updated in the system. If 'lsblk -d -o +SERIAL' is run immediately after attaching the volume, the volume details are not shown in the output, and $native_device is left blank/uninitialized. The agent therefore needs to wait for the device mapping to be updated in the system.

RESOLUTION:
Logic has been added to retry the same command once after an interval if the expected volume device mapping is not found in the first run. The NativeDevice attribute is now updated properly.

Patch ID: VRTSvcsag-7.4.2.3100

* 4077735 (Tracking ID: 4075950)

SYMPTOM:
When an IPv6 VIP switches from node1 to node2 in a cluster, it takes a long time for the neighboring information to be updated and for traffic to reach node2 on the reassigned address.

DESCRIPTION:
After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as an IPv4 VIP is handled by gratuitous ARP, when an IPv6 VIP switches from node1 to node2, the network must be updated about the MAC address change.

RESOLUTION:
The network devices that communicate with the VIP are unable to establish a connection with it. To connect with the VIP, either the VIP is pinged from the switch or the 'ip -6 neighbor flush all' command is run on the cluster nodes. Neighbor-flush logic is added to the IP/MultiNIC agents so that the MAC address changed during floating VIP switchover is updated in the network.

* 4090282 (Tracking ID: 4056397)

SYMPTOM:
In a rare case, a NIC resource may go into FAILED state even though it is active at the OS level.

DESCRIPTION:
This issue occurs because the haping utility may incorrectly detect the NIC resource as down, even though it is active at the OS level.

RESOLUTION:
The haping utility is modified to use the standard OS ping at the end of the haping tasks. The utility now specifies the source IP of the NIC to ensure that the packet transmission is done using that specific NIC only, so as not to return false positives.
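
The equivalent OS-level check resembles the following (addresses are placeholders); the -I option pins the source address so that the probe egresses through the monitored NIC only:

# ping -I <nic_source_ip> -c 3 <network_host>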

* 4090621 (Tracking ID: 4083099)

SYMPTOM:
When OverlayIP is configured, the AzureIP resource offline operation fails.

DESCRIPTION:
The AzureIP resource fails to go offline when OverlayIP is configured, because the routes.delete Azure API, part of the azure-mgmt-network module, has been deprecated.

RESOLUTION:
The new routes.begin_delete API, as suggested by Azure, is now used in the Azure agent.

* 4090745 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the agent (the bundled Application agent) incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.

DESCRIPTION:
The extra space in the MonitorProcesses entry of the ArgListValues even shows up when the resource is displayed.

RESOLUTION:
For the monitored process (not program), only leading and trailing spaces are now removed; extra spaces between words are preserved.

* 4091819 (Tracking ID: 4090381)

SYMPTOM:
The VMware disk agent does not support more than 15 disk IDs.

DESCRIPTION:
Customers were unable to configure disks with the agent when more than 15 disk IDs were needed, and expected the support to be extended up to 64 disk IDs.

RESOLUTION:
Added support for 64 disk IDs in the VMwareDisk agent.

Patch ID: VRTSvcsag-7.4.2.2100

* 4045605 (Tracking ID: 4038906)

SYMPTOM:
In case of ESXi 6.7, the VMwareDisks agent fails to perform a failover on a peer node.

DESCRIPTION:
The VMwareDisks agent faults when you try to bring the related service group online or to fail over the service group on a peer node. This issue occurs due to the change in the behavior of the API on ESXi 6.7 that is used to attach VMware disks.

RESOLUTION:
The VMWareDisks agent is updated to support the changed behavior of the API on ESXi 6.7. The agent can now bring the service group online or perform a failover on a peer node successfully.

* 4045606 (Tracking ID: 4042944)

SYMPTOM:
In a hardware replicated environment, a disk group resource may fail to import when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the DiskGroup agent does not rescan all the required device paths in 
case of a multi-pathing configuration. 
The vxdg import operation fails, as the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix introduces a new resource attribute for the DiskGroup agent called ScanDisks. The ScanDisks attribute enables the user to perform a selective device scan for all disk paths associated with a VxVM disk group. The VxVM and DMP disk attributes are refreshed before attempting to import hardware-clone or replicated devices. The default value of ScanDisks is 0, which indicates that a selective device scan is not performed. Even when ScanDisks is set to 0, if the first disk group import attempt fails with an error string containing HARDWARE MIRROR, the DiskGroup agent performs a selective device scan to increase the chances of a successful import.
Sample resource configurations:
For Hardware Clone DiskGroups

DiskGroup tc_dg (
DiskGroup = datadg
DGOptions = "-o useclonedev=on -o updateid"
ForceImport = 0
ScanDisks = 1
)

For Hardware Replicated DiskGroups

DiskGroup tc_dg (
DiskGroup = datadg
ForceImport = 0
ScanDisks = 1
)

* 4046521 (Tracking ID: 4030215)

SYMPTOM:
Azure agents now support azure-identity based credential methods

DESCRIPTION:
The Azure credential system has been revamped. The new system is available in the azure-identity library.

RESOLUTION:
Azure agents now support the azure-identity based credential method. With this enhancement, Azure agents support the following Azure Python SDK versions (an illustrative install command follows the list):

azure-common==1.1.25
azure-core==1.10.0
azure-identity==1.4.1
azure-mgmt-compute==19.0.0
azure-mgmt-core==1.2.2
azure-mgmt-dns==8.0.0
azure-mgmt-network==17.1.0
azure-storage-blob==12.8.0
msrestazure==0.6.4
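
If these packages are managed with pip (an assumption; environments may differ), the listed versions could be pinned as follows:

# pip install azure-common==1.1.25 azure-core==1.10.0 azure-identity==1.4.1
# pip install azure-mgmt-compute==19.0.0 azure-mgmt-core==1.2.2 azure-mgmt-dns==8.0.0
# pip install azure-mgmt-network==17.1.0 azure-storage-blob==12.8.0 msrestazure==0.6.4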

* 4046525 (Tracking ID: 4046286)

SYMPTOM:
Azure cloud agents do not handle generic exceptions.

DESCRIPTION:
Azure agents handle only the CloudError exception of the Azure APIs, but other errors may occur during certain failure conditions.

RESOLUTION:
Azure agents are enhanced to handle API failure conditions.

* 4048981 (Tracking ID: 4048164)

SYMPTOM:
Cloud agents may report incorrect resource state in case cloud API hangs.

DESCRIPTION:
In case a Cloud SDK API/CLI call hangs, the monitor function of the cloud agents times out. This results in an unwanted failover of the service group.

RESOLUTION:
The default value of the FaultOnMonitorTimeouts attribute of all cloud agents is set to 0. This helps avoid unwanted failovers caused by Cloud SDK API/CLI hangs.
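
For example, the type-level default can be inspected or set with hatype (the cloud agent type name is a placeholder):

# hatype -display <cloud_agent_type> -attribute FaultOnMonitorTimeouts
# hatype -modify <cloud_agent_type> FaultOnMonitorTimeouts 0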

Patch ID: VRTSvcsag-7.4.2.1400

* 4007372 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set to the following values as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2. ClearClone is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both, ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.
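
Sample resource configuration (hypothetical names) enabling both ClearClone and ForceImport as described above:

DiskGroup dg_clone (
DiskGroup = datadg
ForceImport = 1
ClearClone = 2
)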

* 4007374 (Tracking ID: 1837967)

SYMPTOM:
Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.

DESCRIPTION:
This issue can occur when the STDOUT and STDERR file descriptors of the program to be started and monitored are not redirected to a specific file or to /dev/null. In this case, an application that is started by the Online entry point inherits the STDOUT and STDERR file descriptors from the entry point. Therefore, the entry point and the application, both, read from and write to the same file, which may lead to file corruption and cause the agent entry point to behave unexpectedly.

RESOLUTION:
The Application agent is updated to identify whether STDOUT and STDERR for the configured application are already redirected. If not, the agent redirects them to /dev/null.

* 4012397 (Tracking ID: 4012396)

SYMPTOM:
AzureDisk agent fails to work with latest Azure Storage SDK.

DESCRIPTION:
Latest Python SDK for Azure doesn't work with InfoScale AzureDisk agent.

RESOLUTION:
AzureDisk agent now supports latest Azure Storage Python SDK.

* 4019536 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: number of days, deleting files that are <days> old
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected duration is 30 days. Thus, only the relevant files to be moved remain, and the Online operation is completed in time.

Patch ID: VRTSllt-7.4.2.3700

* 4102403 (Tracking ID: 4100288)

SYMPTOM:
The OS panics when the LLT timer handler unlocks the simple lock used by mac-addr polling.

DESCRIPTION:
AIX 7.2 TL5 enables errchecknormal(7): unlocking in the interrupt-disabled path must assert that there are no waiters on the lock. LLT takes the lock used by mac-addr polling with interrupts disabled in the LLT timer handler, and then a different thread tries to take the same lock at interrupt level. This triggers a panic.

RESOLUTION:
Code changes have been made so that the LLT kernel extension uses a disable lock in both interrupt and thread context.

* 4120940 (Tracking ID: 4087662)

SYMPTOM:
During memory fragmentation, the LLT module may fail to allocate large memory, leading to node eviction or a node not being able to join.

DESCRIPTION:
When system memory is heavily fragmented, LLT module fails to allocate memory in the form of Linux socket buffers (SKB) from the OS. Due to this a 
cluster node may not be able to join the cluster or a node may get evicted from the cluster.

RESOLUTION:
This hotfix updates the LLT module so that memory is allocated from private memory pools maintained inside LLT; if the pools are exhausted, the LLT module tries to allocate memory through vmalloc.

* 4121625 (Tracking ID: 4081574)

SYMPTOM:
LLT unnecessarily replies to an explicit heartbeat request after 'sendhbcap' is over.

DESCRIPTION:
If LLT heartbeats from a particular node are missed in the regular context (OS timer context), LLT sends heartbeats in the receive context; these are called out-of-context heartbeats. LLT is supposed to send out-of-context heartbeats only for the 'sendhbcap' time interval, which is set to 180 seconds by default. If the regular heartbeat context (OS timer) has not recovered by then, the node must go out of the cluster. However, the remote node sends an explicit heartbeat request just before peer-inact is about to expire, and the local node unnecessarily replies to that request, so the problematic node does not go out of the cluster.

RESOLUTION:
If out-of-context heartbeats are no longer being sent (that is, if the 'sendhbcap' interval has expired), LLT should not reply. This check was missing in LLT; adding it solves the problem, and the node goes out of the cluster.
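
For reference, LLT timer tunables such as sendhbcap can be inspected and set with lltconfig; a sketch assuming the standard timer syntax, where values are in units of 0.01 second, so the 180-second default corresponds to 18000:

# lltconfig -T query
# lltconfig -T sendhbcap:18000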

* 4126360 (Tracking ID: 4124759)

SYMPTOM:
Panic happened in llt_ioship_recv on a server running in AWS.

DESCRIPTION:
In an AWS environment, packets can be duplicated even when LLT is configured over UDP; for UDP this is not expected.

RESOLUTION:
To avoid the panic, LLT now checks whether the packet is already in the send queue of the bucket and, if so, treats it as an invalid/duplicate packet.

* 4135222 (Tracking ID: 4065484)

SYMPTOM:
Unable to set a default LLT feature that prevents nodes in an InfoScale cluster from being fenced out.

DESCRIPTION:
Occasionally, in a VMware environment or on physical systems, the operating system may not deliver LLT heartbeat packets on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by attaching the all-CPU affinity to the LLT interface IRQ line.

* 4135826 (Tracking ID: 4135825)

SYMPTOM:
Once the root file system becomes full during LLT start, the LLT module fails to load forever.

DESCRIPTION:
The disk was full and the user rebooted the system or restarted the product. While loading, LLT deletes its links and tries to create new ones using the link names and "/bin/ln -f -s". As the disk is full, it is unable to create the links. Even after space is freed, link creation fails because the original links were already deleted, so the LLT module fails to load.

RESOLUTION:
Logic has been added to derive the file names needed to create new links when the existing links are not present.

* 4137266 (Tracking ID: 4139781)

SYMPTOM:
The system panics occasionally in the LLT stack when LLT over Ethernet is enabled.

DESCRIPTION:
LLT allocates skb memory from its own cache for messages larger than 4k and sets a field of skb_shared_info to point to an LLT function; it later uses this field to determine whether an skb was allocated from its own cache. When receiving a packet, the OS also allocates an skb, from the system cache, and does not reset the field before passing the skb to LLT. Occasionally, the stale pointer in memory can mislead LLT into thinking an skb is from its own cache, and LLT mistakenly uses its own free API.

RESOLUTION:
LLT now uses a hash table to record the skbs allocated from its own cache and no longer sets the field of skb_shared_info.

* 4149099 (Tracking ID: 4128887)

SYMPTOM:
Below warning trace is observed while unloading llt module:
[171531.684503] Call Trace:
[171531.684505]  <TASK>
[171531.684509]  remove_proc_entry+0x45/0x1a0
[171531.684512]  llt_mod_exit+0xad/0x930 [llt]
[171531.684533]  ? find_module_all+0x78/0xb0
[171531.684536]  __do_sys_delete_module.constprop.0+0x178/0x280
[171531.684538]  ? exit_to_user_mode_loop+0xd0/0x130

DESCRIPTION:
While unloading the llt module, the vxnet/llt proc directory is not removed properly, due to which the warning trace is observed.

RESOLUTION:
The proc_remove API is now used, which cleans up the whole subtree.

* 4152180 (Tracking ID: 4087543)

SYMPTOM:
Node panic observed at llt_rdma_process_ack+189

DESCRIPTION:
In the llt_rdma_process_ack() function, LLT tries to access a header mblk that has become invalid, because an unnecessary acknowledgement is sent from the network/RDMA layer. When the ib_post_send() function fails, the OS returns an error, and LLT handles it by sending the packet through the non-RDMA channel. Even though the OS has returned the error, the packet is still sent down and LLT gets an acknowledgement for it. LLT thus receives two acknowledgements for the same buffer: one sent by the RDMA layer although it reported an error while sending, and one that LLT simulates (by design) after delivering the packet through the non-RDMA channel.
The first acknowledgement context frees the buffer, so when LLT calls llt_rdma_process_ack() again for the same buffer, which has already been freed, it panics in that function.

RESOLUTION:
Added a check that prevents the buffer from being freed twice, so the node no longer panics at llt_rdma_process_ack+189.

Patch ID: VRTSllt-7.4.2.3100

* 4105323 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSllt-7.4.2.2500

* 4046420 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

* 4057311 (Tracking ID: 4057310)

SYMPTOM:
After an InfoScale upgrade, the updated values of LLT tunables that are used when loading the corresponding modules fail to persist.

DESCRIPTION:
If the value of a tunable in /etc/sysconfig/llt is changed before an RPM upgrade, the change does not persist after the upgrade, and the tunable gets reset to the default value.

RESOLUTION:
The LLT module is updated so that its tunable parameters in /etc/sysconfig/llt can retain the existing values even after an RPM upgrade.

* 4067422 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, it may lead to a core dump by the lltconfig process.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.

Patch ID: VRTSllt-7.4.2.2100

* 4039475 (Tracking ID: 4045607)

SYMPTOM:
LLT over UDP support for transmission and reception of data over 1500 MTU networks.

DESCRIPTION:
The UDP multiport feature in LLT performs poorly on 1500 MTU-based networks. Data packets larger than 1500 bytes cannot be transmitted over 1500 MTU-based networks, so the IP layer fragments them appropriately for transmission. The loss of a single fragment from the set leads to a total packet (I/O) loss. LLT then retransmits the same packet repeatedly until the transmission is successful. Eventually, you may encounter issues with the Flexible Storage Sharing (FSS) feature. For example, the vxprint process or the disk group creation process may stop responding, or the I/O-shipping performance may degrade severely.

RESOLUTION:
The UDP multiport feature of LLT is updated to fragment the packets such that they can be accommodated in the 1500-byte network frame. The fragments are rearranged on the receiving node at the LLT layer. Thus, LLT can track every fragment to the destination, and in case of transmission failures, retransmit the lost fragments based on the current RTT time.

* 4046200 (Tracking ID: 4046199)

SYMPTOM:
LLT over UDP configuration now accepts any link tag name.

DESCRIPTION:
Previously, for an LLT over UDP configuration, the tag field in the link definition had to be the Ethernet interface name. With this fix, any string can be used as the tag name.

RESOLUTION:
Any string can be used as the link tag name with this fix.
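
An illustrative /etc/llttab link line for LLT over UDP (the tag, port, and addresses are placeholders, and the field layout is an assumption based on the documented LLT over UDP convention); with this fix, the tag in the first field after 'link' can be any string rather than the Ethernet interface name:

link mylink1 udp - udp 50000 - 192.168.10.1 192.168.10.255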

* 4046420 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

Patch ID: VRTSllt-7.4.2.1300

* 4019535 (Tracking ID: 4018581)

SYMPTOM:
The LLT module fails to start and the system log messages indicate missing IP address.

DESCRIPTION:
When only the low priority LLT links are configured over UDP, UDPBurst mode must be disabled. UDPBurst mode must only be enabled when the high priority LLT links are configured over UDP. If the UDPBurst mode gets enabled while configuring the low priority links, the LLT module fails to start and logs the following error: "V-14-2-15795 missing ip address / V-14-2-15800 UDPburst:Failed to get link info".

RESOLUTION:
This hotfix updates the LLT module to not enable the UDPBurst mode when only the low priority LLT links are configured over UDP.

Patch ID: VRTSvxvm-7.4.2.4900

* 4069525 (Tracking ID: 4065490)

SYMPTOM:
systemd-udev threads consume more CPU during system boot-up or device discovery.

DESCRIPTION:
During disk discovery, when new storage devices are discovered, VxVM udev rules are invoked to create a hardware-path symbolic link and to set the SELinux security context on Veritas device files. To create the hardware-path symbolic link for each storage device, the "find" command is used internally, which is a CPU-intensive operation. If too many storage devices are attached to the system, the use of the "find" command causes high CPU consumption.

Also, the appropriate SELinux security context on VxVM device files was set with restorecon irrespective of whether SELinux is enabled or disabled.

RESOLUTION:
Usage of "find" command is replaced with "udevadm" command. SELinux security context on VxVM device files is being set
only when SELinux is enabled on system.

* 4092002 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.

DESCRIPTION:
Linux uses BLOCK_EXT_MAJOR (block major 259) as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to overcome the sd limitation of 16 minors per device, by which more partitions are allowed for one sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which causes vxconfigd to respond sluggishly when there is a large number of LUNs in the disk group.

RESOLUTION:
The code has been changed to remove the needless access of /proc/partitions for the LUNs that do not use the extended devt.

* 4094433 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes a kernel panic because of a NULL pointer dereference in kernel code when the BFQ disk IO scheduler is used. This is observed on SLES15 SP3 with minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 with minor kernel >= 5.14.21-150400.24.11.1.

RESOLUTION:
Code changes have been done to fix this issue in IS-8.0 and IS-8.0.2.

* 4105253 (Tracking ID: 4087628)

SYMPTOM:
When DCM is in replication mode with mounted volumes that have large regions for DCM to sync, a slave node reboot might cause CVM to go into a faulted state.

DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both the RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the slave node comes back from the reboot, it is unable to join the CVM cluster.
5. vx commands also hang on the CVM master and the logowner slave node.

RESOLUTION:
In the RU SIO, before requesting vxfs_free_region(), drop the IO count and hold it again afterwards. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), the iocount is not needed to keep the RVG from being removed.

* 4107084 (Tracking ID: 4107083)

SYMPTOM:
In the case of EMC BCV NR LUNs, vxconfigd takes a long time to start after a reboot.

DESCRIPTION:
This issue occurs because BCV NR LUNs go into an error state: the SCSI inquiry succeeds, but the disk retry takes time as it loops for each disk. This is a corner case that was not handled for BCV NR LUNs.

RESOLUTION:
Necessary code changes have been made so that, in the case of BCV NR LUNs, when the SCSI inquiry succeeds, the disk is marked as failed and vxconfigd boots quickly.

* 4111010 (Tracking ID: 4108475)

SYMPTOM:
vxfentsthdw script failed with "Expect no writes for disks.."

DESCRIPTION:
In the dmp_return_io() function, the DMP_SET_BP_ERROR() macro sets the DKE_EACCES error on errbp, but it is not reflected in errbp->orig_bp, because orig_bp is not a VxIO buffer (the IO is not coming from VxIO here).
DMP_BIODONE() is a macro that checks whether the IO buffer (errbp->orig_bp) is a VxIO buffer; if not, it returns success even if an IO error occurred.

RESOLUTION:
This condition is now handled: two more iodone functions were added as VxIO signatures to identify the VxIO buffer in the vxdmp driver, and the non-VxIO buffer case is handled by setting the proper error code on the IO buffer.

* 4113327 (Tracking ID: 4102439)

SYMPTOM:
A failure was observed when running the vxencrypt rekey operation on an encrypted volume (to perform key rotation).

DESCRIPTION:
The KMS token is 64 bytes in size, but the code restricted the token size to 63 bytes and threw an error if the token size exceeded 63.

RESOLUTION:
The issue is resolved by setting the expected token size to the size of a KMS token, which is 64 bytes.

* 4113661 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
The primary initiated a log search for the requested update sent from the secondary. The search aborted with a head error because a check condition was not set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4113663 (Tracking ID: 4095163)

SYMPTOM:
System panic with below stack:
 #6 [] invalid_op at 
    [exception RIP: __slab_free+414]
 #7 [] kfree at 
 #8 [] vol_ru_free_update at [vxio]
 #9 [] vol_ru_free_updateq at  [vxio]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]

DESCRIPTION:
The update gets freed as part of VVR recovery. At the same time, the same update also gets freed in the VVR second phase of write. Hence there is a race in freeing the updates, which caused the system panic.

RESOLUTION:
Code changes have been made to avoid the race in freeing the updates.

* 4113664 (Tracking ID: 4091390)

SYMPTOM:
vradmind hit the core dump while accessing pHdr, which is already freed.

DESCRIPTION:
While processing the config message CFG_UPDATE, the existing config message objects were incorrectly freed. Later, the objects were accessed again, which dumped the vradmind core.

RESOLUTION:
Changes are done to access the correct configuration objects.

* 4113666 (Tracking ID: 4064772)

SYMPTOM:
After enabling slub debug, the system could hang under IO load.

DESCRIPTION:
When creating VxVM I/O memory, VxVM does not align the cache size. The unaligned length is treated as an invalid I/O length in the SCSI layer, which causes some I/O requests to get stuck in an invalid state, so those I/Os are never able to complete. Thus a system hang can be observed, especially after cache slub debug is enabled.

RESOLUTION:
Code changes have been done to align the cache size.

* 4114251 (Tracking ID: 4114257)

SYMPTOM:
A VxVM command hung while the file system was waiting for IO to complete.

file system stack:
#3 [] wait_for_completion at 
#4 [] vx_bc_biowait at [vxfs]
#5 [] vx_biowait at [vxfs]
#6 [] vx_isumupd at [vxfs]
#7 [] __switch_to_asm at 
#8 [] vx_process_revokedele at [vxfs]
#9 [] vx_recv_revokedele at [vxfs]
#10 [] vx_recvdele at [vxfs]
#11 [] vx_msg_process_thread at [vxfs]

vxconfigd stack:
[<0>] volsync_wait+0x106/0x180 [vxio]
[<0>] vol_ktrans+0x9f/0x2c0 [vxio]
[<0>] volconfig_ioctl+0x82a/0xdf0 [vxio]
[<0>] volsioctl_real+0x38a/0x450 [vxio]
[<0>] vols_ioctl+0x6d/0xa0 [vxspec]
[<0>] vols_unlocked_ioctl+0x1d/0x20 [vxspec]

One of vxio thread was waiting for IO drain with below stack.

 #2 [] schedule_timeout at 
 #3 [] vol_rv_change_sio_start at [vxio]
 #4 [] voliod_iohandle at [vxio]

DESCRIPTION:
The VVR rvdcm flush SIO was triggered by a VVR logowner change and set the ru_state throttle flags, which caused MDATA_SHIP SIOs to be queued in rv_mdship_throttleq. As the MDATA_SHIP SIOs were active, the rvdcm flush SIO was unable to proceed. In the end, the rvdcm_flush SIO was waiting for the SIOs in rv_mdship_throttleq to complete, while the SIOs in rv_mdship_throttleq were waiting for the rvdcm_flush SIO to complete. Hence a deadlock situation.

RESOLUTION:
Code changes have been made to resolve the deadlock.

* 4115231 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary RVG volume acquires a lock and waits for the SRL positions to match. During this time, any VxVM transaction that kicks in also has to wait for the same lock. The logowner node then panicked, which triggered the logownership change protocol; the protocol hung because the earlier transaction was stuck. As the logowner change protocol could not complete, in the absence of a valid logowner the SRL positions could not match, causing a deadlock. That led to the vxconfigd and vx command hang.

RESOLUTION:
Changes were added to allow read operations on the volume even if the SRL positions are unmatched. Write IOs are still blocked and only open() calls for read-only operations are allowed, so there are no data consistency or integrity issues.

* 4116214 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware replicated device, so to import a disk group on REPLICATED disks, the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

* 4116422 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4116427 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable result. Vradmind periodically forks a child thread to collect VVR statistic data and send them to the remote site. While the main thread may also be sending data using the same handler object, thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4116429 (Tracking ID: 4085404)

SYMPTOM:
A huge performance drop occurs after Veritas Volume Replicator (VVR) enters Data Change Map (DCM) mode when a large Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all IOs are queued in the restart queue until the active map flush is finished. The too-frequent active map flush caused the huge IO drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.

* 4116435 (Tracking ID: 4034741)

SYMPTOM:
Due to a common RVIOMEM pool being used by multiple RVGs, a deadlock scenario can arise, causing a high load average and system hang.

DESCRIPTION:
An earlier fix limits the IO load on the secondary by retaining the updates in the NMCOM pool until the data volume write is done, which makes the RVIOMEM pool easy to fill up; a deadlock situation may then occur, especially under a high workload on multiple RVGs or cross-direction RVGs. All RVGs share the same RVIOMEM pool, while the NMCOM pool, RDBACK pool, and network/DV update lists are all per-RVG, so the RVIOMEM pool becomes the bottleneck on the secondary, which easily fills up and runs into the deadlock situation.

RESOLUTION:
Code changes have been made to honor a per-RVG RVIOMEM pool, resolving the deadlock issue.

* 4116437 (Tracking ID: 4072862)

SYMPTOM:
If RVGLogowner resources are onlined on slave nodes, stopping the whole cluster may fail, and the RVGLogowner resources go into the offline_propagate state.

DESCRIPTION:
While stopping the whole cluster, a race may happen between the CVM reconfiguration and the RVGLogowner change SIO.

RESOLUTION:
Code changes have been made to fix these races.

* 4116576 (Tracking ID: 3972344)

SYMPTOM:
After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150  Volume <volume_name> does not exist' is logged.

DESCRIPTION:
In the volume_startable function (volrecover.c), dgsetup is called to set the current default disk group. This does not update the current_group variable, leading to inappropriate mappings: volumes are searched in an incorrect disk group, which is logged in the error message.
The vxrecover command works fine if the disk group name associated with the volume is specified. [vxrecover -g <dg_name> -s]

RESOLUTION:
The code was changed to use switch_diskgroup() instead of dgsetup. Current_group is updated and the current_dg is set; thus vxrecover finds the volume correctly.

* 4117899 (Tracking ID: 4055159)

SYMPTOM:
vxdisk list shows an incorrect value of LUN_SIZE for NVMe disks.

DESCRIPTION:
vxdisk list shows an incorrect value of LUN_SIZE for NVMe disks.

RESOLUTION:
Code changes have been done to show the correct LUN_SIZE for NVMe devices.

* 4117989 (Tracking ID: 4085145)

SYMPTOM:
System with NVME devices can crash due to memory corruption.

DESCRIPTION:
As part of the changes done to detect NVMe devices through an IOCTL, an extra buflen was sent to the nvme ioctl through the VRTSaslapm component. This led to memory corruption and, in some cases, can cause the system to crash.

RESOLUTION:
Appropriate code changes have been done in the VRTSaslapm to resolve the memory corruption.

* 4118256 (Tracking ID: 4028439)

SYMPTOM:
Not able to create cached volume due to SSD tag missing

DESCRIPTION:
The disk mediatype flag was not propagated previously; it is now updated during disk online.

RESOLUTION:
Code changes have been done to make mediatype tags visible during disk online.

* 4120540 (Tracking ID: 4102532)

SYMPTOM:
/etc/default/vxsf file gets world write permission when "vxtune storage_connectivity asymmetric" is run.

DESCRIPTION:
The umask for the daemon process vxconfigd is 0, not 0022; this is required for the functionality to run properly. For this reason, any file created by vxconfigd gets world-write permission. When "vxtune storage_connectivity asymmetric" is run, a temporary file is created and then renamed to vxsf, so vxsf gets world-write permission.

RESOLUTION:
Code changes have been made so that, instead of the default permissions, specific permissions are set when the file is created. So vxsf does not get world-write permission.
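
To verify the behavior, the file mode can be checked after rerunning the tunable command (the key point is the absence of the world-write bit):

# vxtune storage_connectivity asymmetric
# ls -l /etc/default/vxsf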

* 4120545 (Tracking ID: 4090826)

SYMPTOM:
system panic at vol_page_offsetlist_sort with below stack:

vpanic()
kmem_error+0x5f0()
vol_page_offsetlist_sort+0x164()
volpage_freelist+0x278()
vol_cvol_shadow2_done+0xb8()

DESCRIPTION:
Due to a bug in sorting the large offset, the code overwrote the boundary of the allocated memory and caused the panic.

RESOLUTION:
The code change has been made to sort the large offset correctly.

* 4120547 (Tracking ID: 4093067)

SYMPTOM:
System panicked in the following stack:

#9  [] page_fault at  [exception RIP: bdevname+26]
#10 [] get_dip_from_device  [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]

DESCRIPTION:
After getting the block_device from the OS, DMP did not perform a NULL pointer check on block_device->bd_part. This NULL pointer caused a system panic when bdevname() was called.

RESOLUTION:
The code changes have been done to fix the problem.

* 4120720 (Tracking ID: 4086063)

SYMPTOM:
VxVM package uninstallation fails as no semodule policy is installed.

DESCRIPTION:
The semodule policy gets loaded in the %post stage of the new package. After a package upgrade, no semodule policy is loaded, as the %preun stage removes the policy of the upgraded package. While uninstalling the package, the %preun stage fails because it tries to remove the policy of the upgraded package, which was already removed while upgrading the package.

RESOLUTION:
The policy installation part has been added to the %posttrans stage of the install script. This way, policy installation is shifted to the last stage of the package upgrade, and the uninstallation completes successfully.

* 4120722 (Tracking ID: 4021816)

SYMPTOM:
VxVM package uninstallation fails after upgrade as semodule policy was removed during package upgrade.

DESCRIPTION:
After a VxVM package upgrade, no semodule policy is loaded: the %preun stage, which uninstalls the semodule policy, runs both when upgrading and when uninstalling the package. The %preun stage must uninstall the policy only in the case of package uninstallation, as the stage belongs to the old package.

RESOLUTION:
The code was changed to uninstall the policy in the %preun stage only in the case of package uninstallation.

* 4120724 (Tracking ID: 3995831)

SYMPTOM:
System hung: A large number of SIOs got queued in FMR.

DESCRIPTION:
When the IO load is high, there may not be enough chunks available. In that case, the DRL flushsio needs to drive the fwait queue, which may free some chunks. Due to a race condition and a bug inside DRL, DRL may queue the flushsio and fail to trigger the flushsio again; DRL then ends up in a permanent hang, unable to flush the dirty regions. The queued SIOs fail to be driven further, hence the system hang.

RESOLUTION:
Code changes have been made to drive SIOs which got queued in FMR.

* 4120728 (Tracking ID: 4090476)

SYMPTOM:
The Storage Replicator Log (SRL) is not draining to the secondary. The rlink status shows that the outstanding writes never get reduced over several hours.

VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL

DESCRIPTION:
In a poor network environment, VVR appears not to be syncing. Another reconfiguration happened before the VVR state became clean, and the VVR atomic window was set to a large size. VVR could not complete all the atomic updates before the next reconfiguration and kept sending atomic updates from the VVR pending position. Hence VVR appears to be stuck.

RESOLUTION:
Code changes have been made to update VVR pending position accordingly.

* 4120769 (Tracking ID: 4014894)

SYMPTOM:
Disk attach takes a long time with reboot/hastop in an FSS environment.

DESCRIPTION:
The current code in vxattachd calls the 'vxdg -k adddisk' command for each disk separately, in a serialized manner. This means that a number of transactions are initiated to add the disks, which can impact application IO multiple times due to IO quiesce/drain activity.

RESOLUTION:
Code changes have been made to add all disks in a single command, thus generating fewer transactions and reducing execution time.

* 4120783 (Tracking ID: 4087294)

SYMPTOM:
VxVM package upgrade fails after OS upgrade

DESCRIPTION:
After an OS upgrade, no semodule policy is loaded, but the %preun stage of the new package tries to uninstall the policy and fails. Subsequently, the upgrade fails.

RESOLUTION:
Code changes were added to the %post stage of the new package to install the semodule policy after the reboot; this way, the %preun stage has a policy to uninstall. The new package policy gets installed in the %posttrans stage.

* 4120876 (Tracking ID: 4081434)

SYMPTOM:
VVR panic with below stack:

 #2 [ffff9683fa0efcc8] panic at ffffffff90d802cc
 #3 [ffff9683fa0efd48] vol_rv_service_message_start at ffffffffc2eeae0c [vxio]
 #4 [ffff9683fa0efe48] voliod_iohandle at ffffffffc2d2e276 [vxio]
 #5 [ffff9683fa0efe88] voliod_loop at ffffffffc2d2e68c [vxio]
 #6 [ffff9683fa0efec8] kthread at ffffffff906c5e61

DESCRIPTION:
On the VVR primary side, when a data ack is received, the corresponding nio is searched for in rp_ack_waitq so that its done function can be called. But the nio may already have been freed within vol_rp_flush_ack_waitq() while disconnecting the rlink, which caused a panic when accessing the nio. To handle replica connection changes in the rp disconnect case, the VOL_RPFLAG_ACK_WAITQ_FLUSHING flag was set within vol_rp_flush_ack_waitq() to avoid such an issue. But the flag was cleared too early, just after creating the rp ports during rlink connect.

RESOLUTION:
The fix clears the flag while handling replica connection changes, symmetrically for the rp connect case.

* 4120899 (Tracking ID: 4116024)

SYMPTOM:
kernel panicked at gab_ifreemsg with following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender

DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes, enabling the vxvvrstatd daemon through the vxvm-recover service makes vxvvrstatd call ioctl(VOL_RV_APPSTATS); the latter generates a kmsg longer than 64k and triggers a kernel panic, because GAB/LLT do not support any message longer than 64k.

RESOLUTION:
Code changes have been done to add a limit on the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request the VVR statistics.

* 4120903 (Tracking ID: 4100775)

SYMPTOM:
vxconfigd kept waiting for IO drain when dmpnodes were removed. It hung with the below stack:
[] dmpsync_wait+0xa7/0xf0 [vxdmp]
[] dmp_destroy_mp_node+0x98/0x120 [vxdmp]
[] dmp_decode_destroy_dmpnode+0xd3/0x100 [vxdmp]
[] dmp_decipher_instructions+0x2d7/0x390 [vxdmp]
[] dmp_process_instruction_buffer+0x1be/0x1e0 [vxdmp]
[] dmp_reconfigure_db+0x5b/0xe0 [vxdmp]
[] gendmpioctl+0x76c/0x950 [vxdmp]
[] dmpioctl+0x39/0x80 [vxdmp]
[] dmp_ioctl+0x3a/0x70 [vxdmp]
[] blkdev_ioctl+0x28a/0xa20
[] block_ioctl+0x41/0x50
[] do_vfs_ioctl+0x3a0/0x5b0
[] SyS_ioctl+0xa1/0xc0

DESCRIPTION:
XFS utilizes the chained BIO feature to send BIOs to VxDMP. As chained BIOs were not supported by VxDMP, VxDMP kept waiting for the BIO to complete.

RESOLUTION:
Code changes have been made to support chained BIOs on RHEL7.

* 4120916 (Tracking ID: 4112687)

SYMPTOM:
vxdisk resize corrupts the disk public region and causes file system mount to fail.

DESCRIPTION:
For a single-path disk, during the two transactions of a resize operation, the private region IOs could be incorrectly sent to partition 3 of the GPT disk, causing a shift of 48 additional sectors. This may make private region data get written to the public region and cause corruption.

RESOLUTION:
Code changes have been made to fix the problem.

* 4121075 (Tracking ID: 4100069)

SYMPTOM:
One of the standard disk groups fails to auto-import when such disk groups coexist with a cloned disk group. The import fails with the below error in the syslog:
vxvm:vxconfigd[xxx]: V-5-1-569 Disk group <disk group name>, Disk <dmpnode name> Cannot auto-import group:
vxvm:vxconfigd[xxx]: #011Disk for disk group not found

DESCRIPTION:
The importflags were not reset before starting the next disk group import, so the next disk group import inherited all the flags from the last round of disk group import. The improper importflags caused the failure.

RESOLUTION:
Code changes have been made to reset the importflags in every round of disk group import.

* 4121081 (Tracking ID: 4098965)

SYMPTOM:
vxconfigd dumps core when scanning IBM XIV LUNs, with the following stack:

#0  0x00007fe93c8aba54 in __memset_sse2 () from /lib64/libc.so.6
#1  0x000000000061d4d2 in dmp_getenclr_ioctl ()
#2  0x00000000005c54c7 in dmp_getarraylist ()
#3  0x00000000005ba4f2 in update_attr_list ()
#4  0x00000000005bc35c in da_identify ()
#5  0x000000000053a8c9 in find_devices_in_system ()
#6  0x000000000053aab5 in mode_set ()
#7  0x0000000000476fb2 in ?? ()
#8  0x00000000004788d0 in main ()

DESCRIPTION:
Due to an incorrect memory address being accessed, two issues can occur if more than one disk array is connected:

1. If the incorrect memory address exceeds the range of valid virtual memory, it triggers a segmentation fault and crashes vxconfigd.
2. If the incorrect memory address does not exceed the range of valid virtual memory, it causes memory corruption, which may not trigger a vxconfigd crash.

RESOLUTION:
Code changes have been made to correct the problem.

* 4121083 (Tracking ID: 4105953)

SYMPTOM:
System panic with below stack in CVR environment.

 #9 [] page_fault at 
    [exception RIP: vol_ru_check_update_done+183]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]
#13 [] kthread at

DESCRIPTION:
In a CVR environment, when IO is issued in writeack sync mode, the application is acknowledged when the data volume write is done on either the log client or the logowner, depending on where the IO was issued. In writeack sync mode, VVR could free the metadata IO update after the SRL write was done, but the update was then accessed again after being freed, resulting in a NULL pointer dereference.

RESOLUTION:
Code changes have been made to avoid accessing the NULL pointer.

* 4121222 (Tracking ID: 4095718)

SYMPTOM:
vxesd kept waiting for IO drain with the below stack, and other tasks like vxpath_links were hung too.

#0 [] __schedule at
#1 [] schedule at
#2 [] dmpsync_wait at [vxdmp]
#3 [] dmp_drain_path at [vxdmp]
#4 [] dmp_disable_path at [vxdmp]
#5 [] dmp_change_path_state at [vxdmp]
#6 [] gendmpioctl at [vxdmp]
#7 [] dmpioctl at [vxdmp]
#8 [] dmp_ioctl at [vxdmp]

DESCRIPTION:
Due to storage-related activities, some subpaths were changed to DISABLED, triggered by vxesd. All the subpaths belonging to the same dmpnode were marked as QUIESCED too. If any subpaths are handling IO errors, vxesd needs to wait until the error processing is finished. Due to a bug, the error processing might fail to wake up vxesd. The hung vxesd further caused all incoming IOs against those dmpnodes to be queued in the DMP defer queue. As a result, all tasks waiting for IO to complete on those dmpnodes hung permanently too.

RESOLUTION:
The code changes have been made to wake up the tasks that are waiting for DMP error handling to be done.

* 4121243 (Tracking ID: 4101588)

SYMPTOM:
vxtune displays vol_rvio_maxpool_sz as zero when it is over 4GB.

DESCRIPTION:
The tunable vol_rvio_maxpool_sz is defined as a size_t, which is 64 bits long in a 64-bit binary, while vxtune displays it as a 32-bit unsigned int; so the value shows as zero when it exceeds the maximum unsigned int (4GB).

RESOLUTION:
The issue has been fixed by the code changes.
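
For example, a value above 4GB should now display correctly (a sketch, assuming the standard vxtune query form):

# vxtune vol_rvio_maxpool_sz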

* 4121254 (Tracking ID: 4115078)

SYMPTOM:
A vxconfigd hang was observed when all nodes of the primary site were rebooted.

DESCRIPTION:
The VVR logowner node was not configured on the master. VVR recovery was triggered by the node leaving; since a data volume was in recovery, the VVR logowner sent an ilock request to the master node. The master granted the ilock request and sent a response to the VVR logowner, but due to a bug, the VVR logowner detected a mismatch in the ilock-requesting node ID. The VVR logowner concluded that the ilock grant had failed, mdship IO went into a permanent hang, and vxconfigd got stuck waiting for IO drain.

RESOLUTION:
Code changes have been made to correct the ilock-requesting node ID in the ilock request in such a case.

* 4121681 (Tracking ID: 3995731)

SYMPTOM:
vxconfigd died with below stack info:

#0  in vfprintf () from /lib64/libc.so.6
#1  in vsnprintf () from /lib64/libc.so.6
#2  in msgbody_va ()
#3  in msg () at misc.c:1430
#4  in krecover_mirrorvol () at krecover.c:1244
#5  krecover_dg_objects_20 () at krecover.c:515
#6  krecover_dg_objects () at krecover.c:303
#7  in dg_import_start () at dgimport.c:7721
#8  in dg_reimport () at dgimport.c:3337
#9  in dg_recover_all () at dgimport.c:4885

DESCRIPTION:
The VVR objects were removed before the upgrade; vxconfigd accessed the NULL object and died.

RESOLUTION:
Code changes have been made to access only valid records during recovery.

* 4121763 (Tracking ID: 3995308)

SYMPTOM:
vxtask status hangs due to incorrect values getting copied into the task status information.

DESCRIPTION:
When performing an atomic-copy admin task, VxVM copies the entire request structure, passed as the response of the task status, into a local copy. This creates issues of incorrect copying/overwriting of pointers.

RESOLUTION:
Code changes have been made to fix the problem.

* 4121767 (Tracking ID: 4117568)

SYMPTOM:
Vradmind dumps core with the following stack:

#1  std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810,
    __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)
#2  0x000000000040e02b in ClientMgr::closeStatsSession
#3  0x000000000040d0d7 in ClientMgr::client_ipm_close
#4  0x000000000058328e in IpmHandle::~IpmHandle
#5  0x000000000057c509 in IpmHandle::events
#6  0x0000000000409f5d in main

DESCRIPTION:
After terminating vrstat, the StatSession in vradmind was closed and the corresponding Client object was deleted. While closing the IPM object of vrstat, vradmind tried to access the removed Client, hence the core dump.

RESOLUTION:
Code changes have been made to fix the issue.

* 4121790 (Tracking ID: 4116496)

SYMPTOM:
System panic at dmp_process_errbp+47 with following call stack.
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+47]
dmp_daemons_loop
kthread
ret_from_fork

DESCRIPTION:
When a LUN is detached, bio->bi_disk is set to NULL, which causes a NULL pointer dereference panic when VxDMP calls bio_dev(bio).

RESOLUTION:
Code changes have been made to avoid panic.

* 4121875 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in 3 phases.
At the end of the 2nd phase of recovery, the in-memory and on-disk SRL positions remain incorrect,
and if a logowner change happens during this time, the RLink won't get connected.

RESOLUTION:
Handled in-memory and on-disk SRL positions correctly.

* 4122629 (Tracking ID: 4118809)

SYMPTOM:
System panic at dmp_process_errbp with following call stack.
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+203]
dmp_daemons_loop
kthread
ret_from_fork

DESCRIPTION:
When a LUN is detached, VxDMP may invoke its error handler to process the error buffer. During that period, the OS SCSI device node could have been removed, so VxDMP cannot find the corresponding path node, which introduces a NULL pointer dereference panic.

RESOLUTION:
Code changes have been made to avoid panic.

* 4122632 (Tracking ID: 4121564)

SYMPTOM:
Memory leak for volcred_t could be observed in vxio.

DESCRIPTION:
Memory leak could occur if some private region IOs hang on a disk and there are duplicate entries for the disk in vxio.

RESOLUTION:
Code has been changed to avoid memory leak.

* 4123313 (Tracking ID: 4114927)

SYMPTOM:
After enabling dmp_native_support and rebooting, /boot is not mounted on the VxDMP node.

DESCRIPTION:
When dmp_native_support is enabled, the vxdmproot script is expected to modify the /etc/fstab entry for /boot so that on the next boot, /boot is mounted on the DMP device instead of the OS device. This operation also modifies the SELinux context of the /etc/fstab file, which causes the machine to go into maintenance mode because of a read permission denied error for /etc/fstab on boot.

RESOLUTION:
Code changes have been done to make sure SELinux context is preserved for /etc/fstab file and /boot is mounted on dmp device when dmp_native_support is enabled.

* 4124324 (Tracking ID: 4098582)

SYMPTOM:
Customers found that the log file /var/log/vx/ddl.log breaks one of their security policies, as it keeps being created with 644 permissions.

DESCRIPTION:
File permissions of /var/log/vx/ddl.log should be 640 after log rotation, as per security compliance.

RESOLUTION:
Appropriate code changes are done to handle this scenario.
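
For illustration, a logrotate rule of the following shape recreates the log with the compliant mode after each rotation; the stanza below is a sketch, not the shipped configuration:

/var/log/vx/ddl.log {
    rotate 10
    create 0640 root root
}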

* 4126041 (Tracking ID: 4124223)

SYMPTOM:
A core dump is generated for vxconfigd during test case (TC) execution.

DESCRIPTION:
The TC creates a scenario where zeros are written to the first block of a disk. In such a case, a NULL check is necessary before a certain variable is accessed. This NULL check was missing, which causes the vxconfigd core dump during TC execution.

RESOLUTION:
Necessary NULL checks have been added in the code to avoid the vxconfigd core dump.

* 4127473 (Tracking ID: 4089626)

SYMPTOM:
On RHEL8.5, an IO hang occurs when creating XFS on VxDMP devices or writing files on XFS mounted from VxDMP devices.

DESCRIPTION:
XFS uses the chained BIO feature to send BIOs to VxDMP. As chained BIOs were not supported by VxDMP, the BIOs could get stuck in the SCSI disk driver.

RESOLUTION:
Code changes have been made to support chained BIO.
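
As a rough illustration of what handling chained BIOs involves, the upstream kernel pattern splits an oversized BIO and chains each piece to the remainder so that the original completion fires only when every piece is done. This is a sketch, not the actual VxDMP change; names follow recent kernels (submit_bio_noacct() was generic_make_request() before Linux 5.9):

#include <linux/bio.h>

static void submit_in_pieces(struct bio *bio, unsigned int max_sectors)
{
    while (bio_sectors(bio) > max_sectors) {
        struct bio *split = bio_split(bio, max_sectors, GFP_NOIO, &fs_bio_set);

        bio_chain(split, bio);     /* parent completes only after 'split' does */
        submit_bio_noacct(split);
    }
    submit_bio_noacct(bio);        /* the final (or only) piece */
}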

* 4127475 (Tracking ID: 4114601)

SYMPTOM:
The system panics and reboots.

DESCRIPTION:
RCA:
Start IO on a volume device, pull its disk out of the machine, and hit the below panic on RHEL8.

 dmp_process_errbp
 dmp_process_errbuf.cold.2+0x328/0x429 [vxdmp]
 dmpioctl+0x35/0x60 [vxdmp]
 dmp_flush_errbuf+0x97/0xc0 [vxio]
 voldmp_errbuf_sio_start+0x4a/0xc0 [vxio]
 voliod_iohandle+0x43/0x390 [vxio]
 voliod_loop+0xc2/0x330 [vxio]
 ? voliod_iohandle+0x390/0x390 [vxio]
 kthread+0x10a/0x120
 ? set_kthread_struct+0x50/0x50

As the disk was pulled out of the machine, VxIO hit an IO error and routed that IO to the DMP layer via a kernel-to-kernel IOCTL for error analysis.
The following is the code path for the IO routing:

voldmp_errbuf_sio_start()-->dmp_flush_errbuf()--->dmpioctl()--->dmp_process_errbuf()

dmp_process_errbuf() retrieves the device number of the underlying path (OS device)
and tries to get the bdev (i.e. block_device) pointer from the path device number.
As the path/OS device was removed by the disk pull, Linux returns a fake bdev for the path device number.
This fake bdev has no gendisk associated with it (bdev->bd_disk is NULL).

Setting this NULL bdev->bd_disk on the IO buffer routed from vxio
leads to the panic in dmp_process_errbp.

RESOLUTION:
If bdev->bd_disk is found to be NULL, set the DMP_CONN_FAILURE error on the IO buffer and return DKE_ENXIO to the vxio driver.
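
A sketch of that guard is below; the helper and buffer names are hypothetical, and only DMP_CONN_FAILURE and DKE_ENXIO come from the fix description above:

static int dmp_validate_bdev(struct block_device *bdev, void *iobuf)
{
    if (bdev == NULL || bdev->bd_disk == NULL) {
        /* disk pulled: Linux handed back a bdev with no gendisk behind it */
        dmp_set_buf_error(iobuf, DMP_CONN_FAILURE);  /* hypothetical helper */
        return DKE_ENXIO;                            /* vxio fails the IO cleanly */
    }
    return 0;
}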

* 4128868 (Tracking ID: 4128867)

SYMPTOM:
Vulnerabilities have been reported in third party component, OpenSSL that is used by VxVM.

DESCRIPTION:
The current version of the third-party component OpenSSL used by VxVM has reported security vulnerabilities that need to be addressed.

RESOLUTION:
OpenSSL have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4128885 (Tracking ID: 4115193)

SYMPTOM:
Data corruption on VVR primary with storage loss beyond fault tolerance level in replicated environment.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) environment, any node fault can lead to storage failure. On the VVR primary, when the last mirror of the SRL (Storage Replicator Log) volume faults while application writes are in progress, replication is expected to go into pass-through mode.
This information is persistently recorded in the kernel log (KLOG). In the event of cascaded storage node failures, the KLOG update protocol could not update a quorum number of copies. This mismatch between the on-disk and in-core state of VVR objects led to data loss, because recovery was missed when all storage faults were resolved.

RESOLUTION:
Code changes have been made to handle KLOG update failures in SRL IO failure handling, ensuring the on-disk and in-core configurations stay consistent; subsequent application IO is not allowed, to avoid data corruption.

* 4131718 (Tracking ID: 4088941)

SYMPTOM:
While running the DMP test suite, the setup panics with the below stack:
#7 [] scsi_queue_rq at [scsi_mod]
#8 [] blk_mq_dispatch_rq_list at 
#9 [] __blk_mq_sched_dispatch_requests at 
#10 [] blk_mq_sched_dispatch_requests at 
#11 [] __blk_mq_run_hw_queue at 
#12 [] __blk_mq_delay_run_hw_queue at 
#13 [] blk_mq_sched_insert_request at 
#14 [] blk_execute_rq at 
#15 [] dmp_send_scsi_work_fn at [vxdmp]
#16 [] process_one_work at 
#17 [] worker_thread at ffffffff8b8c1a9d

DESCRIPTION:
The kernel function used to create requests from BIOs does not consider max_segment_size at the time of appending, hence the issue is observed.

RESOLUTION:
Appropriate logic has been added in the code to set the number of physical segments correctly.

* 4134702 (Tracking ID: 4122396)

SYMPTOM:
vxvm-recover.service fails to start on linux platforms.

DESCRIPTION:
When using KillMode=control-group, stopping the vxvm-recover.service results in a failed state.
# systemctl status vxvm-boot.service
● vxvm-boot.service - VERITAS Volume Manager Boot service
     Loaded: loaded (/usr/lib/systemd/system/vxvm-boot.service; enabled; vendor preset: disabled)
     Active: failed (Result: timeout) since Thu 2023-06-15 12:41:47 IST; 52s ago

RESOLUTION:
Required code changes have been done to rectify the problem.
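
For reference, the setting involved lives in the unit file; whether the shipped fix changes KillMode or the stop logic is not spelled out here, so treat this fragment purely as an illustration of the knob:

[Service]
# KillMode=control-group kills every process in the unit's cgroup on stop,
# which was implicated in the failed stop state described above.
KillMode=process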

* 4134887 (Tracking ID: 4020942)

SYMPTOM:
Data corruption/loss on erasure code (EC) volumes post rebalance/disk movement operations while active application IO in progress.

DESCRIPTION:
In the erasure coded (EC) layout, an operation to move a column from one disk to another new disk as part of a data rebalance uses VxFS smartmove. This ensures only the blocks in use by the filesystem are moved. During this operation, a bug in the IO code path of an EC volume undergoing a move caused new IOs spanning the columns under move to be written only to the old column, not the new one. This caused data corruption after the data movement completed; the corruption is detected when the application tries to access the written data and finds it incorrect on the new columns.

RESOLUTION:
The bug in the erasure coded (EC) layout IO path during the rebalance operation is fixed so that IO on the column under move is applied to both the source (old) and destination (new) disks, ensuring data consistency after the move.

* 4134888 (Tracking ID: 4105204)

SYMPTOM:
The node is not able to join the cluster after an iLO "press and hold" scenario in a loop.

DESCRIPTION:
- The node is not able to join the cluster because the newly elected master and surviving slaves are stuck in the previous reconfig.
- This is a quorum loss/DG disable scenario.
- During VCS cleanup of the disabled DG, a dg deport is triggered, which gets stuck.
- Since the dg is anyway disabled due to quorum loss, a cluster reboot is needed to come out of the situation.

- The following vxreconfd stack will be seen on the new master and surviving slaves:
PID: 8135   TASK: ffff9d3e32b05230  CPU: 5   COMMAND: "vxreconfd"
 #0 [ffff9d3e33c43748] __schedule at ffffffff8f1858da
 #1 [ffff9d3e33c437d0] schedule at ffffffff8f185d89
 #2 [ffff9d3e33c437e0] volsync_wait at ffffffffc349415f [vxio]
 #3 [ffff9d3e33c43848] _vol_syncwait at ffffffffc3939d44 [vxio]
 #4 [ffff9d3e33c43870] vol_rwsleep_rdlock_hipri at ffffffffc360e2ab [vxio]
 #5 [ffff9d3e33c43898] volopenter_hipri at ffffffffc361ae45 [vxio]
 #6 [ffff9d3e33c438a8] volcvm_ktrans_openter at ffffffffc33ba1e6 [vxio]
 #7 [ffff9d3e33c438c8] cvm_send_mlocks at ffffffffc33863f8 [vxio]
 #8 [ffff9d3e33c43910] volmvcvm_cluster_reconfig_exit at ffffffffc3407d1d [vxio]
 #9 [ffff9d3e33c43940] volcvm_master at ffffffffc33da1b8 [vxio]
#10 [ffff9d3e33c439c0] volcvm_vxreconfd_thread at ffffffffc33df481 [vxio]
#11 [ffff9d3e33c43ec8] kthread at ffffffff8eac6691
#12 [ffff9d3e33c43f50] ret_from_fork_nospec_begin at ffffffff8f192d24

RESOLUTION:
A cluster reboot is needed to come out of the situation.

* 4134889 (Tracking ID: 4107401)

SYMPTOM:
The SRL goes into passthru mode, which causes the system to run without replication.

DESCRIPTION:
The issue is seen in an FSS environment when the new logowner selected after a reconfig is not contributing any storage. If the SRL and data volume recovery use different plexes, inconsistency is seen while reading the SRL data.

RESOLUTION:
When the SRL is recovered, do a read-writeback so that all plexes are consistent.

* 4135142 (Tracking ID: 4040043)

SYMPTOM:
Warnings in dmesg/kernel logs for violating memory usage/handling protocols.

DESCRIPTION:
Using kmem_cache_alloc() and copying this memory to user space gives warnings such as: "kernel: Bad or missing usercopy whitelist? Kernel
memory exposure attempt detected from SLUB object 'sgpool-128' (offset 0, size 4096)!"

RESOLUTION:
The earlier caches were created using kmem_cache_create(). Linux has since introduced a new API, kmem_cache_create_usercopy(), to declare the part of a cache that may be copied to user space.

Code has been implemented to allocate user-copy-safe memory, avoiding the kernel warnings about memory access violations, which could even be escalated to a PANIC in future kernel versions.
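
For reference, the new API takes the user-copyable region (offset and size) in addition to the usual kmem_cache_create() arguments; the cache name and sizes below are illustrative only:

#include <linux/slab.h>

static struct kmem_cache *io_cache;

static int io_cache_init(void)
{
    /* whitelist the whole object (offset 0, full size) for user copies so
     * copy_to_user()/copy_from_user() no longer trip hardened usercopy */
    io_cache = kmem_cache_create_usercopy("vx_io_cache",
                                          4096,               /* object size */
                                          0,                  /* default align */
                                          SLAB_HWCACHE_ALIGN,
                                          0,                  /* useroffset */
                                          4096,               /* usersize */
                                          NULL);              /* no constructor */
    return io_cache ? 0 : -ENOMEM;
}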

* 4135150 (Tracking ID: 4114867)

SYMPTOM:
The following error messages are seen while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]#
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')

DESCRIPTION:
In /etc/udev/rules.d/41-VxVM-selinux.rules, the double quotation marks around Disabled and disable are the issue.

RESOLUTION:
Code changes have been made to correct the problem.
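
The quoting problem in miniature: udev parses the value of RUN+= up to the next unescaped double quote, so nested double quotes end the value early. The fragments below are illustrative, not the full shipped rule:

# invalid: the inner double quote around Disabled terminates the RUN value
RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" ]; then ... fi'"
# valid: getenforce output contains no spaces, so the inner quotes can go
RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != Disabled ]; then ... fi'"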

* 4135248 (Tracking ID: 4129663)

SYMPTOM:
The vxvm and aslapm RPMs do not have a changelog.

DESCRIPTION:
A changelog in the RPM helps to find missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the vxvm and aslapm RPMs.

* 4136239 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during cluster configuration on a 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on a 24-node physical BOM setup because VxVM transactions were taking longer than the VCS timeouts.

RESOLUTION:
A fix is added to reduce unnecessary transaction time on large node setups.

* 4136240 (Tracking ID: 4040695)

SYMPTOM:
vxencryptd dumps core.

DESCRIPTION:
Due to the static buffer size in the vxencryptd code,
IOs larger than the buffer size were not handled, resulting in a core dump.

RESOLUTION:
BUFFER_SIZE is made dynamic, depending on the current value of the vol_maxio tunable.
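
A minimal sketch of the sizing change, with hypothetical names (the real code reads the tunable from the kernel):

#include <stdlib.h>

/* size the encryption buffer from the current vol_maxio tunable (in
 * 512-byte sectors) instead of a compile-time constant */
static unsigned char *alloc_crypt_buf(size_t vol_maxio_sectors, size_t *bufsz)
{
    *bufsz = vol_maxio_sectors * 512;  /* largest IO the kernel may deliver */
    return malloc(*bufsz);             /* caller fails the IO on NULL */
}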

* 4136316 (Tracking ID: 4098144)

SYMPTOM:
vxtask list shows the parent process without any sub-tasks, and it never progresses for the SRL volume.

DESCRIPTION:
vxtask remains stuck since the parent process doesn't exit. It was seen that all children had completed, but the parent was not able to exit.
(gdb) p active_jobs
$1 = 1
Active jobs are decremented as children complete. Somehow one count remained pending, and it is not known which child exited without decrementing the count. Instrumentation messages are added to capture the issue.

RESOLUTION:
Added code that creates a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully; it remains present when the vxtask parent hang issue is seen.

* 4136482 (Tracking ID: 4132799)

SYMPTOM:
If GLM is not loaded, starting CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - 
VxVM vxclustadm ERROR V-5-1-9743 errno 3

DESCRIPTION:
Only the error number, not the error message, is printed when joining CVM fails.

RESOLUTION:
The code changes have been made to fix the issue.

* 4137008 (Tracking ID: 4133793)

SYMPTOM:
The DCO experiences IO errors while doing a vxsnap restore on VxVM volumes.

DESCRIPTION:
The dirty flag was getting set in the context of an SIO with the VOLSIO_AUXFLAG_NO_FWKLOG flag set. This led to transaction errors while running the vxsnap restore command in a loop on VxVM volumes, causing a transaction abort. As a result, VxVM tried to clean up by removing the newly added BMs, and then tried to access those deleted BMs, which it could not since they had been deleted. This ultimately leads to the DCO IO error.

RESOLUTION:
Skip first write klogging in the context of an IO with flag VOLSIO_AUXFLAG_NO_FWKLOG being set.

* 4140562 (Tracking ID: 4134305)

SYMPTOM:
Illegal memory access is detected when an admin SIO is trying to lock a volume.

DESCRIPTION:
While locking a volume, an admin SIO is converted to an incompatible SIO, on which collecting ilock stats causes memory overrun.

RESOLUTION:
The code changes have been made to fix the problem.

* 4140572 (Tracking ID: 4080124)

SYMPTOM:
Data corruption on mirrored volume in shared-nothing (Flexible Shared Storage) environment during failure of VxVM configuration update.

DESCRIPTION:
In a shared-nothing environment, node failure leads to disk IO failures for the disks connected to the failed host. Error handling in the volume manager layer records the failures of the volume's mirrors/columns in these situations. In cascaded reboot scenarios, the last disks holding the VxVM configuration failed, and hence a config update failure occurred. The VxVM layer is expected to fail IO cluster-wide in such a situation. A bug in data change object (DCO) error handling failed IO on only a few nodes instead of all nodes of the cluster, which allowed additional IOs on the volume after this condition; those IOs were not considered when the mirrors were re-attached. This caused data loss when applications read the data from the recovered mirrors.

RESOLUTION:
Error handling code inside the DCO object is modified to ensure IO is failed across all nodes in the cluster and no further IOs are allowed on the volume from any node, hence preventing corruption in those cases.

* 4140589 (Tracking ID: 4120068)

SYMPTOM:
A standard disk was added to a cloned diskgroup successfully, which is not expected.

DESCRIPTION:
When adding a disk to a disk group, a pre-check is made to avoid ending up with a mixed diskgroup. In a cluster, the local node might fail to use the
latest record for the pre-check, which caused a mixed diskgroup in the cluster and further caused node join failure.

RESOLUTION:
Code changes have been made to use the latest record for the mixed-diskgroup pre-check.

* 4140690 (Tracking ID: 4100547)

SYMPTOM:
A full volume resync (~9 hrs) happens after the last node reboot at the secondary site in an NBFS DR cluster.

DESCRIPTION:
Sub-volumes were getting marked for rwbk SYNC during node reboots or during plex re-attach for layered volumes in VVR environments (both primary and secondary), which is not expected. This caused the long resync time.

RESOLUTION:
Code changes have been made to avoid marking sub-volumes for rwbk sync if they are under VVR config.

* 4140691 (Tracking ID: 4114962)

SYMPTOM:
File system data corruption with mirrored volumes in Flexible Storage Sharing (FSS) environments during beyond fault storage failure situations.

DESCRIPTION:
In FSS environments, the data change object (DCO) provides functionality to track changes on detached mirrors using bitmaps. This bitmap is later used to re-sync the detached mirrors' data (the change delta).
When the DCO volume and the data volume share the same set of devices, failure of the DCO volume's last mirror means IOs on the data volume are going to fail as well. In such cases, instead of invalidating the DCO volume, IO is proactively failed.
This helps protect the DCO so that when the entire storage comes back, an optimal recovery of mirrors can be performed.
When the disk for one of the mirrors of the DCO object became available, a bug in the DCO update incorrectly updated the DCO metadata, which led to valid DCO maps being ignored during the actual volume recovery; hence, newly recovered mirrors of the volume missed blocks of valid application data. This led to corruption when read IO was serviced from the newly recovered mirrors.

RESOLUTION:
The logic of the FMR map update transaction when enabling disks is fixed to resolve the bug. This ensures all valid bitmaps are considered for the recovery of mirrors, avoiding data loss.

* 4140692 (Tracking ID: 4074002)

SYMPTOM:
In an FSS environment, after rebooting 4 nodes in the cluster, VOLD IO hung during node join with the below stack.

[<0>] volsync_wait+0x117/0x190 [vxio]
[<0>] volsiowait+0xb7/0x100 [vxio]
[<0>] voldio+0x7a2/0xad0 [vxio]
[<0>] volconfig_ioctl+0xd9c/0xdf0 [vxio]
[<0>] volsioctl_real+0x38a/0x450 [vxio]
[<0>] vols_ioctl+0x6d/0xa0 [vxspec]
[<0>] vols_unlocked_ioctl+0x1d/0x20 [vxspec]
[<0>] do_vfs_ioctl+0xa4/0x680

DESCRIPTION:
During node join, private region IO is initiated either in the context of remote disk creation or during the dg import triggered as part of the join process. All these IOs are submitted to ioship and put on the wire. If the target node panics/reboots before the IO completes, each IO is returned only when it hits the IO timeout. Those IOs might cause VOLD to hang for a long period, so reconfigurations did not get processed, causing a reconfiguration hang.

RESOLUTION:
Code changes have been made to honor FAIL_IO flag with VOLD IO.

* 4140693 (Tracking ID: 4122061)

SYMPTOM:
A hang was observed after a resync operation; vxconfigd was waiting for the slaves' response.

DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep
10 jiffies and retry the kmsg. In a busy cluster, the retried kmsgs plus the new kmsgs could build up and hit the kmsg flow control before the VVR logowner transaction completed. Once the client refused kmsgs due to flow control, the transaction on the VVR logowner could get stuck because it required kmsg responses from all the slave nodes.

RESOLUTION:
Code changes have been made to increase the kmsg flow control limit and to keep the kmsg receiver from falling asleep, handling the kmsg in a restart function instead.

* 4140694 (Tracking ID: 4128351)

SYMPTOM:
System hung observed when switching log owner.

DESCRIPTION:
VVR mdship SIOs might be throttled due to reaching the max allocation count, etc. These SIOs hold an IO count. When the logowner change kicked in, it quiesced the RVG; the VVR logowner change SIO then waits for the IO count to drop to zero before proceeding. VVR mdship requests from the log client are returned with EAGAIN because the RVG is quiesced, yet the throttled mdship SIOs need to be driven by those upcoming mdship requests, hence the deadlock that caused the system hang.

RESOLUTION:
Code changes have been made to flush the mdship queue before VVR log owner change SIO waiting for IO drain.

* 4140706 (Tracking ID: 4130393)

SYMPTOM:
vxencryptd crashed repeatedly due to segfault.

DESCRIPTION:
Linux can pass large IOs of 2MB size to the VxVM layer; however, vxencryptd expects IOs of at most 1MB from the kernel and pre-allocates only a 1MB buffer for encryption/decryption. This caused vxencryptd to crash when processing larger IOs.

RESOLUTION:
Code changes have been made to allocate enough buffer.

* 4149660 (Tracking ID: 4106254)

SYMPTOM:
Nodes crashed in a shared-nothing (Flexible Shared Storage) environment when a node reboot followed by an NVMe disk failure was executed.

DESCRIPTION:
If congestion functions are registered in a Linux driver, they are called to check whether the next set of IOs can be issued on the devices and whether the device can handle them.
In this case, for a given volume, a vset-related congestion function was getting called, which caused the node to panic.

RESOLUTION:
Congestion functions are deprecated in newer Linux kernel versions; they are required for MD/DM devices, not for VxVM.
So the explicit callback functions have been removed, and congestion control now relies on the standard Linux mechanism.

* 4150574 (Tracking ID: 4077944)

SYMPTOM:
In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.

DESCRIPTION:
In case VVR throttles and unthrottles I/O, the driving of throttled I/O is not done in one of the cases.

RESOLUTION:
Resolved the issue by making sure the application throttled I/Os get driven in all the cases.

* 4150577 (Tracking ID: 4019380)

SYMPTOM:
vxcloudd daemon dumps core with below mentioned stack:
raise ()
abort ()
__libc_message ()
malloc_printerr ()
_int_free ()
CRYPTO_free ()
engine_pkey_meths_free ()
engine_free_util ()
ENGINE_finish ()
ssl_create_cipher_list ()
SSL_CTX_new ()
ossl_connect_step1 ()
ossl_connect_common ()
Curl_ssl_connect_nonblocking ()
https_connecting ()
Curl_http_connect ()
multi_runsingle ()
curl_multi_perform ()
curl_easy_perform ()
curl_send_request ()
curl_request_perform ()
amz_request_perform ()
amz_download_object ()
cloud_read ()
handle_s3_request ()
cloud_io_thread ()
start_thread ()
clone ()

DESCRIPTION:
The vxcloudd daemon is a multi-threaded application. OpenSSL is not completely thread-safe; it requires thread callbacks to be set to keep some of its shared data structures consistent.

RESOLUTION:
Implemented the thread callback functions for curl and OpenSSL.
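
For OpenSSL versions before 1.1.0, the classic setup registers a locking callback and a thread-ID callback; a minimal sketch:

#include <openssl/crypto.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t *ssl_locks;

static void ssl_locking_cb(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&ssl_locks[n]);
    else
        pthread_mutex_unlock(&ssl_locks[n]);
}

static unsigned long ssl_id_cb(void)
{
    return (unsigned long)pthread_self();
}

void ssl_thread_setup(void)
{
    int i, n = CRYPTO_num_locks();

    ssl_locks = malloc(n * sizeof(*ssl_locks));
    for (i = 0; i < n; i++)
        pthread_mutex_init(&ssl_locks[i], NULL);
    CRYPTO_set_id_callback(ssl_id_cb);
    CRYPTO_set_locking_callback(ssl_locking_cb);
}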

* 4150589 (Tracking ID: 4085477)

SYMPTOM:
Operations dependent on settag operation are unresponsive.

DESCRIPTION:
The dm name already had a da with the same name in the DG.

RESOLUTION:
Added a fix to correctly identify disk when da or dm name is provided as an argument.

* 4151832 (Tracking ID: 4005719)

SYMPTOM:
For encrypted volumes, the disk reclaim operation gets hung.

DESCRIPTION:
Reclaim request is not correctly handled for encrypted volumes resulting in a hang.

RESOLUTION:
Skip the encryption IO request path for reclaim requests.

* 4151834 (Tracking ID: 3989340)

SYMPTOM:
Recovery of volume is not triggered post reboot in shared nothing environment

DESCRIPTION:
In shared-nothing environments, i.e. FSS (Flexible Shared Storage), a node reboot makes the storage associated with that node unavailable for IO operations. This results in IO failures on those disks, leading to the mirrors of volumes coming from those nodes getting DETACHED. When the faulted node(s) join the cluster, the mirrors associated with them come back online. The faulted mirrors need to be recovered once storage connectivity is back. Under certain cascaded reboot and disk failure scenarios, this automated recovery of mirrors did not start, leaving volumes with less fault tolerance. The volume recovery operations tag a temporary field on the config of the volumes to avoid multiple parallel commands trying to recover the same volume. These fields did not get cleaned up properly in the cascaded reboot sequence, which left subsequent cluster reconfigs unable to start volume recovery.

RESOLUTION:
Code changes have been made to properly clean up the temporary fields on the volume to ensure subsequent recovery operations are triggered when the node/storage is back online.

* 4151837 (Tracking ID: 4024140)

SYMPTOM:
In VVR environments, in case of disabled volumes, DCM read operation does not complete, resulting in application IO hang.

DESCRIPTION:
If all volumes in the RVG have been disabled, then the read on the DCM does not complete. This results in an IO hang and blocks other operations such as transactions and diskgroup delete.

RESOLUTION:
If all the volumes in the RVG are found disabled, then fail the DCM read.

* 4151838 (Tracking ID: 4046560)

SYMPTOM:
vxconfigd aborts on Solaris if a device's hardware path is longer than 128 characters.

DESCRIPTION:
When vxconfigd starts, it claims the devices that exist on the node and updates the VxVM device
database. During this process, devices which are excluded from VxVM get excluded from the VxVM device database.
To check whether a device is to be excluded, the device's full hardware path is considered. If the hardware path is
longer than 128 characters, vxconfigd aborts, because the code is unable to handle hardware
path strings beyond 128 characters.

RESOLUTION:
Required code changes have been done to handle long hardware path strings.

* 4152117 (Tracking ID: 4142054)

SYMPTOM:
System panicked in the following stack:

[ 9543.195915] Call Trace:
[ 9543.195938]  dump_stack+0x41/0x60
[ 9543.195954]  panic+0xe7/0x2ac
[ 9543.195974]  vol_rv_inactive+0x59/0x790 [vxio]
[ 9543.196578]  vol_rvdcm_flush_done+0x159/0x300 [vxio]
[ 9543.196955]  voliod_iohandle+0x294/0xa40 [vxio]
[ 9543.197327]  ? volted_getpinfo+0x15/0xe0 [vxio]
[ 9543.197694]  voliod_loop+0x4b6/0x950 [vxio]
[ 9543.198003]  ? voliod_kiohandle+0x70/0x70 [vxio]
[ 9543.198364]  kthread+0x10a/0x120
[ 9543.198385]  ? set_kthread_struct+0x40/0x40
[ 9543.198389]  ret_from_fork+0x1f/0x40

DESCRIPTION:
- From the SIO stack, we can see that it is a case of done being called twice.
- Looking at vol_rvdcm_flush_start(), we can see that when a child SIO is created, it is added directly to the global SIO queue.
- This can cause a child SIO to start while vol_rvdcm_flush_start() is still generating other child SIOs.
- This means that when the first child SIO completes, it can find the children count at zero and call done.
- The next child SIO can also independently find the children count at zero and call done again.

RESOLUTION:
The code changes have been done to fix the problem.
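
A common fix for this class of race, sketched below with hypothetical names, is for the parent to hold one extra "bias" reference while it is still creating children, so no child can see the count reach zero early:

#include <stdatomic.h>

struct parent_sio { atomic_int pending; };          /* illustrative only */
void queue_child_sio(struct parent_sio *p, int i);  /* hypothetical */
void parent_done(struct parent_sio *p);             /* hypothetical */

static void start_children(struct parent_sio *p, int nchildren)
{
    atomic_store(&p->pending, 1);                   /* parent's bias reference */
    for (int i = 0; i < nchildren; i++) {
        atomic_fetch_add(&p->pending, 1);
        queue_child_sio(p, i);                      /* child may complete at once */
    }
    if (atomic_fetch_sub(&p->pending, 1) == 1)      /* drop the bias */
        parent_done(p);                             /* children already finished */
}

static void child_complete(struct parent_sio *p)
{
    if (atomic_fetch_sub(&p->pending, 1) == 1)
        parent_done(p);                             /* truly the last one */
}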

* 4152119 (Tracking ID: 4142772)

SYMPTOM:
In case SRL overflow happens frequently, the SRL reaches 99% full but the RLink is unable to get into DCM mode.

DESCRIPTION:
When starting DCM mode, the error mask NM_ERR_DCM_ACTIVE is checked to prevent duplicate triggers. This flag should have been reset after DCM mode was activated by reconnecting the RLink. Due to a race condition, the RLink reconnect may complete before DCM is activated, so the flag cannot be cleared.

RESOLUTION:
The code changes have been made to fix the issue.

* 4152549 (Tracking ID: 4089801)

SYMPTOM:
The cluster went into a hung state after rebooting 6 slave nodes.

DESCRIPTION:
For a shared DG transaction, cluster-wide FMR-related cleanup happens. This issues a DCO metadata read on a DCO volume that has a connectivity issue, resulting in a read failure. This marks certain flags in-core on the DCO, but in a certain part of the transaction the flag is ignored before issuing the TOC read, leading to disk detach transaction failure with the error "DCO experienced IO errors during the operation. Re-run the operation after ensuring that DCO is accessible". This causes subsequent node join failures.

RESOLUTION:
A fix is added to check the flag at the appropriate stages of the transaction.

* 4152550 (Tracking ID: 3972770)

SYMPTOM:
System panic with voldco_get_mapid() function in kernel stack trace during cluster stop/start operation.

DESCRIPTION:
VxVM triggers a config change operation for any kernel-initiated or user-initiated changes through a transaction. When fast mirror resync (FMR) is configured on volumes, transactions like mirror attach/detach require bitmap manipulation. This bitmap manipulation accesses the in-core metadata of FMR objects. During node stop/start operations, the FMR metadata of a volume was getting updated in parallel by multiple kernel threads processing transactions. This led to two threads incorrectly accessing the metadata, causing the panic in the voldco_get_mapid() function.

RESOLUTION:
Code changes are done in FMR transaction code path in kernel to avoid parallel processing of DCO object related information to avoid inconsistent information.

* 4152553 (Tracking ID: 4011582)

SYMPTOM:
In VxVM, the minimum and maximum read/write times for an IO workload are not captured by the vxstat utility.

DESCRIPTION:
Currently, the vxstat utility displays only the average read/write time it takes for the IO workload to complete under the VxVM layer.

RESOLUTION:
Changes are done to existing vxstat utility to capture and display minimum and maximum read/write time.

* 4152554 (Tracking ID: 4058266)

SYMPTOM:
vxstat output was flooded with zero entries when there was no IO activity on objects.

DESCRIPTION:
When there are too many objects in a DG, printing stats using the -i option generates too many entries. If some objects have no IO activity in a given interval, zero-filled entries are still printed; an option to exclude those entries would be helpful. The Linux iostat command has a "-z" option which does exactly this, so a similar option should be implemented in vxstat.

RESOLUTION:
Added an option "-Z" which can be used with another vxstat options to ignore 0 entries.
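
For example, to sample every 5 seconds while suppressing idle objects (the disk group name is hypothetical):

# vxstat -g mydg -i 5 -Z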

* 4152732 (Tracking ID: 4111978)

SYMPTOM:
Replication failed to start due to vxnetd threads not running on secondary site.

DESCRIPTION:
vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck in a dead loop until the max retry count was reached.

RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.

* 4152963 (Tracking ID: 4100037)

SYMPTOM:
Entries printed by vxstat were not displayed properly.

DESCRIPTION:
While displaying the stats using vxstat, entries were not displayed correctly. In a few cases headers were printed multiple times, and in others the headers and stats were out of sync.

RESOLUTION:
Handled the impacted options with code changes.

* 4153768 (Tracking ID: 4120878)

SYMPTOM:
System doesn't come up on taking a reboot after enabling dmp_native_support. System goes into maintenance mode.

DESCRIPTION:
"vxio.ko" is dependent on the new "storageapi.ko" module. "storageapi.ko" was missing from VxDMP_initrd file, which is created when dmp_native_support is enabled. So on reboot, without "storageapi.ko" present, "vxio.ko" fails to load.

RESOLUTION:
Code changes have been made to include "storageapi.ko" in VxDMP_initrd.
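
A quick way to verify the module landed in the regenerated image (lsinitrd ships with dracut; the image path is an assumption):

# lsinitrd /boot/VxDMP_initrd | grep -i storageapi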

* 4154451 (Tracking ID: 4107801)

SYMPTOM:
/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.

DESCRIPTION:
vxpath-links is responsible for creating the hardware paths under /dev/vx/.dmp.
This script gets invoked from /lib/udev/vxpath_links. The "/lib/udev" folder is not present in SLES15SP3;
it is explicitly removed from SLES15SP3 onwards, and Veritas-specific scripts/libraries are expected to be placed in a vendor-specific folder.

RESOLUTION:
Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".

Patch ID: VRTSaslapm 7.4.2.4900

* 4011781 (Tracking ID: 4011780)

SYMPTOM:
EMC PowerStore is a new array, and support needs to be added for EMC PowerStore plus PP.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore plus PP have been done.

* 4103494 (Tracking ID: 4101807)

SYMPTOM:
"vxdisk -e list" does not show "svol" for Hitachi ShadowImage (SI) svol devices.

DESCRIPTION:
VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.

RESOLUTION:
Hitachi ASL modified to correctly read SCSI Byte locations and recognize ShadowImage (SI) svol device.

Patch ID: VRTSvxvm-7.4.2.4300

* 4119951 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl and libxml] used by VxVM have reported security vulnerabilities that need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSvxvm-7.4.2.4100

* 4116348 (Tracking ID: 4112433)

SYMPTOM:
Vulnerabilities have been reported in third party components, [openssl, curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [openssl, curl and libxml] used by VxVM have reported security vulnerabilities that need to be addressed.

RESOLUTION:
[openssl, curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSvxvm-7.4.2.3800

* 4110666 (Tracking ID: 4110665)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

* 4110766 (Tracking ID: 4112033)

SYMPTOM:
A security vulnerability exists in the third-party component libxml2.

DESCRIPTION:
VxVM uses a third-party component named libxml2 in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libxml2 in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-7.4.2.3700

* 4106001 (Tracking ID: 4102501)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-7.4.2.3600

* 4052119 (Tracking ID: 4045871)

SYMPTOM:
vxconfigd crashed at ddl_get_disk_given_path with following stacks:
ddl_get_disk_given_path
ddl_reconfigure_all
ddl_find_devices_in_system
find_devices_in_system
mode_set
setup_mode
startup
main
_start

DESCRIPTION:
Under some situations, duplicate paths can be added in one dmpnode in vxconfigd. If the duplicate paths are removed then the empty path entry can be generated for that dmpnode. Thus, later when vxconfigd accesses the empty path entry, it crashes due to NULL pointer reference.

RESOLUTION:
Code changes have been done to avoid the duplicate paths that are to be added.

* 4086043 (Tracking ID: 4072241)

SYMPTOM:
-bash-5.1# /usr/lib/vxvm/voladm.d/bin/dmpdr
Dynamic Reconfiguration Operations

WARN: Please Do not Run any Device Discovery Operations outside the Tool during Reconfiguration operations
INFO: The logs of current operation can be found at location /var/log/vx/dmpdr_20220420_1042.log
ERROR: Failed to open lock file for /usr/lib/vxvm/voladm.d/bin/dmpdr, No such file or directory. Exit.

Exiting the Current DMP-DR Run of the Tool

DESCRIPTION:
The VxVM log location for Linux changed, which impacted vxdiskadm functionality on Solaris.

RESOLUTION:
Required changes have been done to make the code work across platforms.

* 4090311 (Tracking ID: 4039690)

SYMPTOM:
Changed the logger file size to collect a large amount of logs on the system.

DESCRIPTION:
Doubled the logger file size limit and improved the logger size footprint by using gzip on logger files.

RESOLUTION:
Completed required code changes to do this enhancement.

* 4090411 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery gets hung in case of reconfiguration scenarios in CVR environments leading to vx commands hung on master node.

DESCRIPTION:
As part of RVG recovery, DCM and data volume recovery are performed. However, data volume recovery takes a long time due to wrong IOD handling on Linux platforms.

RESOLUTION:
The IOD handling mechanism is fixed to resolve the RVG recovery hang.

* 4090415 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after failed site is up.

DESCRIPTION:
Autosync and unplanned fallback synchronisation had issues with a mix of cloud and non-cloud volumes in an RVG.
After a cloud volume was found, the rest of the volumes were ignored for synchronisation.

RESOLUTION:
Fixed the condition so that it iterates over all volumes.

* 4090442 (Tracking ID: 4078537)

SYMPTOM:
When connection to s3-fips bucket is made below error messages are observed :
2022-05-31 03:53:26 VxVM ERROR V-5-1-19512 amz_request_perform: PUT request failed, url: https://s3-fips.us-east-2.amazonaws.com/fipstier334f3956297c8040078280000d91ab70a/2.txt_27ffff625eff0600442d000013ffff5b_999_7_1077212296_0_1024_38, errno 11
2022-05-31 03:53:26 VxVM ERROR V-5-1-19333 amz_upload_object: amz_request_perform failed for obj:2.txt_27ffff625eff0600442d000013ffff5b_999_7_1077212296_0_1024_38
2022-05-31 03:53:26 VxVM WARNING V-5-1-19752 Try upload_object(fipstier334f3956297c8040078280000d91ab70a/2.txt_27ffff625eff0600442d000013ffff5b_999_7_1077212296_0_1024_38) again, number of requests attempted: 3.
2022-05-31 03:53:26 VxVM ERROR V-5-1-19358 curl_send_request: curl_easy_perform() failed: Couldn't resolve host name
2022-05-31 03:53:26 VxVM ERROR V-5-1-0 curl_request_perform: Error in curl_request_perform 6
2022-05-31 03:53:26 VxVM ERROR V-5-1-19357 curl_request_perform: curl_send_request failed with error: 6

DESCRIPTION:
For s3-fips bucket endpoints, AWS has made it mandatory to use virtual-hosted-style methods to connect to the s3-fips bucket instead of the path-style method currently used by InfoScale.

RESOLUTION:
Code changes are done to send cloud requests to s3-fips buckets successfully.

* 4090541 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large size data volumes (size > 3TB) with filesystems mounted on them, initial autosync operation takes a lot of time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a filesystem mounted, if the smartmove feature is enabled, the operation does a smartsync by syncing only the regions dirtied by the filesystem instead of the entire volume, which completes faster than the normal case. However, for large volumes (size > 3TB), the smartmove feature does not get enabled even with a filesystem mounted on them, so the autosync operation syncs the entire volume.

This behaviour is due to the smaller DCM plexes allocated for such large volumes: autosync ends up performing a complete volume sync, taking much longer to complete.

RESOLUTION:
Increased the limit of the DCM plex size (loglen) beyond 2MB so that the smartmove feature can be utilised properly.

* 4090599 (Tracking ID: 4080897)

SYMPTOM:
Observed Performance drop on raw VxVM volume in RHEL 8.x compared to RHEL7.X

DESCRIPTION:
There has been a change in the file_operations used for character devices between the RHEL 7.x and RHEL 8.x releases. In RHEL 7.x, the aio_read and aio_write function pointers are implemented, whereas these have changed to read_iter and write_iter respectively in the later release. In the RHEL 8.x changes, the VxVM code called generic_file_write_iter(). The problem is that this function takes an inode lock, and in multi-threaded write operations this semaphore effectively serializes IO submission, leading to the dropped performance.

RESOLUTION:
Use of the __generic_file_write_iter() function resolves the issue, and a vxvm_generic_write_sync() function is implemented which handles the SYNCing part of the write, similar to functions like blkdev_write_iter() and generic_file_write_iter().
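
The shape of such a write_iter handler, mirroring blkdev_write_iter() (a sketch only; the actual VxVM function differs):

#include <linux/fs.h>

static ssize_t vx_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
    ssize_t ret;

    /* unlike generic_file_write_iter(), no inode lock is taken here,
     * so concurrent writers are not serialized */
    ret = __generic_file_write_iter(iocb, from);
    if (ret > 0)
        ret = generic_write_sync(iocb, ret);  /* honor O_SYNC/O_DSYNC */
    return ret;
}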

* 4090604 (Tracking ID: 4044529)

SYMPTOM:
DMP is unable to display PWWN details for some LUNs by "vxdmpadm getportids".

DESCRIPTION:
The udev rules file (/usr/lib/udev/rules.d/63-fc-wwpn-id.rules) on newer RHEL OS generates an additional hardware path for an FC device, so there are 2 hardware paths for the same device. However, the vxpath_links script only considers a single hardware path for an FC device. With 2 hardware paths, vxpath_links may not treat the device as an FC device and thus fails to populate the PWWN-related information.

RESOLUTION:
Code changes have been done to make vxpath_links correctly detect an FC device even when there are multiple hardware paths.

* 4090932 (Tracking ID: 3996634)

SYMPTOM:
A system boot with large number of luns managed by VxDMP take a long time.

DESCRIPTION:
When a system has a large number of LUNs managed by VxDMP that are mounted on a primary partition or formatted with some type of filesystem, during boot the DMP device is removed and udev triggers an event against the OS device, which is then read via the lsblk command. The lsblk command is slow, and if lsblk commands are issued against multiple devices in parallel, they may get stuck, making the system boot take a long time.

RESOLUTION:
Code has been changed to read the OS device via the blkid command rather than the lsblk command.
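
Both commands report the filesystem type of a device; blkid reads the superblock directly and is cheaper per device (the device name is assumed):

# blkid -o value -s TYPE /dev/sda1
# lsblk -no FSTYPE /dev/sda1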

* 4090946 (Tracking ID: 4023297)

SYMPTOM:
Smartmove functionality was not being used after VVR Rlink was paused and resumed during VVR initial sync or DCM resync operation. This was resulting in more data transfer to VVR secondary site than needed.

DESCRIPTION:
The transactions for VVR pause and resume operations were being treated as phases after which smartmove no longer needs to be used. This resulted in smartmove not being used after the resume operation.

RESOLUTION:
Fixed the condition so that smartmove continues to work beyond pause/resume operations.

* 4090960 (Tracking ID: 4087770)

SYMPTOM:
Data corruption post mirror attach operation seen after complete storage fault for DCO volumes.

DESCRIPTION:
The DCO (data change object) tracks delta changes for faulted mirrors. On complete storage loss of the DCO volume's mirrors, the DCO object is marked as BADLOG and becomes unusable for bitmap tracking.
After storage reconnects (such as on node rejoin in FSS environments), the DCO is repaired for subsequent tracking. During this, if VxVM finds any data volume mirrors detached, those are expected to be marked for full resync, since the bitmap in the DCO has no valid information. A bug in the repair DCO operation logic prevented marking mirrors for full resync when the repair was triggered before the data volume was started. This resulted in mirrors getting attached without any data being copied from good mirrors; reads serviced from such mirrors returned stale data, resulting in filesystem corruption and data loss.

RESOLUTION:
Code has been added to ensure repair DCO operation is performed only if volume object is enabled so as to ensure detached mirrors are marked for full-resync appropriately.

* 4090970 (Tracking ID: 4017036)

SYMPTOM:
After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device when Linux is booting with systemd.

DESCRIPTION:
Currently /boot is mounted on top of the OS (Operating System) device. When DMP
Native support is enabled, only VGs (Volume Groups) are migrated from the OS
device to the DMP device; this is the reason /boot is not migrated to the DMP device.
As a result, if the OS device path is not available, the system becomes unbootable
since /boot is not available. It is therefore necessary to mount /boot on the DMP
device to provide multipathing and resiliency.
The current fix only works on configurations with a single boot partition.

RESOLUTION:
Code changes have been done to migrate /boot on top of DMP device when DMP
Native support is enabled and when Linux is booting with systemd.

* 4091248 (Tracking ID: 4040808)

SYMPTOM:
df command hung in clustered environment

DESCRIPTION:
The df command hung in a clustered environment because DRL updates were not completing, causing application IOs to hang.

RESOLUTION:
A fix is added to complete in-core DRL updates and drive the corresponding application IOs.

* 4091588 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken, and it could not be enabled as it might cause problems.

DESCRIPTION:
Batching of updates needs to be done to get the benefit of batching multiple updates and increasing performance.

RESOLUTION:
The design has been simplified: each small update within a batch is now aligned to a 4K size, so by default the whole batch is aligned. This removes the need for bookkeeping around the last update in a batch, reducing the overhead of the related calculations. By padding each update to 4K, the batch of updates is itself 4K-aligned.

* 4091910 (Tracking ID: 4090321)

SYMPTOM:
vxvm-boot service startup failure

DESCRIPTION:
The vxvm-boot service takes a long time to start and gets timed out. With a larger number of devices, device discovery takes more time to finish.

RESOLUTION:
Increased the service timeout so that discovery gets more time to finish.

* 4091911 (Tracking ID: 4090192)

SYMPTOM:
vxvm-boot service startup failure

DESCRIPTION:
The vxvm-boot service takes a long time to start and gets timed out. With a larger number of devices, device discovery takes more time to finish.

RESOLUTION:
Increased the number of device discovery threads to a range of 128 to 256, depending on the CPUs available on the system.

* 4091912 (Tracking ID: 4090234)

SYMPTOM:
vxvm-boot service is taking long time to start and getting timed out in large LUNs setups.

DESCRIPTION:
The device discovery layer and InfiniBand devices (default 120s) take a long time to discover
the devices, which causes the Volume Manager service timeout.
Messages logged:
Jul 28 19:52:52 nb-appliance vxvm-boot[17711]: VxVM general startup...
Jul 28 19:57:51 nb-appliance systemd[1]: vxvm-boot.service: start operation timed out. Terminating.
Jul 28 19:57:51 nb-appliance vxvm-boot[17711]: Terminated
Jul 28 19:57:51 nb-appliance systemd[1]: vxvm-boot.service: Control process exited, code=exited status=100
Jul 28 19:59:22 nb-appliance systemd[1]: vxvm-boot.service: State 'stop-final-sigterm' timed out. Killing.
Jul 28 19:59:23 nb-appliance systemd[1]: vxvm-boot.service: Killing process 209714 (vxconfigd) with signal SIGKILL.
Jul 28 20:00:30 nb-appliance systemd[1]: vxvm-boot.service: Failed with result 'timeout'.
Jul 28 20:00:30 nb-appliance systemd[1]: Failed to start VERITAS Volume Manager Boot service.

RESOLUTION:
Completed the required changes to fix this issue. The vxvm-boot service timeout was suspected to have multiple causes:
1. The OS and the VxVM device discovery layer take a long time to discover devices.
2. vxvm-startup slept 120 seconds whenever InfiniBand devices or controllers were present; it now sleeps 120 seconds only if InfiniBand devices are claimed by an ASL.

* 4091963 (Tracking ID: 4067191)

SYMPTOM:
In a CVR environment, after rebooting a slave node, the master node may panic with the below stack:

Call Trace:
dump_stack+0x66/0x8b
panic+0xfe/0x2d7
volrv_free_mu+0xcf/0xd0 [vxio]
vol_ru_free_update+0x81/0x1c0 [vxio]
volilock_release_internal+0x86/0x440 [vxio]
vol_ru_free_updateq+0x35/0x70 [vxio]
vol_rv_write2_done+0x191/0x510 [vxio]
voliod_iohandle+0xca/0x3d0 [vxio]
wake_up_q+0xa0/0xa0
voliod_iohandle+0x3d0/0x3d0 [vxio]
voliod_loop+0xc3/0x330 [vxio]
kthread+0x10d/0x130
kthread_park+0xa0/0xa0
ret_from_fork+0x22/0x40

DESCRIPTION:
As part of a CVM master switch, an rvg_recovery is triggered. In this step, a race
condition can occur between the VVR objects, due to which an object value
is not updated properly, which can cause a panic.

RESOLUTION:
Code changes are done to handle the race condition between VVR objects.

* 4091989 (Tracking ID: 4090930)

SYMPTOM:
Relocation of a failed data disk of a mirrored volume leads to data corruption.

DESCRIPTION:
Consider an existing volume with another faulted mirror, whose detached mirror is tracked in the data change object (DCO) detach map. At the same time, the VxVM relocation daemon decides to relocate another failed disk of the volume, which is expected to be a full copy of data. Due to a bug in the relocation code, the relocation operation was allowed even when the volume was in the DISABLED state. When the volume became ENABLED, the task to copy the data of the new mirror incorrectly used the detach map instead of a full sync, resulting in data loss for the new mirror.

RESOLUTION:
Code has been changed to block triggering relocation of disks when the top-level volume is not in the ENABLED state.

* 4092002 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.

DESCRIPTION:
Linux BLOCK_EXT_MAJOR (block major 259) is used as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to work around the sd limitation (16 minors per device), allowing more partitions for one sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which can cause vxconfigd to respond sluggishly if there is a large number of LUNs in the disk group.

RESOLUTION:
Code has been changed to remove the needless access of /proc/partitions for LUNs that do not use the extended devt.
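
The costly pattern, reduced to a stand-alone C sketch: scan /proc/partitions line by line and pick out entries under block major 259 (BLOCK_EXT_MAJOR). vxconfigd did the equivalent once per LUN in the disk group:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/partitions", "r");
    char line[256], name[64];
    int major, minor;
    unsigned long long blocks;

    if (fp == NULL)
        return 1;
    while (fgets(line, sizeof(line), fp)) {
        if (sscanf(line, " %d %d %llu %63s", &major, &minor, &blocks, name) != 4)
            continue;            /* skip header and blank lines */
        if (major == 259)        /* BLOCK_EXT_MAJOR: extended partition devt */
            printf("%s\n", name);
    }
    fclose(fp);
    return 0;
}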

* 4099550 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets.
The error seen:

$ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75

Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all the tags were processed at once instead of processing the maximum number of tags for a volume.

RESOLUTION:
Introduced a number-of-tags variable that depends on the cipher method (CBC/GCM), and fixed minor code issues.

* 4102424 (Tracking ID: 4103350)

SYMPTOM:
Following error message is seen on running vradmin -encrypted addsec command.

# vradmin -g enc_dg2 -encrypted addsec enc_dg2_rvg1 123.123.123.123 234.234.234.234
Message from Host 234.234.234.234:
Job for vxvm-encrypt.service failed.
See "systemctl status vxvm-encrypt.service" and "journalctl -xe" for details.
VxVM vxvol ERROR V-5-1-18863 Failed to start vxvm-encrypt service. Error:1.

DESCRIPTION:
"vradmin -encrypted addsec" command fails on primary because vxvm-encrypt.service goes into failed state on secondary site. On secondary master, vxvm-encrypt.service tries to restart 5 times and goes into failed state.

RESOLUTION:
Code changes have been done to prevent vxvm-encrypt.service from going into failed state.

Patch ID: VRTSaslapm 7.4.2.3600

* 4012176 (Tracking ID: 3996206)

SYMPTOM:
Paths from different 3PAR LUNs shown under single Volume Manager device (VxVM disk).

DESCRIPTION:
VxVM uses "LUN serial number" to identify a LUN unique. This "LUN serial number" is fetched by doing SCSI inquiry
on VPD page 0. The "LUN serial number" obtained from VPD page 0 doesn't always guarantee uniqueness of LUN. Due to this
when paths from 2 diffent 3PAR LUNs have same "LUN serial number" DMP adds them under a single device/disk.

RESOLUTION:
Changes have been done in the 3PAR ASL to fetch the "LUN serial number" from VPD page 0x83, which guarantees a unique number for the LUN.

* 4076495 (Tracking ID: 4076320)

SYMPTOM:
Not Able to get ARRAY_VOLUME_ID, old_udid.
# vxdisk -p list 3pardata1_3 |grep -i ARRAY_VOLUME_ID
# vxdisk -p list 3pardata1_3 |grep -i old_udid.

DESCRIPTION:
AVID, reclaim_cmd_nv, extattr_nv, and old_udid_nv are not generated for HPE 3PAR/Primera/Alletra 9000 ALUA arrays.

RESOLUTION:
Code changes have been added to generate AVID, reclaim_cmd_nv, extattr_nv, and old_udid_nv for HPE 3PAR/Primera/Alletra 9000 ALUA arrays.

* 4094664 (Tracking ID: 4093396)

SYMPTOM:
All the PowerStore arrays showed the same SN, so customers could not distinguish them, because the enclosure SN was hardcoded. It should be read from the storage.

DESCRIPTION:
Code changes have been made to update enclosure SN correctly.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

Patch ID: VRTSvxvm-7.4.2.3300

* 4083792 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-7.4.2.3200

* 4011971 (Tracking ID: 3991668)

SYMPTOM:
In a VVR configuration with secondary logging enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.

DESCRIPTION:
It might happen that the VVR secondary node handles updates with larger sequence IDs before the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Due to the updates with the larger sequence IDs than the one for the IBC update, data writes cannot be started, and they get queued. Data loss may occur after the VVR secondary receives an atomic commit and frees the queue. If this situation occurs, the "vradmin verifydata" command reports data inconsistency.

RESOLUTION:
VVR is modified to trigger updates as they are received in order to start data volume writes.

* 4013169 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
High replication-related IO load on the VVR secondary, together with the requirement of maintaining write-order fidelity with limited memory pools, created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing CPU consumption.

RESOLUTION:
Limited the way in which VVR consumes its resources so that a high pending IO load does not result in high CPU consumption.

* 4037288 (Tracking ID: 4034857)

SYMPTOM:
Loading of VxVM modules fails on SLES15 SP2 (kernel 5.3.18-22.2-default).

DESCRIPTION:
With the new kernel (5.3.18-22.2-default), the below mentioned interfaces were deprecated:
1. gettimeofday()
2. struct timeval
3. bio_segments()
4. iov_for_each()
5. The next_rq field in struct request
Also, data corruption was possible with large IOs (>1M) processed by Linux kernel IO splitting.

RESOLUTION:
Code changes are mainly to support kernel 5.3.18 and to provide support for the deprecated functions:
- Removed the dependency on the req->next_rq field in the blk-mq code.
- Bypassed the Linux kernel IO split functions, which are redundant for VxVM IO processing.

* 4048120 (Tracking ID: 4031452)

SYMPTOM:
The add node operation fails with the error "Error found while invoking '' in the new node, and rollback done in both nodes".

DESCRIPTION:
The stack showed a valid address for the pointer ptmap2, but it still generated a core. This suggested a double-free: the issue lies in freeing a pointer that has already been freed.

RESOLUTION:
Added handling for such cases by assigning NULL to pointers wherever they are freed.

* 4051703 (Tracking ID: 4010794)

SYMPTOM:
Veritas Dynamic Multi-Pathing (DMP) caused a system panic in a cluster with the below stack when storage activities were going on:
dmp_start_cvm_local_failover+0x118()
dmp_start_failback+0x398()
dmp_restore_node+0x2e4()
dmp_revive_paths+0x74()
gen_update_status+0x55c()
dmp_update_status+0x14()
gendmpopen+0x4a0()

DESCRIPTION:
The system panic occurred due to an invalid dmpnode current primary path when disks were attached/detached in a cluster. When DMP accessed the current primary path without doing a sanity check, the system panicked due to an invalid pointer.

RESOLUTION:
Code changes have been made to avoid accessing any invalid pointer.

* 4052119 (Tracking ID: 4045871)

SYMPTOM:
vxconfigd crashed at ddl_get_disk_given_path with the following stack:
ddl_get_disk_given_path
ddl_reconfigure_all
ddl_find_devices_in_system
find_devices_in_system
mode_set
setup_mode
startup
main
_start

DESCRIPTION:
Under some situations, duplicate paths can be added to one dmpnode in vxconfigd. If the duplicate paths are then removed, an empty path entry can be generated for that dmpnode. Later, when vxconfigd accesses the empty path entry, it crashes due to a NULL pointer reference.

RESOLUTION:
Code changes have been done to avoid adding duplicate paths.

* 4054311 (Tracking ID: 4040701)

SYMPTOM:
The below warnings are observed while installing the VxVM package:
WARNING: libhbalinux/libhbaapi is not installed. vxesd will not capture SNIA HBA API library events.
mv: '/var/adm/vx/cmdlog' and '/var/log/vx/cmdlog' are the same file
mv: '/var/adm/vx/cmdlog.1' and '/var/log/vx/cmdlog.1' are the same file
mv: '/var/adm/vx/cmdlog.2' and '/var/log/vx/cmdlog.2' are the same file
mv: '/var/adm/vx/cmdlog.3' and '/var/log/vx/cmdlog.3' are the same file
mv: '/var/adm/vx/cmdlog.4' and '/var/log/vx/cmdlog.4' are the same file
mv: '/var/adm/vx/ddl.log' and '/var/log/vx/ddl.log' are the same file
mv: '/var/adm/vx/ddl.log.0' and '/var/log/vx/ddl.log.0' are the same file
mv: '/var/adm/vx/ddl.log.1' and '/var/log/vx/ddl.log.1' are the same file
mv: '/var/adm/vx/ddl.log.10' and '/var/log/vx/ddl.log.10' are the same file
mv: '/var/adm/vx/ddl.log.11' and '/var/log/vx/ddl.log.11' are the same file

DESCRIPTION:
Some warnings are observed while installing the VxVM package.

RESOLUTION:
Appropriate code changes are done to avoid the warnings.

* 4056329 (Tracking ID: 4056156)

SYMPTOM:
VxVM package fails to load on SLES15 SP3

DESCRIPTION:
Changes introduced in SLES15 SP3 impacted the VxVM block IO functionality. This included changes in the block layer structures in the kernel.

RESOLUTION:
Changes have been done to handle the impacted functionalities.

* 4056919 (Tracking ID: 4056917)

SYMPTOM:
In Flexible Storage Sharing (FSS) environments, a disk group import operation with a few disks missing leads to data corruption.

DESCRIPTION:
In FSS environments, import of a disk group with missing disks is not allowed. If the disk with the most recently updated configuration information is not present during import, the import operation was incorrectly incrementing the config TID on the remaining disks before failing the operation. When the missing disk(s) with the latest configuration came back, import was successful. But because of the earlier failed transaction, the import operation incorrectly chose the wrong configuration to import the diskgroup, leading to data corruption.

RESOLUTION:
The code logic in the disk group import operation is modified to ensure the failed/missing disks check happens early, before attempting to perform any on-disk update as part of the import.

* 4058873 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
A new systemd update removed the support for the "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on systems supporting systemd, it
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory"

RESOLUTION:
Added a check in vxnm-vxnetd.sh to validate whether the /var/lock/subsys/ directory is supported.
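A minimal sketch of such a check, assuming the script follows this shape (illustrative, not the shipped vxnm-vxnetd.sh contents):

    # Create the lock file only when /var/lock/subsys is supported
    if [ -d /var/lock/subsys ]; then
        touch /var/lock/subsys/vxnm-vxnetd
    fi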

* 4060839 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop.
This causes the soft lockup.

RESOLUTION:
Relinquish the CPU to schedule other processes; the vol_ioship_sender() thread will restart after some delay.

* 4060962 (Tracking ID: 3915202)

SYMPTOM:
vxconfigd hangs during "vxconfigd -k -r reset".

DESCRIPTION:
The vxconfigd hang is observed when all the file descriptors of the process have been exhausted because of an fd leak.
The fix for this leak had not been integrated, hence the issue.

RESOLUTION:
Appropriate code changes are done to handle the fd leak scenario.

* 4060966 (Tracking ID: 3959716)

SYMPTOM:
The system may panic with sync replication in a VVR configuration, when the VVR RVG is in DCM mode, with the following panic stack:
volsync_wait [vxio]
voliod_iohandle [vxio]
volted_getpinfo [vxio]
voliod_loop [vxio]
voliod_kiohandle [vxio]
kthread

DESCRIPTION:
With sync replication, if the ACK for a data message is delayed from the secondary site, the primary site might incorrectly free the message from the waiting queue at the primary site. Due to this incorrect handling of the message, a system panic may happen.

RESOLUTION:
Required code changes are done to resolve the panic issue.

* 4061004 (Tracking ID: 3993242)

SYMPTOM:
"vxsnap prepare" on a vset might throw the error: "VxVM vxsnap ERROR V-5-1-19171 Cannot perform prepare operation on cloud
volume"

DESCRIPTION:
Some wrong volume-record entries were being fetched for the VSET, due to which the required validations were failing and triggering the issue.

RESOLUTION:
Code changes have been done to resolve the issue.

* 4061036 (Tracking ID: 4031064)

SYMPTOM:
During master switch with replication in progress, cluster wide hang is seen on VVR secondary.

DESCRIPTION:
With an application running on the primary, and replication set up between the VVR primary and secondary, when a master switch operation is attempted on the secondary, it gets hung permanently.

RESOLUTION:
Appropriate code changes are done to handle the scenario of a master switch operation with replication data on the secondary.

* 4061055 (Tracking ID: 3999073)

SYMPTOM:
Data corruption occurred when fast mirror resync (FMR) was enabled and the failed plex of a striped-mirror layout was attached.

DESCRIPTION:
A plex attach operation with FMR tracking uses the contents of the detach map to determine and recover the regions of volumes to be re-synced.

When the DCO region size is higher than the stripe-unit size of the volume, the code logic in the plex attach code path was incorrectly skipping bits in the detach maps. Thus, some of the regions (offset-len) of the volume did not sync with the attached plex, leading to inconsistent mirror contents.

RESOLUTION:
To resolve the data corruption issue, the code has been modified to consider all the bits for a given region (offset-len) in the plex attach code path.

* 4061057 (Tracking ID: 3931583)

SYMPTOM:
Node may panic while uninstalling or upgrading the VxVM package or during reboot.

DESCRIPTION:
Due to a race condition in Volume Manager (VxVM), IO may be queued for processing while the vxio module is being unloaded. This results in VxVM acquiring and accessing a lock which is currently being freed and it may panic the system with the following backtrace:

 #0 [ffff88203da089f0] machine_kexec at ffffffff8105d87b
 #1 [ffff88203da08a50] __crash_kexec at ffffffff811086b2
 #2 [ffff88203da08b20] panic at ffffffff816a8665
 #3 [ffff88203da08ba0] nmi_panic at ffffffff8108ab2f
 #4 [ffff88203da08bb0] watchdog_overflow_callback at ffffffff81133885
 #5 [ffff88203da08bc8] __perf_event_overflow at ffffffff811727d7
 #6 [ffff88203da08c00] perf_event_overflow at ffffffff8117b424
 #7 [ffff88203da08c10] intel_pmu_handle_irq at ffffffff8100a078
 #8 [ffff88203da08e38] perf_event_nmi_handler at ffffffff816b7031
 #9 [ffff88203da08e58] nmi_handle at ffffffff816b88ec
#10 [ffff88203da08eb0] do_nmi at ffffffff816b8b1d
#11 [ffff88203da08ef0] end_repeat_nmi at ffffffff816b7d79
 [exception RIP: _raw_spin_unlock_irqrestore+21]
 RIP: ffffffff816b6575 RSP: ffff88203da03d98 RFLAGS: 00000283
 RAX: 0000000000000283 RBX: ffff882013f63000 RCX: 0000000000000080
 RDX: 0000000000000001 RSI: 0000000000000283 RDI: 0000000000000283
 RBP: ffff88203da03d98 R8: 00000000005d1cec R9: ffff8810e8ec0000
 R10: 0000000000000002 R11: ffff88203da03da8 R12: ffff88103af95560
 R13: ffff882013f630c8 R14: 0000000000000001 R15: 0000000000000ca5
 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#12 [ffff88203da03d98] _raw_spin_unlock_irqrestore at ffffffff816b6575
#13 [ffff88203da03da0] voliod_qsio at ffffffffc0fd14c3 [vxio]
#14 [ffff88203da03dd0] vol_sample_timeout at ffffffffc101d8df [vxio]
#15 [ffff88203da03df0] __voluntimeout at ffffffffc0fd34be [vxio]
#16 [ffff88203da03e18] voltimercallback at ffffffffc0fd3568 [vxio]
...
...

RESOLUTION:
Code changes made to handle the race condition and prevent the access of resources that are being freed.

* 4061298 (Tracking ID: 3982103)

SYMPTOM:
When the memory available in the system is low, an I/O hang is seen.

DESCRIPTION:
In a low-memory situation, the memory allocated to some VVR IOs was not getting released properly, due to which new application IO
could not be served as the VVR memory pool gets fully utilized. This resulted in an IO hang type of situation.

RESOLUTION:
Code changes are done to properly release the memory in the low-memory situation.

* 4061317 (Tracking ID: 3925277)

SYMPTOM:
vxdisk resize corrupts the disk public region and causes the file system mount to fail.

DESCRIPTION:
While resizing a single-path disk with a GPT label, updating the policy data according to the changes made to da/dmrec during the two transactions of the resize is missed; hence the private region IOs are sent to the old private region device, which is on partition 3. This may cause private region data to be written to the public region, resulting in corruption.

RESOLUTION:
Code changes have been made to fix  the problem.

* 4061509 (Tracking ID: 4043337)

SYMPTOM:
The rp_rv.log file uses up space for logging.

DESCRIPTION:
The rp_rv log files need to be removed, and the logger file should have 16 MB rotational log files.

RESOLUTION:
The code changes are implemented to disable logging to the rp_rv.log files.

* 4062461 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, in which they are readable and writable, importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute to show when it is in SPLIT mode. With this new enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with the option `-o usereplicatedev=only`.
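For example, such an import might be invoked as follows (the diskgroup name mydg is hypothetical):
# vxdg -o usereplicatedev=only import mydg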

* 4062577 (Tracking ID: 4062576)

SYMPTOM:
When "hastop -local" is used to stop the cluster, the dg deport command hangs. The below stack trace is observed in the system logs:

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is seen due to changes on the kernel side with respect to handling of the request queue. The existing VxVM code sets the request handling routine (make_request_fn) to vxvm_gen_strategy, and this functionality is impacted.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

* 4062746 (Tracking ID: 3992053)

SYMPTOM:
Data corruption may happen with layered volumes due to some data not being re-synced while attaching a plex. This is caused by
inconsistent data across the plexes after attaching a plex in layered volumes.

DESCRIPTION:
When a plex is detached in a layered volume, the regions which are dirty/modified are tracked in the DCO (Data Change Object) map.
When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached.
There was a defect in the code due to which some particular regions were NOT re-synced when a plex was attached.
This issue happens only when the offset of the sub-volume is NOT aligned with the region size of the DCO (Data Change Object) volume.

RESOLUTION:
The code defect is fixed to correctly copy the data of the dirty regions when the sub-volume offset is NOT aligned with the DCO region size.

* 4062747 (Tracking ID: 3943707)

SYMPTOM:
vxconfigd reconfig hangs when joining a cluster, with the below stack:
volsync_wait [vxio]
_vol_syncwait [vxio]
voldco_await_shared_tocflush [vxio]
volcvm_ktrans_fmr_cleanup [vxio]
vol_ktrans_commit [vxio]
volconfig_ioctl [vxio]
volsioctl_real [vxio]
vols_ioctl [vxspec]
vols_unlocked_ioctl [vxspec]
vfs_ioctl  
do_vfs_ioctl

DESCRIPTION:
There is a race condition that causes the current seqno on the tocsio to not get incremented on one of the nodes. While the master and other slaves move to the next stage with a higher seqno, this slave drops the DISTRIBUTE message. The message is retried from the master and the slave keeps on dropping it, leading to the hang.

RESOLUTION:
Code changes have been made to avoid the race condition.

* 4062751 (Tracking ID: 3989185)

SYMPTOM:
In a Veritas Volume Replicator (VVR) environment, the vxrecover command can hang.

DESCRIPTION:
When vxrecover is triggered after a storage failure, it is possible that the vxrecover operation may hang.
This is because vxrecover does the RVG recovery; as part of this recovery, dummy updates are written to the SRL.
Due to a bug in the code, these updates were written incorrectly to the SRL, which led the flush operation from the SRL to the data volume to hang.

RESOLUTION:
Code changes are done appropriately so that the dummy updates are written correctly to the SRL.

* 4062755 (Tracking ID: 3978453)

SYMPTOM:
Reconfig hangs during master takeover with the below stack:
volsync_wait+0xa7/0xf0 [vxio]
volsiowait+0xcb/0x110 [vxio]
vol_commit_iolock_objects+0xd3/0x270 [vxio]
vol_ktrans_commit+0x5d3/0x8f0 [vxio]
volconfig_ioctl+0x6ba/0x970 [vxio]
volsioctl_real+0x436/0x510 [vxio]
vols_ioctl+0x62/0xb0 [vxspec]
vols_unlocked_ioctl+0x21/0x30 [vxspec]
do_vfs_ioctl+0x3a0/0x5a0

DESCRIPTION:
There is a hang in the dcotoc protocol on a slave, which causes a couple of slave nodes to not respond with LEAVE_DONE to the master; hence the issue.

RESOLUTION:
Code changes have been made to add handling of a transaction overlapping with a shared toc update. The toc update sio flag is passed from the old to the new object during the transaction, to resume recovery if required.

* 4063374 (Tracking ID: 4005121)

SYMPTOM:
Application IOs appear hung or progress slowly until the SRL to DCM flush is finished.

DESCRIPTION:
When the VVR SRL gets full, DCM protection is triggered, and application IOs appear hung until the SRL to DCM flush is finished.

RESOLUTION:
Added a fix that avoids duplicate DCM tracking through vol_rvdcm_log_update(), which reduces the IOPS drop comparatively.

* 4064523 (Tracking ID: 4049082)

SYMPTOM:
An I/O read error is displayed when a remote FSS node is rebooting.

DESCRIPTION:
When a remote FSS node is rebooting, I/O read requests to a mirror volume that are scheduled on the remote disk from the FSS node should be redirected to the remaining plex. However, the current VxVM does not handle this correctly. The retried I/O requests could still be sent to the offline remote disk, which causes the final I/O read failure.

RESOLUTION:
Code changes have been done to schedule the retried read requests on the remaining plex.

* 4066930 (Tracking ID: 3951527)

SYMPTOM:
A data loss issue is seen because of incorrect version check handling done as a part of the SRL 4k update alignment changes in the 7.4 release.

DESCRIPTION:
On the primary, the rv_target_rlink field is always set to NULL, which internally skips checking the 4k version in the VOL_RU_INIT_UPDATE macro. This causes SRL writes to be written in a 4k-aligned manner even though the remote rvg version is <= 7.3.1. This resulted in data loss.

RESOLUTION:
Changes are done to use rv_replicas rather than rv_target_rlink to check the version appropriately for all sites, and to not write SRL IOs in a 4k-aligned manner.
Also, the RVG version is not upgraded as part of diskgroup upgrades if rlinks are in the attached state. The RVG version can be upgraded using the vxrvg upgrade command after detaching the rlinks, once all sites are upgraded.
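For illustration, assuming the standard vxrlink and vxrvg syntax (the diskgroup, rlink, and RVG names below are hypothetical), the sequence would be:
# vxrlink -g vvrdg det rlk_secondary
# vxrvg -g vvrdg upgrade rvg1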

* 4067706 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed, and adding a node with a different node name is attempted, the system turns
unresponsive. When a node leaves the cluster, in-memory information related to the node is not cleared due to a race condition.

RESOLUTION:
Fixed the race condition to clear the in-memory information of the node that leaves the cluster.

* 4067710 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node are upgraded, the size
of an object is interpreted incorrectly. The issue is observed when the number of objects is high, and on
InfoScale 7.3.1 and above.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4067712 (Tracking ID: 3868140)

SYMPTOM:
A VVR primary site node might panic if the rlink disconnects while some data is getting replicated to the secondary, with the below stack:

dump_stack()
panic()
vol_rv_service_message_start()
update_curr()
put_prev_entity()
voliod_iohandle()
voliod_loop()
voliod_iohandle()

DESCRIPTION:
If the rlink disconnects, VVR will clear some handles to the in-progress updates in memory; but if some IOs are still getting acknowledged from the secondary to the primary, then accessing the updates for these IOs might result in a panic on the primary node.

RESOLUTION:
Code fix is implemented to correctly access the primary node updates in order to avoid the panic.

* 4067713 (Tracking ID: 3997531)

SYMPTOM:
VVR replication is not working because vxnetd does not start properly.

DESCRIPTION:
If vxnetd restarts, a race condition blocks the completion of vxnetd start function after the shutdown process is completed.

RESOLUTION:
To avoid the race condition, the vxnetd start and stop functions are synchronized.

* 4067715 (Tracking ID: 4008740)

SYMPTOM:
System panic

DESCRIPTION:
Due to a race condition, code was accessing a freed VVR update, which resulted in a system panic.

RESOLUTION:
Fixed the race condition to avoid the incorrect memory access.

* 4067717 (Tracking ID: 4009151)

SYMPTOM:
Auto-import of diskgroup on system reboot fails with error:
"Disk for diskgroup not found"

DESCRIPTION:
When a diskgroup is auto-imported, VxVM (Veritas Volume Manager) tries to find the disk with the latest configuration copy. During this, the DG import process searches through all the disks. The procedure also tries to find out whether the DG contains clone disks or standard disks. While doing this calculation, the DG import process incorrectly determines that the current DG contains cloned disks instead of standard disks, because of a stale value left over from the previous DG selected. Since VxVM incorrectly decides to import cloned disks instead of standard disks, the import fails with the "Disk for diskgroup not found" error.

RESOLUTION:
Code has been modified to accurately determine whether the DG contains standard or cloned disks and accordingly use those disks for DG import.

* 4067914 (Tracking ID: 4037757)

SYMPTOM:
VVR services always get started on boot up even if VVR is not being used.

DESCRIPTION:
VVR services get auto-started as they are integrated into the system init.d or a similar framework.

RESOLUTION:
Added a tunable to not start the VVR services on boot up.

* 4067915 (Tracking ID: 4059134)

SYMPTOM:
A resync task takes too long on a large raid-5 volume.

DESCRIPTION:
The resync of a raid-5 volume is done in small regions, and a check point is set up for each region. If the raid-5 volume is large, it is divided into a large number of regions for resync, and a check point setup is issued against each region in a loop. In each cycle, the resync utility opens a connection to the vxconfigd daemon, so a client is created in vxconfigd's context for each region. As the number of created clients is large, vxconfigd, which needs to traverse the client list, takes a long time, introducing the performance issue for the resync.

RESOLUTION:
Code changes are made so that only one client is created during the whole resync process, reducing the time spent traversing the client list.

* 4069522 (Tracking ID: 4043276)

SYMPTOM:
If the admin has offlined disks with "vxdisk offline <disk_name>", then vxattachd may bring the disks back to the online state.

DESCRIPTION:
The "offline" state of VxVM disks is not stored persistently, it is recommended to use "vxdisk define" to persistently offline a disk.
For Veritas Netbackup Appliance there is a requirement that vxattachd shouldn't online the disks offlined with "vxdisk offline" operation.
To cater this request we have added tunable based enhancement to vxattachd for Netbackup Appliance use case.
The enhancement are done specifically so that Netback Appliance script can use it.
Following are the tunable details.
If skip_offline tunable is set then it will avoid offlined disk into online state. 
If handle_invalid_disk is set then it will offlined the "online invalid" SAN disks.
If remove_disable_dmpnode is set then it will cleanup stale entries from disk.info file and VxVM layer.
By default these tunables are off, we DONOT recommend InfoScale users to enable these vxattachd tunables.

RESOLUTION:
Code changes are done in vxattachd to cater to the NetBackup Appliance use cases.

* 4069523 (Tracking ID: 4056751)

SYMPTOM:
When importing a disk group containing all cloned disks with the cloneoff option (-c), and some of the disks are in read-only status, the import fails and some of the writable disks are removed from the disk group unexpectedly.

DESCRIPTION:
During a disk group import with the cloneoff option, the DA_VFLAG_ASSOC_DG flag gets set, as a dgid update is necessary. When associating the da record of the read-only disks, updating the private region TOC failed because of a write failure, so the pending associations get aborted for all disks. During the abort, the disks with the DA_VFLAG_ASSOC_DG flag set would be removed from the dg and their config copies taken offline. Hence a kind of private region corruption is seen on the writable disks; actually they were removed from the dg.

RESOLUTION:
The issue has been fixed by failing the import at an early stage if some of the disks are read-only.

* 4069524 (Tracking ID: 4056954)

SYMPTOM:
When performing addsec using the VIPs with SSL enabled, a hang is observed.

DESCRIPTION:
The issue comes when, on the primary side, vradmin tries to create a local socket with endpoints as the local VIP and the interface IP, and ends up calling SSL_accept and getting stuck infinitely.

RESOLUTION:
Appropriate code changes are done: the vvr_islocalip() function identifies whether an IP is local to the node.
Now, by using vvr_islocalip(), SSL_accept() gets called only if the IP is a remote IP.

* 4070099 (Tracking ID: 3159650)

SYMPTOM:
vxtune did not support vol_vvr_use_nat.

DESCRIPTION:
Platform-specific methods were required to set the vol_vvr_use_nat tunable, as support for it in the vxtune command was not present.

RESOLUTION:
Added vol_vvr_use_nat support to the vxtune command.
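For example, assuming the usual vxtune query/set syntax (the value 1 is illustrative), the tunable can now be queried or set as:
# vxtune vol_vvr_use_nat
# vxtune vol_vvr_use_nat 1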

* 4070186 (Tracking ID: 4041822)

SYMPTOM:
In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.

DESCRIPTION:
In case of an SRDF/Metro array setup, if the path connectivity is disrupted, one path may still appear to be in the enabled state until the connectivity is restored. For example: 
# vxdmpadm getsubpaths dmpnodename=emc0_02e0
NAME                      STATE[A]    PATH-TYPE[M]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS  PRIORITY
=======================================================================================================
c7t50000975B00AD80Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AD84Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF40Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF44Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c8t50000975B00AD80Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AD84Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AF40Ad70s2  ENABLED(A)     -           c8        EMC         emc0        -      -
c8t50000975B00AF44Ad70s2  DISABLED       -           c8        EMC         emc0        -      -
Note: This situation does not occur if I/Os are in progress through this DMP node. DMP identifies the disruption in the connectivity and correctly updates the state of the path.

RESOLUTION:
Made code changes to fix this issue. To refresh the state of the paths sooner, run the 'vxdisk scandisks' command.

* 4070253 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a DMP node.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the DMP node, the PGR_FLAG_NOTSUPPORTED flag is set on the node. Further PGR operations check this value and issue PGR commands only if the flag is not set. PGR_FLAG_NOTSUPPORTED remains set even if the hardware is changed so as to support PGR.

RESOLUTION:
A new command, enablepr, is provided in the vxdmppr utility to clear this flag on the specified DMP node.
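For illustration, assuming a DMP node name of the form used elsewhere in this document, the flag could be cleared with:
# vxdmppr enablepr emc0_02e0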

* 4071131 (Tracking ID: 4071605)

SYMPTOM:
A security vulnerability exists in the third-party component libxml2.

DESCRIPTION:
VxVM uses a third-party component named libxml2 in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libxml2 in which the security vulnerability has been addressed.

* 4072874 (Tracking ID: 4046786)

SYMPTOM:
During reboot, nodes go out of the cluster and the FS is not mounted.

DESCRIPTION:
The NVMe ASL can sometimes give a different UDID (the difference from the actual UDID being the absence of space characters) during discovery.

RESOLUTION:
The usage of the NVMe ioctl to fetch data has been removed, and sysfs is used instead.
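As an illustration of the sysfs interface (the controller name nvme0 is hypothetical), NVMe identity data such as the serial number and model can be read directly:
# cat /sys/class/nvme/nvme0/serial
# cat /sys/class/nvme/nvme0/model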

Patch ID: VRTSaslapm 7.4.2.3200

* 4070186 (Tracking ID: 4041822)

SYMPTOM:
In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.

DESCRIPTION:
In case of an SRDF/Metro array setup, if the path connectivity is disrupted, one path may still appear to be in the enabled state until the connectivity is restored. For example:
# vxdmpadm getsubpaths dmpnodename=emc0_02e0
NAME                      STATE[A]    PATH-TYPE[M]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS  PRIORITY
=======================================================================================================
c7t50000975B00AD80Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AD84Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF40Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF44Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c8t50000975B00AD80Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AD84Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AF40Ad70s2  ENABLED(A)     -           c8        EMC         emc0        -      -
c8t50000975B00AF44Ad70s2  DISABLED       -           c8        EMC         emc0        -      -
Note: This situation does not occur if I/Os are in progress through this DMP node. DMP identifies the disruption in the connectivity and correctly updates the state of the path.

RESOLUTION:
Made code changes to fix this issue. To refresh the state of the paths sooner, run the 'vxdisk scandisks' command.

Patch ID: VRTSvxvm-7.4.2.2200

* 4018173 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when a shared DG is imported by specifying both, the "-c" and the "-o noreonline" options, you may encounter the following error: 
VxVM vxdg ERROR V-5-1-10978 Disk group <disk_group_name>: import failed: Disk for disk group not found.

DESCRIPTION:
The "-c" option updates the disk ID and the DG ID on the private region of the disks in the DG that is being imported. Such updated information is not yet seen by the slave because the disks have not been brought online again because the "noreonline" option was specified. As a result, the slave cannot identify the disk(s) based on the updated information sent from the master, which caused the import to fail with the error: Disk for disk group not found.

RESOLUTION:
VxVM is updated so that a shared DG import completes successfully even when the "-c" and the "-o noreonline" options are specified together.
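For example, such a shared import might be invoked as follows (the -s shared-import flag and the diskgroup name mydg are illustrative assumptions):
# vxdg -s -c -o noreonline import mydg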

* 4018178 (Tracking ID: 3906534)

SYMPTOM:
After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should be mounted on the DMP device.

DESCRIPTION:
Typically, /boot is mounted on top of an Operating System (OS) device. When DMP Native support is enabled, only the volume groups (VGs) are migrated from the OS device to the DMP device, but /boot is not migrated. Consequently, if the OS device path is not available, the system becomes unbootable, because /boot is not available. Thus, it is necessary to mount /boot on the DMP device to provide multipathing and resiliency.

RESOLUTION:
The module is updated to migrate /boot on top of a DMP device when DMP Native support is enabled. Note: This fix is available for RHEL 6 only. For other Linux platforms, /boot will still not be mounted on the DMP device.

* 4031342 (Tracking ID: 4031452)

SYMPTOM:
The add node operation fails with the error "Error found while invoking '' in the new node, and rollback done in both nodes".

DESCRIPTION:
The stack showed a valid address for the pointer ptmap2, but it still generated a core. This suggested a double-free: the issue lies in freeing a pointer that has already been freed.

RESOLUTION:
Added handling for such cases by assigning NULL to pointers wherever they are freed.

* 4037283 (Tracking ID: 4021301)

SYMPTOM:
A data corruption issue happened with big-size IOs processed by the Linux kernel IO split on RHEL8.

DESCRIPTION:
On RHEL8, as of Linux kernel 3.13, changes were introduced in the Linux kernel block layer: a new item of the bio iterator structure is used to represent the start offset of a bio or bio vectors after the IO is processed by the Linux kernel IO split functions. Also, recent versions of vxfs can generate bios with a larger size than the size limitation defined within the Linux kernel block layer and VxVM, so the IO from vxfs could be split by the Linux kernel. For such split IOs, VxVM did not take the new item of the bio iterator into account while processing them, which caused the data to be written to the wrong position of the volume/disk. Hence, data corruption.

RESOLUTION:
Code changes have been made to bypass the Linux kernel IO split functions, which seems redundant for VxVM IO processing.

* 4042038 (Tracking ID: 4040897)

SYMPTOM:
HPE MSA 2060 is a new array, and support for claiming it needs to be added.

DESCRIPTION:
HPE MSA 2060 is a new array and the current ASL does not support it, so it will not be claimed with the current ASL. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support HPE MSA 2060 array have been done.

* 4046906 (Tracking ID: 3956607)

SYMPTOM:
When removing a VxVM disk using the vxdg-rmdisk operation, the following error occurs while requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: The reclamation operation is irreversible. However, a core dump occurs when vxdisk-reclaim is executed.

DESCRIPTION:
This issue occurs due to a memory allocation failure in the disk-reclaim code, which fails to be detected and causes an invalid address to be referenced. Consequently, a core dump occurs.

RESOLUTION:
The disk-reclaim code is updated to handle memory allocation failures properly.

* 4046907 (Tracking ID: 4041001)

SYMPTOM:
When some nodes of a system are rebooted, they cannot join back because the required disk attach transactions fail.

DESCRIPTION:
In a VxVM environment, when some nodes are rebooted, some plexes of the volume are detached. It may happen that all the plexes of a volume are disabled. In this case, if all the plexes of some DCO volume become inaccessible, that DCO volume state does not get marked as BADLOG. Consequently, transactions fail with the following error:
VxVM ERROR V-5-1-10128  DCO experienced IO errors during the operation. Re-run the operation after ensuring that DCO is accessible.
The system hangs and the nodes cannot join, because the transactions fail.

RESOLUTION:
VxVM is updated to address this issue. When all the plexes of a DCO become inaccessible during I/O load, the DCO state is marked as BADLOG.

* 4046908 (Tracking ID: 4038865)

SYMPTOM:
System panics in the vxdmp module with the following calltrace in the IRQ stack.
native_queued_spin_lock_slowpath
queued_spin_lock_slowpath
_raw_spin_lock_irqsave7
dmp_get_shared_lock
gendmpiodone
dmpiodone
bio_endio
blk_update_request
scsi_end_request
scsi_io_completion
scsi_finish_command
scsi_softirq_done
blk_done_softirq
__do_softirq
call_softirq
do_softirq
irq_exit
do_IRQ
 <IRQ stack>

DESCRIPTION:
A deadlock issue can happen between inode_hash_lock and the DMP shared lock, when one process holding inode_hash_lock acquires the DMP shared lock in IRQ context, while in the meantime another process holding the DMP shared lock may acquire inode_hash_lock.

RESOLUTION:
Code changes have been done to avoid the deadlock issue.

* 4047592 (Tracking ID: 3992040)

SYMPTOM:
VxFS testing of CFS stress hits a kernel panic, f:vx_dio_bio_done:2.

DESCRIPTION:
In the RHEL8.0/SLES15 kernel code, the value in bi_status isn't a standard error code; there is a completely separate set of values that are all small positive integers (for example, BLK_STS_OK and BLK_STS_IOERROR), while the actual errors sent by VM are different. Hence VM should send a proper bi_status to FS with the newer kernel. This fix avoids further kernel crashes.

RESOLUTION:
Code changes are done to have a map for bi_status and bi_error conversion (as is done in the Linux kernel code - blk-core.c).

* 4047695 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a DMP node.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the DMP node, the PGR_FLAG_NOTSUPPORTED flag is set on the node. Further PGR operations check this value and issue PGR commands only if the flag is not set. PGR_FLAG_NOTSUPPORTED remains set even if the hardware is changed so as to support PGR.

RESOLUTION:
A new command, enablepr, is provided in the vxdmppr utility to clear this flag on the specified DMP node.

* 4047722 (Tracking ID: 4023390)

SYMPTOM:
Vxconfigd crashes as a disk contains an invalid privoffset (160), which is smaller than the minimum required offset (VTOC 265, GPT 208).

DESCRIPTION:
There may be disk label corruption or stale information resident in the disk header, which caused an unexpected label to be written.

RESOLUTION:
Added an assert when updating the CDS label to ensure a valid privoffset is written to the disk header.

* 4049268 (Tracking ID: 4044583)

SYMPTOM:
A system goes into the maintenance mode when DMP is enabled to manage native devices.

DESCRIPTION:
The "vxdmpadm gettune dmp_native_support=on" command is used to enable DMP to manage native devices. After you change the value of the dmp_native_support tunable, you need to reboot the system needs for the changes to take effect. However, the system goes into the maintenance mode after it reboots. The issue occurs due to the copying of the local liblicmgr72.so file instead of the original one while creating the vx_initrd image.

RESOLUTION:
Code changes have been made to copy the correct liblicmgr72.so file. The system successfully reboots without going into maintenance mode.
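For reference, DMP native support is enabled with settune, and a reboot is required for the change to take effect:
# vxdmpadm settune dmp_native_support=on
# reboot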

Patch ID: VRTSaslapm 7.4.2.2200

* 4047510 (Tracking ID: 4042420)

SYMPTOM:
APM modules fail to load as the hard link does not get created.

DESCRIPTION:
A symlink needs to be created for the APM to get created in a different partition.

RESOLUTION:
The code changes create a symlink instead of a hard link, in order to facilitate the link creation when the source and destination are on different partitions, and also to honor the general script flow.

Patch ID: VRTSvxvm-7.4.2.1500

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
Vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. In case vxloggerd has ended for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, it can happen that the vxiod with ID 128 starts replication when the RVG is in DCM mode; this vxiod then waits for the filesystem's response on whether a given region is used by the filesystem or not. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, the MDSHIP IO always gets queued in the vxiod with ID 128; hence a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO in vxiod whose ID is bigger than 127.

* 4020438 (Tracking ID: 4020046)

SYMPTOM:
The following IO errors reported on VxVM sub-disks result in the DRL log being detached, without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
DRL plexes detached because an atomic write flag (BIT_ATOMIC) was set on the BIO unexpectedly. The BIT_ATOMIC flag gets set on a bio only if the VOLSIO_BASEFLAG_ATOMIC_WRITE flag is set on the SUBDISK SIO and its parent MVWRITE SIO's sio_base_flags. When generating the MVWRITE SIO, its sio_base_flags were copied from a gio structure; because the gio structure memory isn't initialized, it may contain garbage values, hence the issue.

RESOLUTION:
Code changes have been made to fix the issue.

* 4021238 (Tracking ID: 4008075)

SYMPTOM:
Observed with the ASL changes for NVMe. This issue is observed in the reboot scenario: on every reboot the machine was hitting a panic, and this was happening in a loop.

DESCRIPTION:
The panic was hit for split bios. The root cause is that RHEL8 introduced a new field named __bi_remaining,
which maintains the count of chained bios; for every endio, __bi_remaining gets atomically decreased in the bio_endio() function.
While decreasing __bi_remaining, the OS checks that __bi_remaining 'should not be <= 0'; in our case __bi_remaining was always 0 and we were hitting the OS
BUG_ON.

RESOLUTION:
For SCSI devices the maxsize is 4194304:
[   26.919333] DMP_BIO_SIZE(orig_bio) : 16384, maxsize: 4194304
[   26.920063] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 4194304

whereas for NVMe devices the maxsize is 131072:
[  153.297387] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072
[  153.298057] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072

* 4021240 (Tracking ID: 4010612)

SYMPTOM:
Multiple disks get the same name after running:
$ vxddladm set namingscheme=ebn lowercase=no
This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure like nvme0, nvme1, and so on; every nvme/ssd disk name would be
hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.

DESCRIPTION:
With "vxddladm set namingscheme=ebn lowercase=no", every NVMe/SSD disk has a separate enclosure (nvme0, nvme1, and so on),
so every nvme/ssd disk name should be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
e.g.
smicro125_nvme0_0 <--- disk1
smicro125_nvme1_0 <--- disk2

For lowercase=no, the current code suppresses the suffix digit of the enclosure name, and hence multiple disks get the same name. This shows up as udid_mismatch,
because the udid in the private region does not match the one in DDL; the DDL database shows wrong info because multiple disks get the same name.

smicro125_nvme_0 <--- disk1   <<<<<<<-----here the suffix digit of the nvme enclosure is suppressed
smicro125_nvme_0 <--- disk2

RESOLUTION:
Append the suffix integer while making the da_name.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of collecting the IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if some page fault happens, then the thread would relinquish the CPU and provide the same to some other thread. If the thread which gets scheduled on the CPU requests the same spinlock which vxstat thread had acquired, then this results in a hard lockup situation.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4021366 (Tracking ID: 4008741)

SYMPTOM:
VxVM device files appear to have the device_t SELinux label.

DESCRIPTION:
If an unauthorized or modified device is allowed to exist on the system, there is the possibility the system may perform unintended or unauthorized operations.
eg: ls -LZ
...
...
/dev/vx/dsk/testdg/vol1   system_u:object_r:device_t:s0
/dev/vx/dmpconfig         system_u:object_r:device_t:s0
/dev/vx/vxcloud           system_u:object_r:device_t:s0

RESOLUTION:
Code changes were made to change the device labels to misc_device_t and fixed_disk_device_t.

* 4023095 (Tracking ID: 4007920)

SYMPTOM:
The vol_snap_fail_source tunable is set, yet the largest and oldest snapshot is automatically deleted when the cache object becomes full.

DESCRIPTION:
If the vol_snap_fail_source tunable is set, then the oldest snapshot should not be deleted when the cache object is full. Flex requires these snapshots for rollback.

RESOLUTION:
Added a fix to stop automatic snapshot deletion in vxcached.
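Assuming vol_snap_fail_source is managed through the vxtune interface used for other VxVM tunables in this document (the exact interface may differ by platform), it would be set as:
# vxtune vol_snap_fail_source 1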

Patch ID: VRTSaslapm 7.4.2.1500

* 4021235 (Tracking ID: 4010667)

SYMPTOM:
NVMe devices are not detected by Veritas Volume Manager(VxVM) on RHEL 8.

DESCRIPTION:
VxVM uses SCSI inquiry interface to detect the storage devices. From RHEL8 onwards SCSI inquiry interface is not available for NVMe devices.
Due to this VxVM fails to detect the NVMe devices.

RESOLUTION:
Code changes have been done to use NVMe IOCTL interface to detect the NVMe devices.

* 4021946 (Tracking ID: 4017905)

SYMPTOM:
VSPEx is a new array that needs to be supported; the current ASL is not able to claim it.

DESCRIPTION:
VSPEx is a new array and the current ASL is not able to claim it, so the code has been modified to support this array.

RESOLUTION:
Modified the ASL code to support the claim for the VSPEx array.

* 4022942 (Tracking ID: 4017656)

SYMPTOM:
XP8 is a new array, and support for claiming it needs to be added.

DESCRIPTION:
XP8 is a new array and the current ASL does not support it, so it will not be claimed with the current ASL. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support XP8 array have been done.

Patch ID: VRTScavf-7.4.2.4900

* 4113616 (Tracking ID: 4027640)

SYMPTOM:
Global symbol "$py" requires explicit package name (did you forget to declare "my $py"?) in VCS logs agent/engine logs.

DESCRIPTION:
The issue is the result of the 'py' variable not being declared.

RESOLUTION:
Code changes to declare the variable before using it.

* 4133718 (Tracking ID: 4079285)

SYMPTOM:
The CVMVolDg resource takes many minutes to come online with CPS fencing.

DESCRIPTION:
When fencing is configured as CP server based and not disk-based SCSI3PR, diskgroups are still imported with SCSI3 reservations, which causes SCSI3 PR errors during import, and it takes a long time due to retries.

RESOLUTION:
Code changes have been done to import the diskgroup without SCSI3 reservations when SCSI3 PR is disabled.

Patch ID: VRTScavf-7.4.2.3700

* 4092597 (Tracking ID: 4092596)

SYMPTOM:
Support unmounting all the filesystems when the cfsmount agent goes offline on Solaris and Linux.

DESCRIPTION:
All the filesystems need to be unmounted when the cfsmount agent goes offline on Solaris and Linux.

RESOLUTION:
Removed the attribute changes and modified the Linux and Solaris clean_ref scripts to handle auto mounts.

Patch ID: VRTScavf-7.4.2.1400

* 4054857 (Tracking ID: 4035066)

SYMPTOM:
For better debugging of monitor and EP timeout events, improved instrumentation in the code.

DESCRIPTION:
Instrumentation in the code has been improved to enable better debugging of monitor and EP timeout events.

RESOLUTION:
N/A

Patch ID: VRTSfsadv-7.4.2.4900

* 4130256 (Tracking ID: 4130255)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party component used by VxFS.

DESCRIPTION:
VxFS uses the OpenSSL third-party component in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1v) of this third-party component in which the security vulnerabilities have been addressed.

Patch ID: VRTSfsadv-7.4.2.3900

* 4105309 (Tracking ID: 4103002)

SYMPTOM:
Replication failures were observed in internal testing.

DESCRIPTION:
Replication-related code changes were done in the VxFS repository to fix the replication failures. The replication binaries are part of VRTSfsadv.

RESOLUTION:
Compiled VRTSfsadv with VxFS changes.

Patch ID: VRTSfsadv-7.4.2.3600

* 4088025 (Tracking ID: 4088024)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party component used by VxFS.

DESCRIPTION:
VxFS uses the OpenSSL third-party component in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1q) of this third-party component in which the security vulnerabilities have been addressed. To accommodate the changes, vxfs_solutions is added with libboost_system entries in the Makefile [dedup/pdde/sdk/common/Makefile].

Patch ID: VRTSfsadv-7.4.2.2600

* 4070366 (Tracking ID: 4070367)

SYMPTOM:
After Upgrade "/var/VRTS/fsadv" directory is getting deleted.

DESCRIPTION:
"/var/VRTS/fsadv" directory is required to keep the logs which is getting deleted after we are upgrading VRTSfsadv package.

RESOLUTION:
Made necessary code changes to fix the issue.

* 4071090 (Tracking ID: 4040281)

SYMPTOM:
Security vulnerabilities exist in some third-party components used by VxFS.

DESCRIPTION:
VxFS uses several third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use newer versions of the third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSveki-7.4.2.4900

* 4125096 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failure due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for different components like GLM, GMS, ORAODM, unixvm, and VxFS, the veki build area creation was failing because the storageapi changes
were not taken care of in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creation of the storageapi build area, storageapi packaging changes via veki, and storageapi build via veki from the Veki makefiles.
This helps to package storageapi along with veki and resolves all interdependencies.

* 4135057 (Tracking ID: 4130815)

SYMPTOM:
VEKI rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to VEKI rpm.

Patch ID: VRTSveki-7.4.2.3700

* 4105334 (Tracking ID: 4105335)

SYMPTOM:
Failed to load VEKI module

DESCRIPTION:
Need recompilation of VEKI module

RESOLUTION:
Recompiled the VEKI module

Patch ID: VRTSveki-7.4.2.2600

* 4057596 (Tracking ID: 4055072)

SYMPTOM:
Upgrading the VRTSveki package using yum reports the following error: "Starting veki /etc/vx/veki: line 51: [: too many arguments"

DESCRIPTION:
While upgrading the VRTSveki package, the presence of multiple module directories might result in the upgrade script printing an error message.

RESOLUTION:
Code is modified to check for specific module directory related to current kernel version in VRTSveki upgrade script.

Patch ID: VRTSgms-7.4.2.4900

* 4125931 (Tracking ID: 4125932)

SYMPTOM:
A "no symbol version" warning for ki_get_boot appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
The GMS module was built without the correct kbuild symbol versions, which results in the "no symbol version" warning for ki_get_boot in dmesg.

RESOLUTION:
Updated the code to build gms with correct kbuild symbols.

* 4135270 (Tracking ID: 4129707)

SYMPTOM:
GMS rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GMS rpm.

Patch ID: VRTSgms-7.4.2.3800

* 4105296 (Tracking ID: 4105297)

SYMPTOM:
GMS module failed to load on SLES12

DESCRIPTION:
Need recompilation of GMS module

RESOLUTION:
Recompiled the GMS module

Patch ID: VRTSgms-7.4.2.2600

* 4057424 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results in emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock.
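A minimal sketch of this serialization technique, assuming flock(1) is available (the lock file path is hypothetical):
# flock /var/lock/vx-depmod.lock depmod -a
Concurrent callers block on the lock, so depmod never runs in parallel.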

* 4061646 (Tracking ID: 4061644)

SYMPTOM:
GMS failed to start after a kernel upgrade, with the following error:
"modprobe FATAL Module vxgms not found in directory /lib/modules/>"

DESCRIPTION:
Due to a race, the GMS module (vxgms.ko) is not copied to the latest kernel location inside the /lib/modules directory during reboot.

RESOLUTION:
Added code to copy it to the directory.

Patch ID: VRTSgms-7.4.2.1200

* 4023553 (Tracking ID: 4023552)

SYMPTOM:
The VRTSgms module is not able to load on Linux.

DESCRIPTION:
Recompilation of VRTSgms is needed due to recent changes in VRTSgms,
because of which some symbols are not being resolved.

RESOLUTION:
Recompiled the VRTSgms to load vxgms module.

Patch ID: VRTSglm-7.4.2.4900

* 4098108 (Tracking ID: 4087259)

SYMPTOM:
While upgrading the CFS protocol from 90 to 135 (latest), the system may panic with the following stack trace.

schedule()
vxg_svar_sleep_unlock() 
vxg_create_kthread()
vxg_startthread()
vxg_thread_create()
vxg_leave_local_scopes()
vxg_recv_restart_reply()
vxg_recovery_helper()
vxg_kthread_init()
kthread()

DESCRIPTION:
In GLM (Group Lock Manager), while upgrading the GLM protocol version from 90 to 135 (latest), GLM needs to process structures for the local scope functionality. GLM
creates child threads to do this processing. The child threads were created while holding a spin lock, which causes this issue.

RESOLUTION:
Code is changed to create child threads for processing local scope structures after releasing spin lock.

* 4134673 (Tracking ID: 4126298)

SYMPTOM:
The system may panic due to an "unable to handle kernel paging request" error,
and memory corruption could happen.

DESCRIPTION:
The panic may occur due to a race between a spurious wakeup and the normal
wakeup of a thread waiting for a glm lock grant. Due to the race,
the spurious wakeup may have already freed memory, and then the
normal wakeup thread might pass that freed and reused memory
to the wake_up function, causing memory corruption and panic.

RESOLUTION:
Fixed the race between the spurious wakeup and normal wakeup threads
by making wake_up lock protected.

* 4135184 (Tracking ID: 4129714)

SYMPTOM:
GLM rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GLM rpm.

Patch ID: VRTSglm-7.4.2.3800

* 4105278 (Tracking ID: 4105277)

SYMPTOM:
GLM module failed to load on Sles12

DESCRIPTION:
Need recompilation of GLM module

RESOLUTION:
Recompiled the GLM module

Patch ID: VRTSglm-7.4.2.1500

* 4014719 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a feature we support.

DESCRIPTION:
Need to include new option "-h" in glmdump in man page for using hacli utility for communicating across the nodes in the cluster.

RESOLUTION:
Added the details about the feature supported by glmdump in man page

Patch ID: VRTSodm-7.4.2.4900

* 4093943 (Tracking ID: 4076185)

SYMPTOM:
VxODM goes into maintenance mode after reboot, if Solaris local zones are configured.

DESCRIPTION:
Solaris changed their booting sequence in SOL11.4 SRU 42. When upgrading to SOL11.4 SRU 42 or greater, after reboot, VxODM in the global zone goes into maintenance mode if the Solaris local zones are configured on the system.

RESOLUTION:
Removed VCS dependency from VxODM and added zones service dependency on VxODM.

* 4126254 (Tracking ID: 4126256)

SYMPTOM:
A "no symbol version" warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building ODM module, which results in no symbol version warning for "ki_get_boot" symbol of VEKI.

RESOLUTION:
Modified the code to make sure that modpost picks all the dependent symbols while building ODM module.

* 4135325 (Tracking ID: 4129837)

SYMPTOM:
ODM rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to ODM rpm.

* 4149888 (Tracking ID: 4118154)

SYMPTOM:
The system may panic in simple_unlock_mem() when errcheckdetail is enabled, with a stack trace as follows.
		simple_unlock_mem()
		odm_io_waitreq()
		odm_io_waitreqs()
		odm_request_wait()
		odm_io()
		odm_io_stat()
		vxodmioctl()

DESCRIPTION:
odm_io_waitreq() takes a lock and waits for the I/O request to complete, but odm_iodone() interrupts it to complete the I/O and releases the lock that odm_io_waitreq() took. When odm_io_waitreq() later tries to unlock the lock, the system panics because the lock has already been released.

RESOLUTION:
Code has been modified to resolve this issue.
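
The incident text does not spell out the fix, but the underlying bug is a violation of lock ownership: the completion path released a lock that the waiting path still believed it held. Below is a hedged userspace C sketch of the discipline that avoids such a double unlock (the names are illustrative, not the ODM source); the completion path signals instead of unlocking on the waiter's behalf, and each lock is released only by the thread that acquired it:

    #include <pthread.h>
    #include <stdbool.h>

    struct odm_req {
        pthread_mutex_t lock;     /* init with PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t  done_cv;  /* init with PTHREAD_COND_INITIALIZER */
        bool            done;
    };

    /* Completion path: signals completion; never releases a lock it
     * did not take itself. */
    static void req_done(struct odm_req *r)
    {
        pthread_mutex_lock(&r->lock);
        r->done = true;
        pthread_cond_signal(&r->done_cv);
        pthread_mutex_unlock(&r->lock);
    }

    /* Wait path: acquires and releases its own lock exactly once, so
     * a double unlock cannot occur. */
    static void req_wait(struct odm_req *r)
    {
        pthread_mutex_lock(&r->lock);
        while (!r->done)
            pthread_cond_wait(&r->done_cv, &r->lock);
        pthread_mutex_unlock(&r->lock);
    }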

Patch ID: VRTSodm-7.4.2.4200

* 4112305 (Tracking ID: 4112304)

SYMPTOM:
The VRTSodm driver will not load with the VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled against the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm against the new VRTSvxfs.

Patch ID: VRTSodm-7.4.2.3900

* 4105305 (Tracking ID: 4105306)

SYMPTOM:
The VRTSodm driver will not load with the VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled against the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm against the new VRTSvxfs.

Patch ID: VRTSodm-7.4.2.3500

* 4087157 (Tracking ID: 4087155)

SYMPTOM:
The VRTSodm driver will not load with the VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled against the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm against the new VRTSvxfs.

Patch ID: VRTSodm-7.4.2.3400

* 4080777 (Tracking ID: 4080776)

SYMPTOM:
The VRTSodm driver will not load with the 7.4.1.3400 VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled against the latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm against the new VRTSvxfs.

Patch ID: VRTSodm-7.4.2.2600

* 4057429 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system causes it to enter emergency mode.

DESCRIPTION:
Module dependency files get corrupted by parallel invocations of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock, and corrected the vxgms dependency in the ODM service file.
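
A minimal C sketch of file-lock serialization (illustrative; the lock-file path and helper name are assumptions, not the shipped code). Concurrent callers block on flock() until the running depmod finishes, so the dependency files are never written by two depmod processes at once:

    #include <sys/file.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int run_depmod_serialized(void)
    {
        /* Hypothetical lock-file path. */
        int fd = open("/var/lock/vx_depmod.lock", O_CREAT | O_RDWR, 0644);
        if (fd < 0)
            return -1;
        flock(fd, LOCK_EX);             /* blocks until we own the lock */
        int rc = system("/sbin/depmod -a");
        flock(fd, LOCK_UN);
        close(fd);
        return rc;
    }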

* 4060584 (Tracking ID: 3868609)

SYMPTOM:
While applying Oracle redo logs, a significant increase in CPU usage by the vxfs thread is observed.

DESCRIPTION:
The kernel's memory management was analyzed to avoid memory deadlocks and to track exiting threads with outstanding ODM requests. While the Oracle threads are being rescheduled, they hold mmap_sem; the FDD threads keep waiting for mmap_sem to be released, which causes the contention and the high CPU usage.

RESOLUTION:
The bouncing of the spinlock between CPUs is removed to reduce the CPU spike.
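
The resolution does not name the exact mechanism, but a common way to remove spinlock cache-line bouncing when only a counter needs protection is a per-CPU counter. A hedged kernel-style C sketch (all names are illustrative):

    #include <linux/percpu_counter.h>
    #include <linux/gfp.h>

    static struct percpu_counter outstanding_reqs;

    static int example_init(void)
    {
        return percpu_counter_init(&outstanding_reqs, 0, GFP_KERNEL);
    }

    /* Hot path: updates a CPU-local count, so no shared lock or
     * cache line bounces between CPUs. */
    static void request_start(void)
    {
        percpu_counter_inc(&outstanding_reqs);
    }

    static void request_end(void)
    {
        percpu_counter_dec(&outstanding_reqs);
    }

    /* Slow path: an exact global total is computed only when needed. */
    static s64 requests_outstanding(void)
    {
        return percpu_counter_sum(&outstanding_reqs);
    }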

Patch ID: VRTSodm-7.4.2.2200

* 4049440 (Tracking ID: 4049438)

SYMPTOM:
The VRTSodm driver will not load with the 7.4.2.2200 VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled due to recent changes in VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new changes in VRTSvxfs.

Patch ID: VRTSodm-7.4.2.1500

* 4023556 (Tracking ID: 4023555)

SYMPTOM:
The VRTSodm module fails to load on Linux.

DESCRIPTION:
Recent changes in VRTSodm left some symbols unresolved, so the module needs to be recompiled.

RESOLUTION:
Recompiled VRTSodm so that the vxodm module loads.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that installing this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles12_x86_64-Patch-7.4.2.5100.tar.gz to /tmp
2. Untar infoscale-sles12_x86_64-Patch-7.4.2.5100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles12_x86_64-Patch-7.4.2.5100.tar.gz
    # tar xf /tmp/infoscale-sles12_x86_64-Patch-7.4.2.5100.tar
3. Install the hotfix. (Note that installing this P-Patch causes downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale742P5100 [<host1> <host2>...]

You can also install this patch together with the 7.4.2 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
---------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 3993256

SYMPTOM: Soft lockup messages get logged in syslog.

WORKAROUND: Currently there is no workaround. However, if softlockup_panic is enabled, a kernel core is generated on such a lockup. If this parameter is not set, no visible impact is expected.

* Tracking ID: 4097111

SYMPTOM: While two or more VxFS mount operations are running in parallel underneath an existing VxFS mount point, if a forced unmount is attempted on the parent VxFS mount point, the forced unmount operation sometimes hangs permanently.

WORKAROUND: None, except rebooting the system.



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE