infoscale-sles15_x86_64-Patch-7.4.1.3100

 Basic information
Release type: Patch
Release date: 2022-01-16
OS update support: SLES15 x86-64 SP2
Technote: None
Documentation: None
Popularity: 592 viewed
Download size: 424.09 MB
Checksum: 804758958

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On SLES15 x86-64
InfoScale Enterprise 7.4.1 On SLES15 x86-64
InfoScale Foundation 7.4.1 On SLES15 x86-64
InfoScale Storage 7.4.1 On SLES15 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
dbed-sles15_x86_64-Patch-7.4.1.1600 (obsolete), released 2019-12-19
vcsea-sles15_x86_64-Patch-7.4.1.1600 (obsolete), released 2019-12-19

 Fixes the following incidents:
3969748, 3970852, 3972077, 3973119, 3976693, 3979596, 3980547, 3980564, 3980944, 3982248, 3983165, 3983991, 3983996, 3984155, 3984175, 3984343, 3985584, 3986468, 3986572, 3986960, 3987228, 3989085, 3989099, 3989587, 3991386, 3991388, 3991390, 3991392, 3991538, 3991733, 3992045, 3992091, 3992092, 3992222, 3992902, 3993898, 3995684, 3995826, 3996220, 3999398, 3999671, 4000598, 4000746, 4002584, 4003442, 4004174, 4004182, 4004927, 4006619, 4006950, 4008070, 4009762, 4011097, 4012032, 4012318, 4013643, 4014715, 4014718, 4014984, 4015142, 4015824, 4016077, 4016082, 4016283, 4016291, 4016487, 4016488, 4016625, 4016768, 4017194, 4017284, 4019003, 4019290, 4020128, 4020803, 4021371, 4021427, 4022784, 4022791, 4023762, 4026389, 4026815, 4028124, 4028780, 4029112, 4029173, 4031342, 4033162, 4033163, 4033172, 4033216, 4033515, 4033989, 4036426, 4038100, 4038915, 4038916, 4038919, 4039240, 4039244, 4039525, 4039526, 4039527, 4039671, 4039684, 4039685, 4039686, 4040183, 4040842, 4040893, 4041107, 4041318, 4041319, 4042130, 4042494, 4042685, 4042947, 4042983, 4043494, 4044135, 4044143, 4044340, 4044639, 4045494, 4049416, 4050229, 4050664, 4051040, 4051532, 4051815, 4051887, 4051889, 4051896, 4053149, 4054243, 4054244, 4054264, 4054265, 4054266, 4054267, 4054269, 4054270, 4054271, 4054272, 4054273, 4054276, 4054323, 4054325, 4054387, 4054412, 4054416, 4054697, 4054724, 4054725, 4054726, 4055071, 4055653, 4055660, 4055668, 4055697, 4055772, 4055858, 4055895, 4055899, 4055905, 4055925, 4055938, 4056103, 4056107, 4056121, 4056124, 4056144, 4056146, 4056154, 4056525, 4056567, 4056672, 4056832, 4056918, 4057175, 4058763, 4060792, 4061465

 Patch ID:
VRTSglm-7.4.1.3100-SLES15
VRTScps-7.4.1.3100-SLES15
VRTSsfcpi-7.4.1.3100-GENERIC
VRTSdbed-7.4.1.3100-SLES
VRTSvlic-4.01.741.300-SLES
VRTSpython-3.6.6.10-SLES15
VRTSllt-7.4.1.3300-SLES15
VRTSaslapm-7.4.1.3300-SLES15
VRTSvxvm-7.4.1.3300-SLES15
VRTSgab-7.4.1.3300-SLES15
VRTSamf-7.4.1.3400-SLES15
VRTSvcsag-7.4.1.3300-SLES15
VRTSvcs-7.4.1.3300-SLES15
VRTSvcsea-7.4.1.3300-SLES15
VRTSvxfs-7.4.1.3400-SLES15
VRTSodm-7.4.1.3400-SLES15
VRTSveki-7.4.1.3400-SLES15
VRTScavf-7.4.1.3400-SLES15
VRTSgms-7.4.1.3400-SLES15
VRTSvxfen-7.4.1.3300-SLES15

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 3100 * * *
                         Patch Date: 2022-01-14


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 7.4.1 Patch 3100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbed
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSpython
VRTSsfcpi
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTScps-7.4.1.3100
* 4038100 (4034933) After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."
Patch ID: VRTSgms-7.4.1.3400
* 4057175 (4057176) Rebooting the system results in emergency mode due to corruption of module dependency files.
* 4061465 (4061644) GMS failed to start after the kernel upgrade.
Patch ID: VRTSgms-7.4.1.1600
* 3991392 (3991391) GMS module failed to load on SLES15 SP1
Patch ID: VRTSvcsag-7.4.1.3300
* 4019290 (4012396) The AzureDisk agent needs to support the latest Azure Storage SDK.
* 4042947 (4042944) In a hardware replication environment, a disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
* 4054269 (4030215) The InfoScale agents for Azure did not support credential validation methods based on the azure-identity library.
* 4054270 (4057550) The InfoScale agents for Azure do not handle generic exceptions.
* 4054273 (4044567) The HostMonitor agent faults while logging the memory usage of a system.
* 4054276 (4048164) When a cloud API that an InfoScale agent has called hangs, an unwanted failover of the associated service group may occur.
Patch ID: VRTSvcsag-7.4.1.3200
* 4021371 (4021370) The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.
* 4028124 (4027915) Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.
* 4038915 (1837967) Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.
* 4038916 (3860766) HostMonitor agent shows incorrect swap space usage in the agent logs.
* 4038919 (4038906) In case of ESXi 6.7, the VMwareDisks agent fails to perform a failover on a peer node.
* 4042947 (4042944) In a hardware replicated environment, a disk group resource may fail to import when the HARDWARE_MIRROR flag is set
Patch ID: VRTSvcsag-7.4.1.2800
* 3984343 (3982300) A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.
* 4006950 (4006979) When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.
* 4009762 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
* 4016488 (4007764) The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.
* 4016625 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
Patch ID: VRTSgab-7.4.1.3300
* 4054264 (4046413) After a node is added to or removed from a cluster, the GAB node count or the fencing quorum is not updated.
* 4054265 (4046418) The GAB module starts up even if LLT is not configured.
* 4060792 (4057312) Load time GAB tunables fail to persist the updated value after upgrade.
Patch ID: VRTSgab-7.4.1.3100
* 4041319 (4038112) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 2 (SLES 15 SP2).
Patch ID: VRTSgab-7.4.1.2800
* 4016487 (4007726) When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.
Patch ID: VRTSgab-7.4.1.1800
* 3992091 (3992044) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSllt-7.4.1.3300
* 4050664 (4046199) LLT configurations over UDP accept only ethernet interface names as link tag names.
* 4051040 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4054272 (4045607) Performance improvement of the UDP multiport feature of LLT on 1500 MTU-based networks.
* 4054697 (3985775) Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.
* 4058763 (4057310) After an InfoScale upgrade, the updated values of LLT and GAB tunables that are used when loading the corresponding modules fail to persist.
Patch ID: VRTSllt-7.4.1.3100
* 4022791 (4022792) A cluster node panics during an FSS I/O transfer over LLT.
* 4029112 (4029253) LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.
* 4041318 (4038112) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 2 (SLES 15 SP2).
Patch ID: VRTSllt-7.4.1.2800
* 3999398 (3989440) The dash (-) in the device name may cause the LLT link configuration to fail.
* 4002584 (3994996) A new -H miscellaneous flag is added to lltconfig for new functionalities, including a tunable that allows skb allocation with the SLEEP flag.
* 4003442 (3983418) In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.
Patch ID: VRTSllt-7.4.1.1800
* 3992045 (3992044) Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSvxfen-7.4.1.3300
* 4051532 (4057308) After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.
Patch ID: VRTSvxfen-7.4.1.3100
* 3996220 (3996218) In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.
* 4028780 (4029261) An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.
Patch ID: VRTSvxfen-7.4.1.2800
* 4000746 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
Patch ID: VRTSveki-7.4.1.3400
* 4055071 (4055072) Upgrading the VRTSveki package using yum reports an error.
* 4056103 (3992450) VRTSveki installation fails.
Patch ID: VRTSveki-7.4.1.3100
* 4041107 (4039061) Veki failed to load on SLES15SP2
Patch ID: VRTSveki-7.4.1.1600
* 3991733 (3991734) veki module failed to load on SLES15 SP1
Patch ID: VRTSodm-7.4.1.3400
* 4056672 (4056673) Rebooting the system results in emergency mode due to corruption of module dependency files and an incorrect vxgms dependency in the odm service file.
Patch ID: VRTSodm-7.4.1.3100
* 4039686 (4036034) ODM module failed to load on SLES15SP2.
Patch ID: VRTSodm-7.4.1.2800
* 4020803 (4020800) VRTSodm-7.4.1 module unable to load on SLES15SP1.
Patch ID: VRTSodm-7.4.1.1700
* 3991388 (3991387) ODM module failed to load on SLES15 SP1
Patch ID: VRTSvxfs-7.4.1.3400
* 4026389 (4026388) Unable to unpin a file after pinning it while testing SmartIO.
* 4050229 (4018995) Panic in AIX while doing read ahead of size greater than 4 GB.
* 4053149 (4043084) Panic in vx_cbdnlc_lookup.
* 4054243 (4014274) The gsed command might report a bad address error.
* 4054244 (4052449) Cluster goes in an 'unresponsive' mode while invalidating pages due to duplicate page entries in iowr structure.
* 4054387 (4054386) If systemd service fails to load vxfs module, the service still shows status as active instead of failed.
* 4054412 (4042254) A new feature has been added in vxupgrade which fails disk-layout upgrade if sufficient space is not available in the filesystem.
* 4054416 (4005620) Internal counter of inodes from Inode Allocation Unit (IAU) can be negative if IAU is marked bad.
* 4054724 (4018697) After installing InfoScale 7.4.2, the system fails to start.
* 4054725 (4051108) While storing multiple attributes for a file in an immediate area of inode, system might be unresponsive due to a wrong loop increment statement.
* 4054726 (4051026) If a file system is created with inode size 512, the file system might report inconsistencies with the bitmap after running fsck.
* 4055858 (4042925) Intermittent performance issues with commands such as df and ls.
Patch ID: VRTSvxfs-7.4.1.3200
* 3983991 (3864111) The vx_nospace error is encountered randomly when starting an Oracle DB on a file system with multiple checkpoints.
* 3983996 (3941620) Memory starvation during heavy write activity.
* 3989587 (3989588) Mark FCL bad for fileset (checkpoint) and deactivate FCL
* 4039671 (4021368) FSCK fails with the following error: "UX:vxfs fsck: ERROR: V-3-26113: bc_getfreebuf internal error"
* 4039684 (4036018) VxFS module failed to load on SLES15SP2
* 4042130 (3993822) fsck stops running on a file system
* 4042685 (4042684) ODM resize fails for size 8192.
* 4042983 (3990168) Accommodates the new interpretation of errors from the bio structure in Linux kernels later than 4.12.14.
* 4043494 (3985749) Observed hang in internal fsdedup test
* 4044639 (4013139) "fsmigadm" utility has been updated to successfully abort the online migration from native filesystem to VxFS on RHEL 8.x
Patch ID: VRTSvxfs-7.4.1.2800
* 3976693 (4016085) The fsdb command "xxxiau" refers to the wrong device when dumping information.
* 3983165 (3975019) Under I/O load with NFS v4 using NFS leases, the server may panic.
* 4004182 (4004181) Read the value of VxFS compliance clock
* 4004927 (3983350) Secondary may falsely assume that the ilist extent is pushed and do the allocation, even if the actual push transaction failed on primary.
* 4014718 (4011596) man page changes for glmdump
* 4015824 (4015278) System panics during vx_uiomove_by_hand.
* 4016077 (4009328) In cluster filesystem, unmount hang could be observed if smap is marked bad previously.
* 4016082 (4000465) FSCK binary loops when it detects break in sequence of log ids.
Patch ID: VRTSvxfs-7.4.1.1700
* 3991386 (3991385) VxFS module failed to load on SLES15 SP1
Patch ID: VRTSpython-3.6.6.10
* 4056525 (4049692) Additional Python modules are required in the VRTSpython package to support changes in the InfoScale licensing component.
Patch ID: VRTSvlic-4.01.741.300
* 4049416 (4049416) Migrate Licensing Collector service from Java to Python.
Patch ID: VRTScavf-7.4.1.3400
* 4056567 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
Patch ID: VRTSvcsea-7.4.1.3300
* 4054325 (4043289) In an Oracle ASM 19c environment on Solaris, the ASMInst agent fails to come online or to detect the state of the related resources.
Patch ID: VRTSvcsea-7.4.1.3100
* 4044135 (4044136) Package installation failed on CentOS platform.
Patch ID: VRTSvcsea-7.4.1.1600
* 3982248 (3989510) The VCS agent for Oracle does not support Oracle 19c databases.
Patch ID: VRTSvcs-7.4.1.3300
* 4054266 (4040705) When a command exceeds 4096 characters, hacli hangs indefinitely.
* 4054267 (4040656) When the ENOMEM error occurs, HAD does not shut down gracefully.
* 4054271 (4043700) In case of failover, parallel, or hybrid service groups, multiple PreOnline triggers can be executed on the same node or on different nodes in a cluster while an online operation is already in progress.
Patch ID: VRTSvcs-7.4.1.3100
* 4026815 (4026819) When IPv6 is disabled, non-root guest users cannot run HAD CLI commands.
Patch ID: VRTSvcs-7.4.1.2800
* 3995684 (3995685) Discrepancy in engine log messages of PR and DR site in GCO configuration.
* 4012318 (4012518) The gcoconfig command does not accept "." in the interface name.
Patch ID: VRTSamf-7.4.1.3400
* 4054323 (4001565) On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.
Patch ID: VRTSamf-7.4.1.3200
* 4044340 (4041703) The system panics when the Mount and the CFSMount agents fail to register with AMF.
Patch ID: VRTSamf-7.4.1.2800
* 4019003 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
Patch ID: VRTSamf-7.4.1.1800
* 3992092 (3992044) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 1 (SLES 15 SP1).
Patch ID: VRTSvxvm-7.4.1.3300
* 3984175 (3917636) Filesystems from /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.
* 4011097 (4010794) When storage activity was going on, Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster.
* 4039527 (4018086) The system hangs when the RVG is in DCM resync with SmartMove set to ON.
* 4045494 (4021939) The "vradmin syncvol" command fails due to recent changes related to binding sockets without specifying IP addresses.
* 4051815 (4031597) vradmind generates a core dump in __strncpy_sse2_unaligned.
* 4051887 (3956607) A core dump occurs when you run the vxdisk reclaim command.
* 4051889 (4019182) In case of a VxDMP configuration, an InfoScale server panics when applying a patch.
* 4051896 (4010458) In a Veritas Volume Replicator (VVR) environment, the rlink might inconsistently disconnect due to unexpected transactions.
* 4055653 (4049082) An I/O read error is displayed when a remote FSS node is rebooting.
* 4055660 (4046007) The private disk region gets corrupted if the cluster name is changed in an FSS environment.
* 4055668 (4045871) vxconfigd crashed at ddl_get_disk_given_path.
* 4055697 (4047793) Unable to import a disk group even when the replicated disks are in SPLIT mode.
* 4055772 (4043337) Logging fixes for VVR.
* 4055895 (4038865) The system panics due to deadlock between inode_hash_lock and DMP shared lock.
* 4055899 (3993242) The vxsnap prepare command sometimes fails when run on a vset.
* 4055905 (4052191) Unexpected scripts or commands are run due to an incorrect comment format in the vxvm-configure script.
* 4055925 (4031064) Master switch operation is hung in VVR secondary environment.
* 4055938 (3999073) The file system corrupts when the cfsmount group goes into offline state.
* 4056107 (4036181) Volumes under an RVG (Replicated Volume Group) report an I/O error.
* 4056124 (4008664) System panics when signaling the vxlogger daemon that has ended.
* 4056144 (3906534) After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should be mounted on the DMP device.
* 4056146 (3983832) VxVM commands hang in CVR environment.
* 4056832 (4057526) Added a check for init while accessing the /var/lock/subsys/ path in the vxnm-vxnetd.sh script.
* 4056121 (4003617) The vxconfigd daemon dumps core during a node join operation.
* 4056154 (3975081) DMP (Dynamic Multipathing) Native support fails to get enabled after reboot.
* 4056918 (3995648) Import of disk group in Flexible Storage Sharing (FSS) with missing disks can lead to data corruption.
Patch ID: VRTSvxvm-7.4.1.3100
* 4013643 (4010207) System panicked due to a hard lockup caused by a spinlock that was not released properly during vxstat collection.
* 4017284 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4023762 (4020046) DRL log plex gets detached unexpectedly.
* 4031342 (4031452) vxesd core dump in esd_write_fc()
* 4033162 (3968279) vxconfigd dumps core in an NVMe disk setup.
* 4033163 (3959716) The system may panic with sync replication in a VVR configuration when the RVG is in DCM mode.
* 4033172 (3994368) A vxconfigd daemon abort causes an I/O write error.
* 4033216 (3993050) The vxdctl dumpmsg command gets stuck on a large-node cluster.
* 4033515 (3984266) The DCM flag on the RVG volume may get deactivated after a master switch, which may cause excessive RVG recovery after subsequent node reboots.
* 4036426 (4036423) Race condition while reading config file in docker volume plugin caused the issue in Flex Appliance.
* 4039240 (4027261) World writable permission not required for /var/VRTSvxvm/in.vxrsyncd.stderr and /var/adm/vx/vxdmpd.log
* 4039244 (4010612) This issue is observed for NVMe and SSD disks, where every disk has a separate enclosure (nvme0, nvme1, and so on); as a result, every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
* 4039525 (4012763) An I/O hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.
* 4039526 (4034616) vol_seclog_limit_ioload tunable needs to be enabled on Linux only.
* 4039527 (4018086) A system hang was observed when the RVG was in DCM resync with SmartMove set to ON.
* 4040183 (4034857) VxVM support on SLES 15 SP2
* 4040842 (4009353) After enabling DMP native support, the machine goes into maintenance mode.
Patch ID: VRTSvxvm-7.4.1.2800
* 3984155 (3976678) vxvm-recover:  cat: write error: Broken pipe error encountered in syslog.
* 4016283 (3973202) A VVR primary node may panic due to accessing already freed memory.
* 4016291 (4002066) Panic and hang seen in reclaim.
* 4016768 (3989161) A system panic occurs when handling log requests from vxloggerd.
* 4017194 (4012681) If the vradmind process terminates for some reason, it is not properly restarted by the RVG agent of VCS.
Patch ID: VRTSvxvm-7.4.1.1800
* 3991538 (3989949) SLES15 SP1 support for VxVM.
* 3992902 (3975667) Softlock in vol_ioship_sender kernel thread
Patch ID: VRTSglm-7.4.1.3100
* 4039685 (4038994) GLM module failed to load on SLES15SP2
Patch ID: VRTSglm-7.4.1.2800
* 4014715 (4011596) man page changes for glmdump
Patch ID: VRTSglm-7.4.1.1600
* 3991390 (3991389) GLM module failed to load on SLES15 SP1
Patch ID: VRTSdbed-7.4.1.3100
* 4044143 (4044136) Package installation failed on CentOS platform.
Patch ID: VRTSdbed-7.4.1.1600
* 3980547 (3989826) The VCS agent for Oracle does not support Oracle 19c databases.
Patch ID: VRTSsfcpi-7.4.1.3100
* 3969748 (3969438) Rolling upgrade fails when the local node is specified at the system name prompt in a non-FSS environment.
* 3970852 (3970848) The CPI configures the product even if you use the -install option while installing the InfoScale Foundation version 7.x or later.
* 3972077 (3972075) Veki fails to unload during patch installation when -patch_path option is used with the installer.
* 3973119 (3973114) While upgrading to InfoScale 7.4, the installer fails to stop vxspec, vxio, and vxdmp because vxcloudd is still running.
* 3979596 (3979603) While upgrading from InfoScale 7.4.1 to 7.4.1.xxx, CPI installs the packages from the 7.4.1.xxx patch only and not the base packages of 7.4.1 GA.
* 3980564 (3980562) CPI does not perform the installation when two InfoScale patch paths are provided, and displays the following message: "CPI ERROR V-9-30-1421 The patch_path and patch2_path patches are both for the same package: VRTSinfoscale".
* 3980944 (3981519) An uninstallation of InfoScale 7.4.1 using response files fails with an error.
* 3985584 (3985583) The addnode operation fails during symmetry check of a new node with other nodes in the cluster.
* 3986468 (3987894) An InfoScale 7.4.1 installation fails even though the installer automatically downloads the appropriate platform support patch from SORT.
* 3986572 (3965602) Rolling upgrade phase 2 fails if a patch is installed as a part of rolling upgrade phase 1.
* 3986960 (3986959) The installer fails to install the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch on SLES 12 SP4.
* 3987228 (3987171) The installer takes longer than expected to start the installation process.
* 3989085 (3989081) When a system is restarted after a successful VVR configuration, it becomes unresponsive.
* 3989099 (3989098) For SLES15, system clock synchronization using the NTP server fails while configuring server-based fencing.
* 3992222 (3992254) The installer fails to install InfoScale 7.4.1 on SLES12 SP5.
* 3993898 (3993897) On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1.
* 3995826 (3995825) The installer script fails to stop the vxfen service while configuring InfoScale components or applying patches.
* 3999671 (3999669) A single-node HA configuration failed on a NetBackup Appliance system because CollectorService failed to start.
* 4000598 (4000596) The 'showversion' option of the InfoScale 7.4.1 installer fails to download the available maintenance releases or patch releases.
* 4004174 (4004172) On SLES15 SP1, while installing InfoScale 7.4.1 along with product patch, the installer fails to install some of the base rpms and exits with an error.
* 4006619 (4015976) On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.
* 4008070 (4008578) Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.
* 4012032 (4012031) The installer does not upgrade VRTSvxfs and VRTSodm inside non-global zones.
* 4014984 (4014983) The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.
* 4015142 (4015139) Product installer fails to install InfoScale on RHEL 8 systems if IPv6 addresses are provided for the system list.
* 4020128 (4020370) The product installer fails to complete a fresh configuration of InfoScale Enterprise on a RHEL 7 system when it fails to stop the vxglm service.
* 4021427 (4021515) On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.
* 4022784 (4022640) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package.
* 4029173 (4027759) The product installer installs lower versions packages if multiple patch bundles are specified using the patch path options in the incorrect order.
* 4033989 (4033988) The product installer does not allow the installation of an Infoscale patch bundle if a more recent version of any package in the bundle is already installed on the system.
* 4040893 (4040946) The product installer fails to install InfoScale 7.4.1 on SLES 15 SP2.
* 4042494 (4042674) The product installer does not honor the single-node mode of a cluster and restarts it in the multi-mode if 'vcs_allowcomms = 1'.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTScps-7.4.1.3100

* 4038100 (Tracking ID: 4034933)

SYMPTOM:
After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."

DESCRIPTION:
This issue occurs due to unexpected locking of the CP server database that is related to the stale key detection feature.

RESOLUTION:
This hotfix updates the VRTScps RPM so that the unexpected database lock is cleared and the nodes can be updated successfully.

Patch ID: VRTSgms-7.4.1.3400

* 4057175 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results in emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock.
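Conceptually, the fix serializes depmod invocations the way the following hedged shell sketch does (the lock-file path is hypothetical, not the one used by the product):
    # Run depmod under an exclusive file lock so concurrent callers wait
    # for each other instead of corrupting the dependency files in parallel.
    flock /var/lock/vx_depmod.lock depmod -a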

* 4061465 (Tracking ID: 4061644)

SYMPTOM:
GMS fails to start after the kernel upgrade with the following error:
"modprobe FATAL Module vxgms not found in directory /lib/modules/>"

DESCRIPTION:
Due to a race condition, the GMS module (vxgms.ko) is not copied to the latest kernel location under the /lib/modules directory during reboot.

RESOLUTION:
Added code to copy the module to the correct kernel directory.

Patch ID: VRTSgms-7.4.1.1600

* 3991392 (Tracking ID: 3991391)

SYMPTOM:
GMS module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and kernel changes in it caused the GMS module to fail to load.

RESOLUTION:
Added code to support GMS on SLES15 SP1.

Patch ID: VRTSvcsag-7.4.1.3300

* 4019290 (Tracking ID: 4012396)

SYMPTOM:
The AzureDisk agent fails to work with the latest Azure Storage SDK.

DESCRIPTION:
The InfoScale AzureDisk agent does not yet support the latest Python SDK for Azure Storage.

RESOLUTION:
The AzureDisk agent is now updated to support the latest Azure Storage Python SDK.

* 4042947 (Tracking ID: 4042944)

SYMPTOM:
In a hardware replication environment, a disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the DiskGroup agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
A new resource-level attribute, ScanDisks, is introduced for the DiskGroup agent. The ScanDisks attribute lets you perform a selective devices scan for all the disk paths that are associated with a VxVM disk group. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and the DMP attributes of a disk are refreshed. The default value of ScanDisks is 0, which indicates that a selective device scan is not performed. Even when ScanDisks is set to 0, if the disk group fails with an error string containing HARDWARE_MIRROR during the first disk group import attempt, the DiskGroup agent performs a selective device scan to increase the chances of a successful import.
Sample resource configuration for hardware clone disk groups:
DiskGroup tc_dg (
    DiskGroup = datadg
    DGOptions = "-o useclonedev=on -o updateid"
    ForceImport = 0
    ScanDisks = 1
    )
Sample resource configuration for hardware replicated disk groups:
DiskGroup tc_dg (
    DiskGroup = datadg
    ForceImport = 0
    ScanDisks = 1
    )

* 4054269 (Tracking ID: 4030215)

SYMPTOM:
The InfoScale agents for Azure did not support credential validation methods based on the azure-identity library.

DESCRIPTION:
The Microsoft Azure credential system is revamped, and the new system is available in the azure-identity library.

RESOLUTION:
The InfoScale agents for Azure have been enhanced to support credential validation methods based on the azure-identity library. Now, the agents support the following Azure Python SDK versions:
azure-common==1.1.25
azure-core==1.10.0
azure-identity==1.4.1
azure-mgmt-compute==19.0.0
azure-mgmt-core==1.2.2
azure-mgmt-dns==8.0.0
azure-mgmt-network==17.1.0
azure-storage-blob==12.8.0
msrestazure==0.6.4
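To replicate this SDK set in a test environment, a pip-based sketch may be used (the installation mechanism is an assumption; InfoScale ships its own VRTSpython package):
    pip install azure-common==1.1.25 azure-core==1.10.0 azure-identity==1.4.1 \
        azure-mgmt-compute==19.0.0 azure-mgmt-core==1.2.2 azure-mgmt-dns==8.0.0 \
        azure-mgmt-network==17.1.0 azure-storage-blob==12.8.0 msrestazure==0.6.4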

* 4054270 (Tracking ID: 4057550)

SYMPTOM:
The InfoScale agents for Azure do not handle generic exceptions.

DESCRIPTION:
The InfoScale agents can handle only the CloudError exception of the Azure APIs. It cannot handle other errors that may occur during certain failure conditions.

RESOLUTION:
The InfoScale agents for Azure are enhanced to handle several API failure conditions.

* 4054273 (Tracking ID: 4044567)

SYMPTOM:
The HostMonitor agent faults while logging the memory usage of a system.

DESCRIPTION:
The HostMonitor agent runs in the background to monitor the usage of the resources of a system. It faults and terminates unexpectedly while logging the memory usage of a system and generates a core dump.

RESOLUTION:
This patch updates the HostMonitor agent to handle the issue that it encounters while logging the memory usage data of a system.

* 4054276 (Tracking ID: 4048164)

SYMPTOM:
When a cloud API that an InfoScale agent has called hangs, an unwanted failover of the associated service group may occur.

DESCRIPTION:
When a cloud SDK API or a CLI command hangs, the monitor function of an InfoScale agent that has called the API or the command may time out. Consequently, the agent may report incorrect resource states, and an unwanted failover of the associated service group may occur.

RESOLUTION:
To avoid this issue, the default value of the FaultOnMonitorTimeout attribute is set to 0 for all the InfoScale agents for cloud support.
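For reference, the attribute can be inspected or tuned with the standard VCS commands, as in this hedged example (the AWSIP type name is used for illustration):
    $ hatype -display AWSIP -attribute FaultOnMonitorTimeout
    $ hatype -modify AWSIP FaultOnMonitorTimeout 0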

Patch ID: VRTSvcsag-7.4.1.3200

* 4021371 (Tracking ID: 4021370)

SYMPTOM:
The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.

DESCRIPTION:
By default, the AWSIP and EBSVol agents use IMDSv1 for requesting instance metadata. If the AWS cloud environment is configured to use IMDSv2, the AWSIP and EBSVol resources fail to come online and go into the UNKNOWN state.

RESOLUTION:
This hotfix updates the AWSIP and EBSVol agents to access the instance metadata based on the instance configuration for IMDS.

* 4028124 (Tracking ID: 4027915)

SYMPTOM:
Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.

DESCRIPTION:
Processes that are started by the ProcessOnOnly agent do not have any dependencies on vcs.service. Such processes can therefore get killed during shutdown or reboot, even if they are being used by other VCS processes. Consequently, issues occur while bringing down InfoScale services during shutdown or reboot.

RESOLUTION:
This hotfix addresses the issue by enhancing the ProcessOnOnly agent such that the configured processes have their own systemd service files. The service file is used to set dependencies, so that the corresponding process is not killed unexpectedly during shutdown or reboot.

* 4038915 (Tracking ID: 1837967)

SYMPTOM:
Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.

DESCRIPTION:
This issue can occur when the STDOUT and STDERR file descriptors of the program to be started and monitored are not redirected to a specific file or to /dev/null. In this case, an application that is started by the Online entry point inherits the STDOUT and STDERR file descriptors from the entry point. Therefore, the entry point and the application, both, read from and write to the same file, which may lead to file corruption and cause the agent entry point to behave unexpectedly.

RESOLUTION:
The Application agent is updated to identify whether STDOUT and STDERR for the configured application are already redirected. If not, the agent redirects them to /dev/null.
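Illustratively, the safe launch pattern now enforced by the agent is equivalent to the following generic shell sketch (the program path is hypothetical):
    # Detach STDOUT/STDERR from the entry point's descriptors so that the
    # application and the entry point never write to the same file.
    /path/to/app > /dev/null 2>&1 &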

* 4038916 (Tracking ID: 3860766)

SYMPTOM:
When the swap space is in terabytes, HostMonitor agent shows incorrect swap space usage in the agent logs.

DESCRIPTION:
When the swap space on the system is in terabytes, HostMonitor incorrectly calculated the available swap capacity. The incorrect swap space usage was displayed in the HostMonitor agent log.

RESOLUTION:
Veritas has modified the HostMonitor agent code to correctly calculate the swap space capacity on the system.

* 4038919 (Tracking ID: 4038906)

SYMPTOM:
In case of ESXi 6.7, the VMwareDisks agent fails to perform a failover on a peer node.

DESCRIPTION:
The VMwareDisks agent faults when you try to bring the related service group online or to fail over the service group on a peer node. This issue occurs due to the change in the behavior of the API on ESXi 6.7 that is used to attach VMware disks.

RESOLUTION:
The VMWareDisks agent is updated to support the changed behavior of the API on ESXi 6.7. The agent can now bring the service group online or perform a failover on a peer node successfully.

* 4042947 (Tracking ID: 4042944)

SYMPTOM:
In a hardware replicated environment, a disk group resource may fail to import when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the DiskGroup agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, as the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix introduces a new resource attribute for the DiskGroup agent, called ScanDisks. The ScanDisks attribute enables the user to perform a selective device scan for all the disk paths that are associated with a VxVM disk group. The VxVM and DMP disk attributes are refreshed before attempting to import hardware clone or replicated devices. The default value of ScanDisks is 0, which indicates that a selective device scan is not performed. Even when ScanDisks is set to 0, if the disk group fails with an error string containing HARDWARE_MIRROR during the first disk group import attempt, the DiskGroup agent performs a selective device scan to increase the chances of a successful import.
Sample resource configurations:
For Hardware Clone DiskGroups:

DiskGroup tc_dg (
    DiskGroup = datadg
    DGOptions = "-o useclonedev=on -o updateid"
    ForceImport = 0
    ScanDisks = 1
    )

For Hardware Replicated DiskGroups:

DiskGroup tc_dg (
    DiskGroup = datadg
    ForceImport = 0
    ScanDisks = 1
    )

Patch ID: VRTSvcsag-7.4.1.2800

* 3984343 (Tracking ID: 3982300)

SYMPTOM:
A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.

DESCRIPTION:
This issue occurs because the value of the Priority attribute of processes monitored by the ProcessOnOnly agent did not match the actual process priority value. As part of the Monitor function, if the priority of a process is found to be different from the value that is configured for the Priority attribute, warning messages are logged in the following scenarios:
1. The process is started outside VCS control with a different priority.
2. The priority of the process is changed after it is started by VCS.

RESOLUTION:
The ProcessOnOnly agent is updated to set the current value of the priority of a process to the Priority attribute if these values are found to be different.

* 4006950 (Tracking ID: 4006979)

SYMPTOM:
When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.

DESCRIPTION:
When an AzureDisk resource is online on one node, the status of that resource appears as UNKNOWN, instead of OFFLINE, on the other nodes in the cluster. Also, if the resource is brought online on a different node, its status on the remaining nodes appears as UNKNOWN. However, if the resource is not online on any node, its status correctly appears as OFFLINE on all the nodes.
This issue occurs when the VM name on the Azure portal does not match the local hostname of the cluster node. The monitor operation of the agent compares these two values to identify whether the VM to which the AzureDisk resource is attached is part of a cluster or not. If the values do not match, the agent incorrectly concludes that the resource is attached to a VM outside the cluster. Therefore, it displays the status of the resource as UNKNOWN.

RESOLUTION:
The AzureDisk agent is modified to compare the VM name with the appropriate attribute of the agent so that the status of an AzureDisk resource is reported correctly.

* 4009762 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: number of days, deleting files that are <days> old
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected duration is 30 days. Thus, only the relevant files to be moved remain, and the Online operation is completed in time.

* 4016488 (Tracking ID: 4007764)

SYMPTOM:
The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.

DESCRIPTION:
The smsyncd daemon used by the NFSRestart agent copies the symbolic links and the NFS locks from the /var/statmon/sm directory to a specific directory. These files and links are used to track the clients who have set a lock on the NFS mount points. If this directory already has a symbolic link with the same name that the smsyncd daemon is trying to copy, the /bin/cp command fails and logs an error message.

RESOLUTION:
The smsyncd daemon is enhanced to copy the symbolic links even if the link with same name is present.
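As a generic illustration of the failure and the forced copy (paths are hypothetical, not the daemon's actual ones):
    $ ln -s clientA /tmp/dst/lnk       # the destination already holds a link
    $ cp -P /tmp/src/lnk /tmp/dst/     # fails: cannot create symbolic link: File exists
    $ cp -Pf /tmp/src/lnk /tmp/dst/    # -f removes the existing name first, so the copy succeeds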

* 4016625 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2; it is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.
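For example, a sample resource configuration (the resource and disk group names are assumed) with both attributes enabled:
DiskGroup hrepl_dg (
    DiskGroup = datadg
    ForceImport = 1
    ClearClone = 2
    )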

Patch ID: VRTSgab-7.4.1.3300

* 4054264 (Tracking ID: 4046413)

SYMPTOM:
After a node is added to or removed from a cluster, the GAB node count or the fencing quorum is not updated.

DESCRIPTION:
The gabconfig -m <node_count> command returns an error even if the correct node count is provided.

RESOLUTION:
To address this behavior, a parsing issue in the GAB module is fixed.

* 4054265 (Tracking ID: 4046418)

SYMPTOM:
The GAB module starts up even if LLT is not configured.

DESCRIPTION:
Since the GAB service depends on the LLT service, if LLT fails to start or if it is not configured, GAB should not start.

RESOLUTION:
The GAB module is updated to start only if LLT is configured.

* 4060792 (Tracking ID: 4057312)

SYMPTOM:
Load-time GAB tunables fail to persist their updated values after an upgrade.

DESCRIPTION:
Typically, when any GAB tunable value in /etc/sysconfig/gab is changed before an RPM upgrade, the changed value gets reset to the default value.

RESOLUTION:
All the GAB tunable values in /etc/sysconfig/gab now persist their existing values even after an RPM upgrade.

Patch ID: VRTSgab-7.4.1.3100

* 4041319 (Tracking ID: 4038112)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 2 (SLES 15 SP2).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP1.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP2 is now introduced.

Patch ID: VRTSgab-7.4.1.2800

* 4016487 (Tracking ID: 4007726)

SYMPTOM:
When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.

DESCRIPTION:
The current error message does not mention the type of the GAB message that was transferred and the port that was used to transfer the message. Thus, the error message is not useful for troubleshooting.

RESOLUTION:
This hotfix addresses the issue by enhancing the error message that is logged. It now mentions whether the message type was DIRECTED or BROADCAST, and also the port number that was used to transfer the GAB message.

Patch ID: VRTSgab-7.4.1.1800

* 3992091 (Tracking ID: 3992044)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP1.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP1 is now introduced.

Patch ID: VRTSllt-7.4.1.3300

* 4050664 (Tracking ID: 4046199)

SYMPTOM:
LLT configurations over UDP accept only ethernet interface names as link tag names.

DESCRIPTION:
The tag field in the link definition accepts only the ethernet interface name as a value.

RESOLUTION:
The LLT module is updated to accept any string as a link tag name.
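For illustration, an /etc/llttab link directive over UDP can now carry an arbitrary tag (the tag name, port, and IP address below are assumed):
    link storlink1 udp - udp 50000 - 192.168.10.1 -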

* 4051040 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

* 4054272 (Tracking ID: 4045607)

SYMPTOM:
The UDP multiport feature of LLT needs performance improvements for transmission and reception of data over 1500 MTU-based networks.

DESCRIPTION:
The UDP multiport feature in LLT performs poorly on 1500 MTU-based networks. Data packets larger than 1500 bytes cannot be transmitted over 1500 MTU-based networks, so the IP layer fragments them appropriately for transmission. The loss of a single fragment from the set leads to a total packet (I/O) loss. LLT then retransmits the same packet repeatedly until the transmission is successful. Eventually, you may encounter issues with the Flexible Storage Sharing (FSS) feature. For example, the vxprint process or the disk group creation process may stop responding, or the I/O-shipping performance may degrade severely.

RESOLUTION:
The UDP multiport feature of LLT is updated to fragment the packets such that they can be accommodated in the 1500-byte network frame. The fragments are rearranged on the receiving node at the LLT layer. Thus, LLT can track every fragment to the destination, and in case of transmission failures, retransmit the lost fragments based on the current RTT time.

* 4054697 (Tracking ID: 3985775)

SYMPTOM:
Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.

DESCRIPTION:
LLT heartbeat loss messages can appear in the system log either due to actual heartbeat drops in the network or due to heartbeat packets arriving out of order. In either case, these messages are only informative and do not indicate any issue in the LLT functionality. Sometimes, the system log may get flooded with these messages, which are not useful.

RESOLUTION:
The LLT module is updated to lower the frequency of printing LLT heartbeat loss messages. This is achieved by increasing the number of missed sequential HB packets required to print this informative message.

* 4058763 (Tracking ID: 4057310)

SYMPTOM:
After an InfoScale upgrade, the updated values of LLT and GAB tunables that are used when loading the corresponding modules fail to persist.

DESCRIPTION:
When the value of a tunable in /etc/sysconfig/llt or /etc/sysconfig/gab is changed before an RPM upgrade, the existing value gets reset to the default value.

RESOLUTION:
The LLT and the GAB modules are updated so that their tunable values in /etc/sysconfig/llt and /etc/sysconfig/gab can retain the existing values even after an RPM upgrade.

Patch ID: VRTSllt-7.4.1.3100

* 4022791 (Tracking ID: 4022792)

SYMPTOM:
A cluster node panics during an FSS I/O transfer over LLT.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) setup, LLT uses sockets to transfer data between nodes. If a remote node is rebooted while the FSS I/O is running on the local node, the socket that was closed as part of the reboot process may still be used. If a NULL socket is thus accidentally used by the socket selection algorithm, it results in a node panic.

RESOLUTION:
This hotfix updates the LLT module to avoid the selection of such closed sockets.

* 4029112 (Tracking ID: 4029253)

SYMPTOM:
LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.

DESCRIPTION:
On receiving the buffer advertisement after an RDMA write, LLT also waits for the hardware/OS ACK for that RDMA write. Only after the ACK is received does LLT set the state of the buffers to free (usable). If the connection between the cluster nodes breaks after LLT receives the buffer advertisement but before the ACK is received, the local node generates a NAK. LLT does not acknowledge this NAK, and so that specific buffer slot remains unusable. Over time, the number of buffer slots in the unusable state increases, which sets the flow control for the LLT client. This condition leads to an FSS I/O hang.

RESOLUTION:
This hotfix updates the LLT module to mark a buffer slot as free (usable) even when a NAK is received from the previous RDMA write.

* 4041318 (Tracking ID: 4038112)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 2 (SLES 15 SP2).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP1.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP2 is now introduced.

Patch ID: VRTSllt-7.4.1.2800

* 3999398 (Tracking ID: 3989440)

SYMPTOM:
The dash (-) in the device name may cause the LLT link configuration to fail.

DESCRIPTION:
While configuring LLT links, if the LLT module finds a dash in the device name, it assumes that the device name is in the 'eth-<mac-address>' format and considers the string after the dash to be the MAC address. However, if the user specifies an interface name that includes a dash, the string after the dash is not intended to be a MAC address. In such a case, the LLT link configuration fails.

RESOLUTION:
The LLT module is updated to check for the string 'eth-' before validating the device name with the 'eth-<mac-address>' format. If the string 'eth-' is not found, LLT assumes the name to be an interface name.

* 4002584 (Tracking ID: 3994996)

SYMPTOM:
A -H miscellaneous flag is needed in lltconfig to add new functionalities, including a tunable that allows skb allocation with the SLEEP flag.

DESCRIPTION:
A -H miscellaneous flag is added, which will be used to introduce new functionalities in lltconfig, as very few single-letter options are left to assign to each functionality.

RESOLUTION:
Under the -H flag:
1. Added a tunable to allow skb allocation with the SLEEP flag, in case memory is scarce.
2. Added an skb_alloc failure count to the lltstat output.

* 4003442 (Tracking ID: 3983418)

SYMPTOM:
In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.

DESCRIPTION:
When a node tries to join the cluster after a reboot or a panic, in a rare case, on one of the remaining nodes the port state of CVM or any other port may be in an inconsistent state with respect to LLT.

RESOLUTION:
This hotfix updates the LLT module to fix the issue by not accepting a particular type of a packet when not connected to the remote node and also adds more states to log into the LLT circular buffer.

Patch ID: VRTSllt-7.4.1.1800

* 3992045 (Tracking ID: 3992044)

SYMPTOM:
Veritas InfoScale Availability (VCS) does not support SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas InfoScale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP1.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP1 is now introduced.

Patch ID: VRTSvxfen-7.4.1.3300

* 4051532 (Tracking ID: 4057308)

SYMPTOM:
After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.

DESCRIPTION:
When the value of a tunable in /etc/sysconfig/vxfen is changed before an RPM upgrade, the existing value gets reset to the default value.

RESOLUTION:
The vxfen module is updated so that its existing tunable values in /etc/sysconfig/vxfen can be retained even after an RPM upgrade.

Patch ID: VRTSvxfen-7.4.1.3100

* 3996220 (Tracking ID: 3996218)

SYMPTOM:
In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.

DESCRIPTION:
When you configure fencing in the customized mode and run the 'vxfenconfig -c' command, the vxfenconfig utility reports the 'VXFEN ERROR V-11-1-6 vxfen already configured...' error. Moreover, it also creates a new vxfend process even if VxFen is already configured. Such redundant processes may impact the performance of the system.

RESOLUTION:
The vxfenconfig utility is modified so that it does not create a new vxfend process when VxFen is already configured.

* 4028780 (Tracking ID: 4029261)

SYMPTOM:
An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.

DESCRIPTION:
If a cluster node receives a RECONFIG message while a shutdown or a restart operation is in progress, it may participate in the fencing race. The node may also win the race and then proceed to shut down. If this situation occurs, the fencing module panics the nodes that lost the race, which may cause the entire cluster to go down.

RESOLUTION:
This hotfix updates the fencing module so that it stops a cluster node from joining a race, if it receives a RECONFIG message while a shutdown or a restart operation is in progress.

Patch ID: VRTSvxfen-7.4.1.2800

* 4000746 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a long time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
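For example, to set the timeout explicitly, add the following line to /etc/vxfenmode (600 seconds, shown here, is the documented default and maximum):
    getdisks_timeout=600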

Patch ID: VRTSveki-7.4.1.3400

* 4055071 (Tracking ID: 4055072)

SYMPTOM:
Upgrading the VRTSveki package using yum reports the following error: "Starting veki /etc/vx/veki: line 51: [: too many arguments"

DESCRIPTION:
While upgrading the VRTSveki package, the presence of multiple module directories might cause the upgrade script to print an error message.

RESOLUTION:
The VRTSveki upgrade script is modified to check for the specific module directory related to the current kernel version.

* 4056103 (Tracking ID: 3992450)

SYMPTOM:
If VRTSveki is installed through yum while VxVM modules are already loaded, the VRTSveki installation fails.

DESCRIPTION:
If the VRTSveki is installed when VxVM modules are already loaded through YUM , then the VRTSveki installation fails and it does not create the proper links in /lib/modules/ directory. Since the links are not created in the directory then installation of VxVM also fails because of the same.

RESOLUTION:
Code changes have been made to resolve the issue.

Patch ID: VRTSveki-7.4.1.3100

* 4041107 (Tracking ID: 4039061)

SYMPTOM:
Veki failed to load on SLES15SP2

DESCRIPTION:
SLES15 SP2 is a new release, and kernel changes in it caused Veki to fail to load.

RESOLUTION:
Added code to support Veki on SLES15 SP2.

Patch ID: VRTSveki-7.4.1.1600

* 3991733 (Tracking ID: 3991734)

SYMPTOM:
veki module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and kernel changes in it caused the veki module to fail to load.

RESOLUTION:
Added code to support VRTSveki on SLES15 SP1.

Patch ID: VRTSodm-7.4.1.3400

* 4056672 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results in emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock. Corrected vxgms dependency in odm service file.

Patch ID: VRTSodm-7.4.1.3100

* 4039686 (Tracking ID: 4036034)

SYMPTOM:
ODM module failed to load on SLES15SP2

DESCRIPTION:
SLES15 SP2 is a new release, and kernel changes in it caused the ODM module to fail to load.

RESOLUTION:
Added code to support ODM on SLES15 SP2.

Patch ID: VRTSodm-7.4.1.2800

* 4020803 (Tracking ID: 4020800)

SYMPTOM:
The VRTSodm-7.4.1 module fails to load on SLES15 SP1.

DESCRIPTION:
VRTSodm needs to be recompiled because of recent changes in the VRTSvxfs header files, due to which some symbols are not resolved.

RESOLUTION:
Recompiled VRTSodm against the updated VRTSvxfs header files.

Patch ID: VRTSodm-7.4.1.1700

* 3991388 (Tracking ID: 3991387)

SYMPTOM:
ODM module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and it has kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on SLES15 SP1

Patch ID: VRTSvxfs-7.4.1.3400

* 4026389 (Tracking ID: 4026388)

SYMPTOM:
The fscache and sfcache commands exit with the following error while unpinning a file:
UX:vxfs fscache: ERROR: V-3-28059: -o load is supported only for pin option

DESCRIPTION:
The fscache and sfcache commands report an error while unpinning a file from the VxFS cache device.

RESOLUTION:
Code changes have been introduced to fix the error.
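
For reference, the affected operations resemble the following (the mount point and file names are illustrative):

sfcache pin -o load /mnt1/datafile    # pinning supports the -o load option
sfcache unpin /mnt1/datafile          # unpinning must not be passed -o load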

* 4050229 (Tracking ID: 4018995)

SYMPTOM:
The system panics on AIX when doing a read ahead of size greater than 4 GB.

DESCRIPTION:
There is a data type mismatch (64-bit vs. 32-bit) in the VxFS read-ahead code path, which leads to the page list size being truncated and causes the system to panic with the following stack:
vx_getpage_cleanup
vx_do_getpage
vx_mm_getpage
vx_do_read_ahead
vx_cache_read_noinline
vx_read_common_noinline
vx_read1
vx_read
vx_rdwr_attr

RESOLUTION:
The data type mismatch has been fixed in code to handle such scenarios.

* 4053149 (Tracking ID: 4043084)

SYMPTOM:
System panic in vx_cbdnlc_lookup.

DESCRIPTION:
Panic observed in the following stack trace:
vx_cbdnlc_lookup+000140 ()
vx_int_lookup+0002C0 ()
vx_do_lookup2+000328 ()
vx_do_lookup+0000E0 ()
vx_lookup+0000A0 ()
vnop_lookup+0001D4 (??, ??, ??, ??, ??, ??)
getFullPath+00022C (??, ??, ??, ??)
getPathComponents+0003E8 (??, ??, ??, ??, ??, ??, ??)
svcNameCheck+0002EC (??, ??, ??, ??, ??, ??, ??)
kopen+000180 (??, ??, ??)
syscall+00024C ()

RESOLUTION:
Code changes have been made to handle memory pressure while changing FC connectivity.

* 4054243 (Tracking ID: 4014274)

SYMPTOM:
The gsed command might report a bad address error when VxFS receives ACE (access control entry) masking flags.

DESCRIPTION:
While the gsed command is run on a VxFS file system, if VxFS receives the ACE_GETACLCNT and ACE_GETACL masking flags, VxFS might report a bad address error because VxFS does not support ACE (access control entry) flags.

RESOLUTION:
Added code to handle the ACE flags.

* 4054244 (Tracking ID: 4052449)

SYMPTOM:
The cluster becomes unresponsive while invalidating pages due to duplicate page entries in the iowr structure.

DESCRIPTION:
While finding pages for the invalidation of inodes, VxFS traverses the radix tree under the RCU lock and fills an array in the IO structure with the dirty/writeback pages that need to be invalidated. This lock is efficient for reads but does not protect against the parallel creation/deletion of nodes. Hence, when VxFS finds a page, its consistency is checked through radix_tree_exception()/radix_tree_deref_retry(). If that check fails, VxFS restarts the page search from the start offset, but it does not reset the array index. This leads to incorrect filling of the IO structure's array with duplicate page entries. While trying to destroy these pages, VxFS takes a page lock on each page. Because of the duplicate entries, VxFS tries to take the page lock twice on the same page, leading to a self-deadlock.

RESOLUTION:
Code is modified to reset the array index correctly in case of failure to find pages.

* 4054387 (Tracking ID: 4054386)

SYMPTOM:
VxFS systemd service may show active status despite the module not being loaded.

DESCRIPTION:
If the systemd service fails to load the vxfs module, the service status still shows active instead of failed.

RESOLUTION:
The script is modified to show the correct status in case of such failures.

* 4054412 (Tracking ID: 4042254)

SYMPTOM:
vxupgrade sets fullfsck flag in the filesystem if it is unable to upgrade the disk layout version because of ENOSPC.

DESCRIPTION:
If the file system is 100% full and its disk layout version is upgraded by using vxupgrade, the utility starts the upgrade, later fails with ENOSPC, and ends up setting the fullfsck flag on the file system.

RESOLUTION:
Code changes have been introduced to first calculate the space required to perform the disk layout upgrade. If the required space is not available, the upgrade fails gracefully without setting the fullfsck flag.
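
For reference, a disk layout upgrade is typically invoked as follows (the mount point and target version are illustrative):

vxupgrade -n 12 /mnt1    # upgrade the file system at /mnt1 to disk layout version 12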

* 4054416 (Tracking ID: 4005620)

SYMPTOM:
Inode count maintained in the inode allocation unit (IAU) can be negative when an IAU is marked bad. An error such as the following is logged.

V-2-4: vx_mapbad - vx_inoauchk - /fs1 file system free inode bitmap in au 264 marked bad

Due to the negative inode count, errors like the following might be observed and processes might be stuck at inode allocation with a stack trace as shown.

V-2-14: vx_iget - inode table overflow

	vx_inoauchk 
	vx_inofindau 
	vx_findino 
	vx_ialloc 
	vx_dirmakeinode 
	vx_dircreate 
	vx_dircreate_tran 
	vx_pd_create 
	vx_create1_pd 
	vx_do_create 
	vx_create1 
	vx_create0 
	vx_create 
	vn_open 
	open

DESCRIPTION:
The inode count can become negative if VxFS tries to allocate an inode from an IAU where the counter for regular file and directory inodes is already zero. In such a situation, the inode allocation fails and the IAU map is marked bad. But the code tries to further reduce the already-zero counters, resulting in negative counts that can cause a subsequent unresponsive situation.

RESOLUTION:
Code is modified to not reduce the inode counters in the vx_mapbad code path if the result would be negative. A diagnostic message like the following is logged:
"vxfs: Error: Incorrect values of ias->ifree and Aus rifree detected."

* 4054724 (Tracking ID: 4018697)

SYMPTOM:
After installing InfoScale 7.4.2, when the system is started, it fails to start with the following message:
an inconsistency in the boot archive was detected the boot archive will be updated, then the system will reboot

DESCRIPTION:
Due to a defect in the vxfs-modload service, vxfs modules get copied to the system on each reboot. As a result, the system goes into an inconsistent state and gets stuck in a reboot loop.

RESOLUTION:
The vxfs-modload script is modified to address this defect.

* 4054725 (Tracking ID: 4051108)

SYMPTOM:
While storing multiple attributes for a file in the immediate area of an inode, the system might become unresponsive due to a wrong loop increment statement.

DESCRIPTION:
While storing attributes for a file in the immediate area of an inode, a loop is used to store multiple attributes one by one. The wrong loop increment statement might cause the loop to execute indefinitely. The system might become unresponsive as a result.

RESOLUTION:
Code is corrected to increment the loop variable properly on each iteration.

* 4054726 (Tracking ID: 4051026)

SYMPTOM:
If a file system is created with inode size 512, it might report inconsistencies with the bitmap after running fsck.

DESCRIPTION:
With inode size 512, while running fsck, some of the inodes get marked as allocated even though they are free. The bitmaps are actually correct on disk, but fsck reports them as wrong.

RESOLUTION:
Fixed the FSCK binary to mark inodes allocated/free correctly.
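
For reference, a file system of this kind can be created and checked as follows, assuming the mkfs_vxfs inosize option (the device name is illustrative):

mkfs -t vxfs -o inosize=512 /dev/vx/rdsk/testdg/vol1   # create a VxFS file system with 512-byte inodes
fsck -t vxfs -o full /dev/vx/rdsk/testdg/vol1          # run a full structural check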

* 4055858 (Tracking ID: 4042925)

SYMPTOM:
Intermittent performance issues with commands such as df and ls.

DESCRIPTION:
Commands like "df" "ls" issue stat system call on node to calculate the statistics of the file system. In a CFS, when stat system call is issued, it compiles statistics from all nodes. When multiple df or ls are fired within specified time limit, vxfs is optimized. vxfs returns the cached statistics, instead of recalculating statistics from all nodes. If multiple such commands are fired in succession and one of the old caller of stat system call takes time, this optimization fails and VxFS recompiles statistics from all nodes. This can lead to bad performance of stat system call, leading to unresponsive situations for df, ls commands.

RESOLUTION:
Code is modified to protect the last-modified time of the stat system call with a sleep lock.

Patch ID: VRTSvxfs-7.4.1.3200

* 3983991 (Tracking ID: 3864111)

SYMPTOM:
Oracle database start failure, with trace log like this:

ORA-63999: data file suffered media failure
ORA-01114: IO error writing block to file 304 (block # 722821)
ORA-01110: data file 304: <file_name>
ORA-17500: ODM err:ODM ERROR V-41-4-2-231-28 No space left on device

DESCRIPTION:
In vx_logged_cwrite(), the request size is not rounded up to the next block border before being converted from bytes into blocks. So if the request is smaller than a block, the space argument for the extent allocation is 0, which causes ENOSPC to be returned.

RESOLUTION:
Round up the request size to the next block border in vx_logged_cwrite().

* 3983996 (Tracking ID: 3941620)

SYMPTOM:
High memory usage is seen on Solaris system with delayed allocation enabled.

DESCRIPTION:
With VxFS delayed allocation enabled, under certain unnecessary conditions, some put-page operations could return early, before the pages were flushed, and without a later retry. Then only one VxFS worker thread works on page flushing through the delayed allocation flush path. This makes dirty page flushing much slower and causes memory starvation on the system.

RESOLUTION:
Code changes have been made to make page flushing work through multiple threads as normal.

* 3989587 (Tracking ID: 3989588)

SYMPTOM:
The FCL is marked bad for a fileset (checkpoint), and the FCL is deactivated.

DESCRIPTION:
If an FCL (file change log) merge fails in a cluster environment, the corresponding FCL is marked bad and deactivated. This was happening due to a race between fileset (checkpoint) removal and the FCL merge. Since the fileset is already removed, there is no data impact.

RESOLUTION:
Code is fixed to not mark the FCL bad in such cases.

* 4039671 (Tracking ID: 4021368)

SYMPTOM:
FSCK fails with the following error:
             "UX:vxfs fsck: ERROR: V-3-26113: bc_getfreebuf internal error"

DESCRIPTION:
During fsck read ahead, read-ahead buffers were deleted before the actual async read finished. This leaked buffers in the user buffer cache maintained by FSCK. When fsck later tried to find a buffer in the buffer cache for subsequent operations, it failed to get free buffers because of this leak, causing fsck to fail with "bc_getfreebuf internal error".

RESOLUTION:
Code is modified to delay the deletion of buffers until the async read completes.

* 4039684 (Tracking ID: 4036018)

SYMPTOM:
VxFS module failed to load on SLES15SP2

DESCRIPTION:
SLES15 SP2 is a new release, and it has kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on SLES15SP2.

* 4042130 (Tracking ID: 3993822)

SYMPTOM:
Running fsck on a file system dumps core.

DESCRIPTION:
A buffer was marked busy without taking the buffer lock while getting a buffer from the freelist in one thread, while another thread was accessing the same buffer through its local variable.

RESOLUTION:
The buffer is now marked busy under the buffer lock while getting a free buffer.

* 4042685 (Tracking ID: 4042684)

SYMPTOM:
Command fails to resize the file.

DESCRIPTION:
There is a window in which a parallel thread can clear the IDELXWRI flag, which it should not.

RESOLUTION:
The delayed extending write flag is set again in case any parallel thread has cleared it.

* 4042983 (Tracking ID: 3990168)

SYMPTOM:
The new interpretation of the error field in the bio structure in Linux kernels later than 4.12.14 needs to be accommodated.

DESCRIPTION:
The interpretation of the error field in the bio structure changed in Linux kernels later than 4.12. VxFS needs to accommodate these changes to correctly interpret errors from the bio structure.

RESOLUTION:
Code added to accommodate new interpretation of error from bio structure in linux kernel greater than 4.12.14 for VxFS.

* 4043494 (Tracking ID: 3985749)

SYMPTOM:
Observed hang in internal fsdedup test

DESCRIPTION:
During internal fsdedup testing, one thread went into a hang state. The hang occurred due to a deadlock scenario; the stack trace of the hung thread was as below.

schedule 
io_schedule 
sync_page 
__wait_on_bit_lock 
__lock_page 
lock_page at 
vx_page_alloc_sync
vx_page_alloc 
vx_do_getpage 
vx_getpage1
vx_segmap_getmap
vx_cache_read
vx_read1
vx_vop_read
vx_read 
vfs_read 
sys_read 
system_call_fastpath

RESOLUTION:
Added code changes to fix possible deadlocks.

* 4044639 (Tracking ID: 4013139)

SYMPTOM:
Operation for aborting the online migration from native filesystem to VxFS fails on RHEL 8.x

DESCRIPTION:
The operation for aborting the online migration from native filesystem to VxFS fails with the below error message:
umount: /mnt1/lost+found/srcfs: not mounted
UX:vxfs fsmigadm: ERROR: V-3-26835:  umount of source device: /dev/vx/dsk/testdg/vol1 failed, with error: 32

RESOLUTION:
Code changes have been done so that "fsmigadm abort <mount point>" successfully aborts the online migration.

Patch ID: VRTSvxfs-7.4.1.2800

* 3976693 (Tracking ID: 4016085)

SYMPTOM:
Garbage values are shown instead of the correct information.

DESCRIPTION:
Information of device 0 was dumped while the information of another device (device 1, 2, and so on) was requested.

RESOLUTION:
Updated the curpos pointer to point to the correct device as needed.

* 3983165 (Tracking ID: 3975019)

SYMPTOM:
Under I/O load with NFS v4 using NFS leases, the system may panic with the below message.
Kernel panic - not syncing: GAB: Port h halting system due to client process failure

DESCRIPTION:
NFS v4 uses a lease per file. This delegation can be taken in RD or RW mode and can be released conditionally. For CFS, we release such a delegation from a specific node while the inode is being normalized (i.e., losing ownership). This can race with another setlease operation on the same node and end up in a deadlock on ->i_lock.

RESOLUTION:
Code changes are made to disable lease.

* 4004182 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock, but no API was available for the user to read its value.

RESOLUTION:
An API is provided on the mount point to read the compliance clock for that file system.

* 4004927 (Tracking ID: 3983350)

SYMPTOM:
Inodes are allocated without pushing the ilist extent

DESCRIPTION:
Multiple messages can be sent from vx_cfs_ilist_push for inodes that are in the same block. On the receiver side, i.e., the primary node, vx_find_iposition() may return bno VX_OVERLAY and btranid 0 until someone actually does the push. All these get serialized in vx_ilist_pushino() on VX_IRWLOCK_RANGE and VX_CFS_IGLOCK. The first one does the push and sets btranid to the last commit ID. As btranid is non-null, vx_recv_ilist_push_msg() waits for vx_tranidflush() to flush the transaction to disk. The other receiver threads do not do the push and have tranid 0, so they return success without waiting for the transactions to be flushed to disk. Now, if the file system gets disabled while flushing, we end up in an inconsistent state because some of the inodes have actually returned success and marked this block as pushed in-core on the secondary.

RESOLUTION:
If the block is pushed or pulled and tranid is 0 again lookup for ilist extent containing the inode. This will populate the correct tranid from ilptranid and the thread will wait for transaction flush.

* 4014718 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a supported feature.

DESCRIPTION:
The new "-h" option of glmdump, which uses the hacli utility for communicating across the nodes in the cluster, needs to be included in the man page.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.

* 4015824 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate hugepages. Under low memory conditions, it is possible that get_user_pages() returns compound pages to VxFS. In the case of compound pages, only the head page has a valid mapping set, and all other pages are marked as TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages of that compound page. This can result in dereferencing an illegal address (TAIL_MAPPING), which was causing the panic in the stack. VxFS does not support hugepages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().

RESOLUTION:
Code is modified to use the head page when tail pages of a compound page are encountered while VxFS checks the writable mapping.

* 4016077 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad then it could cause hang while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1() for logversion >= VX_LOGVERSION13 if we are freeing whole AUs we set VX_AU_SMAPFREE flag for those AUs. This ensures that revoke of delegation for that AU is delayed till the AU has SMAP free transaction in progress. This flag gets cleared either in post commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario when we are trying to free a whole AU and its smap is marked bad, we do not return any error to vx_extfree1() and neither do we add the subfunction to free the extent to the transaction. So, the VX_AU_SMAPFREE flag is not cleared and remains set even if there is no SMAP free transaction in progress. This could lead to hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been done to add error handling in vx_extfree1 to clear VX_AU_SMAPFREE flag in case where error is returned due to bad smap.

* 4016082 (Tracking ID: 4000465)

SYMPTOM:
The FSCK binary loops when it detects a break in the sequence of log IDs.

DESCRIPTION:
When a file system is not cleanly unmounted, it ends up with an unflushed intent log. This intent log is flushed either during the next mount or when fsck is run on the file system. To build the list of transactions that need to be replayed, VxFS used a binary search to find the head and the tail. But if there is a breakage in the intent log, that code is susceptible to looping. To avoid this loop, VxFS now uses a sequential search instead of a binary search to find the range.

RESOLUTION:
Code is modified to use a sequential search instead of a binary search to find the replayable transaction range.

Patch ID: VRTSvxfs-7.4.1.1700

* 3991386 (Tracking ID: 3991385)

SYMPTOM:
VxFS module failed to load on SLES15 SP1

DESCRIPTION:
SLES15 SP1 is a new release, and it has kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on SLES15 SP1

Patch ID: VRTSpython-3.6.6.10

* 4056525 (Tracking ID: 4049692)

SYMPTOM:
Additional Python modules are required in the VRTSpython package to support changes in the InfoScale licensing component.

DESCRIPTION:
To support the changes in the InfoScale licensing component, some additional modules are required in the VRTSpython package.

RESOLUTION:
The VRTSpython package is updated to include additional Python modules.

Patch ID: VRTSvlic-4.01.741.300

* 4049416 (Tracking ID: 4049416)

SYMPTOM:
Security vulnerabilities are frequently reported in the JRE.

DESCRIPTION:
Many vulnerabilities are reported in the JRE every quarter. To overcome this issue, the Licensing Collector service is migrated from Java to Python.
All other behavior of the Licensing Collector service remains the same.

RESOLUTION:
Migrated the Licensing Collector service from Java to Python.

Patch ID: VRTScavf-7.4.1.3400

* 4056567 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent.
- The ScanDisks attribute specifies whether to perform a selective devices scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective devices scan. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and the DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective devices scan is not performed. However, even when ScanDisks is set to 0, if the disk group fails during the first import attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective devices scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )

Patch ID: VRTSvcsea-7.4.1.3300

* 4054325 (Tracking ID: 4043289)

SYMPTOM:
In an Oracle ASM 19c environment on Solaris, the ASMInst agent fails to come online or to detect the state of the related resources.

DESCRIPTION:
This issue occurs due to incorrect permissions on certain Oracle files on Solaris.

RESOLUTION:
The ASMInst agent is updated to handle the change in permissions that causes this issue.

Patch ID: VRTSvcsea-7.4.1.3100

* 4044135 (Tracking ID: 4044136)

SYMPTOM:
Package installation failed on CentOS platform.

DESCRIPTION:
Due to lower packaging version package failed to install.

RESOLUTION:
Increased package versioning to 7.4.1.3100, now package is able to install on CentOS platform.

Patch ID: VRTSvcsea-7.4.1.1600

* 3982248 (Tracking ID: 3989510)

SYMPTOM:
The VCS agent for Oracle does not support Oracle 19c databases.

DESCRIPTION:
In case of non-CDB or CDB-only databases, the VCS agent for Oracle does not recognize that an Oracle 19c resource is intentionally offline after a graceful shutdown. This functionality is never supported for PDB-type databases.

RESOLUTION:
The agent is updated to recognize the graceful shutdown of an Oracle 19c resource as intentional offline in case of non-CDB or CDB-only databases. For details, refer to the article at: https://www.veritas.com/support/en_US/article.100046803.

Patch ID: VRTSvcs-7.4.1.3300

* 4054266 (Tracking ID: 4040705)

SYMPTOM:
When a command exceeds 4096 characters, hacli hangs indefinitely.

DESCRIPTION:
Instead of returning the appropriate error message, hacli waits indefinitely for a reply from the VCS engine.

RESOLUTION:
Increased the limit of the hacli '-cmd' option to 7680 characters. Validations for the various hacli options are also handled better now. Thus, when the value of the '-cmd' option exceeds the new limit, hacli no longer hangs, but instead returns the appropriate error message.

* 4054267 (Tracking ID: 4040656)

SYMPTOM:
When the ENOMEM error occurs, HAD does not shut down gracefully.

DESCRIPTION:
When HAD encounters the ENOMEM error, it reattempts the operation a few times until it reaches a predefined maximum limit, and then it exits. The hashadow daemon restarts HAD with the '-restart' option. The failover service group in the cluster is not started automatically, because it appears that one of the cluster nodes is in the 'restarting' mode.

RESOLUTION:
HAD is enhanced to exit gracefully when it encounters the ENOMEM error, and the hashadow daemon is updated to restart HAD without the '-restart' option. This enhancement ensures that in such a scenario, the autostart of a failover service group is triggered as expected.

* 4054271 (Tracking ID: 4043700)

SYMPTOM:
In case of failover, parallel, or hybrid service groups, multiple PreOnline triggers can be executed on the same node or on different nodes in a cluster while an online operation is already in progress.

DESCRIPTION:
This issue occurs because the validation of online operations did not take the ongoing execution of PreOnline triggers into consideration. Thus, subsequent online operations are accepted while a PreOnline trigger is already being executed. Consequently, multiple PreOnline trigger instances are executed.

RESOLUTION:
A check for PreOnline triggers is added to the validation of ongoing online operations, and subsequent calls for online operations are rejected. Thus, the PreOnline trigger for failover groups is executed only once.

Patch ID: VRTSvcs-7.4.1.3100

* 4026815 (Tracking ID: 4026819)

SYMPTOM:
Non-root users of GuestGroup in a secure cluster cannot execute VCS commands like "hagrp -state".

DESCRIPTION:
When a non-root guest user runs a HAD CLI command, the command fails to execute and the following error is logged: "VCS ERROR V-16-1-10600 Cannot connect to VCS engine". This issue occurs when IPv6 is disabled.

RESOLUTION:
This hotfix updates the VCS module to run HAD CLI commands successfully even when IPv6 is disabled.

Patch ID: VRTSvcs-7.4.1.2800

* 3995684 (Tracking ID: 3995685)

SYMPTOM:
A discrepancy was observed between the VCS engine log messages at the primary site and those at the DR site in a GCO configuration.

DESCRIPTION:
If a resource that was online at the primary site is taken offline outside VCS control, the VCS engine logs the messages related to the unexpected change in the state of the resource (the successful clean entry point execution, and so on). The messages clearly indicate that the resource is faulted. However, the VCS engine does not log any debugging error messages regarding the fault at the primary site, but instead logs them at the DR site. Consequently, there is a discrepancy between the engine log messages at the primary site and those at the DR site.

RESOLUTION:
The VCS engine module is updated to log the appropriate debugging error messages at the primary site when a resource goes into the Faulted state.

FILE / VERSION:
had.exe / 7.4.10004.0
hacf.exe / 7.4.10004.0
haconf.exe  / 7.4.10004.0

* 4012318 (Tracking ID: 4012518)

SYMPTOM:
The gcoconfig command does not accept "." in the interface name.

DESCRIPTION:
The naming guidelines for network interfaces allow the "." character to be included as part of the name string. However, if this character is included, the gcoconfig command returns an error stating that the NIC name is invalid.

RESOLUTION:
This hotfix updates the gcoconfig command code to allow the inclusion of the "." character when providing interface names.

Patch ID: VRTSamf-7.4.1.3400

* 4054323 (Tracking ID: 4001565)

SYMPTOM:
On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.

DESCRIPTION:
On Solaris 11.4, when Oracle processes stop, IMF notifies the Oracle agent, but the monitor is not scheduled. As a result, the agent fails intelligent monitoring.

RESOLUTION:
Oracle agent now provides notifications when Oracle processes stop.

Patch ID: VRTSamf-7.4.1.3200

* 4044340 (Tracking ID: 4041703)

SYMPTOM:
The system panics when the Mount and the CFSMount agents fail to register with AMF.

DESCRIPTION:
This issue occurs after an operating system upgrade. The agents fail to register with AMF, which leads to a system panic.

RESOLUTION:
Added support for the cursor in mount structures starting with RHEL 8.4.

Patch ID: VRTSamf-7.4.1.2800

* 4019003 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

Patch ID: VRTSamf-7.4.1.1800

* 3992092 (Tracking ID: 3992044)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 1 (SLES 15 SP1).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP1.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP1 is
now introduced.

Patch ID: VRTSvxvm-7.4.1.3300

* 3984175 (Tracking ID: 3917636)

SYMPTOM:
Filesystems from /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.

DESCRIPTION:
During bootup, when systemd tries to mount using the devices mentioned in the /etc/fstab file, the device cannot be accessed, leading to the failure of the mount operation. As the device is discovered through the udev infrastructure, the udev rules for the device should be applied when the volumes are created so that the device gets registered with systemd. If the udev rules are executed even before the device in the /dev/vx/dsk directory is created, the device is not registered with systemd, leading to the failure of the mount operation.

RESOLUTION:
To register the device, create all the volumes and run "udevadm trigger" to execute all the udev rules.
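
For example, the udev rules can be re-applied for block devices with a command such as the following (the subsystem filter is optional):

udevadm trigger --subsystem-match=block   # re-run udev rules so the /dev/vx devices register with systemd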

* 4011097 (Tracking ID: 4010794)

SYMPTOM:
Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster with below stack when storage activities were going on:
dmp_start_cvm_local_failover+0x118()
dmp_start_failback+0x398()
dmp_restore_node+0x2e4()
dmp_revive_paths+0x74()
gen_update_status+0x55c()
dmp_update_status+0x14()
gendmpopen+0x4a0()

DESCRIPTION:
The system panic occurred due to invalid dmpnode's current primary path when disks were attached/detached in a cluster. When DMP accessed the current primary path without doing sanity check, the system panics due to an invalid pointer.

RESOLUTION:
Code changes have been made to avoid accessing any invalid pointer.

* 4039527 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature set to ON, the vxiod with ID 128 starts the replication when the RVG is in DCM mode. The vxiod then awaits the file system's response on whether the given region is in use by the file system. The file system triggers MDSHIP I/O on the logowner. Due to a bug in the code, the MDSHIP I/O always gets queued in the vxiod with ID 128. Hence, a deadlock situation occurs.

RESOLUTION:
Code changes have been made to avoid handling the MDSHIP IO in vxiod whose ID is bigger than 127.

* 4045494 (Tracking ID: 4021939)

SYMPTOM:
The "vradmin syncvol" command fails and the following message is logged: "VxVM VVR vxrsync ERROR V-5-52-10206 no server host systems specified".

DESCRIPTION:
VVR sockets now bind without specifying IP addresses. This recent change causes issues when such interfaces are used to identify whether the associated remote host is the same as the localhost. For example, in the case of the "vradmin syncvol" command, VVR incorrectly assumes that the local host has been provided as the remote host, logs the error message, and exits.

RESOLUTION:
Updated the vradmin utility to correctly identify the remote hosts that are passed to the "vradmin syncvol" command.

* 4051815 (Tracking ID: 4031597)

SYMPTOM:
vradmind generates a core dump in __strncpy_sse2_unaligned.

DESCRIPTION:
The following core dump is generated:
(gdb)bt
Thread 1 (Thread 0x7fcd140b2780 (LWP 90066)):
#0 0x00007fcd12b1d1a5 in __strncpy_sse2_unaligned () from /lib64/libc.so.6
#1 0x000000000059102e in IpmServer::accept (this=0xf21168, new_handlesp=0x0) at Ipm.C:3406
#2 0x0000000000589121 in IpmHandle::events (handlesp=0xf12088, new_eventspp=0x7ffc8e80a4e0, serversp=0xf120c8, new_handlespp=0x0, ms=100) at Ipm.C:613
#3 0x000000000058940b in IpmHandle::events (handlesp=0xfc8ab8, vlistsp=0xfc8938, ms=100) at Ipm.C:645
#4 0x000000000040ae2a in main (argc=1, argv=0x7ffc8e80e8e8) at srvmd.C:722

RESOLUTION:
vradmind is updated to properly handle getpeername(), which addresses this issue.

* 4051887 (Tracking ID: 3956607)

SYMPTOM:
When removing a VxVM disk using the vxdg-rmdisk operation, the following error occurs while requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: The reclamation operation is irreversible. However, a core dump occurs when vxdisk-reclaim is executed.

DESCRIPTION:
This issue occurs due to a memory allocation failure in the disk-reclaim code, which fails to be detected and causes an invalid address to be referenced. Consequently, a core dump occurs.

RESOLUTION:
The disk-reclaim code is updated to handle memory allocation failures properly.

* 4051889 (Tracking ID: 4019182)

SYMPTOM:
In case of a VxDMP configuration, an InfoScale server panics when applying a patch. The following stack trace is generated:
unix:panicsys+0x40()
unix:vpanic_common+0x78()
unix:panic+0x1c()
unix:mutex_enter() - frame recycled
vxdmp(unloaded text):0x108b987c(jmpl?)()
vxdmp(unloaded text):0x108ab380(jmpl?)(0)
genunix:callout_list_expire+0x5c()
genunix:callout_expire+0x34()
genunix:callout_execute+0x10()
genunix:taskq_thread+0x42c()
unix:thread_start+4()

DESCRIPTION:
Some VxDMP functions create callouts. The VxDMP module may already be unloaded when a callout expires, which may cause the server to panic. VxDMP should cancel any previous timeout function calls before it unloads itself.

RESOLUTION:
VxDMP is updated to cancel any previous timeout function calls before unloading itself.

* 4051896 (Tracking ID: 4010458)

SYMPTOM:
In a VVR environment, the rlink might inconsistently disconnect due to unexpected transactions, and the following message might get logged:
"VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed"

DESCRIPTION:
In a VVR environment, a transaction is triggered when a change in the VxVM or the VVR objects needs to be persisted on disk. In some scenarios, a few unnecessary transactions get triggered in a loop, which causes multiple rlink disconnects, and the aforementioned message gets logged frequently. One such unexpected transaction occurs when the open/close command is issued for a volume as part of SmartIO caching. The vradmind daemon also issues some open/close commands on volumes as part of the I/O statistics collection, which triggers unnecessary transactions. Additionally, some unexpected transactions occur due to incorrect references to some temporary flags on the volumes.

RESOLUTION:
VVR is updated to first check whether SmartIO caching is configured on a system. If it is not configured, VVR disables SmartIO caching on the associated volumes. VVR is also updated to avoid the unexpected transactions that may occur due to incorrect references on certain temporary flags on the volumes.

* 4055653 (Tracking ID: 4049082)

SYMPTOM:
An I/O read error is displayed when a remote FSS node is rebooting.

DESCRIPTION:
When a remote FSS node is rebooting, I/O read requests to a mirror volume that are scheduled on the remote disk from the FSS node should be redirected to the remaining plex. However, VxVM does not handle this correctly: the retried I/O requests can still be sent to the offline remote disk, which causes the final I/O read failure.

RESOLUTION:
Code changes have been done to schedule the retrying read request on the remaining plex.

* 4055660 (Tracking ID: 4046007)

SYMPTOM:
In an FSS environment, if the cluster name is changed, the private disk region gets corrupted.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of disk private region, the allocation maps cannot be initialized in the memory. This could make allocation maps incorrect and lead to corruption of the private region on the disk.

RESOLUTION:
Code changes have been done to avoid corruption of private disk region.

* 4055668 (Tracking ID: 4045871)

SYMPTOM:
vxconfigd crashed at ddl_get_disk_given_path with following stacks:
ddl_get_disk_given_path
ddl_reconfigure_all
ddl_find_devices_in_system
find_devices_in_system
mode_set
setup_mode
startup
main
_start

DESCRIPTION:
Under some situations, duplicate paths can be added to one dmpnode in vxconfigd. If the duplicate paths are removed, an empty path entry can be left behind for that dmpnode. Later, when vxconfigd accesses the empty path entry, it crashes due to a NULL pointer reference.

RESOLUTION:
Code changes have been made to avoid adding duplicate paths.

* 4055697 (Tracking ID: 4047793)

SYMPTOM:
When replicated disks are in SPLIT mode, importing its diskgroup failed with "Device is a hardware mirror".

DESCRIPTION:
When replicated disks are in SPLIT mode, in which they are read-write, importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode, so DMP now refers to the disk's REPLICATED status to judge whether the disk group import is allowed. The `-o usereplicatedev=on/off` option is enhanced to achieve this.

RESOLUTION:
The code is enhanced to allow the disk group import when the replicated disks are in SPLIT mode.
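
For reference, such an import might look like the following (the disk group name is illustrative):

vxdg -o usereplicatedev=on import datadg   # permit import of replicated (SPLIT-mode) devices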

* 4055772 (Tracking ID: 4043337)

SYMPTOM:
The rp_rv.log file keeps consuming space for logging.

DESCRIPTION:
The rp_rv log files need to be removed, and the logger should use 16 MB rotating log files.

RESOLUTION:
The code changes are implemented to disable logging to the rp_rv.log files.

* 4055895 (Tracking ID: 4038865)

SYMPTOM:
In IRQ stack, the system panics at VxDMP module with the following calltrace:
native_queued_spin_lock_slowpath
queued_spin_lock_slowpath
_raw_spin_lock_irqsave7
dmp_get_shared_lock
gendmpiodone
dmpiodone
bio_endio
blk_update_request
scsi_end_request
scsi_io_completion
scsi_finish_command
scsi_softirq_done
blk_done_softirq
__do_softirq
call_softirq
do_softirq
irq_exit
do_IRQ
 <IRQ stack>

DESCRIPTION:
A deadlock occurred between inode_hash_lock and the DMP shared lock: one process held inode_hash_lock and acquired the DMP shared lock in IRQ context, while other processes holding the DMP shared lock acquired inode_hash_lock.

RESOLUTION:
Code changes have been made to avoid the deadlock.

* 4055899 (Tracking ID: 3993242)

SYMPTOM:
vxsnap prepare on a vset might throw the following error: "VxVM vxsnap ERROR V-5-1-19171 Cannot perform prepare operation on cloud volume"

DESCRIPTION:
Some wrong volume-record entries were fetched for the VSET, due to which the required validations failed and triggered the issue.

RESOLUTION:
Code changes have been made to resolve the issue.

* 4055905 (Tracking ID: 4052191)

SYMPTOM:
Any scripts or command files in the / directory may run unexpectedly when the system starts and vxvm volumes will not be available until those scripts or commands are complete.

DESCRIPTION:
If this issue occurs, /var/svc/log/system-vxvm-vxvm-configure:default.log indicates that a script or a command located in the / directory has been executed.
For example,
ABC Script ran!!
/lib/svc/method/vxvm-configure[241] abc.sh not found
/lib/svc/method/vxvm-configure[242] abc.sh not found
/lib/svc/method/vxvm-configure[243] abc.sh not found
/lib/svc/method/vxvm-configure[244] app/ cannot execute
In this example, abc.sh is located in the / directory and just echoes "ABC script ran !!". vxvm-configure launched abc.sh.

RESOLUTION:
The incorrect comments format in the SunOS_5.11.vxvm-configure.sh script is corrected.

* 4055925 (Tracking ID: 4031064)

SYMPTOM:
During a master switch with replication in progress, a cluster-wide hang is seen on the VVR secondary.

DESCRIPTION:
With an application running on the primary and replication set up between the VVR primary and secondary, when a master switch operation is attempted on the secondary, it hangs permanently.

RESOLUTION:
Appropriate code changes have been made to handle the scenario of a master switch operation with replication data on the secondary.

* 4055938 (Tracking ID: 3999073)

SYMPTOM:
Data corruption occurred when the fast mirror resync (FMR) was enabled and the failed plex of striped-mirror layout was attached.

DESCRIPTION:
A plex attach operation uses FMR tracking, via the contents of the detach maps, to determine and recover the regions of the volume that need to be synchronized.

When the DCO region size is larger than the stripe-unit size of the volume, the logic in the plex-attach code path incorrectly skipped bits in the detach maps. Thus, some regions (offset-len) of the volume were not synchronized with the attached plex, leading to inconsistent mirror contents.

RESOLUTION:
To resolve the data corruption issue, the code has been modified to consider all the bits for the given region (offset-len) in the plex-attach code.

* 4056107 (Tracking ID: 4036181)

SYMPTOM:
An I/O error is reported when the RVG is not in the enabled state after boot-up.

DESCRIPTION:
When the RVG is not enabled/active, the volumes under the RVG report an I/O error.
Messages logged:
systemd[1]: Starting File System Check on /dev/vx/dsk/vvrdg/vvrdata1...
systemd-fsck[4977]: UX:vxfs fsck.vxfs: ERROR: V-3-20113: Cannot open : No such device or address  
systemd-fsck[4977]: fsck failed with error code 31.
systemd-fsck: UX:vxfs fsck.vxfs: ERROR: V-3-20005: read of super-block on /dev/vx/dsk/vvrdg/vvrdata1 failed: Input/output error

RESOLUTION:
The issue is fixed by enabling the RVG through the vxrvg command if the RVG is in the disabled/recover state.
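
For reference, an RVG in the disabled/recover state can typically be brought back as follows (the disk group and RVG names are illustrative):

vxrvg -g vvrdg recover vvrdata_rvg   # recover the RVG after an unclean shutdown
vxrvg -g vvrdg start vvrdata_rvg     # enable the RVG and its volumes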

* 4056124 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
vxio keeps the vxloggerd proc_t, which is used to send a signal to vxloggerd. If vxloggerd has terminated for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4056144 (Tracking ID: 3906534)

SYMPTOM:
After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should be mounted on the DMP device (specific to Linux).

DESCRIPTION:
Typically, /boot is mounted on top of an operating system (OS) device. When DMP Native support is enabled, only the volume groups (VGs) are migrated from the OS device to the DMP device; /boot is not migrated. Consequently, if the OS device path is not available, the system becomes unbootable, because /boot is not available. Thus, it is necessary to mount /boot on the DMP device to provide multipathing and resiliency (specific to Linux).

RESOLUTION:
The module is updated to migrate /boot on top of a DMP device when DMP Native support is enabled. Note: This fix is available for RHEL 6 only. For other Linux platforms, /boot is still not mounted on the DMP device (specific to Linux).
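
For reference, DMP Native support is toggled and verified as follows:

vxdmpadm settune dmp_native_support=on   # migrate the VGs (and, with this fix on RHEL 6, /boot) to DMP devices
vxdmpadm gettune dmp_native_support      # confirm the current value of the tunable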

* 4056146 (Tracking ID: 3983832)

SYMPTOM:
When disk groups are deleted, multiple VxVM commands hang at the CVR secondary site.

DESCRIPTION:
VxVM commands hang when a deadlock is encountered between the kmsg broadcast issued while deleting a disk group and the IBC unfreeze operation.

RESOLUTION:
Changes are made in the VxVM code to avoid the deadlock between these transactions.

* 4056832 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
New systemd update removed the support for "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on the systems supporting systemd, it 
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory"

RESOLUTION:
Added a check in vxnm-vxnetd.sh to validate whether the /var/lock/subsys/ directory is supported.
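
A minimal sketch of such a guard, assuming shell syntax similar to the init script (the actual code in vxnm-vxnetd.sh may differ):

# Only create the legacy lock file if the directory still exists;
# newer systemd releases no longer provide /var/lock/subsys.
if [ -d /var/lock/subsys ]; then
    touch /var/lock/subsys/vxnm-vxnetd
fi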

* 4056121 (Tracking ID: 4003617)

SYMPTOM:
The vxconfigd daemon dumps core during a node join operation on any node.

DESCRIPTION:
The vxconfigd daemon dumps core during a node join operation on any node with the below stack, which may lead to further transactions not getting processed because the vxconfigd-level communication between the nodes is down:

chosen_dalist_delete ()
reonline_received_disks ()
slave_check_darec ()
slave_response ()
fillnextreq ()
vold_getrequest ()
request_loop ()
main ()

RESOLUTION:
The workaround is to restart vxconfigd if HA is not available.

* 4056154 (Tracking ID: 3975081)

SYMPTOM:
For AIX, after DMP Native support is enabled on the system, the vxdmpadm native list command fails with the following error:
VxVM vxdmpadm ERROR V-5-1-15206 DMP support for LVM bootability is disabled.

DESCRIPTION:
For AIX, VxVM (Veritas Volume Manager) has a binary, vxdmpboot, which runs at boot time to enable DMP Native Support for the root file system. The binary executes at a very early stage of the boot phase on the AIX platform and performs some licensing checks during execution. Earlier, the licensing-specific library was a static library, but it has been changed to a dynamic library. At boot time, when the vxdmpboot binary is executed, most of the file systems are not mounted. The directory containing the dynamic library is also not mounted, so the library is not available when vxdmpboot is executed, causing the licensing checks and the vxdmpboot execution to fail. This failure prevents native support from being enabled at boot time for the root file system.

RESOLUTION:
For AIX, the licensing checks in the vxdmpboot binary have been disabled to enable DMP Native support successfully for the root file system.
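
For reference, the state can be verified after boot with:

vxdmpadm native list   # lists the LVM volume groups under DMP Native Support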

* 4056918 (Tracking ID: 3995648)

SYMPTOM:
In Flexible Storage Sharing (FSS) environments, a disk group import operation with a few disks missing leads to data corruption.

DESCRIPTION:
In FSS environments, importing a disk group with missing disks is not allowed. If the disk with the most recently updated configuration information is not present during the import, the import operation incorrectly incremented the config TID on the remaining disks before failing the operation. When the missing disk(s) with the latest configuration came back, the import was successful. But because of the earlier failed transaction, the import operation incorrectly chose the wrong configuration to import the disk group, leading to data corruption.

RESOLUTION:
The code logic in the disk group import operation is modified to ensure that the check for failed/missing disks happens early, before attempting to perform any on-disk update as part of the import.

Patch ID: VRTSvxvm-7.4.1.3100

* 4013643 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of the I/O statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU and provides it to some other thread. If the thread that gets scheduled on the CPU requests the same spinlock that the vxstat thread had acquired, this results in a hard lockup situation.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4017284 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
A high replication-related I/O load on the VVR secondary and the requirement of maintaining write-order fidelity with limited memory pools created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing the CPU consumption.

RESOLUTION:
Limited the way in which VVR consumes its resources so that a high pending IO load would not result into high CPU consumption.

* 4023762 (Tracking ID: 4020046)

SYMPTOM:
The following I/O errors reported on VxVM sub-disks result in the DRL log being detached without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
The DRL plexes detached because an atomic write flag (BIT_ATOMIC) was unexpectedly set on the BIO. The BIT_ATOMIC flag gets set on a bio only if the VOLSIO_BASEFLAG_ATOMIC_WRITE flag is set on the SUBDISK SIO and its parent MVWRITE SIO's sio_base_flags. When generating the MVWRITE SIO, its sio_base_flags were copied from a gio structure; because the gio structure memory is not initialized, it may contain garbage values, hence the issue.

RESOLUTION:
Code changes have been made to fix the issue.

* 4031342 (Tracking ID: 4031452)

SYMPTOM:
Add node operation is failing with error "Error found while invoking '' in the new node, and rollback done in both nodes"

DESCRIPTION:
The stack showed a valid address for the pointer ptmap2, but it still generated a core. This suggested that it might be a double-free case. The issue lies in the freeing of a pointer.

RESOLUTION:
Added handling for such cases by assigning NULL to pointers wherever they are freed.

* 4033162 (Tracking ID: 3968279)

SYMPTOM:
Vxconfigd dumps core with SEGFAULT/SIGABRT on boot for NVME setup.

DESCRIPTION:
For an NVMe setup, vxconfigd dumps core while doing device discovery because the data structure is accessed by multiple threads and can hit a race condition. For sector sizes other than 512, a partition size mismatch is seen because the comparison uses the partition size from devintf_getpart(), which is in the sector size of the disk. This can lead to a wrong call of the NVMe device discovery.

RESOLUTION:
Added mutex lock while accessing the data structure so as to prevent core. Made calculations in terms of sector size of the disk to prevent the partition size mismatch.

* 4033163 (Tracking ID: 3959716)

SYMPTOM:
The system may panic in a VVR configuration with synchronous replication, when the VVR RVG is in DCM mode, with the following panic stack:
volsync_wait [vxio]
voliod_iohandle [vxio]
volted_getpinfo [vxio]
voliod_loop [vxio]
voliod_kiohandle [vxio]
kthread

DESCRIPTION:
With synchronous replication, if the ACK for a data message is delayed from the secondary site, the primary site might incorrectly free the message from the waiting queue at the primary site. Due to this incorrect handling of the message, a system panic may happen.

RESOLUTION:
Required code changes are done to resolve the panic issue.

* 4033172 (Tracking ID: 3994368)

SYMPTOM:
While node 0 was shutting down, the vxconfigd daemon aborted on node 1, and an I/O write error happened on node 1.

DESCRIPTION:
Examining the vxconfigd core, we found that it entered endless SIGIO processing, which resulted in a stack overflow, and hence vxconfigd dumped core. After that, vxconfigd restarted and ended up in a disk group disable scenario.

RESOLUTION:
We have done the appropriate code changes to handle the scenario of stack overflow.

* 4033216 (Tracking ID: 3993050)

SYMPTOM:
The vxdctl dumpmsg command gets stuck on a large-node cluster during reconfiguration.

DESCRIPTION:
The vxdctl dumpmsg command gets stuck on a large-node cluster during reconfiguration with the following stack. This causes the /var/adm/vx/voldctlmsg.log file to get filled with old repeated messages in GBs, consuming most of the /var space.
# pstack 210460
voldctl_get_msgdump ()
do_voldmsg ()
main ()

RESOLUTION:
Code changes have been made to dump only the correct, required messages to the file.

* 4033515 (Tracking ID: 3984266)

SYMPTOM:
The DCM flag on the RVG (Replicated Volume Group) volume may get deactivated after a master switch in CVR (Clustered Volume Replicator), which may cause excessive RVG recovery after subsequent node reboots.

DESCRIPTION:
After a master switch, the DCM flag needs to be updated on the new CVM master node. Due to a transaction initiated in parallel with the master switch, the DCM flag was getting lost. This was causing excessive RVG recovery during subsequent node reboots, as the DCM write position was not updated for a long time.

RESOLUTION:
The code is fixed to handle the race in updating the DCM flag during a master switch.

* 4036426 (Tracking ID: 4036423)

SYMPTOM:
A race condition while reading the config file in the Docker volume plugin caused the issue in Flex Appliance.

DESCRIPTION:
If two simultaneous requests come in, say for MountVolume, both of them update the global variables, which leads to wrong parameter values in some cases.

RESOLUTION:
The fix is to read this file only once, during startup, in the init() function. If the user wants to change the default values in the config file, the vxinfoscale-docker service must be restarted.

* 4039240 (Tracking ID: 4027261)

SYMPTOM:
Certain log files have rw-rw-rw permissions, which are flagged during customers' security scans.

DESCRIPTION:
There have been multiple concerns about the world-writable permissions on /var/VRTSvxvm/in.vxrsyncd.stderr and /var/adm/vx/vxdmpd.log. These log files have rw-rw-rw permissions, which are flagged by customers' security scans.

RESOLUTION:
The files are just log files with no sensitive information to leak, so they pose little security threat. The files do not require world-write permissions and can be restricted to the root user. Hence, the permissions of these files have now been changed.
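
For illustration, restricting such logs could look like the following (the exact mode applied by the patch is not stated):

chmod 640 /var/VRTSvxvm/in.vxrsyncd.stderr   # drop world-write permission
chmod 640 /var/adm/vx/vxdmpd.log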

* 4039244 (Tracking ID: 4010612)

SYMPTOM:
After running "vxddladm set namingscheme=ebn lowercase=no", multiple disks may get the same name. This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), so every NVMe/SSD disk name should be hostprefix_enclosurename0_disk0, hostprefix_enclosurename1_disk0, and so on.

DESCRIPTION:
With "vxddladm set namingscheme=ebn lowercase=no" on NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), every NVMe/SSD disk name should be hostprefix_enclosurename0_disk0, hostprefix_enclosurename1_disk0, and so on. For example:
smicro125_nvme0_0 <--- disk1
smicro125_nvme1_0 <--- disk2

With lowercase=no, the current code suppresses the suffix digit of the enclosure name, so multiple disks get the same name, and udid_mismatch is shown because the UDID in the private region does not match the one in DDL. The DDL database shows wrong information because multiple disks get the same name:

smicro125_nvme_0 <--- disk1   <<<<<<<----- here the suffix digit of the nvme enclosure is suppressed
smicro125_nvme_0 <--- disk2

RESOLUTION:
The suffix integer is now appended while making the da_name.

* 4039525 (Tracking ID: 4012763)

SYMPTOM:
An I/O hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.

DESCRIPTION:
In VVR, if the SRL overflow happens for an rlink (R1) while some other rlink (R2) is undergoing AUTOSYNC, then the AUTOSYNC is aborted for R2, R2 gets detached, and DCM mode is activated on R1.

However, due to a race condition in the code handling the AUTOSYNC abort and the DCM activation in parallel, the DCM could not be activated properly, and the I/O that caused the DCM activation got queued incorrectly, resulting in an I/O hang.

RESOLUTION:
The code has been modified to fix the race issue in handling the AUTOSYNC abort and DCM activation at same time.

* 4039526 (Tracking ID: 4034616)

SYMPTOM:
vol_seclog_limit_ioload tunable needs to be enabled on Linux only.

DESCRIPTION:
vol_seclog_limit_ioload tunable needs to be enabled on Linux only.

RESOLUTION:
The code changes are implemented to disable the tunable 'vol_seclog_limit_ioload' on non-linux platforms.

* 4039527 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, the vxiod with ID 128 can start replication while the RVG is in DCM mode. This vxiod then waits for the filesystem's response on whether a given region is in use by the filesystem. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, the MDSHIP IO always gets queued on the vxiod with ID 128, hence a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO on vxiods whose ID is greater than 127.

* 4040183 (Tracking ID: 4034857)

SYMPTOM:
The VxVM modules failed to load on SLES15 SP2 (kernel 5.3.18-22.2-default).

DESCRIPTION:
With the new kernel (5.3.18-22.2-default), the following interfaces were deprecated:
1. gettimeofday()
2. struct timeval
3. bio_segments()
4. iov_for_each()
5. The next_rq field in struct request
Also, data corruption was possible with large IOs (>1M) processed by the Linux kernel IO-splitting code.

RESOLUTION:
Code changes are mainly to support kernel 5.3.18 and to provide support for deprecated functions. 
Remove dependency on req->next_rq field in blk-mq code
And, changes related to bypassing the Linux kernel IO split functions, which seems redundant for VxVM IO processing.

* 4040842 (Tracking ID: 4009353)

SYMPTOM:
After the command 'vxdmpadm settune dmp_native_support=on' is run, the machine goes into maintenance mode. The issue is reproduced on a physical setup with a root LVM disk.

DESCRIPTION:
If there is a '-' (hyphen) in the native volume group name, the script extracts an inaccurate vgname.

RESOLUTION:
Code changes have been made to fix the issue.
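For background (not the shipped fix), device-mapper escapes each hyphen in an LVM volume group or logical volume name as a double hyphen in /dev/mapper entries, which is what breaks naive name parsing. A minimal shell sketch with hypothetical names:
    # For a VG named "rootvg-01" with an LV named "lvroot", the device-mapper
    # name is "rootvg--01-lvroot" (each '-' in a name is escaped as '--').
    dm_name="rootvg--01-lvroot"
    naive_vg="${dm_name%%-*}"   # naive split on the first '-' yields "rootvg", which is wrong
    # Correct parsing must split only at a single '-' surrounded by non-hyphens,
    # and then unescape '--' back to '-' to recover the vgname "rootvg-01".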

Patch ID: VRTSvxvm-7.4.1.2800

* 3984155 (Tracking ID: 3976678)

SYMPTOM:
vxvm-recover: "cat: write error: Broken pipe" error encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog and reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, the first of which runs the cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is still data to be read in the cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4016283 (Tracking ID: 3973202)

SYMPTOM:
A VVR primary node may panic with the below stack due to access of freed memory:
nmcom_throttle_send()
nmcom_sender()
kthread ()
kernel_thread()

DESCRIPTION:
After sending the data to the VVR (Veritas Volume Replicator) secondary site, the code accessed some variables for which the memory was already released, because the data ACK was processed quite early. This is a rare race condition involving access to freed memory.

RESOLUTION:
Code changes have been made to avoid the incorrect memory access.

* 4016291 (Tracking ID: 4002066)

SYMPTOM:
The system panics with the below stack during a reclaim operation:
__wake_up_common_lock+0x7c/0xc0
sbitmap_queue_wake_all+0x43/0x60
blk_mq_tag_wakeup_all+0x15/0x30
blk_mq_wake_waiters+0x3d/0x50
blk_set_queue_dying+0x22/0x40
blk_cleanup_queue+0x21/0xd0
vxvm_put_gendisk+0x3b/0x120 [vxio]
volsys_unset_device+0x1d/0x30 [vxio]
vol_reset_devices+0x12b/0x180 [vxio]
vol_reset_kernel+0x16c/0x220 [vxio]
volconfig_ioctl+0x866/0xdf0 [vxio]

DESCRIPTION:
With recent kernels, the kernel is expected to return the pre-allocated sense buffer. These sense buffer pointers are supposed to remain unchanged across multiple uses of a request; they are pre-allocated and expected to stay unchanged until the request memory is freed. DMP overwrote the original sense buffer, hence the issue.

RESOLUTION:
Code changes have been made to avoid tampering the pre-allocated sense buffer.

* 4016768 (Tracking ID: 3989161)

SYMPTOM:
The system panic occurs because of hard lockup with the following stack:

#13 [ffff9467ff603860] native_queued_spin_lock_slowpath at ffffffffb431803e
#14 [ffff9467ff603868] queued_spin_lock_slowpath at ffffffffb497a024
#15 [ffff9467ff603878] _raw_spin_lock_irqsave at ffffffffb4988757
#16 [ffff9467ff603890] vollog_logger at ffffffffc105f7fa [vxio]
#17 [ffff9467ff603918] vol_rv_update_childdone at ffffffffc11ab0b1 [vxio]
#18 [ffff9467ff6039f8] volsiodone at ffffffffc104462c [vxio]
#19 [ffff9467ff603a88] vol_subdisksio_done at ffffffffc1048eef [vxio]
#20 [ffff9467ff603ac8] volkcontext_process at ffffffffc1003152 [vxio]
#21 [ffff9467ff603b10] voldiskiodone at ffffffffc0fd741d [vxio]
#22 [ffff9467ff603c40] voldiskiodone_intr at ffffffffc0fda92b [vxio]
#23 [ffff9467ff603c80] voldmp_iodone at ffffffffc0f801d0 [vxio]
#24 [ffff9467ff603c90] bio_endio at ffffffffb448cbec
#25 [ffff9467ff603cc0] gendmpiodone at ffffffffc0e4f5ca [vxdmp]
... ...
#50 [ffff9497e99efa60] do_page_fault at ffffffffb498d975
#51 [ffff9497e99efa90] page_fault at ffffffffb4989778
#52 [ffff9497e99efb40] conv_copyout at ffffffffc10005da [vxio]
#53 [ffff9497e99efbc8] conv_copyout at ffffffffc100044e [vxio]
#54 [ffff9497e99efc50] volioctl_copyout at ffffffffc1032db3 [vxio]
#55 [ffff9497e99efc80] vol_get_logger_data at ffffffffc105e4ce [vxio]
#56 [ffff9497e99efcf8] voliot_ioctl at ffffffffc105e66b [vxio]
#57 [ffff9497e99efd78] volsioctl_real at ffffffffc10aee82 [vxio]
#58 [ffff9497e99efe50] vols_ioctl at ffffffffc0646452 [vxspec]
#59 [ffff9497e99efe70] vols_unlocked_ioctl at ffffffffc06464c1 [vxspec]
#60 [ffff9497e99efe80] do_vfs_ioctl at ffffffffb4462870
#61 [ffff9497e99eff00] sys_ioctl at ffffffffb4462b21

DESCRIPTION:
The vxio kernel sends a signal to vxloggerd to flush the log when it is almost full. vxloggerd calls into the vxio kernel to copy the log buffer out. Because vxio copies the log data from kernel to user space while holding a spinlock, a page fault during the copy-out leads to a hard lockup and panic.

RESOLUTION:
Code changes have been made to fix the problem.

* 4017194 (Tracking ID: 4012681)

SYMPTOM:
If the vradmind process terminates for some reason, it is not properly restarted by the RVG agent of VCS.

DESCRIPTION:
The RVG (Replicated Volume Group) agent of VCS (Veritas Cluster Server) restarts the vradmind process if it gets killed or terminated for some reason. This did not work properly on systemd-enabled platforms such as RHEL 7.
On systemd-enabled platforms, after the vradmind process died, the vras-vradmind service stayed in the active/running state. As a result, even after the RVG agent issued a command to start the vras-vradmind service, the vradmind process was not started.

RESOLUTION:
The code is modified to fix the parameters for the vras-vradmind service, so that the service status changes to failed/faulted if the vradmind process gets killed.
The service can be started manually later, or the RVG agent of VCS can start the service, which starts the vradmind process as well.
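If needed, the service state can then be checked and the service started manually; a sketch, assuming the systemd unit is named vras-vradmind as referenced above:
    # systemctl status vras-vradmind
    # systemctl start vras-vradmind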

Patch ID: VRTSvxvm-7.4.1.1800

* 3991538 (Tracking ID: 3989949)

SYMPTOM:
The existing package failed to load on a SLES15 SP1 server.

DESCRIPTION:
SLES15 SP1 is a new release, so the VxVM module is now compiled against the new kernel, along with a few other multi-queue (MQ) changes.

RESOLUTION:
Changes have been made to keep the MQ code under a single flag.

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the IO-shipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop. This causes the soft lockup.

RESOLUTION:
The CPU is now relinquished so that other processes can be scheduled; the vol_ioship_sender() thread restarts after a short delay.

Patch ID: VRTSglm-7.4.1.3100

* 4039685 (Tracking ID: 4038994)

SYMPTOM:
The GLM module failed to load on SLES15 SP2.

DESCRIPTION:
SLES15 SP2 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on SLES15 SP2.

Patch ID: VRTSglm-7.4.1.2800

* 4014715 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a feature that is supported.

DESCRIPTION:
The glmdump man page needs to include the new "-h" option, which uses the hacli utility to communicate across the nodes in the cluster.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.

Patch ID: VRTSglm-7.4.1.1600

* 3991390 (Tracking ID: 3991389)

SYMPTOM:
The GLM module failed to load on SLES15 SP1.

DESCRIPTION:
SLES15 SP1 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on SLES15 SP1.

Patch ID: VRTSdbed-7.4.1.3100

* 4044143 (Tracking ID: 4044136)

SYMPTOM:
Package installation failed on the CentOS platform.

DESCRIPTION:
Due to a lower package version, the package failed to install.

RESOLUTION:
The package version is increased to 7.4.1.3100; the package now installs on the CentOS platform.

Patch ID: VRTSdbed-7.4.1.1600

* 3980547 (Tracking ID: 3989826)

SYMPTOM:
The VCS agent for Oracle does not support Oracle 19c databases.

DESCRIPTION:
Support for Oracle 19c databases was not yet implemented in the VCS agent for Oracle.

RESOLUTION:
The VCS agent for Oracle now supports Oracle 19c databases.

Patch ID: VRTSsfcpi-7.4.1.3100

* 3969748 (Tracking ID: 3969438)

SYMPTOM:
Rolling upgrade fails with the error:
CPI ERROR V-9-0-0
0 Cannot execute _cmd_vxdg list | _cmd_grep enabled | _cmd_awk '{print $1}' on system_name with undefined sys->{padv}

DESCRIPTION:
The following prompt appears as part of the rolling upgrade process:
Enter the system name of the cluster on which you would like to perform rolling upgrade [q,?]
After the prompt, the installer suggests the names of systems on which to perform the upgrade. You may select one of the suggested system names, or you may type a system name, and then press Enter. Either way, if you specify the system name of the local node, the installer displays the following error and exits.
CPI ERROR V-9-0-0
0 Cannot execute _cmd_vxdg list | _cmd_grep enabled | _cmd_awk '{print $1}' on system_name with undefined sys->{padv}

RESOLUTION:
The installer is enhanced to verify whether the FSS disk group is available. This enhancement was made for the Linux, Solaris, and AIX platforms.

* 3970852 (Tracking ID: 3970848)

SYMPTOM:
The CPI configures the product even if you use the -install option while installing the InfoScale Foundation version 7.x or later.

DESCRIPTION:
When you use the CPI to install InfoScale Foundation, the ./installer option performs the installation and configurations by default. If you use the -install option, the installer must only install the packages and not perform the configuration. However, in case of InfoScale Foundation version 7.x or later, when you use the -install option, the installer continues to configure the product after installation.

RESOLUTION:
For InfoScale Foundation, the CPI is modified so that the -install option only installs the packages. Users can then use the -configure option to configure the product.

* 3972077 (Tracking ID: 3972075)

SYMPTOM:
When the -patch_path option is used, the installer fails to install the VRTSveki patch.

DESCRIPTION:
When the VRTSveki patch is present in a patch bundle, if the GA installer is run with the -patch_path option, it fails to install the VRTSveki patch. Consequently, the installation of any dependent VxVM patches also fails. This issue occurs because the veki module, and any other modules that depend on it, are loaded immediately after the packages are installed. When a patch installation starts, the Veki patch attempts to unload the veki module, but fails, because the dependent modules have a hold on it.

RESOLUTION:
This hotfix updates CPI so that if the -patch_path option is used when a VRTSveki patch is present in the bundle, it first unloads the dependent modules and then unloads the veki module.

* 3973119 (Tracking ID: 3973114)

SYMPTOM:
While upgrading InfoScale 7.4, the installer fails to stop vxspec, vxio, and vxdmp because vxcloudd is still running.

DESCRIPTION:
Because vxcloudd is still running, the installer fails to stop vxspec, vxio, and vxdmp.

RESOLUTION:
Modified the CPI code to stop vxcloudd before stopping vxspec, vxio and vxdmp.

* 3979596 (Tracking ID: 3979603)

SYMPTOM:
CPI assumes that the third digit in an InfoScale 7.4.1 version indicates a patch version, and not a GA version. Therefore, it upgrades the packages from the patch only and does not upgrade the base packages.

DESCRIPTION:
To compare product versions and to set the type of installation, CPI compares the currently installed version with the target version to be installed. However, instead of comparing all the digits in a version, it incorrectly compares only the first two digits. In this case, CPI compares 7.4 with 7.4.1.xxx, and finds that the first 2 digits match exactly. Therefore, it assumes that the base version is already installed and then installs the patch packages only.

RESOLUTION:
This hotfix updates the CPI to recognize InfoScale 7.4.1 as a base version and 7.4.1.xxx (for example) as a patch version. After you apply this patch, CPI can properly upgrade the base packages first, and then proceed to upgrade the packages that are in the patch.

* 3980564 (Tracking ID: 3980562)

SYMPTOM:
CPI does not perform the installation when two InfoScale patch paths are provided, and displays the following message: "CPI ERROR V-9-30-1421 The patch_path and patch2_path patches are both for the same package: VRTSinfoscale".

DESCRIPTION:
CPI checks whether the patches mentioned with the patch_path option are the same. Even though the patch bundles are different, they have the same starting string, VRTSinfoscale, followed by the version number. CPI does not compare the version part of the patch bundle name, and so it incorrectly assumes them to be the same. Therefore, CPI does not install the patch bundles simultaneously, and instead, displays the inappropriate error.

RESOLUTION:
CPI is updated to check the complete patch bundle name including the version, for example: VRTSinfoscale740P1100. Now, CPI can properly differentiate between the patches and begin the installation without displaying the inappropriate error.

* 3980944 (Tracking ID: 3981519)

SYMPTOM:
An uninstallation of InfoScale 7.4.1 using response files fails with an error:
CPI ERROR V-9-40-1582 Following edge server related entries are missing in response file. Please correct.
edgeserver_host
edgeserver_port

DESCRIPTION:
When you uninstall InfoScale using a response file, the installer checks whether the response file contains the edge server related entries. If these entries are not available in the response file, the uninstall operation fails with the following error:
CPI ERROR V-9-40-1582 Following edge server related entries are missing in response file. Please correct.
edgeserver_host
edgeserver_port
This issue occurs because the installer checks the availability of these entries in the response file while performing any operations using the response files. However, these entries are required only for the configure and upgrade operations.

RESOLUTION:
The installer is enhanced to check the availability of the edge server related entries in the response file only for the configure or upgrade operations.

* 3985584 (Tracking ID: 3985583)

SYMPTOM:
For InfoScale 7.4.1 Patch 1200 onwards, the addnode operation fails with the following error message: 
"Cluster protocol version mismatch was detected between cluster <<cluster_name>> and <<new node name>>. Refer to the Configuration and Upgrade Guide for more details on how to set or upgrade cluster protocol version."

DESCRIPTION:
In InfoScale 7.4.1 Patch 1200 the cluster protocol version was changed from 220 to 230 in the VxVM component. However, the same update is not made in the installer, which continues to set the default cluster protocol version to 220. During the addnode operation, when the new node is compared with the other cluster nodes, the cluster protocol versions do not match, and the error is displayed.

RESOLUTION:
The common product installer is enhanced to check the installed VRTSvxvm package version on the new node and accordingly set the cluster protocol version.

* 3986468 (Tracking ID: 3987894)

SYMPTOM:
An InfoScale 7.4.1 installation fails even though the installer automatically downloads the appropriate platform support patch from SORT.

DESCRIPTION:
During an InfoScale installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. Thereafter however, it fails to correctly identify the base version of the product package on the system, and hence fails to complete the installation even though the appropriate patch is available.

RESOLUTION:
The installer is updated to correctly identify the base version of the product package.

* 3986572 (Tracking ID: 3965602)

SYMPTOM:
When a patch is installed as a part of rolling upgrade Phase1, the rolling upgrade Phase2 fails with the error:
A more recent version of InfoScale Enterprise, 7.3.1.xxx, is already installed.

DESCRIPTION:
When a patch is installed as a part of rolling upgrade Phase1, the kernel package version might get upgraded to a version that is higher than the version considered for the upgrade.

This results in failure of the rolling upgrade Phase2 with the error: A more recent version of InfoScale Enterprise, 7.3.1.xxx, is already installed.

RESOLUTION:
The CPI rolling upgrade prerequisites are modified to continue even if a patch for an InfoScale product is installed as part of rolling upgrade Phase1.

* 3986960 (Tracking ID: 3986959)

SYMPTOM:
The installer fails to install the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch on SLES 12 SP4.

DESCRIPTION:
InfoScale support for SUSE Linux Enterprise Server 12 SP4 is introduced with the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch. However, the path of the 'ethtool' command is changed in SLES 12 SP4. Therefore, the installer does not recognize the changed path and fails to install. The following error is logged:
CPI ERROR V-9-30-1570 The following required OS commands are missing on <<node name>>:
/bin/ls: cannot access '/sbin/ethtool': No such file or directory

RESOLUTION:
The path of the 'ethtool' command is updated in the installer for SLES12 SP4.

* 3987228 (Tracking ID: 3987171)

SYMPTOM:
The installer takes longer than expected to start the installation process.

DESCRIPTION:
This issue occurs because, before it begins the installation, the installer incorrectly attempts to download the latest CPI patches using a private IPv6 address until the process times out.

RESOLUTION:
The installer is updated to use the first available public IP address to download the latest CPI patches, so that it can complete the activity and start the installation within the expected amount of time.

* 3989085 (Tracking ID: 3989081)

SYMPTOM:
When a system is restarted after a successful VVR configuration, it becomes unresponsive.

DESCRIPTION:
The installer starts the vxconfigd service in the user slice instead of the system slice, and does not make the vxvm-boot service active. When the system is restarted after VVR configuration, the services from the user slice get killed and the CVMVxconfigd agent cannot bring vxconfigd online. As a result, the system becomes unresponsive and fails to come up at the primary site.

RESOLUTION:
The installer is updated to start and stop the vxconfigd service using the systemctl commands.
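For example, the state of the services can then be verified with the systemctl commands (using the vxvm-boot unit name referenced above):
    # systemctl status vxvm-boot
    # systemctl is-active vxvm-boot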

* 3989099 (Tracking ID: 3989098)

SYMPTOM:
For SLES15, system clock synchronization using the NTP server fails while configuring server-based fencing.

DESCRIPTION:
While configuring server-based fencing, when multiple CP servers are provided in the configuration, and the ssh connection is not already established, then the installer does not create the sys object with the required values on the first CP server. As a result, the system clock synchronization always fails on the first CP server.

RESOLUTION:
The installer is enhanced to create the sys object with the appropriate values.

* 3992222 (Tracking ID: 3992254)

SYMPTOM:
On SLES12 SP5, the installer fails to install InfoScale 7.4.1 and displays the following message: "CPI ERROR V-9-0-0 This release is intended to operate on SLES x86_64 version 3.12.22  but <<hostname>> is running version 4.12.14-120-default".

DESCRIPTION:
On SLES12 SP5, the installer fails to install InfoScale 7.4.1. During the release compatibility check, it displays the following message: "CPI ERROR V-9-0-0 This release is intended to operate on SLES x86_64 version 3.12.22 but <<hostname>> is running version 4.12.14-120-default". This issue occurs because the kernel versions included in the installer are different from the kernel version required by SLES12 SP5.

RESOLUTION:
The installer is enhanced to include the kernel version that is required to support SLES12 SP5.

* 3993898 (Tracking ID: 3993897)

SYMPTOM:
On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1.

DESCRIPTION:
On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1 and displays the following message: "CPI ERROR V-9-0-0 0 No padv object defined for padv SLES15x8664 for system". When installing InfoScale, the product installer performs a release compatibility check and defines the padv object. However, during this check, the installer fails if the kernel versions included in the installer differ from the kernel version installed on the system, and the padv object cannot be defined.

RESOLUTION:
The installer is enhanced to support different kernel versions for SLES12 SP4.

* 3995826 (Tracking ID: 3995825)

SYMPTOM:
The installer script fails to stop the vxfen service while configuring InfoScale components or applying patches.

DESCRIPTION:
When InfoScale Enterprise is installed using the CPI, the value of the START_<<COMP>> and the STOP_<<COMP>> variables is set to '1' in the configuration files for some components like vxfen, amf, and llt. During the post-installation phase, the CPI script sets these values back to '0'.
When InfoScale Enterprise is installed using Yum or Red Hat Satellite, the values of these variables remain set to '1' even after the installation is completed. Later, if CPI is used to install patches or to configure any of the components on such an installation, the installer script fails to stop the vxfen service.

RESOLUTION:
To avoid such issues, the installer script is updated to check the values of the START_<<COMP>> and the STOP_<<COMP>> variables and set them to '0' during the pre-configuration phase.
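For illustration (assuming the usual /etc/sysconfig locations for these component configuration files), the variable values can be checked manually, for example:
    # grep -E '_(START|STOP)' /etc/sysconfig/vxfen /etc/sysconfig/amf /etc/sysconfig/llt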

* 3999671 (Tracking ID: 3999669)

SYMPTOM:
A single-node HA configuration failed on a NetBackup Appliance system because CollectorService failed to start.

DESCRIPTION:
A single-node HA setup fails on a NetBackup Appliance system, and the following error is logged: "Failed to Configure the VCS". This issue occurs because the CollectorService fails to start. The CollectorService is not directly related to a cluster setup, so its failure should not impact the HA configuration.

RESOLUTION:
The InfoScale product installer addresses this issue by blocking the start of CollectorService on any appliance-based system.

* 4000598 (Tracking ID: 4000596)

SYMPTOM:
The 'showversion' option of the InfoScale 7.4.1 installer fails to download the available maintenance releases or patch releases.

DESCRIPTION:
The "showversion" option of installer lets you view the currently installed InfoScale version, and to download the available maintenance or patch releases. However, the InfoScale 7.4.1 installer fails to identify the path of the repository where the maintenance or the patch releases should be downloaded.

RESOLUTION:
The installer is updated to either use /opt/VRTS/repository as the default repository or to accept a different location to download the suggested releases.

* 4004174 (Tracking ID: 4004172)

SYMPTOM:
On SLES15 SP1, while installing InfoScale 7.4.1 along with product patch, the installer fails to install some of the base rpms and exits with an error.

DESCRIPTION:
While installing InfoScale 7.4.1 along with the product patch 'infoscale-sles15_x86_64-Patch-7.4.1.1800', the installer first installs the base packages and then the other patches available in the patch bundle. However, the installer fails to install the VRTSvxvm, VRTSaslapm, and VRTScavf base rpms and exits with the following error, although the patches available in the patch bundle install successfully:
The following rpms failed to install on <<system name>>:
VRTSvxvm
VRTSaslapm
VRTScavf

RESOLUTION:
The installer is enhanced to ignore the failures in the base package installation when InfoScale 7.4.1 is installed along with the available product patch.

* 4006619 (Tracking ID: 4015976)

SYMPTOM:
On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.

DESCRIPTION:
InfoScale does not support patch upgrade in alternate boot environments. Therefore, when you provide the "-rootpath" argument to the installer during a patch upgrade, the patch upgrade operation fails with the following error message: CPI ERROR V-9-0-0 The -rootpath option works only with upgrade tasks.

RESOLUTION:
The installer is enhanced to support patch upgrades in alternate boot environments by using the -rootpath option.

* 4008070 (Tracking ID: 4008578)

SYMPTOM:
Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.

DESCRIPTION:
The name of a cluster node may be set to a fully qualified hostname, for example, somehost.example.com. However, by default, the product installer trims this value and uses the shorter hostname (for example, somehost) for the cluster configuration.

RESOLUTION:
This hotfix updates the installer to allow the use of the new "-fqdn" option. If this option is specified, the installer uses the fully qualified hostname for cluster configuration. Otherwise, the installer continues with the default behavior.

* 4012032 (Tracking ID: 4012031)

SYMPTOM:
If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, they are not upgraded when you upgrade to a newer version of the product.

DESCRIPTION:
If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, you must uninstall them before you perform a product upgrade. After you upgrade those packages in global zones, you must then install the VRTSvxfs and VRTSodm packages manually in the non-global zones.

RESOLUTION:
The CPI now handles the VRTSodm and VRTSvxfs packages in non-global zones in the same manner as it does in global zones.

* 4014984 (Tracking ID: 4014983)

SYMPTOM:
The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.

DESCRIPTION:
The product installer prompts you to provide the telemetry details of cluster nodes after upgrading the InfoScale packages but before starting the services. If you cancel the installation at this stage, the Cluster Server resources cannot be brought online. Therefore, a warning message is required during the pre-upgrade checks to remind you to keep these details ready.

RESOLUTION:
The product installer is updated to notify you at the time of the pre-upgrade check, that if the cluster nodes are not registered with TES or VCR, you will need to provide these telemetry details later on.

* 4015142 (Tracking ID: 4015139)

SYMPTOM:
If IPv6 addresses are provided for the system list on a RHEL 8 system, the product installer fails to verify the network communication with the remote systems and cannot proceed with the installation. The following error is logged:
CPI ERROR V-9-20-1104 Cannot ping <IPv6_address>. Please make sure that:
        - Provided hostname is correct
        - System <IPv6_address> is in same network and reachable
        - 'ping' command is available to use (provided by 'iputils' package)

DESCRIPTION:
This issue occurs because the installer uses the ping6 command to verify the communication with the remote systems if IPv6 addresses are provided for the system list. For RHEL 8 and its minor versions, the path for ping6 has changed from /bin/ping6 to /sbin/ping6, but the installer uses the old path.

RESOLUTION:
This hotfix updates the installer to use the correct path for the ping6 command.
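For example, on RHEL 8 the changed location can be confirmed, and the remote system pinged with the full path, as follows:
    # command -v ping6
    # /sbin/ping6 -c 3 <IPv6_address>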

* 4020128 (Tracking ID: 4020370)

SYMPTOM:
The product installer fails to complete a fresh configuration of InfoScale Enterprise on a RHEL 7 system, and the following error is logged:
Veritas InfoScale Enterprise Shutdown did not complete successfully.
vxglm failed to stop on <system_name>

DESCRIPTION:
This issue occurs because the installer tries to stop the vxglm service by using the following command:
# /etc/init.d/vxglm stop
The /etc/init.d/vxglm file is removed as part of the recent patches, and so, the installer fails to stop the vxglm service in this manner.

RESOLUTION:
This hotfix addresses the issue by updating the installer to use the systemctl command to stop the vxglm service instead.
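For reference, the equivalent systemctl invocation is (assuming a systemd unit named vxglm, as implied by this fix):
    # systemctl stop vxglm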

* 4021427 (Tracking ID: 4021515)

SYMPTOM:
On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.

DESCRIPTION:
The path of the 'ethtool' command is changed in SLES 12 SP4. Therefore, on SLES 12 SP4 and later systems, the installer does not recognize the changed path and uses an incorrect path '/sbin/ethtool' instead of '/usr/sbin/ethtool' for the 'ethtool' command. Consequently, while configuring the product, the installer fails to fetch the media speed of the network interfaces and displays its value as "Unknown".

RESOLUTION:
This hotfix updates the installer to use the correct path for the 'ethtool' command.
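For example, the media speed can be fetched directly with the full path (the interface name is a placeholder):
    # /usr/sbin/ethtool <interface> | grep Speed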

* 4022784 (Tracking ID: 4022640)

SYMPTOM:
The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package. The following error is logged:
CPI ERROR V-9-0-0
0 No pkg object defined for pkg VRTSvlic401742 and padv <<PADV>>

DESCRIPTION:
During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSvlic package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the base version of the VRTSvlic package on a system.

* 4029173 (Tracking ID: 4027759)

SYMPTOM:
The product installer installs lower-version packages if multiple patch bundles are specified using the patch path options in the incorrect order.

DESCRIPTION:
The product installer expects that the patch bundles, if any, are specified in lower-to-higher order. Consequently, the installer always overrides the package version with the available package from the last patch bundle in which it exists. If the patch bundles are not specified in the expected order, the installer installs the last available version of a component package.

RESOLUTION:
To address this issue, the product installer is updated to correctly identify the higher package version before installing the patch bundles. It does so regardless of the order in which the patch bundles are specified.

* 4033989 (Tracking ID: 4033988)

SYMPTOM:
The product installer displays the following error message after the precheck and does not allow you to proceed with the installation:
The higher version of <package_name> is already installed on <system_name>

DESCRIPTION:
The product installer compares the versions of the packages in an Infoscale patch bundle with those of the packages that are installed on a system. If a more recent version of any of the packages in the bundle is found to be already installed on the system, the installer displays an error. It does not allow you to proceed further with the installation.

RESOLUTION:
The product installer is updated to allow the installation of an Infoscale patch bundle that may contain older versions of some packages. Instead of an error message, the installer now displays a warning message and lets you proceed with the installation.

* 4040893 (Tracking ID: 4040946)

SYMPTOM:
The product installer fails to install InfoScale 7.4.1 on SLES 15 SP2 and displays the following error message: 
CPI ERROR V-9-0-0
0 No padv object defined for padv SLESx8664 for system <system_name>

DESCRIPTION:
This issue occurs because the format of the kernel version that is required by SLES 15 SP2 is not included.

RESOLUTION:
The installer is updated to include the format of the kernel version that is required to support installation on SLES 15 SP2.

* 4042494 (Tracking ID: 4042674)

SYMPTOM:
The product installer does not honor the single-node mode of a cluster and restarts it in the multi-node mode if 'vcs_allowcomms = 1'.

DESCRIPTION:
On a system where a single-node cluster is running, if you upgrade InfoScale using a response file with 'vcs_allowcomms = 1', the existing cluster configuration is not restored. The product installer restarts the cluster in the multi-node mode. When 'vcs_allowcomms = 1', the installer does not consider the value of the ONENODE parameter in the /etc/sysconfig/vcs file. It fails to identify that VCS is configured on the systems mentioned in the response file and proceeds with upgrade. Consequently, the installer neither updates the .../config/types.cf file nor restores the /etc/sysconfig/vcs file.

RESOLUTION:
This hotfix updates the product installer to honor the single-node mode of an existing cluster configuration on a system.
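The single-node setting can be verified on a system before such an upgrade, for example:
    # grep ONENODE /etc/sysconfig/vcs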



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-7.4.1.3100.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-7.4.1.3100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-7.4.1.3100.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-7.4.1.3100.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P3100 [<host1> <host2>...]

You can also install this patch together with 7.4.1 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 4063559

SYMPTOM: If I/O writes are issued on the secondary site of a VVR configuration, "Unexpected error: 30" is logged.

WORKAROUND: Currently there is no workaround for these messages getting logged; this will be considered for a future patch.



SPECIAL INSTRUCTIONS
--------------------
The following vulnerabilities have been resolved in 741U6 (VRTSpython 3.6.6.10):
CVE-2017-18342
CVE-2020-14343
CVE-2020-1747
Resolution: Upgraded PyYAML to version 5.4.1.
CVE-2019-10160
CVE-2019-9636
CVE-2020-27619
Resolution: Patched the Python source code itself, as referenced in the corresponding BDSA advisories, to resolve the vulnerabilities.


OTHERS
------
NONE