infoscale-sles15_x86_64-Patch-8.0.0.2900

 Basic information
Release type: Patch
Release date: 2023-07-13
OS update support: None
Technote: None
Documentation: None
Download size: 690.74 MB
Checksum: 3386389693

 Applies to one or more of the following products:
InfoScale Availability 8.0 On SLES15 x86-64
InfoScale Enterprise 8.0 On SLES15 x86-64
InfoScale Foundation 8.0 On SLES15 x86-64
InfoScale Storage 8.0 On SLES15 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vm-sles15_x86_64-Patch-8.0.0.1300 (obsolete), released 2022-04-01

 Fixes the following incidents:
3951882, 4030767, 4038088, 4055808, 4056647, 4056684, 4056997, 4057420, 4057427, 4057432, 4058802, 4061158, 4062606, 4062799, 4065565, 4065569, 4065628, 4065651, 4065679, 4065680, 4065685, 4065686, 4065820, 4065841, 4066063, 4066092, 4066213, 4066225, 4066259, 4066667, 4066735, 4066834, 4067091, 4067092, 4067237, 4067239, 4067609, 4067635, 4068407, 4068960, 4070027, 4070098, 4071108, 4072228, 4072234, 4073050, 4075150, 4078335, 4078520, 4078531, 4079142, 4079173, 4079190, 4079345, 4079372, 4079559, 4079637, 4079662, 4080041, 4080105, 4080122, 4080269, 4080276, 4080277, 4080579, 4080630, 4080845, 4080846, 4081150, 4081684, 4081774, 4081790, 4082260, 4082865, 4083335, 4083337, 4083948, 4084675, 4084880, 4085610, 4085619, 4085623, 4085839, 4086085, 4087166, 4087233, 4087258, 4087439, 4087791, 4088061, 4088066, 4088076, 4088079, 4088341, 4088483, 4088762, 4088973, 4089033, 4089041, 4089046, 4089136, 4089163, 4089723, 4089724, 4089728, 4091306, 4091983, 4092150, 4092518, 4095889, 4096274, 4097466, 4101808, 4103001, 4103077, 4107367, 4108322, 4109554, 4111302, 4111341, 4111346, 4111349, 4111350, 4111442, 4111444, 4111457, 4111469, 4111560, 4111571, 4111580, 4111610, 4111618, 4111910, 4112219, 4112345, 4112417, 4112549, 4112609, 4112708, 4112919, 4113012, 4113223, 4113225, 4113310, 4113328, 4113331, 4113342, 4113357, 4114322, 4114375, 4114621, 4114963, 4115251, 4115252, 4115381, 4115475, 4115481, 4116548, 4116551, 4116557, 4116559, 4116562, 4116565, 4116567, 4116688, 4117110, 4117385, 4117657, 4118108, 4118111, 4118318, 4118448, 4118455, 4118568, 4118733, 4118767, 4118769, 4118779, 4118795, 4118845, 4119023, 4119087, 4119105, 4119107, 4119111, 4119113, 4119216, 4119257, 4119276, 4119438, 4120350, 4121241, 4121714, 4121828, 4123143, 4124200, 4124418, 4124419, 4124420, 4124421, 4124424

 Patch ID:
VRTSrest-2.0.0.1300-linux
VRTSdbed-8.0.0.1800-SLES
VRTScps-8.0.0.1900-SLES15
VRTSsfmh-8.0.0.411_Linux.rpm
VRTSpython-3.9.2.24-SLES15
VRTSvcs-8.0.0.2300-SLES15
VRTSfsadv-8.0.0.2600-SLES15
VRTSspt-8.0.0.1400-SLES15
VRTSperl-5.34.0.4-SLES15
VRTSvcsag-8.0.0.2500-SLES15
VRTSgab-8.0.0.2500-SLES15
VRTSdbac-8.0.0.2400-SLES15
VRTSllt-8.0.0.2500-SLES15
VRTScavf-8.0.0.2800-SLES15
VRTSamf-8.0.0.2500-SLES15
VRTSvcsea-8.0.0.2500-SLES15
VRTSgms-8.0.0.2800-SLES15
VRTSveki-8.0.0.2800-SLES15
VRTSvxfen-8.0.0.2500-SLES15
VRTSglm-8.0.0.2800-SLES15
VRTSvxvm-8.0.0.2600-SLES15
VRTSvxfs-8.0.0.2900-SLES15
VRTSodm-8.0.0.2900-SLES15
VRTSaslapm-8.0.0.2600-SLES15

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 8.0 * * *
                         * * * Patch 2900 * * *
                         Patch Date: 2023-07-10


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0 Patch 2900


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSperl
VRTSpython
VRTSrest
VRTSsfmh
VRTSspt
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0
   * InfoScale Enterprise 8.0
   * InfoScale Foundation 8.0
   * InfoScale Storage 8.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSamf-8.0.0.2500
* 4124418 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSamf-8.0.0.2300
* 4111444 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSamf-8.0.0.1800
* 4089724 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.
Patch ID: VRTSamf-8.0.0.1300
* 4067092 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTSsfmh-vom-HF0800411
* 4113012 (4113011) The VIOM VRTSsfmh package on Linux is updated to fix a dclid/vxlist issue with InfoScale VRTSvxvm 8.0.0.2200.
Patch ID: VRTScavf-8.0.0.2800
* 4118779 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups.
Patch ID: VRTScavf-8.0.0.2400
* 4112609 (4079285) The CVMVolDg resource takes many minutes to come online with CPS fencing.
* 4112708 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
Patch ID: VRTSvxvm-8.0.0.2600
* 4124200 (4124223) Core dump is generated for vxconfigd during test case (TC) execution.
* 4109554 (4105953) System panic because VVR accessed a NULL pointer.
* 4111442 (4066785) A new option, usereplicatedev=only, is created to import only the replicated LUNs.
* 4112549 (4112701) Nodes stuck in a reconfiguration hang and vxconfigd dumps core after rebooting all nodes with a delay of 5 minutes between them.
* 4113310 (4114601) Panic in dmp_process_errbp() in a disk-pull scenario.
* 4113357 (4112433) Security vulnerabilities exist in third-party components [openssl, curl and libxml].
* 4114963 (4114962) [NBFS-3.1][DL]: MASTER and CAT_FS got corrupted while performing multiple NVMe failures.
* 4115251 (4115195) [NBFS-3.1][DL]: MEDIA_FS got corrupted after a panic loop test.
* 4115252 (4115193) Data corruption observed after a node fault and cluster restart in a DR environment.
* 4115381 (4091783) Build script and mb.sh changes in unixvm for integration of storageapi.
* 4116548 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4116551 (4108913) Vradmind dumps core because of memory corruption.
* 4116557 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4116559 (4091076) SRL gets into pass-thru mode because of head error.
* 4116562 (4114257) I/O hangs and a high system load average observed after the master is rebooted and one slave node rejoins the cluster.
* 4116565 (4034741) The current fix that limits I/O load on the secondary causes a deadlock situation.
* 4116567 (4072862) Stopping the cluster hangs because the RVGLogowner and CVMClus resources fail to go offline.
* 4117110 (4113841) VVR panic in replication connection handshake request from network scan tool.
* 4118108 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4118111 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.
* 4118733 (4106689) Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95
* 4118845 (4116024) Machine panic due to access of an illegal address.
* 4119087 (4067191) IS8.0_SUSE15_CVR: After a slave node is rebooted, the master node panics.
* 4119257 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4119276 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4119438 (4117985) EC volume corruption due to lockless access of FPU
* 4120350 (4120878) After enabling the dmp_native_support, system failed to boot.
* 4121241 (4114927) Failed to mount /boot on dmp device after enabling dmp_native_support.
* 4121714 (4081740) vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
Patch ID: VRTSaslapm 8.0.0.2600
* 4101808 (4101807) VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.
* 4116688 (4085145) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4117385 (4117350) Import operation on a disk group created on Hitachi ShadowImage (SI) disks is failing.
* 4121828 (4124457) VxVM Support for SLES15 Azure SP4 kernel
Patch ID: VRTSvxvm-8.0.0.2300
* 4108322 (4107083) In case of EMC BCV NR LUNs, vxconfigd takes a long time to start after a reboot.
* 4111302 (4092495) VxVM Support on SLES15SP4
* 4111442 (4066785) create new option usereplicatedev=only to import the replicated LUN only.
* 4111560 (4098391) Continuous system crash is observed during VxVM installation.
* 4112219 (4069134) "vxassist maxsize alloc:array:<enclosure_name>" command may fail.
* 4113223 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4113225 (4068090) System panic occurs because of block device inode ref count leaks.
* 4113328 (4102439) Volume Manager Encryption EKM Key Rotation (vxencrypt rekey) Operation Fails on IS 7.4.2/rhel7
* 4113331 (4105565) In CVR environment, system panic due to NULL pointer when VVR was doing recovery.
* 4113342 (4098965) Crash at memset function due to invalid memory access.
* 4115475 (4017334) vxio stack trace warning message kmsg_mblk_to_msg can be seen in systemlog
Patch ID: VRTSaslapm 8.0.0.2300
* 4115481 (4098395) VRTSaslapm package (rpm) doesn't function correctly for SLES15 SP4.
Patch ID: VRTSvxvm-8.0.0.1800
* 4067609 (4058464) vradmin resizevol fails when FS is not mounted on master.
* 4067635 (4059982) vradmind need not check for rlink connect during migrate.
* 4070098 (4071345) Unplanned fallback synchronisation is unresponsive
* 4078531 (4075860) Tutil putil rebalance flag is not getting cleared during +4 or more node addition
* 4079345 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4080105 (4045837) Sub disks are in relocate state after exceed fault slave node panic.
* 4080122 (4044068) After disc replacement, Replace Node operation failed at Configure Netbackup stage.
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4080276 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4080277 (3966157) SRL batching feature is broken
* 4080579 (4077876) System is crashed when EC log replay is in progress after node reboot.
* 4080845 (4058166) Increase DCM log size based on volume size without exceeding region size limit of 4mb.
* 4080846 (4058437) Replication between 8.0 and 7.4.x fails due to sector size field.
* 4081790 (4080373) SFCFSHA configuration failed on RHEL 8.4.
* 4083337 (4081890) On RHEL8 NBFS/Access commands like python3, sort, sudo, ssh, etc are generating core dump during execution of the command mkfs.vxfs & mkfs.ext4.
* 4085619 (4086718) VxVM modules fail to load with latest minor kernel of SLES15SP2
* 4087233 (4086856) For Appliance FLEX product using VRTSdocker-plugin, need to add platform-specific dependencies service ( docker.service and podman.service ) change.
* 4087439 (4088934) Kernel Panic while running LM/CFS CONFORMANCE - variant (SLES15SP3)
* 4087791 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4088076 (4054685) In case of CVR environment, RVG recovery gets hung in linux platforms.
* 4088483 (4088484) Failed to load DMP_APM NVME modules
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSaslapm 8.0.0.1800
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSvxvm-8.0.0.1700
* 4081684 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1600
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4062799 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4066213 (4052580) Supporting multipathing for NVMe devices under VxVM.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSaslapm 8.0.0.1600
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSvxvm-8.0.0.1400
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4065569 (4056156) VxVM Support for SLES15 Sp3
* 4066259 (4062576) hastop -local never finishes on Rhel8.4 and RHEL8.5 servers with latest minor kernels due to hang in vxdg deport command.
* 4066735 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.
* 4066834 (4046007) The private disk region gets corrupted if the cluster name is changed in FSS environment.
* 4067237 (4058894) Messages in /var/log/messages regarding "ignore_device".
Patch ID: VRTSaslapm 8.0.0.1400
* 4067239 (4057110) ASLAPM rpm Support on SLES15 sp3
Patch ID: VRTSvxvm-8.0.0.1300
* 4065628 (4065627) VxVM Modules failed to load after OS or kernel upgrade  .
Patch ID: VRTSvxfs-8.0.0.2900
* 4092518 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4097466 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4107367 (4108955) VFR job hangs on source if thread creation fails on target.
* 4111457 (4117827) For EO compliance, there is a requirement to support three types of log file permissions, 600, 640, and 644, with 600 being the default. A new eo_perm tunable is added to the vxtunefs command to manage the log file permissions.
* 4112417 (4094326) mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"
* 4114621 (4113060) On newer linux kernels, executing a binary on a vxfs mountpoint resulted in an EINVAL error.
* 4118795 (4100021) Running setfacl followed by getfacl resulting in "No such device or address" error.
* 4119023 (4116329) While checking FS sanity with the help of "fsck -o full -n" command, we tried to correct the FS flag value (WORM/Softworm), but failed because -n (read-only) option was given.
* 4119107 (4119106) VxFS support for SLES15-SP4 azure kernel.
* 4123143 (4123144) fsck binary generating coredump
Patch ID: VRTSvxfs-8.0.0.2600
* 4084880 (4084542) Enhance fsadm defrag report to display if FS is badly fragmented.
* 4088079 (4087036) The fsck binary has been updated to fix a failure while running with the "-o metasave" option on a shared volume.
* 4111350 (4098085) VxFS support for SLES15 SP4.
* 4111910 (4090127) CFS hang in vx_searchau().
Patch ID: VRTSvxfs-8.0.0.2500
* 4112919 (4110764) Security vulnerability observed in Zlib, a third-party component used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2100
* 4095889 (4095888) Security vulnerabilities exist in the Sqlite third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.1800
* 4068960 (4073203) Veritas file replication might generate a core while replicating the files to target.
* 4071108 (3988752) Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.
* 4072228 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4078335 (4076412) Addressing the Executive Order (EO) 14028 initial requirements, which are intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents.
* 4078520 (4058444) Loop mounts using files on VxFS fail on Linux systems.
* 4079142 (4077766) VxFS kernel module might leak memory during readahead of directory blocks.
* 4079173 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4082260 (4070814) Security vulnerability observed in Zlib, a third-party component used by VxFS.
* 4082865 (4079622) Existing migration read/write iter operation handling is not fully functional as vxfs uses normal read/write file operation only.
* 4083335 (4076098) Fix migration issues seen with falcon-sensor.
* 4085623 (4085624) While running fsck, fsck might dump core.
* 4085839 (4085838) Command fsck may generate core due to processing of zero size attribute inode.
* 4086085 (4086084) VxFS mount operation causes system panic.
* 4088341 (4065575) Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition
Patch ID: VRTSvxfs-8.0.0.1700
* 4081150 (4079869) Security Vulnerability in VxFS third party components
* 4083948 (4070814) Security Vulnerability in VxFS third party component Zlib
Patch ID: VRTSvxfs-8.0.0.1400
* 4055808 (4062971) Enable partition directory on WORM file system
* 4056684 (4056682) New features information on a filesystem with fsadm(file system administration utility) from a device is not displayed.
* 4062606 (4062605) Minimum retention time cannot be set if the maximum retention time is not set.
* 4065565 (4065669) Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.
* 4065651 (4065666) Enable partition directory on WORM file system having WORM enabled on files with retention period not expired.
Patch ID: VRTSvxfs-8.0.0.1300
* 4065679 (4056797) VxFS support for SLES15 SP3.
Patch ID: VRTSvxfen-8.0.0.2500
* 4117657 (4108561) Reading vxfen reservation not working
* 4124421 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSvxfen-8.0.0.2300
* 4111571 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSvxfen-8.0.0.1800
* 4087166 (4087134) The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting vxfen service.
* 4088061 (4089052) On RHEL9, Coordination Point Replacement operation is causing node panic
Patch ID: VRTSvxfen-8.0.0.1400
* 3951882 (4004248) vxfend generates core sometimes during vxfen race in CPS based fencing configuration
Patch ID: VRTSveki-8.0.0.2800
* 4118568 (4110457) Veki packaging was failing due to a dependency issue.
* 4119216 (4119215) VEKI support for SLES15-SP4 azure kernel.
Patch ID: VRTSveki-8.0.0.2400
* 4111580 (4111579) VEKI support for SLES15 SP4.
Patch ID: VRTSveki-8.0.0.1800
* 4056647 (4055072) Upgrading VRTSveki package using yum reports error
Patch ID: VRTSveki-8.0.0.1200
* 4070027 (4066550) Increasing Veki systemd service start timeout to 300 seconds
Patch ID: VRTSvcsea-8.0.0.2500
* 4118769 (4073508) Oracle virtual fire-drill is failing.
Patch ID: VRTSvcsea-8.0.0.1800
* 4030767 (4088595) hapdbmigrate utility fails to online the oracle service group
* 4079559 (4064917) As a part of Oracle 21c support, fixed the issue Oracle agent fails to generate ora_api using the build_oraapi.sh script
Patch ID: VRTSvcsag-8.0.0.2500
* 4118318 (4113151) VMwareDisksAgent reports resource online before VMware disk to be online is present into vxvm/dmp database.
* 4118448 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4118455 (4118454) Process agent fails to come online when root user shell is set to /sbin/nologin.
* 4118767 (4094539) Agent resource monitor not parsing process name correctly.
Patch ID: VRTSvcsag-8.0.0.1800
* 4030767 (4088595) hapdbmigrate utility fails to online the oracle service group
* 4058802 (4073842) SFAE changes to support Oracle 21c
* 4079372 (4073842) SFAE changes to support Oracle 21c
* 4079559 (4064917) As a part of Oracle 21c support, fixed the issue Oracle agent fails to generate ora_api using the build_oraapi.sh script
* 4081774 (4083099) AzureIP resource fails to go offline when OverlayIP is configured.
Patch ID: VRTSvcs-8.0.0.2300
* 4038088 (4100720) netstat command is deprecated in SLES15.
Patch ID: VRTSvcs-8.0.0.2100
* 4103077 (4103073) Upgrading the Netsnmp component to fix security vulnerabilities.
Patch ID: VRTSvcs-8.0.0.1800
* 4084675 (4089059) gcoconfig.log file permission is changed to 0600.
Patch ID: VRTSvcs-8.0.0.1400
* 4065820 (4065819) Protocol version upgrade failed.
Patch ID: VRTSspt-8.0.0.1400
* 4085610 (4090433) iostat and vmstat command option changes in FirstLook
* 4088066 (4090446) vxstat log collection improvements in FirstLook
* 4091983 (4092090) FirstLook should have OS flavor information stored in its log directory.
* 4096274 (4095687) While restoring version-8 metasave on sparse volume, the restore operation is not happening correctly.
Patch ID: VRTSrest-2.0.0.1300
* 4088973 (4089451) When a read-only file system was created on a volume, GET on the mount point's details was throwing an error.
* 4089033 (4089453) Some VCS REST APIs were crashing the Gunicorn worker.
* 4089041 (4089449) GET resources API on empty service group was throwing an error.
* 4089046 (4089448) Logging in REST API is not EO-compliant.
Patch ID: VRTSpython-3.9.2.24
* 4114375 (4113851) Some open CVEs need to be fixed for VRTSpython.
Patch ID: VRTSperl-5.34.0.4
* 4072234 (4069607) Security vulnerability detected on VRTSperl 5.34.0.0 released with Infoscale 8.0.
* 4075150 (4075149) Security vulnerabilities detected in OpenSSL packaged with VRTSperl/VRTSpython released with Infoscale 8.0.
Patch ID: VRTSodm-8.0.0.2900
* 4057432 (4056673) Rebooting the system results in emergency mode due to corruption of module dependency files, caused by an incorrect vxgms dependency in the ODM service file.
* 4119105 (4119104) ODM support for SLES15-SP4 azure kernel.
Patch ID: VRTSodm-8.0.0.2600
* 4111349 (4092232) ODM support for SLES15 SP4.
Patch ID: VRTSodm-8.0.0.2500
* 4114322 (4114321) VRTSodm driver will not load with VRTSvxfs 8.0.0.2500 patch.
Patch ID: VRTSodm-8.0.0.1800
* 4089136 (4089135) VRTSodm driver does not load with VRTSvxfs patch.
Patch ID: VRTSodm-8.0.0.1200
* 4065680 (4056799) ODM support for SLES15 SP3.
Patch ID: VRTSllt-8.0.0.2500
* 4124419 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSllt-8.0.0.2300
* 4111469 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
* 4112345 (4087662) During memory fragmentation LLT module may fail to allocate large memory leading to the node eviction or a node not being able to join.
Patch ID: VRTSllt-8.0.0.1800
* 4061158 (4061156) IO error on /sys/kernel/slab folder
* 4079637 (4079636) LLT over IPsec is causing node crash
* 4079662 (3981917) Support LLT UDP multiport on 1500 MTU based networks.
* 4080630 (4046953) Delete / disable confusing messages.
Patch ID: VRTSllt-8.0.0.1400
* 4066063 (4066062) Node panic is observed while using llt udp with multiport enabled.
* 4066667 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
Patch ID: VRTSgms-8.0.0.2800
* 4057427 (4057176) Rebooting the system results in emergency mode due to corruption of module dependency files.
* 4119111 (4119110) GMS support for SLES15-SP4 azure kernel.
Patch ID: VRTSgms-8.0.0.2400
* 4111346 (4092229) GMS support for SLES15 SP4.
Patch ID: VRTSgms-8.0.0.1800
* 4079190 (4071136) gms.config is not created when installing GMS rpm.
Patch ID: VRTSgms-8.0.0.1200
* 4065686 (4056803) GMS support for SLES15 SP3.
Patch ID: VRTSglm-8.0.0.2800
* 4119113 (4119112) GLM support for SLES15-SP4 azure kernel.
Patch ID: VRTSglm-8.0.0.2400
* 4087258 (4087259) System panics while upgrading CFS protocol from 90 to 135 (latest).
* 4111341 (4092225) GLM support for SLES15 SP4.
Patch ID: VRTSglm-8.0.0.1800
* 4089163 (4089162) The GLM module fails to load.
Patch ID: VRTSglm-8.0.0.1200
* 4065685 (4056801) GLM support for SLES15 SP3.
Patch ID: VRTSgab-8.0.0.2500
* 4124420 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSgab-8.0.0.2300
* 4111469 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
* 4111618 (4106321) After stopping HAD on SFCFHA stack for SLES15 SP4 minor kernel(kernel version > 5.14.21-150400.24.28) panic is observed.
Patch ID: VRTSgab-8.0.0.1800
* 4089723 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.
Patch ID: VRTSgab-8.0.0.1300
* 4067091 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTSfsadv-8.0.0.2600
* 4103001 (4103002) Replication failures observed in internal testing
Patch ID: VRTSfsadv-8.0.0.2100
* 4092150 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSfsadv-8.0.0.1200
* 4066092 (4057644) A warning appears in dmesg: SysV service '/etc/init.d/fsdedupschd' lacks a native systemd unit file.
Patch ID: VRTSdbed-8.0.0.1800
* 4079372 (4073842) SFAE changes to support Oracle 21c
Patch ID: VRTSdbac-8.0.0.2400
* 4124424 (4124417) Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.
Patch ID: VRTSdbac-8.0.0.2300
* 4111610 (4090439) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).
Patch ID: VRTSdbac-8.0.0.1800
* 4089728 (4089722) The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.
Patch ID: VRTSdbac-8.0.0.1200
* 4056997 (4056991) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).
Patch ID: VRTScps-8.0.0.1900
* 4091306 (4088158) Security vulnerabilities exist in the SQLite third-party components used by VCS.
Patch ID: VRTScps-8.0.0.1800
* 4073050 (4018218) Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2
Patch ID: VRTScps-8.0.0.1400
* 4066225 (4056666) The Error writing to database message may intermittently appear in syslogs on CP servers.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSamf-8.0.0.2500

* 4124418 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSamf-8.0.0.2300

* 4111444 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSamf-8.0.0.1800

* 4089724 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed need to be recompiled with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSamf-8.0.0.1300

* 4067092 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTSsfmh-vom-HF0800411

* 4113012 (Tracking ID: 4113011)

SYMPTOM:
vxlist output on InfoScale server shows volume name as "-" and status Unknown.

DESCRIPTION:
vxlist output on InfoScale server shows volume name as "-" and status Unknown.

RESOLUTION:
New vxdclid plugin for VxVM has been created.

Patch ID: VRTScavf-8.0.0.2800

* 4118779 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups.

DESCRIPTION:
In case of hardware-replicated disks like EMC SRDF, failover of disk groups might not automatically succeed and a manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag which needs to be updated manually for a successful failover.

RESOLUTION:
For DMP environments, the VxVM & DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM has also provided a new vxdg import option '-o usereplicatedev=only' with DMP. This option selects only the hardware-replicated disks during LUN selection process.

Sample main.cf configuration for DMP-managed hardware-replicated disks:
CVMVolDg srdf_dg (
    CVMDiskGroup = LINUXSRDF
    CVMVolume = { scott, scripts }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = { "-t -o usereplicatedev=only" }
    )
All four options (CVMDeportOnOffline, ClearClone, ScanDisks, and DGOptions) are recommended with hardware-replicated disk groups.
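For reference, a possible manual sequence for the failover import described above (a sketch only; the disk group name LINUXSRDF is taken from the sample and should be adjusted for the actual environment):
# Refresh the VxVM and DMP extended attributes after the replication role change
vxdisk scandisks
# Import only the hardware-replicated LUNs, as the agent does with the DGOptions above
vxdg -t -o usereplicatedev=only import LINUXSRDF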

Patch ID: VRTScavf-8.0.0.2400

* 4112609 (Tracking ID: 4079285)

SYMPTOM:
CVMVolDg resource takes many minutes to online with CPS fencing.

DESCRIPTION:
When fencing is configured as CP server based rather than disk-based SCSI-3 PR, disk groups are still imported with SCSI-3 reservations, which causes SCSI-3 PR errors during import, and the import takes a long time due to retries.

RESOLUTION:
Code changes have been done to import Diskgroup without SCSI3 reservations when SCSI3 PR is disabled.

* 4112708 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent.
- The ScanDisks attribute specifies whether to perform a selective device scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective device scan. Before attempting to import a hardware clone or a hardware-replicated device, the VxVM and DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective device scan is not performed. However, even when ScanDisks is set to 0, if the disk group fails during the first import attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective device scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )
NOTE: The new "-o usereplicatedev=on" vxdg option is provided with VxVM hot-fixes from 7.4.1.x onwards.
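These attributes can also be set on an existing CVMVolDg resource with the standard VCS CLI; a minimal sketch, assuming the resource name tc_dg from the sample above:
# Make the VCS configuration writable
haconf -makerw
# Enable the selective device scan and pass the vxdg import options
hares -modify tc_dg ScanDisks 1
hares -modify tc_dg DGOptions "-t -o usereplicatedev=on"
# Save and close the configuration
haconf -dump -makero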

Patch ID: VRTSvxvm-8.0.0.2600

* 4124200 (Tracking ID: 4124223)

SYMPTOM:
Core dump is generated for vxconfigd during test case (TC) execution.

DESCRIPTION:
The test case creates a scenario where zeros are written to the first block of a disk. In such a case, a NULL check is necessary in the code before a certain variable is accessed. This NULL check was missing, which caused the vxconfigd core dump during test case execution.

RESOLUTION:
The necessary NULL checks are added in the code to avoid the vxconfigd core dump.

* 4109554 (Tracking ID: 4105953)

SYMPTOM:
System panic with below stack in CVR environment.

 #9 [] page_fault at 
    [exception RIP: vol_ru_check_update_done+183]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]
#13 [] kthread at

DESCRIPTION:
In a CVR environment, when I/O is issued in writeack sync mode, the application is acknowledged once the data volume write is done on either the log client or the logowner, depending on where the I/O is issued. It could happen that VVR freed the metadata I/O update after the SRL write was done in writeack sync mode, but the update was later accessed again after being freed, leading to a NULL pointer dereference.

RESOLUTION:
Code changes have been made to avoid accessing the NULL pointer.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, which are readable and writable, importing its disk group failed with "Device is a hardware mirror". Third party doesn't expose disk attribute to show when it's in SPLIT mode. With this new enhancement, the replicated disk group can be imported with option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.
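A minimal usage sketch (the disk group name is illustrative):
# Import a disk group whose replicated disks are in SPLIT mode
vxdg -o usereplicatedev=only import repdg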

* 4112549 (Tracking ID: 4112701)

SYMPTOM:
A reconfiguration hang is observed on an 8-node ISO VM setup after rebooting all nodes with a delay of 5 minutes between them, because vxconfigd dumped core on the master node.

DESCRIPTION:
1. A reconfiguration hang occurs on an 8-node ISO VM setup after rebooting all nodes with a delay of 5 minutes between them.
2. This happens because a fork failed during command shipping, which caused the vxconfigd core dump on the master. All subsequent reconfigurations therefore failed to process.

RESOLUTION:
Reboot the master node on which vxconfigd (vold) dumped core.

* 4113310 (Tracking ID: 4114601)

SYMPTOM:
The system panics and reboots.

DESCRIPTION:
Root cause analysis (RCA):
Start I/O on a volume device and pull its disk out of the machine; the panic below is hit on RHEL8.

 dmp_process_errbp
 dmp_process_errbuf.cold.2+0x328/0x429 [vxdmp]
 dmpioctl+0x35/0x60 [vxdmp]
 dmp_flush_errbuf+0x97/0xc0 [vxio]
 voldmp_errbuf_sio_start+0x4a/0xc0 [vxio]
 voliod_iohandle+0x43/0x390 [vxio]
 voliod_loop+0xc2/0x330 [vxio]
 ? voliod_iohandle+0x390/0x390 [vxio]
 kthread+0x10a/0x120
 ? set_kthread_struct+0x50/0x50

As the disk was pulled out of the machine, vxio hit an I/O error and routed that I/O to the DMP layer via a kernel-to-kernel ioctl for error analysis.
The following is the code path for the I/O routing:

voldmp_errbuf_sio_start()-->dmp_flush_errbuf()--->dmpioctl()--->dmp_process_errbuf()

dmp_process_errbuf() retrieves the device number of the underlying path (OS device)
and tries to get the bdev (i.e. block_device) pointer from the path device number.
As the path/OS device has been removed by the disk pull, Linux returns a fake bdev for the path device number.
This fake bdev has no gendisk associated with it (bdev->bd_disk is NULL).

This NULL bdev->bd_disk is set on the I/O buffer routed from vxio,
which leads to a panic in dmp_process_errbp().

RESOLUTION:
If bdev->bd_disk is found to be NULL, the DMP_CONN_FAILURE error is set on the I/O buffer and DKE_ENXIO is returned to the vxio driver.

* 4113357 (Tracking ID: 4112433)

SYMPTOM:
Vulnerabilities have been reported in third party components, [openssl, curl and libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [openssl, curl and libxml], in the versions currently used by VxVM, have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[openssl, curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4114963 (Tracking ID: 4114962)

SYMPTOM:
File system data corruption with mirrored volumes in Flexible Storage Sharing (FSS) environments during beyond fault storage failure situations.

DESCRIPTION:
In FSS environments, the data change object (DCO) provides the functionality to track changes on detached mirrors using bitmaps. This bitmap is later used to re-sync the data (change delta) of the detached mirrors.
When the DCO volume and the data volume share the same set of devices, failure of the last mirror of the DCO volume means that I/Os on the data volume are going to fail. In such cases, instead of invalidating the DCO volumes, the I/O is proactively failed.
This helps protect the DCO so that, when the entire storage comes back, an optimal recovery of the mirrors can be performed.
When the disk for one of the mirrors of the DCO object becomes available, a bug in the DCO update incorrectly updates the DCO metadata, which leads to valid DCO maps being ignored during the actual volume recovery; hence the newly recovered mirrors of the volume miss blocks of valid application data. This leads to corruption when read I/Os are serviced from the newly recovered mirrors.

RESOLUTION:
The logic of the FMR map update transaction performed when enabling disks is fixed to resolve the bug. This ensures all valid bitmaps are considered for the recovery of mirrors and avoids data loss.

* 4115251 (Tracking ID: 4115195)

SYMPTOM:
Data corruption on file-systems with  erasure coded volumes

DESCRIPTION:
When erasure coded (EC) volumes are used in Flexible Storage Sharing (FSS) environments, a data change object (DCO) is used to track changes in a volume with faulted columns. The DCO provides a bitmap of all changed regions during the rebuild of the faulted columns. When an EC volume starts with a few faulted columns, the log-replay I/O cannot be performed on those columns. Those additional writes are expected to be tracked in the DCO bitmap. Due to a bug, those I/Os were not getting tracked in the DCO bitmap, as the DCO bitmaps are not yet enabled when log-replay is triggered. Hence, when the remaining columns are attached back, the appropriate data blocks of those log-replay I/Os are skipped during rebuild. This leads to data corruption when reads are serviced from those columns.

RESOLUTION:
Code changes are done to ensure log-replay on EC volume is triggered only after ensuring DCO tracking is enabled. This ensures that all IOs from log-replay operations are tracked in DCO maps for remaining faulted columns of volume.

* 4115252 (Tracking ID: 4115193)

SYMPTOM:
Data corruption on VVR primary with storage loss beyond fault tolerance level in replicated environment.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) environment, any node fault can lead to storage failure. On the VVR primary, when the last mirror of the SRL (Storage Replicator Log) volume faults while application writes are in progress, replication is expected to go into pass-through mode.
This information is persistently recorded in the kernel log (KLOG). In the event of cascaded storage node failures, the KLOG update protocol could not update a quorum number of copies. This mismatch between the on-disk and in-core state of the VVR objects leads to data loss due to missing recovery when all storage faults are resolved.

RESOLUTION:
Code changes have been made to handle the KLOG update failure in the SRL I/O failure handling, ensuring the on-disk and in-core configuration is consistent; subsequent application I/O is not allowed, to avoid data corruption.

* 4115381 (Tracking ID: 4091783)

SYMPTOM:
Build area creation for unixvm was failing.

DESCRIPTION:
The unixvm build was failing because of a dependency on storageapi during creation of the build area for unixvm and veki.

This in turn caused issues in Veki packaging during unixvm builds and a vxio driver compilation dependency.

RESOLUTION:
Added support for storageapi build area creation and building the storageapi internally from unixvm build scripts.

* 4116548 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4116551 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable result. Vradmind periodically forks a child thread to collect VVR statistic data and send them to the remote site. While the main thread may also be sending data using the same handler object, thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4116557 (Tracking ID: 4085404)

SYMPTOM:
Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all I/Os are queued in the restart queue until the active map flush is finished. The overly frequent active map flush caused the huge I/O drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.

* 4116559 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
Primary initiated log search for the requested update sent from secondary. The search aborted with head error as a check condition isn't set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4116562 (Tracking ID: 4114257)

SYMPTOM:
A VxVM command hangs and the file system waits for I/O to complete.

file system stack:
#3 [] wait_for_completion at 
#4 [] vx_bc_biowait at [vxfs]
#5 [] vx_biowait at [vxfs]
#6 [] vx_isumupd at [vxfs]
#7 [] __switch_to_asm at 
#8 [] vx_process_revokedele at [vxfs]
#9 [] vx_recv_revokedele at [vxfs]
#10 [] vx_recvdele at [vxfs]
#11 [] vx_msg_process_thread at [vxfs]

vxconfigd stack:
[<0>] volsync_wait+0x106/0x180 [vxio]
[<0>] vol_ktrans+0x9f/0x2c0 [vxio]
[<0>] volconfig_ioctl+0x82a/0xdf0 [vxio]
[<0>] volsioctl_real+0x38a/0x450 [vxio]
[<0>] vols_ioctl+0x6d/0xa0 [vxspec]
[<0>] vols_unlocked_ioctl+0x1d/0x20 [vxspec]

One of vxio thread was waiting for IO drain with below stack.

 #2 [] schedule_timeout at 
 #3 [] vol_rv_change_sio_start at [vxio]
 #4 [] voliod_iohandle at [vxio]

DESCRIPTION:
The VVR rvdcm flush SIO was triggered by a VVR logowner change, and it set the ru_state throttle flags, which caused the MDATA_SHIP SIOs to get queued in rv_mdship_throttleq. As the MDATA_SHIP SIOs were active, the rvdcm flush SIO was unable to proceed. In the end, the rvdcm_flush SIO was waiting for the SIOs in rv_mdship_throttleq to complete, and the SIOs in rv_mdship_throttleq were waiting for the rvdcm_flush SIO to complete. Hence, a deadlock situation.

RESOLUTION:
Code changes have been made to solve the deadlock issue.

* 4116565 (Tracking ID: 4034741)

SYMPTOM:
Due to a common RVIOmem pool being used by multiple RVG, a deadlock scenario gets created, causing high load average and system hang.

DESCRIPTION:
The current fix limits I/O load on the secondary by retaining the updates in the NMCOM pool until the data volume write is done, because of which the RVIOMEM pool becomes easy to fill up and a deadlock situation may occur, especially under a high workload on multiple RVGs or cross-direction RVGs. Currently, all RVGs share the same RVIOMEM pool, while the NMCOM pool, the RDBACK pool, and the network/dv update lists are all per-RVG, so the RVIOMEM pool becomes the bottleneck on the secondary; it fills up easily and runs into a deadlock situation.

RESOLUTION:
Code changes to honor per-RVG RVIOMEM pool to resolve the deadlock issue.

* 4116567 (Tracking ID: 4072862)

SYMPTOM:
If RVGLogowner resources are onlined on slave nodes, stopping the whole cluster may fail and the RVGLogowner resources go into the offline_propagate state.

DESCRIPTION:
While stopping the whole cluster, a race may occur between the CVM reconfiguration and the RVGLogowner change SIO.

RESOLUTION:
Code changes have been made to fix these race conditions.

* 4117110 (Tracking ID: 4113841)

SYMPTOM:
VVR panic happened in below code path:

kmsg_sys_poll()
nmcom_get_next_mblk() 
nmcom_get_hdr_msg() 
nmcom_get_next_msg() 
nmcom_wait_msg_tcp() 
nmcom_server_main_tcp()

DESCRIPTION:
When the network scan tool sends an unexpected request to VVR during the VVR connection handshake, the TCP connection may be terminated immediately by the network scan tool, which may lead to the sock being released. Hence, VVR panics when it tries to refer to the sock and hits the NULL pointer during processing.

RESOLUTION:
The code change has been made to check that the sock is valid; otherwise, the code returns without continuing with the VVR connection.

* 4118108 (Tracking ID: 4114867)

SYMPTOM:
The following error messages appear while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]#
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')

DESCRIPTION:
In /etc/udev/rules.d/41-VxVM-selinux.rules, the double quotation marks around "Disabled" and "disable" are the issue.

RESOLUTION:
Code changes have been made to correct the problem.

* 4118111 (Tracking ID: 4065490)

SYMPTOM:
systemd-udev threads consumes more CPU during system bootup or device discovery.

DESCRIPTION:
During disk discovery when new storage devices are discovered, VxVM udev rules are invoked for creating hardware path
symbolic link and setting SELinux security context on Veritas device files. For creating hardware path symbolic link to each
storage device, "find" command is used internally which is CPU intensive operation. If too many storage devices are attached to
system, then usage of "find" command causes high CPU consumption.

Also, for setting the appropriate SELinux security context on VxVM device files, restorecon is done irrespective of whether SELinux is enabled or disabled.

RESOLUTION:
Usage of "find" command is replaced with "udevadm" command. SELinux security context on VxVM device files is being set
only when SELinux is enabled on system.
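For illustration only (not the exact rule content), udevadm can resolve a device's sysfs path directly, without a filesystem search via "find"; the device name below is illustrative:
# Query the kernel device path for a block device
udevadm info -q path -n /dev/sdb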

* 4118733 (Tracking ID: 4106689)

SYMPTOM:
Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95. The error logs are observed as below:
Mounting ZFS filesystems: cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export/home' on '/export/home': failure mounting parent dataset
cannot mount 'rpool/export/home/addm' on /export/home/addm': directory is not empty
.... ....
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: one or more file systems failed.

DESCRIPTION:
When DMP native support is enabled and "faulted" zpools are found, VxVM deports the faulty zpools and re-imports them. If fs-local isn't started before vxvm-startup2, this error handling causes a non-empty /export, which further causes the ZFS mount failure.

RESOLUTION:
Code changes have been made to guarantee the mount order of rpool and zpools.

* 4118845 (Tracking ID: 4116024)

SYMPTOM:
kernel panicked at gab_ifreemsg with following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender

DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes and the vxvvrstatd daemon enabled through the vxvm-recover service, vxvvrstatd calls ioctl(VOL_RV_APPSTATS); the latter generates a kmsg whose length is longer than 64k, triggering a kernel panic because GAB/LLT does not support any message longer than 64k.

RESOLUTION:
Code changes have been done to add a limitation to the maximum number of data volumes for which that ioctl(VOL_RV_APPSTATS) can request the VVR statistics.

* 4119087 (Tracking ID: 4067191)

SYMPTOM:
In CVR environment after rebooting Slave node, Master node may panic with below stack:

Call Trace:
dump_stack+0x66/0x8b
panic+0xfe/0x2d7
volrv_free_mu+0xcf/0xd0 [vxio]
vol_ru_free_update+0x81/0x1c0 [vxio]
volilock_release_internal+0x86/0x440 [vxio]
vol_ru_free_updateq+0x35/0x70 [vxio]
vol_rv_write2_done+0x191/0x510 [vxio]
voliod_iohandle+0xca/0x3d0 [vxio]
wake_up_q+0xa0/0xa0
voliod_iohandle+0x3d0/0x3d0 [vxio]
voliod_loop+0xc3/0x330 [vxio]
kthread+0x10d/0x130
kthread_park+0xa0/0xa0
ret_from_fork+0x22/0x40

DESCRIPTION:
As part of a CVM master switch, an rvg_recovery is triggered. In this step, a race condition can occur between the VVR objects, due to which an object value is not updated properly, and this can cause a panic.

RESOLUTION:
Code changes are done to handle the race condition between VVR objects.

* 4119257 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary RVG volume acquires a lock and waits for the SRL positions to match. During this time, any VxVM transaction that kicks in also has to wait for the same lock. Further, the logowner node panicked, which triggered the logowner-change protocol; that protocol hung because the earlier transaction was stuck. As the logowner-change protocol could not complete, the SRL positions could not match in the absence of a valid logowner, causing a deadlock. That led to the vxconfigd and vx command hang.

RESOLUTION:
Added changes to allow read operation on volume even if SRL positions are
unmatched. We are still blocking write IOs and just allowing open() call for read-only
operations, and hence there will not be any data consistency or integrity issues.

* 4119276 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in 3 phases.
At the end of the 2nd phase of recovery, the in-memory and on-disk SRL positions remain incorrect,
and if a logowner change happens during this time, the RLink won't get connected.

RESOLUTION:
Handled in-memory and on-disk SRL positions correctly.

* 4119438 (Tracking ID: 4117985)

SYMPTOM:
Memory/data corruption hit for EC volumes

DESCRIPTION:
This is a port of a fix that was already reviewed: http://codereview.engba.veritas.com/r/42056/

The memory corruption hit in EC was fixed by calling kernel_fpu_begin() for kernel versions earlier than RHEL 8.6. In the latest kernels the kernel_fpu_begin() symbol is not available and cannot be used. A separate module named 'storageapi' has therefore been created, which implements _fpu_begin and _fpu_end, and the vxio module depends on 'storageapi'.

RESOLUTION:
An FPU lock is taken for FPU-related operations.

* 4120350 (Tracking ID: 4120878)

SYMPTOM:
System doesn't come up on taking a reboot after enabling dmp_native_support. System goes into maintenance mode.

DESCRIPTION:
"vxio.ko" is dependent on the new "storageapi.ko" module. "storageapi.ko" was missing from VxDMP_initrd file, which is created when dmp_native_support is enabled. So on reboot, without "storageapi.ko" present, "vxio.ko" fails to load.

RESOLUTION:
Code changes have been made to include "storageapi.ko" in VxDMP_initrd.
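One way to confirm the module is present in the generated initrd (a sketch; the actual VxDMP_initrd path depends on the setup):
# List the contents of the DMP initrd and look for the storageapi module
lsinitrd <path-to-VxDMP_initrd> | grep storageapi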

* 4121241 (Tracking ID: 4114927)

SYMPTOM:
After enabling dmp_native_support and rebooting, /boot is not mounted on the VxDMP node.

DESCRIPTION:
When dmp_native_support is enabled, vxdmproot script is expected to modify the /etc/fstab entry for /boot so that on next boot up, /boot is mounted on dmp device instead of OS device. Also, this operation modifies SELinux context of file /etc/fstab. This causes the machine to go into maintenance mode because of a read permission denied error for /etc/fstab on boot up.

RESOLUTION:
Code changes have been done to make sure SELinux context is preserved for /etc/fstab file and /boot is mounted on dmp device when dmp_native_support is enabled.
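If a system has already hit this issue, the SELinux context of /etc/fstab can be restored manually (an illustration, not part of the patch):
# Reset /etc/fstab to its default SELinux context
restorecon -v /etc/fstab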

* 4121714 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.

DESCRIPTION:
Linux BLOCK_EXT_MAJOR(block major 259) is used as extended devt for block devices. When partition number of one device is more than 15, the partition device gets assigned under major 259 to solve the sd limitations (16 minors per device), by which more partitions are allowed for one sd device. During "vxdg flush", for each lun in the disk group, vxconfigd reads file /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which would cause vxconfigd to respond sluggishly if there are large amount of luns in the disk group.

RESOLUTION:
Code has been changed to remove the needless access on /proc/partitions for the luns without using extended devt.

Patch ID: VRTSaslapm 8.0.0.2600

* 4101808 (Tracking ID: 4101807)

SYMPTOM:
"vxdisk -e list" does not show "svol" for Hitachi ShadowImage (SI) svol devices.

DESCRIPTION:
VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.

RESOLUTION:
Hitachi ASL modified to correctly read SCSI Byte locations and recognize ShadowImage (SI) svol device.

* 4116688 (Tracking ID: 4085145)

SYMPTOM:
The issue exists in the AWS environment; on an on-premises physical/VM host this issue does not exist (as ioctl and sysfs give the same values).

DESCRIPTION:
The UDID value in the case of Amazon EBS devices was going beyond its limit (it is read from sysfs, as ioctl is not supported by AWS).

RESOLUTION:
Code changes were made to fetch the LSN through ioctl, as a fix is available for the intermittent ioctl failure.

* 4117385 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag identifies a hardware-replicated device, so to import a disk group on REPLICATED disks, the usereplicatedev option must be used. Because that option was not provided, the import failed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

* 4121828 (Tracking ID: 4124457)

SYMPTOM:
VxVM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxVM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSvxvm-8.0.0.2300

* 4108322 (Tracking ID: 4107083)

SYMPTOM:
In the case of EMC BCV NR LUNs, vxconfigd takes a long time to start after a reboot.

DESCRIPTION:
This issue occurs when BCV NR LUNs go into an error state: the SCSI inquiry succeeds, but the disk retry loop for each disk takes a long time. This corner case was not handled for BCV NR LUNs.

RESOLUTION:
Necessary code changes have been made: for BCV NR LUNs, when the SCSI inquiry succeeds, the disk is marked as failed so that vxconfigd starts quickly.

* 4111302 (Tracking ID: 4092495)

SYMPTOM:
VxVM installation fails on SLES15 SP4

DESCRIPTION:
SLES15 SP4 ships the 5.14.21 kernel, which contains multiple changes to block devices, IO handling, the partition table in gendisk, and so on. The VxVM code was not compatible with these kernel changes.

RESOLUTION:
Required changes have been made to make VxVM compatible with SLES15 SP4.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, which are readable and writable, importing their disk group fails with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when it is in SPLIT mode. With this enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.

* 4111560 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes kernel panic because of null pointer dereference in kernel code when BFQ disk io scheduler is used. This is observed on SLES15 SP3 minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernel >= 5.14.21-150400.24.11.1

RESOLUTION:
As of now there is no kernel fix available; it is recommended to use mq-deadline as the IO scheduler. Code changes have been made to automatically change the disk IO scheduler to mq-deadline.
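
The following is a minimal userspace sketch of the workaround idea using the standard sysfs interface; the device name is only an example and the actual change is made inside VxVM.

#include <stdio.h>

/* Switch the IO scheduler of a block device to mq-deadline via sysfs. */
static int set_mq_deadline(const char *dev)     /* e.g. "sdb" */
{
        char path[256];
        FILE *fp;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);
        fp = fopen(path, "w");
        if (fp == NULL)
                return -1;
        fprintf(fp, "mq-deadline\n");
        fclose(fp);
        return 0;
}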

* 4112219 (Tracking ID: 4069134)

SYMPTOM:
"vxassist maxsize alloc:array:<enclosure_name>" command may fail with below error:
VxVM vxassist ERROR V-5-1-18606 No disks match specification for Class: array, Instance: <enclosure_name>

DESCRIPTION:
If the enclosure name is longer than 16 characters, it gets truncated while being copied from VxDMP to VxVM, which causes the vxassist command above to fail.

RESOLUTION:
Code changes are done to avoid the truncation of enclosure name while copying from VxDMP to VxVM.
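
The following is an illustrative sketch of the truncation hazard and the bounded copy used to avoid it; the structure name and buffer sizes are assumptions, not the actual VxVM/VxDMP definitions.

#include <stdio.h>

#define OLD_ENCLR_NAME_LEN 16   /* too small: names of 16+ characters truncate */
#define NEW_ENCLR_NAME_LEN 256  /* large enough for long enclosure names */

struct enclr_info {
        char name[NEW_ENCLR_NAME_LEN];
};

static void copy_enclr_name(struct enclr_info *dst, const char *src)
{
        /* snprintf always NUL-terminates and never writes past the buffer */
        snprintf(dst->name, sizeof(dst->name), "%s", src);
}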

* 4113223 (Tracking ID: 4093067)

SYMPTOM:
System panicked in the following stack:

#9  [] page_fault at  [exception RIP: bdevname+26]
#10 [] get_dip_from_device  [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]

DESCRIPTION:
After getting the block_device from the OS, DMP did not check block_device->bd_part for NULL. The NULL pointer then caused a system panic when bdevname() was called.

RESOLUTION:
The code changes have been done to fix the problem.
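
The following is an illustrative kernel-side sketch of the defensive check described above, for kernels where block_device->bd_part and bdevname() exist; the surrounding DMP function is hypothetical.

#include <linux/blkdev.h>

/* bdev->bd_part may be NULL while the device is being torn down. */
static int dmp_get_devname(struct block_device *bdev, char *buf)
{
        if (bdev == NULL || bdev->bd_part == NULL)
                return -ENODEV;         /* skip instead of panicking inside bdevname() */
        bdevname(bdev, buf);
        return 0;
}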

* 4113225 (Tracking ID: 4068090)

SYMPTOM:
System panicked in the following stack:

#7 page_fault at ffffffffbce010fe
[exception RIP: vx_bio_associate_blkg+56]
#8 vx_dio_physio at ffffffffc0f913a3 [vxfs]
#9 vx_dio_rdwri at ffffffffc0e21a0a [vxfs]
#10 vx_dio_read at ffffffffc0f6acf6 [vxfs]
#11 vx_read_common_noinline at ffffffffc0f6c07e [vxfs]
#12 vx_read1 at ffffffffc0f6c96b [vxfs]
#13 vx_vop_read at ffffffffc0f4cce2 [vxfs]
#14 vx_naio_do_work at ffffffffc0f240bb [vxfs]
#15 vx_naio_worker at ffffffffc0f249c3 [vxfs]

DESCRIPTION:
To get a VxVM volume's block_device from its gendisk, VxVM calls bdget_disk(), which increases the device inode reference count. The matching bdput() call that decreases the count was missing, so the inode reference count leaked, which may cause a panic in VxFS when issuing IO on the VxVM volume.

RESOLUTION:
The code changes have been done to fix the problem.
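
The following is an illustrative sketch of the reference-counting pattern on kernels that provide bdget_disk()/bdput(): every successful bdget_disk() must be paired with a bdput(), which was the missing call.

#include <linux/blkdev.h>
#include <linux/genhd.h>

static int use_volume_bdev(struct gendisk *gd)
{
        struct block_device *bdev = bdget_disk(gd, 0);  /* takes an inode reference */

        if (bdev == NULL)
                return -ENODEV;
        /* ... use bdev for IO setup ... */
        bdput(bdev);            /* the matching release that prevents the leak */
        return 0;
}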

* 4113328 (Tracking ID: 4102439)

SYMPTOM:
A failure is observed when running the vxencrypt rekey operation on an encrypted volume (to perform key rotation).

DESCRIPTION:
The KMS token is 64 bytes in size, but the code restricted the token size to 63 bytes and threw an error if the token was longer than 63 bytes.

RESOLUTION:
The issue is resolved by setting the assumed token size to the size of the KMS token, which is 64 bytes.
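
The following is an illustrative sketch of the sizing change; the constant name and validation helper are assumptions, not the actual VxVM code.

#include <stddef.h>

#define KMS_TOKEN_SIZE 64               /* actual KMS token length in bytes */

/* Reject only tokens longer than the real KMS token size. */
static int validate_token(const char *token, size_t len)
{
        (void)token;
        if (len > KMS_TOKEN_SIZE)       /* the old check effectively limited this to 63 */
                return -1;
        return 0;
}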

* 4113331 (Tracking ID: 4105565)

SYMPTOM:
In Cluster Volume Replication(CVR) environment, system panic with below stack when Veritas Volume Replicator(VVR) was doing recovery:
[] do_page_fault 
[] page_fault 
[exception RIP: volvvr_rvgrecover_nextstage+747] 
[] volvvr_rvgrecover_done [vxio]
[] voliod_iohandle[vxio]
[] voliod_loop at[vxio]

DESCRIPTION:
A race condition could cause VVR to fail to trigger a DCM flush sio. VVR did not perform a sanity check on this sio, which triggered the system panic.

RESOLUTION:
Code changes have been made to do a sanity check of the DCM flush sio.

* 4113342 (Tracking ID: 4098965)

SYMPTOM:
vxconfigd dumps core when scanning IBM XIV LUNs, with the following stack:

#0  0x00007fe93c8aba54 in __memset_sse2 () from /lib64/libc.so.6
#1  0x000000000061d4d2 in dmp_getenclr_ioctl ()
#2  0x00000000005c54c7 in dmp_getarraylist ()
#3  0x00000000005ba4f2 in update_attr_list ()
#4  0x00000000005bc35c in da_identify ()
#5  0x000000000053a8c9 in find_devices_in_system ()
#6  0x000000000053aab5 in mode_set ()
#7  0x0000000000476fb2 in ?? ()
#8  0x00000000004788d0 in main ()

DESCRIPTION:
This could cause two issues if more than one disk array is connected:

1. If the incorrect memory address exceeds the range of valid virtual memory, it triggers a segmentation fault and crashes vxconfigd.
2. If the incorrect memory address does not exceed the range of valid virtual memory, it causes memory corruption but may not crash vxconfigd.

RESOLUTION:
Code changes have been made to correct the problem.

* 4115475 (Tracking ID: 4017334)

SYMPTOM:
VXIO call stack trace generated in /var/log/messages

DESCRIPTION:
This issue occurs due to a limitation in the way InfoScale interacts with the RHEL8.2 kernel.
 Call Trace:
 kmsg_sys_rcv+0x16b/0x1c0 [vxio]
 nmcom_get_next_mblk+0x8e/0xf0 [vxio]
 nmcom_get_hdr_msg+0x108/0x200 [vxio]
 nmcom_get_next_msg+0x7d/0x100 [vxio]
 nmcom_wait_msg_tcp+0x97/0x160 [vxio]
 nmcom_server_main_tcp+0x4c2/0x11e0 [vxio]

RESOLUTION:
Changes are made in the header files for function definitions when the RHEL version is 8.2 or later.
This kernel warning can be safely ignored as it has no functional impact.

Patch ID: VRTSaslapm 8.0.0.2300

* 4115481 (Tracking ID: 4098395)

SYMPTOM:
VRTSaslapm package(rpm) doesn't function correctly for SLES15SP4.

DESCRIPTION:
Due to changes in SLES15 SP4 update, there are breakages in APM(Array Policy Module) kernel modules 
present in VRTSaslapm package. Hence the currently available VRTSaslapm doesn't function with SLES15 SP4. 
The VRTSaslapm code needs to be recompiled with SLES15 SP4 kernel.

RESOLUTION:
VRTSaslapm  is recompiled with SLES15 SP4 kernel.

Patch ID: VRTSvxvm-8.0.0.1800

* 4067609 (Tracking ID: 4058464)

SYMPTOM:
vradmin resizevol fails  when FS is not mounted on master.

DESCRIPTION:
The vradmin resizevol command resizes the data volume and the file system on the primary site, whereas on the secondary site it resizes only the data volume because the file system is not mounted there.

The vradmin resizevol command ships the command to the logowner at the vradmind level, and vradmind on the logowner in turn ships the low-level VxVM commands to the master, where the command is finally executed.

RESOLUTION:
Changes are introduced to ship the command to the node on which the file system is mounted. The CVM node name where the file system is mounted must be provided; vradmind uses it to ship the command to that node.

* 4067635 (Tracking ID: 4059982)

SYMPTOM:
In a container environment, the vradmin migrate command fails multiple times because the rlink is not in the connected state.

DESCRIPTION:
In VVR, rlinks are disconnected and reconnected during the replication lifecycle. If the vradmin migrate command is executed in the meantime, it encounters errors. This causes vradmind to make configuration changes multiple times, which affects subsequent vradmin commands.

RESOLUTION:
The vradmin migrate command requires rlink data to be up to date on both primary and secondary. It internally executes low-level commands such as vxrvg makesecondary and vxrvg makeprimary to change the primary and secondary roles. These commands do not depend on the rlink being in the connected state, so changes have been made to remove the rlink connection handling.

* 4070098 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after failed site is up.

DESCRIPTION:
Autosync and unplanned fallback synchronization had issues when an RVG contains a mix of cloud and non-cloud volumes: after a cloud volume was found, the rest of the volumes were ignored for synchronization.

RESOLUTION:
The condition is fixed so that synchronization iterates over all volumes.

* 4078531 (Tracking ID: 4075860)

SYMPTOM:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel. This was happening due to missing FPU protection around the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4079345 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during Cluster configuration on 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on a 24-node physical BOM setup because VxVM transactions were taking longer than the VCS timeouts.

RESOLUTION:
Fix is added to reduce unnecessary transaction time on large node setup.

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that need to be skipped by VxVM and not claimed.
Multiple PE LUNs from different 3PAR enclosures cause an issue for VxDMP to handle.

RESOLUTION:
Fix added to SKIP the 3PAR PE luns by 3PAR ASL to avoid disks being reported in error state.

* 4080105 (Tracking ID: 4045837)

SYMPTOM:
DCL volume subdisks do not relocate after the node fault timeout and remain in the RELOCATE state.

DESCRIPTION:
If a DCO has failed plexes and the DCO resides on different disks than the data, DCO relocation needs to be triggered explicitly. try_fss_reloc performs DCO relocation only in the context of data relocation, which may not succeed if sufficient data disks are not available (even though additional hosts or disks may be available where the DCO can relocate).

RESOLUTION:
Fix is added to relocate DCL subdisks to available spare disks

* 4080122 (Tracking ID: 4044068)

SYMPTOM:
Replace Node fails at the Configuring NetBackup stage because vxdisk init fails with the error "Could not obtain requested lock".

DESCRIPTION:
Replace Node fails at the Configuring NetBackup stage because vxdisk init fails with the error "Could not obtain requested lock".

RESOLUTION:
A fix is added to retry the transaction a few times if it fails with this error.

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from info records are not visible with the vxrlink listtag command.

DESCRIPTION:
The second phase of making rlinks FIPS compliant deals with the disk group upgrade path, where the rlink encryption tags need to be copied to the info record and must be FIPS compliant.

During this path, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys, respectively.

RESOLUTION:
All rlink encryption tags are copied to the info record. When a disk group is upgraded, the rlink is internally upgraded as well, and this upgrade process copies the rlink tags to the info records.

* 4080276 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets.
The following error was seen:

$ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75

Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all tags were processed at once instead of processing the maximum number of tags per volume.

RESOLUTION:
A number-of-tags variable that depends on the cipher method (CBC/GCM) is introduced, and minor code issues are fixed.

* 4080277 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken and could not be enabled, as it might have caused problems.

DESCRIPTION:
Batching of updates is needed to gain the performance benefit of combining multiple updates.

RESOLUTION:
The design is simplified: each small update within a batch is now aligned (padded) to a 4K boundary, so the whole batch is 4K aligned by default. This removes the need for bookkeeping around the last update in a batch and reduces the overhead of the related calculations.
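
A small illustrative sketch of the alignment arithmetic: each update length is rounded up (padded) to the next 4K boundary, so a batch built from such updates stays 4K aligned as a whole.

#include <stdint.h>

#define SRL_ALIGN 4096u

/* Round a length up to the next 4K boundary (valid for any power-of-two alignment). */
static inline uint64_t pad_to_4k(uint64_t len)
{
        return (len + SRL_ALIGN - 1) & ~(uint64_t)(SRL_ALIGN - 1);
}

/* Example: pad_to_4k(100) == 4096, pad_to_4k(4096) == 4096, pad_to_4k(5000) == 8192. */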

* 4080579 (Tracking ID: 4077876)

SYMPTOM:
When one cluster node is rebooted, EC log replay is triggered for the shared EC volume.
The system crashes during this EC log replay.

DESCRIPTION:
Two flags were assigned the same value, so the system panicked during the flag check.

RESOLUTION:
The code flow is changed to avoid checking flags that share the same value.

* 4080845 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large size data volumes (size > 3TB) with filesystems mounted on them, initial autosync operation takes a lot of time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a file system mounted, if the smartmove feature is enabled, the operation does a smart sync by syncing only the regions dirtied by the file system instead of the entire volume, which completes faster. However, for large volumes (size > 3TB), the smartmove feature does not get enabled even with a file system mounted, so autosync syncs the entire volume.

This behaviour is caused by the smaller DCM plexes allocated for such large volumes; autosync ends up performing a complete volume sync, taking much longer to complete.

RESOLUTION:
Increase the limit of DCM plex size (loglen) beyond 2MB so that smart move feature can be utilised properly.

* 4080846 (Tracking ID: 4058437)

SYMPTOM:
Replication between 8.0 and 7.4.x fails with an error due to sector size field.

DESCRIPTION:
The 7.4.x branch has the sector size set to zero, which internally indicates 512 bytes. This caused startrep and resumerep to fail with the error message below.

Message from Primary:

VxVM VVR vxrlink ERROR V-5-1-20387  sector size mismatch, Primary is having sector size 512, Secondary is having sector size 0

RESOLUTION:
A check added to support replication between 8.0 and 7.4.x

* 4081790 (Tracking ID: 4080373)

SYMPTOM:
SFCFSHA configuration failed on RHEL 8.4 due to 'chmod -R' error.

DESCRIPTION:
Failure messages are getting logged as all log permissions are changed to 600 during the upgrade and all log files moved to '/var/log/vx'.

RESOLUTION:
Added -f option to chmod command to suppress warning and redirect errors from mv command to /dev/null.

* 4083337 (Tracking ID: 4081890)

SYMPTOM:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel. This was happening due to missing FPU protection around the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4085619 (Tracking ID: 4086718)

SYMPTOM:
VxVM fails to install because vxdmp module fails to load on latest minor kernel of SLES15SP2.

DESCRIPTION:
VxVM modules fail to load on latest minor kernel of SLES15SP2. Following messages can be seen logged in system logs:
vxvm-boot[32069]: ERROR: No appropriate modules found.
vxvm-boot[32069]: Error in loading module "vxdmp". See documentation.
vxvm-boot[32069]: Modules not Loaded

RESOLUTION:
Code changes have been done to fix this issue.

* 4087233 (Tracking ID: 4086856)

SYMPTOM:
For Appliance FLEX product using VRTSdocker-plugin, docker.service needs to be replaced as it is not supported on RHEL8.

DESCRIPTION:
Appliance FLEX product using VRTSdocker-plugin is switching to RHEL8 on which docker.service does not exist. vxinfoscale-docker.service must stop after all container services are stopped. podman.service shuts down after all container services are stopped, so docker.service can be replaced with podman.service.

RESOLUTION:
Added platform-specific dependencies for VRTSdocker-plugin. For RHEL8 podman.service introduced.

* 4087439 (Tracking ID: 4088934)

SYMPTOM:
"dd" command on a simple volume results in kernel panic.

DESCRIPTION:
Kernel panic is observed with following stack trace:
 #0 [ffffb741c062b978] machine_kexec at ffffffffa806fe01
 #1 [ffffb741c062b9d0] __crash_kexec at ffffffffa815959d
 #2 [ffffb741c062ba98] crash_kexec at ffffffffa815a45d
 #3 [ffffb741c062bab0] oops_end at ffffffffa8036d3f
 #4 [ffffb741c062bad0] general_protection at ffffffffa8a012c2
    [exception RIP: __blk_rq_map_sg+813]
    RIP: ffffffffa84419dd  RSP: ffffb741c062bb88  RFLAGS: 00010202
    RAX: 0c2822c2621b1294  RBX: 0000000000010000  RCX: 0000000000000000
    RDX: ffffb741c062bc40  RSI: 0000000000000000  RDI: ffff8998fc947300
    RBP: fffff92f0cbe6f80   R8: ffff8998fcbb1200   R9: fffff92f0cbe0000
    R10: ffff8999bf4c9818  R11: 000000000011e000  R12: 000000000011e000
    R13: fffff92f0cbe0000  R14: 00000000000a0000  R15: 0000000000042000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #5 [ffffb741c062bc38] scsi_init_io at ffffffffc03107a2 [scsi_mod]
 #6 [ffffb741c062bc78] sd_init_command at ffffffffc056c425 [sd_mod]
 #7 [ffffb741c062bcd8] scsi_queue_rq at ffffffffc0311f6e [scsi_mod]
 #8 [ffffb741c062bd20] blk_mq_dispatch_rq_list at ffffffffa8447cfe
 #9 [ffffb741c062bdc0] __blk_mq_do_dispatch_sched at ffffffffa844cae0
#10 [ffffb741c062be28] __blk_mq_sched_dispatch_requests at ffffffffa844d152
#11 [ffffb741c062be68] blk_mq_sched_dispatch_requests at ffffffffa844d290
#12 [ffffb741c062be78] __blk_mq_run_hw_queue at ffffffffa84466a3
#13 [ffffb741c062be98] process_one_work at ffffffffa80bcd74
#14 [ffffb741c062bed8] worker_thread at ffffffffa80bcf8d
#15 [ffffb741c062bf10] kthread at ffffffffa80c30ad
#16 [ffffb741c062bf50] ret_from_fork at ffffffffa8a001ff

RESOLUTION:
Code changes have been done to fix this issue.

* 4087791 (Tracking ID: 4087770)

SYMPTOM:
Data corruption post mirror attach operation seen after complete storage fault for DCO volumes.

DESCRIPTION:
DCO (data change object) tracks delta changes for faulted mirrors. During complete storage loss of DCO volume mirrors, the DCO object is marked as BADLOG and becomes unusable for bitmap tracking.
After storage reconnect (such as a node rejoin in FSS environments), the DCO is repaired for subsequent tracking. During this, if VxVM finds any mirrors detached for data volumes, they are expected to be marked for full resync because the bitmap in the DCO has no valid information. A bug in the repair DCO operation logic prevented marking the mirror for full resync when the repair DCO operation was triggered before the data volume was started. This resulted in the mirror getting attached without any data being copied from good mirrors, so reads serviced from such mirrors returned stale data, resulting in file system corruption and data loss.

RESOLUTION:
Code has been added to ensure repair DCO operation is performed only if volume object is enabled so as to ensure detached mirrors are marked for full-resync appropriately.

* 4088076 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery hangs in reconfiguration scenarios in CVR environments, leading to vx commands hanging on the master node.

DESCRIPTION:
As part of RVG recovery, DCM and data volume recovery are performed. Data volume recovery takes a long time due to incorrect IOD handling on Linux platforms.

RESOLUTION:
The IOD handling mechanism is fixed to resolve the RVG recovery hang.

* 4088483 (Tracking ID: 4088484)

SYMPTOM:
DMP_APM module is not getting loaded and throwing following message in the dmesg logs:
Mod load failed for dmpnvme module: dependency conflict
VxVM vxdmp V-5-0-1015 DMP_APM: DEPENDENCY CONFLICT

DESCRIPTION:
The NVMe APM module failed to load because a dmpaa module dependency was added in the APM and the system did not have any A/A type disks, which in turn caused the NVMe module load to fail.

RESOLUTION:
Removed A/A dependency from NVMe APM.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The NVMe disk minor numbers were changing when scandisks was performed.
This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
The device open is fixed by passing O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSaslapm 8.0.0.1800

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that need to be skipped by VxVM and not claimed.
Multiple PE LUNs from different 3PAR enclosures cause an issue for VxDMP to handle.

RESOLUTION:
Fix added to SKIP the 3PAR PE luns by 3PAR ASL to avoid disks being reported in error state.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The NVMe disk minor numbers were changing when scandisks was performed.
This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
The device open is fixed by passing O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSvxvm-8.0.0.1700

* 4081684 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1600

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different name is added, the system becomes
unresponsive. When a node leaves the cluster, in-memory information related to the node is not cleared due to a race condition.

RESOLUTION:
Fixed race condition to clear in-memory information of the node that leaves the cluster.

* 4062799 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node have been upgraded, the object size is interpreted incorrectly. The issue is observed when the number of objects is high, on InfoScale 7.3.1 and above.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support for it needs to be added.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed with the current ASL. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4066213 (Tracking ID: 4052580)

SYMPTOM:
Multipathing not supported for NVMe devices under VxVM.

DESCRIPTION:
NVMe devices being non-SCSI devices, are not considered for multipathing.

RESOLUTION:
Changes introduced to support multipathing for NVMe devices.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSaslapm 8.0.0.1600

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support for it needs to be added.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed with the current ASL. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSvxvm-8.0.0.1400

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different name is added, the system becomes
unresponsive. When a node leaves the cluster, in-memory information related to the node is not cleared due to a race condition.

RESOLUTION:
Fixed race condition to clear in-memory information of the node that leaves the cluster.

* 4065569 (Tracking ID: 4056156)

SYMPTOM:
VxVM package fails to load on SLES15 SP3

DESCRIPTION:
Changes introduced in SLES15 SP3 impacted VxVM block IO functionality. This included changes in block layer structures in kernel.

RESOLUTION:
Changes have been done to handle the impacted functionalities.

* 4066259 (Tracking ID: 4062576)

SYMPTOM:
When hastop -local is used to stop the cluster, dg deport command hangs. Below stack trace is observed in system logs :

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is seen due to kernel-side changes in request queue handling. The existing VxVM code sets the request handling function (make_request_fn) to vxvm_gen_strategy, and this functionality is impacted.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

* 4066735 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
New systemd update removed the support for "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on the systems supporting systemd, it 
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory"

RESOLUTION:
Added a check to validate if the /var/lock/subsys/ directory is supported in vxnm-vxnetd.sh

* 4066834 (Tracking ID: 4046007)

SYMPTOM:
In FSS environment if the cluster name is changed then the private disk region gets corrupted.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of disk private region, the allocation maps cannot be initialized in the memory. This could make allocation maps incorrect and lead to corruption of the private region on the disk.

RESOLUTION:
Code changes have been done to avoid corruption of private disk region.

* 4067237 (Tracking ID: 4058894)

SYMPTOM:
After package installation and reboot, messages regarding the udev rule for ignore_device are observed in /var/log/messages:
systemd-udevd[774]: /etc/udev/rules.d/40-VxVM.rules:25 Invalid value for OPTIONS key, ignoring: 'ignore_device'

DESCRIPTION:
From SLES15 SP3 onwards, ignore_device is deprecated from udev rules and is no longer available for use. Hence these messages are observed in the system logs.

RESOLUTION:
Required changes have been done to handle this defect.

Patch ID: VRTSaslapm 8.0.0.1400

* 4067239 (Tracking ID: 4057110)

SYMPTOM:
Support for ASLAPM on SLES15 sp3

DESCRIPTION:
SLES15 SP3 is a new release, and hence the APM module
should be recompiled with the new kernel.

RESOLUTION:
Compiled APM with new kernel.

Patch ID: VRTSvxvm-8.0.0.1300

* 4065628 (Tracking ID: 4065627)

SYMPTOM:
VxVM modules are not loaded after OS upgrade followed by a reboot .

DESCRIPTION:
Once the stack installation is completed with configuration, the vxvm directory is not created under /lib/modules/<upgraded_kernel>/veritas/ after an OS upgrade.

RESOLUTION:
VxVM code is updated with the required changes .

Patch ID: VRTSvxfs-8.0.0.2900

* 4092518 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File Replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication.
With a large number of jobs, there is a chance of referring to a job that has already been freed, which generates a core dump in the replication service and
causes the job to fail.

RESOLUTION:
The code is updated to take a hold on the job while checking for an invalid job configuration.

* 4097466 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld did not update the job state when it was restarted in recovery mode. The job state therefore remained "running" on the target even after successful replication. With this state on the target, if the job was promoted, the replication process did not create a new checkpoint for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4107367 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. The vxfsreplicate process can then remain in a waiting state indefinitely for the pass-completion reply from the target, leading to a job hang on the source that needs manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.
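
The following is a minimal sketch of the retry-then-report idea using POSIX threads; the function names, the back-off interval, and the reply mechanism are hypothetical, not the repld implementation.

#include <pthread.h>
#include <unistd.h>

#define MAX_RETRIES 5

/* Try to start the pass-completion worker; after repeated failures the caller
 * sends the returned error back to the source instead of staying silent. */
static int start_pass_worker(pthread_t *tid, void *(*fn)(void *), void *arg)
{
        int i, rc = 0;

        for (i = 0; i < MAX_RETRIES; i++) {
                rc = pthread_create(tid, NULL, fn, arg);
                if (rc == 0)
                        return 0;
                sleep(1);       /* back off briefly before retrying */
        }
        return rc;              /* reported to the source as the failure reply */
}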

* 4111457 (Tracking ID: 4117827)

SYMPTOM:
Without a tunable change, the log file permissions are always 600 (EO compliant).

DESCRIPTION:
Tunable values and behavior:

Value                   Behavior
0 (default)          600 permissions, update existing file permissions on upgrade
1                    640 permissions, update existing file permissions on upgrade
2                    644 permissions, update existing file permissions on upgrade
3                    Inherit umask, update existing file permissions on upgrade
10                   600 permissions, do not touch existing file permissions on upgrade
11                   640 permissions, do not touch existing file permissions on upgrade
12                   644 permissions, do not touch existing file permissions on upgrade
13                   Inherit umask, do not touch existing file permissions on upgrade
--------------------------------------------------------------------------------------

A new tunable is added to the vxtunefs command; it is a per-node global tunable (not per filesystem).
For the Executive Order, CPI will have a workflow to update the tunable during installation/upgrade/configuration,
which takes care of updating it on all nodes.

RESOLUTION:
A new tunable is added to the vxtunefs command.
To set the tunable:
/opt/VRTS/bin/vxtunefs -D eo_perm=1
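
The following is an illustrative mapping of the tunable values listed above to file-creation modes; it is a sketch only (the real logic lives inside VxFS), and values 10-13 use the same modes while leaving existing file permissions untouched on upgrade.

#include <sys/types.h>

/* Map eo_perm to a log-file creation mode; 0 means "inherit the process umask". */
static mode_t eo_perm_to_mode(int eo_perm)
{
        switch (eo_perm % 10) {
        case 0:  return 0600;
        case 1:  return 0640;
        case 2:  return 0644;
        case 3:  return 0;      /* inherit umask */
        default: return 0600;   /* unknown values fall back to the default */
        }
}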

* 4112417 (Tracking ID: 4094326)

SYMPTOM:
mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"

DESCRIPTION:
In vx_sl_kmcache_init(), the kmem cache is initialized separately for each level (8 levels in this case). A macro is used to pass the cache name as an argument to kmem_cache_create().

#define VX_SL_KMCACHE_NAME(level)       "vx_sl_node_"#level
#define VX_SL_KMCACHE_CREATE(level)                                     \
                kmem_cache_create(VX_SL_KMCACHE_NAME(level),            \
                                  VX_KMEM_SIZE(VX_SL_KMCACHE_SIZE(level)),\
                                  0, NULL, NULL, NULL, NULL, NULL, 0);


When this macro is used with the loop variable "level" as its argument, the stringification expands to the literal name "vx_sl_node_level" for all 8 levels in the `for` loop, causing all 8 caches to be allocated with the same name.

RESOLUTION:
A separate per-level value is now passed to VX_SL_KMCACHE_NAME, as is done in vx_wb_sl_kmcache_init().
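
The following is an illustrative sketch of the difference: stringifying the macro parameter produces the same literal name for every level, whereas building the name from the runtime level value gives each cache a unique name (shown here with snprintf; the actual fix follows the approach used in vx_wb_sl_kmcache_init()).

#include <stdio.h>

/* Buggy pattern: "vx_sl_node_"#level stringifies the parameter name, not its
 * value, so every loop iteration yields the literal "vx_sl_node_level". */

/* Corrected pattern: build the cache name from the runtime level value. */
static void make_cache_name(char *buf, size_t buflen, int level)
{
        snprintf(buf, buflen, "vx_sl_node_%d", level);  /* vx_sl_node_0 ... vx_sl_node_7 */
}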

* 4114621 (Tracking ID: 4113060)

SYMPTOM:
On SLES15 SP4 and RHEL9, executing a binary on a VxFS mount point resulted in an EINVAL error; dmesg showed the error "kernel read not supported for file".

DESCRIPTION:
This was due to changes in the recent kernel that required modifications in the way we initialize the file operations vector for vxfs.

RESOLUTION:
Added code to correctly update the file operations vector to fix this issue.

* 4118795 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl resulting in "No such device or address" error.

DESCRIPTION:
When the setfacl command is run on directories that have the VX_ATTR_INDIRECT type of ACL attribute, the existing ACL attribute is not removed before the new one is added, which ideally should not happen. This results in getfacl failing with the "No such device or address" error.

RESOLUTION:
Code changes have been made to remove the VX_ATTR_INDIRECT type ACL in the setfacl code path.

* 4119023 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, when correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or only wanted to validate whether the flag was missing. Also, fsck was not able to handle the SOFTWORM flag.

RESOLUTION:
Code is added to not attempt the fix when fsck is run with the -n option. The SOFTWORM scenario is also handled.

* 4119107 (Tracking ID: 4119106)

SYMPTOM:
VxFS module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

* 4123143 (Tracking ID: 4123144)

SYMPTOM:
fsck binary generating coredump

DESCRIPTION:
In internal testing we found that fsck binary generates coredump due to below mentioned assert when we try to repair corrupted file system using below command:
./fsck -o full -y /dev/vx/rdsk/testdg/vol1

ASSERT(fset >= VX_FSET_STRUCT_INDEX)

RESOLUTION:
Added code to set default (primary) fileset by scanning the fset header list.

Patch ID: VRTSvxfs-8.0.0.2600

* 4084880 (Tracking ID: 4084542)

SYMPTOM:
Enhance fsadm defrag report to display if FS is badly fragmented.

DESCRIPTION:
Enhance fsadm defrag report to display if FS is badly fragmented.

RESOLUTION:
Added method to identify if FS needs defragmentation.

* 4088079 (Tracking ID: 4087036)

SYMPTOM:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume.

DESCRIPTION:
FSCK utility exits with an error while running it with the "-o metasave" option on a shared volume. Besides this, while running this utility with "-n" and either "-o metasave" or "-o dumplog", it silently ignores the latter option(s).

RESOLUTION:
Code changes have been done to resolve the above-mentioned failure and also warning messages have been added to inform users regarding mutually exclusive behavior of "-n" and either of "metasave" and "dumplog" options instead of silently ignoring them.

* 4111350 (Tracking ID: 4098085)

SYMPTOM:
The VxFS module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

* 4111910 (Tracking ID: 4090127)

SYMPTOM:
CFS hang in vx_searchau().

DESCRIPTION:
As part of the SMAP transaction changes, the allocator changed its logic to always call mdele_tryhold when getting the emap for a particular EAU, passing nogetdele as 1, which means mdele_tryhold should not ask for delegation when it detects a free EAU without delegation. In this case, the allocator finds such an EAU in the device summary tree without delegation and keeps retrying without ever asking for delegation, hence the hang.

RESOLUTION:
In case a FREE EAU is found without delegation, delegate it back to Primary.

Patch ID: VRTSvxfs-8.0.0.2500

* 4112919 (Tracking ID: 4110764)

SYMPTOM:
Security Vulnerability observed in Zlib a third party component VxFS uses.

DESCRIPTION:
In internal security scans, vulnerabilities in Zlib were found.

RESOLUTION:
Upgrading the third party component Zlib to address these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.2100

* 4095889 (Tracking ID: 4095888)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party components used by VxFS.

DESCRIPTION:
VxFS uses the Sqlite third-party components in which some security vulnerability exist.

RESOLUTION:
VxFS is updated to use newer version of this third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.1800

* 4068960 (Tracking ID: 4073203)

SYMPTOM:
Veritas File Replication might generate a core while replicating files to the target when rename and unlink operations are performed on a file with FCL (File Change Log) mode on.

DESCRIPTION:
The vxfsreplicate process of Veritas File Replicator might get a segmentation fault, with File Change Log mode on, when rename and unlink operations are performed on a file.

RESOLUTION:
Addressed the issue to replicate the files, in scenarios involving rename and unlink operation with FCL mode on.

* 4071108 (Tracking ID: 3988752)

SYMPTOM:
Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and was causing performance issues when used for IOs. Solaris recommends using the LDI framework for all IOs.

RESOLUTION:
Code is modified to use ldi framework for all IO's in solaris.

* 4072228 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.

* 4078335 (Tracking ID: 4076412)

SYMPTOM:
Addressing the initial requirements of Executive Order (EO) 14028, which is intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents. The Executive Order helps improve the nation's cybersecurity and also enhances any organization's cybersecurity and software supply chain integrity.

DESCRIPTION:
The Executive Order helps improve the nation's cybersecurity and also enhances any organization's cybersecurity and software supply chain integrity. Some of the initial requirements enable logging that is compliant with the Executive Order. This comprises command logging, logging unauthorized access in the file system, and logging WORM events on the file system. It also includes changes to display the IP address for Veritas File Replication at the control plane, based on a tunable.

RESOLUTION:
The initial requirements of the EO are addressed in this release.

As per the Executive Order, some of the requirements are tunable based. For example, IP logging wherever applicable (for VFR at the control plane only, not for every data transfer) is tunable based, as is logging of some kernel events, such as WORM events, to syslog.

A new tunable, eo_logging_enable, is introduced. There is a protocol change because of the introduction of this tunable. Although the changes go to the mainline first and then to the update patch on 80all maint for the EO release, the protocol change also affects the update patch; an intermediate protocol version between the existing protocol version and the new version (introduced because of EO) may be needed.

For VFR, the IP addresses of the source and destination need to be logged as part of the EO.
The IP addresses are included in the log while logging the starting/resuming of a job in VFR.
Log location: /var/VRTSvxfs/replication/log/mount_point-job_name.log

There are two ways to fetch the IP addresses of the source and target. One is to use the IP addresses stored in the link structure of a session; these are obtained by resolving the source and target hostnames, may contain both IPv4 and IPv6 addresses for a node, and do not indicate which IP the actual connection used. The second is to use the socket descriptor from an active connection of the session to fetch the source and target IPs associated with it; this yields the actual IP addresses used for the connection between source and target. The change fetches the IP addresses from the socket descriptor after the connections are established.

More details on EO logging and its handling in the initial release for VxFS:
https://confluence.community.veritas.com/pages/viewpage.action?spaceKey=VES&title=EO+VxFS+Scrum+Page

* 4078520 (Tracking ID: 4058444)

SYMPTOM:
Loop mounts using files on VxFS fail on Linux systems running kernel version 4.1 or higher.

DESCRIPTION:
Starting with the 4.1 version of the Linux kernel, the driver loop.ko uses a new API for read and write requests to the file which was not previously implemented in VxFS. This causes the virtual disk reads during mount to fail while using the -o loop option , causing the mount to fail as well. The same functionality worked in older kernels (such as the version found in RHEL7).

RESOLUTION:
Implemented a new API for all regular files on VxFS, allowing usage of the loop device driver against files on VxFS as well as any other kernel drivers using the same functionality.

* 4079142 (Tracking ID: 4077766)

SYMPTOM:
VxFS kernel module might leak memory during readahead of directory blocks.

DESCRIPTION:
VxFS kernel module might leak memory during readahead of directory blocks due to missing free operation of readahead-related structures.

RESOLUTION:
Code in readahead of directory blocks is modified to free up readahead-related structures.

* 4079173 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, release of cluster reservation might fail during unmount operation resulting in a  failure of command fsck with 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to release cluster reservation in unmount operation properly even for cluster-mounted filesystem.

* 4082260 (Tracking ID: 4070814)

SYMPTOM:
Security Vulnerability observed in Zlib a third party component VxFS uses.

DESCRIPTION:
In internal security scans, vulnerabilities in Zlib were found.

RESOLUTION:
Upgrading the third party component Zlib to address these vulnerabilities.

* 4082865 (Tracking ID: 4079622)

SYMPTOM:
Migration used the normal read/write file operations instead of the read/write iter functions; VxFS requires the read/write iter functions from Linux kernel
5.14 onwards.

DESCRIPTION:
Starting with the 5.14 version of the Linux kernel, VxFS uses the read/write iter file operations for migration.

RESOLUTION:
A common read/write function is developed that is called for both the normal and the iter read/write file operations.
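
The following is an illustrative kernel sketch of wiring an iter-based entry point into a file_operations table and dispatching through one common helper; the names are hypothetical and this is not the VxFS implementation.

#include <linux/fs.h>
#include <linux/uio.h>

/* Common read helper shared by the normal and iter-based read paths. */
static ssize_t myfs_do_read(struct kiocb *iocb, struct iov_iter *to)
{
        /* ... filesystem-specific read logic ... */
        return 0;
}

static ssize_t myfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
        return myfs_do_read(iocb, to);
}

static const struct file_operations myfs_file_fops = {
        .read_iter = myfs_read_iter,    /* required by migration and loop.ko on newer kernels */
        /* .write_iter would follow the same pattern */
};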

* 4083335 (Tracking ID: 4076098)

SYMPTOM:
FS migration from ext4 to VxFS may fail on Linux machines with falcon-sensor enabled.

DESCRIPTION:
The falcon-sensor driver installed on the test machines taps system calls such as close and issues additional
VFS calls such as read. Because of this, the VxFS driver received a read file-operation call from the fsmigbgcp
process context. Read operations were allowed only on special files from the fsmigbgcp process context. Since
the file in question was not a special file, the VxFS debug code asserted.

RESOLUTION:
As a fix, reads on non-special files are now allowed from the fsmigbgcp process context.

[Note: Other related issues were fixed in this incident, but they are negative test scenarios (such as trying to overwrite the migration special file, deflist) and are unlikely to be hit in customer environments.]

* 4085623 (Tracking ID: 4085624)

SYMPTOM:
While running fsck with -o and full -y on corrupted FS, fsck may dump core.

DESCRIPTION:
Fsck builds various in-core maps based on on-disk structural files, one such map is dotdotmap (which stores 
info about parent directory). For regular fset (like 999), the dotdotmap is initialized only for primary ilist
(inode list for regular inodes). It is skipped for attribute ilist (inode list for attribute inodes). This is because
attribute inodes do not have parent directories as is the case for regular inodes.

While attempting to resolve inconsistencies in FS metadata, fsck tries to clean up dotdotmap for attribute ilist. 
In the absence of a check, dotdotmap is re-initialized for attribute ilist causing segmentation fault.

RESOLUTION:
In the codepath where fsck attempts to reinitialize the dotdotmap, a check added to skip reinitialization of dotdotmap
for attribute ilist.

* 4085839 (Tracking ID: 4085838)

SYMPTOM:
Command fsck may generate core due to processing of zero size attribute inode.

DESCRIPTION:
Command fsck fails due to allocating memory and dereferencing it for a zero-size attribute inode.

RESOLUTION:
Command fsck is modified to skip processing of zero-size attribute inodes.

* 4086085 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
VxFS mount operation supports context option to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. System panic observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.

* 4088341 (Tracking ID: 4065575)

SYMPTOM:
Write operations might become unresponsive on a locally mounted VxFS file system in a no-space condition.

DESCRIPTION:
Write operations might become unresponsive on a locally mounted VxFS file system in a no-space condition due to a race between two writer threads to take the read-write lock on a file in order to perform a delayed allocation operation on it.

RESOLUTION:
Code is modified so that the thread already holding the read-write lock completes the delayed allocation operation, while the other thread skips over that file.
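
The following is a minimal userspace sketch of the skip-if-busy idea using a pthread read-write lock; it is illustrative only, as VxFS uses its own locking primitives.

#include <pthread.h>

/* Only the thread that can take the write lock performs the delayed
 * allocation; any other thread skips over this file instead of waiting. */
static int try_delayed_alloc(pthread_rwlock_t *rwlock)
{
        if (pthread_rwlock_trywrlock(rwlock) != 0)
                return 0;               /* busy: skip this file */
        /* ... perform the delayed allocation ... */
        pthread_rwlock_unlock(rwlock);
        return 1;
}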

Patch ID: VRTSvxfs-8.0.0.1700

* 4081150 (Tracking ID: 4079869)

SYMPTOM:
Security Vulnerability found in VxFS while running security scans.

DESCRIPTION:
Internal security scans found some vulnerabilities in VxFS third-party components. Attackers can exploit these security vulnerabilities
to attack the system.

RESOLUTION:
Upgrading the third party components to resolve these vulnerabilities.

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security Vulnerability found in VxFS while running security scans.

DESCRIPTION:
Internal security scans found some vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
Upgrading the third party component Zlib to resolve these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.1400

* 4055808 (Tracking ID: 4062971)

SYMPTOM:
All operations, such as ls and create, are blocked on the file system.

DESCRIPTION:
In a WORM file system, directory rename is not allowed. When partition directories are enabled, new directories are created and files are moved under leaf directories based on a hash. Due to the WORM file system restriction, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming in the context of partition directory split and merge.

* 4056684 (Tracking ID: 4056682)

SYMPTOM:
Information about new features on a file system is not displayed by fsadm (the file system administration utility) when queried from the underlying device.

DESCRIPTION:
Information about new features such as WORM (Write Once Read Many) and auditlog is correctly updated and displayed by the fsadm utility when the file system is mounted, but when queried from the underlying device the new feature information is not displayed.

RESOLUTION:
Updated fsadm utility to display the new feature information correctly.

* 4062606 (Tracking ID: 4062605)

SYMPTOM:
Minimum retention time cannot be set if the maximum retention time is not set.

DESCRIPTION:
The tunable - minimum retention time cannot be set if the tunable - maximum retention time is not set. This was implemented to ensure 
that the minimum time is lower than the maximum time.

RESOLUTION:
Setting of minimum and maximum retention time is independent of each other. Minimum retention time can be set without the maximum retention time being set.

* 4065565 (Tracking ID: 4065669)

SYMPTOM:
Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.

DESCRIPTION:
Creation of non-WORM checkpoints fails as all WORM-related validations are extended to non-WORM checkpoints also.

RESOLUTION:
WORM-related validations restricted to WORM fsets only, allowing non-WORM checkpoints to be created.

* 4065651 (Tracking ID: 4065666)

SYMPTOM:
All operations, such as ls and create, are blocked on a file system directory that contains WORM-enabled files whose retention period has not expired.

DESCRIPTION:
In a WORM file system, files whose retention period has not expired cannot be renamed. When partition directories are enabled, new directories are created and files are moved under leaf directories based on a hash. Due to the WORM file system restriction, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Directory renaming of files is allowed, even if the retention period has not expired, in the context of partition directory split and merge.

Patch ID: VRTSvxfs-8.0.0.1300

* 4065679 (Tracking ID: 4056797)

SYMPTOM:
The VxFS module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSvxfen-8.0.0.2500

* 4117657 (Tracking ID: 4108561)

SYMPTOM:
The vxfen print-keys internal utility was not working because of an internal array overrun.

DESCRIPTION:
The vxfen print-keys internal utility does not work if the number of keys exceeds 8; it then returns garbage values.
The array keylist[i].key of 8 bytes is overrun at byte offset 8 using index y (which evaluates to 8).

RESOLUTION:
The internal loop is restricted to VXFEN_KEYLEN. Reading reservations now works correctly.

* 4124421 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSvxfen-8.0.0.2300

* 4111571 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSvxfen-8.0.0.1800

* 4087166 (Tracking ID: 4087134)

SYMPTOM:
The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting the vxfen service, if the parent directory path of vxfen.log is not present.

DESCRIPTION:
Typically, if the parent directory path of vxfen.log is not present, the following error message appears after starting the vxfen service:
'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed'.

RESOLUTION:
The parent directory path for the vxfen.log file is now created if it is not present.
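
As a manual workaround on systems without this fix, the missing directory can be created before starting the vxfen service; a minimal sketch using the path from the error message:
    # mkdir -p /var/VRTSvcs/log/vxfen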

* 4088061 (Tracking ID: 4089052)

SYMPTOM:
On RHEL9, Node panics while running vxfenswap as a part of Online Coordination Point Replacement operation.

DESCRIPTION:
RHEL9 introduced a fortify panic that is triggered if the kernel's static check detects a buffer overflow. This check was wrongly identifying a buffer overflow where strings are copied through unions.

RESOLUTION:
The code now uses bcopy internally for this scenario, so the kernel-side check is skipped.

Patch ID: VRTSvxfen-8.0.0.1400

* 3951882 (Tracking ID: 4004248)

SYMPTOM:
The vxfend process segfaults and dumps core.

DESCRIPTION:
During a fencing race, vxfend sometimes crashes and generates a core dump.

RESOLUTION:
vxfend internally uses fork and exec to execute sub-tasks. The new child process was using the same file descriptors for logging. This simultaneous read of the same file through a single file descriptor resulted in incorrect reads and hence the process crash and core dump. This fix creates a new file descriptor for the child process.

Patch ID: VRTSveki-8.0.0.2800

* 4118568 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging fails because storageapi-specific files are missing.

DESCRIPTION:
While creating the build area for different components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation was failing because the storageapi changes 
were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support in the Veki makefiles for creating the storageapi build area, packaging storageapi through Veki, and building storageapi through Veki.
This packages storageapi along with Veki and resolves all interdependencies.

* 4119216 (Tracking ID: 4119215)

SYMPTOM:
VEKI module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
VEKI module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSveki-8.0.0.2400

* 4111580 (Tracking ID: 4111579)

SYMPTOM:
The VEKI module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
VEKI module is updated to accommodate the changes in the kernel and load as expected on SLES15SP4.

Patch ID: VRTSveki-8.0.0.1800

* 4056647 (Tracking ID: 4055072)

SYMPTOM:
Upgrading the VRTSveki package using yum reports the following error: "Starting veki /etc/vx/veki: line 51: [: too many arguments".

DESCRIPTION:
While upgrading the VRTSveki package, the presence of multiple module directories might result in the upgrade script printing this error message.

RESOLUTION:
The VRTSveki upgrade script is modified to check only for the module directory related to the current kernel version.

Patch ID: VRTSveki-8.0.0.1200

* 4070027 (Tracking ID: 4066550)

SYMPTOM:
After a reboot, the LLT and GAB services fail to start because the Veki service start times out.

DESCRIPTION:
After a reboot, when systemd tries to bring up multiple services in parallel with Veki, the Veki startup times out. The default Veki startup timeout was 90 seconds; it is increased to 300 seconds with this patch.

RESOLUTION:
The Veki start timeout is increased to 300 seconds.

Patch ID: VRTSvcsea-8.0.0.2500

* 4118769 (Tracking ID: 4073508)

SYMPTOM:
Oracle virtual fire-drill fails because the Oracle password file location has changed starting with Oracle 21c.

DESCRIPTION:
The Oracle password file has been moved to $ORACLE_BASE/dbs starting with Oracle 21c.

RESOLUTION:
Environment variables are now used to point to the updated path of the password file.

From Oracle 21c onwards, it is mandatory to configure the environment file path in the EnvFile attribute. This file must contain the ORACLE_BASE path for the 
Oracle virtual fire-drill feature to work. 

Sample EnvFile content with ORACLE_BASE path for Oracle 21c:
    [root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile
    ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;

Sample attribute value:
    EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"
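
A hedged example of pointing an existing Oracle resource at such an environment file (the resource name oradb_res is illustrative):
    # haconf -makerw
    # hares -modify oradb_res EnvFile "/opt/VRTSagents/ha/bin/Oracle/envfile"
    # haconf -dump -makero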

Patch ID: VRTSvcsea-8.0.0.1800

* 4030767 (Tracking ID: 4088595)

SYMPTOM:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue.

DESCRIPTION:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue. For example:
    ./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
    Cluster prechecks and validation                                   Done
    Taking PDB resource [pdb1_res] offline                             Done
    Modification of cluster configuration                              Done
    VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%
    VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster
    Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res] Done
    For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
The hapdbmigrate utility is modified to ensure that enough time elapses between the probe of the PDB resource and the online of the CDB group.

* 4079559 (Tracking ID: 4064917)

SYMPTOM:
Oracle agent fails to generate ora_api (which is used for Intentional Offline functionality of Oracle agent) using build_oraapi.sh script for Oracle 21c.

DESCRIPTION:
The build_oraapi.sh script could not find the library named libdbtools21.a because the Oracle 21c environment provides only the generic library '$ORACLE_HOME/rdbms/lib/libdbtools.a'.

RESOLUTION:
The script now picks the generic library in an Oracle 21c database environment and the database-version-specific library in older database environments.

Patch ID: VRTSvcsag-8.0.0.2500

* 4118318 (Tracking ID: 4113151)

SYMPTOM:
A dependent DiskGroup resource fails to come online because the disk group import fails.

DESCRIPTION:
VMwareDisksAgent reports its resource online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to increase the retry count to work around this problem, but the same value cannot be applied to every environment.

RESOLUTION:
A finite wait is added for the VMware disk to appear in the vxdmp database before the online operation completes.

* 4118448 (Tracking ID: 4075950)

SYMPTOM:
When an IPv6 VIP switches from node1 to node2 in a cluster,
it takes longer for the neighbor information to be updated and for traffic to reach node2, which now holds the reassigned address.

DESCRIPTION:
After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as an IPv4 VIP announces itself through gratuitous ARP, when an IPv6 VIP switches from node1 to node2 the network must be updated with the MAC address change.

RESOLUTION:
Network devices that communicate with the VIP are not able to establish a connection with it until the VIP is pinged from the switch or the 'ip -6 neighbor flush all' command is run on the cluster nodes. Neighbor-flush logic is now added to the IP/MultiNIC agents so that the MAC address change during a floating VIP switchover is propagated to the network.
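
For reference, the manual step that the agents now automate is to clear stale neighbor entries after the switchover, either by pinging the VIP from the switch or by running the following on the cluster nodes:
    # ip -6 neighbor flush all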

* 4118455 (Tracking ID: 4118454)

SYMPTOM:
When the root user's login shell is set to /sbin/nologin in the /etc/passwd file, a Process agent resource fails to come online.

DESCRIPTION:
The following errors were logged in engine_A.log:
2023/05/31 11:34:52 VCS NOTICE V-16-10031-20704 Process:Process:imf_getnotification:Received notification for vxamf-group sendmail
2023/05/31 11:35:38 VCS ERROR V-16-10031-9502 Process:sendmail:online:Could not online the resource, make sure user-name is correct.
2023/05/31 11:35:39 VCS INFO V-16-2-13716 Thread(140147853162240) Resource(sendmail): Output of the completed operation (online)
==============================================
This account is currently not available.
==============================================

RESOLUTION:
The Process agent is enhanced to support the nologin shell for the root user. If the user's shell is set to /sbin/nologin, the agent starts the process using the /bin/bash shell.
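
To check whether this scenario applies on a node, the root login shell can be inspected; a minimal sketch (the output line is illustrative):
    # grep '^root:' /etc/passwd
    root:x:0:0:root:/root:/sbin/nologin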

* 4118767 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the bundled Application agent incorrectly removes a required extra space from the monitored process, as found through the recommended CLI process test.

DESCRIPTION:
The extra space in the MonitorProcesses entry of ArgListValues is even visible when the resource is displayed.

RESOLUTION:
For the monitored process (not the program), only leading and trailing spaces are now removed; extra spaces between words are preserved.

Patch ID: VRTSvcsag-8.0.0.1800

* 4030767 (Tracking ID: 4088595)

SYMPTOM:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue.

DESCRIPTION:
The hapdbmigrate utility fails to online the Oracle service group due to a timing issue. For example:
    ./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
    Cluster prechecks and validation                                   Done
    Taking PDB resource [pdb1_res] offline                             Done
    Modification of cluster configuration                              Done
    VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%
    VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster
    Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res] Done
    For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
The hapdbmigrate utility is modified to ensure that enough time elapses between the probe of the PDB resource and the online of the CDB group.

* 4058802 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes are done to support Oracle 21c with Storage Foundation for Databases.

* 4079372 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes are done to support Oracle 21c with Storage Foundation for Databases.

* 4079559 (Tracking ID: 4064917)

SYMPTOM:
Oracle agent fails to generate ora_api (which is used for Intentional Offline functionality of Oracle agent) using build_oraapi.sh script for Oracle 21c.

DESCRIPTION:
The build_oraapi.sh script could not find the library named libdbtools21.a because the Oracle 21c environment provides only the generic library '$ORACLE_HOME/rdbms/lib/libdbtools.a'.

RESOLUTION:
The script now picks the generic library in an Oracle 21c database environment and the database-version-specific library in older database environments.

* 4081774 (Tracking ID: 4083099)

SYMPTOM:
When OverlayIP is configured, the AzureIP resource offline operation fails.

DESCRIPTION:
The AzureIP resource fails to go offline when OverlayIP is configured because the Azure API routes.delete, part of the azure-mgmt-network module, has been deprecated.

RESOLUTION:
The Azure agent now uses the new API routes.begin_delete, as recommended by Azure.

Patch ID: VRTSvcs-8.0.0.2300

* 4038088 (Tracking ID: 4100720)

SYMPTOM:
HA fire drill fails because the deprecated 'netstat' command is used on a SLES 15 cluster.

DESCRIPTION:
The netstat command is deprecated in SLES 15 starting with SP3, so an alternative must be used.

RESOLUTION:
The ip command is now used instead of the deprecated netstat command.
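
Illustrative ip equivalents of common netstat invocations (not necessarily the exact commands the fire drill now runs):
    # ip route show      (instead of: netstat -rn)
    # ip -s link         (instead of: netstat -i)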

Patch ID: VRTSvcs-8.0.0.2100

* 4103077 (Tracking ID: 4103073)

SYMPTOM:
Security vulnerabilities present in existing version of Netsnmp.

DESCRIPTION:
Upgrading Netsnmp component to fix security vulnerabilities

RESOLUTION:
Upgrading Netsnmp component to fix security vulnerabilities for security patch IS 8.0U1_SP4.

Patch ID: VRTSvcs-8.0.0.1800

* 4084675 (Tracking ID: 4089059)

SYMPTOM:
File permission for gcoconfig.log is not 0600.

DESCRIPTION:
The default file permission was 0644, which allowed read access to group and others, so the file permission needs to be updated.

RESOLUTION:
The file is now created with permission 0600 so that it is readable and writable only by the owner.
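
With the fix in place, the permissions on the log file can be verified; a sketch, assuming the default log location under /var/VRTSvcs/log (the exact path and the output line are illustrative):
    # ls -l /var/VRTSvcs/log/gcoconfig.log
    -rw------- 1 root root 1024 Jul 10 12:00 /var/VRTSvcs/log/gcoconfig.log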

Patch ID: VRTSvcs-8.0.0.1400

* 4065820 (Tracking ID: 4065819)

SYMPTOM:
Protocol version upgrade from Access Appliance 7.4.3.200 to 8.0 failed.

DESCRIPTION:
During a rolling upgrade, the IPM message 'MSG_CLUSTER_VERSION_UPDATE' is generated and, as part of it, validations are performed for bumping up the protocol. If validation succeeds, a broadcast message to bump up the cluster protocol is sent and a success message is immediately returned to haclus. Thus, the success message is sent before the broadcast message that actually updates the protocol version is processed. This window is very short, and after the broadcast message is successfully processed the protocol version is properly updated in the configuration files and the command shows the correct value.

RESOLUTION:
Instead of immediately returning a success message, the haclus CLI now waits until the upgrade is applied over the broadcast channel before the success message is sent.

Patch ID: VRTSspt-8.0.0.1400

* 4085610 (Tracking ID: 4090433)

SYMPTOM:
iostat and vmstat command option changes in FirstLook

DESCRIPTION:
The time stamp for each collected statistic is missing for vmstat on Linux, and for the iostat and vmstat commands on Solaris and AIX.

RESOLUTION:
Following are the new flags introduced for each platform separately (a usage sketch follows this list):
    1) Linux:
       vmstat:
           -t: Append timestamp to each line
    2) Solaris:
       iostat:
           -Td: Display a time stamp. Specify d for standard date format.
       vmstat:
           -Td: Display a time stamp. Specify d for standard date format.
    3) AIX:
       iostat:
           -s: Specifies the system throughput report
           -T: Displays the time stamp.
           -D: Displays the extended tape/drive utilization report.
           -l: Displays the output in long listing mode.
       vmstat:
           -t: Prints the time-stamp next to each line of output of vmstat.
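
A usage sketch of the commands with the new flags, as FirstLook might invoke them (interval and count values are illustrative, and the exact invocations used by FirstLook may differ):
    Linux:    # vmstat -t 5 3
    Solaris:  # iostat -Td 5 3
              # vmstat -Td 5 3
    AIX:      # iostat -DlT 5 3
              # vmstat -t 5 3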

* 4088066 (Tracking ID: 4090446)

SYMPTOM:
vxstat log collection improvements in FirstLook

DESCRIPTION:
Currently, FirstLook collects only volume-level statistics through the vxstat command.

RESOLUTION:
vxstat options are added to collect statistics for volume, plex, disk, and subdisk objects.
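
An illustrative vxstat invocation that covers all four object types (the disk group name testdg, interval, and count are placeholders):
    # vxstat -g testdg -v -p -d -s -i 5 -c 3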

* 4091983 (Tracking ID: 4092090)

SYMPTOM:
FirstLook should have OS flavor information stored in its log directory.

DESCRIPTION:
FirstLook collects "uname -a" as part of its log collection but this does not give OS flavour information on Linux platform like Rhel7/8.

RESOLUTION:
The FirstLook code base now includes an OS_info file inside the system folder that contains the OS name and its flavor.

* 4096274 (Tracking ID: 4095687)

SYMPTOM:
While restoring a version-8 metasave on a sparse volume, the restore operation does not complete correctly.

DESCRIPTION:
While restoring a version-8 metasave on a sparse volume, the restore does not complete correctly because of concurrent I/Os triggered on the sparse volume. Replaying the same metasave on a normal volume/vset or on a sparse file (for non-MVFS) works as expected.

RESOLUTION:
Concurrent writes are now disabled by default during the metasave restore operation. A hidden option ("-p") has been added to enable multi-threading during the metasave restore if needed (mostly while replaying on normal volumes or a volume set).

Patch ID: VRTSrest-2.0.0.1300

* 4088973 (Tracking ID: 4089451)

SYMPTOM:
When a read-only file system was created on a volume, GET on the mount point's details was throwing an error.

DESCRIPTION:
When a read-only file system was created on a volume, GET on the mount point's details was throwing an error because the command being used does not work for a read-only file system.

RESOLUTION:
The appropriate command is now used to get the mount point details.

* 4089033 (Tracking ID: 4089453)

SYMPTOM:
Some VCS REST APIs were crashing the Gunicorn worker.

DESCRIPTION:
Calling some VCS-related APIs was crashing the Gunicorn worker handling the request; a new worker was then spawned automatically.

RESOLUTION:
Fixed the related foreign function call interface in the source code.

* 4089041 (Tracking ID: 4089449)

SYMPTOM:
GET resources API on empty service group was throwing an error.

DESCRIPTION:
When the GET resources API was called on an empty service group, it returned an error because the scenario was not handled.

RESOLUTION:
The scenario is now handled in the code to resolve the issue.

* 4089046 (Tracking ID: 4089448)

SYMPTOM:
Logging in REST API was not in EO-compliant format.

DESCRIPTION:
Timestamp format is not EO-compliant and some attributes were missing for EO compliance.

RESOLUTION:
The timestamp format is changed, and new attributes such as node name, response time, source and destination IP addresses, and username are added to the REST server logs.

Patch ID: -3.9.2.24

* 4114375 (Tracking ID: 4113851)

SYMPTOM:
Open CVEs were detected for the Python programming language and other Python modules used in VRTSpython.

DESCRIPTION:
Some open CVEs are exploitable in VRTSpython for IS 8.0.

RESOLUTION:
VRTSpython is patched to address all the open CVEs impacting IS 8.0.

Patch ID: -5.34.0.4

* 4072234 (Tracking ID: 4069607)

SYMPTOM:
Security vulnerability detected on VRTSperl 5.34.0.0 released with Infoscale 8.0.

DESCRIPTION:
Security vulnerability detected in the Net::Netmask module.

RESOLUTION:
Upgraded the Net::Netmask module and re-created VRTSperl version 5.34.0.1 to fix the vulnerability.

* 4075150 (Tracking ID: 4075149)

SYMPTOM:
Security vulnerabilities detected in OpenSSL packaged VRTSperl/VRTSpython released with Infoscale 8.0.

DESCRIPTION:
Security vulnerabilities detected in the OpenSSL.

RESOLUTION:
Upgraded the OpenSSL version and re-created the VRTSperl/VRTSpython packages to fix the vulnerabilities.

Patch ID: VRTSodm-8.0.0.2900

* 4057432 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock. Corrected vxgms dependency in odm service file.
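
The serialization described above is conceptually equivalent to running depmod under a file lock; a minimal sketch of the idea (the lock file path is illustrative and not the one used by the packaging scripts):
    # flock /var/lock/vx_depmod.lock depmod -a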

* 4119105 (Tracking ID: 4119104)

SYMPTOM:
ODM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSodm-8.0.0.2600

* 4111349 (Tracking ID: 4092232)

SYMPTOM:
The ODM module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSodm-8.0.0.2500

* 4114322 (Tracking ID: 4114321)

SYMPTOM:
VRTSodm driver will not load with VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm with latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1800

* 4089136 (Tracking ID: 4089135)

SYMPTOM:
VRTSodm driver does not load with VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm with latest VRTSvxfs.

RESOLUTION:
Recompiled VRTSodm with the new VRTSvxfs.

Patch ID: VRTSodm-8.0.0.1200

* 4065680 (Tracking ID: 4056799)

SYMPTOM:
The ODM module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSllt-8.0.0.2500

* 4124419 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSllt-8.0.0.2300

* 4111469 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

* 4112345 (Tracking ID: 4087662)

SYMPTOM:
During memory fragmentation, the LLT module may fail to allocate large chunks of memory, leading to node eviction or a node not being able to join the cluster.

DESCRIPTION:
When system memory is heavily fragmented, LLT module fails to allocate memory in the form of Linux socket buffers (SKB) from the OS. Due to this a 
cluster node may not be able to join the cluster or a node may get evicted from the cluster.

RESOLUTION:
This HF updates LLT module so that memory is allocated from private memory pools maintained inside LLT and if pools are exhausted LLT module tries 
to allocate memory through vmalloc.

Patch ID: VRTSllt-8.0.0.1800

* 4061158 (Tracking ID: 4061156)

SYMPTOM:
An I/O error occurs on the /sys/kernel/slab folder.

DESCRIPTION:
After the LLT module is loaded, the ls command throws an I/O error on the /sys/kernel/slab folder.

RESOLUTION:
The I/O error on the /sys/kernel/slab folder after loading the LLT module is now fixed.

* 4079637 (Tracking ID: 4079636)

SYMPTOM:
The kernel panics with a null pointer dereference in llt_dump_mblk when LLT is configured over IPsec.

DESCRIPTION:
LLT uses the skb's sp pointer to chain socket buffers internally. When LLT is configured over IPsec, LLT receives skbs with the sp pointer set by the IP layer. These skbs were wrongly identified by LLT as chained skbs. The sp pointer field is now reset before it is reused for internal chaining.

RESOLUTION:
The panic is no longer observed after applying this patch.

* 4079662 (Tracking ID: 3981917)

SYMPTOM:
LLT UDP multiport was previously supported only on 9000 MTU networks.

DESCRIPTION:
Previously LLT UDP multiport configuration required network links to have 9000 MTU. We have enhanced UDP multiport code, so that now this LLT feature can be configured/run on 1500 MTU links as well.

RESOLUTION:
LLT UDP multiport can now be configured on 1500 MTU networks as well.

* 4080630 (Tracking ID: 4046953)

SYMPTOM:
During LLT configuration, messages related to 9000 MTU are printed as errors.

DESCRIPTION:
On Azure, error messages related to 9000 MTU are logged. 
These messages indicate that 9000 MTU networks should be used for optimal performance; they are suggestions rather than actual errors.

RESOLUTION:
Because 9000 MTU is not supported on Azure, these messages are removed to avoid confusion.

Patch ID: VRTSllt-8.0.0.1400

* 4066063 (Tracking ID: 4066062)

SYMPTOM:
Node panic

DESCRIPTION:
A node panic is observed in the LLT UDP multiport configuration with the vx ioship stack.

RESOLUTION:
When LLT receives an acknowledgement, it was blindly freeing the packet and the corresponding client frags without checking the client status. If the client is unregistered, the free functions of the frags are invalid and must not be called. LLT now checks the client status before freeing.

* 4066667 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, it may lead to a core dump by the lltconfig process.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.

Patch ID: VRTSgms-8.0.0.2800

* 4057427 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock.

* 4119111 (Tracking ID: 4119110)

SYMPTOM:
GMS module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSgms-8.0.0.2400

* 4111346 (Tracking ID: 4092229)

SYMPTOM:
The GMS module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSgms-8.0.0.1800

* 4079190 (Tracking ID: 4071136)

SYMPTOM:
/etc/vx/gms.config file is not created during GMS rpm installation.

DESCRIPTION:
/etc/vx/gms.config file is not created when installing the GMS rpm. It has to be manually created by the user to control GMS start/stop through GMS_START macro.

RESOLUTION:
Changed GMS rpm spec to create gms.config during installation of GMS rpm.
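
A sketch of what /etc/vx/gms.config might contain to control startup, assuming the same KEY=value convention used by other InfoScale start-up configuration files (confirm the exact variable and value semantics in the product documentation):
    GMS_START=1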

Patch ID: VRTSgms-8.0.0.1200

* 4065686 (Tracking ID: 4056803)

SYMPTOM:
The GMS module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSglm-8.0.0.2800

* 4119113 (Tracking ID: 4119112)

SYMPTOM:
GLM module failed to load on SLES15-SP4 azure kernel.

DESCRIPTION:
This issue occurs due to changes in SLES15-SP4 azure kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the SLES15-SP4 azure kernel and load as expected on SLES15-SP4 azure kernel.

Patch ID: VRTSglm-8.0.0.2400

* 4087258 (Tracking ID: 4087259)

SYMPTOM:
While upgrading CFS protocol from 90 to 135 (latest), system may panic with following stack trace.

schedule()
vxg_svar_sleep_unlock() 
vxg_create_kthread()
vxg_startthread()
vxg_thread_create()
vxg_leave_local_scopes()
vxg_recv_restart_reply()
vxg_recovery_helper()
vxg_kthread_init()
kthread()

DESCRIPTION:
In GLM (Group Lock Manager), while upgrading the GLM protocol version from 90 to 135 (latest), GLM needs to process structures for the local scope functionality. GLM 
creates child threads to do this processing. The child threads were created while holding a spin lock, which caused this issue.

RESOLUTION:
The code is changed to create the child threads for processing local scope structures after the spin lock is released.

* 4111341 (Tracking ID: 4092225)

SYMPTOM:
The GLM module fails to load on SLES15 SP4.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP4 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP4.

Patch ID: VRTSglm-8.0.0.1800

* 4089163 (Tracking ID: 4089162)

SYMPTOM:
The GLM module fails to load on SLES and RHEL.

DESCRIPTION:
The GLM module fails to load on SLES and RHEL.

RESOLUTION:
GLM module is updated to load as expected on SLES and RHEL.

Patch ID: VRTSglm-8.0.0.1200

* 4065685 (Tracking ID: 4056801)

SYMPTOM:
The GLM module fails to load on SLES15 SP3.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP3 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP3.

Patch ID: VRTSgab-8.0.0.2500

* 4124420 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSgab-8.0.0.2300

* 4111469 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

* 4111618 (Tracking ID: 4106321)

SYMPTOM:
After stopping HAD on the SFCFHA stack on a SLES15 SP4 minor kernel (kernel version > 5.14.21-150400.24.28), a panic is observed. An assert is hit when hastop is called on the SLES15 SP4 kernel.

DESCRIPTION:
Kernel vendors set the TIF_NOTIFY_SIGNAL flag to break out of wait loops even though there is no pending signal for that thread, so signal_pending() always returns true. This is a false alarm to GAB, which assumes that a signal was delivered to the waiting thread, causing a crash.

RESOLUTION:
To avoid this false alarm from the signal_pending() API, GAB now clears the TIF_NOTIFY_SIGNAL flag in the context of the waiting thread.

Patch ID: VRTSgab-8.0.0.1800

* 4089723 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed need recompilation with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSgab-8.0.0.1300

* 4067091 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTSfsadv-8.0.0.2600

* 4103001 (Tracking ID: 4103002)

SYMPTOM:
Replication failures observed in internal testing

DESCRIPTION:
Replication related code changes done in VxFS repository to fix replication failures. The replication binaries are part of VRTSfsadv.

RESOLUTION:
Compiled VRTSfsadv with VxFS changes.

Patch ID: VRTSfsadv-8.0.0.2100

* 4092150 (Tracking ID: 4088024)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.

DESCRIPTION:
VxFS uses OpenSSL third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use newer version (1.1.1q) of this third-party components in which the security vulnerabilities have been addressed. To accommodate the changes vxfs_solutions is added with libboost_system entries in Makefile [dedup/pdde/sdk/common/Makefile].

Patch ID: VRTSfsadv-8.0.0.1200

* 4066092 (Tracking ID: 4057644)

SYMPTOM:
A warning appears in dmesg: SysV service '/etc/init.d/fsdedupschd' lacks a native systemd unit file.

DESCRIPTION:
When the fsdedupschd service is started through init, this warning appears because init.d scripts will soon be deprecated.

RESOLUTION:
Code changes are made to make fsdedupschd systemd compatible.
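
With the systemd-compatible service in place, its status can be checked in the usual way; a sketch, assuming the unit keeps the fsdedupschd name:
    # systemctl status fsdedupschd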

Patch ID: VRTSdbed-8.0.0.1800

* 4079372 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes are done to support Oracle 21c with Storage Foundation for Databases.

Patch ID: VRTSdbac-8.0.0.2400

* 4124424 (Tracking ID: 4124417)

SYMPTOM:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel.

DESCRIPTION:
Veritas InfoScale Availability does not support SUSE Linux Enterprise Server 15 SP4 for Azure kernel as SLES community maintains Azure kernel separately.

RESOLUTION:
Veritas InfoScale Availability support for SUSE Linux Enterprise Server 15 SP4 for Azure kernel is now introduced.

Patch ID: VRTSdbac-8.0.0.2300

* 4111610 (Tracking ID: 4090439)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 4 (SLES 15 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server versions released after SLES 15 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP4 is
now introduced.

Patch ID: VRTSdbac-8.0.0.1800

* 4089728 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed need recompilation with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSdbac-8.0.0.1200

* 4056997 (Tracking ID: 4056991)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 3 (SLES 15 SP3).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 15 SP2.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP3 is
now introduced.

Patch ID: VRTScps-8.0.0.1900

* 4091306 (Tracking ID: 4088158)

SYMPTOM:
Security vulnerabilities exist in the SQLite third-party components used by VCS.

DESCRIPTION:
VCS uses SQLite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VCS is updated to use newer versions of Sqlite third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTScps-8.0.0.1800

* 4073050 (Tracking ID: 4018218)

SYMPTOM:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2

DESCRIPTION:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

RESOLUTION:
This hotfix updates the VRTScps module so that InfoScale CP Client can establish secure communication with a CP server using TLSv1.2. However, to enable TLSv1.2 communication between the CP client and CP server after installing this hotfix, you must perform the following steps:

To configure TLSv1.2 for CP server
1. Stop the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -offline <vxcpserv> -sys <sysname> 
2. Check that the vxcpserv daemon is stopped using the following command:
   # ps -eaf | grep "/opt/VRTScps/bin/vxcpserv"
3. When the vxcpserv daemon is stopped, edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
4. Start the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -online <vxcpserv> -sys <sysname>

To configure TLSv1.2 for CP Client
Edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
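
For reference, after either change the relevant entries in "/etc/vxcps_ssl.properties" would look like the following (assuming '#' as the comment character; other entries in the file remain unchanged):
   # openSSL.server.requireTLSv1 = true
   openSSL.server.requireTLSv1.2 = true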

Patch ID: VRTScps-8.0.0.1400

* 4066225 (Tracking ID: 4056666)

SYMPTOM:
The Error writing to database message may appear in syslogs intermittently on InfoScale CP servers.

DESCRIPTION:
Typically, when a coordination point server (CP server) is shared among multiple InfoScale clusters, the following messages may intermittently appear in syslogs:
CPS CRITICAL V-97-1400-501 Error writing to database! :database is locked.
These messages appear in the context of the CP server protocol handshake between the clients and the server.

RESOLUTION:
The CP server is updated so that, in addition to its other database write operations, all the ones for the CP server protocol handshake action are also synchronized.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-8.0.0.2900.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-8.0.0.2900.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.0.2900.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.0.2900.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # pwd
    /tmp/hf
    # ./installVRTSinfoscale800P2900 [<host1> <host2>...]

You can also install this patch together with 8.0 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
1. Check for any cumulative patch (CP, that is, an Update release) released prior to this platform patch, and install this platform patch along with that CP.
2. If any CP is released on top of this platform patch, this platform patch will be included in it; check for and install the latest CP.
3. If internet access is not available, installation of the patch must be performed together with the latest CPI patch downloaded from the Download Center.

NOTE: Use the following steps to upgrade to the latest SLES15 SP4 kernel.

1. Install the SLES15 SP4 GA kernel.
2. Install the 8.0 stack + IS 8.0 MR2 (this patch).
3. Configure the product.
4. Upgrade to the latest minor kernel.
5. Reboot.


OTHERS
------
NONE