infoscale-rhel8_x86_64-Patch-8.0.0.2900

 Basic information
Release type: Patch
Release date: 2023-09-27
OS update support: RHEL8 x86-64 Update 8
Technote: None
Documentation: None
Download size: 1.09 GB
Checksum: 1341730258

 Applies to one or more of the following products:
InfoScale Availability 8.0 On RHEL8 x86-64
InfoScale Enterprise 8.0 On RHEL8 x86-64
InfoScale Foundation 8.0 On RHEL8 x86-64
InfoScale Storage 8.0 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches (release date):
infoscale-rhel8_x86_64-Patch-8.0.0.2100 (obsolete) 2023-01-23
infoscale-rhel8.6_x86_64-Patch-8.0.0.1500 (obsolete) 2022-05-10
vm-rhel8_x86_64-Patch-8.0.0.1200 (obsolete) 2022-02-16

 Fixes the following incidents:
3951882, 4030767, 4053178, 4055808, 4056647, 4056684, 4057420, 4057427, 4057432, 4058590, 4058802, 4061114, 4061158, 4062606, 4062799, 4064783, 4064784, 4064785, 4064786, 4064788, 4065565, 4065628, 4065651, 4065820, 4065841, 4066063, 4066213, 4066225, 4066259, 4066667, 4067609, 4067635, 4068407, 4068960, 4070027, 4070098, 4071108, 4072228, 4072234, 4072717, 4072752, 4073050, 4073695, 4073696, 4075150, 4078335, 4078520, 4078531, 4079142, 4079173, 4079345, 4079372, 4079559, 4079637, 4079662, 4080041, 4080105, 4080122, 4080269, 4080276, 4080277, 4080579, 4080630, 4080845, 4080846, 4081150, 4081684, 4081774, 4081790, 4082260, 4082865, 4083335, 4083337, 4083948, 4084675, 4085610, 4085619, 4085623, 4085839, 4086085, 4087166, 4087233, 4087439, 4087791, 4088061, 4088066, 4088076, 4088341, 4088483, 4088762, 4088973, 4089033, 4089041, 4089046, 4089163, 4089723, 4089724, 4089728, 4090090, 4091306, 4091983, 4092150, 4092518, 4095889, 4096274, 4097466, 4100204, 4100453, 4100923, 4100925, 4100995, 4101000, 4101232, 4101233, 4101808, 4102352, 4102502, 4102924, 4102973, 4103001, 4103077, 4107367, 4108381, 4108392, 4108582, 4108584, 4108585, 4108933, 4108947, 4108951, 4108952, 4108953, 4108954, 4109554, 4110560, 4111442, 4111457, 4111623, 4112417, 4112549, 4113012, 4113057, 4113310, 4113324, 4113357, 4113661, 4113663, 4113664, 4113666, 4113911, 4113912, 4114019, 4114020, 4114021, 4114375, 4114654, 4114656, 4114963, 4115251, 4115252, 4115381, 4116411, 4116421, 4116548, 4116551, 4116557, 4116559, 4116562, 4116565, 4116567, 4116688, 4116885, 4117110, 4117385, 4118108, 4118111, 4118318, 4118448, 4118455, 4118568, 4118733, 4118767, 4118769, 4118779, 4118795, 4118845, 4119023, 4119087, 4119257, 4119276, 4119438, 4120350, 4121241, 4121714, 4123143, 4124200

 Patch ID:
VRTSrest-2.0.0.1300-linux
VRTSdbed-8.0.0.1800-RHEL
VRTScps-8.0.0.1900-RHEL8
VRTSfsadv-8.0.0.2200-RHEL8
VRTSdbac-8.0.0.2200-RHEL8
VRTSgab-8.0.0.2200-RHEL8
VRTSvxfen-8.0.0.2200-RHEL8
VRTSglm-8.0.0.2300-RHEL8
VRTSsfmh-8.0.0.411_Linux.rpm
VRTSpython-3.9.2.24-RHEL8
VRTSllt-8.0.0.2400-RHEL8
VRTSamf-8.0.0.2400-RHEL8
VRTSvcs-8.0.0.2400-RHEL8
VRTSspt-8.0.0.1400-RHEL8
VRTScavf-8.0.0.2800-GENERIC
VRTSveki-8.0.0.2500-RHEL8
VRTSvcsag-8.0.0.2500-RHEL8
VRTSvcsea-8.0.0.2500-RHEL8
VRTSgms-8.0.0.2800-RHEL8
VRTSperl-5.34.0.4-RHEL8
VRTSodm-8.0.0.2900-RHEL8
VRTSaslapm-8.0.0.2600-RHEL8
VRTSvxfs-8.0.0.2900-RHEL8
VRTSvxvm-8.0.0.2600-RHEL8

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 8.0 * * *
                         * * * Patch 2900 * * *
                         Patch Date: 2023-07-10


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0 Patch 2900


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSperl
VRTSpython
VRTSrest
VRTSsfmh
VRTSspt
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0
   * InfoScale Enterprise 8.0
   * InfoScale Foundation 8.0
   * InfoScale Storage 8.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSamf-8.0.0.2400
* 4116411 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).
Patch ID: VRTSamf-8.0.0.2200
* 4108952 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSamf-8.0.0.2100
* 4101233 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSamf-8.0.0.1800
* 4089724 (4089722) VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.
Patch ID: VRTSamf-8.0.0.1500
* 4072752 (4072335) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).
Patch ID: VRTSamf-8.0.0.1100
* 4064788 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTScavf-8.0.0.2800
* 4118779 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups.
Patch ID: VRTSvxvm-8.0.0.2600
* 4124200 (4124223) Core dump is generated for vxconfigd during test case (TC) execution.
* 4109554 (4105953) System panic because VVR accessed a NULL pointer.
* 4111442 (4066785) A new option, usereplicatedev=only, is created to import only the replicated LUNs.
* 4112549 (4112701) Nodes stuck in reconfig hang and vxconfigd coredump after rebooting all nodes with a delay of 5min in between them.
* 4113310 (4114601) Panic: in dmp_process_errbp() for disk pull scenario.
* 4113357 (4112433) Security vulnerabilities exists in third party components [openssl, curl and libxml].
* 4114963 (4114962) [NBFS-3.1][DL] MASTER and CAT_FS got corrupted while performing multiple NVMe failures.
* 4115251 (4115195) [NBFS-3.1][DL] MEDIA_FS got corrupted after a panic loop test.
* 4115252 (4115193) Data corruption observed after a node fault and cluster restart in a DR environment.
* 4115381 (4091783) build script and mb.sh changes in unixvm for integration of storageapi
* 4116548 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4116551 (4108913) Vradmind dumps core because of memory corruption.
* 4116557 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4116559 (4091076) SRL gets into pass-thru mode because of head error.
* 4116562 (4114257) Observed IO hung and high system load average after rebooted master and one slave node rejoins cluster.
* 4116565 (4034741) The current fix that limits IO load on the secondary can cause a deadlock situation.
* 4116567 (4072862) Stopping the cluster hangs because the RVGLogowner and CVMClus resources fail to go offline.
* 4117110 (4113841) VVR panic in replication connection handshake request from network scan tool.
* 4118108 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4118111 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.
* 4118733 (4106689) Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95
* 4118845 (4116024) Machine panic due to accessing an illegal address.
* 4119087 (4067191) IS8.0_SUSE15_CVR: After a slave node was rebooted, the master node panicked.
* 4119257 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4119276 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4119438 (4117985) EC volume corruption due to lockless access of FPU
* 4120350 (4120878) After enabling the dmp_native_support, system failed to boot.
* 4121241 (4114927) Failed to mount /boot on dmp device after enabling dmp_native_support.
* 4121714 (4081740) The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
Patch ID: VRTSaslapm 8.0.0.2600
* 4101808 (4101807) VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.
* 4116688 (4085145) EBSvol agent error in attach disk: RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVMe devices.
* 4117385 (4117350) Import operation on a disk group created on Hitachi ShadowImage (SI) disks is failing.
Patch ID: VRTSvxvm-8.0.0.2400
* 4110560 (4104927) Changing the attributes in vxvm-boot.service for SLES15 is causing regression in RHEL versions.
* 4113324 (4113323) VxVM Support on RHEL 8.8
* 4113661 (4091076) SRL gets into pass-thru mode because of head error.
* 4113663 (4095163) system panic due to a race freeing VVR update.
* 4113664 (4091390) vradmind service has dump core and stopped on few nodes
* 4113666 (4064772) After enabling slub debug, system could hang with IO load.
Patch ID: VRTSaslapm 8.0.0.2400
* 4116885 (4116868) ASLAPM rpm Support on RHEL 8.8
Patch ID: VRTSvxvm-8.0.0.2200
* 4058590 (4058867) VxVM rpm Support on RHEL 8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64
* 4108392 (4107802) Fix for calculating best-fit module for upcoming RHEL8.7 minor kernels (higher than 4.18.0-425.10.1.el8_7.x86_64).
Patch ID: VRTSaslapm 8.0.0.2200
* 4108933 (4107932) ASLAPM rpm Support on RHEL 8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64
Patch ID: VRTSvxvm-8.0.0.2100
* 4102502 (4102501) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1900
* 4102924 (4101128) VxVM rpm Support on RHEL 8.7 kernel
Patch ID: VRTSaslapm 8.0.0.1900
* 4102973 (4101139) ASLAPM rpm Support on RHEL 8.7 kernel
Patch ID: VRTSvxvm-8.0.0.1800
* 4067609 (4058464) vradmin resizevol fails when FS is not mounted on master.
* 4067635 (4059982) vradmind need not check for rlink connect during migrate.
* 4070098 (4071345) Unplanned fallback synchronisation is unresponsive
* 4078531 (4075860) The tutil/putil rebalance flag is not cleared when 4 or more nodes are added.
* 4079345 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4080105 (4045837) Sub disks are in relocate state after exceed fault slave node panic.
* 4080122 (4044068) After disc replacement, Replace Node operation failed at Configure Netbackup stage.
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4080276 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4080277 (3966157) SRL batching feature is broken
* 4080579 (4077876) System is crashed when EC log replay is in progress after node reboot.
* 4080845 (4058166) Increase DCM log size based on volume size without exceeding region size limit of 4mb.
* 4080846 (4058437) Replication between 8.0 and 7.4.x fails due to sector size field.
* 4081790 (4080373) SFCFSHA configuration failed on RHEL 8.4.
* 4083337 (4081890) On RHEL8 NBFS/Access, commands like python3, sort, sudo, ssh, etc. generate core dumps during execution of the mkfs.vxfs and mkfs.ext4 commands.
* 4085619 (4086718) VxVM modules fail to load with latest minor kernel of SLES15SP2
* 4087233 (4086856) For the Appliance FLEX product that uses VRTSdocker-plugin, platform-specific service dependencies (docker.service and podman.service) need to be added.
* 4087439 (4088934) Kernel Panic while running LM/CFS CONFORMANCE - variant (SLES15SP3)
* 4087791 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4088076 (4054685) In a CVR environment, RVG recovery hangs on Linux platforms.
* 4088483 (4088484) Failed to load DMP_APM NVME modules
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSaslapm 8.0.0.1800
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSvxvm-8.0.0.1700
* 4081684 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1600
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4062799 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4066213 (4052580) Supporting multipathing for NVMe devices under VxVM.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSaslapm 8.0.0.1600
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSvxvm-8.0.0.1200
* 4066259 (4062576) hastop -local never finishes on RHEL8.4 and RHEL8.5 servers with the latest minor kernels due to a hang in the vxdg deport command.
Patch ID: VRTSvxvm-8.0.0.1100
* 4064786 (4053230) VxVM support for RHEL 8.5
* 4065628 (4065627) VxVM modules failed to load after an OS upgrade.
Patch ID: VRTSvxfs-8.0.0.2900
* 4092518 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4097466 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4107367 (4108955) VFR job hangs on source if thread creation fails on target.
* 4111457 (4117827) For EO compliance, there is a requirement to support 3 types of log file permissions (600, 640, and 644, with 600 being the default). A new eo_perm tunable is added to the vxtunefs command to manage the log file permissions.
* 4112417 (4094326) mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"
* 4118795 (4100021) Running setfacl followed by getfacl resulting in "No such device or address" error.
* 4119023 (4116329) While checking FS sanity with the "fsck -o full -n" command, the attempt to correct the FS flag value (WORM/Softworm) failed because the -n (read-only) option was given.
* 4123143 (4123144) The fsck binary generates a core dump.
Patch ID: VRTSvxfs-8.0.0.2700
* 4113911 (4113121) VXFS support for RHEL 8.8.
* 4114019 (4067505) invalid VX_AF_OVERLAY aflags error in fsck
* 4114020 (4083056) Hang observed while punching the smaller hole over the bigger hole.
* 4114021 (4101634) Directory inode getting incorrect file-type error in fsck.
Patch ID: VRTSvxfs-8.0.0.2600
* 4114654 (4114652) VXFS support for RHEL 8.7 minor kernel 4.18.0-425.19.2.
Patch ID: VRTSvxfs-8.0.0.2300
* 4108381 (4107777) VxFS support for RHEL 8.7 minor kernel.
Patch ID: VRTSvxfs-8.0.0.2200
* 4100925 (4100926) VxFS module failed to load on RHEL8.7
Patch ID: VRTSvxfs-8.0.0.2100
* 4095889 (4095888) Security vulnerabilities exist in the Sqlite third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.1800
* 4068960 (4073203) Veritas file replication might generate a core while replicating the files to target.
* 4071108 (3988752) Use the ldi_strategy() routine instead of bdev_strategy() for IOs in Solaris.
* 4072228 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4078335 (4076412) Addresses the initial requirements of Executive Order (EO) 14028, which is intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents.
* 4078520 (4058444) Loop mounts using files on VxFS fail on Linux systems.
* 4079142 (4077766) VxFS kernel module might leak memory during readahead of directory blocks.
* 4079173 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4082260 (4070814) Security Vulnerability observed in Zlib a third party component VxFS uses.
* 4082865 (4079622) Existing migration read/write iter operation handling is not fully functional as vxfs uses normal read/write file operation only.
* 4083335 (4076098) Fix migration issues seen with falcon-sensor.
* 4085623 (4085624) While running fsck, fsck might dump core.
* 4085839 (4085838) Command fsck may generate core due to processing of zero size attribute inode.
* 4086085 (4086084) VxFS mount operation causes system panic.
* 4088341 (4065575) Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition
Patch ID: VRTSvxfs-8.0.0.1700
* 4081150 (4079869) Security Vulnerability in VxFS third party components
* 4083948 (4070814) Security Vulnerability in VxFS third party component Zlib
Patch ID: VRTSvxfs-8.0.0.1200
* 4055808 (4062971) Enable partition directory on WORM file system
* 4056684 (4056682) New-features information for a filesystem is not displayed by fsadm (the file system administration utility) when run on a device.
* 4062606 (4062605) Minimum retention time cannot be set if the maximum retention time is not set.
* 4065565 (4065669) Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.
* 4065651 (4065666) Enable partition directory on WORM file system having WORM enabled on files with retention period not expired.
Patch ID: VRTSvxfs-8.0.0.1100
* 4061114 (4052883) VxFS support for RHEL 8.5.
Patch ID: VRTSvxfen-8.0.0.2200
* 4108953 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSvxfen-8.0.0.2100
* 4102352 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSvxfen-8.0.0.1800
* 4087166 (4087134) The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting vxfen service.
* 4088061 (4089052) On RHEL9, Coordination Point Replacement operation is causing node panic
Patch ID: VRTSvxfen-8.0.0.1500
* 4072717 (4072335) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).
Patch ID: VRTSvxfen-8.0.0.1200
* 3951882 (4004248) vxfend generates core sometimes during vxfen race in CPS based fencing configuration
Patch ID: VRTSvxfen-8.0.0.1100
* 4064785 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTSveki-8.0.0.2500
* 4118568 (4110457) Veki packaging was failing due to a dependency.
Patch ID: VRTSveki-8.0.0.1800
* 4056647 (4055072) Upgrading the VRTSveki package using yum reports an error.
Patch ID: VRTSveki-8.0.0.1200
* 4070027 (4066550) Increasing Veki systemd service start timeout to 300 seconds
Patch ID: VRTSvcsea-8.0.0.2500
* 4118769 (4073508) Oracle virtual fire-drill is failing.
Patch ID: VRTSvcsea-8.0.0.1800
* 4030767 (4088595) The hapdbmigrate utility fails to online the Oracle service group.
* 4079559 (4064917) As a part of Oracle 21c support, fixed the issue where the Oracle agent fails to generate ora_api using the build_oraapi.sh script.
Patch ID: VRTSvcsag-8.0.0.2500
* 4118318 (4113151) VMwareDisksAgent reports the resource online before the VMware disk to be onlined is present in the vxvm/dmp database.
* 4118448 (4075950) IPv6 neighbor flush logic needs to be added to IP/MultiNIC agents
* 4118455 (4118454) Process agent fails to come online when root user shell is set to /sbin/nologin.
* 4118767 (4094539) Agent resource monitor not parsing process name correctly.
Patch ID: VRTSvcsag-8.0.0.2400
* 4113057 (4113056) ReuseMntPt is not honored when the same mountpoint is used for two resources with different FSType.
Patch ID: VRTSvcsag-8.0.0.1800
* 4030767 (4088595) The hapdbmigrate utility fails to online the Oracle service group.
* 4058802 (4073842) SFAE changes to support Oracle 21c.
* 4079372 (4073842) SFAE changes to support Oracle 21c.
* 4079559 (4064917) As a part of Oracle 21c support, fixed the issue where the Oracle agent fails to generate ora_api using the build_oraapi.sh script.
* 4081774 (4083099) AzureIP resource fails to go offline when OverlayIP is configured.
Patch ID: VRTSvcs-8.0.0.2400
* 4111623 (4100720) GCO fails to configure for the latest RHEL/SLES platforms.
Patch ID: VRTSvcs-8.0.0.2100
* 4103077 (4103073) Upgrading the Netsnmp component to fix security vulnerabilities.
Patch ID: VRTSvcs-8.0.0.1800
* 4084675 (4089059) The gcoconfig.log file permission is changed to 0600.
Patch ID: VRTSvcs-8.0.0.1400
* 4065820 (4065819) Protocol version upgrade failed.
Patch ID: VRTSspt-8.0.0.1400
* 4085610 (4090433) iostat and vmstat command option changes in FirstLook
* 4088066 (4090446) vxstat log collection improvements in FirstLook
* 4091983 (4092090) FirstLook should have OS flavor information stored in its log directory.
* 4096274 (4095687) While restoring a version-8 metasave on a sparse volume, the restore operation does not happen correctly.
Patch ID: VRTSsfmh-vom-HF0800411
* 4113012 (4113011) VIOM VRTSsfmh package on Linux to fix dclid/vxlist issue with InfoScale VRTSvxvm 8.0.0.2200
Patch ID: VRTSrest-2.0.0.1300
* 4088973 (4089451) When a read-only filesystem was created on a volume, GET on the mountpoint's details was throwing an error.
* 4089033 (4089453) Some VCS REST APIs were crashing the Gunicorn worker.
* 4089041 (4089449) GET resources API on empty service group was throwing an error.
* 4089046 (4089448) Logging in REST API is not EO-compliant.
Patch ID: VRTSpython-3.9.2.24
* 4114375 (4113851) Some open CVEs in VRTSpython need to be fixed.
Patch ID: VRTSperl-5.34.0.4
* 4072234 (4069607) Security vulnerability detected on VRTSperl 5.34.0.0 released with Infoscale 8.0.
* 4075150 (4075149) Security vulnerabilities detected in OpenSSL packaged with VRTSperl/VRTSpython released with Infoscale 8.0.
Patch ID: VRTSodm-8.0.0.2900
* 4057432 (4056673) Rebooting the system results in emergency mode due to corruption of the module dependency files, caused by an incorrect vxgms dependency in the odm service file.
Patch ID: VRTSodm-8.0.0.2700
* 4113912 (4113118) ODM support for RHEL 8.8.
Patch ID: VRTSodm-8.0.0.2600
* 4114656 (4114655) ODM support for RHEL 8.7 minor kernel 4.18.0-425.19.2.
Patch ID: VRTSodm-8.0.0.2300
* 4108585 (4107778) ODM support for RHEL 8.7 minor kernel.
Patch ID: VRTSodm-8.0.0.2200
* 4100923 (4100922) ODM module failed to load on RHEL8.7
Patch ID: VRTSllt-8.0.0.2400
* 4116421 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).
Patch ID: VRTSllt-8.0.0.2200
* 4108947 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSllt-8.0.0.2100
* 4101232 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSllt-8.0.0.1800
* 4061158 (4061156) IO error on /sys/kernel/slab folder
* 4079637 (4079636) LLT over IPsec is causing node crash
* 4079662 (3981917) Support LLT UDP multiport on 1500 MTU based networks.
* 4080630 (4046953) Delete / disable confusing messages.
Patch ID: VRTSllt-8.0.0.1500
* 4073695 (4072335) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).
Patch ID: VRTSllt-8.0.0.1200
* 4066063 (4066062) Node panic is observed while using llt udp with multiport enabled.
* 4066667 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
Patch ID: VRTSllt-8.0.0.1100
* 4064783 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTSgms-8.0.0.2800
* 4057427 (4057176) Rebooting the system results in emergency mode due to corruption of the module dependency files.
Patch ID: VRTSgms-8.0.0.2300
* 4108584 (4107753) GMS support for RHEL 8.7 minor kernel.
Patch ID: VRTSgms-8.0.0.2200
* 4101000 (4100999) GMS support for RHEL 8.7.
Patch ID: VRTSglm-8.0.0.2300
* 4108582 (4107754) GLM support for RHEL 8.7 minor kernel.
Patch ID: VRTSglm-8.0.0.2200
* 4100995 (4100994) GLM module failed to load on RHEL8.7
Patch ID: VRTSglm-8.0.0.1800
* 4089163 (4089162) The GLM module fails to load.
Patch ID: VRTSgab-8.0.0.2200
* 4108951 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSgab-8.0.0.2100
* 4100453 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSgab-8.0.0.1800
* 4089723 (4089722) VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.
Patch ID: VRTSgab-8.0.0.1500
* 4073696 (4072335) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).
Patch ID: VRTSgab-8.0.0.1100
* 4064784 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTSfsadv-8.0.0.2200
* 4103001 (4103002) Replication failures observed in internal testing
Patch ID: VRTSfsadv-8.0.0.2100
* 4092150 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSdbed-8.0.0.1800
* 4079372 (4073842) SFAE changes to support Oracle 21c
Patch ID: VRTSdbac-8.0.0.2200
* 4108954 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSdbac-8.0.0.2100
* 4100204 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSdbac-8.0.0.1900
* 4090090 (4090485) Installation of Oracle 12c GRID and database fails on RHEL8.*/OL8.* with GLIBC package error
Patch ID: VRTSdbac-8.0.0.1800
* 4089728 (4089722) VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.
Patch ID: VRTSdbac-8.0.0.1100
* 4053178 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTScps-8.0.0.1900
* 4091306 (4088158) Security vulnerabilities exist in the Sqlite third-party components used by VCS.
Patch ID: VRTScps-8.0.0.1800
* 4073050 (4018218) Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2
Patch ID: VRTScps-8.0.0.1200
* 4066225 (4056666) The Error writing to database message may intermittently appear in syslogs on CP servers.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSamf-8.0.0.2400

* 4116411 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSamf-8.0.0.2200

* 4108952 (Tracking ID: 4107779)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7 GA kernel(4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7) is now introduced.

Patch ID: VRTSamf-8.0.0.2100

* 4101233 (Tracking ID: 4100203)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7(RHEL8.7) is now introduced.

Patch ID: VRTSamf-8.0.0.1800

* 4089724 (Tracking ID: 4089722)

SYMPTOM:
VRTSgab, VRTSamf, and VRTSdbed drivers do not load on RHEL and SLES platforms.

DESCRIPTION:
The VRTSgab, VRTSamf, and VRTSdbed drivers need to be recompiled with the latest changes.

RESOLUTION:
Recompiled the VRTSgab, VRTSamf, and VRTSdbed drivers.

Patch ID: VRTSamf-8.0.0.1500

* 4072752 (Tracking ID: 4072335)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 5.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
6(RHEL8.6) is now introduced.

Patch ID: VRTSamf-8.0.0.1100

* 4064788 (Tracking ID: 4053171)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
5(RHEL8.5) is now introduced.

Patch ID: VRTScavf-8.0.0.2800

* 4118779 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups.

DESCRIPTION:
In case of hardware-replicated disks like EMC SRDF, failover of disk groups might not automatically succeed and a manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag which needs to be updated manually for a successful failover.

RESOLUTION:
For DMP environments, the VxVM and DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM also provides a new vxdg import option, '-o usereplicatedev=only', with DMP. This option selects only the hardware-replicated disks during the LUN selection process.

Sample main.cf configuration for DMP-managed hardware-replicated disks:
CVMVolDg srdf_dg (
        CVMDiskGroup = LINUXSRDF
        CVMVolume = { scott, scripts }
        CVMActivation = sw
        CVMDeportOnOffline = 1
        ClearClone = 1
        ScanDisks = 1
        DGOptions = { "-t -o usereplicatedev=only" }
        )
All four options (CVMDeportOnOffline, ClearClone, ScanDisks, and DGOptions) are recommended with hardware-replicated disk groups.
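
A minimal command sketch of the manual steps described above, assuming a DMP-managed hardware-replicated disk group; the disk group name LINUXSRDF is taken from the sample configuration:

# Refresh the VxVM/DMP extended attributes before the import
vxdisk scandisks
# Temporarily (-t) import only the hardware-replicated LUNs
vxdg -t -o usereplicatedev=only import LINUXSRDF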

Patch ID: VRTSvxvm-8.0.0.2600

* 4124200 (Tracking ID: 4124223)

SYMPTOM:
A core dump is generated for vxconfigd during test case (TC) execution.

DESCRIPTION:
The test case creates a scenario where zeros are written to the first block of a disk. In this case, a NULL check is necessary before a variable is accessed in the code. This missing NULL check causes the vxconfigd core dump during TC execution.

RESOLUTION:
The necessary NULL checks have been added to the code to avoid the vxconfigd core dump.

* 4109554 (Tracking ID: 4105953)

SYMPTOM:
System panic with below stack in CVR environment.

 #9 [] page_fault at 
    [exception RIP: vol_ru_check_update_done+183]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]
#13 [] kthread at

DESCRIPTION:
In a CVR environment, when IO is issued in writeack sync mode, the write is acknowledged to the application when the data volume write is done on either the log client or the logowner, depending on
where the IO was issued. It could happen that VVR freed the metadata I/O update after the SRL write was done in writeack sync mode, but the update was accessed again after being freed, resulting in a NULL pointer dereference.

RESOLUTION:
Code changes have been made to avoid accessing the NULL pointer.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing its disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, they are readable and writable, yet importing their disk group failed with "Device is a hardware mirror". The third-party array doesn't expose a disk attribute that shows when a disk is in SPLIT mode. With this new enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.
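
A hedged usage illustration of the enhanced import (the disk group name repdg is hypothetical):

# Import a disk group whose replicated disks are in SPLIT mode
vxdg -o usereplicatedev=only import repdg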

* 4112549 (Tracking ID: 4112701)

SYMPTOM:
A reconfiguration hang was observed on an 8-node ISO VM setup after rebooting all nodes with a delay of 5 minutes between them, because vxconfigd dumped core on the master node.

DESCRIPTION:
After all nodes of the 8-node ISO VM setup were rebooted with a delay of 5 minutes between them, a fork failure during command shipping caused a vxconfigd core dump on the master node. All reconfigurations after that failed to process, resulting in the hang.

RESOLUTION:
Reboot the master node on which vxconfigd dumped core.

* 4113310 (Tracking ID: 4114601)

SYMPTOM:
The system panics and reboots.

DESCRIPTION:
Root cause analysis:
Start IO on the volume device and pull its disk out of the machine; the below panic is hit on RHEL8.

 dmp_process_errbp
 dmp_process_errbuf.cold.2+0x328/0x429 [vxdmp]
 dmpioctl+0x35/0x60 [vxdmp]
 dmp_flush_errbuf+0x97/0xc0 [vxio]
 voldmp_errbuf_sio_start+0x4a/0xc0 [vxio]
 voliod_iohandle+0x43/0x390 [vxio]
 voliod_loop+0xc2/0x330 [vxio]
 ? voliod_iohandle+0x390/0x390 [vxio]
 kthread+0x10a/0x120
 ? set_kthread_struct+0x50/0x50

As the disk was pulled out of the machine, VxIO hit an IO error and routed that IO to the DMP layer via a kernel-kernel IOCTL for error analysis.
The following is the code path for the IO routing:

voldmp_errbuf_sio_start()-->dmp_flush_errbuf()--->dmpioctl()--->dmp_process_errbuf()

dmp_process_errbuf() retrieves the device number of the underlying path (os-device)
and tries to get the bdev (i.e. block_device) pointer from the path-device number.
As the path/os-device was removed by the disk pull, Linux returns a fake bdev for the path-device number.
This fake bdev has no gendisk associated with it (bdev->bd_disk is NULL).

This NULL bdev->bd_disk is set on the IO buffer routed from vxio,
which leads to a panic in dmp_process_errbp.

RESOLUTION:
If bdev->bd_disk is found to be NULL, the DMP_CONN_FAILURE error is set on the IO buffer and DKE_ENXIO is returned to the vxio driver.

* 4113357 (Tracking ID: 4112433)

SYMPTOM:
Vulnerabilities have been reported in third party components, [openssl, curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of third-party components [openssl, curl, and libxml] used by VxVM have reported security vulnerabilities which need to be addressed.

RESOLUTION:
[openssl, curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4114963 (Tracking ID: 4114962)

SYMPTOM:
File system data corruption with mirrored volumes in Flexible Storage Sharing (FSS) environments during storage failures beyond the fault tolerance level.

DESCRIPTION:
In FSS environments, the data change object (DCO) provides functionality to track changes on detached mirrors using bitmaps. This bitmap is later used to re-sync the detached mirrors' data (the change delta).
When the DCO volume and the data volume share the same set of devices, failure of the DCO volume's last mirror means IOs on the data volume are going to fail. In such cases, instead of invalidating the DCO volume, the IO is proactively failed.
This helps protect the DCO so that, when the entire storage comes back, optimal recovery of the mirrors can be performed.
When the disk for one of the mirrors of the DCO object became available, a bug in the DCO update incorrectly updated the DCO metadata, which led to valid DCO maps being ignored during the actual volume recovery; hence the newly recovered mirrors of the volume missed blocks of valid application data. This led to corruption when read IOs were serviced from the newly recovered mirrors.

RESOLUTION:
The logic of the FMR map update transaction when enabling disks is fixed to resolve the bug. This ensures all valid bitmaps are considered for the recovery of mirrors and avoids data loss.

* 4115251 (Tracking ID: 4115195)

SYMPTOM:
Data corruption on file systems with erasure coded volumes.

DESCRIPTION:
When Erasure Coded (EC) volumes are used in Flexible Shared Storage (FSS) environments, a data change object (DCO) is used to track changes in a volume with faulted columns. The DCO provides a bitmap of all changed regions during the rebuild of the faulted columns. When an EC volume starts with a few faulted columns, the log-replay IO cannot be performed on those columns; those additional writes are expected to be tracked in the DCO bitmap. Due to a bug, those IOs were not tracked, because the DCO bitmaps were not yet enabled when the log-replay was triggered. Hence, when the remaining columns were attached back, the appropriate data blocks of those log-replay IOs were skipped during the rebuild. This leads to data corruption when reads are serviced from those columns.

RESOLUTION:
Code changes are done to ensure log-replay on EC volume is triggered only after ensuring DCO tracking is enabled. This ensures that all IOs from log-replay operations are tracked in DCO maps for remaining faulted columns of volume.

* 4115252 (Tracking ID: 4115193)

SYMPTOM:
Data corruption on VVR primary with storage loss beyond fault tolerance level in replicated environment.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) environment, any node fault can lead to storage failure. On the VVR primary, when the last mirror of the SRL (Storage Replicator Log) volume faults while application writes are in progress, replication is expected to go into pass-through mode.
This information is persistently recorded in the kernel log (KLOG). In the event of cascaded storage node failures, the KLOG update protocol could not update a quorum number of copies. This mismatch between the on-disk and in-core state of the VVR objects leads to data loss, because recovery is missed when all the storage faults are resolved.

RESOLUTION:
Code changes have been made to handle KLOG update failure in the SRL IO failure handling, ensuring that the on-disk and in-core configuration is consistent; subsequent application IO is not allowed, to avoid data corruption.

* 4115381 (Tracking ID: 4091783)

SYMPTOM:
Build area creation for unixvm was failing.

DESCRIPTION:
The unixvm build was failing because of a dependency on storageapi during the creation of the build area for unixvm and veki.

This in turn caused issues in Veki packaging during unixvm builds, and a vxio driver compilation dependency.

RESOLUTION:
Added support for storageapi build area creation and building the storageapi internally from unixvm build scripts.

* 4116548 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (the remote host name) in an rlink object, because the remote host attribute of the rlink had not been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4116551 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable results. Vradmind periodically forks a child thread to collect VVR statistics data and send them to the remote site. The main thread may also be sending data using the same handler object; thus, member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4116557 (Tracking ID: 4085404)

SYMPTOM:
Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all IOs are queued in the restart queue until the active map flush is finished. The overly frequent active map flush caused the huge IO drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.

* 4116559 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
The primary initiated a log search for the requested update sent from the secondary. The search aborted with a head error because a check condition wasn't set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4116562 (Tracking ID: 4114257)

SYMPTOM:
A VxVM command hung and the file system was waiting for IO to complete.

file system stack:
#3 [] wait_for_completion at 
#4 [] vx_bc_biowait at [vxfs]
#5 [] vx_biowait at [vxfs]
#6 [] vx_isumupd at [vxfs]
#7 [] __switch_to_asm at 
#8 [] vx_process_revokedele at [vxfs]
#9 [] vx_recv_revokedele at [vxfs]
#10 [] vx_recvdele at [vxfs]
#11 [] vx_msg_process_thread at [vxfs]

vxconfigd stack:
[<0>] volsync_wait+0x106/0x180 [vxio]
[<0>] vol_ktrans+0x9f/0x2c0 [vxio]
[<0>] volconfig_ioctl+0x82a/0xdf0 [vxio]
[<0>] volsioctl_real+0x38a/0x450 [vxio]
[<0>] vols_ioctl+0x6d/0xa0 [vxspec]
[<0>] vols_unlocked_ioctl+0x1d/0x20 [vxspec]

One of vxio thread was waiting for IO drain with below stack.

 #2 [] schedule_timeout at 
 #3 [] vol_rv_change_sio_start at [vxio]
 #4 [] voliod_iohandle at [vxio]

DESCRIPTION:
The VVR rvdcm flush SIO was triggered by a VVR logowner change; it set the ru_state throttle flags, which caused MDATA_SHIP SIOs to be queued in rv_mdship_throttleq. As the MDATA_SHIP SIOs were active, the rvdcm flush SIO was unable to proceed. In the end, the rvdcm_flush SIO was waiting for the SIOs in rv_mdship_throttleq to complete, while the SIOs in rv_mdship_throttleq were waiting for the rvdcm_flush SIO to complete. Hence, a deadlock situation.

RESOLUTION:
Code changes have been made to solve the deadlock issue.

* 4116565 (Tracking ID: 4034741)

SYMPTOM:
Due to a common RVIOMEM pool being used by multiple RVGs, a deadlock scenario gets created, causing a high load average and a system hang.

DESCRIPTION:
The current fix limits IO load on the secondary by retaining the updates in the NMCOM pool until the data volume write is done, which makes the RVIOMEM pool easy to fill up; a deadlock situation may occur, especially under a high workload on multiple RVGs or cross-direction RVGs. Currently all RVGs share the same RVIOMEM pool, while the NMCOM pool, RDBACK pool, and network/dv update lists are all per-RVG, so the RVIOMEM pool becomes the bottleneck on the secondary: it fills up easily and runs into the deadlock situation.

RESOLUTION:
Code changes have been made to use a per-RVG RVIOMEM pool to resolve the deadlock issue.

* 4116567 (Tracking ID: 4072862)

SYMPTOM:
If RVGLogowner resources are onlined on slave nodes, stopping the whole cluster may fail and the RVGLogowner resources go into the offline_propagate state.

DESCRIPTION:
While stopping the whole cluster, a race may happen between the CVM reconfiguration and the RVGLogowner change SIO.

RESOLUTION:
Code changes have been made to fix these races.

* 4117110 (Tracking ID: 4113841)

SYMPTOM:
VVR panic happened in below code path:

kmsg_sys_poll()
nmcom_get_next_mblk() 
nmcom_get_hdr_msg() 
nmcom_get_next_msg() 
nmcom_wait_msg_tcp() 
nmcom_server_main_tcp()

DESCRIPTION:
When a network scan tool sends an unexpected request to VVR during the VVR connection handshake, the TCP connection may be terminated immediately by the scan tool, which may lead to the sock being released. VVR then panics on a NULL pointer dereference when it tries to refer to the sock during processing.

RESOLUTION:
The code change has been made to check that the sock is valid; otherwise, VVR returns without continuing with the connection.

* 4118108 (Tracking ID: 4114867)

SYMPTOM:
The following error messages appear while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]#
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')

DESCRIPTION:
In /etc/udev/rules.d/41-VxVM-selinux.rules, the double quotation marks around Disabled and disable are the issue.

RESOLUTION:
Code changes have been made to correct the problem.

* 4118111 (Tracking ID: 4065490)

SYMPTOM:
systemd-udev threads consumes more CPU during system bootup or device discovery.

DESCRIPTION:
During disk discovery, when new storage devices are found, the VxVM udev rules are invoked to create hardware path
symbolic links and to set the SELinux security context on Veritas device files. For creating a hardware path symbolic link to each
storage device, the "find" command is used internally, which is a CPU-intensive operation. If too many storage devices are attached to the
system, the use of the "find" command causes high CPU consumption.

Also, for setting the appropriate SELinux security context on VxVM device files, restorecon is run irrespective of whether SELinux is enabled or disabled.

RESOLUTION:
Usage of "find" command is replaced with "udevadm" command. SELinux security context on VxVM device files is being set
only when SELinux is enabled on system.
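
The sketch below illustrates both changes, assuming a generic device name; the exact wording in the shipped udev rules may differ:

# Query existing symlinks through udevadm instead of a CPU-intensive "find"
udevadm info --query=symlink --name=/dev/sdb
# Set the SELinux context only when SELinux is enabled (device path is illustrative)
if [ "$(getenforce 2>/dev/null)" != "Disabled" ]; then
    restorecon /dev/vx/dmp/sdb
fi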

* 4118733 (Tracking ID: 4106689)

SYMPTOM:
Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95. The error logs are observed as below:
Mounting ZFS filesystems: cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export/home' on '/export/home': failure mounting parent dataset
cannot mount 'rpool/export/home/addm' on '/export/home/addm': directory is not empty
.... ....
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: one or more file systems failed.

DESCRIPTION:
When DMP native support is enabled and "faulted" zpools are found, VxVM deports the faulty zpools and re-imports them. If fs-local isn't started before vxvm-startup2, this error handling causes a non-empty /export, which in turn causes the zfs mount failure.

RESOLUTION:
Code changes have been made to guarantee the mount order of rpool and zpools.

* 4118845 (Tracking ID: 4116024)

SYMPTOM:
kernel panicked at gab_ifreemsg with following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender

DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes, enabling the vxvvrstatd daemon through the vxvm-recover service causes vxvvrstatd to call ioctl(VOL_RV_APPSTATS); the latter generates a kmsg longer than 64k and triggers a kernel panic, because GAB/LLT does not support messages longer than 64k.

RESOLUTION:
Code changes have been made to limit the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request the VVR statistics.

* 4119087 (Tracking ID: 4067191)

SYMPTOM:
In a CVR environment, after rebooting a slave node, the master node may panic with the below stack:

Call Trace:
dump_stack+0x66/0x8b
panic+0xfe/0x2d7
volrv_free_mu+0xcf/0xd0 [vxio]
vol_ru_free_update+0x81/0x1c0 [vxio]
volilock_release_internal+0x86/0x440 [vxio]
vol_ru_free_updateq+0x35/0x70 [vxio]
vol_rv_write2_done+0x191/0x510 [vxio]
voliod_iohandle+0xca/0x3d0 [vxio]
wake_up_q+0xa0/0xa0
voliod_iohandle+0x3d0/0x3d0 [vxio]
voliod_loop+0xc3/0x330 [vxio]
kthread+0x10d/0x130
kthread_park+0xa0/0xa0
ret_from_fork+0x22/0x40

DESCRIPTION:
As part of a CVM master switch, rvg_recovery is triggered. In this step a race
condition can occur between the VVR objects, due to which an object value
is not updated properly, which can cause the panic.

RESOLUTION:
Code changes are done to handle the race condition between VVR objects.

* 4119257 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying
to open the secondary RVG volume acquires a lock and waits for the SRL positions to match.
During this time, any VxVM transaction that kicks in also has to wait for the same lock.
Further, the logowner node panicked, which triggered the logowner change protocol; that hung
because the earlier transaction was stuck. As the logowner change protocol could not complete,
in the absence of a valid logowner the SRL positions could not match, which caused a deadlock. That led
to the vxconfigd and vx command hang.

RESOLUTION:
Changes were added to allow read operations on the volume even if the SRL positions are
unmatched. Write IOs are still blocked and only open() calls for read-only
operations are allowed, so there are no data consistency or integrity issues.

* 4119276 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in 3 phases.
At the end of the 2nd phase of recovery, the in-memory and on-disk SRL positions remain incorrect,
and if there is a logowner change during this time, the RLink won't get connected.

RESOLUTION:
The in-memory and on-disk SRL positions are now handled correctly.

* 4119438 (Tracking ID: 4117985)

SYMPTOM:
Memory/data corruption hit for EC volumes

DESCRIPTION:
This is a porting request; the original change was already reviewed: http://codereview.engba.veritas.com/r/42056/

The memory corruption in EC was fixed by calling kernel_fpu_begin() for kernel versions earlier than RHEL8.6. In the latest kernels the kernel_fpu_begin() symbol is not available, so it cannot be used. A separate module named 'storageapi' has therefore been created, which implements _fpu_begin and _fpu_end; the
VxIO module depends on 'storageapi'.

RESOLUTION:
An FPU lock is taken for FPU-related operations.

* 4120350 (Tracking ID: 4120878)

SYMPTOM:
System doesn't come up on taking a reboot after enabling dmp_native_support. System goes into maintenance mode.

DESCRIPTION:
"vxio.ko" is dependent on the new "storageapi.ko" module. "storageapi.ko" was missing from VxDMP_initrd file, which is created when dmp_native_support is enabled. So on reboot, without "storageapi.ko" present, "vxio.ko" fails to load.

RESOLUTION:
Code changes have been made to include "storageapi.ko" in VxDMP_initrd.
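
A quick verification sketch, assuming lsinitrd is available and the VxDMP_initrd file resides under /boot (the exact file name may vary by installation):

# Confirm that storageapi.ko is packaged inside the DMP initrd
lsinitrd /boot/VxDMP_initrd* | grep -i storageapi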

* 4121241 (Tracking ID: 4114927)

SYMPTOM:
After enabling dmp_native_support and rebooting, /boot is not mounted on the VxDMP node.

DESCRIPTION:
When dmp_native_support is enabled, vxdmproot script is expected to modify the /etc/fstab entry for /boot so that on next boot up, /boot is mounted on dmp device instead of OS device. Also, this operation modifies SELinux context of file /etc/fstab. This causes the machine to go into maintenance mode because of a read permission denied error for /etc/fstab on boot up.

RESOLUTION:
Code changes have been done to make sure SELinux context is preserved for /etc/fstab file and /boot is mounted on dmp device when dmp_native_support is enabled.
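
A hedged check of the fstab context issue described above, using standard RHEL tools:

# Inspect the SELinux context of /etc/fstab and restore the default if it was changed
ls -Z /etc/fstab
restorecon -v /etc/fstab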

* 4121714 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.

DESCRIPTION:
Linux BLOCK_EXT_MAJOR (block major 259) is used as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to work around the sd limitation (16 minors per device), allowing more partitions per sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads the file /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which causes vxconfigd to respond sluggishly if there is a large number of LUNs in the disk group.

RESOLUTION:
Code has been changed to remove the needless access on /proc/partitions for the luns without using extended devt.
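
For illustration, the extended-devt partitions that vxconfigd was scanning for can be listed directly; in /proc/partitions the first column is the major number, and 259 is BLOCK_EXT_MAJOR:

# Print the names of partition devices assigned under extended devt (major 259)
awk '$1 == 259 {print $4}' /proc/partitions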

Patch ID: VRTSaslapm 8.0.0.2600

* 4101808 (Tracking ID: 4101807)

SYMPTOM:
"vxdisk -e list" does not show "svol" for Hitachi ShadowImage (SI) svol devices.

DESCRIPTION:
VxVM with DMP is failing to detect Hitachi ShadowImage (SI) svol devices.

RESOLUTION:
Hitachi ASL modified to correctly read SCSI Byte locations and recognize ShadowImage (SI) svol device.

* 4116688 (Tracking ID: 4085145)

SYMPTOM:
This issue occurs only in AWS environments; it does not exist on on-premises physical/VM hosts (where ioctl and sysfs give the same values).

DESCRIPTION:
The UDID value for Amazon EBS devices was going beyond its limit (it is read from sysfs, because ioctl is not supported by AWS).

RESOLUTION:
Code changes were made to fetch the LSN through IOCTL, as the intermittent ioctl failure has been fixed.

* 4117385 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware-replicated device, so to import a disk group on REPLICATED disks the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.
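
With the REPLICATED flag removed for SI disks, the clone import shown in the symptom above is expected to succeed without the usereplicatedev option:

# Import the svol disk group with a new name and updated disk IDs (from the symptom)
vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg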

Patch ID: VRTSvxvm-8.0.0.2400

* 4110560 (Tracking ID: 4104927)

SYMPTOM:
vxvm-boot.service fails to start on Linux platforms other than SLES15.

DESCRIPTION:
SLES15-specific attribute changes cause vxvm-boot.service to fail to start on other Linux platforms.

RESOLUTION:
A new vxvm-boot.service file honors the SLES15-specific requirements; the existing vxvm-boot.service file serves the other Linux platforms.

* 4113324 (Tracking ID: 4113323)

SYMPTOM:
Existing package failed to load on RHEL 8.8 server.

DESCRIPTION:
RHEL 8.8 is a new release and hence VxVM module is compiled with this new kernel along with few other changes.

RESOLUTION:
Compiled VxVM code against 8.8 kernel and made changes to make it compatible.

* 4113661 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
Primary initiated log search for the requested update sent from secondary. The search aborted with head error as a check condition isn't set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4113663 (Tracking ID: 4095163)

SYMPTOM:
System panic with below stack:
 #6 [] invalid_op at 
    [exception RIP: __slab_free+414]
 #7 [] kfree at 
 #8 [] vol_ru_free_update at [vxio]
 #9 [] vol_ru_free_updateq at  [vxio]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]

DESCRIPTION:
The update gets freed as a part of VVR recovery. At the same time, this update also gets freed in the VVR second phase of the write. Hence there is a race in freeing the updates, which caused the system panic.

RESOLUTION:
Code changes have been made to avoid the race in freeing the updates.

* 4113664 (Tracking ID: 4091390)

SYMPTOM:
vradmind dumped core while accessing pHdr, which was already freed.

DESCRIPTION:
While processing the CFG_UPDATE config message, the existing config message objects were incorrectly freed. Later, these objects were accessed again, which dumped the vradmind core.

RESOLUTION:
Changes are done to access the correct configuration objects.

* 4113666 (Tracking ID: 4064772)

SYMPTOM:
After enabling slub debug, system could hang with IO load.

DESCRIPTION:
When creating VxVM I/O memory, VxVM does not align the cache size. The unaligned length is treated as an invalid I/O length in the SCSI layer, which causes some I/O requests to become stuck in an invalid state and never complete. As a result, a system hang can be observed, especially after cache slub debug is enabled.

RESOLUTION:
Code changes have been done to align the cache size.

Patch ID: VRTSaslapm 8.0.0.2400

* 4116885 (Tracking ID: 4116868)

SYMPTOM:
Support for ASLAPM on RHEL 8.8

DESCRIPTION:
RHEL 8.8 is a new release, hence the APM module must be recompiled with the new kernel.

RESOLUTION:
Compiled APM with RHEL 8.8 kernel.

Patch ID: VRTSvxvm-8.0.0.2200

* 4058590 (Tracking ID: 4058867)

SYMPTOM:
Old VxVM rpm fails to load on RHEL8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64

DESCRIPTION:
Red Hat made critical changes in the latest kernel, which caused a soft-lockup issue for VxVM kernel modules during installation.

RESOLUTION:
As suggested by Red Hat (https://access.redhat.com/solutions/6985596), VxVM modules are compiled with the RHEL 8.7 minor kernel.

* 4108392 (Tracking ID: 4107802)

SYMPTOM:
vxdmp fails to load and system hangs.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel, because of which an incorrect module is calculated as the best fit.

RESOLUTION:
Modified the existing modinst-vxvm script to calculate the correct best-fit module.

Patch ID: VRTSaslapm 8.0.0.2200

* 4108933 (Tracking ID: 4107932)

SYMPTOM:
Support for ASLAPM on RHEL8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64

DESCRIPTION:
Red Hat made critical changes in the latest kernel, which caused a soft-lockup issue for kernel modules during installation.

RESOLUTION:
As suggested by Red Hat (https://access.redhat.com/solutions/6985596), the modules are compiled with the RHEL 8.7 minor kernel.

Patch ID: VRTSvxvm-8.0.0.2100

* 4102502 (Tracking ID: 4102501)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1900

* 4102924 (Tracking ID: 4101128)

SYMPTOM:
Old VxVM rpm fails to load on RHEL8.7

DESCRIPTION:
RHEL8.7 is a new OS release with multiple kernel changes that made VxVM incompatible with OS kernel version 4.18.0-425.3.1.

RESOLUTION:
Required code changes have been done. VxVM module compiled with RHEL 8.7 kernel.

Patch ID: VRTSaslapm 8.0.0.1900

* 4102973 (Tracking ID: 4101139)

SYMPTOM:
Support for ASLAPM on RHEL 8.7 kernel

DESCRIPTION:
RHEL8.7 is a new release, hence the APM module must be recompiled with the new kernel.

RESOLUTION:
Compiled APM with new kernel.

Patch ID: VRTSvxvm-8.0.0.1800

* 4067609 (Tracking ID: 4058464)

SYMPTOM:
vradmin resizevol fails when the FS is not mounted on the master.

DESCRIPTION:
The vradmin resizevol command resizes the data volume and the FS on the primary site, whereas on the secondary site it resizes only the data volume because the FS is not mounted there.

The vradmin resizevol command ships the command to the logowner at the vradmind level; vradmind on the logowner in turn ships the low-level vx commands to the master at the vradmind level, where the command finally gets executed.

RESOLUTION:
Changes are introduced to ship the command to the node on which the FS is mounted. The CVM node name on which the FS is mounted must be provided; vradmind then uses it to ship the command to that node.
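A hypothetical invocation under the changed behavior (the disk group, RVG, volume, size, and node arguments are all placeholders; consult the vradmin manual page for the exact syntax):

# vradmin -g datadg resizevol datarvg datavol 10g <cvm_node_with_fs_mounted>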

* 4067635 (Tracking ID: 4059982)

SYMPTOM:
In a container environment, the vradmin migrate command fails multiple times because the rlink is not in the connected state.

DESCRIPTION:
In VVR, rlinks are disconnected and reconnected during the replication lifecycle. If the vradmin migrate command is executed in this window, it encounters errors. Internally it causes vradmind to make configuration changes multiple times, which impacts subsequent vradmin commands.

RESOLUTION:
The vradmin migrate command requires rlink data to be up to date on both primary and secondary. It internally executes low-level commands such as vxrvg makesecondary and vxrvg makeprimary to swap the roles of primary and secondary. These commands do not depend on the rlink being in the connected state, so changes are done to remove the rlink connection handling.

* 4070098 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after the failed site comes back up.

DESCRIPTION:
Autosync and unplanned fallback synchronisation had issues when the RVG contained a mix of cloud and non-cloud volumes. After a cloud volume was found, the rest of the volumes were ignored for synchronisation.

RESOLUTION:
Fixed the condition so that synchronisation iterates over all volumes.

* 4078531 (Tracking ID: 4075860)

SYMPTOM:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
The core dumps were caused by missing FPU protection around code paths that use the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4079345 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during cluster configuration on a 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on a 24-node physical BOM setup because VxVM transactions were taking longer than the VCS timeouts.

RESOLUTION:
A fix is added to reduce unnecessary transaction time on large node setups.

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that must be skipped by VxVM and not claimed; otherwise VxDMP has to handle multiple PE LUNs from different 3PAR enclosures, which causes an issue.

RESOLUTION:
A fix is added so that the 3PAR ASL skips the 3PAR PE LUNs, avoiding the disks being reported in the error state.

* 4080105 (Tracking ID: 4045837)

SYMPTOM:
DCL volume subdisks do not relocate after the node fault timeout and remain in the RELOCATE state.

DESCRIPTION:
If the DCO has failed plexes and the DCO is on different disks than the data, DCO relocation must be triggered explicitly, because try_fss_reloc performs DCO relocation only in the context of data, which may not succeed if sufficient data disks are not available (an additional host or disks may be available where the DCO can relocate).

RESOLUTION:
A fix is added to relocate DCL subdisks to available spare disks.

* 4080122 (Tracking ID: 4044068)

SYMPTOM:
Replace Node fails at the Configuring NetBackup stage because vxdisk init fails with the error "Could not obtain requested lock".

DESCRIPTION:
Replace Node fails at the Configuring NetBackup stage because vxdisk init fails with the error "Could not obtain requested lock".

RESOLUTION:
A fix is added to retry the transaction a few times if it fails with this error.

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from the info records were not visible with the vxrlink listtag command.

DESCRIPTION:
Making rlinks FIPS compliant has a second phase that deals with the disk group upgrade path, where the rlink encryption tags need to be copied to the info record and made FIPS compliant.

Here, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys respectively.

RESOLUTION:
All the encryption tags for the rlink are copied to the info record. When the disk group is upgraded, the rlink is internally upgraded as well, and this upgrade process copies the rlink tags to the info records.
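After the disk group upgrade, the copied tags can be listed with the command named in the symptom (disk group and rlink names are placeholders):

# vxrlink -g datadg listtag rlk_remotesite_datarvg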

* 4080276 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets. The following error was seen:

$ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75

Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all tags were processed at once instead of processing at most the maximum number of tags for a volume.

RESOLUTION:
Introduced a number-of-tags variable that depends on the cipher method (CBC/GCM), and fixed minor code issues.

* 4080277 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken and could not be enabled, as it might cause problems.

DESCRIPTION:
Batching of updates is needed to gain the performance benefit of processing multiple updates together.

RESOLUTION:
The implementation is simplified by aligning each small update within a batch to a 4K size. By default the whole batch is then aligned, which removes the need for bookkeeping around the last update in a batch and the overhead of the related calculations. By padding each update to 4K, the batch of updates is itself 4K aligned.

* 4080579 (Tracking ID: 4077876)

SYMPTOM:
When one cluster node is rebooted, EC log replay is triggered for the shared EC volume, and the system crashes during this replay.

DESCRIPTION:
Two flags were assigned the same value, so the system panicked during a flag check.

RESOLUTION:
Changed the code flow to avoid checking the values of flags that share the same value.

* 4080845 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large data volumes (size > 3TB) with filesystems mounted on them, the initial autosync operation takes a long time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a filesystem mounted, if the smartmove feature is enabled, the operation performs a smart sync by syncing only the regions dirtied by the filesystem instead of the entire volume, which completes faster. However, for large volumes (size > 3TB), the smartmove feature does not get enabled even with a filesystem mounted, so autosync syncs the entire volume. This behaviour is due to the smaller DCM plexes allocated for such large volumes; autosync ends up performing a complete volume sync, taking much longer to complete.

RESOLUTION:
The limit on the DCM plex size (loglen) is increased beyond 2MB so that the smartmove feature can be utilised properly.

* 4080846 (Tracking ID: 4058437)

SYMPTOM:
Replication between 8.0 and 7.4.x fails with an error due to sector size field.

DESCRIPTION:
The 7.4.x branch has the sector size set to zero, which internally indicates 512 bytes. This caused startrep and resumerep to fail with the below error message.

Message from Primary:

VxVM VVR vxrlink ERROR V-5-1-20387  sector size mismatch, Primary is having sector size 512, Secondary is having sector size 0

RESOLUTION:
A check is added to support replication between 8.0 and 7.4.x.

* 4081790 (Tracking ID: 4080373)

SYMPTOM:
SFCFSHA configuration failed on RHEL 8.4 due to 'chmod -R' error.

DESCRIPTION:
Failure messages were logged because all log permissions are changed to 600 during the upgrade and all log files are moved to '/var/log/vx'.

RESOLUTION:
Added the -f option to the chmod command to suppress warnings, and redirected errors from the mv command to /dev/null.
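A minimal sketch of the suppression approach described above (the exact paths used by the product scripts may differ):

# chmod -f -R 600 /var/log/vx
# mv <old_log_location>/*.log /var/log/vx 2>/dev/null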

* 4083337 (Tracking ID: 4081890)

SYMPTOM:
On RHEL8, NBFS/Access commands such as python3, sort, sudo, and ssh generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
The core dumps were caused by missing FPU protection around code paths that use the FPU instruction set.

RESOLUTION:
A fix is added to use FPU protection while using the FPU instruction set.

* 4085619 (Tracking ID: 4086718)

SYMPTOM:
VxVM fails to install because vxdmp module fails to load on latest minor kernel of SLES15SP2.

DESCRIPTION:
VxVM modules fail to load on the latest minor kernel of SLES15SP2. The following messages are logged in the system logs:
vxvm-boot[32069]: ERROR: No appropriate modules found.
vxvm-boot[32069]: Error in loading module "vxdmp". See documentation.
vxvm-boot[32069]: Modules not Loaded

RESOLUTION:
Code changes have been done to fix this issue.

* 4087233 (Tracking ID: 4086856)

SYMPTOM:
For Appliance FLEX product using VRTSdocker-plugin, docker.service needs to be replaced as it is not supported on RHEL8.

DESCRIPTION:
The Appliance FLEX product using VRTSdocker-plugin is switching to RHEL8, on which docker.service does not exist. vxinfoscale-docker.service must stop only after all container services are stopped. Since podman.service also shuts down after all container services are stopped, docker.service can be replaced with podman.service.

RESOLUTION:
Added platform-specific dependencies for VRTSdocker-plugin. For RHEL8, podman.service is introduced.
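A sketch of the kind of unit dependency involved (the drop-in path and directive are illustrative assumptions, not the literal shipped unit file):

# cat /etc/systemd/system/vxinfoscale-docker.service.d/podman.conf
[Unit]
After=podman.service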

* 4087439 (Tracking ID: 4088934)

SYMPTOM:
"dd" command on a simple volume results in kernel panic.

DESCRIPTION:
Kernel panic is observed with the following stack trace:
 #0 [ffffb741c062b978] machine_kexec at ffffffffa806fe01
 #1 [ffffb741c062b9d0] __crash_kexec at ffffffffa815959d
 #2 [ffffb741c062ba98] crash_kexec at ffffffffa815a45d
 #3 [ffffb741c062bab0] oops_end at ffffffffa8036d3f
 #4 [ffffb741c062bad0] general_protection at ffffffffa8a012c2
    [exception RIP: __blk_rq_map_sg+813]
    RIP: ffffffffa84419dd  RSP: ffffb741c062bb88  RFLAGS: 00010202
    RAX: 0c2822c2621b1294  RBX: 0000000000010000  RCX: 0000000000000000
    RDX: ffffb741c062bc40  RSI: 0000000000000000  RDI: ffff8998fc947300
    RBP: fffff92f0cbe6f80   R8: ffff8998fcbb1200   R9: fffff92f0cbe0000
    R10: ffff8999bf4c9818  R11: 000000000011e000  R12: 000000000011e000
    R13: fffff92f0cbe0000  R14: 00000000000a0000  R15: 0000000000042000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #5 [ffffb741c062bc38] scsi_init_io at ffffffffc03107a2 [scsi_mod]
 #6 [ffffb741c062bc78] sd_init_command at ffffffffc056c425 [sd_mod]
 #7 [ffffb741c062bcd8] scsi_queue_rq at ffffffffc0311f6e [scsi_mod]
 #8 [ffffb741c062bd20] blk_mq_dispatch_rq_list at ffffffffa8447cfe
 #9 [ffffb741c062bdc0] __blk_mq_do_dispatch_sched at ffffffffa844cae0
#10 [ffffb741c062be28] __blk_mq_sched_dispatch_requests at ffffffffa844d152
#11 [ffffb741c062be68] blk_mq_sched_dispatch_requests at ffffffffa844d290
#12 [ffffb741c062be78] __blk_mq_run_hw_queue at ffffffffa84466a3
#13 [ffffb741c062be98] process_one_work at ffffffffa80bcd74
#14 [ffffb741c062bed8] worker_thread at ffffffffa80bcf8d
#15 [ffffb741c062bf10] kthread at ffffffffa80c30ad
#16 [ffffb741c062bf50] ret_from_fork at ffffffffa8a001ff

RESOLUTION:
Code changes have been done to fix this issue.

* 4087791 (Tracking ID: 4087770)

SYMPTOM:
Data corruption post mirror attach operation seen after complete storage fault for DCO volumes.

DESCRIPTION:
DCO (data change object) tracks delta changes for faulted mirrors. During complete storage loss of the DCO volume mirrors, the DCO object is marked BADLOG and becomes unusable for bitmap tracking.
After storage reconnect (such as node rejoin in FSS environments), the DCO is repaired for subsequent tracking. During this, if VxVM finds any mirrors detached for data volumes, those are expected to be marked for full resync because the bitmap in the DCO holds no valid information. A bug in the repair-DCO logic prevented marking mirrors for full resync when the repair operation was triggered before the data volume was started. This resulted in mirrors getting attached without any data being copied from good mirrors; reads serviced from such mirrors returned stale data, resulting in file system corruption and data loss.

RESOLUTION:
Code has been added to ensure the repair-DCO operation is performed only if the volume object is enabled, so that detached mirrors are marked for full resync appropriately.

* 4088076 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery hangs in reconfiguration scenarios in CVR environments, leading to vx commands hanging on the master node.

DESCRIPTION:
As part of RVG recovery, DCM and data volume recovery are performed. The data volume recovery took a long time due to wrong IOD handling on Linux platforms.

RESOLUTION:
The IOD handling mechanism is fixed to resolve the RVG recovery hang.

* 4088483 (Tracking ID: 4088484)

SYMPTOM:
DMP_APM module is not getting loaded and throwing following message in the dmesg logs:
Mod load failed for dmpnvme module: dependency conflict
VxVM vxdmp V-5-0-1015 DMP_APM: DEPENDENCY CONFLICT

DESCRIPTION:
NVMe module loading failed because an A/A (dmpaa) module dependency was added in the APM while the system did not have any A/A type disk, which in turn caused the NVMe module load to fail.

RESOLUTION:
Removed A/A dependency from NVMe APM.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The minor number of NVMe disks was changing when scandisks was performed. This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open by passing O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSaslapm 8.0.0.1800

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that must be skipped by VxVM and not claimed; otherwise VxDMP has to handle multiple PE LUNs from different 3PAR enclosures, which causes an issue.

RESOLUTION:
A fix is added so that the 3PAR ASL skips the 3PAR PE LUNs, avoiding the disks being reported in the error state.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6 and NVME disks are in an error state.

DESCRIPTION:
The minor number of NVMe disks was changing when scandisks was performed. This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open by passing O_RDONLY; opening with write permissions was changing the minor number.

Patch ID: VRTSvxvm-8.0.0.1700

* 4081684 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1600

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different node name is added, the system turns unresponsive. When a node leaves the cluster, in-memory information related to the node is not cleared, due to a race condition.

RESOLUTION:
Fixed the race condition to clear the in-memory information of the node that leaves the cluster.

* 4062799 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node are upgraded, the size of the object is interpreted incorrectly. The issue is observed when the number of objects is higher, on InfoScale 7.3.1 and above.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support needs to be added for it.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4066213 (Tracking ID: 4052580)

SYMPTOM:
Multipathing not supported for NVMe devices under VxVM.

DESCRIPTION:
NVMe devices, being non-SCSI devices, are not considered for multipathing.

RESOLUTION:
Changes introduced to support multipathing for NVMe devices.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSaslapm 8.0.0.1600

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
EMC PowerStore is a new array, and support needs to be added for it.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support is needed to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSvxvm-8.0.0.1200

* 4066259 (Tracking ID: 4062576)

SYMPTOM:
When hastop -local is used to stop the cluster, the dg deport command hangs. The below stack trace is observed in the system logs:

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is seen due to kernel-side updates in request queue handling. The existing VxVM code sets the request handling entry point (make_request_fn) to vxvm_gen_strategy, and this functionality is impacted.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

Patch ID: VRTSvxvm-8.0.0.1100

* 4064786 (Tracking ID: 4053230)

SYMPTOM:
RHEL 8.5 support is to be provided with IS 8.0

DESCRIPTION:
RHEL 8.5 ZDS support is being provided with IS 8.0

RESOLUTION:
VxVM packages are available with RHEL 8.5 compatibility

* 4065628 (Tracking ID: 4065627)

SYMPTOM:
VxVM modules are not loaded after an OS upgrade followed by a reboot.

DESCRIPTION:
Once the stack installation is completed with configuration, the vxvm directory is not created under /lib/modules/<upgraded_kernel>/veritas/ after an OS upgrade.

RESOLUTION:
VxVM code is updated with the required changes.

Patch ID: VRTSvxfs-8.0.0.2900

* 4092518 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and run in parallel. With a large number of jobs, a job that is already freed might be referenced, due to which the replication service generates a core and the job might fail.

RESOLUTION:
Code is updated to take a hold on the job while checking for an invalid job configuration.

* 4097466 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld was not updating its state when the job was restarted in recovery mode. Because of this, the job state remained 'running' even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new ckpt for the first sync after failover, corrupting the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source was failing with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4107367 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, hanging the job on the source and requiring manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.

* 4111457 (Tracking ID: 4117827)

SYMPTOM:
Without a tunable change, the log file permission is always 600 (EO compliant).

DESCRIPTION:
Tunable values and behavior:

Value                   Behavior
0 (default)          600 permissions, update existing file permissions on upgrade
1                    640 permissions, update existing file permissions on upgrade
2                    644 permissions, update existing file permissions on upgrade
3                    Inherit umask, update existing file permissions on upgrade
10                   600 permissions, don't touch existing file permissions on upgrade
11                   640 permissions, don't touch existing file permissions on upgrade
12                   644 permissions, don't touch existing file permissions on upgrade
13                   Inherit umask, don't touch existing file permissions on upgrade
--------------------------------------------------------------------------------------

A new tunable is added to the vxtunefs command; it is a per-node global tunable (not per filesystem).
For the Executive Order, CPI provides a workflow to update the tunable during installation/upgrade/configuration,
which takes care of updating all nodes.

RESOLUTION:
New tunable is added to vxtunefs command.
How to set tunable:
/opt/VRTS/bin/vxtunefs -D eo_perm=1

* 4112417 (Tracking ID: 4094326)

SYMPTOM:
mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"

DESCRIPTION:
In vx_sl_kmcache_init(), a kmcache is initialized for each level (in this case, 8 levels) separately. A macro is used to pass the cache name as an argument to kmem_cache_create().

#define VX_SL_KMCACHE_NAME(level)       "vx_sl_node_"#level
#define VX_SL_KMCACHE_CREATE(level)                                     \
                kmem_cache_create(VX_SL_KMCACHE_NAME(level),            \
                                  VX_KMEM_SIZE(VX_SL_KMCACHE_SIZE(level)),\
                                  0, NULL, NULL, NULL, NULL, NULL, 0);


While using this macro, the identifier "level" was passed as the argument, which the # stringification expanded to "vx_sl_node_level" for all 8 iterations of the for loop. This caused the caches for all 8 levels to be allocated with the same name.

RESOLUTION:
Pass a separate variable value (the level value) to VX_SL_KMCACHE_NAME, as is done in vx_wb_sl_kmcache_init().

* 4118795 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl results in a "No such device or address" error.

DESCRIPTION:
When the setfacl command runs on directories that have the VX_ATTR_INDIRECT type of ACL attribute, it does not remove the existing ACL attribute before adding the new one, which ideally should not happen. This results in getfacl failing with the "No such device or address" error.

RESOLUTION:
Code changes are done to remove the VX_ATTR_INDIRECT type ACL in the setfacl code path.
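A minimal reproduction sketch (mount point, directory, and user are hypothetical):

# setfacl -m u:testuser:r /mnt/vxfs/dir1
# getfacl /mnt/vxfs/dir1

Before the fix, the getfacl call on an affected directory returned the "No such device or address" error.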

* 4119023 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, while correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or only wanted to validate whether the flag is missing. Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION:
Code is added so that fsck does not try to fix the problem when run with the -n option. The SOFTWORM scenario is also handled.

* 4123143 (Tracking ID: 4123144)

SYMPTOM:
The fsck binary generates a core dump.

DESCRIPTION:
In internal testing, the fsck binary generated a core dump due to the below assert when trying to repair a corrupted file system using the following command:
./fsck -o full -y /dev/vx/rdsk/testdg/vol1

ASSERT(fset >= VX_FSET_STRUCT_INDEX)

RESOLUTION:
Added code to set default (primary) fileset by scanning the fset header list.

Patch ID: VRTSvxfs-8.0.0.2700

* 4113911 (Tracking ID: 4113121)

SYMPTOM:
The VxFS module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8.

RESOLUTION:
Updated VXFS to support RHEL 8.8.

* 4114019 (Tracking ID: 4067505)

SYMPTOM:
Fsck reports error invalid VX_AF_OVERLAY aflags

DESCRIPTION:
If the inode does not have push linkage (inode not allocated, or inode and data already pushed), pushing of the data blocks is skipped when the inode is removed. The inode then has overlay data blocks, gen bumped up, and IEREMOVE set. During extop processing, the size is set to 0 and the bmap is cleared. This is a valid scenario.

While validating inodes with the overlay flag set, fsck expects gen to differ only if the overlay inode has IEREMOVE set and it is the last clone in the chain.

RESOLUTION:
If the push inode is not present, allow gen to be different even if the clone is not the last in the chain.

* 4114020 (Tracking ID: 4083056)

SYMPTOM:
A hang is observed while punching a smaller hole over a bigger hole.

DESCRIPTION:
A hang was observed while punching a smaller hole over a bigger hole in a file, due to a tight race between processing the hole punch on the file and flushing it to disk.

RESOLUTION:
Code changes are done to fix the race.

* 4114021 (Tracking ID: 4101634)

SYMPTOM:
Fsck reports the errors 'directory block containing inode has incorrect file-type' and 'directory contains invalid directory blocks'.

DESCRIPTION:
While doing directory sanity checks, fsck skipped updating the new directory type on disk in case of a file-type error; hence fsck reported the incorrect file-type error and 'directory contains invalid directory blocks'.

RESOLUTION:
While doing directory sanity checks, fsck now updates the new directory type on disk in case of a file-type error.

Patch ID: VRTSvxfs-8.0.0.2600

* 4114654 (Tracking ID: 4114652)

SYMPTOM:
The VxFS module fails to load on RHEL8.7 minor kernel 4.18.0-425.19.2.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Updated VXFS to support RHEL 8.7 minor kernel 4.18.0-425.19.2.

Patch ID: VRTSvxfs-8.0.0.2300

* 4108381 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSvxfs-8.0.0.2200

* 4100925 (Tracking ID: 4100926)

SYMPTOM:
The VxFS module failed to load on RHEL8.7.

DESCRIPTION:
RHEL8.7 is a new release with kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.7.

Patch ID: VRTSvxfs-8.0.0.2100

* 4095889 (Tracking ID: 4095888)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party component used by VxFS.

DESCRIPTION:
VxFS uses the Sqlite third-party component, in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version of this third-party component, in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.1800

* 4068960 (Tracking ID: 4073203)

SYMPTOM:
Veritas File Replication might generate a core while replicating files to the target when rename and unlink operations are performed on a file with FCL (file change log) mode on.

DESCRIPTION:
The vxfsreplicate process of Veritas File Replicator might get a segmentation fault, with file change log mode on, when rename and unlink operations are performed on a file.

RESOLUTION:
Addressed the issue so that files are replicated correctly in scenarios involving rename and unlink operations with FCL mode on.

* 4071108 (Tracking ID: 3988752)

SYMPTOM:
Use the ldi_strategy() routine instead of bdev_strategy() for I/Os on Solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and was causing performance issues when used for I/Os. Solaris recommends using the LDI framework for all I/Os.

RESOLUTION:
Code is modified to use the LDI framework for all I/Os on Solaris.

* 4072228 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.
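For example, assuming this tunable is set like the other per-node global vxtunefs tunables shown in this readme (the value is a placeholder):

/opt/VRTS/bin/vxtunefs -D vx_ninact_proc_threads=<value>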

* 4078335 (Tracking ID: 4076412)

SYMPTOM:
Addresses the initial requirements of Executive Order (EO) 14028, which is intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents, and to improve the nation's cybersecurity and any organization's cybersecurity and software supply chain integrity.

DESCRIPTION:
Some of the initial requirements enable logging that is compliant with the Executive Order. This comprises command logging, logging unauthorized access in the filesystem, and logging WORM events on the filesystem. It also includes changes to display IP addresses for Veritas File Replication at the control plane, based on a tunable.

RESOLUTION:
The initial requirements of the EO are addressed in this release.

As per the EO, some of the requirements are tunable based: for example, IP logging wherever applicable (for VFR at the control plane, not for every data transfer), and the logging of some kernel events, such as WORM events, to syslog.

A new tunable, eo_logging_enable, is introduced. There is a protocol change because of the introduction of this tunable, which also affects update patches; an intermediate protocol version between the existing protocol version and the new one may be needed.

For VFR, the IP addresses of the source and destination are logged as part of the EO. The IP addresses are included in the log while starting/resuming a job in VFR.
Log location: /var/VRTSvxfs/replication/log/mount_point-job_name.log

There are two ways to fetch the IP addresses of the source and target. One is to use the IP addresses stored in the link structure of a session; these are obtained by resolving the source and target hostnames, may contain both IPv4 and IPv6 addresses for a node, and give no indication of which IP the actual connection used. The second is to use the socket descriptor from an active connection of the session, which yields the actual IP addresses used for the connection between source and target. The change fetches the IP addresses from the socket descriptor after the connections are established.

More details on EO logging for the initial VxFS release:
https://confluence.community.veritas.com/pages/viewpage.action?spaceKey=VES&title=EO+VxFS+Scrum+Page

* 4078520 (Tracking ID: 4058444)

SYMPTOM:
Loop mounts using files on VxFS fail on Linux systems running kernel version 4.1 or higher.

DESCRIPTION:
Starting with the 4.1 version of the Linux kernel, the loop.ko driver uses a new API for read and write requests to the backing file, which was not previously implemented in VxFS. This causes the virtual disk reads during mount to fail while using the -o loop option, causing the mount to fail as well. The same functionality worked on older kernels (such as the version found in RHEL7).

RESOLUTION:
Implemented a new API for all regular files on VxFS, allowing usage of the loop device driver against files on VxFS as well as any other kernel drivers using the same functionality.
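A minimal usage sketch of the now-supported path (image file and mount points are hypothetical):

# dd if=/dev/zero of=/mnt/vxfs/disk.img bs=1M count=512
# mkfs.ext4 -F /mnt/vxfs/disk.img
# mount -o loop /mnt/vxfs/disk.img /mnt/loopfs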

* 4079142 (Tracking ID: 4077766)

SYMPTOM:
VxFS kernel module might leak memory during readahead of directory blocks.

DESCRIPTION:
VxFS kernel module might leak memory during readahead of directory blocks due to missing free operation of readahead-related structures.

RESOLUTION:
Code in readahead of directory blocks is modified to free up readahead-related structures.

* 4079173 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, the release of the cluster reservation might fail during the unmount operation, resulting in a failure of the fsck command with the 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to release cluster reservation in unmount operation properly even for cluster-mounted filesystem.

* 4082260 (Tracking ID: 4070814)

SYMPTOM:
A security vulnerability was observed in Zlib, a third-party component used by VxFS.

DESCRIPTION:
Internal security scans found vulnerabilities in Zlib.

RESOLUTION:
The third-party component Zlib is upgraded to address these vulnerabilities.

* 4082865 (Tracking ID: 4079622)

SYMPTOM:
Migration used the normal read/write file operations instead of the read/write iter functions, which VxFS requires from Linux kernel 5.14 onwards.

DESCRIPTION:
Starting with the 5.14 version of the Linux kernel, VxFS uses the read/write iter file operations for migration.

RESOLUTION:
Developed a common read/write function that is called for both the normal and the iter read/write file operations.

* 4083335 (Tracking ID: 4076098)

SYMPTOM:
FS migration from ext4 to VxFS on Linux machines with falcon-sensor enabled may fail.

DESCRIPTION:
The falcon-sensor driver installed on the test machines taps system calls such as close and performs additional VFS calls such as read. Due to this, the VxFS driver received a read file operation call from the fsmigbgcp process context. Read operations are allowed only on special files from the fsmigbgcp process context; since the file in question was not a special file, the VxFS debug code asserted.

RESOLUTION:
As a fix, reads on non-special files are now allowed from the fsmigbgcp process context.

[Note: Other related issues were fixed in this incident, but they involve negative test scenarios (such as trying to overwrite the migration special file - deflist) that are unlikely to be hit in customer environments, so they are not covered here.]

* 4085623 (Tracking ID: 4085624)

SYMPTOM:
While running fsck with -o full and -y on a corrupted FS, fsck may dump core.

DESCRIPTION:
Fsck builds various in-core maps based on on-disk structural files, one such map is dotdotmap (which stores 
info about parent directory). For regular fset (like 999), the dotdotmap is initialized only for primary ilist
(inode list for regular inodes). It is skipped for attribute ilist (inode list for attribute inodes). This is because
attribute inodes do not have parent directories as is the case for regular inodes.

While attempting to resolve inconsistencies in FS metadata, fsck tries to clean up dotdotmap for attribute ilist. 
In the absence of a check, dotdotmap is re-initialized for attribute ilist causing segmentation fault.

RESOLUTION:
In the code path where fsck attempts to reinitialize the dotdotmap, a check is added to skip reinitialization of the dotdotmap for the attribute ilist.

* 4085839 (Tracking ID: 4085838)

SYMPTOM:
Command fsck may generate core due to processing of zero size attribute inode.

DESCRIPTION:
The fsck command fails because it allocates memory for a zero-size attribute inode and then dereferences it.

RESOLUTION:
The fsck command is modified to skip processing of zero-size attribute inodes.

* 4086085 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
The VxFS mount operation supports the context option to override existing extended attributes, or to specify a different default context for file systems that do not support extended attributes. A system panic is observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.
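An example of the mount invocation involved (device, mount point, and SELinux context are placeholders):

# mount -t vxfs -o context="system_u:object_r:tmp_t:s0" /dev/vx/dsk/testdg/vol1 /mnt1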

* 4088341 (Tracking ID: 4065575)

SYMPTOM:
A write operation might be unresponsive on a locally mounted VxFS filesystem in a no-space condition.

DESCRIPTION:
A write operation might be unresponsive on a locally mounted VxFS filesystem in a no-space condition due to a race between two writer threads to take the read-write lock on the file in order to perform a delayed allocation on it.

RESOLUTION:
Code is modified so that the thread already holding the read-write lock completes the delayed allocation; the other thread skips over that file.

Patch ID: VRTSvxfs-8.0.0.1700

* 4081150 (Tracking ID: 4079869)

SYMPTOM:
Security vulnerabilities were found in VxFS while running security scans.

DESCRIPTION:
Internal security scans found some vulnerabilities in VxFS third-party components. Attackers can exploit these security vulnerabilities to attack the system.

RESOLUTION:
The third-party components are upgraded to resolve these vulnerabilities.

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security vulnerabilities were found in VxFS while running security scans.

DESCRIPTION:
Internal security scans found some vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
The third-party component Zlib is upgraded to resolve these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.1200

* 4055808 (Tracking ID: 4062971)

SYMPTOM:
All operations, such as ls and create, are blocked on the file system.

DESCRIPTION:
In a WORM file system, directory renames are not allowed. When partition directories are enabled, new directories are created and files are moved under the leaf directory based on a hash. Due to the WORM FS, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming in the context of a partition directory split and merge.

* 4056684 (Tracking ID: 4056682)

SYMPTOM:
Information about new features on a file system is not displayed when fsadm (the file system administration utility) is run on the underlying device.

DESCRIPTION:
Information about new features such as WORM (write once read many) and auditlog is correctly reported by the fsadm utility on a mounted file system, but the new feature information is not displayed when fsadm is run on the underlying device.

RESOLUTION:
Updated the fsadm utility to display the new feature information correctly.

* 4062606 (Tracking ID: 4062605)

SYMPTOM:
Minimum retention time cannot be set if the maximum retention time is not set.

DESCRIPTION:
The tunable for minimum retention time could not be set unless the tunable for maximum retention time was set. This was implemented to ensure that the minimum time is lower than the maximum time.

RESOLUTION:
Setting the minimum and maximum retention times is now independent of each other; the minimum retention time can be set without the maximum retention time being set.

* 4065565 (Tracking ID: 4065669)

SYMPTOM:
Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.

DESCRIPTION:
Creation of non-WORM checkpoints failed because all WORM-related validations were extended to non-WORM checkpoints as well.

RESOLUTION:
WORM-related validations are restricted to WORM fsets only, allowing non-WORM checkpoints to be created.

* 4065651 (Tracking ID: 4065666)

SYMPTOM:
All operations, such as ls and create, are blocked on a file system directory that contains WORM-enabled files whose retention period has not expired.

DESCRIPTION:
In a WORM file system, files whose retention period has not expired cannot be renamed. When partition directories are enabled, new directories are created and files are moved under the leaf directory based on a hash. Due to the WORM FS, this rename operation was blocked and the split could not complete, blocking all operations on the file system.

RESOLUTION:
Allow renaming of files, even if the retention period has not expired, in the context of a partition directory split and merge.

Patch ID: VRTSvxfs-8.0.0.1100

* 4061114 (Tracking ID: 4052883)

SYMPTOM:
The VxFS module fails to load on RHEL 8.5.

DESCRIPTION:
This issue occurs due to changes in the RHEL 8.5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL 8.5.

Patch ID: VRTSvxfen-8.0.0.2200

* 4108953 (Tracking ID: 4107779)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7 GA kernel(4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7) is now introduced.

Patch ID: VRTSvxfen-8.0.0.2100

* 4102352 (Tracking ID: 4100203)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7(RHEL8.7) is now introduced.

Patch ID: VRTSvxfen-8.0.0.1800

* 4087166 (Tracking ID: 4087134)

SYMPTOM:
The error message 'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed' appears after starting the vxfen service if the parent directory path of vxfen.log is not present.

DESCRIPTION:
If the parent directory path of vxfen.log is not present, the following error message appears after starting the vxfen service:
'Touch /var/VRTSvcs/log/vxfen/vxfen.log failed'.

RESOLUTION:
The parent directory path for the vxfen.log file is now created globally if it is not present.
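On releases without this fix, a manual workaround sketch (path as given in the message above):

# mkdir -p /var/VRTSvcs/log/vxfen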

* 4088061 (Tracking ID: 4089052)

SYMPTOM:
On RHEL9, Node panics while running vxfenswap as a part of Online Coordination Point Replacement operation.

DESCRIPTION:
RHEL9 introduced a fortification panic that is triggered when the kernel's static check detects a buffer overflow. This check wrongly identified a buffer overflow where strings are copied by using unions.

RESOLUTION:
Moved to bcopy internally for this scenario so that the kernel-side check is skipped.

Patch ID: VRTSvxfen-8.0.0.1500

* 4072717 (Tracking ID: 4072335)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 5.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
6(RHEL8.6) is now introduced.

Patch ID: VRTSvxfen-8.0.0.1200

* 3951882 (Tracking ID: 4004248)

SYMPTOM:
The vxfend process segfaults and dumps core.

DESCRIPTION:
During a fencing race, vxfend sometimes crashes and generates a core dump.

RESOLUTION:
Vxfend internally uses fork and exec to execute subtasks. The child process was using the same file descriptors for logging; this simultaneous read of the same file through a single file descriptor resulted in incorrect reads and hence the process crash and core dump. The fix creates a new file descriptor for the child process.

Patch ID: VRTSvxfen-8.0.0.1100

* 4064785 (Tracking ID: 4053171)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
5(RHEL8.5) is now introduced.

Patch ID: VRTSveki-8.0.0.2500

* 4118568 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failure due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation failed because the storageapi changes were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support in the Veki makefiles for creating the storageapi build area, for the storageapi packaging changes via Veki, and for the storageapi build via Veki. This packages storageapi along with Veki and resolves all interdependencies.

Patch ID: VRTSveki-8.0.0.1800

* 4056647 (Tracking ID: 4055072)

SYMPTOM:
Upgrading the VRTSveki package using yum reports the following error: "Starting veki /etc/vx/veki: line 51: [: too many arguments".

DESCRIPTION:
While upgrading the VRTSveki package, the presence of multiple module directories might result in the upgrade script printing the error message.

RESOLUTION:
Code is modified to check for specific module directory related to current kernel version in VRTSveki upgrade script.

Patch ID: VRTSveki-8.0.0.1200

* 4070027 (Tracking ID: 4066550)

SYMPTOM:
After a reboot, the LLT and GAB services fail to start because the Veki service start times out.

DESCRIPTION:
After a reboot, when systemd tries to bring up multiple services in parallel with Veki, the Veki startup times out. The default Veki startup timeout was 90 seconds; this patch increases it to 300 seconds.

RESOLUTION:
Increased the Veki start timeout to 300 seconds.
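Conceptually, the change is equivalent to raising the systemd start timeout for the service; a drop-in of the following form illustrates the idea (the unit name veki.service is an assumption, not confirmed by this readme):

# mkdir -p /etc/systemd/system/veki.service.d
# printf '[Service]\nTimeoutStartSec=300\n' > /etc/systemd/system/veki.service.d/timeout.conf
# systemctl daemon-reload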

Patch ID: VRTSvcsea-8.0.0.2500

* 4118769 (Tracking ID: 4073508)

SYMPTOM:
Oracle virtual fire drill fails because the Oracle password file location changed in Oracle version 21c.

DESCRIPTION:
The Oracle password file has moved to $ORACLE_BASE/dbs from Oracle version 21c onwards.

RESOLUTION:
Environment variables are used to point to the updated path of the password file.

From Oracle 21c onwards, it is mandatory for a client to configure the .env file path in the EnvFile attribute. This file must have the ORACLE_BASE path added for the Oracle virtual fire drill feature to work.

Sample EnvFile content with the ORACLE_BASE path for Oracle 21c:
[root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile
ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;

Sample attribute value: EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"

Patch ID: VRTSvcsea-8.0.0.1800

* 4030767 (Tracking ID: 4088595)

SYMPTOM:
hapdbmigrate utility fails to online the oracle service group due to a timing issue.

DESCRIPTION:
hapdbmigrate utility fails to online the oracle service group due to a timing issue.
example:
./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
Cluster prechecks and validation                                 Done
Taking PDB resource [pdb1_res] offline                           Done
Modification of cluster configuration                            Done
VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%
VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster
Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res]Done
For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
hapdbmigrate utility modified to ensure enough time elapses between probe of PDB resource and online of CDB group.

* 4079559 (Tracking ID: 4064917)

SYMPTOM:
Oracle agent fails to generate ora_api (which is used for Intentional Offline functionality of Oracle agent) using build_oraapi.sh script for Oracle 21c.

DESCRIPTION:
The build_oraapi.sh script could not link against the library named libdbtools21.a, because the new Oracle 21c environment provides the generic library '$ORACLE_HOME/rdbms/lib/libdbtools.a'.

RESOLUTION:
On an Oracle 21c database environment, the script picks the generic library; on older database environments it picks the database-version-specific library.

Patch ID: VRTSvcsag-8.0.0.2500

* 4118318 (Tracking ID: 4113151)

SYMPTOM:
A dependent DiskGroup resource fails to come online due to a disk group import failure.

DESCRIPTION:
VMwareDisksAgent reports its resource online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers worked around this problem by adding retry attempts, but the same values cannot be applied to every environment.

RESOLUTION:
Added a finite wait for the VMware disk to be present in the vxdmp database before the online completes.

* 4118448 (Tracking ID: 4075950)

SYMPTOM:
When an IPv6 VIP switches from node1 to node2 in a cluster, it takes longer for the neighboring information to be updated and for traffic to reach node2 on the reassigned address.

DESCRIPTION:
After the service group switches from node1 to node2, the IPv6 VIP is not reachable from the network switch. The MAC address changes after the node switch, but the network is not updated. Just as IPv4 VIPs use gratuitous ARP, when an IPv6 VIP switches from node1 to node2 the network must be updated for the MAC address change.

RESOLUTION:
Network devices that communicate with the VIP cannot establish a connection with it until the VIP is pinged from the switch or the 'ip -6 neighbor flush all' command is run on the cluster nodes. Neighbor-flush logic is added to the IP/MultiNIC agents so that the MAC address changed during a floating VIP switchover is updated in the network.

* 4118455 (Tracking ID: 4118454)

SYMPTOM:
When the root user login shell is set to /sbin/nologin in the /etc/passwd file, a Process agent resource fails to come online.

DESCRIPTION:
From the engine_A.log, the below errors were logged:
2023/05/31 11:34:52 VCS NOTICE V-16-10031-20704 Process:Process:imf_getnotification:Received notification for vxamf-group sendmail
2023/05/31 11:35:38 VCS ERROR V-16-10031-9502 Process:sendmail:online:Could not online the resource, make sure user-name is correct.
2023/05/31 11:35:39 VCS INFO V-16-2-13716 Thread(140147853162240) Resource(sendmail): Output of the completed operation (online)
==============================================
This account is currently not available.
==============================================

RESOLUTION:
The Process agent is enhanced to support nologin shell for root user. If user shell is set to /sbin/nologin, the agent starts a process using /bin/bash shell.

* 4118767 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the agent (bundled Application agent) incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.

DESCRIPTION:
The extra space in the process string under MonitorProcesses in the ArgListValues even shows up when displaying the resource.

RESOLUTION:
For the monitored process (not program), only leading and trailing spaces are now removed; extra spaces between words are preserved.

Patch ID: VRTSvcsag-8.0.0.2400

* 4113057 (Tracking ID: 4113056)

SYMPTOM:
ReuseMntPt is not honored when the same mountpoint is used for two resources with different FSType.

DESCRIPTION:
The ReuseMntPt attribute is set to 1 when the same mount point needs to be specified in more than one Mount resource. However, when the two resources have different FSType values, the state of the offline resource shows as offline|unknown instead of offline, with no errors on the online resource.

RESOLUTION:
The required code changes have been done to report the correct state of the resource.

Patch ID: VRTSvcsag-8.0.0.1800

* 4030767 (Tracking ID: 4088595)

SYMPTOM:
The hapdbmigrate utility fails to bring the Oracle service group online due to a timing issue.

DESCRIPTION:
The hapdbmigrate utility fails to bring the Oracle service group online due to a timing issue. For example:
./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xml
Cluster prechecks and validation                                 Done
Taking PDB resource [pdb1_res] offline                           Done
Modification of cluster configuration                            Done
VCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%

VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the cluster

Bringing PDB resource [pdb1_res] online on CDB resource [cdb2_res]Done

For further details, see '/var/VRTSvcs/log/hapdbmigrate.log'

RESOLUTION:
The hapdbmigrate utility is modified to ensure that enough time elapses between the probe of the PDB resource and the online of the CDB group.

* 4058802 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes have been made to support Oracle 21c with Storage Foundation for Databases.

* 4079372 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes have been made to support Oracle 21c with Storage Foundation for Databases.

* 4079559 (Tracking ID: 4064917)

SYMPTOM:
Oracle agent fails to generate ora_api (which is used for Intentional Offline functionality of Oracle agent) using build_oraapi.sh script for Oracle 21c.

DESCRIPTION:
The build_oraapi.sh script could not link against a library named libdbtools21.a because, in the new Oracle 21c environment, only the generic library '$ORACLE_HOME/rdbms/lib/libdbtools.a' is present.

RESOLUTION:
On an Oracle 21c database environment, the script now picks the generic library; on older database environments, it picks the database version-specific library.
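
A minimal sketch of that selection logic, assuming the paths given above (the version-suffix handling is illustrative, not the script's exact code):
   # Prefer the generic library (present on Oracle 21c); otherwise fall
   # back to the version-specific library, e.g. libdbtools19.a on 19c.
   if [ -f "$ORACLE_HOME/rdbms/lib/libdbtools.a" ]; then
       DBTOOLS_LIB="$ORACLE_HOME/rdbms/lib/libdbtools.a"
   else
       DBTOOLS_LIB="$ORACLE_HOME/rdbms/lib/libdbtools${DB_VERSION}.a"
   fi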

* 4081774 (Tracking ID: 4083099)

SYMPTOM:
When OverlayIP is configured, the AzureIP resource offline operation fails.

DESCRIPTION:
The AzureIP resource fails to go offline when OverlayIP is configured because the Azure API routes.delete, part of the azure-mgmt-network module, has been deprecated.

RESOLUTION:
The Azure agent now uses the new routes.begin_delete API, as suggested by Azure.

Patch ID: VRTSvcs-8.0.0.2400

* 4111623 (Tracking ID: 4100720)

SYMPTOM:
GCO fails to configure for the latest RHEL/SLES platform due to an incorrect CIDR value.

DESCRIPTION:
GCO configuration fails because the altname defined on the latest RHEL/SLES kernels returns an incorrect CIDR value.

RESOLUTION:
Updated the code to pick the correct CIDR value during GCO configuration when an altname is defined.

Patch ID: VRTSvcs-8.0.0.2100

* 4103077 (Tracking ID: 4103073)

SYMPTOM:
Security vulnerabilities present in existing version of Netsnmp.

DESCRIPTION:
The Netsnmp component is upgraded to fix security vulnerabilities.

RESOLUTION:
Upgraded the Netsnmp component to fix security vulnerabilities for security patch IS 8.0U1_SP4.

Patch ID: VRTSvcs-8.0.0.1800

* 4084675 (Tracking ID: 4089059)

SYMPTOM:
File permission for gcoconfig.log is not 0600.

DESCRIPTION:
The default file permission was 0644, which allowed read access to group and others, so the file permission needed to be restricted.

RESOLUTION:
The file is now created with permission 0600 so that it is readable and writable only by its owner.
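
You can confirm the permission after the fix, for example (assuming the log resides in the usual VCS log directory):
   # stat -c '%a %n' /var/VRTSvcs/log/gcoconfig.log
   600 /var/VRTSvcs/log/gcoconfig.log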

Patch ID: VRTSvcs-8.0.0.1400

* 4065820 (Tracking ID: 4065819)

SYMPTOM:
Protocol version upgrade from Access Appliance 7.4.3.200 to 8.0 failed.

DESCRIPTION:
During a rolling upgrade, the IPM message 'MSG_CLUSTER_VERSION_UPDATE' is generated, and as part of it some validations are performed before bumping up the protocol version. If validation succeeds, a broadcast message to bump up the cluster protocol is sent, and a success message is immediately returned to haclus. Thus, the success message is sent before the broadcast message that actually updates the protocol version is processed. This window lasts only a very short period; after the broadcast message is processed successfully, the protocol version is properly updated in the configuration files and the command shows the correct value.

RESOLUTION:
Instead of immediately returning a success message, the haclus CLI now waits until the upgrade is applied on the broadcast channel, and only then is the success message sent.

Patch ID: VRTSspt-8.0.0.1400

* 4085610 (Tracking ID: 4090433)

SYMPTOM:
iostat and vmstat command option changes in FirstLook

DESCRIPTION:
The time stamp for each collected stat is missing for vmstat on Linux, and for the iostat and vmstat commands on Solaris and AIX.

RESOLUTION:
The following new flags are introduced for each platform:
    1) Linux:
       vmstat:
           -t: Append timestamp to each line
    2) Solaris:
       iostat:
           -Td: Display a time stamp. Specify d for standard date format.
       vmstat:
           -Td: Display a time stamp. Specify d for standard date format.
    3) AIX:
       iostat:
           -s: Specifies the system throughput report
           -T: Displays the time stamp.
           -D: Displays the extended tape/drive utilization report.
           -l: Displays the output in long listing mode.
       vmstat:
           -t: Prints the time-stamp next to each line of output of vmstat.
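
For example, on Linux a timestamped collection looks like this (the interval and count values are illustrative):
   # vmstat -t 5 3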

* 4088066 (Tracking ID: 4090446)

SYMPTOM:
vxstat log collection improvements in FirstLook

DESCRIPTION:
Currently, FirstLook collects only volume-level statistics through the vxstat command.

RESOLUTION:
Options are added to the vxstat invocation to collect statistics for volume, plex, disk, and subdisk objects.
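
For example, statistics for all four object types can be gathered with a command along these lines (the disk group name and interval are illustrative):
   # vxstat -g testdg -v -p -s -d -i 5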

* 4091983 (Tracking ID: 4092090)

SYMPTOM:
FirstLook should have OS flavor information stored in its log directory.

DESCRIPTION:
FirstLook collects the output of "uname -a" as part of its log collection, but this does not provide OS flavor information on Linux platforms such as RHEL7/8.

RESOLUTION:
Code changes have been made in the FirstLook code base to include an OS_info file inside the system folder, containing the OS name and its flavor.

* 4096274 (Tracking ID: 4095687)

SYMPTOM:
While restoring a version-8 metasave on a sparse volume, the restore operation does not happen correctly.

DESCRIPTION:
While restoring a version-8 metasave on a sparse volume, the restore operation does not happen correctly due to concurrent I/Os triggered on the sparse volume. When the same metasave is replayed on a normal volume/vset or on a sparse file (for non-MVFS), the replay works as expected.

RESOLUTION:
Concurrent writes are now disabled during the metasave restore operation by default. A hidden option ("-p") has been added to enable multi-threading during metasave restore if needed (mostly when replaying on normal volumes or volume sets).

Patch ID: VRTSsfmh-vom-HF0800411

* 4113012 (Tracking ID: 4113011)

SYMPTOM:
vxlist output on the InfoScale server shows the volume name as "-" and the status as Unknown.

DESCRIPTION:
vxlist output on the InfoScale server shows the volume name as "-" and the status as Unknown.

RESOLUTION:
A new vxdclid plugin for VxVM has been created.

Patch ID: VRTSrest-2.0.0.1300

* 4088973 (Tracking ID: 4089451)

SYMPTOM:
When a read-only file system was created on a volume, a GET request for the mount point's details threw an error.

DESCRIPTION:
When a read-only file system was created on a volume, a GET request for the mount point's details threw an error because the command being used did not work for a read-only file system.

RESOLUTION:
The appropriate command is now used to get the details of the mount point.

* 4089033 (Tracking ID: 4089453)

SYMPTOM:
Some VCS REST APIs were crashing the Gunicorn worker.

DESCRIPTION:
Calling some VCS-related APIs crashed the Gunicorn worker handling the request; a new worker was then spawned automatically.

RESOLUTION:
Fixed the related foreign function call interface in the source code.

* 4089041 (Tracking ID: 4089449)

SYMPTOM:
The GET resources API on an empty service group was throwing an error.

DESCRIPTION:
When the GET resources API was called on an empty service group, it returned an error because that scenario was not handled.

RESOLUTION:
The scenario is now handled in the code, which resolves the issue.

* 4089046 (Tracking ID: 4089448)

SYMPTOM:
Logging in REST API was not in EO-compliant format.

DESCRIPTION:
The timestamp format was not EO-compliant, and some attributes required for EO compliance were missing.

RESOLUTION:
Changed the timestamp format and added new attributes, such as node name, response time, source and destination IP addresses, and username, to the REST server logs.

Patch ID: VRTSpython-3.9.2.24

* 4114375 (Tracking ID: 4113851)

SYMPTOM:
Open CVEs were detected in the Python programming language and other Python modules used in VRTSpython.

DESCRIPTION:
Some open CVEs are exploitable in VRTSpython for IS 8.0.

RESOLUTION:
VRTSpython is patched for all the open CVEs impacting IS 8.0.

Patch ID: VRTSperl-5.34.0.4

* 4072234 (Tracking ID: 4069607)

SYMPTOM:
Security vulnerability detected in VRTSperl 5.34.0.0 released with InfoScale 8.0.

DESCRIPTION:
Security vulnerability detected in the Net::Netmask module.

RESOLUTION:
Upgraded the Net::Netmask module and re-created VRTSperl version 5.34.0.1 to fix the vulnerability.

* 4075150 (Tracking ID: 4075149)

SYMPTOM:
Security vulnerabilities detected in the OpenSSL packaged with VRTSperl/VRTSpython released with InfoScale 8.0.

DESCRIPTION:
Security vulnerabilities were detected in OpenSSL.

RESOLUTION:
Upgraded the OpenSSL version and re-created VRTSperl/VRTSpython to fix the vulnerabilities.

Patch ID: VRTSodm-8.0.0.2900

* 4057432 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results into emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock. Corrected the vxgms dependency in the ODM service file.
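
Conceptually, the serialization is equivalent to wrapping depmod in an exclusive file lock, as in this sketch (the lock-file path is illustrative):
   # flock /var/lock/vx_depmod.lock depmod -a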

Patch ID: VRTSodm-8.0.0.2700

* 4113912 (Tracking ID: 4113118)

SYMPTOM:
The ODM module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8 kernel.

RESOLUTION:
Updated ODM to support RHEL 8.8.

Patch ID: VRTSodm-8.0.0.2600

* 4114656 (Tracking ID: 4114655)

SYMPTOM:
The ODM module fails to load on RHEL8.7 minor kernel 4.18.0-425.19.2.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Updated ODM to support RHEL 8.7 minor kernel 4.18.0-425.19.2.

Patch ID: VRTSodm-8.0.0.2300

* 4108585 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-odm script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSodm-8.0.0.2200

* 4100923 (Tracking ID: 4100922)

SYMPTOM:
The ODM module fails to load on RHEL8.7.

DESCRIPTION:
RHEL8.7 is a new release with kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.7.

Patch ID: VRTSllt-8.0.0.2400

* 4116421 (Tracking ID: 4113340)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 8 (RHEL8.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 7.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 8 (RHEL8.8) is now introduced.

Patch ID: VRTSllt-8.0.0.2200

* 4108947 (Tracking ID: 4107779)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64.

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than the RHEL8 Update 7 GA kernel (4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64 is now introduced.

Patch ID: VRTSllt-8.0.0.2100

* 4101232 (Tracking ID: 4100203)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) is now introduced.

Patch ID: VRTSllt-8.0.0.1800

* 4061158 (Tracking ID: 4061156)

SYMPTOM:
I/O error on the /sys/kernel/slab folder

DESCRIPTION:
After the LLT module is loaded, the ls command throws an I/O error on the /sys/kernel/slab folder.

RESOLUTION:
The I/O error on the /sys/kernel/slab folder after loading the LLT module is now fixed.

* 4079637 (Tracking ID: 4079636)

SYMPTOM:
The kernel panics with a null pointer dereference in llt_dump_mblk when LLT is configured over IPsec.

DESCRIPTION:
LLT uses the skb's sp pointer to chain socket buffers internally. When LLT is configured over IPsec, LLT receives skbs from the IP layer with the sp pointer already set. These skbs were wrongly identified by LLT as chained skbs. The sp pointer field is now reset before it is reused for internal chaining.

RESOLUTION:
With the sp pointer field reset before reuse, no panic is observed after applying this patch.

* 4079662 (Tracking ID: 3981917)

SYMPTOM:
LLT UDP multiport was previously supported only on 9000 MTU networks.

DESCRIPTION:
Previously, the LLT UDP multiport configuration required network links with 9000 MTU. The UDP multiport code is enhanced so that this LLT feature can now be configured and run on 1500 MTU links as well.

RESOLUTION:
LLT UDP multiport can now be configured on 1500 MTU based networks.

* 4080630 (Tracking ID: 4046953)

SYMPTOM:
During LLT configuration, messages related to 9000 MTU are printed as errors.

DESCRIPTION:
On Azure, error messages related to 9000 MTU are logged. These messages indicate that 9000 MTU based networks should be used for optimal performance; they are suggestions, not actual errors.

RESOLUTION:
Since customers run this configuration on Azure, where 9000 MTU is not supported, these messages are removed to avoid confusion.

Patch ID: VRTSllt-8.0.0.1500

* 4073695 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSllt-8.0.0.1200

* 4066063 (Tracking ID: 4066062)

SYMPTOM:
Node panic

DESCRIPTION:
A node panic is observed in the LLT UDP multiport configuration with the VX ioship stack.

RESOLUTION:
When LLT received an acknowledgement, it tried to free the packet and the corresponding client frags blindly, without checking the client status. If the client is unregistered, the free functions of the frags are invalid and must not be called. LLT now checks the client status before freeing them.

* 4066667 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, the lltconfig process may dump core.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.

Patch ID: VRTSllt-8.0.0.1100

* 4064783 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSgms-8.0.0.2800

* 4057427 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results into emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock.

Patch ID: VRTSgms-8.0.0.2300

* 4108584 (Tracking ID: 4107753)

SYMPTOM:
The GMS module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-gms script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSgms-8.0.0.2200

* 4101000 (Tracking ID: 4100999)

SYMPTOM:
The GMS module fails to load on RHEL 8.7.

DESCRIPTION:
This issue occurs due to changes in the RHEL 8.7 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on RHEL 8.7.

Patch ID: VRTSglm-8.0.0.2300

* 4108582 (Tracking ID: 4107754)

SYMPTOM:
The GLM module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-glm script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSglm-8.0.0.2200

* 4100995 (Tracking ID: 4100994)

SYMPTOM:
The GLM module fails to load on RHEL8.7.

DESCRIPTION:
RHEL8.7 is a new release with kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.7.

Patch ID: VRTSglm-8.0.0.1800

* 4089163 (Tracking ID: 4089162)

SYMPTOM:
The GLM module fails to load on SLES and RHEL.

DESCRIPTION:
The GLM module fails to load on SLES and RHEL.

RESOLUTION:
GLM module is updated to load as expected on SLES and RHEL.

Patch ID: VRTSgab-8.0.0.2200

* 4108951 (Tracking ID: 4107779)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64.

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than the RHEL8 Update 7 GA kernel (4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64 is now introduced.

Patch ID: VRTSgab-8.0.0.2100

* 4100453 (Tracking ID: 4100203)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) is now introduced.

Patch ID: VRTSgab-8.0.0.1800

* 4089723 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed needed to be recompiled with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSgab-8.0.0.1500

* 4073696 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSgab-8.0.0.1100

* 4064784 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSfsadv-8.0.0.2200

* 4103001 (Tracking ID: 4103002)

SYMPTOM:
Replication failures were observed in internal testing.

DESCRIPTION:
Replication-related code changes were made in the VxFS repository to fix the replication failures. The replication binaries are part of VRTSfsadv.

RESOLUTION:
Compiled VRTSfsadv with VxFS changes.

Patch ID: VRTSfsadv-8.0.0.2100

* 4092150 (Tracking ID: 4088024)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.

DESCRIPTION:
VxFS uses OpenSSL third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1q) of this third-party component, in which the security vulnerabilities have been addressed. To accommodate the changes, vxfs_solutions is updated with libboost_system entries in the Makefile [dedup/pdde/sdk/common/Makefile].

Patch ID: VRTSdbed-8.0.0.1800

* 4079372 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported on earlier product versions.

DESCRIPTION:
Implemented Oracle 21c support with Storage Foundation for Databases.

RESOLUTION:
Changes have been made to support Oracle 21c with Storage Foundation for Databases.

Patch ID: VRTSdbac-8.0.0.2200

* 4108954 (Tracking ID: 4107779)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64.

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than the RHEL8 Update 7 GA kernel (4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) on the latest minor kernel 4.18.0-425.10.1.el8_7.x86_64 is now introduced.

Patch ID: VRTSdbac-8.0.0.2100

* 4100204 (Tracking ID: 4100203)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 7 (RHEL8.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 7 (RHEL8.7) is now introduced.

Patch ID: VRTSdbac-8.0.0.1900

* 4090090 (Tracking ID: 4090485)

SYMPTOM:
Installation of Oracle 12c GRID and database fails on RHEL8.*/OL8.* with a GLIBC package error.

DESCRIPTION:
On RHEL8/OL8 with GLIBC version 2.2.5, the VCSMM library uses the available default GLIBC version and hence fails to build, with the following error message:

INFO: /u03/app/12201/dbbase/dbhome/lib//libskgxn2.so: undefined reference to `memcpy@GLIBC_2.14' INFO: make: *** [/u03/app/12201/dbbase/dbhome/rdbms/lib/ins_rdbms.mk:1013: /u03/app/12201/dbbase/dbhome/rdbms/lib/orapwd] Error 1

RESOLUTION:
RHEL8/OL8 VCSMM module is built with GLIBC 2.2.5.

Patch ID: VRTSdbac-8.0.0.1800

* 4089728 (Tracking ID: 4089722)

SYMPTOM:
The VRTSgab, VRTSamf, and VRTSdbed drivers do not load on the RHEL and SLES platforms.

DESCRIPTION:
VRTSgab, VRTSamf, and VRTSdbed needed to be recompiled with the latest changes.

RESOLUTION:
Recompiled VRTSgab, VRTSamf, and VRTSdbed.

Patch ID: VRTSdbac-8.0.0.1100

* 4053178 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTScps-8.0.0.1900

* 4091306 (Tracking ID: 4088158)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party components used by VCS.

DESCRIPTION:
VCS uses Sqlite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VCS is updated to use newer versions of Sqlite third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTScps-8.0.0.1800

* 4073050 (Tracking ID: 4018218)

SYMPTOM:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

DESCRIPTION:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

RESOLUTION:
This hotfix updates the VRTScps module so that InfoScale CP Client can establish secure communication with a CP server using TLSv1.2. However, to enable TLSv1.2 communication between the CP client and CP server after installing this hotfix, you must perform the following steps:

To configure TLSv1.2 for CP server
1. Stop the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -offline <vxcpserv> -sys <sysname> 
2. Check that the vxcpserv daemon is stopped using the following command:
   # ps -eaf | grep "/opt/VRTScps/bin/vxcpserv"
3. When the vxcpserv daemon is stopped, edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
4. Start the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -online <vxcpserv> -sys <sysname>

To configure TLSv1.2 for CP Client
Edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
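
To verify the configuration, you can probe the CP server with openssl (the host and port are placeholders for your CP server endpoint); the first handshake should succeed and the second should now fail:
   # openssl s_client -connect <cpserver>:<port> -tls1_2
   # openssl s_client -connect <cpserver>:<port> -tls1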

Patch ID: VRTScps-8.0.0.1200

* 4066225 (Tracking ID: 4056666)

SYMPTOM:
The 'Error writing to database' message may appear intermittently in syslogs on InfoScale CP servers.

DESCRIPTION:
Typically, when a coordination point server (CP server) is shared among multiple InfoScale clusters, the following messages may intermittently appear in syslogs:
CPS CRITICAL V-97-1400-501 Error writing to database! :database is locked.
These messages appear in the context of the CP server protocol handshake between the clients and the server.

RESOLUTION:
The CP server is updated so that, in addition to its other database write operations, the write operations for the CP server protocol handshake are also synchronized.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-8.0.0.2900.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-8.0.0.2900.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-8.0.0.2900.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-8.0.0.2900.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale800P2900 [<host1> <host2>...]

You can also install this patch together with the 8.0 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
1. Check for any cumulative patch (CP, i.e., an Update release) released prior to this platform patch. Install this platform patch along with that CP.
2. If a CP is released on top of this platform patch, it will include this platform patch. Check for and install the latest CP.
3. If internet access is not available, install this patch together with the latest CPI patch downloaded from the Download Center.


OTHERS
------
NONE