infoscale-sles15_x86_64-Patch-8.0.2.1500

 Basic information
Release type: Patch
Release date: 2024-02-08
OS update support: None
Technote: None
Documentation: None
Download size: 512.7 MB
Checksum: 2111135513

 Applies to one or more of the following products:
InfoScale Availability 8.0.2 On SLES15 x86-64
InfoScale Enterprise 8.0.2 On SLES15 x86-64
InfoScale Foundation 8.0.2 On SLES15 x86-64
InfoScale Storage 8.0.2 On SLES15 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
Patch name                                              Release date
infoscale-sles15_x86_64-Patch-8.0.2.1400 (obsolete)     2024-01-09
infoscale-sles15_x86_64-Patch-8.0.2.1300 (obsolete)     2023-11-05

 Fixes the following incidents:
4058775, 4113391, 4114880, 4118568, 4119267, 4119626, 4121230, 4123065, 4123069, 4123080, 4124086, 4124291, 4124794, 4124796, 4124889, 4124960, 4124963, 4124964, 4124966, 4124968, 4125162, 4125322, 4125392, 4125811, 4125870, 4125871, 4125873, 4125875, 4125878, 4125891, 4125895, 4126104, 4126262, 4126266, 4127509, 4127510, 4127518, 4127519, 4127524, 4127525, 4127527, 4127528, 4127594, 4127720, 4127785, 4128127, 4128249, 4128723, 4128835, 4128886, 4129494, 4129681, 4129708, 4129715, 4129765, 4129766, 4129838, 4130206, 4130402, 4130816, 4130827, 4130858, 4130861, 4130947, 4131369, 4132209, 4132775, 4133009, 4133131, 4133132, 4133133, 4133167, 4133277, 4133279, 4133286, 4133294, 4133312, 4133315, 4133481, 4133677, 4133930, 4133946, 4133965, 4133969, 4134040, 4134084, 4134946, 4134948, 4134950, 4134952, 4135127, 4135388, 4135534, 4135795, 4136419, 4136428, 4136429, 4136802, 4136859, 4136866, 4136868, 4136870, 4137163, 4137164, 4137165, 4137174, 4137175, 4137215, 4137283, 4137376, 4137377, 4137508, 4137600, 4137602, 4137611, 4137615, 4137618, 4137640, 4137753, 4137757, 4137986, 4137995, 4138051, 4138069, 4138075, 4138101, 4138107, 4138224, 4138236, 4138237, 4138251, 4138274, 4138348, 4138537, 4138538, 4139975, 4140468, 4140598, 4140911, 4141666, 4143509, 4143580, 4143857, 4143918, 4144274, 4145064, 4146550, 4146580, 4146957, 4148734, 4149499, 4150065, 4150099, 4150459, 4153061, 4153142, 4153144, 4153146, 4153164

 Patch ID:
VRTSrest-3.0.10-linux
VRTSsfmh-8.0.2.320_Linux.rpm
VRTSpython-3.9.16.2-SLES15
VRTSspt-8.0.2.1300-0027_SLES15
VRTSdbed-8.0.2.1100-0026_SLES
VRTSdbac-8.0.2.1300-0033_SLES15
VRTSveki-8.0.2.1500-0110_SLES15
VRTSodm-8.0.2.1500-0110_SLES15
VRTSgms-8.0.2.1500-0110_SLES15
VRTSvxfs-8.0.2.1500-0110_SLES15
VRTSvcs-8.0.2.1400-0102_SLES15
VRTSgab-8.0.2.1400-0102_SLES15
VRTSvcsea-8.0.2.1400-0102_SLES15
VRTSvcsag-8.0.2.1400-0102_SLES15
VRTSllt-8.0.2.1400-0102_SLES15
VRTScavf-8.0.2.1500-0110_SLES15
VRTSvxfen-8.0.2.1400-0102_SLES15
VRTSglm-8.0.2.1500-0110_SLES15
VRTSamf-8.0.2.1400-0102_SLES15
VRTSfsadv-8.0.2.1500-0110_SLES15
VRTSaslapm-8.0.2.1400-0111_SLES15
VRTSvxvm-8.0.2.1400-0111_SLES15

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 8.0.2 * * *
                         * * * Patch 1500 * * *
                         Patch Date: 2024-02-06


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0.2 Patch 1500


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSpython
VRTSrest
VRTSsfmh
VRTSspt
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0.2
   * InfoScale Enterprise 8.0.2
   * InfoScale Foundation 8.0.2
   * InfoScale Storage 8.0.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSveki-8.0.2.1500
* 4135795 (4135683) Enhancing debugging capability of VRTSveki package installation
* 4140468 (4152368) Some incidents do not appear in changelog because their cross-references are not properly processed
Patch ID: VRTSveki-8.0.2.1300
* 4134084 (4134083) VEKI support for SLES15 SP5.
Patch ID: VRTSveki-8.0.2.1200
* 4130816 (4130815) Generate and add changelog in VEKI rpm
Patch ID: VRTSveki-8.0.2.1100
* 4118568 (4110457) Veki packaging was failing due to a dependency issue.
Patch ID: VRTSpython-3.9.16.2
* 4143509 (4143508) Upgraded multiple vulnerable modules under VRTSpython to address open exploitable security vulnerabilities.
Patch ID: VRTSrest-3.0.10
* 4124960 (4130028) GET APIs for vm and filesystem failed because of a datatype mismatch between the spec and the actual output when the client generates client code from the specs.
* 4124963 (4127170) The API failed while modifying the system list of a service group that has a dependency.
* 4124964 (4127167) The -force option is now used by default when deleting an RVG, and a new -online option is available in the PATCH operation on an RVG.
* 4124966 (4127171) The Systems API returned nodelist instead of nodename in the href for excluded disks.
* 4124968 (4127168) A GET request on RVGs did not list all data volumes in the RVGs correctly.
* 4125162 (4127169) The GET disks API failed when CVM was down on any node.
Patch ID: VRTSspt-8.0.2.1300
* 4139975 (4149462) A new script, list_missing_incidents.py, is provided; it compares rpm changelogs and lists incidents missing from the new version.
* 4146957 (4149448) A new script, check_incident_inchangelog.py, is provided; it checks whether an incident abstract is present in the changelog.
Patch ID: VRTSsfmh-8.0.2.320
* 4140911 (4140908) NA
Patch ID: VRTSvcsea-8.0.2.1400
* 4058775 (4073508) Oracle virtual fire-drill is failing.
Patch ID: VRTSvcsag-8.0.2.1400
* 4114880 (4152700) When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.
* 4135534 (4152812) AWS EBSVol agent takes long time to perform online and offline operations on resources.
* 4137215 (4094539) The agent resource monitor does not parse the process name correctly.
* 4137376 (4122001) The NIC resource remains online after the network cable is unplugged on the ESXi server.
* 4137377 (4113151) The VMwareDisks agent reports the resource online before the VMware disk to be brought online is present in the vxvm/dmp database.
* 4137602 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.
* 4137618 (4152886) AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a shared VPC.
* 4143918 (4152815) An AWS EBS volume that is in use by another AWS instance could be attached to cluster nodes through the AWS EBSVol agent.
Patch ID: VRTSvcsag-8.0.2.1200
* 4130206 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
Patch ID: VRTSdbac-8.0.2.1300
* 4153146 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSdbac-8.0.2.1200
* 4133167 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).
Patch ID: VRTSdbac-8.0.2.1100
* 4133133 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.
Patch ID: VRTSdbed-8.0.2.1100
* 4153061 (4092588) SFAE failed to start with systemd.
Patch ID: VRTSvcs-8.0.2.1400
* 4153146 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSvcs-8.0.2.1300
* 4133294 (4070999) Processes registered under VCS control get killed after running the 'hastop -local -force' command.
* 4133677 (4129493) Tenable security scan kills the Notifier resource.
Patch ID: VRTSvcs-8.0.2.1200
* 4113391 (4124956) GCO configuration with hostname is not working.
Patch ID: VRTSvxfen-8.0.2.1400
* 4153144 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSvxfen-8.0.2.1300
* 4131369 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).
Patch ID: VRTSvxfen-8.0.2.1200
* 4124086 (4124084) Security vulnerabilities exist in the Curl third-party components used by VCS.
* 4125891 (4113847) Support for even number of coordination disks for CVM-based disk-based fencing
* 4125895 (4108561) Reading vxfen reservation not working
Patch ID: VRTSamf-8.0.2.1400
* 4137600 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.
Patch ID: VRTSamf-8.0.2.1300
* 4137165 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).
Patch ID: VRTSamf-8.0.2.1200
* 4133131 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.
Patch ID: VRTSgab-8.0.2.1400
* 4153142 (4153140) Veritas Infoscale Availability does not qualify latest kernels for sles/rhel platforms.
Patch ID: VRTSgab-8.0.2.1300
* 4137164 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).
Patch ID: VRTSgab-8.0.2.1200
* 4133132 (4133130) Veritas Infoscale Availability qualification for latest sles15 kernels is provided.
Patch ID: VRTSllt-8.0.2.1400
* 4137611 (4135825) Once the root file system becomes full during LLT start, the LLT module fails to load from then on.
Patch ID: VRTSllt-8.0.2.1300
* 4132209 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4137163 (4131368) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).
Patch ID: VRTSllt-8.0.2.1200
* 4128886 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.
Patch ID: VRTSvxvm-8.0.2.1400
* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption
* 4129765 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.
* 4130858 (4128351) System hung observed when switching log owner.
* 4130861 (4122061) A hang was observed after a resync operation; vxconfigd was waiting for the slaves' response.
* 4132775 (4132774) VxVM support on SLES15 SP5
* 4133930 (4100646) Recoveries of DCL objects do not happen because the ATT and RELOCATE flags are set on DCL subdisks.
* 4133946 (3972344) vxrecover returns an error: 'ERROR V-5-1-11150 Volume <vol_name> not found'.
* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.
* 4135388 (4131202) In VVR environment, changeip command may fail.
* 4136419 (4089696) In FSS environment, with DCO log attached to VVR SRL volume, reboot of the cluster may result into panic on the CVM master node.
* 4136428 (4131449) In CVR environment, the restriction of four RVGs per diskgroup has been removed.
* 4136429 (4077944) In VVR environment, application I/O operation may get hung.
* 4136802 (4136751) Added selinux permissions for fcontext: aide_t, support_t, mdadm_t
* 4136859 (4117568) vradmind dumps core due to invalid memory access.
* 4136866 (4090476) SRL is not draining to secondary.
* 4136868 (4120068) A standard disk was added to a cloned diskgroup successfully which is not expected.
* 4136870 (4117957) During a phased reboot of a two node Veritas Access cluster, mounts would hang.
* 4137174 (4081740) vxdg flush command slow due to too many luns needlessly access /proc/partitions.
* 4137175 (4124223) Core dump is generated for vxconfigd in TC execution.
* 4137508 (4066310) Added BLK-MQ feature for DMP driver
* 4137615 (4087628) CVM goes into faulted state when slave node of primary is rebooted .
* 4137753 (4128271) In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.
* 4137757 (4136458) In CVR environment, the DCM resync may hang with 0% sync remaining.
* 4137986 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4138051 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4138069 (4139703) Panic due to wrong use of OS API (HUNZA issue)
* 4138075 (4129873) In CVR environment, if CVM slave node is acting as logowner, then I/Os may hang when data volume is grown.
* 4138101 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4138107 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.
* 4138224 (4129489) With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.
* 4138236 (4134069) VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.
* 4138237 (4113240) In CVR environment, with hostname binding configured, Rlink on VVR secondary may have incorrect VVR primary IP.
* 4138251 (4132799) No detailed error messages while joining CVM fail.
* 4138348 (4121564) Memory leak for volcred_t could be observed in vxio.
* 4138537 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume
* 4138538 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4140598 (4141590) Some incidents do not appear in changelog because their cross-references are not properly processed
* 4143580 (4142054) The primary master node panicked with a TED assert during the run.
* 4143857 (4130393) vxencryptd crashed repeatedly due to segfault.
* 4145064 (4145063) unknown symbol message logged in syslogs while inserting vxio module.
* 4146550 (4108235) System wide hang due to memory leak in VVR vxio kernel module
* 4149499 (4149498) Getting unsupported .ko files not found warning while upgrading VM packages.
* 4150099 (4150098) vxconfigd goes down after a few VxVM operations and the system file system becomes read-only.
* 4150459 (4150160) Panic due to less memory allocation than required
Patch ID: VRTSaslapm 8.0.2.1400
* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing .
Patch ID: VRTSvxvm-8.0.2.1300
* 4132775 (4132774) VxVM support on SLES15 SP5
* 4133312 (4128451) A hardware replicated disk group fails to be auto-imported after reboot.
* 4133315 (4130642) node failed to rejoin the cluster after this node switched from master to slave due to the failure of the replicated diskgroup import.
* 4133946 (3972344) vxrecover returns an error: 'ERROR V-5-1-11150 Volume <vol_name> not found'.
* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.
Patch ID: VRTSaslapm-8.0.2.1300
* 4137283 (4137282) ASLAPM rpm Support on SLES15SP5
Patch ID: VRTSvxvm-8.0.2.1200
* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.
* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information
* 4123069 (4116609) VVR Secondary logowner change is not reflected with virtual hostnames.
* 4123080 (4111789) VVR does not utilize the network provisioned for it.
* 4124291 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4124794 (4114952) With virtual hostnames, pause replication operation fails.
* 4124796 (4108913) Vradmind dumps core because of memory corruption.
* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.
* 4125811 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4128127 (4132265) Machine attached with NVMe devices may panic.
* 4128835 (4127555) Unable to configure replication using diskgroup id.
* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.
* 4130402 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
* 4130827 (4098391) Continuous system crash is observed during VxVM installation.
* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.
Patch ID: VRTSaslapm-8.0.2.1200
* 4133009 (4133010) Generate and add changelog in aslapm rpm
Patch ID: VRTSvxvm-8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].
Patch ID: VRTSaslapm-8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].
Patch ID: VRTScavf-8.0.2.1500
* 4133969 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups; EMC SRDF hardware-replicated disk groups fail with a 'PR operation failed' message.
* 4137640 (4088479) The EMC SRDF-managed diskgroup import failed with an error. This failure is specific to EMC storage on AIX with fencing.
Patch ID: VRTSfsadv-8.0.2.1500
* 4153164 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.
Patch ID: VRTSgms-8.0.2.1500
* 4133279 (4133278) GMS support for SLES15 SP5.
* 4134948 (4134947) GMS support for azure SLES15 SP5.
Patch ID: VRTSgms-8.0.2.1300
* 4133279 (4133278) GMS support for SLES15 SP5.
Patch ID: VRTSgms-8.0.2.1200
* 4126266 (4125932) no symbol version warning for ki_get_boot in dmesg after SFCFSHA configuration.
* 4127527 (4107112) When finding GMS module with version same as kernel version, need to consider kernel-build number.
* 4127528 (4107753) If GMS module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129708 (4129707) Generate and add changelog in GMS rpm
Patch ID: VRTSglm-8.0.2.1500
* 4133277 (4133276) GLM support for SLES15 SP5.
* 4134946 (4134945) GLM support for azure SLES15 SP5.
* 4138274 (4126298) System may panic due to unable to handle kernel paging request 
and memory corruption could happen.
Patch ID: VRTSglm-8.0.2.1300
* 4133277 (4133276) GLM support for SLES15 SP5.
Patch ID: VRTSglm-8.0.2.1200
* 4127524 (4107114) When finding GLM module with version same as kernel version, need to consider kernel-build number.
* 4127525 (4107754) If GLM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129715 (4129714) Generate and add changelog in GLM rpm
Patch ID: VRTSodm-8.0.2.1500
* 4133286 (4133285) ODM support for SLES15 SP5.
* 4134950 (4134949) ODM support for azure SLES15 SP5.
Patch ID: VRTSodm-8.0.2.1400
* 4144274 (4144269) After installing VRTSvxfs-8.0.2.1400 ODM fails to start.
Patch ID: VRTSodm-8.0.2.1300
* 4133286 (4133285) ODM support for SLES15 SP5.
Patch ID: VRTSodm-8.0.2.1200
* 4126262 (4126256) no symbol version warning for VEKI's symbol in dmesg after SFCFSHA configuration
* 4127518 (4107017) When finding ODM module with version same as kernel version, need to consider kernel-build number.
* 4127519 (4107778) If ODM module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4129838 (4129837) Generate and add changelog in ODM rpm
Patch ID: VRTSvxfs-8.0.2.1500
* 4119626 (4119627) The fsck command faces a few SELinux permission denial issues.
* 4133481 (4133480) VxFS support for SLES15 SP5.
* 4134952 (4134951) VxFS support for azure SLES15 SP5.
* 4146580 (4141876) Parallel invocation of command vxschadm might delete previous SecureFS configuration.
* 4148734 (4148732) get_dg_vol_names is leaking memory.
* 4150065 (4149581) The VxFS secure clock is running behind expected time by a huge margin.
Patch ID: VRTSvxfs-8.0.2.1400
* 4141666 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.2.1300
* 4133481 (4133480) VxFS support for SLES15 SP5.
* 4133965 (4116329) While checking FS sanity with the help of "fsck -o full -n" command, we tried to correct the FS flag value (WORM/Softworm), but failed because -n (read-only) option was given.
* 4134040 (3979756) kfcntl/vx_cfs_ifcntllock performance is very bad on CFS.
Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers
* 4125870 (4120729) Incorrect file replication(VFR) job status at VFR target site, while replication is in running state at source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on source if thread creation fails on target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when a large number of jobs are run in parallel.
* 4126104 (4122331) Enhancement to the VxFS error messages that are logged while marking the bitmap or inode as "BAD".
* 4127509 (4107015) When finding VxFS module with version same as kernel version, need to consider kernel-build number.
* 4127510 (4107777) If VxFS module with version same as kernel version is not present, need to consider kernel build number to calculate best fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127720 (4127719) Added fallback logic in fsdb binary and made changes to fstyp binary such that it now dumps uuid.
* 4127785 (4127784) Earlier, the fsppadm binary only gave a warning for an invalid UID or GID number. After this change, providing an invalid UID/GID, e.g.
"1ABC" (UID/GID are always numbers), results in an error and parsing stops.
* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.
* 4128723 (4114127) Hang in VxFS internal LM Conformance - inotify test
* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.
* 4129681 (4129680) Generate and add changelog in VxFS rpm


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSveki-8.0.2.1500

* 4135795 (Tracking ID: 4135683)

SYMPTOM:
Enhancing debugging capability of VRTSveki package installation

DESCRIPTION:
Enhancing debugging capability of VRTSveki package installation using temporary debug logs for SELinux policy file installation.

RESOLUTION:
Code is changed to store output of VRTSveki SELinux policy file installation in temporary debug logs.

* 4140468 (Tracking ID: 4152368)

SYMPTOM:
Some incidents do not appear in changelog because their cross-references are not properly processed

DESCRIPTION:
Not every cross-reference is a parent-child relationship. In such cases, 'top' is not present and the changelog script ends execution.

RESOLUTION:
All cross-references are now traversed; the parent-child relationship is used to find 'top' only if it is present.

Patch ID: VRTSveki-8.0.2.1300

* 4134084 (Tracking ID: 4134083)

SYMPTOM:
The VEKI module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
VEKI module is updated to accommodate the changes in the kernel and load as expected on SLES15SP5.

Patch ID: VRTSveki-8.0.2.1200

* 4130816 (Tracking ID: 4130815)

SYMPTOM:
VEKI rpm does not have changelog

DESCRIPTION:
Changelog in rpm will help to find missing incidents with respect to other version.

RESOLUTION:
Changelog is generated and added to VEKI rpm.

Patch ID: VRTSveki-8.0.2.1100

* 4118568 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failed because storageapi-specific files were missing.

DESCRIPTION:
While creating the build area for components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation failed because the storageapi changes
were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creating the storageapi build area, for storageapi packaging via Veki, and for building storageapi via Veki from the Veki makefiles.
This helps package storageapi along with Veki and resolves all interdependencies.

Patch ID: VRTSpython-3.9.16.2

* 4143509 (Tracking ID: 4143508)

SYMPTOM:
Open exploitable CVEs with High/Critical CVSS scores exist in multiple modules under VRTSpython.

DESCRIPTION:
Open exploitable CVEs with High/Critical CVSS scores exist in the current modules shipped under VRTSpython.

RESOLUTION:
Multiple modules under VRTSpython are upgraded to address open exploitable security vulnerabilities.

Patch ID: VRTSrest-3.0.10

* 4124960 (Tracking ID: 4130028)

SYMPTOM:
GET APIs for vm and filesystem failed because of a datatype mismatch between the spec and the actual output when the client generates client code from the specs.

DESCRIPTION:
The GET API returned a response different from what was specified in the specs.

RESOLUTION:
The responses of the GET APIs for vm and fs have been changed to match the specs. After this change, client-generated code no longer gets an error.

* 4124963 (Tracking ID: 4127170)

SYMPTOM:
The API failed while modifying the system list of a service group that has a dependency.

DESCRIPTION:
The API failed while modifying the system list of a service group that has a dependency, so the system list could not be modified if the service group depended on another service group.

RESOLUTION:
The API code has been modified so that the system list of a service group can be modified when a dependency exists.

* 4124964 (Tracking ID: 4127167)

SYMPTOM:
DELETE of an RVG failed when replication was in progress.

DESCRIPTION:
DELETE of an RVG failed when replication was in progress. There was also no way for the user to explicitly request an online add-volume operation in the PATCH of an RVG.

RESOLUTION:
The -force option is now used during deletion so that the RVG is deleted successfully even when replication is in progress. A new online option is added to the PATCH of an RVG so that the user can explicitly request an online add-volume operation.

* 4124966 (Tracking ID: 4127171)

SYMPTOM:
The Systems API returned nodelist instead of nodename in the href for excluded disks. When the user tried a GET on that link, the request failed.

DESCRIPTION:
The GET system list API returned incorrect reference links for excluded disks. When the user tried a GET on such a link, the request failed.

RESOLUTION:
The correct href for excluded disks is now returned from the GET system API.

* 4124968 (Tracking ID: 4127168)

SYMPTOM:
A GET request on RVGs did not list all data volumes in the RVGs correctly.

DESCRIPTION:
The command used to get the list of data volumes of an RVG did not return all data volumes, because of which the API did not return all the data volumes of the RVG.

RESOLUTION:
The command used to get the data volumes of an RVG has been changed. A GET on an RVG now returns all the data volumes associated with that RVG.

* 4125162 (Tracking ID: 4127169)

SYMPTOM:
The GET disks API failed when CVM was down on any node.

DESCRIPTION:
When a node is out of the CVM cluster, the GET disks API failed and did not give proper output.

RESOLUTION:
Appropriate checks are used to get the proper list of disks from the GET disks API.

Patch ID: VRTSspt-8.0.2.1300

* 4139975 (Tracking ID: 4149462)

SYMPTOM:
New script is provided list_missing_incidents.py which compares changelogs of rpm and lists missing incidents in new version.

DESCRIPTION:
list_missing_incidents.py compares changelogs of old version rpm with new version rpm and lists missing incidents in new-version rpm if any. For details of 
script refer README.list_missing_incidents in VRTSspt package

RESOLUTION:
list_missing_incidents.py compares changelogs of old version rpm with new version rpm and lists missing incidents in new-version rpm if any. For details of 
script refer README.list_missing_incidents in VRTSspt package

* 4146957 (Tracking ID: 4149448)

SYMPTOM:
New script is provided check_incident_inchangelog.py which will check if incident abstract is present in changelog.

DESCRIPTION:
If Changelog is present in rpm or installed package, then provided script in VRTSspt can check if incident abstract is present in changelog. For details of script refer README.check_incident_inchangelog in VRTSspt package

RESOLUTION:
If Changelog is present in rpm or installed package, then provided script in VRTSspt can check if incident abstract is present in changelog. For details of script refer README.check_incident_inchangelog in VRTSspt package
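
The changelogs that these scripts compare can be extracted with standard rpm commands. The package and file names below are illustrative only:

   # rpm -q --changelog VRTSvxvm > installed_changelog.txt
   # rpm -qp --changelog VRTSvxvm-8.0.2.1400-0111_SLES15.x86_64.rpm > new_changelog.txt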

Patch ID: VRTSsfmh-8.0.2.320

* 4140911 (Tracking ID: 4140908)

SYMPTOM:
NA

DESCRIPTION:
NA

RESOLUTION:
NA

Patch ID: VRTSvcsea-8.0.2.1400

* 4058775 (Tracking ID: 4073508)

SYMPTOM:
Oracle virtual fire-drill is failing due to Oracle password file location changes from Oracle version 21c.

DESCRIPTION:
Oracle password file has been moved to $ORACLE_BASE/dbs from Oracle version 21c.

RESOLUTION:
Environment variables are used to point to the updated path of the password file.

From Oracle 21c onward, it is mandatory for a client to configure the path of an environment file in the EnvFile attribute. This file must contain the
ORACLE_BASE path for the Oracle virtual fire-drill feature to work.

Sample EnvFile content with the ORACLE_BASE path for Oracle 21c:
  [root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile
  ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE;

Sample attribute value:
  EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"

Patch ID: VRTSvcsag-8.0.2.1400

* 4114880 (Tracking ID: 4152700)

SYMPTOM:
When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.

DESCRIPTION:
Azure Private DNS Zone with AzureDNSZone Agent is not supported.

RESOLUTION:
The AzureDNSZone agent now supports Azure Private DNS Zone by installing the Azure library for Private DNS Zone (azure-mgmt-privatedns), which provides functions for Private DNS zone operations.
For DNS zones, the resource ID differs between Public and Private DNS zones. The resource ID is parsed and the resource type is checked to determine whether it is a Public or Private DNS zone, and the corresponding actions are taken accordingly.

* 4135534 (Tracking ID: 4152812)

SYMPTOM:
AWS EBSVol agent takes long time to perform online and offline operations on resources.

DESCRIPTION:
When a large number of AWS EBSVol resources are configured, it takes a long time to perform online and offline operations on these resources. 
EBSVol is a single threaded agent and hence prevents parallel execution of attach and detach EBS volume commands.

RESOLUTION:
To resolve the issue, the default value of the 'NumThreads' attribute of the EBSVol agent is changed from 1 to 10, and the agent is enhanced to use a locking mechanism to avoid conflicting resource configuration.
This results in improved response time for parallel execution of attach and detach commands.
Also, the default value of the MonitorTimeout attribute is changed from 60 to 120. This avoids a timeout of the monitor entry point when the response of the AWS CLI/server is unexpectedly slow.
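
As a minimal sketch, the updated type-level defaults can be inspected with the VCS hatype command after applying the patch; the exact output depends on your configuration:

   # hatype -display EBSVol | grep -E 'NumThreads|MonitorTimeout'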

* 4137215 (Tracking ID: 4094539)

SYMPTOM:
The MonitorProcesses argument in the resource ArgListValues passed to the agent (bundled Application agent) incorrectly has a needed space removed from the monitored process, as found via the recommended CLI process test.

DESCRIPTION:
The MonitorProcesses value with the extra space shows up altered in the ArgListValues even when displaying the resource.

RESOLUTION:
For the monitored process (not program), only leading and trailing spaces are now removed; extra spaces between words are preserved.
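
For illustration, a monitored process whose arguments contain internal spaces can be configured as shown below (the resource name and process path are hypothetical); with this fix the internal spacing is preserved as entered:

   # hares -modify app1 MonitorProcesses "/usr/local/bin/mydaemon  -c /etc/mydaemon.conf"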

* 4137376 (Tracking ID: 4122001)

SYMPTOM:
NIC resource remain online after unplug network cable on ESXi server.

DESCRIPTION:
Previously, the MII mode used network statistics and a ping test. Now the agent marks the NIC state ONLINE directly by checking the NIC status in operstate, without any ping check; only if the operstate file cannot be detected does it fall back to the ping test. In an ESXi server environment, the NIC is already marked ONLINE because the operstate file is available with state UP and the carrier bit set. So the NIC resource is marked ONLINE even when the "NetworkHosts" are not reachable.

RESOLUTION:
The NIC agent already has a "PingOptimize" attribute. A new value (2) is introduced for the "PingOptimize" attribute to decide whether to perform the ping test. Only when "PingOptimize = 2" is set does the agent perform the ping test; otherwise it works as per the previous design.
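
As a minimal sketch (the resource name nic1 is hypothetical), the new behavior can be enabled on an existing NIC resource as follows:

   # haconf -makerw
   # hares -modify nic1 PingOptimize 2
   # haconf -dump -makero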

* 4137377 (Tracking ID: 4113151)

SYMPTOM:
Dependent DiskGroupAgent fails to get its resource online due to disk group import failure.

DESCRIPTION:
The VMwareDisks agent reports its resource online just after the VMware disk is attached to the virtual machine. If a dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to add retry attempts to work around this problem, but the same workaround cannot be applied to every environment.

RESOLUTION:
A finite wait is added for the VMware disk to be present in the vxdmp database before the online operation completes.

* 4137602 (Tracking ID: 4121270)

SYMPTOM:
EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.

DESCRIPTION:
After attaching a volume to an instance, it takes some time for the device mapping to be updated in the system. As a result, if 'lsblk -d -o +SERIAL' is run immediately after attaching the volume, the volume details do not appear in the output, and $native_device remains blank/uninitialized.

So the agent needs to wait for some time until the device mapping is updated in the system.

RESOLUTION:
Logic has been added to retry the same command once after an interval if the expected volume device mapping is not found in the first run. The NativeDevice attribute is now updated properly.
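
The retry idea can be illustrated with a small shell sketch; the serial value and the 5-second interval are illustrative, not the agent's actual values:

   # lsblk -d -o +SERIAL | grep vol0123456789abcde || { sleep 5; lsblk -d -o +SERIAL | grep vol0123456789abcde; }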

* 4137618 (Tracking ID: 4152886)

SYMPTOM:
AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a VPC that is shared across multiple AWS accounts.

DESCRIPTION:
When a VPC is shared across multiple AWS accounts, the route table associated with the subnets is exclusively owned by the owner account, and AWS restricts modification of the route table from any other account. When the AWSIP agent tries to bring an OverlayIP resource online on an instance owned by a different account, it may not have privileges to update the route table. In such cases, the AWSIP agent fails to edit the route table and fails to bring the OverlayIP resource online and offline.

RESOLUTION:
To support cross-account deployment, assign appropriate privileges on shared resources. Create an AWS profile to grant permissions to update Route Table of VPC through different nodes belonging to different AWS accounts. This profile is used to update route tables accordingly.
A new attribute "Profile" is introduced in AWSIP agent. Use this attribute to configure the above created profile.

* 4143918 (Tracking ID: 4152815)

SYMPTOM:
AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent.

DESCRIPTION:
An AWS EBS volume that is attached to an AWS instance which is not part of the cluster could get attached to a cluster node during an online event.

Instead, an 'Unable to detach volume' message should be logged by the AWS EBSVol agent, because the volume is already in use by another AWS instance.

RESOLUTION:
AWS EBSVol agent is enhanced to avoid attachment of in-use EBS volumes whose instances are not part of cluster.

Patch ID: VRTSvcsag-8.0.2.1200

* 4130206 (Tracking ID: 4127320)

SYMPTOM:
The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.

DESCRIPTION:
The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.

RESOLUTION:
The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as shell to start the process.

Patch ID: VRTSdbac-8.0.2.1300

* 4153146 (Tracking ID: 4153140)

SYMPTOM:
Qualification for Veritas Infoscale Availability on latest kernels for rhel/sles platform is needed.

DESCRIPTION:
Needed recompilation of Veritas Infoscale Availability packages with latest changes.

RESOLUTION:
Recompiled Veritas Infoscale Availability packages with latest changes and confirmed qualification on latest kernels of RHEL/SLES platform.

Patch ID: VRTSdbac-8.0.2.1200

* 4133167 (Tracking ID: 4131368)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is
now introduced.

Patch ID: VRTSdbac-8.0.2.1100

* 4133133 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability now qualifies the latest sles15 kernels.

Patch ID: VRTSdbed-8.0.2.1100

* 4153061 (Tracking ID: 4092588)

SYMPTOM:
SFAE failed to start with systemd.

DESCRIPTION:
SFAE failed to start with systemd because the SFAE service currently runs in backward-compatibility mode using an init script.

RESOLUTION:
Added systemd support for SFAE; the service can now be managed with systemctl commands such as stop/start/status/restart/enable/disable.
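
For example, assuming the SFAE daemon unit installed by the package is named vxdbdctrl.service (the unit name here is an assumption), the service can be managed as follows:

   # systemctl status vxdbdctrl
   # systemctl enable vxdbdctrl
   # systemctl restart vxdbdctrl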

Patch ID: VRTSvcs-8.0.2.1400

* 4153146 (Tracking ID: 4153140)

SYMPTOM:
Qualification for Veritas Infoscale Availability on latest kernels for rhel/sles platform is needed.

DESCRIPTION:
Needed recompilation of Veritas Infoscale Availability packages with latest changes.

RESOLUTION:
Recompiled Veritas Infoscale Availability packages with latest changes and confirmed qualification on latest kernels of RHEL/SLES platform.

Patch ID: VRTSvcs-8.0.2.1300

* 4133294 (Tracking ID: 4070999)

SYMPTOM:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

DESCRIPTION:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

RESOLUTION:
The 'KillMode' value is changed from 'control-group' to 'process' so that processes registered under VCS control are not killed when the unit stops.
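
To verify the effective setting after applying the patch (the unit name vcs.service is an assumption), systemd can be queried directly:

   # systemctl show -p KillMode vcs.service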

* 4133677 (Tracking ID: 4129493)

SYMPTOM:
Tenable security scan kills the Notifier resource.

DESCRIPTION:
When an nmap port scan is performed on port 14144 (on which the notifier process listens), the notifier gets killed because of the connection request.

RESOLUTION:
The required code changes have been made to prevent a Notifier agent crash when an nmap port scan is performed on notifier port 14144.

Patch ID: VRTSvcs-8.0.2.1200

* 4113391 (Tracking ID: 4124956)

SYMPTOM:
Traditionally, virtual IP addresses are used as cluster addresses. Cluster address is also used for peer-to-peer communication in GCO-DR deployment. Thus, gcoconfig utility is accustomed to IPv4 and IPv6 addresses. It gives error if hostname is provided as cluster address.

DESCRIPTION:
In cloud ecosystems, hostnames are widely used. The gcoconfig utility must therefore accept both hostnames and virtual IPs as the cluster address.

RESOLUTION:
To address the limitation (gcoconfig does not accept a hostname as the cluster address), the gcoconfig utility is enhanced to support the following:
1. NIC and IP configuration:
   i. Continue using NIC and IP configuration.
2. Hostname as cluster address along with corresponding DNS. 
   i. On premise DNS: 
      a. Utility will take following inputs: Domain, Resource Records, TSIGKeyFile (if opted secured DNS), and StealthMasters (optional).
      b. Accordingly, gcoconfig utility will create DNS resource in cluster service group. 
   ii. AWSRoute53 DNS: It is Amazon DNS web service.
      a. Utility will take following inputs: Hosted Zone ID, Resource Records, AWS Binaries, and Directory Path.
      b. Accordingly, gcoconfig utility will create AWSRoute53 DNS type's resource in cluster service group.
   iii. AzureDNSZone: It is Microsoft DNS web service.
      a. Utility will take following inputs: Azure DNS Zone Resource Id, Resource Records are mandatory attributes. Additionally, user must either provide 
         Managed Identity Client ID or Azure Auth Resource.
      b. Accordingly, gcoconfig utility will create AzureDNSZone type's resource in cluster service group.

For the end points mentioned in Resource Records, gcoconfig utility can neither ensure their accessibility, nor manage their lifecycle. Hence, these are not within the scope of gcoconfig utility.

Patch ID: VRTSvxfen-8.0.2.1400

* 4153144 (Tracking ID: 4153140)

SYMPTOM:
Qualification for Veritas Infoscale Availability on latest kernels for rhel/sles platform is needed.

DESCRIPTION:
Needed recompilation of Veritas Infoscale Availability packages with latest changes.

RESOLUTION:
Recompiled Veritas Infoscale Availability packages with latest changes and confirmed qualification on latest kernels of RHEL/SLES platform.

Patch ID: VRTSvxfen-8.0.2.1300

* 4131369 (Tracking ID: 4131368)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is
now introduced.

Patch ID: VRTSvxfen-8.0.2.1200

* 4124086 (Tracking ID: 4124084)

SYMPTOM:
Security vulnerabilities exist in the Curl third-party components used by VCS.

DESCRIPTION:
Security vulnerabilities exist in the Curl third-party components used by VCS.

RESOLUTION:
Curl is upgraded in which the security vulnerabilities have been addressed.

* 4125891 (Tracking ID: 4113847)

SYMPTOM:
An even number of coordination point (CP) disks is not supported by design. This enhancement is part of AFA, wherein a faulted disk needs to be replaced as soon as the number of coordination disks becomes even while fencing is up and running.

DESCRIPTION:
Resolving a regular split / network partition requires an odd number of disks.
Support for an even number of coordination points is provided with the cp_count setting; fencing is not allowed to come up unless cp_count/2+1 coordination points are available. Also, if cp_count is not defined
in the vxfenmode file, then by default a minimum of 3 CP disks are needed, otherwise vxfen does not start.

RESOLUTION:
In case of an even number of CP disks, another disk is added so that the number of CP disks becomes odd and fencing keeps running.
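
As an illustration (assuming the standard configuration file path /etc/vxfenmode), the configured fencing mode and any explicit coordination-point count can be inspected as follows:

   # grep -E '^(vxfen_mode|cp_count)' /etc/vxfenmode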

* 4125895 (Tracking ID: 4108561)

SYMPTOM:
The vxfen print-keys internal utility was not working because an array was being overrun internally.

DESCRIPTION:
The vxfen print-keys internal utility does not work if the number of keys exceeds 8; it then returns garbage values.
The array keylist[i].key of 8 bytes is overrun at byte offset 8 using index y (which evaluates to 8).

RESOLUTION:
The internal loop is restricted to VXFEN_KEYLEN. Reading reservations now works fine.

Patch ID: VRTSamf-8.0.2.1400

* 4137600 (Tracking ID: 4136003)

SYMPTOM:
A cluster node panics when the VCS-enabled AMF module monitors process on/off events.

DESCRIPTION:
The cluster node panic indicates an issue in which the AMF module overruns into the user-space buffer while analyzing an argument of 8K size. The AMF module cannot load that length of data into its internal buffer, eventually leading to an access into the user buffer, which is not allowed when kernel SMAP is in effect.

RESOLUTION:
The AMF module is constrained to ignore an argument of 8K or bigger size to avoid internal buffer overrun.

Patch ID: VRTSamf-8.0.2.1300

* 4137165 (Tracking ID: 4131368)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is
now introduced.

Patch ID: VRTSamf-8.0.2.1200

* 4133131 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability now qualifies the latest sles15 kernels.

Patch ID: VRTSgab-8.0.2.1400

* 4153142 (Tracking ID: 4153140)

SYMPTOM:
Qualification for Veritas Infoscale Availability on latest kernels for rhel/sles platform is needed.

DESCRIPTION:
Needed recompilation of Veritas Infoscale Availability packages with latest changes.

RESOLUTION:
Recompiled Veritas Infoscale Availability packages with latest changes and confirmed qualification on latest kernels of RHEL/SLES platform.

Patch ID: VRTSgab-8.0.2.1300

* 4137164 (Tracking ID: 4131368)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is
now introduced.

Patch ID: VRTSgab-8.0.2.1200

* 4133132 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability now qualifies the latest sles15 kernels.

Patch ID: VRTSllt-8.0.2.1400

* 4137611 (Tracking ID: 4135825)

SYMPTOM:
Once the root file system becomes full during LLT start, the LLT module fails to load from then on.

DESCRIPTION:
The disk is full and the user reboots the system or restarts the product. In this case, while loading, LLT deletes the llt links and tries to create new ones using the link names and "/bin/ln -f -s". Because the disk is full, it is unable to create the links. Even after space is freed, it fails to create the links because they have been deleted, so the LLT module fails to load.

RESOLUTION:
Logic has been added to obtain the file names and create new links if the existing links are not present.

Patch ID: VRTSllt-8.0.2.1300

* 4132209 (Tracking ID: 4124759)

SYMPTOM:
Panic happened with llt_ioship_recv on a server running in AWS.

DESCRIPTION:
In an AWS environment, packets can be duplicated even when LLT is configured over UDP, which is not expected for UDP.

RESOLUTION:
To avoid the panic, the code now checks whether the packet is already in the send queue of the bucket and treats it as an invalid/duplicate packet.

* 4137163 (Tracking ID: 4131368)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
15 Service Pack 5 (SLES 15 SP5).

DESCRIPTION:
Veritas Infoscale Availability does not support SUSE Linux Enterprise Server
versions released after SLES 15 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 15 SP5 is
now introduced.

Patch ID: VRTSllt-8.0.2.1200

* 4128886 (Tracking ID: 4128887)

SYMPTOM:
Below warning trace is observed while unloading llt module:
[171531.684503] Call Trace:
[171531.684505]  <TASK>
[171531.684509]  remove_proc_entry+0x45/0x1a0
[171531.684512]  llt_mod_exit+0xad/0x930 [llt]
[171531.684533]  ? find_module_all+0x78/0xb0
[171531.684536]  __do_sys_delete_module.constprop.0+0x178/0x280
[171531.684538]  ? exit_to_user_mode_loop+0xd0/0x130

DESCRIPTION:
While unloading the llt module, the vxnet/llt proc directory is not removed properly, due to which the warning trace is observed.

RESOLUTION:
The proc_remove API is used, which cleans up the whole subtree.

Patch ID: VRTSvxvm-8.0.2.1400

* 4124889 (Tracking ID: 4090828)

SYMPTOM:
Fmrmap data is dumped for better debuggability of corruption issues.

DESCRIPTION:
The vxplex att / vxvol recover CLIs internally fetch fmrmaps from the kernel using an existing ioctl before starting the attach operation, obtain the data in binary
format, and dump it to a file named in the format volname_taskid_date.

RESOLUTION:
The changes now dump the fmrmap data into a binary file.

* 4129765 (Tracking ID: 4111978)

SYMPTOM:
Replication failed to start due to vxnetd threads not running on secondary site.

DESCRIPTION:
Vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on a resource shared between those two threads, vxnetd was stuck in a loop until the maximum retry count was reached.

RESOLUTION:
Code changes have been made to add lock protection to avoid the race condition.

* 4130858 (Tracking ID: 4128351)

SYMPTOM:
System hung observed when switching log owner.

DESCRIPTION:
VVR mdship SIOs might be throttled due to reaching the maximum allocation count, etc. These SIOs hold the I/O count. When a log owner change kicks in and quiesces the RVG, the VVR log owner change SIO waits for the I/O count to drop to zero before proceeding. VVR mdship requests from the log client are returned with EAGAIN because the RVG is quiesced, yet the throttled mdship SIOs need to be driven by the incoming mdship requests; hence the deadlock, which caused the system hang.

RESOLUTION:
Code changes have been made to flush the mdship queue before VVR log owner change SIO waiting for IO drain.

* 4130861 (Tracking ID: 4122061)

SYMPTOM:
A hang was observed after a resync operation; vxconfigd was waiting for the slaves' response.

DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep
10 jiffies and retry the kmsg. In a busy cluster, the retried kmsgs plus the new kmsgs could build up and hit the kmsg flow control before the VVR logowner transaction completed. Once the client refused any kmsgs due to the flow control, the transaction on the VVR logowner could get stuck because it required a kmsg response from all the slave nodes.

RESOLUTION:
Code changes have been made to increase the kmsg flow control and to not let the kmsg receiver fall asleep, but instead handle the kmsg in a restart function.

* 4132775 (Tracking ID: 4132774)

SYMPTOM:
Existing VxVM package fails to load on SLES15SP5

DESCRIPTION:
Multiple changes have been made in this kernel related to the handling of SCSI passthrough requests, the initialization of bio routines, and the ways of obtaining blk requests. Hence the existing code is not compatible with SLES15SP5.

RESOLUTION:
Required changes have been done to make VxVM compatible with SLES15SP5.

* 4133930 (Tracking ID: 4100646)

SYMPTOM:
Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks

DESCRIPTION:
Due to multiple reasons, a stale tutil may remain stamped on DCL subdisks, which may prevent subsequent vxrecover instances from recovering the DCL plex.

RESOLUTION:
The issue is resolved by having the vxattachd daemon detect these stale tutils, clear them, and trigger recoveries after a 10-minute interval.

* 4133946 (Tracking ID: 3972344)

SYMPTOM:
After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150  Volume <volume_name> does not exist' is logged.

DESCRIPTION:
In volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable leading to inappropriate mappings. Volumes are searched in an incorrect diskgroup which is logged in the error message.
The vxrecover command works fine if the diskgroup name associated with volume is specified. [vxrecover -g <dg_name> -s]

RESOLUTION:
Changed the code to use switch_diskgroup() instead of dgsetup. Current_group is updated and the current_dg is set. Thus vxrecover finds the Volume correctly.

* 4135127 (Tracking ID: 4134023)

SYMPTOM:
vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed with below error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.

DESCRIPTION:
A H/W replicated diskgroup can be imported only with the "-o usereplicatedev=only" option. vxconfigrestore did not perform the H/W replicated diskgroup check; without the proper import option, the diskgroup import failed.

RESOLUTION:
Code changes have been made to perform the H/W replicated diskgroup check in vxconfigrestore.
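
For reference, a hardware-replicated diskgroup can also be imported manually with the option mentioned in the error message above (a minimal sketch using the diskgroup name from the example; use -cs for a shared diskgroup):

   # vxdg -o usereplicatedev=only -c import LINUXSRDF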

* 4135388 (Tracking ID: 4131202)

SYMPTOM:
In VVR environment, 'vradmin changeip' would fail with following error message:
VxVM VVR vradmin ERROR V-5-52-479 Host <host> not reachable.

DESCRIPTION:
A heartbeat to the new secondary host is assumed to already exist, whereas it actually starts only after the changeip operation.

RESOLUTION:
Heartbeat assumption is fixed.

* 4136419 (Tracking ID: 4089696)

SYMPTOM:
In FSS environment, with DCO log attached to VVR SRL volume, reboot of the cluster may result into following panic on the CVM master node: 

voldco_get_mapid
voldco_get_detach_mapid
voldco_get_detmap_offset
voldco_recover_detach_map
volmv_recover_dcovol
vol_mv_fmr_precommit
vol_mv_precommit
vol_ktrans_precommit_parallel
volobj_ktrans_sio_start
voliod_iohandle
voliod_loop

DESCRIPTION:
If DCO is configured with SRL volume, and both SRL volume plexes and DCO plexes get IO error, this panic is hit in the recovery path.

RESOLUTION:
Recovery path is fixed to handle this condition.

* 4136428 (Tracking ID: 4131449)

SYMPTOM:
In CVR environments, there was a restriction to configure up to four RVGs per diskgroup as more RVGs resulted in degradation of I/O performance in case 
of VxVM transactions.

DESCRIPTION:
In CVR environments, VxVM transactions on an RVG also impacted I/O operations on other RVGs in the same diskgroup resulting in I/O performance 
degradation in case of higher number of RVGs configured in a diskgroup.

RESOLUTION:
VxVM transaction impact has been isolated to each RVG resulting in the ability to scale beyond four RVGs in a diskgroup.

* 4136429 (Tracking ID: 4077944)

SYMPTOM:
In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.

DESCRIPTION:
In case VVR throttles and unthrottles I/O, the driving of throttled I/O is not done in one of the cases.

RESOLUTION:
Resolved the issue by making sure the application throttled I/Os get driven in all the cases.

* 4136802 (Tracking ID: 4136751)

SYMPTOM:
SELinux denies access to files for which support_t permissions are required.

DESCRIPTION:
SELinux denies access to files for which support_t permissions are required; this fix adds the permissions needed to resolve such denials.

RESOLUTION:
Code changes have been made to add the required SELinux permissions, which resolves the issue.

* 4136859 (Tracking ID: 4117568)

SYMPTOM:
Vradmind dumps core with the following stack:

#1  std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810,
    __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)
#2  0x000000000040e02b in ClientMgr::closeStatsSession
#3  0x000000000040d0d7 in ClientMgr::client_ipm_close
#4  0x000000000058328e in IpmHandle::~IpmHandle
#5  0x000000000057c509 in IpmHandle::events
#6  0x0000000000409f5d in main

DESCRIPTION:
After vrstat is terminated, the StatSession in vradmind is closed and the corresponding Client object is deleted. When closing the IPM object of vrstat, vradmind tries to access the removed Client, hence the core dump.

RESOLUTION:
Code changes have been made to fix the issue.

* 4136866 (Tracking ID: 4090476)

SYMPTOM:
Storage Replicator Log (SRL) is not draining to secondary. rlink status shows the outstanding writes never got reduced in several hours.

VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRL

DESCRIPTION:
In a poor network environment, VVR appears to not be syncing. Another reconfiguration happened before the VVR state became clean, and the VVR atomic window got set to a large size. VVR could not complete all the atomic updates before the next reconfiguration and kept sending atomic updates from the VVR pending position. Hence VVR appears to be stuck.

RESOLUTION:
Code changes have been made to update VVR pending position accordingly.

* 4136868 (Tracking ID: 4120068)

SYMPTOM:
A standard disk could be added to a cloned diskgroup successfully, which is not expected.

DESCRIPTION:
When a disk is added to a disk group, a pre-check is made to avoid ending up with a mixed diskgroup. In a cluster, the local node might fail to use the 
latest record for this pre-check, which caused a mixed diskgroup in the cluster and further caused a node join failure.

RESOLUTION:
Code changes have been made to use the latest record for the mixed diskgroup pre-check.

* 4136870 (Tracking ID: 4117957)

SYMPTOM:
During a phased reboot of a two node Veritas Access cluster, mounts would hang. Transaction aborted waiting for io drain.
VxVM vxio V-5-3-1576 commit: Timedout waiting for Cache XXXX to quiesce, iocount XX msg 0

DESCRIPTION:
A transaction on the Cache object failed because there were I/Os waiting on the cache object. Those queued I/Os could not proceed due to the missing flag VOLOBJ_CACHE_RECOVERED on the cache object. A transaction might have kicked in while the old cache was doing recovery, so the new cache object might fail to inherit VOLOBJ_CACHE_RECOVERED, which further caused the I/O hang.

RESOLUTION:
Code changes have been made to fail the new cache creation if the old cache is doing recovery.

* 4137174 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because /proc/partitions is needlessly accessed for a large number of LUNs.

DESCRIPTION:
Linux uses BLOCK_EXT_MAJOR (block major 259) as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to work around the sd limitation (16 minors per device), by which more partitions are allowed for one sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which can cause vxconfigd to respond sluggishly if there is a large number of LUNs in the disk group.

RESOLUTION:
Code has been changed to remove the needless access on /proc/partitions for the luns without using extended devt.
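
As an illustration of the mechanism described above (not part of the fix), the partition entries under the extended block major (259) that vxconfigd scans can be counted with a standard command:

# grep -c '^ *259 ' /proc/partitions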

* 4137175 (Tracking ID: 4124223)

SYMPTOM:
A core dump is generated for vxconfigd during test case (TC) execution.

DESCRIPTION:
The TC creates a scenario where zeros are written in the first block of a disk. In such a case, a NULL check is necessary before a certain variable is accessed in the code. This NULL check was missing, which caused the vxconfigd core dump during TC execution.

RESOLUTION:
The necessary NULL checks are added in the code to avoid the vxconfigd core dump.

* 4137508 (Tracking ID: 4066310)

SYMPTOM:
New feature for performance improvement

DESCRIPTION:
The Linux block subsystem has two types of block drivers: 1) block multiqueue (blk-mq) drivers and 2) bio-based block drivers. DMP has been a bio-based driver since day one; block multiqueue support is now added for DMP as a new feature.

RESOLUTION:
Block multiqueue (blk-mq) support has been added for DMP.

* 4137615 (Tracking ID: 4087628)

SYMPTOM:
When DCM is in replication mode with mounted volumes having large regions for DCM to sync, and a slave node reboot is triggered, CVM might go into a faulted state.

DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both the RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the rebooted slave node comes back up, it is unable to join the CVM cluster.
5. vx commands are also hung/stuck on the CVM master and the logowner slave node.

RESOLUTION:
In the RU SIO, the I/O count is dropped before requesting vxfs_free_region() and is held again afterwards. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), the iocount is not needed to keep the RVG from being removed.

* 4137753 (Tracking ID: 4128271)

SYMPTOM:
In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.

DESCRIPTION:
If there has been an SRL overflow, then RVG recovery takes more time as it was loaded with more work than required because the recovery related metadata was not updated.

RESOLUTION:
Updated the metadata correctly to reduce the RVG recovery time.

* 4137757 (Tracking ID: 4136458)

SYMPTOM:
In CVR environment, if CVM slave node is acting as logowner, the DCM resync issues after snapshot restore may hang showing 0% sync is remaining.

DESCRIPTION:
The DCM resync completion is not correctly communicated to CVM master resulting into hang.

RESOLUTION:
The DCM resync operation is enhanced to correctly communicate resync completion to CVM master.

* 4137986 (Tracking ID: 4133793)

SYMPTOM:
DCO experiences IO errors while doing a vxsnap restore on VxVM volumes.

DESCRIPTION:
The dirty flag was getting set in the context of an SIO with the flag VOLSIO_AUXFLAG_NO_FWKLOG set. This led to transaction errors while running the vxsnap restore command in a loop on VxVM volumes, causing a transaction abort. As a result, VxVM tried to clean up by removing the newly added BMs, and then tried to access those deleted BMs, which it could not do since they had already been deleted. This ultimately leads to the DCO IO error.

RESOLUTION:
Skip first write klogging in the context of an IO with flag VOLSIO_AUXFLAG_NO_FWKLOG being set.

* 4138051 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in 3 phases.
At the end of the 2nd phase of recovery, the in-memory and on-disk SRL positions remain incorrect,
and if there is a logowner change during this time, the Rlink won't get connected.

RESOLUTION:
Handled in-memory and on-disk SRL positions correctly.

* 4138069 (Tracking ID: 4139703)

SYMPTOM:
System gets panicked on RHEL9.2 AWS environment while registering the pgr key.

DESCRIPTION:
On RHEL 9.2, a panic is observed while reading PGR keys on an AWS VM with NVMe devices.

Reproduction steps: run "/etc/vx/diag.d/vxdmppr read /dev/vx/dmp/ip-10-20-2-49_nvme4_0" on an AWS NVMe RHEL 9.2 setup (kernel 5.14.0-284.11.1.el9_2.x86_64, build ga8_0_2_all_maint).

Failure signature:
PID: 8250     TASK: ffffa0e882ca1c80  CPU: 1    COMMAND: "vxdmppr"
 #0 [ffffbf3c4039f8e0] machine_kexec at ffffffffb626c237
 #1 [ffffbf3c4039f938] __crash_kexec at ffffffffb63c3c9a
 #2 [ffffbf3c4039f9f8] crash_kexec at ffffffffb63c4e58
 #3 [ffffbf3c4039fa00] oops_end at ffffffffb62291db
 #4 [ffffbf3c4039fa20] do_trap at ffffffffb622596e
 #5 [ffffbf3c4039fa70] do_error_trap at ffffffffb6225a25
 #6 [ffffbf3c4039fab0] exc_invalid_op at ffffffffb6d256be
 #7 [ffffbf3c4039fad0] asm_exc_invalid_op at ffffffffb6e00af6
    [exception RIP: kfree+1074]
    RIP: ffffffffb6578e32  RSP: ffffbf3c4039fb88  RFLAGS: 00010246
    RAX: ffffa0e7984e9c00  RBX: ffffa0e7984e9c00  RCX: ffffa0e7984e9c60
    RDX: 000000001bc22001  RSI: ffffffffb6729dfd  RDI: ffffa0e7984e9c00
    RBP: ffffa0e880042800   R8: ffffa0e8b572b678   R9: ffffa0e8b572b678
    R10: 0000000000005aca  R11: 00000000000000e0  R12: fffff20e00613a40
    R13: fffff20e00613a40  R14: ffffffffb6729dfd  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #8 [ffffbf3c4039fbc0] blk_update_request at ffffffffb6729dfd
 #9 [ffffbf3c4039fc18] blk_mq_end_request at ffffffffb672a11a
#10 [ffffbf3c4039fc30] dmp_kernel_nvme_ioctl at ffffffffc09f2647 [vxdmp]
#11 [ffffbf3c4039fd00] dmp_dev_ioctl at ffffffffc09a3b93 [vxdmp]
#12 [ffffbf3c4039fd10] dmp_send_nvme_passthru_cmd_over_node at ffffffffc09f1497 [vxdmp]
#13 [ffffbf3c4039fd60] dmp_pr_do_nvme_read.constprop.0 at ffffffffc09b78e1 [vxdmp]
#14 [ffffbf3c4039fe00] dmp_pr_read at ffffffffc09e40be [vxdmp]
#15 [ffffbf3c4039fe78] dmpioctl at ffffffffc09b09c3 [vxdmp]
#16 [ffffbf3c4039fe88] dmp_ioctl at ffffffffc09d7a1c [vxdmp]
#17 [ffffbf3c4039fea0] blkdev_ioctl at ffffffffb6732b81
#18 [ffffbf3c4039fef0] __x64_sys_ioctl at ffffffffb65df1ba
#19 [ffffbf3c4039ff20] do_syscall_64 at ffffffffb6d2515c
#20 [ffffbf3c4039ff50] entry_SYSCALL_64_after_hwframe at ffffffffb6e0009b
    RIP: 00007fef03c3ec6b  RSP: 00007ffd1acad8a8  RFLAGS: 00000202
    RAX: ffffffffffffffda  RBX: 00000000444d5061  RCX: 00007fef03c3ec6b
    RDX: 00007ffd1acad990  RSI: 00000000444d5061  RDI: 0000000000000003
    RBP: 0000000000000003   R8: 0000000001cbba20   R9: 0000000000000000
    R10: 00007fef03c11d78  R11: 0000000000000202  R12: 00007ffd1acad990
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000000002
    ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

RESOLUTION:
The code changes have been made to fix the panic.

* 4138075 (Tracking ID: 4129873)

SYMPTOM:
In CVR environment, the application I/O may hang if CVM slave node is acting as RVG logowner and a data volume grow operation is triggered followed by a logclient node leaving the cluster.

DESCRIPTION:
When the logowner is not the CVM master and a data volume grow operation is taking place, the CVM master controls the region locking for I/O operations. If a logclient node leaves the cluster, the I/O operations initiated by it are not cleaned up correctly due to a lack of coordination between the CVM master and the RVG logowner node.

RESOLUTION:
Co-ordination between CVM master and RVG logowner node is fixed to manage the I/O cleanup correctly.

* 4138101 (Tracking ID: 4114867)

SYMPTOM:
The following error messages are seen while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]#
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')

DESCRIPTION:
In /etc/udev/rules.d/41-VxVM-selinux.rules, the double quotes around "Disabled" and "disable" inside the rule value are the issue.

RESOLUTION:
Code changes have been made to correct the problem.
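
The following line is only a minimal illustration of the quoting problem, not the shipped rule: inside a double-quoted RUN+= value, an embedded unescaped double quote (as around "Disabled" above) terminates the value early, so the inner comparison must avoid double quotes. The logger action here is a placeholder.

KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != Disabled ]; then /usr/bin/logger VxVM-SELinux-enabled; fi'"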

* 4138107 (Tracking ID: 4065490)

SYMPTOM:
systemd-udev threads consume more CPU during system boot-up or device discovery.

DESCRIPTION:
During disk discovery when new storage devices are discovered, VxVM udev rules are invoked for creating hardware path
symbolic link and setting SELinux security context on Veritas device files. For creating hardware path symbolic link to each
storage device, "find" command is used internally which is CPU intensive operation. If too many storage devices are attached to
system, then usage of "find" command causes high CPU consumption.

Also, for setting the appropriate SELinux security context on VxVM device files, restorecon is done irrespective of whether SELinux is enabled or disabled.

RESOLUTION:
Usage of "find" command is replaced with "udevadm" command. SELinux security context on VxVM device files is being set
only when SELinux is enabled on system.
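
A minimal sketch of the new behaviour described above (paths and the device name are placeholders, not the shipped rule): the SELinux context is restored only when SELinux is not disabled.

if [ "`/usr/sbin/getenforce`" != "Disabled" ]
then
    /usr/sbin/restorecon /dev/vx/dmp/<device_name>
fi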

* 4138224 (Tracking ID: 4129489)

SYMPTOM:
With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.

DESCRIPTION:
There was an issue with disk discovery at OS and DDL layer.

RESOLUTION:
Integration issue with disk discovery was resolved.

* 4138236 (Tracking ID: 4134069)

SYMPTOM:
VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.

DESCRIPTION:
Initial synchronization and DCM replay of VVR required the filesystem to be mounted locally on the logowner node as VVR did not have capability to 
fetch the required information from a remotely mounted filesystem mount point.

RESOLUTION:
VVR is updated to fetch the required SmartMove related information from a remotely mounted filesystem mount point.

* 4138237 (Tracking ID: 4113240)

SYMPTOM:
In CVR environment, with hostname binding configured, Rlink on VVR secondary may have incorrect VVR primary IP.

DESCRIPTION:
VVR Secondary Rlink picks up a wrong IP randomly since the replication is configured using virtual host which maps to multiple IPs.

RESOLUTION:
VVR Primary IP is corrected on the VVR Secondary Rlink.

* 4138251 (Tracking ID: 4132799)

SYMPTOM:
If GLM is not loaded, starting CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - 
VxVM vxclustadm ERROR V-5-1-9743 errno 3

DESCRIPTION:
Only the error number, and not the error message, is printed when joining CVM fails.

RESOLUTION:
The code changes have been made to fix the issue.
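
As a quick check (illustrative only; the kernel module name vxglm is assumed here), verify that the GLM module is loaded before starting CVM:

# lsmod | grep -w vxglm
# vxclustadm -m gab startnode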

* 4138348 (Tracking ID: 4121564)

SYMPTOM:
Memory leak for volcred_t could be observed in vxio.

DESCRIPTION:
Memory leak could occur if some private region IOs hang on a disk and there are duplicate entries for the disk in vxio.

RESOLUTION:
Code has been changed to avoid memory leak.

* 4138537 (Tracking ID: 4098144)

SYMPTOM:
vxtask list shows the parent process without any sub-tasks, and it never progresses for the SRL volume.

DESCRIPTION:
The vxtask remains stuck since the parent process doesn't exit. It was seen that all the children are completed, but the parent is not able to exit.
(gdb) p active_jobs
$1 = 1
Active jobs are reduced as and when children complete. Somehow one count remains pending, and it is not known which child exited without decrementing the count. Instrumentation messages are added to capture the issue.

RESOLUTION:
Added code that creates a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully. The file will be present when the vxtask parent hang issue is seen.
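
Illustrative check based on the behaviour described above: if the vxtask parent appears hung, look for the leftover log file under /etc/vx/log/ (which vxrecover deletes on a successful exit):

# vxtask list
# ls -l /etc/vx/log/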

* 4138538 (Tracking ID: 4085404)

SYMPTOM:
A huge performance drop is seen after Veritas Volume Replicator (VVR) enters Data Change Map (DCM) mode, when a large Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all I/Os are queued in the restart queue until the active map flush is finished. Too frequent active map flushes caused the huge I/O drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.

* 4140598 (Tracking ID: 4141590)

SYMPTOM:
Some incidents do not appear in changelog because their cross-references are not properly processed

DESCRIPTION:
Not every cross-reference is a parent-child relation. In such cases, 'top' will not be present and the changelog script ends execution.

RESOLUTION:
All cross-references are now traversed; a parent-child relation is looked up only if it is present, and then the top is found.

* 4143580 (Tracking ID: 4142054)

SYMPTOM:
System panicked in the following stack:

[ 9543.195915] Call Trace:
[ 9543.195938]  dump_stack+0x41/0x60
[ 9543.195954]  panic+0xe7/0x2ac
[ 9543.195974]  vol_rv_inactive+0x59/0x790 [vxio]
[ 9543.196578]  vol_rvdcm_flush_done+0x159/0x300 [vxio]
[ 9543.196955]  voliod_iohandle+0x294/0xa40 [vxio]
[ 9543.197327]  ? volted_getpinfo+0x15/0xe0 [vxio]
[ 9543.197694]  voliod_loop+0x4b6/0x950 [vxio]
[ 9543.198003]  ? voliod_kiohandle+0x70/0x70 [vxio]
[ 9543.198364]  kthread+0x10a/0x120
[ 9543.198385]  ? set_kthread_struct+0x40/0x40
[ 9543.198389]  ret_from_fork+0x1f/0x40

DESCRIPTION:
- From the SIO stack, we can see that it is a case of done being called twice.
- Looking at vol_rvdcm_flush_start(), we can see that when a child SIO is created, it is directly added to the global SIO queue.
- This can cause a child SIO to start while vol_rvdcm_flush_start() is still in the process of generating other child SIOs.
- It means that when, say, the first child SIO gets done, it can find the children count going to zero and call done.
- The next child SIO can also independently find the children count to be zero and call done.

RESOLUTION:
The code changes have been done to fix the problem.

* 4143857 (Tracking ID: 4130393)

SYMPTOM:
vxencryptd crashed repeatedly due to segfault.

DESCRIPTION:
Linux can pass large I/Os of 2 MB size to the VxVM layer; however, vxencryptd expects I/Os with a maximum size of 1 MB from the kernel and pre-allocates only a 1 MB buffer for encryption/decryption. This causes vxencryptd to crash when processing large I/Os.

RESOLUTION:
Code changes have been made to allocate enough buffer.

* 4145064 (Tracking ID: 4145063)

SYMPTOM:
vxio Module fails to load post VxVM package installation.

DESCRIPTION:
The following message is seen in dmesg:
[root@dl360g10-115-v23 ~]# dmesg | grep symbol
[ 2410.561682] vxio: no symbol version for storageapi_associate_blkg
Because of incorrectly nested IF blocks in "src/linux/kernel/vxvm/Makefile.target", the code for the RHEL 9 block was not getting executed, because of which certain symbols were not present in the vxio.mod.c file. This in turn caused the above symbol warning to be seen in dmesg.

RESOLUTION:
Fixed the improper nesting of the IF conditions.

* 4146550 (Tracking ID: 4108235)

SYMPTOM:
A system-wide hang causes all application and config I/Os to hang.

DESCRIPTION:
Memory pools are used in the vxio driver for managing kernel memory for different purposes. One of the pools, called the 'NMCOM pool' and used on the VVR secondary, was causing a memory leak. The memory leak was not getting detected from the pool stats as the metadata referring to the pool itself was getting freed.

RESOLUTION:
The bug causing the memory leak is fixed. There was a race condition in the VxVM transaction code path on the secondary side of VVR where memory was not getting freed when certain conditions were hit.

* 4149499 (Tracking ID: 4149498)

SYMPTOM:
While upgrading the VxVM package, a number of warnings are seen regarding .ko files not being found for various modules.

DESCRIPTION:
These warnings are seen because all the unwanted .ko files have been removed.

RESOLUTION:
Code changes have been done so that these warnings are no longer seen.

* 4150099 (Tracking ID: 4150098)

SYMPTOM:
After a few VxVM operations, if a reboot is taken, the file system goes into read-only mode and vxconfigd does not come up.

DESCRIPTION:
The SELinux context of /etc/fstab was getting updated, which caused the issue.

RESOLUTION:
Fixed the SELinux context of /etc/fstab.
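
An illustrative way to inspect the SELinux context of /etc/fstab and restore it to the default (standard SELinux tooling, not InfoScale-specific commands):

# ls -Z /etc/fstab
# restorecon -v /etc/fstab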

* 4150459 (Tracking ID: 4150160)

SYMPTOM:
The system panics in the DMP code path.

DESCRIPTION:
The CMDS-fsmigadm test hits "Oops: 0003 [#1] PREEMPT SMP PTI" in the DMP code path.

Reproduction steps: run the cmds-fsmigadm test.

Build details: VRTSvxvm-8.0.3.0000-0716_RHEL9 (internal test build).

RESOLUTION:
The buggy code has been removed and the issue is fixed.

Patch ID: VRTSaslapm 8.0.2.1400

* 4137995 (Tracking ID: 4117350)

SYMPTOM:
The below error is observed when trying to import the disk group:

# vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdg
VxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]

Please refer to system log for details.

DESCRIPTION:
The REPLICATED flag is used to identify a hardware-replicated device, so to import a diskgroup on REPLICATED disks, the usereplicatedev option must be used. As that option was not provided, the issue was observed.

RESOLUTION:
REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.

Patch ID: VRTSvxvm-8.0.2.1300

* 4132775 (Tracking ID: 4132774)

SYMPTOM:
Existing VxVM package fails to load on SLES15SP5

DESCRIPTION:
There are multiple changes in this kernel related to the handling of SCSI passthrough requests, initialization of bio routines, and ways of obtaining blk requests. Hence the existing code is not compatible with SLES15SP5.

RESOLUTION:
Required changes have been done to make VxVM compatible with SLES15SP5.

* 4133312 (Tracking ID: 4128451)

SYMPTOM:
A hardware replicated disk group fails to be auto-imported after reboot.

DESCRIPTION:
Currently the standard diskgroup and cloned diskgroup are supported with auto-import. Hardware replicated disk group isn't supported yet.

RESOLUTION:
Code changes have been made to support hardware replicated disk groups with autoimport.

* 4133315 (Tracking ID: 4130642)

SYMPTOM:
A node failed to rejoin the cluster after it was switched from master to slave, due to the failure of the replicated diskgroup import.
The below error message could be found in /var/VRTSvcs/log/CVMCluster_A.log.
CVMCluster:cvm_clus:monitor:vxclustadm nodestate return code:[101] with output: [state: out of cluster
reason: Replicated dg record is found: retry to add a node failed]

DESCRIPTION:
The flag which shows that the diskgroup was imported with usereplicatedev=only failed to be marked since the last time the diskgroup got imported. 
The missing flag caused the failure of the replicated diskgroup import, which further caused the node rejoin failure.

RESOLUTION:
The code changes have been done to flag the diskgroup after it got imported with usereplicatedev=only.

* 4133946 (Tracking ID: 3972344)

SYMPTOM:
After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150  Volume <volume_name> does not exist' is logged.

DESCRIPTION:
In volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable leading to inappropriate mappings. Volumes are searched in an incorrect diskgroup which is logged in the error message.
The vxrecover command works fine if the diskgroup name associated with volume is specified. [vxrecover -g <dg_name> -s]

RESOLUTION:
Changed the code to use switch_diskgroup() instead of dgsetup. Current_group is updated and the current_dg is set. Thus vxrecover finds the Volume correctly.

* 4135127 (Tracking ID: 4134023)

SYMPTOM:
vxconfigrestore (diskgroup configuration restoration) for a H/W replicated diskgroup failed with the below error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.

DESCRIPTION:
A H/W replicated diskgroup can be imported only with the option "-o usereplicatedev=only". vxconfigrestore did not do the H/W replicated diskgroup check, and without the proper import option the diskgroup import failed.

RESOLUTION:
The code changes have been made to do the H/W replicated diskgroup check in vxconfigrestore.

Patch ID: VRTSaslapm-8.0.2.1300

* 4137283 (Tracking ID: 4137282)

SYMPTOM:
Support for ASLAPM on SLES15SP5

DESCRIPTION:
SLES15SP5 is a new release and hence APM module should be recompiled with new kernel.

RESOLUTION:
Compiled APM with new kernel.

Patch ID: VRTSvxvm-8.0.2.1200

* 4119267 (Tracking ID: 4113582)

SYMPTOM:
In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.

DESCRIPTION:
Reboot of primary nodes resulted in missing write completions of updates on the primary SRL volume. After the node came up, last update received by VVR secondary was incorrectly compared with the missing updates.

RESOLUTION:
Fixed the check to correctly compare the last received update by VVR secondary.

* 4123065 (Tracking ID: 4113138)

SYMPTOM:
In CVR environments configured with virtual hostname, after node reboots on VVR Primary and Secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with following  warning message:
VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error. Displayed status information is from Secondary and can be out-of-date.

DESCRIPTION:
This issue occurs when there is an explicit RVG logowner set on the CVM master, due to which the old connection of vradmind with its remote peer disconnects and a new connection is not formed.

RESOLUTION:
Fixed the issue with the vradmind connection with its remote peer.

* 4123069 (Tracking ID: 4116609)

SYMPTOM:
In CVR environments where replication is configured using virtual hostnames, vradmind on VVR primary loses connection with its remote peer after a planned RVG logowner change on the VVR secondary site.

DESCRIPTION:
vradmind on VVR primary was unable to detect a RVG logowner change on the VVR secondary site.

RESOLUTION:
Enabled primary vradmind to detect RVG logowner change on the VVR secondary site.

* 4123080 (Tracking ID: 4111789)

SYMPTOM:
In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and may not utilize the high performance NIC/network configured for VVR.

DESCRIPTION:
The default value of tunable was set to 'any_ip'.

RESOLUTION:
The default value of tunable is set to 'replication_ip'.

* 4124291 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4124794 (Tracking ID: 4114952)

SYMPTOM:
With VVR configured with a virtual hostname, after node reboots on DR site, 'vradmin pauserep' command failed with following error:
VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.

DESCRIPTION:
The virtual host mapped to multiple IP addresses, and vradmind was using incorrectly mapped IP address.

RESOLUTION:
Fixed by using the correct mapping of IP address from the virtual host.

* 4124796 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable results. vradmind periodically forks a child thread to collect VVR statistics data and send it to the remote site, while the main thread may also be sending data using the same handler object. Thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4125392 (Tracking ID: 4114193)

SYMPTOM:
'vradmin repstatus' command showed replication data status incorrectly as 'inconsistent'.

DESCRIPTION:
vradmind was relying on replication data status from both primary as well as DR site.

RESOLUTION:
Fixed replication data status to rely on the primary data status.

* 4125811 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying
to open the secondary RVG volume acquires a lock and waits for the SRL positions to match.
If any VxVM transaction kicks in during this time, it also has to wait for the same lock.
Further, the logowner node panicked, which triggered the logownership change protocol; this hung
because the earlier transaction was stuck. As the logowner change protocol could not complete,
in the absence of a valid logowner the SRL positions could not match, which caused a deadlock. That led
to the vxconfigd and vx command hang.

RESOLUTION:
Added changes to allow read operations on the volume even if the SRL positions are
unmatched. Write I/Os are still blocked and only the open() call for read-only
operations is allowed, hence there will not be any data consistency or integrity issues.

* 4128127 (Tracking ID: 4132265)

SYMPTOM:
Machine with NVMe disks panics with following stack: 
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64

DESCRIPTION:
Issue was applicable to setups with NVMe devices which do not support SCSI3-PR as an ioctl was called without checking correctly if SCSI3-PR was supported.

RESOLUTION:
Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.

* 4128835 (Tracking ID: 4127555)

SYMPTOM:
While adding secondary site using the 'vradmin addsec' command, the command fails with following error if diskgroup id is used in place of diskgroup name:
VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too long

DESCRIPTION:
Diskgroup names can be 32 characters long, whereas diskgroup ids can be 64 characters long. This was not handled by the vradmin commands.

RESOLUTION:
Fixed the vradmin commands to handle the case where longer diskgroup ids can be used in place of diskgroup names.
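
An illustrative invocation with placeholders (the general 'vradmin addsec' syntax is assumed here); after the fix, a 64-character diskgroup id may be supplied where a diskgroup name was expected:

# vradmin -g <dgname_or_dgid> addsec <rvg_name> <primary_hostname> <secondary_hostname>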

* 4129766 (Tracking ID: 4128380)

SYMPTOM:
If VVR is configured using virtual hostname and 'vradmin resync' command is invoked from a DR site node, it fails with following error:
VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.

DESCRIPTION:
In the case where the virtual hostname maps to multiple IPs, the vradmind service on the DR site was not able to reach the VVR logowner node on the primary site due to an incorrect IP address mapping being used.

RESOLUTION:
Fixed vradmind to use correct mapped IP address of the primary vradmind.

* 4130402 (Tracking ID: 4107801)

SYMPTOM:
/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.

DESCRIPTION:
vxpath-links is responsible for creating the hardware paths under /dev/vx/.dmp.
This script gets invoked from /lib/udev/vxpath_links. The "/lib/udev" folder is not present in SLES15SP3;
it is explicitly removed from SLES15SP3 onwards, and Veritas-specific scripts/libraries are expected to be created in a vendor-specific folder.

RESOLUTION:
Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".

* 4130827 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes kernel panic because of null pointer dereference in kernel code when BFQ disk io scheduler is used. This is observed on SLES15 SP3 minor kernel >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernel >= 5.14.21-150400.24.11.1

RESOLUTION:
It is recommended to use mq-deadline as io scheduler. Code changes have been done to automatically change the disk io scheduler to mq-deadline.
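
Illustrative commands (standard Linux sysfs interface, with a placeholder device name) to check the current disk I/O scheduler and switch it to mq-deadline manually:

# cat /sys/block/<device>/queue/scheduler
# echo mq-deadline > /sys/block/<device>/queue/scheduler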

* 4130947 (Tracking ID: 4124725)

SYMPTOM:
With VVR configured using virtual hostnames, 'vradmin delpri' command could hang after doing the RVG cleanup.

DESCRIPTION:
'vradmin delsec' command used prior to 'vradmin delpri' command had left the cleanup in an incomplete state resulting in next cleanup command to hang.

RESOLUTION:
Fixed to make sure that 'vradmin delsec' command executes its workflow correctly.

Patch ID: VRTSaslapm-8.0.2.1200

* 4133009 (Tracking ID: 4133010)

SYMPTOM:
aslapm rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to aslapm rpm.
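
Once the rpm is installed, the added changelog can be viewed with the standard rpm query, for example:

# rpm -q --changelog VRTSaslapm | more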

Patch ID: VRTSvxvm-8.0.2.1100

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl and libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [curl and libxml] in their current versions, used by VxVM, have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSaslapm-8.0.2.1100

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in third party components, [curl and libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [curl and libxml] in their current versions, used by VxVM, have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTScavf-8.0.2.1500

* 4133969 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups, and EMC SRDF hardware-replicated disk groups are failing with a "PR operation failed" message.

DESCRIPTION:
In the case of hardware-replicated disks like EMC SRDF, failover of disk groups might not automatically succeed and a manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag, which needs to be updated manually for a successful failover. The SCSI-3 error message also needs to be changed to "PR operation failed".

RESOLUTION:
For DMP environments, the VxVM & DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM has also provided a new vxdg import option '-o usereplicatedev=only' with DMP. This option selects only the hardware-replicated disks during the LUN selection process.
In pre-8.0.x VxVM, the failure was reported as "SCSI-3 PR operation failed", as shown below, and the VRTScavf (CVM) 7.4.2.2201 agent was enhanced on AIX to handle these EMC SRDF import failures.

Sample syntax (pre-8.0.x error message format):
# /usr/sbin/vxdg -s -o groupreserve=VCS -o clearreserve -cC -t import AIXSRDF
VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed:
SCSI-3 PR operation failed

New 8.0.x VxVM error message format:
2023/09/27 12:44:02 VCS INFO V-16-20007-1001 CVMVolDg:<RESOURCE-NAME>:online:VxVM vxdg ERROR V-5-1-19179 Disk group <DISKGROUP-NAME>: import failed:
PR operation failed
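
A minimal sketch of the manual steps described above, using a placeholder diskgroup name; the DMP attributes are refreshed first, and the new import option then selects only the hardware-replicated disks:

# vxdisk scandisks
# vxdg -o usereplicatedev=only -c import <diskgroup_name>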

* 4137640 (Tracking ID: 4088479)

SYMPTOM:
The EMC SRDF managed diskgroup import failed. This failure is specific to EMC storage only, on AIX with fencing.

DESCRIPTION:
The EMC SRDF managed diskgroup import fails as follows:
#/usr/sbin/vxdg -o groupreserve=VCS -o clearreserve -c -tC import srdfdg
VxVM vxdg ERROR V-5-1-19179 Disk group srdfdg: import failed:
SCSI-3 PR operation failed

RESOLUTION:
06/16 14:31:49:  VxVM vxconfigd DEBUG  V-5-1-7765 /dev/vx/rdmp/emc1_0c93: pgr_register: setting pgrkey: AVCS
06/16 14:31:49:  VxVM vxconfigd DEBUG  V-5-1-5762 prdev_open(/dev/vx/rdmp/emc1_0c93): open failure: 47                           //#define EWRPROTECT 47 /* Write-protected media */
06/16 14:31:49:  VxVM vxconfigd ERROR  V-5-1-18444 vold_pgr_register: /dev/vx/rdmp/emc1_0c93: register failed:errno:47 Make sure the disk supports SCSI-3 PR.
 
AIX differentiates between RW and RD-only opens. When the underlying device state has changed, the device open failed because of the pending open count (dmp_cache_open feature).

Patch ID: VRTSfsadv-8.0.2.1500

* 4153164 (Tracking ID: 4088024)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.

DESCRIPTION:
VxFS uses the OpenSSL third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version (1.1.1q) of this third-party component in which the security vulnerabilities have been addressed.

Patch ID: VRTSgms-8.0.2.1500

* 4133279 (Tracking ID: 4133278)

SYMPTOM:
The GMS module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4134948 (Tracking ID: 4134947)

SYMPTOM:
The GMS module fails to load on azure SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the azure SLES15 SP5 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on azure SLES15 SP5.

Patch ID: VRTSgms-8.0.2.1300

* 4133279 (Tracking ID: 4133278)

SYMPTOM:
The GMS module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
GMS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

Patch ID: VRTSgms-8.0.2.1200

* 4126266 (Tracking ID: 4125932)

SYMPTOM:
A "no symbol version" warning for ki_get_boot is seen in dmesg after SFCFSHA configuration.

DESCRIPTION:
A "no symbol version" warning for ki_get_boot is seen in dmesg after SFCFSHA configuration.

RESOLUTION:
Updated the code to build gms with correct kbuild symbols.

* 4127527 (Tracking ID: 4107112)

SYMPTOM:
The GMS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-gms script to consider kernel-build version in exact-version-module version calculation.

* 4127528 (Tracking ID: 4107753)

SYMPTOM:
The GMS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-gms script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129708 (Tracking ID: 4129707)

SYMPTOM:
GMS rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GMS rpm.

Patch ID: VRTSglm-8.0.2.1500

* 4133277 (Tracking ID: 4133276)

SYMPTOM:
The GLM module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4134946 (Tracking ID: 4134945)

SYMPTOM:
The GLM module fails to load on azure SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the azure SLES15 SP5 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on azure SLES15 SP5.

* 4138274 (Tracking ID: 4126298)

SYMPTOM:
The system may panic due to an "unable to handle kernel paging request" error,
and memory corruption could happen.

DESCRIPTION:
A panic may occur due to a race between a spurious wakeup and the normal
wakeup of a thread waiting for a GLM lock grant. Due to the race,
the spurious wakeup would have already freed the memory, and then
the normal wakeup thread might pass that freed and reused memory
to the wake_up function, causing memory corruption and panic.

RESOLUTION:
Fixed the race between the spurious wakeup and normal wakeup threads
by making the wake_up call lock-protected.

Patch ID: VRTSglm-8.0.2.1300

* 4133277 (Tracking ID: 4133276)

SYMPTOM:
The GLM module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
GLM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

Patch ID: VRTSglm-8.0.2.1200

* 4127524 (Tracking ID: 4107114)

SYMPTOM:
The GLM module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-glm script to consider kernel-build version in exact-version-module version calculation.

* 4127525 (Tracking ID: 4107754)

SYMPTOM:
The GLM module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-glm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129715 (Tracking ID: 4129714)

SYMPTOM:
GLM rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to GLM rpm.

Patch ID: VRTSodm-8.0.2.1500

* 4133286 (Tracking ID: 4133285)

SYMPTOM:
The ODM module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4134950 (Tracking ID: 4134949)

SYMPTOM:
The ODM module fails to load on azure SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the azure SLES15 SP5 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on azure SLES15 SP5.

Patch ID: VRTSodm-8.0.2.1400

* 4144274 (Tracking ID: 4144269)

SYMPTOM:
After installing VRTSvxfs-8.0.2.1400, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.2.1300

* 4133286 (Tracking ID: 4133285)

SYMPTOM:
The ODM module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
ODM module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

Patch ID: VRTSodm-8.0.2.1200

* 4126262 (Tracking ID: 4126256)

SYMPTOM:
no symbol version warning for "ki_get_boot" in dmesg after SFCFSHA configuration

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building ODM module, which results in no symbol version warning for "ki_get_boot" symbol of VEKI.

RESOLUTION:
Modified the code to make sure that modpost picks all the dependent symbols while building ODM module.

* 4127518 (Tracking ID: 4107017)

SYMPTOM:
The ODM module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in exact-version-module version calculation.

* 4127519 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129838 (Tracking ID: 4129837)

SYMPTOM:
ODM rpm does not have changelog

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
Changelog is generated and added to ODM rpm.

Patch ID: VRTSvxfs-8.0.2.1500

* 4119626 (Tracking ID: 4119627)

SYMPTOM:
The fsck command faces a few SELinux permission denials.

DESCRIPTION:
The fsck command faces a few SELinux permission denials while managing var_log_t files and searching init_var_run_t directories.

RESOLUTION:
Required SELinux permissions are added for command fsck to be able to manage var_log_t files and search init_var_run_t directories.

* 4133481 (Tracking ID: 4133480)

SYMPTOM:
The VxFS module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4134952 (Tracking ID: 4134951)

SYMPTOM:
The VxFS module fails to load on azure SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the azure SLES15 SP5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on azure SLES15 SP5.

* 4146580 (Tracking ID: 4141876)

SYMPTOM:
Old SecureFS configuration is getting deleted.

DESCRIPTION:
It is possible that multiple instances of the vxschadm binary are executed to update the config file; in that case, there is a high chance that the last updater nullifies the changes made by the previous instance.

RESOLUTION:
Added a synchronization mechanism between vxschadm processes running across the InfoScale cluster.

* 4148734 (Tracking ID: 4148732)

SYMPTOM:
Increased virtual memory consumption by binaries/daemons that call this API, e.g. the vxfstaskd daemon.

DESCRIPTION:
On every call, get_dg_vol_names() does not free 8192 bytes of memory, which results in an increase in the total virtual memory consumption of vxfstaskd.

RESOLUTION:
Free the unused memory.

* 4150065 (Tracking ID: 4149581)

SYMPTOM:
WORM checkpoints and files are not deleted even though their retention period has expired.

DESCRIPTION:
Frequent FS freeze operations, like the creation of a checkpoint, may cause the SecureClock to drift from its regular update cycle.

RESOLUTION:
Fixed this bug.

Patch ID: VRTSvxfs-8.0.2.1400

* 4141666 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.2.1300

* 4133481 (Tracking ID: 4133480)

SYMPTOM:
The VxFS module fails to load on SLES15 SP5.

DESCRIPTION:
This issue occurs due to changes in the SLES15 SP5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on SLES15 SP5.

* 4133965 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error:
"ERROR: V-3-28446:  bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, while correcting the file system WORM/SoftWORM flag, there was no check whether the user wanted to correct the pflags or just wanted to validate whether the flag value is missing. Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION:
Code is added to not try to fix the problem if the user ran fsck with the -n option. The SOFTWORM scenario is also handled.

* 4134040 (Tracking ID: 3979756)

SYMPTOM:
Multiple fcntl F_GETLK calls take longer to complete on CFS than on LM. Each call adds more and more delay and contributes to what is later seen as performance degradation.

DESCRIPTION:
F_SETLK utilizes the lock caches while taking or invalidating locks, which is why it does not need to broadcast messages to peer nodes, whereas F_GETLK does not utilize the caches and broadcasts messages to all the peer nodes. Therefore F_SETLK operations are not penalized, but F_GETLK operations are slower by almost a factor of 2 when used on CFS as compared to LM.

RESOLUTION:
Added cache for F_GETLK operation as well so that broadcast messages are avoided which would save some time.

Patch ID: VRTSvxfs-8.0.2.1200

* 4121230 (Tracking ID: 4119990)

SYMPTOM:
Some nodes in cluster are in hang state and recovery is stuck.

DESCRIPTION:
There is a deadlock where one thread locks the buffer and waits for the recovery to complete. Recovery, on the other hand, may get stuck while flushing and
invalidating the buffers from the buffer cache, as it cannot lock the buffer.

RESOLUTION:
If recovery is in progress, the buffer is released and VX_ERETRY is returned so that callers retry the operation. There are some cases where a lock is taken on 2 buffers; for
those cases, the flag VX_NORECWAIT is passed, which retries the operation after releasing both the buffers.

* 4125870 (Tracking ID: 4120729)

SYMPTOM:
Incorrect file replication (VFR) job status at the VFR target site while replication is in the running state at the source.

DESCRIPTION:
If a full sync is started in recovery mode, the state on the target is not updated at the start of replication (from 'failed' to 'full-sync running'). This missed state change causes issues with the states for the next incremental syncs.

RESOLUTION:
Updated the code to set the correct state on the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld was not updating its state when it was restarted in recovery mode. Because of this, the job state remained in the running state even after successful replication on the target. With this state on the target, if the job is promoted, the replication process does not create a new ckpt for the first sync after failover, which corrupts the state file on the new source. Because of this incorrect/corrupt state file, the job sync from the new source fails with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on target when job was started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon doesn't send that failure reply to the source. This can lead to the vxfsreplicate process remaining in a waiting state indefinitely for the pass-completion reply from the target. This leads to a job hang on the source and needs manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it fails after 5 retries, the target replies to the source with an appropriate error.

* 4125875 (Tracking ID: 4112931)

SYMPTOM:
vxfsrepld consumes a lot of virtual memory when it has been running for long time.

DESCRIPTION:
The current VxFS thread pool is not efficient when used by a daemon process like vxfsrepld. It did not release the underlying resources used by newly created threads, which in turn increased the virtual memory consumption of the process. The underlying resources of threads are released either when pthread_join() is called on them or when the threads are created with the detached attribute. With the current implementation, pthread_join() is called only when the thread pool is destroyed as part of cleanup, but with vxfsrepld, pool_destroy() is not expected to be called every time a job is successful; it is called only when repld is stopped. This led to thread resources accumulating and increasing the VM usage of the process.

RESOLUTION:
Code is modified to detach threads when they exit.

* 4125878 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication.
With a large number of jobs, there is a chance of referring to a job which is already freed, due to which a core is generated by the replication service and
the job might fail.

RESOLUTION:
Updated the code to take a hold on the job while checking for an invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM:
Block number, device id information, in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION:
Block number, device id information, in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION:
Code changes have been done to include required missing information in corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM:
The VxFS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in exact-version-module version calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.

* 4127594 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel,
system may crash with following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations. By the time the fsadm thread tries to access the
file system data structures, the umount operation may have already freed them, which leads to the panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.
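
The pattern behind this fix can be illustrated with the following hypothetical userspace sketch (names and locking are illustrative; the actual VxFS kernel code differs): check an "unmount in progress" flag under a lock before touching file-system structures and bail out instead of racing with the unmount.

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>

    struct fs_info {
        pthread_mutex_t lock;
        bool            unmount_in_progress;
        /* ... other file-system state ... */
    };

    static int fs_unset_mntlock(struct fs_info *fs)
    {
        pthread_mutex_lock(&fs->lock);
        if (fs->unmount_in_progress) {
            pthread_mutex_unlock(&fs->lock);
            return EBUSY;   /* fail the request instead of touching freed state */
        }
        /* ... safe to operate on the file-system structures here ... */
        pthread_mutex_unlock(&fs->lock);
        return 0;
    }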

* 4127720 (Tracking ID: 4127719)

SYMPTOM:
The fsdb binary fails to open the device on a VVR secondary volume in RW mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION:
fsdb bails out when run on a VVR secondary volume.
At the file system level the volume has write permission, but because it is a secondary from the VVR perspective, the block layer does not allow it to be opened in write mode.
In addition, the fstyp binary could not dump the fs_uuid value along with the other superblock fields.

RESOLUTION:
Fallback logic is added to fsdb: if fs_open fails to open the device in read-write mode, it retries in read-only mode. The fstyp binary is fixed to dump the fs_uuid value along with the other superblock fields.
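
A minimal sketch of the fallback logic in C (the function name is illustrative, not the actual fsdb code): try to open the device read-write and, if that fails (for example on a VVR secondary), retry read-only.

    #include <fcntl.h>
    #include <unistd.h>

    /* Returns a file descriptor, or a negative value with errno set. */
    static int open_device(const char *path, int *readonly)
    {
        int fd = open(path, O_RDWR);

        if (fd < 0) {
            fd = open(path, O_RDONLY);   /* fall back to read-only access */
            *readonly = (fd >= 0);
        } else {
            *readonly = 0;
        }
        return fd;
    }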

* 4127785 (Tracking ID: 4127784)

SYMPTOM:
/opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml
UX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10

DESCRIPTION:
Before this fix, the fsppadm command did not stop parsing and treated an invalid UID/GID as a warning only. Here, "invalid" means the UID/GID is not an integer number. If the given UID/GID is an integer that does not exist on the system, it is still only a warning.

RESOLUTION:
Code is added to report a proper error to the user when invalid user/group IDs are provided.
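
A hypothetical check corresponding to this behavior (illustrative only, not the actual fsppadm code): a USER/GROUP id that is not a pure integer is rejected as an error, while a numeric id that does not exist remains a warning handled elsewhere.

    #include <errno.h>
    #include <stdlib.h>

    /* Returns 0 and sets *out if s is a valid non-negative integer id. */
    static int parse_id(const char *s, long *out)
    {
        char *end;
        long val;

        errno = 0;
        val = strtol(s, &end, 10);
        if (errno != 0 || end == s || *end != '\0' || val < 0)
            return -1;               /* not numeric: report a hard error */
        *out = val;
        return 0;                    /* numeric; existence is checked later */
    }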

* 4128249 (Tracking ID: 4119965)

SYMPTOM:
The VxFS mount binary fails to mount a VxFS file system with an SELinux context.

DESCRIPTION:
Mounting the file system using the VxFS mount binary with a specific SELinux context shows the following error:
/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.

RESOLUTION:
The VxFS mount command is modified to pass context options to the kernel only if SELinux is enabled.
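
A minimal sketch of the gating logic, assuming libselinux is available (is_selinux_enabled() is a standard libselinux call; the actual mount binary may detect SELinux differently):

    #include <selinux/selinux.h>
    #include <string.h>

    /* Decide whether the context= option should be forwarded to the kernel. */
    static int should_pass_context(const char *opts)
    {
        if (opts == NULL || strstr(opts, "context=") == NULL)
            return 0;                    /* no SELinux context option requested */
        return is_selinux_enabled() > 0; /* forward only if SELinux is enabled */
    }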

* 4128723 (Tracking ID: 4114127)

SYMPTOM:
Hang in VxFS internal LM Conformance - inotify test

DESCRIPTION:
On the SLES15 SP4 kernel, an internal test hangs with the following process stack:

[<0>] fsnotify_sb_delete+0x19d/0x1e0
[<0>] generic_shutdown_super+0x3f/0x120
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] vx_unmount_cleanup_notify.part.37+0x96/0x150 [vxfs]
[<0>] vx_kill_sb+0x91/0x2b0 [vxfs]
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] cleanup_mnt+0xb8/0x150
[<0>] task_work_run+0x70/0xb0
[<0>] exit_to_user_mode_prepare+0x224/0x230
[<0>] syscall_exit_to_user_mode+0x18/0x40
[<0>] do_syscall_64+0x67/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae

RESOLUTION:
Code changes have been made to resolve the hang.

* 4129494 (Tracking ID: 4129495)

SYMPTOM:
Kernel panic observed in internal VxFS LM conformance testing.

DESCRIPTION:
A kernel panic has been observed in internal VxFS testing when the OS writeback thread marks an inode for writeback and then calls the file system hook vx_writepages.
The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on the writeback. This deadlock causes the tsrapi command to hang, which further causes a kernel panic.

RESOLUTION:
The code is modified to avoid deallocating the inode while inode writeback is in progress.

* 4129681 (Tracking ID: 4129680)

SYMPTOM:
The VxFS RPM does not have a changelog.

DESCRIPTION:
A changelog in the RPM helps identify incidents that are missing with respect to another version.

RESOLUTION:
A changelog is generated and added to the VxFS RPM.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-8.0.2.1500.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-8.0.2.1500.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.2.1500.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.2.1500.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime.)
    # pwd /tmp/hf
    # ./installVRTSinfoscale802P1500 [<host1> <host2>...]

You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
Secure boot support:

InfoScale support for the UEFI Secure Boot feature was added in the InfoScale 8.0.2 release. In patch
8.0.2U2 (8.0.2.1500), the keys used for signing InfoScale kernel drivers have been updated.
- There is no impact to customers with Secure Boot disabled.
- Customers with Secure Boot enabled need to do the following:
1. Download the latest key from https://sort.veritas.com/public/infoscale/keys/pubkey.der and enroll
it before installing or upgrading to InfoScale 8.0.2U2 (8.0.2.1500).
2. Download the latest key from https://sort.veritas.com/public/infoscale/keys/pubkey.der and enroll
it before installing or upgrading to any InfoScale 8.0.2 patch built on or after 4th January 2024.
3. To install any InfoScale 8.0.2.x release with a release date prior to 4th January 2024, contact
Veritas Support to obtain the older keys and enroll them before installing or upgrading InfoScale.


OTHERS
------
NONE