infoscale-rhel8_x86_64-Patch-7.4.2.2500

 Basic information
Release type: Patch
Release date: 2022-05-18
OS update support: RHEL8 x86-64 Update 6
Technote: None
Documentation: None
Popularity: 782 viewed
Download size: 808.17 MB
Checksum: 2498436735

 Applies to one or more of the following products:
InfoScale Availability 7.4.2 on RHEL8 x86-64
InfoScale Enterprise 7.4.2 on RHEL8 x86-64
InfoScale Foundation 7.4.2 on RHEL8 x86-64
InfoScale Storage 7.4.2 on RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
3930708, 4006982, 4007226, 4007375, 4007376, 4007677, 4008072, 4008606, 4010892, 4011866, 4011971, 4012063, 4012485, 4012744, 4012745, 4012746, 4012751, 4012848, 4013034, 4013036, 4013155, 4013169, 4013420, 4013718, 4014244, 4015834, 4018173, 4018174, 4018178, 4018181, 4018182, 4019533, 4019535, 4020207, 4020528, 4021055, 4021057, 4021058, 4021059, 4021238, 4021240, 4021346, 4021359, 4021366, 4021428, 4021748, 4022049, 4022052, 4022053, 4022054, 4022056, 4022069, 4029493, 4030882, 4031189, 4031195, 4031342, 4037283, 4037288, 4037622, 4037952, 4038101, 4039475, 4039510, 4039511, 4039512, 4039517, 4039694, 4040238, 4040608, 4040612, 4040618, 4040831, 4040834, 4040837, 4042038, 4042650, 4042686, 4043042, 4044184, 4046200, 4046265, 4046266, 4046267, 4046271, 4046272, 4046415, 4046419, 4046420, 4046423, 4046515, 4046520, 4046524, 4046526, 4046829, 4046906, 4046907, 4046908, 4047568, 4047588, 4047590, 4047592, 4047695, 4047722, 4049091, 4049097, 4049268, 4049416, 4049440, 4049572, 4049693, 4051701, 4051702, 4051705, 4051706, 4052763, 4052766, 4053173, 4053174, 4053176, 4053177, 4053178, 4053228, 4054311, 4054435, 4054857, 4055211, 4056919, 4057309, 4057311, 4057313, 4057424, 4057429, 4058802, 4058873, 4059899, 4059901, 4060549, 4060566, 4060584, 4060585, 4060805, 4060839, 4060962, 4060966, 4061004, 4061036, 4061055, 4061057, 4061203, 4061298, 4061317, 4061462, 4061509, 4061527, 4061646, 4062461, 4062577, 4062746, 4062747, 4062751, 4062755, 4062783, 4062795, 4063374, 4064523, 4064920, 4066930, 4067422, 4067460, 4067463, 4067464, 4067466, 4067686, 4067706, 4067710, 4067712, 4067713, 4067715, 4067717, 4067914, 4067915, 4069522, 4069523, 4069524, 4070099, 4070186, 4070253, 4070366, 4071007, 4071090, 4071101, 4071131, 4072337, 4072338, 4072339, 4072340, 4072874, 4074770

 Patch ID:
VRTSsfcpi-7.4.2.1100-GENERIC
VRTSglm-7.4.2.2100-RHEL8
VRTSvlic-4.01.742.300-RHEL8
VRTSvcswiz-7.4.2.2100-RHEL8
VRTSpython-3.7.4.35-RHEL8
VRTSdbac-7.4.2.2300-RHEL8
VRTSvxfs-7.4.2.2600-RHEL8
VRTSodm-7.4.2.2600-RHEL8
VRTSvcs-7.4.2.2500-RHEL8
VRTSamf-7.4.2.2500-RHEL8
VRTScps-7.4.2.2500-RHEL8
VRTSgab-7.4.2.2500-RHEL8
VRTSllt-7.4.2.2500-RHEL8
VRTSvcsag-7.4.2.2500-RHEL8
VRTSvxfen-7.4.2.2500-RHEL8
VRTSaslapm-7.4.2.3200-RHEL8
VRTSvxvm-7.4.2.3200-RHEL8
VRTScavf-7.4.2.1400-GENERIC
VRTSgms-7.4.2.2600-RHEL8
VRTSfsadv-7.4.2.2600-RHEL8
VRTSsfmh-7.4.2.601_Linux.rpm
VRTSveki-7.4.2.2500-RHEL8
VRTSvcsea-7.4.2.2500-RHEL8
VRTSdbed-7.4.2.2200-RHEL
VRTSperl-5.30.0.2-RHEL8

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.2 * * *
                         * * * Patch 2500 * * *
                         Patch Date: 2022-05-16


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.2 Patch 2500


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSperl
VRTSpython
VRTSsfcpi
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSvcswiz
VRTSveki
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.2
   * InfoScale Enterprise 7.4.2
   * InfoScale Foundation 7.4.2
   * InfoScale Storage 7.4.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSsfcpi-7.4.2.1100
* 4014244 (4014243) The patch installer does not provide the rolling upgrade option to apply a patch.
Patch ID: VRTSperl-5.30.0.2
* 4074770 (4074769) Security vulnerability detected in VRTSperl 5.30.0.0 released with InfoScale 7.4.2.
Patch ID: VRTSdbed-7.4.2.2200
* 4058802 (4073842) Oracle 21c support
Patch ID: VRTSvcsea-7.4.2.2500
* 4064920 (4064917) As a part of Oracle 21c support, fixed the issue where the Oracle agent fails to generate ora_api using the build_oraapi.sh script.
Patch ID: VRTSvcsea-7.4.2.1100
* 4020528 (4001565) On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.
Patch ID: VRTSvcs-7.4.2.2500
* 4059899 (4059897) In some cases, core files get generated after executing the hacf -verify command.
* 4059901 (4059900) On a VCS node, hastop -local is unresponsive.
* 4071007 (4070999) Processes registered under VCS control get killed after running the 'hastop -local -force' command.
Patch ID: VRTSvcs-7.4.2.2100
* 4046515 (4040705) hacli hangs indefinitely when a command exceeds the character limit of 4096.
* 4046520 (4040656) HAD is restarted gracefully on occurrence of an ENOMEM error.
* 4046526 (4043700) While an online operation is in progress and the PreOnline trigger is already executing, multiple PreOnline triggers can be executed on the same or different nodes in the cluster for failover/parallel/hybrid service groups.
Patch ID: VRTSveki-7.4.2.2500
* 4067686 (4012098) The SELinux context is not set properly during package installation when upgrading to InfoScale 7.4.2.
Patch ID: VRTSvxfen-7.4.2.2500
* 4057309 (4057308) After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.
* 4067460 (4004248) vxfend generates core sometimes during vxfen race in CPS based fencing configuration
* 4072340 (4072335) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).
Patch ID: VRTSvxfen-7.4.2.2300
* 4053177 (4053171) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).
Patch ID: VRTSvxfen-7.4.2.2100
* 4046423 (4043619) OCPR failed from SCSI3 fencing to Customized mode
Patch ID: VRTSvxfen-7.4.2.1700
* 4029493 (4029261) An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.
* 4040837 (4037048) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).
Patch ID: VRTSvxfen-7.4.2.1300
* 4021055 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSvxfen-7.4.2.1200
* 4022054 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSvxfen-7.4.2.1100
* 4006982 (3988184) The vxfen process cannot complete due to incomplete vxfentab file.
* 4007375 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4007376 (3996218) In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.
* 4007677 (3970753) Freeing uninitialized/garbage memory causes panic in vxfen.
Patch ID: VRTSamf-7.4.2.2500
* 4072337 (4072335) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).
Patch ID: VRTSamf-7.4.2.2300
* 4053176 (4053171) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).
Patch ID: VRTSamf-7.4.2.2100
* 4046524 (4041596) A cluster node panics when the arguments passed to a process that is registered with AMF exceeds 8K characters.
Patch ID: VRTSamf-7.4.2.1900
* 4042650 (4041703) The system panics when the Mount and the CFSMount agents fail to register with AMF.
Patch ID: VRTSamf-7.4.2.1700
* 4040834 (4037048) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).
Patch ID: VRTSamf-7.4.2.1300
* 4019533 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
* 4021057 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSamf-7.4.2.1200
* 4022053 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSamf-7.4.2.1100
* 4012746 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSgab-7.4.2.2500
* 4057313 (4057312) After an InfoScale upgrade, the updated values of GAB tunables that are used when loading the corresponding modules fail to persist.
* 4072339 (4072335) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).
Patch ID: VRTSgab-7.4.2.2300
* 4053174 (4053171) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).
Patch ID: VRTSgab-7.4.2.2100
* 4046415 (4046413) GAB node count/fencing quorum is not updated properly.
* 4046419 (4046418) GAB startup does not fail even if LLT is not configured.
Patch ID: VRTSgab-7.4.2.1700
* 4040831 (4037048) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).
Patch ID: VRTSgab-7.4.2.1300
* 4021059 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSgab-7.4.2.1200
* 4022052 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSgab-7.4.2.1100
* 4012745 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
* 4013034 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
Patch ID: VRTSllt-7.4.2.2500
* 4031189 (4031186) Added an alert message to show the retransmission percentage in the lltstats.<host> file generated by the getcomms script.
* 4031195 (4031193) Added a new field to show LLT flow control in the lltshow output.
* 4057311 (4057310) After an InfoScale upgrade, the updated values of LLT tunables that are used when loading the corresponding modules fail to persist.
* 4067422 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
* 4067463 (4066062) Node panic is observed while using LLT UDP with multiport enabled.
* 4067466 (4067465) Enhanced LLT internal skb allocation to allocate from vmalloc.
* 4072338 (4072335) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).
Patch ID: VRTSllt-7.4.2.2300
* 4053173 (4053171) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).
Patch ID: VRTSllt-7.4.2.2100
* 4039475 (4045607) Performance improvement of the UDP multiport feature of LLT on 1500 MTU-based networks.
* 4046200 (4046199) LLT over UDP configuration now accepts any link tag name.
* 4046420 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
Patch ID: VRTSllt-7.4.2.1700
* 4030882 (4029253) LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.
* 4038101 (4034933) After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."
* 4039694 (4037048) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).
Patch ID: VRTSllt-7.4.2.1300
* 4019535 (4018581) The LLT module fails to start and the system log messages indicate missing IP address.
* 4021058 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSllt-7.4.2.1200
* 4022049 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSllt-7.4.2.1100
* 4012744 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
* 4013036 (3985775) Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.
Patch ID: VRTSsfmh-7.4.2.601
* 4062783 (4062782) VIOM VRTSsfmh package on Linux to fix dclid/vxlist issue with InfoScale VxVM version 7.4.1.3104
* 4062795 (4062794) Random services fail when using Ansible on RHEL 8.4 with 7.4.1 and the latest patches.
* 4071101 (4071100) The After dependency on autofs in xprtld.service causes ordering cycles, where systemd randomly deletes a service caught up in the cycle.
Patch ID: VRTSfsadv-7.4.2.2600
* 4070366 (4070367) After an upgrade, the /var/VRTS/fsadv directory gets deleted.
* 4071090 (4040281) Security vulnerabilities exist in the OpenSSL, Expat, and SQLite third-party components used by VxFS.
Patch ID: VRTSgms-7.4.2.2600
* 4057424 (4057176) Rebooting the system results in emergency mode due to corruption of module dependency files.
* 4061646 (4061644) GMS failed to start after the kernel upgrade.
Patch ID: VRTSgms-7.4.2.1100
* 4022069 (4022067) Unable to load the vxgms module.
Patch ID: VRTSodm-7.4.2.2600
* 4057429 (4056673) Rebooting the system results in emergency mode due to corruption of module dependency files, caused by an incorrect vxgms dependency in the odm service file.
* 4060584 (3868609) High CPU usage by vxfs thread.
Patch ID: VRTSodm-7.4.2.2300
* 3930708 (3897161) Oracle Database on Veritas filesystem with Veritas ODM library has high log file sync wait time.
* 4052766 (4052885) ODM support for RHEL 8.5.
Patch ID: VRTSodm-7.4.2.2200
* 4049440 (4049438) VRTSodm driver will not load with 7.4.2.2200 VRTSvxfs patch.
Patch ID: VRTSvxfs-7.4.2.2600
* 4015834 (3988752) Use the ldi_strategy() routine instead of bdev_strategy() for I/Os on Solaris.
* 4040612 (4033664) Multiple different issues occur with hardlink replication using VFR.
* 4040618 (4040617) Veritas File Replicator does not perform as expected.
* 4060549 (4047921) Replication jobs hang when pause/resume operations are performed repeatedly.
* 4060566 (4052449) Cluster goes into an 'unresponsive' mode while invalidating pages due to duplicate page entries in the iowr structure.
* 4060585 (4042925) Intermittent performance issues occur with commands like df and ls.
* 4060805 (4042254) A new feature has been added in vxupgrade which fails the disk-layout upgrade if sufficient space is not available in the file system.
* 4061203 (4005620) The internal counter of inodes from the Inode Allocation Unit (IAU) can be negative if the IAU is marked bad.
* 4061527 (4054386) If the systemd service fails to load the vxfs module, the service still shows its status as active instead of failed.
Patch ID: VRTSvxfs-7.4.2.2300
* 4018174 (3941620) Memory starvation during heavy write activity.
* 4043042 (3990168) Accommodation of new interpretation of error from bio structure in linux kernel greater than 4.12.14
* 4052763 (4052883) VxFS support for RHEL 8.5.
Patch ID: VRTSvxfs-7.4.2.2200
* 4013420 (4013139) The abort operation fails on an ongoing online migration from the native file system to VxFS on RHEL 8.x systems.
* 4040238 (4035040) The vfradmin stats command failed to show all the fields in the command output when a job was paused and resumed.
* 4040608 (4008616) The fsck command hangs.
* 4042686 (4042684) ODM resize fails for size 8192.
* 4044184 (3993140) Compclock was not giving accurate results.
* 4046265 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4046266 (4043084) Panic in vx_cbdnlc_lookup.
* 4046267 (4034910) Asynchronous access/update of the global list large_dirinfo can corrupt its values in multi-threaded execution.
* 4046271 (3993822) fsck stops running on a file system.
* 4046272 (4017104) Deleting a lot of files can cause resource starvation, causing panic or momentary hangs.
* 4046829 (3993943) The fsck utility dumps core due to a segmentation fault in get_dotdotlst().
* 4047568 (4046169) On RHEL8, while moving a directory from one file system (ext4 or VxFS) to a migration VxFS, the migration can fail and the file system gets disabled.
* 4049091 (4035057) On RHEL8, I/Os done on a file system while another FS-to-VxFS migration is in progress can cause a panic.
* 4049097 (4049096) Dalloc changes ctime in the background during extent allocation.
Patch ID: VRTSvxvm-7.4.2.3200
* 4011971 (3991668) In a Veritas Volume Replicator (VVR) configuration where secondary logging is enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.
* 4013169 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4037288 (4034857) VxVM support on SLES 15 SP2
* 4054311 (4040701) Some warnings are observed while installing vxvm package.
* 4056919 (4056917) Import of disk group in Flexible Storage Sharing (FSS) with missing disks can lead to data corruption.
* 4058873 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.
* 4060839 (3975667) Softlock in vol_ioship_sender kernel thread
* 4060962 (3915202) Reporting repeated disk failures & DCPA events for other internal disks
* 4060966 (3959716) System may panic with sync replication with VVR configuration, when the RVG is in DCM mode.
* 4061004 (3993242) The vxsnap prepare command sometimes fails when run on a vset.
* 4061036 (4031064) Master switch operation hangs in a VVR secondary environment.
* 4061055 (3999073) The file system gets corrupted when the cfsmount group goes into the offline state.
* 4061057 (3931583) Node may panic while unloading the vxio module due to race condition.
* 4061298 (3982103) I/O hang is observed in VVR.
* 4061317 (3925277) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.
* 4061509 (4043337) logging fixes for VVR
* 4062461 (4066785) Unable to import a disk group even though replicated disks are in SPLIT mode.
* 4062577 (4062576) hastop -local never finishes on RHEL8.4 and RHEL8.5 servers with the latest minor kernels due to a hang in the vxdg deport command.
* 4062746 (3992053) Data corruption may happen with layered volumes due to some data not being re-synced while attaching a plex.
* 4062747 (3943707) vxconfigd reconfig hangs when joining a cluster.
* 4062751 (3989185) In a Veritas Volume Replicator (VVR) environment, the vxrecover command can hang.
* 4062755 (3978453) Reconfig hang during master takeover
* 4063374 (4005121) Application IOPS drop in DCM mode with DCO-integrated DCM
* 4064523 (4049082) I/O read error is displayed when a remote FSS node is rebooting.
* 4066930 (3951527) Data loss on the DR site is seen while upgrading from InfoScale 7.3.1 or earlier to 7.4.x or later versions.
* 4067706 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4067710 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4067712 (3868140) VVR primary site node might panic if the rlink disconnects while some data is getting replicated to secondary.
* 4067713 (3997531) VVR replication fails to start because vxnetd threads are not running.
* 4067715 (4008740) Access to freed memory
* 4067717 (4009151) Auto-import of diskgroup on system reboot fails with error 'Disk for diskgroup not found'.
* 4067914 (4037757) Add a tunable to control auto start VVR services on boot up.
* 4067915 (4059134) Resync takes too long on RAID-5 volumes.
* 4069522 (4043276) vxattachd is onlining previously offlined disks.
* 4069523 (4056751) Importing a read-only cloned disk corrupts the private region.
* 4069524 (4056954) vradmin addsec fails when over-the-wire encryption is enabled.
* 4070099 (3159650) Implemented vol_vvr_use_nat tunable support for vxtune.
* 4070253 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED flag on the device instead of using exclude and include commands.
* 4071131 (4071605) A security vulnerability exists in the third-party component libxml2.
* 4072874 (4046786) FS becomes NOT MOUNTED after power loss/power-on on all nodes.
* 4070186 (4041822) In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.
Patch ID: VRTSvxvm-7.4.2.2400
* 4018181 (3995474) VxVM sub-disks IO error occurs unexpectedly on SLES12SP3.
* 4051701 (4031597) vradmind generates a core dump in __strncpy_sse2_unaligned.
* 4051702 (4019182) In case of a VxDMP configuration, an InfoScale server panics when applying a patch.
* 4051705 (4049371) DMP unable to display all WWN details when running "vxdmpadm getctlr all".
* 4051706 (4046007) disk private region gets corrupted after cluster name change in FSS environment.
* 4053228 (4053230) VxVM support for RHEL 8.5
* 4055211 (4052191) Unexpected scripts or commands got launched from the vxvm-configure script because of an incorrect comment format.
Patch ID: VRTSvxvm-7.4.2.2200
* 4018173 (3852146) Shared DiskGroup (DG) fails to import when the "-c" and "-o noreonline" options are specified together.
* 4018178 (3906534) After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device.
* 4031342 (4031452) vxesd core dump in esd_write_fc()
* 4037283 (4021301) Data corruption issue observed in VxVM on RHEL8.
* 4042038 (4040897) Add support for HPE MSA 2060 arrays in the current ASL.
* 4046906 (3956607) vxdisk reclaim dumps core.
* 4046907 (4041001) In VxVM, the system hangs when some nodes are rebooted.
* 4046908 (4038865) System panics in the vxdmp module in the IRQ stack.
* 4047588 (4044072) I/Os fail for NVMe disks with 4K block size on the RHEL 8.4 kernel.
* 4047590 (4045501) The VRTSvxvm and the VRTSaslapm packages fail to install on Centos 8.4 systems.
* 4047592 (3992040) bi_error - bi_status conversion map added for proper interpretation of errors at FS side.
* 4047695 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED flag on the device instead of using the exclude/include commands.
* 4047722 (4023390) vxconfigd keeps dumping core due to an invalid private region offset on a disk.
* 4049268 (4044583) A system goes into the maintenance mode when DMP is enabled to manage native devices.
Patch ID: VRTSvxvm-7.4.2.1900
* 4020207 (4018086) System hang was observed when the RVG was in DCM resync with SmartMove ON.
* 4039510 (4037915) VxVM 7.4.1 support for RHEL 8.4 compilation errors.
* 4039511 (4037914) BUG: unable to handle kernel NULL pointer dereference.
* 4039512 (4017334) A vxio stack trace warning message, kmsg_mblk_to_msg, can be seen in the system log.
* 4039517 (4012763) IO hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.
Patch ID: VRTSvxvm-7.4.2.1500
* 4018182 (4008664) System panic occurs when signaling a vxlogger daemon that has ended.
* 4020207 (4018086) System hang was observed when the RVG was in DCM resync with SmartMove ON.
* 4021238 (4008075) Observed with ASL changes for NVMe; this issue was observed in a reboot scenario, where the machine panicked on every reboot, in a loop.
* 4021240 (4010612) This issue was observed for NVMe and SSD devices, where every disk has a separate enclosure, like nvme0, nvme1, and so on, meaning every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
* 4021346 (4010207) System panicked due to a hard lockup caused by a spinlock not being released properly during vxstat collection.
* 4021359 (4010040) A security issue occurs during Volume Manager configuration.
* 4021366 (4008741) VxVM device files are not correctly labeled to prevent unauthorized modification - device_t.
* 4021428 (4020166) VxVM support on RHEL8 Update 3.
* 4021748 (4020260) Failed to activate/set the tunable dmp native support for CentOS 8.
Patch ID: VRTSvxvm-7.4.2.1400
* 4018182 (4008664) System panic occurs when signaling a vxlogger daemon that has ended.
* 4020207 (4018086) System hang was observed when the RVG was in DCM resync with SmartMove ON.
* 4021346 (4010207) System panicked due to a hard lockup caused by a spinlock not being released properly during vxstat collection.
* 4021428 (4020166) VxVM support on RHEL8 Update 3.
* 4021748 (4020260) Failed to activate/set the tunable dmp native support for CentOS 8.
Patch ID: VRTSvxvm-7.4.2.1300
* 4008606 (4004455) Instant restore failed for a snapshot created on older version DG.
* 4010892 (4009107) CA chain certificate verification fails in SSL context.
* 4011866 (3976678) vxvm-recover:  cat: write error: Broken pipe error encountered in syslog.
* 4011971 (3991668) Veritas Volume Replicator (VVR) configured with secondary logging reports data inconsistency when the "No IBC message arrived" error is hit.
* 4012485 (4000387) VxVM support on RHEL 8.2
* 4012848 (4011394) Performance enhancement for cloud tiering.
* 4013155 (4010458) In VVR (Veritas Volume replicator), the rlink might inconsistently disconnect due to unexpected transactions.
* 4013169 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4013718 (4008942) The Docker InfoScale plugin fails to unmount the file system if the cache object is full.
Patch ID: VRTSvlic-4.01.742.300
* 4049416 (4049416) Migrate Telemetry Collector from Java to Python
Patch ID: VRTSpython-3.7.4.35
* 4049693 (4049692) VRTSpython package has been updated with more python modules to support Licensing component.
Patch ID: VRTSdbac-7.4.2.2300
* 4053178 (4053171) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).
Patch ID: VRTSdbac-7.4.2.1400
* 4037952 (4037048) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).
Patch ID: VRTSdbac-7.4.2.1200
* 4022056 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSdbac-7.4.2.1100
* 4012751 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTScps-7.4.2.2500
* 4054435 (4018218) Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2
* 4067464 (4056666) The Error writing to database message may intermittently appear in syslogs on CP servers.
Patch ID: VRTScps-7.4.2.1600
* 4038101 (4034933) After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."
Patch ID: VRTSvcsag-7.4.2.2500
* 4061462 (4044567) The HostMonitor agent faults while logging the memory usage of a system.
Patch ID: VRTScavf-7.4.2.1400
* 4054857 (4035066) Improved logging capabilities in case of monitor and EP timeout events.
Patch ID: VRTScavf-7.4.2.1300
* 4007226 (4007196) Potential fsck flag being set during cluster shutdown
Patch ID: VRTSvcswiz-7.4.2.2100
* 4049572 (4049573) Veritas High Availability Configuration Wizard (HA-Plugin) is not supported on the VMware vCenter HTML-based UI.
Patch ID: VRTSglm-7.4.2.2100
* 4037622 (4037621) GLM module failed to load on RHEL8.4
Patch ID: VRTSglm-7.4.2.1300
* 4008072 (4008071) Rebooting the system results in emergency mode due to corruption of module dependency files.
* 4012063 (4001382) GLM module failed to load on RHEL8.2


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSsfcpi-7.4.2.1100

* 4014244 (Tracking ID: 4014243)

SYMPTOM:
The patch installer does not provide the rolling upgrade option to apply a patch.

DESCRIPTION:
Before installing a patch, the patch installer stops the InfoScale services on all the cluster nodes, and then starts them again afterwards. Thus, a full patch upgrade involves cluster downtime. The patch installer should also allow installation of a patch by using the rolling upgrade method. In a rolling upgrade, the patch is installed on one node at a time, thereby avoiding cluster downtime.

RESOLUTION:
The patch installer is enhanced to support installation of a patch using the rolling upgrade method as well.
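
For reference, the rolling upgrade method can be selected through the installer's rolling upgrade option; the script name below is illustrative and depends on the downloaded patch bundle:

    # ./installVRTSinfoscale742P2500 -rolling_upgrade

With this option, the installer applies the patch to the cluster nodes in phases instead of stopping the InfoScale services on all nodes at once.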

Patch ID: VRTSperl-5.30.0.2

* 4074770 (Tracking ID: 4074769)

SYMPTOM:
Security vulnerability detected in VRTSperl 5.30.0.0 released with InfoScale 7.4.2.

DESCRIPTION:
Security vulnerability detected in the Net::Netmask module.

RESOLUTION:
Upgraded the Net::Netmask module and re-created the VRTSperl 5.30.0.2 version to fix the vulnerability.

Patch ID: VRTSdbed-7.4.2.2200

* 4058802 (Tracking ID: 4073842)

SYMPTOM:
Oracle 21c is not supported with previous releases.

DESCRIPTION:
Oracle 21c needs to be supported with SFAE.

RESOLUTION:
Changes have been made to support Oracle 21c with SFAE.

Patch ID: VRTSvcsea-7.4.2.2500

* 4064920 (Tracking ID: 4064917)

SYMPTOM:
Oracle agent fails to generate ora_api (which is used for Intentional Offline functionality of Oracle agent) using build_oraapi.sh script for Oracle 21c.

DESCRIPTION:
The build_oraapi.sh script could not connect to the library named libdbtools21.a, because on the new Oracle 21c environment a generic library is present instead, i.e. '$ORACLE_HOME/rdbms/lib/libdbtools.a'.

RESOLUTION:
On an Oracle 21c database environment, this script will pick the generic library; on older database environments, it will pick the database-version-specific library.

Patch ID: VRTSvcsea-7.4.2.1100

* 4020528 (Tracking ID: 4001565)

SYMPTOM:
On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.

DESCRIPTION:
On Solaris 11.4, when Oracle processes stop, IMF provides notification to the Oracle agent, but the monitor is not scheduled. As a result, the agent fails intelligent monitoring.

RESOLUTION:
Oracle agent now provides notifications when Oracle processes stop.

Patch ID: VRTSvcs-7.4.2.2500

* 4059899 (Tracking ID: 4059897)

SYMPTOM:
Executing the hacf -verify command generates core files if main.cf file contains non-ASCII character.

DESCRIPTION:
When main.cf contains a non-ASCII character, in some cases core files get generated for 'hacf -verify'. 
An example snippet of main.cf :
group test1_fail (
        SystemList = { iar730-05vm21 = 0, iar730-05vm23 = 1 }
        %AutoStartList = { iar730-05vm21, iar730-05vm23 }
        )

Taking the length of a NULL buffer causes a segmentation fault.

RESOLUTION:
The buffer is now checked for NULL before its length is computed.

* 4059901 (Tracking ID: 4059900)

SYMPTOM:
On a VCS node, hastop -local is unresponsive.

DESCRIPTION:
When a resource level online is initiated, VCS does not set any flags on a service group state to mark the group as STARTING. When a VCS stop is initiated on a system, VCS checks all the groups which are either ONLINE or in the process of coming ONLINE. Thus, even though the resources were being brought online, there was no such indication set on the group level. VCS does not mark the CVM group for offline.

The group that comes ONLINE after VCS entered a LEAVING state is missed for offlining, which causes hastop -local to be unresponsive.

RESOLUTION:
Resource active state is checked to determine if a group is in the process of coming ONLINE.

* 4071007 (Tracking ID: 4070999)

SYMPTOM:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

DESCRIPTION:
Processes registered under VCS control get killed after running the 'hastop -local -force' command.

RESOLUTION:
'KillMode' value changed from 'control-group' to 'process'.
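
For context, KillMode is a standard systemd service directive; a minimal sketch of the changed setting in the relevant VCS unit file (the exact unit name is not stated here) would be:

    [Service]
    KillMode=process

With KillMode=process, systemd kills only the main process of the unit on stop, rather than every process in the unit's control group, so processes registered under VCS control survive 'hastop -local -force'.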

Patch ID: VRTSvcs-7.4.2.2100

* 4046515 (Tracking ID: 4040705)

SYMPTOM:
hacli hangs indefinitely when command exceeds character limit of 4096.

DESCRIPTION:
hacli hangs indefinitely when the '-cmd' option value exceeds the 4096-character limit. Instead of returning a proper error message, hacli waits indefinitely for a reply from the VCS engine.

RESOLUTION:
Increased the character limit of the hacli '-cmd' option value to 7680, and added validations for the different hacli options. When the '-cmd' option value exceeds this new limit, hacli now returns a proper error message instead of hanging.

* 4046520 (Tracking ID: 4040656)

SYMPTOM:
As a result of an ENOMEM error, HAD restarts with the '-restart' option.

DESCRIPTION:
When an ENOMEM error occurs, HAD retries up to a maximum limit; if the error still persists, HAD exits. The hashadow daemon then restarts HAD with the '-restart' option. This prevents AutoStart of failover service groups in the cluster, because one of the nodes is considered to be in restarting mode.

RESOLUTION:
On an ENOMEM error, HAD now exits gracefully and the hashadow daemon restarts HAD without the '-restart' option, so the node is not considered restarted and AutoStart of failover service groups is triggered.

* 4046526 (Tracking ID: 4043700)

SYMPTOM:
While an online operation is in progress and the PreOnline trigger is already executing, multiple PreOnline triggers can be executed on the same or different nodes in the cluster for failover/parallel/hybrid service groups.

DESCRIPTION:
In-progress execution of the PreOnline trigger was not accounted for. Thus, subsequent online operations could be accepted while a PreOnline trigger was already executing, and hence multiple PreOnline trigger instances were executed.

RESOLUTION:
While validating an online operation, in-progress PreOnline triggers are now also considered, and subsequent online operations are rejected. This fix ensures only one execution of the PreOnline trigger for failover groups.

Patch ID: VRTSveki-7.4.2.2500

* 4067686 (Tracking ID: 4012098)

SYMPTOM:
The SELinux context is not set properly during package installation when upgrading to InfoScale 7.4.2, impacting the loading of InfoScale modules and package installation.

DESCRIPTION:
The issue requires a lot of manual intervention to set the SELinux context for all the InfoScale modules. Also, the installation of the VxVM package is impacted because the VEKI module is not loaded.

RESOLUTION:
Code changes have been made to set the SELinux context for VEKI.

Patch ID: VRTSvxfen-7.4.2.2500

* 4057309 (Tracking ID: 4057308)

SYMPTOM:
After an InfoScale upgrade, the updated values of vxfen tunables that are used when loading the corresponding module fail to persist.

DESCRIPTION:
When the value of a tunable in /etc/sysconfig/vxfen is changed before an RPM upgrade, the change does not persist after the upgrade, and the tunable gets reset to the default value.

RESOLUTION:
The vxfen module is updated so that its existing tunable values in /etc/sysconfig/vxfen can be retained even after an RPM upgrade.
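
As an illustration, a value changed in /etc/sysconfig/vxfen before the upgrade, for example:

    VXFEN_START=0

is now retained across the RPM upgrade instead of being overwritten with the default.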

* 4067460 (Tracking ID: 4004248)

SYMPTOM:
The vxfend process segfaults and dumps core.

DESCRIPTION:
During a fencing race, vxfend sometimes crashes and generates a core dump.

RESOLUTION:
vxfend internally uses fork and exec to execute subtasks. The new child process was using the same file descriptors for logging purposes. Simultaneous reads of the same file through a single file descriptor resulted in incorrect reads, and hence the process crash and core dump. This fix creates a new file descriptor for the child process to fix the crash.

* 4072340 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSvxfen-7.4.2.2300

* 4053177 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSvxfen-7.4.2.2100

* 4046423 (Tracking ID: 4043619)

SYMPTOM:
OCPR failed from SCSI3 fencing to Customized mode

DESCRIPTION:
Online Coordination Point Replacement (OCPR) was broken for SCSI3 to Customized mode based fencing. This was due to a regression caused by a change in the vxfend invocation.

RESOLUTION:
OCPR from SCSI3 to Customized mode works again with this fix.

Patch ID: VRTSvxfen-7.4.2.1700

* 4029493 (Tracking ID: 4029261)

SYMPTOM:
An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.

DESCRIPTION:
If a cluster node receives a RECONFIG message while a shutdown or a restart operation is in progress, it may participate in the fencing race. The node may also win the race and then proceed to shut down. If this situation occurs, the fencing module panics the nodes that lost the race, which may cause the entire cluster to go down.

RESOLUTION:
This hotfix updates the fencing module so that it stops a cluster node from joining a race, if it receives a RECONFIG message while a shutdown or a restart operation is in progress.

* 4040837 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSvxfen-7.4.2.1300

* 4021055 (Tracking ID: 4010237)

SYMPTOM:
On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.

DESCRIPTION:
On RHEL, device files should have correct SELinux labels to avoid unauthorized access.

RESOLUTION:
Code changes done to set correct SELinux labels.

Patch ID: VRTSvxfen-7.4.2.1200

* 4022054 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSvxfen-7.4.2.1100

* 4006982 (Tracking ID: 3988184)

SYMPTOM:
The vxfen process cannot complete due to incomplete vxfentab file.

DESCRIPTION:
When I/O fencing starts, the vxfen startup script creates the /etc/vxfentab file on each node. If the coordination disk discovery is slow, the vxfen startup script fails to include all the coordination points in the vxfentab file. As a result, the vxfen startup script gets stuck in a loop.

RESOLUTION:
The vxfen startup process is modified to exit from the loop if it gets stuck while running 'vxfenconfig -c'. On exiting from the loop, systemctl starts vxfen again and tries to use the updated vxfentab file.

* 4007375 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates this /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
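
For example, to allow up to five minutes for the VxFEN disk group discovery, add the following entry to /etc/vxfenmode:

    getdisks_timeout=300

If no entry is present, the default value of 600 seconds is used.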

* 4007376 (Tracking ID: 3996218)

SYMPTOM:
In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.

DESCRIPTION:
When you configure fencing in the customized mode and run the 'vxfenconfig -c' command, the vxfenconfig utility reports the 'VXFEN ERROR V-11-1-6 vxfen already configured...' error. Moreover, it also creates a new vxfend process even if VxFen is already configured. Such redundant processes may impact the performance of the system.

RESOLUTION:
The vxfenconfig utility is modified so that it does not create a new vxfend process when VxFen is already configured.

* 4007677 (Tracking ID: 3970753)

SYMPTOM:
Freeing uninitialized/garbage memory causes panic in vxfen.

DESCRIPTION:
Freeing uninitialized/garbage memory causes panic in vxfen.

RESOLUTION:
Veritas has modified the VxFen kernel module to fix the issue by initializing the object before attempting to free it.

Patch ID: VRTSamf-7.4.2.2500

* 4072337 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSamf-7.4.2.2300

* 4053176 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSamf-7.4.2.2100

* 4046524 (Tracking ID: 4041596)

SYMPTOM:
A cluster node panics when the arguments passed to a process that is registered with AMF exceeds 8K characters.

DESCRIPTION:
This issue occurs due to improper parsing and handling of argument lists that are passed to processes registered with AMF.

RESOLUTION:
AMF is updated to correctly parse and handle argument lists for processes.

Patch ID: VRTSamf-7.4.2.1900

* 4042650 (Tracking ID: 4041703)

SYMPTOM:
The system panics when the Mount and the CFSMount agents fail to register with AMF.

DESCRIPTION:
This issue occurs after an operating system upgrade. The agents fail to register with AMF, which leads to a system panic.

RESOLUTION:
Added support for cursor in mount structures starting with RHEL 8.4.

Patch ID: VRTSamf-7.4.2.1700

* 4040834 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSamf-7.4.2.1300

* 4019533 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

* 4021057 (Tracking ID: 4010237)

SYMPTOM:
On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.

DESCRIPTION:
On RHEL, device files should have correct SELinux labels to avoid unauthorized access.

RESOLUTION:
Code changes done to set correct SELinux labels.

Patch ID: VRTSamf-7.4.2.1200

* 4022053 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSamf-7.4.2.1100

* 4012746 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSgab-7.4.2.2500

* 4057313 (Tracking ID: 4057312)

SYMPTOM:
After an InfoScale upgrade, the updated values of GAB tunables that are used when loading the corresponding modules fail to persist.

DESCRIPTION:
If the value of a tunable in /etc/sysconfig/gab is changed before an RPM upgrade, the change does not persist after the upgrade, and the tunable gets reset to the default value.

RESOLUTION:
The GAB module is updated so that its tunable parameters in /etc/sysconfig/gab can retain the existing values even after an RPM upgrade.

* 4072339 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSgab-7.4.2.2300

* 4053174 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSgab-7.4.2.2100

* 4046415 (Tracking ID: 4046413)

SYMPTOM:
After node addition/deletion, the GAB node count is not updated properly.

DESCRIPTION:
The 'gabconfig -m <node count>' command displays an error despite a correct node count being provided.

RESOLUTION:
There was a parsing issue, which has been resolved by this fix.

* 4046419 (Tracking ID: 4046418)

SYMPTOM:
GAB startup does not fail even if LLT is not configured.

DESCRIPTION:
Since the GAB service depends on the LLT service, GAB should not start if the LLT service fails to start or is not configured.

RESOLUTION:
This fix prevents GAB from starting if LLT is not configured.

Patch ID: VRTSgab-7.4.2.1700

* 4040831 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSgab-7.4.2.1300

* 4021059 (Tracking ID: 4010237)

SYMPTOM:
On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.

DESCRIPTION:
On RHEL, device files should have correct SELinux labels to avoid unauthorized access.

RESOLUTION:
Code changes done to set correct SELinux labels.

Patch ID: VRTSgab-7.4.2.1200

* 4022052 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSgab-7.4.2.1100

* 4012745 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

* 4013034 (Tracking ID: 4011683)

SYMPTOM:
The GAB module failed to start and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, all their major device numbers get passed on to the mknod command. Instead, the command must contain the major device number for the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.

Patch ID: VRTSllt-7.4.2.2500

* 4031189 (Tracking ID: 4031186)

SYMPTOM:
The LLT retransmission percentage is more than 0.1%.

DESCRIPTION:
The retransmission percentage should never exceed 0.1% in a stable cluster setup; if it is greater than 0.1%, it needs to be investigated. The retransmission percentage is added to the lltstats.<host> file, which is generated by the /opt/VRTSgab/getcomms script.

RESOLUTION:
Enhanced the getcomms script to display an alert message if the retransmission percentage exceeds 0.1%.

* 4031195 (Tracking ID: 4031193)

SYMPTOM:
None

DESCRIPTION:
In "flcnttime" field, the lltshow -n command's output will include the duration of LLT flow control set for the particular port.

RESOLUTION:
To show LLT flow control duration, new "flcnttime" field is added.

* 4057311 (Tracking ID: 4057310)

SYMPTOM:
After an InfoScale upgrade, the updated values of LLT tunables that are used when loading the corresponding modules fail to persist.

DESCRIPTION:
If the value of a tunable in /etc/sysconfig/llt is changed before an RPM upgrade, the change does not persist after the upgrade, and the tunable gets reset to the default value.

RESOLUTION:
The LLT module is updated so that its tunable parameters in /etc/sysconfig/llt can retain the existing values even after an RPM upgrade.

* 4067422 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, it may lead to a core dump by the lltconfig process.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.
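
For reference, set-verbose is a directive placed in /etc/llttab alongside the other LLT settings; a minimal illustrative snippet (the node and cluster values are placeholders) is:

    set-node node01
    set-cluster 100
    set-verbose 1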

* 4067463 (Tracking ID: 4066062)

SYMPTOM:
Node panic

DESCRIPTION:
A node panic is observed in an LLT UDP multiport configuration with the VX ioship stack.

RESOLUTION:
When LLT receives an acknowledgement, it tried to free the packet and the corresponding client frags blindly, without checking the client status. If the client is unregistered, the free functions of the frags are invalid and hence should not be called. LLT now checks the client status before calling them.

* 4067466 (Tracking ID: 4067465)

SYMPTOM:
In systems with high memory usage, the kmem cache allocation for internal skb might fail. In such situations, vmalloc is used.

DESCRIPTION:
If kmem cache allocation fails, the internal skb allocation feature is extended to allocate with vmalloc. Users can also configure the type of allocation scheme: only vmalloc, only kmem cache, or both.

RESOLUTION:
Enhanced the LLT internal skb allocation method.

* 4072338 (Tracking ID: 4072335)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 6 (RHEL8.6).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 5.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 6 (RHEL8.6) is now introduced.

Patch ID: VRTSllt-7.4.2.2300

* 4053173 (Tracking ID: 4053171)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 5 (RHEL8.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 5 (RHEL8.5) is now introduced.

Patch ID: VRTSllt-7.4.2.2100

* 4039475 (Tracking ID: 4045607)

SYMPTOM:
LLT over UDP support for transmission and reception of data over 1500 MTU networks.

DESCRIPTION:
The UDP multiport feature in LLT performs poorly in case of 1500 MTU-based networks. Data packets larger than 1500 bytes cannot be transmitted over 1500 MTU-based networks, so the IP layer fragments them appropriately for transmission. The loss of a single fragment from the set leads to a total packet (I/O) loss. LLT then retransmits the same packet repeatedly until the transmission is successful. Eventually, you may encounter issues with the Flexible Storage Sharing (FSS) feature. For example, the vxprint process or the disk group creation process may stop responding, or the I/O-shipping performance may degrade severely.

RESOLUTION:
The UDP multiport feature of LLT is updated to fragment the packets such that they can be accommodated in the 1500-byte network frame. The fragments are rearranged on the receiving node at the LLT layer. Thus, LLT can track every fragment to the destination, and in case of transmission failures, retransmit the lost fragments based on the current RTT time.

* 4046200 (Tracking ID: 4046199)

SYMPTOM:
LLT over UDP configuration now accepts any link tag name.

DESCRIPTION:
Previously, for an LLT over UDP configuration, the tag field in the link definition had to be the Ethernet interface name. With this fix, any string can be used as the tag name.

RESOLUTION:
Any string can be used as the link tag name with this fix.
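
As an illustrative sketch of an LLT over UDP link definition in /etc/llttab (the port and IP addresses are placeholders), the first field after 'link' is the tag, which may now be any string rather than the Ethernet interface name:

    link net-a udp - udp 50000 - 192.168.10.1 192.168.10.255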

* 4046420 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

Patch ID: VRTSllt-7.4.2.1700

* 4030882 (Tracking ID: 4029253)

SYMPTOM:
LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.

DESCRIPTION:
On receiving the buffer advertisement after an RDMA write, LLT also waits for the hardware/OS ACK for that RDMA write. Only after the ACK is received, LLT sets the state of the buffers to free (usable). If the connection between the cluster nodes breaks after LLT receives the buffer advertisement but before receiving the ACK, the local node generates a NAK. LLT does not acknowledge this NAK, and so, that specific buffer slot remains unusable. Over time, the number of buffer slots in the unusable state increases, which sets the flow control for the LLT client. This condition leads to an FSS I/O hang.

RESOLUTION:
This hotfix updates the LLT module to mark a buffer slot as free (usable) even when a NAK is received from the previous RDMA write.

* 4038101 (Tracking ID: 4034933)

SYMPTOM:
After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."

DESCRIPTION:
This issue occurs due to unexpected locking of the CP server database that is related to the stale key detection feature.

RESOLUTION:
This hotfix updates the VRTScps RPM so that the unexpected database lock is cleared and the nodes can be updated successfully.

* 4039694 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSllt-7.4.2.1300

* 4019535 (Tracking ID: 4018581)

SYMPTOM:
The LLT module fails to start and the system log messages indicate missing IP address.

DESCRIPTION:
When only the low priority LLT links are configured over UDP, UDPBurst mode must be disabled. UDPBurst mode must only be enabled when the high priority LLT links are configured over UDP. If the UDPBurst mode gets enabled while configuring the low priority links, the LLT module fails to start and logs the following error: "V-14-2-15795 missing ip address / V-14-2-15800 UDPburst:Failed to get link info".

RESOLUTION:
This hotfix updates the LLT module to not enable the UDPBurst mode when only the low priority LLT links are configured over UDP.

* 4021058 (Tracking ID: 4010237)

SYMPTOM:
On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.

DESCRIPTION:
On RHEL, device files should have correct SELinux labels to avoid unauthorized access.

RESOLUTION:
Code changes done to set correct SELinux labels.

Patch ID: VRTSllt-7.4.2.1200

* 4022049 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSllt-7.4.2.1100

* 4012744 (Tracking ID: 4012742)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 2(RHEL8.2) is now introduced.

* 4013036 (Tracking ID: 3985775)

SYMPTOM:
Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.

DESCRIPTION:
LLT heartbeat loss messages can appear in the system log either due to actual heartbeat drops in the network or due to heartbeat packets arriving out of order. In either case, these messages are only informative and do not indicate any issue in the LLT functionality. Sometimes, the system log may get flooded with these messages, which are not useful.

RESOLUTION:
The LLT module is updated to lower the frequency of printing LLT heartbeat loss messages. This is achieved by increasing the number of missed sequential HB packets required to print this informative message.
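
For illustration, the logic amounts to a higher consecutive-miss threshold before logging; the names below are hypothetical, not the actual LLT code.

    /* Log only after N consecutive missed heartbeats from a peer. */
    #define LLT_HB_LOSS_LOG_THRESHOLD  8    /* raised from a lower value */

    if (++peer->missed_hb >= LLT_HB_LOSS_LOG_THRESHOLD) {
        printk(KERN_INFO "LLT INFO: heartbeats lost from node %d\n",
               peer->nodeid);
        peer->missed_hb = 0;                /* re-arm the counter */
    }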

Patch ID: VRTSsfmh-7.4.2.601

* 4062783 (Tracking ID: 4062782)

SYMPTOM:
The vxlist command does not give proper output and throws a message like 'Cannot connect to vxconfigd on host'.

DESCRIPTION:
On an InfoScale server with VxVM version 7.4.1.3104 and VRTSsfmh version 7.4.2, vxlist may not work. In this case, manually upgrade the VRTSsfmh package to 7.4.2.502.

RESOLUTION:
A new dclid plugin that is compatible with VxVM 7.4.1.3104 is included in the VRTSsfmh package.

* 4062795 (Tracking ID: 4062794)

SYMPTOM:
VxVM services fail to start.

DESCRIPTION:
There was a dependency between the VIOM xprtld service and the VxVM boot service. This sometimes caused VxVM services to fail to start properly on the Linux platform.

RESOLUTION:
Removed the dependency between xprtld and VxVM.

* 4071101 (Tracking ID: 4071100)

SYMPTOM:
systemd xprtld.service causes ordering cycles issue with autofs.

DESCRIPTION:
The After= dependency on autofs in xprtld.service causes ordering cycles, in which systemd breaks the cycle by randomly deleting one of the jobs caught in it.

RESOLUTION:
Removed the dependency on autofs.service from xprtld.service.

Patch ID: VRTSfsadv-7.4.2.2600

* 4070366 (Tracking ID: 4070367)

SYMPTOM:
After an upgrade, the "/var/VRTS/fsadv" directory gets deleted.

DESCRIPTION:
"/var/VRTS/fsadv" directory is required to keep the logs which is getting deleted after we are upgrading VRTSfsadv package.

RESOLUTION:
Made necessary code changes to fix the issue.

* 4071090 (Tracking ID: 4040281)

SYMPTOM:
Security vulnerabilities exist in the OpenSSL, Expat, and SQLite third-party components used by VxFS.

DESCRIPTION:
VxFS uses the OpenSSL, Expat, and SQLite third-party components, in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use newer versions of these third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSgms-7.4.2.2600

* 4057424 (Tracking ID: 4057176)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock.
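
A minimal user-space sketch of this serialization technique, assuming a hypothetical lock-file path; the actual packaging scripts are not shown.

    #include <sys/file.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Run depmod under an exclusive advisory lock so that parallel
     * package scripts cannot invoke it concurrently. */
    int run_depmod_locked(void)
    {
        int rc;
        int fd = open("/var/lock/vx_depmod.lock", O_CREAT | O_RDWR, 0644);

        if (fd < 0)
            return -1;
        flock(fd, LOCK_EX);          /* blocks until the peer finishes */
        rc = system("depmod -a");
        flock(fd, LOCK_UN);
        close(fd);
        return rc;
    }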

* 4061646 (Tracking ID: 4061644)

SYMPTOM:
GMS failed to start after the kernel upgrade with the following error:
"modprobe FATAL Module vxgms not found in directory /lib/modules/>"

DESCRIPTION:
Due to a race condition, the GMS module (vxgms.ko) is not copied to the latest kernel location under the /lib/modules directory during reboot.

RESOLUTION:
Added code to copy the module to the appropriate kernel directory.

Patch ID: VRTSgms-7.4.2.1100

* 4022069 (Tracking ID: 4022067)

SYMPTOM:
Unable to load the vxgms module

DESCRIPTION:
Need recompilation of VRTSgms due to recent changes.

RESOLUTION:
Recompiled the VRTSgms module.

Patch ID: VRTSodm-7.4.2.2600

* 4057429 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through file lock. Corrected vxgms dependency in odm service file.

* 4060584 (Tracking ID: 3868609)

SYMPTOM:
While applying Oracle redo logs, a significant increase is observed in the CPU usage by the vxfs thread.

DESCRIPTION:
To avoid memory deadlocks and to track exiting threads with outstanding ODM requests, the kernel's memory management was analysed. While the Oracle threads are being rescheduled, they hold the mmap_sem. The FDD threads keep waiting for mmap_sem to be released, which causes the contention and the high CPU usage.

RESOLUTION:
The bouncing of the spinlock between the CPUs is removed to reduce the CPU spike.

Patch ID: VRTSodm-7.4.2.2300

* 3930708 (Tracking ID: 3897161)

SYMPTOM:
Oracle Database on Veritas filesystem with Veritas ODM library has high
log file sync wait time.

DESCRIPTION:
The ODM_IOP lock is not held for long. So, instead of attempting a trylock and deferring the IO when the trylock fails, it is better to take the regular (non-try) lock and finish the IO in the interrupt context. This is safe on Solaris because this "sleep" lock is actually an adaptive mutex.

RESOLUTION:
odm_iodone now calls ODM_IOP_LOCK() instead of ODM_IOP_TRYLOCK() and finishes the IO. With this fix, no IO is deferred.
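
A before/after sketch of the change, reusing the macro names from this fix; the surrounding structures and the odm_defer_io()/odm_finish_io() helpers are hypothetical.

    /* Before: defer the IO when the trylock fails, adding
     * log-file-sync latency. */
    if (!ODM_IOP_TRYLOCK(iop)) {
        odm_defer_io(iop);
        return;
    }
    odm_finish_io(iop);
    ODM_IOP_UNLOCK(iop);

    /* After: take the sleep lock (an adaptive mutex on Solaris, so
     * acceptable in this context) and always finish the IO here. */
    ODM_IOP_LOCK(iop);
    odm_finish_io(iop);
    ODM_IOP_UNLOCK(iop);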

* 4052766 (Tracking ID: 4052885)

SYMPTOM:
The ODM module fails to load on RHEL 8.5.

DESCRIPTION:
This issue occurs due to changes in the RHEL 8.5 kernel.

RESOLUTION:
The ODM module is updated to accommodate the changes in the kernel and load as expected on RHEL 8.5.

Patch ID: VRTSodm-7.4.2.2200

* 4049440 (Tracking ID: 4049438)

SYMPTOM:
VRTSodm driver will not load with 7.4.2.2200 VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm due to recent changes in VRTSvxfs.

RESOLUTION:
Recompiled the VRTSodm with new changes in VRTSvxfs.

Patch ID: VRTSvxfs-7.4.2.2600

* 4015834 (Tracking ID: 3988752)

SYMPTOM:
The ldi_strategy() routine should be used instead of bdev_strategy() for IOs on Solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and was causing performance issues when used for IOs. Solaris recommends using the LDI framework for all IOs.

RESOLUTION:
Code is modified to use the LDI framework for all IOs on Solaris.

* 4040612 (Tracking ID: 4033664)

SYMPTOM:
Multiple issues occur with hardlink replication using VFR.

DESCRIPTION:
Multiple different issues occur with hardlink replication using Veritas File Replicator (VFR).

RESOLUTION:
VFR is updated to fix issues with hardlink replication in the following cases:
1. Files with multiple links
2. Data inconsistency after hardlink file replication
3. Rename and move operations dumping core in multiple different scenarios
4. WORM feature support

* 4040618 (Tracking ID: 4040617)

SYMPTOM:
Veritas File Replicator does not perform as expected.

DESCRIPTION:
Veritas File Replicator had bottlenecks at the networking layer as well as at the data-transfer level, which caused additional throttling during replication.

RESOLUTION:
Performance optimizations were made in multiple places so that Veritas File Replicator makes proper use of the available resources.

* 4060549 (Tracking ID: 4047921)

SYMPTOM:
A replication job could hang because of a deadlock involving the threads below:

Thread : 1  

#0  0x00007f160581854d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f1605813e9b in _L_lock_883 () from /lib64/libpthread.so.0
#2  0x00007f1605813d68 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000000043be1f in replnet_sess_bulk_free ()
#4  0x000000000043b1e3 in replnet_server_dropchan ()
#5  0x000000000043ca07 in replnet_client_connstate ()
#6  0x00000000004374e3 in replnet_conn_changestate ()
#7  0x0000000000437c18 in replnet_conn_evalpoll ()
#8  0x000000000044ac39 in vxev_loop ()
#9  0x0000000000405ab2 in main ()

Thread 2 :

#0  0x00007f1605815a35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x000000000043902b in replnet_msgq_waitempty ()
#2  0x0000000000439082 in replnet_bulk_recv_func ()
#3  0x00007f1605811ea5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f1603ef29fd in clone () from /lib64/libc.so.6

DESCRIPTION:
When a replication job is paused and resumed in quick succession multiple times, a race condition may lead to a deadlock involving the two threads.

RESOLUTION:
The locking sequence is fixed, and additional holds on resources are added, to avoid the race that leads to the deadlock.

* 4060566 (Tracking ID: 4052449)

SYMPTOM:
The cluster goes into an 'unresponsive' mode while invalidating pages, due to duplicate page entries in the iowr structure.

DESCRIPTION:
While finding pages for invalidation of inodes, VxFS traverses the radix tree under the RCU lock and fills an array in the IO structure with the dirty/writeback pages that need to be invalidated. This lock is efficient for reads but does not protect against the parallel creation/deletion of nodes. Hence, when VxFS finds a page, its consistency is checked through radix_tree_exception()/radix_tree_deref_retry(), and if the check fails, VxFS restarts the page search from the start offset. However, VxFS did not reset the array index, leading to incorrect filling of the IO structure's array and thus duplicate page entries. While trying to destroy these pages, VxFS takes a page lock on each page. Because of the duplicate entries, VxFS tries to take the page lock more than once on the same page, leading to a self-deadlock.

RESOLUTION:
Code is modified to reset the array index correctly in case of failure to find pages.
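
A condensed sketch of the pattern, built on the stock radix-tree helpers named above; the iowr/nr names are hypothetical. The point is that the restart path resets the fill index along with the scan position.

    restart:
        nr = 0;                       /* the reset this fix adds */
        rcu_read_lock();
        radix_tree_for_each_slot(slot, &mapping->i_pages, &iter, start) {
            struct page *page = radix_tree_deref_slot(slot);

            if (radix_tree_exception(page)) {
                if (radix_tree_deref_retry(page)) {
                    rcu_read_unlock();
                    goto restart;     /* restart with nr back at 0 */
                }
                continue;
            }
            iowr->pages[nr++] = page; /* no duplicate entries now */
        }
        rcu_read_unlock();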

* 4060585 (Tracking ID: 4042925)

SYMPTOM:
Intermittent Performance issue on commands like df and ls.

DESCRIPTION:
Commands like "df" "ls" issue stat system call on node to calculate the statistics of the file system. In a CFS, when stat system call is issued, it compiles statistics from all nodes. When multiple df or ls are fired within specified time limit, vxfs is optimized. vxfs returns the cached statistics, instead of recalculating statistics from all nodes. If multiple such commands are fired in succession and one of the old caller of stat system call takes time, this optimization fails and VxFS recompiles statistics from all nodes. This can lead to bad performance of stat system call, leading to unresponsive situations for df, ls commands.

RESOLUTION:
Code is modified to protect last modified time of stat system call with a sleep lock.

* 4060805 (Tracking ID: 4042254)

SYMPTOM:
vxupgrade sets fullfsck flag in the filesystem if it is unable to upgrade the disk layout version because of ENOSPC.

DESCRIPTION:
If the filesystem is 100% full and its disk layout version is upgraded by using vxupgrade, the utility starts the upgrade, later fails with ENOSPC, and ends up setting the fullfsck flag on the filesystem.

RESOLUTION:
Code changes are introduced to first calculate the space required to perform the disk layout upgrade. If the required space is not available, the upgrade fails gracefully without setting the fullfsck flag.
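
A minimal sketch of the precheck, with hypothetical helper names standing in for the actual vxupgrade internals:

    /* Compute up front what the layout upgrade will consume, and bail
     * out cleanly rather than failing mid-way and marking fullfsck. */
    required = vx_upgrade_space_needed(fsp, new_dlv);
    if (vx_free_blocks(fsp) < required)
        return ENOSPC;          /* graceful failure, no fullfsck flag */
    return vx_do_upgrade(fsp, new_dlv);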

* 4061203 (Tracking ID: 4005620)

SYMPTOM:
Inode count maintained in the inode allocation unit (IAU) can be negative when an IAU is marked bad. An error such as the following is logged.

V-2-4: vx_mapbad - vx_inoauchk - /fs1 file system free inode bitmap in au 264 marked bad

Due to the negative inode count, errors like the following might be observed and processes might be stuck at inode allocation with a stack trace as shown.

V-2-14: vx_iget - inode table overflow

	vx_inoauchk 
	vx_inofindau 
	vx_findino 
	vx_ialloc 
	vx_dirmakeinode 
	vx_dircreate 
	vx_dircreate_tran 
	vx_pd_create 
	vx_create1_pd 
	vx_do_create 
	vx_create1 
	vx_create0 
	vx_create 
	vn_open 
	open

DESCRIPTION:
The inode count can be negative if somehow VxFS tries to allocate an inode from an IAU where the counter for regular file and directory inodes is zero. In such a situation, the inode allocation fails and the IAU map is marked bad. But the code tries to further reduce the already-zero counters, resulting in negative counts that can cause a subsequent unresponsive situation.

RESOLUTION:
Code is modified to not reduce the inode counters in the vx_mapbad code path if the result would be negative. A diagnostic message like the following is logged instead:
"vxfs: Error: Incorrect values of ias->ifree and Aus rifree detected."

* 4061527 (Tracking ID: 4054386)

SYMPTOM:
VxFS systemd service may show active status despite the module not being loaded.

DESCRIPTION:
If the systemd service fails to load the vxfs module, the service status still shows as active instead of failed.

RESOLUTION:
The script is modified to show the correct status in case of such failures.

Patch ID: VRTSvxfs-7.4.2.2300

* 4018174 (Tracking ID: 3941620)

SYMPTOM:
High memory usage is seen on Solaris system with delayed allocation enabled.

DESCRIPTION:
With VxFS delayed allocation enabled, in some unnecessary condition, it 
could 
make some put page operations return early before pages flushed and without 
retry later. Then only one vxfs working thread works on page flushing 
through 
delayed allocation flush path. This will result in the dirty page flushing 
much slow, and cause the memory starvation of the system.

RESOLUTION:
A code change has been made so that page flushing works through multiple threads, as normal.

* 4043042 (Tracking ID: 3990168)

SYMPTOM:
The new interpretation of errors from the bio structure in Linux kernels later than 4.12.14 needs to be accommodated.

DESCRIPTION:
The interpretation of errors from the bio structure changed in Linux kernels later than 4.12. VxFS needs to accommodate these changes to correctly interpret errors from the bio structure.

RESOLUTION:
Code is added so that VxFS accommodates the new interpretation of errors from the bio structure in Linux kernels later than 4.12.14.
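
As background, kernels from 4.13 onward report IO errors through bio->bi_status (a blk_status_t) rather than the older bio->bi_error integer. A hedged sketch of a version-aware accessor (not the actual VxFS code):

    #include <linux/bio.h>
    #include <linux/version.h>

    static int vx_bio_get_error(struct bio *bio)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 13, 0)
        return blk_status_to_errno(bio->bi_status);
    #else
        return bio->bi_error;
    #endif
    }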

* 4052763 (Tracking ID: 4052883)

SYMPTOM:
The VxFS module fails to load on RHEL 8.5.

DESCRIPTION:
This issue occurs due to changes in the RHEL 8.5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL 8.5.

Patch ID: VRTSvxfs-7.4.2.2200

* 4013420 (Tracking ID: 4013139)

SYMPTOM:
The abort operation fails on an ongoing online migration from the native file system to VxFS on RHEL 8.x systems.

DESCRIPTION:
The following error messages are logged when the abort operation fails:
umount: /mnt1/lost+found/srcfs: not mounted
UX:vxfs fsmigadm: ERROR: V-3-26835:  umount of source device: /dev/vx/dsk/testdg/vol1 failed, with error: 32

RESOLUTION:
The fsmigadm utility is updated to address the issue with the abort operation on an ongoing online migration.

* 4040238 (Tracking ID: 4035040)

SYMPTOM:
After a replication job is paused and resumed, some fields go missing from the stats command output, and the missing fields never reappear on subsequent runs.

DESCRIPTION:
rs_start for the current stats is initialized to the start time of the replication; the default value of rs_start is zero. The stats output omits some fields when rs_start is zero.

        if (rs->rs_start && dis_type == VX_DIS_CURRENT) {
                if (!rs->rs_done) {
                        diff = rs->rs_update - rs->rs_start;
                }
                else {
                        diff = rs->rs_done - rs->rs_start;
                }

                /*
                 * The unit of time is in seconds, hence
                 * assigning 1 if the amount of data
                 * was too small
                 */

                diff = diff ? diff : 1;
                rate = rs->rs_file_bytes_synced /
                        (diff - rs->rs_paused_duration);
                printf("\t\tTransfer Rate: %s/sec\n", fmt_bytes(h, rate));
        }

In replication, rs_start is initialized to zero and later updated with the start time, but the stats were not saved to disk at that point. This small window leaves a case where, if the replication is paused and started again, rs_start is always seen as zero.

Now, after initializing rs_start, it is written to disk in the same function. In the resume case, if rs_start is found to be zero, it is re-initialized to the current replication start time.

RESOLUTION:
rs_start is now written to disk, and a check is added in the resume path to re-initialize rs_start if it is found to be zero.
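
A minimal sketch of the fix, with hypothetical persist/clock helper names:

    /* At replication start: record and immediately persist rs_start. */
    rs->rs_start = repl_start_time;
    rs_write_to_disk(rs);

    /* On resume: repair a zero rs_start read back from disk. */
    if (rs->rs_start == 0)
        rs->rs_start = repl_start_time;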

* 4040608 (Tracking ID: 4008616)

SYMPTOM:
The fsck command hung.

DESCRIPTION:
fsck got stuck in a deadlock: a thread that marked a buffer as aliased waited on itself for the reference drain, while the get-block code was called with the NOBLOCK flag.

RESOLUTION:
The NOBLOCK flag is now honoured.

* 4042686 (Tracking ID: 4042684)

SYMPTOM:
Command fails to resize the file.

DESCRIPTION:
There is a window in which a parallel thread can clear the IDELXWRI flag when it should not.

RESOLUTION:
The delayed extending write flag is now set again in case a parallel thread has cleared it.

* 4044184 (Tracking ID: 3993140)

SYMPTOM:
Every 60 seconds, compclock lagged approximately 1.44 seconds behind the actual elapsed time.

DESCRIPTION:
Every 60 seconds, compclock lagged approximately 1.44 seconds behind the actual elapsed time.

RESOLUTION:
Adjusted the logic responsible for calculating and updating the compclock timer.

* 4046265 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.

* 4046266 (Tracking ID: 4043084)

SYMPTOM:
Panic in vx_cbdnlc_lookup().

DESCRIPTION:
Panic observed in the following stack trace:
vx_cbdnlc_lookup+000140 ()
vx_int_lookup+0002C0 ()
vx_do_lookup2+000328 ()
vx_do_lookup+0000E0 ()
vx_lookup+0000A0 ()
vnop_lookup+0001D4 (??, ??, ??, ??, ??, ??)
getFullPath+00022C (??, ??, ??, ??)
getPathComponents+0003E8 (??, ??, ??, ??, ??, ??, ??)
svcNameCheck+0002EC (??, ??, ??, ??, ??, ??, ??)
kopen+000180 (??, ??, ??)
syscall+00024C ()

RESOLUTION:
Code changes to handle memory pressure while changing FC connectivity.

* 4046267 (Tracking ID: 4034910)

SYMPTOM:
Garbage values inside global list large_dirinfo.

DESCRIPTION:
Garbage values inside the global list large_dirinfo can lead to fsck failure.

RESOLUTION:
Access to and updates of the global list large_dirinfo are made synchronous throughout the fsck binary, so that garbage values due to the race condition are avoided.

* 4046271 (Tracking ID: 3993822)

SYMPTOM:
Running fsck on a file system dumps core.

DESCRIPTION:
A buffer was marked as busy without taking the buffer lock while getting a buffer from the freelist in one thread, while another thread was accessing this buffer through its local variable.

RESOLUTION:
The buffer is now marked busy within the buffer lock while getting a free buffer.
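
A user-space sketch of the corrected ordering; the structure and helper names are hypothetical:

    #include <pthread.h>

    static pthread_mutex_t freelist_lock = PTHREAD_MUTEX_INITIALIZER;

    struct buf *get_free_buf(void)
    {
        struct buf *bp;

        pthread_mutex_lock(&freelist_lock);
        bp = freelist_remove_head();
        bp->b_flags |= B_BUSY;    /* now set inside the critical section,
                                   * so no other thread can grab bp first */
        pthread_mutex_unlock(&freelist_lock);
        return bp;
    }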

* 4046272 (Tracking ID: 4017104)

SYMPTOM:
Deleting a huge number of inodes can consume a lot of system resources during inactivation, which causes hangs or even a panic.

DESCRIPTION:
Delicache inactivation dumps all the inodes in its inventory for inactivation, all at once. This causes a surge in resource consumption, due to which other processes can starve.

RESOLUTION:
The inode inactivation is now processed gradually.

* 4046829 (Tracking ID: 3993943)

SYMPTOM:
The fsck utility dumped core due to a segmentation fault in get_dotdotlst().

Below is the stack trace of the issue.

get_dotdotlst 
check_dotdot_tbl 
iproc_do_work
start_thread 
clone ()

DESCRIPTION:
Due to a bug in the fsck utility, a coredump was generated while running fsck on the filesystem, and the fsck operation aborted midway.

RESOLUTION:
Code changes are done to fix this issue.

* 4047568 (Tracking ID: 4046169)

SYMPTOM:
On RHEL8, while moving a directory from one FS (ext4 or vxfs) to a migration VxFS, the migration can fail and the FS is disabled. In debug testing, the issue was caught by an internal assert, with the following stack trace.

panic
ted_call_demon
ted_assert
vx_msgprint
vx_mig_badfile
vx_mig_linux_removexattr_int
__vfs_removexattr
__vfs_removexattr_locked
vfs_removexattr
removexattr
path_removexattr
__x64_sys_removexattr
do_syscall_64

DESCRIPTION:
Due to a different implementation of the "mv" operation in RHEL8 (as compared to RHEL7), there is a removexattr call on the target FS, which in the migration case is the migration VxFS. In this removexattr call, the kernel asks for the "system.posix_acl_default" attribute to be removed from the directory to be moved. But since the directory is not present on the target side yet (and hence has no extended attributes), the code returns ENODATA. When the code in vx_mig_linux_removexattr_int() encounters this error, it disables the FS, and in the debug package it calls assert.

RESOLUTION:
The fix is to ignore the ENODATA error and not assert or disable the FS.
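
Illustratively, the change reduces to the following fragment; the helper name is hypothetical:

    /* Migration removexattr path: a missing attribute on the
     * not-yet-migrated target directory is expected. */
    error = vx_src_removexattr(dentry, "system.posix_acl_default");
    if (error == -ENODATA)
        error = 0;    /* nothing to remove yet: neither assert
                       * nor disable the FS */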

* 4049091 (Tracking ID: 4035057)

SYMPTOM:
On RHEL8, IOs issued on an FS while a migration from another FS to VxFS is in progress can cause a panic, with the following stack trace.
 machine_kexec
 __crash_kexec
 crash_kexec
 oops_end
 no_context
 do_page_fault
 page_fault
 [exception RIP: memcpy+18]
 _copy_to_iter
 copy_page_to_iter
 generic_file_buffered_read
 new_sync_read
 vfs_read
 kernel_read
 vx_mig_read
 vfs_read
 ksys_read
 do_syscall_64

DESCRIPTION:
- As part of RHEL8 support changes, vfs_read, vfs_write calls were replaced with kernel_read, kernel_write as the vfs_ calls are no longer exported. The kernel_read, kernel_write calls internally set the memory segment of the thread to KERNEL_DS and expects the buffer passed to have been allocated in kernel space.
- In the migration code, if the read/write operation cannot be completed using the target FS (VxFS), the IO is redirected to the source FS. In doing so, the code passes the same buffer, which is a user buffer, to the kernel call. This worked well with the vfs_read, vfs_write calls, but it does not work with the kernel_read, kernel_write calls, causing a panic.

RESOLUTION:
- The fix is to use the vfs_iter_read, vfs_iter_write calls, which work with a user buffer. To use these methods, the user buffer needs to be passed as part of a struct iovec (iov_base).
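
A minimal sketch of the wrapping described above, assuming a RHEL8-era (4.18) kernel API; ubuf, len, src_file, and pos stand in for the caller's context:

    #include <linux/fs.h>
    #include <linux/uio.h>

    struct iovec iov = { .iov_base = ubuf, .iov_len = len };
    struct iov_iter iter;
    ssize_t ret;

    iov_iter_init(&iter, READ, &iov, 1, len);
    ret = vfs_iter_read(src_file, &iter, &pos, 0);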

* 4049097 (Tracking ID: 4049096)

SYMPTOM:
The tar command exits with status 1, throwing warnings.

DESCRIPTION:
This happens due to dalloc changing the ctime of the file after allocating the extents (`(worklist thread)->vx_dalloc_flush -> vx_dalloc_off`) in between the two fsstat calls in tar.

RESOLUTION:
The ctime is no longer changed while delayed extents are allocated in the background.

Patch ID: VRTSvxvm-7.4.2.3200

* 4011971 (Tracking ID: 3991668)

SYMPTOM:
In a VVR configuration with secondary logging enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.

DESCRIPTION:
It might happen that the VVR secondary node handles updates with larger sequence IDs before the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Due to the updates with the larger sequence IDs than the one for the IBC update, data writes cannot be started, and they get queued. Data loss may occur after the VVR secondary receives an atomic commit and frees the queue. If this situation occurs, the "vradmin verifydata" command reports data inconsistency.

RESOLUTION:
VVR is modified to trigger updates as they are received in order to start data volume writes.

* 4013169 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
The high replication-related IO load on the VVR secondary, and the requirement of maintaining write-order fidelity with limited memory pools, created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing the CPU consumption.

RESOLUTION:
Limited the way in which VVR consumes its resources so that a high pending IO load does not result in high CPU consumption.

* 4037288 (Tracking ID: 4034857)

SYMPTOM:
Loading of VxVM modules fails on SLES15 SP2 (kernel 5.3.18-22.2-default).

DESCRIPTION:
In the new kernel (5.3.18-22.2-default), the following interfaces were deprecated:
1. gettimeofday()
2. struct timeval
3. bio_segments()
4. iov_for_each()
5. the req->next_rq field
Also, data corruption was possible with big-size IOs (>1M) processed by the Linux kernel IO splitting.

RESOLUTION:
Code changes are mainly to support kernel 5.3.18 and to provide support for the deprecated functions, to remove the dependency on the req->next_rq field in the blk-mq code, and to bypass the Linux kernel IO split functions, which are redundant for VxVM IO processing.

* 4054311 (Tracking ID: 4040701)

SYMPTOM:
The following warnings are observed while installing the VxVM package.
WARNING: libhbalinux/libhbaapi is not installed. vxesd will not capture SNIA HBA API library events.
mv: '/var/adm/vx/cmdlog' and '/var/log/vx/cmdlog' are the same file
mv: '/var/adm/vx/cmdlog.1' and '/var/log/vx/cmdlog.1' are the same file
mv: '/var/adm/vx/cmdlog.2' and '/var/log/vx/cmdlog.2' are the same file
mv: '/var/adm/vx/cmdlog.3' and '/var/log/vx/cmdlog.3' are the same file
mv: '/var/adm/vx/cmdlog.4' and '/var/log/vx/cmdlog.4' are the same file
mv: '/var/adm/vx/ddl.log' and '/var/log/vx/ddl.log' are the same file
mv: '/var/adm/vx/ddl.log.0' and '/var/log/vx/ddl.log.0' are the same file
mv: '/var/adm/vx/ddl.log.1' and '/var/log/vx/ddl.log.1' are the same file
mv: '/var/adm/vx/ddl.log.10' and '/var/log/vx/ddl.log.10' are the same file
mv: '/var/adm/vx/ddl.log.11' and '/var/log/vx/ddl.log.11' are the same file

DESCRIPTION:
Some warnings are observed while installing the VxVM package.

RESOLUTION:
Appropriate code changes are done to avoid the warnings.

* 4056919 (Tracking ID: 4056917)

SYMPTOM:
In Flexible Storage Sharing (FSS) environments, a disk group import operation with a few disks missing leads to data corruption.

DESCRIPTION:
In FSS environments, the import of a disk group with missing disks is not allowed. If the disk with the most recently updated configuration information is not present during the import, the import operation incorrectly incremented the config TID on the remaining disks before failing. When the missing disk(s) with the latest configuration came back, the import succeeded, but because of the earlier failed transaction, the import operation incorrectly chose the wrong configuration to import the diskgroup, leading to data corruption.

RESOLUTION:
The code logic in the disk group import operation is modified to ensure that the check for failed/missing disks happens early, before attempting to perform any on-disk update as part of the import.

* 4058873 (Tracking ID: 4057526)

SYMPTOM:
Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.

DESCRIPTION:
New systemd update removed the support for "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on the systems supporting systemd, it 
reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory"

RESOLUTION:
Added a check in vxnm-vxnetd.sh to validate whether the /var/lock/subsys/ directory is supported.

* 4060839 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the IO-shipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop. This causes the soft lockup.

RESOLUTION:
The CPU is relinquished so that other processes can be scheduled; the vol_ioship_sender() thread restarts after a short delay.
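
Illustratively, the tight loop becomes a yielding wait; the flow-control predicate below is a hypothetical name:

    /* Instead of spinning while the channel is flow controlled,
     * sleep briefly so other threads can run; the watchdog no
     * longer sees a CPU-bound loop. */
    while (ioship_flow_controlled(chan))
        schedule_timeout_interruptible(msecs_to_jiffies(10));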

* 4060962 (Tracking ID: 3915202)

SYMPTOM:
vxconfigd hangs during 'vxconfigd -k -r reset'.

DESCRIPTION:
The vxconfigd hang is observed because all the file descriptors available to the process have been used up due to an fd leak; the fix for that leak had not been integrated, hence the issue.

RESOLUTION:
Appropriate code changes are done to handle the fd leak.

* 4060966 (Tracking ID: 3959716)

SYMPTOM:
The system may panic with sync replication in a VVR configuration, when the VVR RVG is in DCM mode, with the following panic stack:
volsync_wait [vxio]
voliod_iohandle [vxio]
volted_getpinfo [vxio]
voliod_loop [vxio]
voliod_kiohandle [vxio]
kthread

DESCRIPTION:
With sync replication, if the ACK for a data message is delayed from the secondary site, the primary site might incorrectly free the message from the waiting queue at the primary site. Due to this incorrect handling of the message, a system panic may happen.

RESOLUTION:
Required code changes are done to resolve the panic issue.

* 4061004 (Tracking ID: 3993242)

SYMPTOM:
vxsnap prepare on a vset might throw the error: "VxVM vxsnap ERROR V-5-1-19171 Cannot perform prepare operation on cloud volume"

DESCRIPTION:
Some wrong volume-record entries were being fetched for the VSET, due to which the required validations failed, triggering the issue.

RESOLUTION:
Code changes have been done to resolve the issue.

* 4061036 (Tracking ID: 4031064)

SYMPTOM:
During master switch with replication in progress, cluster wide hang is seen on VVR secondary.

DESCRIPTION:
With an application running on the primary and replication set up between the VVR primary and secondary, when a master switch operation is attempted on the secondary, it hangs permanently.

RESOLUTION:
Appropriate code changes are done to handle the master switch operation together with replication data on the secondary.

* 4061055 (Tracking ID: 3999073)

SYMPTOM:
Data corruption occurred when the fast mirror resync (FMR) was enabled and the failed plex of striped-mirror layout was attached.

DESCRIPTION:
With FMR tracking, a plex attach operation uses the contents of the detach maps to determine and recover the regions of the volume that need to be synchronized.

When the DCO region size is larger than the stripe unit of the volume, the code logic in the plex attach code path incorrectly skipped bits in the detach maps. Thus, some regions (offset-len) of the volume were not synced to the attached plex, leading to inconsistent mirror contents.

RESOLUTION:
To resolve the data corruption issue, the code has been modified to consider all the bits for a given region (offset-len) in the plex attach code.

* 4061057 (Tracking ID: 3931583)

SYMPTOM:
Node may panic while uninstalling or upgrading the VxVM package or during reboot.

DESCRIPTION:
Due to a race condition in Volume Manager (VxVM), IO may be queued for processing while the vxio module is being unloaded. This results in VxVM acquiring and accessing a lock which is currently being freed and it may panic the system with the following backtrace:

 #0 [ffff88203da089f0] machine_kexec at ffffffff8105d87b
 #1 [ffff88203da08a50] __crash_kexec at ffffffff811086b2
 #2 [ffff88203da08b20] panic at ffffffff816a8665
 #3 [ffff88203da08ba0] nmi_panic at ffffffff8108ab2f
 #4 [ffff88203da08bb0] watchdog_overflow_callback at ffffffff81133885
 #5 [ffff88203da08bc8] __perf_event_overflow at ffffffff811727d7
 #6 [ffff88203da08c00] perf_event_overflow at ffffffff8117b424
 #7 [ffff88203da08c10] intel_pmu_handle_irq at ffffffff8100a078
 #8 [ffff88203da08e38] perf_event_nmi_handler at ffffffff816b7031
 #9 [ffff88203da08e58] nmi_handle at ffffffff816b88ec
#10 [ffff88203da08eb0] do_nmi at ffffffff816b8b1d
#11 [ffff88203da08ef0] end_repeat_nmi at ffffffff816b7d79
 [exception RIP: _raw_spin_unlock_irqrestore+21]
 RIP: ffffffff816b6575 RSP: ffff88203da03d98 RFLAGS: 00000283
 RAX: 0000000000000283 RBX: ffff882013f63000 RCX: 0000000000000080
 RDX: 0000000000000001 RSI: 0000000000000283 RDI: 0000000000000283
 RBP: ffff88203da03d98 R8: 00000000005d1cec R9: ffff8810e8ec0000
 R10: 0000000000000002 R11: ffff88203da03da8 R12: ffff88103af95560
 R13: ffff882013f630c8 R14: 0000000000000001 R15: 0000000000000ca5
 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#12 [ffff88203da03d98] _raw_spin_unlock_irqrestore at ffffffff816b6575
#13 [ffff88203da03da0] voliod_qsio at ffffffffc0fd14c3 [vxio]
#14 [ffff88203da03dd0] vol_sample_timeout at ffffffffc101d8df [vxio]
#15 [ffff88203da03df0] __voluntimeout at ffffffffc0fd34be [vxio]
#16 [ffff88203da03e18] voltimercallback at ffffffffc0fd3568 [vxio]
...
...

RESOLUTION:
Code changes made to handle the race condition and prevent the access of resources that are being freed.

* 4061298 (Tracking ID: 3982103)

SYMPTOM:
When the available memory in the system is low, an I/O hang is seen.

DESCRIPTION:
In a low-memory situation, the memory allocated to some VVR IOs was not released properly, so new application IO could not be served once the VVR memory pool was fully utilized. This resulted in an IO-hang type of situation.

RESOLUTION:
Code changes are done to properly release the memory in the low memory situation.

* 4061317 (Tracking ID: 3925277)

SYMPTOM:
vxdisk resize corrupts the disk public region and causes the file system mount to fail.

DESCRIPTION:
While resizing a single-path disk with a GPT label, the update of the policy data according to the changes made to the da/dmrec during the two resize transactions is missed. Hence, the private region IOs are sent to the old private region device, which is on partition 3. This may cause private region data to be written to the public region, leading to corruption.

RESOLUTION:
Code changes have been made to fix the problem.

* 4061509 (Tracking ID: 4043337)

SYMPTOM:
The rp_rv.log file keeps consuming space for logging.

DESCRIPTION:
The rp_rv log files need to be removed, and the logger file should use 16 MB rotating log files.

RESOLUTION:
The code changes are implemented to disable logging to the rp_rv.log files.

* 4062461 (Tracking ID: 4066785)

SYMPTOM:
When replicated disks are in SPLIT mode, importing their diskgroup failed with "Device is a hardware mirror".

DESCRIPTION:
When replicated disks are in SPLIT mode and are read-write, importing their diskgroup failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode, so DMP now refers to the disk's REPLICATED status to judge whether the diskgroup import is allowed. The `-o usereplicatedev=on/off` option is enhanced to achieve this.

RESOLUTION:
The code is enhanced to allow diskgroup import when replicated disks are in SPLIT mode.

* 4062577 (Tracking ID: 4062576)

SYMPTOM:
When hastop -local is used to stop the cluster, the dg deport command hangs. The below stack trace is observed in the system logs:

#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
 #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
 #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
 #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
 #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]   
 #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
 #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
 #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
 #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
 #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is seen due to kernel-side changes in request queue handling. The existing VxVM code sets the request handling function (make_request_fn) to vxvm_gen_strategy, and this functionality is impacted by those changes.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

* 4062746 (Tracking ID: 3992053)

SYMPTOM:
Data corruption may happen with layered volumes due to some data not being re-synced while attaching a plex, resulting in inconsistent data across the plexes.

DESCRIPTION:
When a plex is detached in a layered volume, the regions that are dirty/modified are tracked in the DCO (Data Change Object) map.
When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached.
There was a defect in the code due to which some particular regions were not re-synced when a plex was attached.
This issue happens only when the offset of the sub-volume is not aligned with the region size of the DCO (Data Change Object) volume.

RESOLUTION:
The code defect is fixed to correctly copy the data for dirty regions when the sub-volume offset is not aligned with the DCO region size.

* 4062747 (Tracking ID: 3943707)

SYMPTOM:
vxconfigd reconfig hangs when joining a cluster, with the below stack:
volsync_wait [vxio]
_vol_syncwait [vxio]
voldco_await_shared_tocflush [vxio]
volcvm_ktrans_fmr_cleanup [vxio]
vol_ktrans_commit [vxio]
volconfig_ioctl [vxio]
volsioctl_real [vxio]
vols_ioctl [vxspec]
vols_unlocked_ioctl [vxspec]
vfs_ioctl  
do_vfs_ioctl

DESCRIPTION:
A race condition caused the current seqno on the tocsio not to get incremented on one of the nodes. While the master and the other slaves moved to the next stage with a higher seqno, this slave dropped the DISTRIBUTE message. The message was retried from the master and the slave kept dropping it, leading to the hang.

RESOLUTION:
Code changes have been made to avoid the race condition.

* 4062751 (Tracking ID: 3989185)

SYMPTOM:
In a Veritas Volume Replicator (VVR) environment, the vxrecover command can hang.

DESCRIPTION:
When vxrecover is triggered after a storage failure, the vxrecover operation may hang. This is because vxrecover performs the RVG recovery, and as part of this recovery, dummy updates are written to the SRL. Due to a bug in the code, these updates were written incorrectly to the SRL, which caused the flush operation from the SRL to the data volume to hang.

RESOLUTION:
Code changes are done so that the dummy updates are written correctly to the SRL.

* 4062755 (Tracking ID: 3978453)

SYMPTOM:
Reconfig hangs during master takeover, with the below stack:
volsync_wait+0xa7/0xf0 [vxio]
volsiowait+0xcb/0x110 [vxio]
vol_commit_iolock_objects+0xd3/0x270 [vxio]
vol_ktrans_commit+0x5d3/0x8f0 [vxio]
volconfig_ioctl+0x6ba/0x970 [vxio]
volsioctl_real+0x436/0x510 [vxio]
vols_ioctl+0x62/0xb0 [vxspec]
vols_unlocked_ioctl+0x21/0x30 [vxspec]
do_vfs_ioctl+0x3a0/0x5a0

DESCRIPTION:
There is a hang in the dcotoc protocol on a slave, which causes a couple of slave nodes to not respond with LEAVE_DONE to the master; hence the issue.

RESOLUTION:
Code changes have been made to handle a transaction overlapping with a shared TOC update, and to pass the TOC-update SIO flag from the old to the new object during the transaction so that recovery can resume if required.

* 4063374 (Tracking ID: 4005121)

SYMPTOM:
Application IOs appear hung or progress slowly until the SRL-to-DCM flush finishes.

DESCRIPTION:
When the VVR SRL gets full, DCM protection is triggered, and application IOs appear hung until the SRL-to-DCM flush finishes.

RESOLUTION:
Added a fix that avoids duplicate DCM tracking through vol_rvdcm_log_update(), which comparatively reduces the IOPS drop.

* 4064523 (Tracking ID: 4049082)

SYMPTOM:
An I/O read error is displayed when a remote FSS node is rebooting.

DESCRIPTION:
When a remote FSS node reboots, I/O read requests to a mirror volume that are scheduled on the remote disk from that FSS node should be redirected to the remaining plex. However, VxVM did not handle this correctly: the retried I/O requests could still be sent to the offline remote disk, which caused the final I/O read failure.

RESOLUTION:
Code changes have been done to schedule the retried read requests on the remaining plex.

* 4066930 (Tracking ID: 3951527)

SYMPTOM:
A data loss issue is seen because of incorrect version-check handling done as part of the SRL 4k update alignment changes in the 7.4 release.

DESCRIPTION:
On the primary, the rv_target_rlink field is always set to NULL, which internally skips the 4k version check in the VOL_RU_INIT_UPDATE macro. This causes SRL writes to be written in a 4k-aligned manner even though the remote rvg version is <= 7.3.1, resulting in data loss.

RESOLUTION:
Changes are done to use rv_replicas rather than rv_target_rlink to check the version appropriately for all sites, and to not write SRL IOs in a 4k-aligned manner in that case.
Also, the RVG version is no longer upgraded as part of diskgroup upgrades if rlinks are in the attached state; the RVG version can be upgraded using the vxrvg upgrade command after detaching the rlinks and once all sites are upgraded.

* 4067706 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different node name is added, the system turns unresponsive. When a node leaves a cluster, in-memory information related to the node is not cleared, due to a race condition.

RESOLUTION:
Fixed the race condition to clear the in-memory information of the node that leaves the cluster.

* 4067710 (Tracking ID: 4064208)

SYMPTOM:
Node is unresponsive while it gets added to the cluster.

DESCRIPTION:
While a node joins the cluster, if the bits on the node are upgraded, the size of an object is interpreted incorrectly. The issue is observed when the number of objects is high, on InfoScale 7.3.1 and above.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4067712 (Tracking ID: 3868140)

SYMPTOM:
A VVR primary site node might panic if the rlink disconnects while some data is getting replicated to the secondary, with the below stack:

dump_stack()
panic()
vol_rv_service_message_start()
update_curr()
put_prev_entity()
voliod_iohandle()
voliod_loop()
voliod_iohandle()

DESCRIPTION:
If the rlink disconnects, VVR clears some handles to the in-progress updates in memory. But if some IOs are still getting acknowledged from the secondary to the primary, accessing the updates for these IOs might result in a panic on the primary node.

RESOLUTION:
Code fix is implemented to correctly access the primary node updates in order to avoid the panic.

* 4067713 (Tracking ID: 3997531)

SYMPTOM:
VVR replication is not working, as vxnetd does not start properly.

DESCRIPTION:
If vxnetd restarts, a race condition blocks the completion of vxnetd start function after the shutdown process is completed.

RESOLUTION:
To avoid the race condition, the vxnetd start and stop functions are synchronized.

* 4067715 (Tracking ID: 4008740)

SYMPTOM:
System panic

DESCRIPTION:
Due to a race condition, code was accessing a freed VVR update, which resulted in a system panic.

RESOLUTION:
Fixed the race condition to avoid the incorrect memory access.

* 4067717 (Tracking ID: 4009151)

SYMPTOM:
Auto-import of diskgroup on system reboot fails with error:
"Disk for diskgroup not found"

DESCRIPTION:
When diskgroup is auto-imported, VxVM (Veritas Volume Manager) tries to find the disk with latest configuration copy. During this the DG import process searches through all the disks. The procedure also tries to find out if the DG contains clone disks or standard disks. While doing this calculation the DG import process incorrectly determines that current DG contains cloned disks instead of standard disks because of the stale value being there for the previous DG selected. Since VxVM incorrectly decides to import cloned disks instead of standard disks the import fails with "Disk for diskgroup not found" error.

RESOLUTION:
Code has been modified to accurately determine whether the DG contains standard or cloned disks and accordingly use those disks for DG import.

* 4067914 (Tracking ID: 4037757)

SYMPTOM:
VVR services always get started on boot up even if VVR is not being used.

DESCRIPTION:
VVR services start automatically because they are integrated into the system init.d or a similar framework.

RESOLUTION:
Added a tunable to not start the VVR services on boot-up.

* 4067915 (Tracking ID: 4059134)

SYMPTOM:
The resync task takes too long on a large RAID-5 volume.

DESCRIPTION:
The resync of a RAID-5 volume is done in small regions, and a checkpoint is set up for each region. If the RAID-5 volume is large, it is divided into a large number of regions for resync, and a checkpoint setup is issued against each region in a loop. In each cycle, the resync utility opens a connection to the vxconfigd daemon to do this, so a client is created in vxconfigd's context for each region. As the number of created clients grows large, vxconfigd, which needs to traverse the client list, takes a long time, thus introducing the performance issue for resync.

RESOLUTION:
Code changes are made so that only one client is created during the whole resync process, and little time is spent traversing the client list.

* 4069522 (Tracking ID: 4043276)

SYMPTOM:
If the admin has offlined disks with "vxdisk offline <disk_name>", vxattachd may bring the disks back to the online state.

DESCRIPTION:
The "offline" state of VxVM disks is not stored persistently, it is recommended to use "vxdisk define" to persistently offline a disk.
For Veritas Netbackup Appliance there is a requirement that vxattachd shouldn't online the disks offlined with "vxdisk offline" operation.
To cater this request we have added tunable based enhancement to vxattachd for Netbackup Appliance use case.
The enhancement are done specifically so that Netback Appliance script can use it.
Following are the tunable details.
If skip_offline tunable is set then it will avoid offlined disk into online state. 
If handle_invalid_disk is set then it will offlined the "online invalid" SAN disks.
If remove_disable_dmpnode is set then it will cleanup stale entries from disk.info file and VxVM layer.
By default these tunables are off, we DONOT recommend InfoScale users to enable these vxattachd tunables.

RESOLUTION:
Code changes are done in vxattachd to cater to the NetBackup Appliance use cases.

* 4069523 (Tracking ID: 4056751)

SYMPTOM:
When importing a disk group containing all cloned disks with the cloneoff option (-c), and some of the disks are in read-only status, the import fails and some of the writable disks are removed from the disk group unexpectedly.

DESCRIPTION:
During a disk group import with the cloneoff option, the DA_VFLAG_ASSOC_DG flag gets set because a dgid update is necessary. When associating the da record of the read-only disks, updating the private region TOC failed because of a write failure, so the pending associations were aborted for all disks. During the abort, the disks carrying the DA_VFLAG_ASSOC_DG flag were removed from the dg and their config copies were offlined. Hence, what looks like private region corruption can be seen on the writable disks; in fact, they were removed from the dg.

RESOLUTION:
The issue has been fixed by failing the import at an early stage if some of the disks are read-only.

* 4069524 (Tracking ID: 4056954)

SYMPTOM:
When performing addsec using the VIPs with SSL enabled, a hang is observed.

DESCRIPTION:
The issue comes when, on the primary side, vradmin tries to create a local socket with the local VIP and the interface IP as endpoints, ends up calling SSL_accept, and gets stuck indefinitely.

RESOLUTION:
Appropriate code changes are done: the vvr_islocalip() function identifies whether an IP is local to the node. By using vvr_islocalip(), SSL_accept() is now called only if the IP is a remote IP.

* 4070099 (Tracking ID: 3159650)

SYMPTOM:
vxtune did not support vol_vvr_use_nat.

DESCRIPTION:
Platform specific methods were required to set vol_vvr_use_nat tunable, as its support for vxtune command was not present.

RESOLUTION:
Added vol_vvr_use_nat support for vxtune command.

* 4070253 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a DMP node.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the DMP node, the PGR_FLAG_NOTSUPPORTED flag is set on the node. Further PGR operations check this value and issue PGR commands only if the flag is not set. PGR_FLAG_NOTSUPPORTED remains set even if the hardware is changed so as to support PGR.

RESOLUTION:
A new command, enablepr, is provided in the vxdmppr utility to clear this flag on the specified DMP node.
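For example (the DMP node name is illustrative, and the exact argument syntax is an assumption based on the description above):

    # vxdmppr enablepr emc0_0123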

* 4071131 (Tracking ID: 4071605)

SYMPTOM:
A security vulnerability exists in the third-party component libxml2.

DESCRIPTION:
VxVM uses a third-party component named libxml2 in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libxml2 in which the security vulnerability has been addressed.

* 4072874 (Tracking ID: 4046786)

SYMPTOM:
During reboot, nodes go out of the cluster and the FS is not mounted.

DESCRIPTION:
The NVMe ASL can sometimes report a different UDID during discovery (the difference from the actual UDID being the absence of space characters in the UDID).

RESOLUTION:
The usage of the NVMe ioctl to fetch data has been removed; sysfs is used instead.

* 4070186 (Tracking ID: 4041822)

SYMPTOM:
In an SRDF/Metro array setup, the last path is in the enabled state even after all the host and the array-side switch ports are disabled.

DESCRIPTION:
In case of an SRDF/Metro array setup, if the path connectivity is disrupted, one path may still appear to be in the enabled state until the connectivity is restored. For example: 
# vxdmpadm getsubpaths dmpnodename=emc0_02e0
NAME                      STATE[A]    PATH-TYPE[M]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS  PRIORITY
=======================================================================================================
c7t50000975B00AD80Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AD84Ad9s2   DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF40Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c7t50000975B00AF44Ad70s2  DISABLED       -           c7        EMC         emc0        -      -
c8t50000975B00AD80Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AD84Ad9s2   DISABLED       -           c8        EMC         emc0        -      -
c8t50000975B00AF40Ad70s2  ENABLED(A)     -           c8        EMC         emc0        -      -
c8t50000975B00AF44Ad70s2  DISABLED       -           c8        EMC         emc0        -      -
Note: This situation does not occur if I/Os are in progress through this DMP node. DMP identifies the disruption in the connectivity and correctly updates the state of the path.

RESOLUTION:
Made code changes to fix this issue. To refresh the state of the paths sooner, run the 'vxdisk scandisks' command.

Patch ID: VRTSvxvm-7.4.2.2400

* 4018181 (Tracking ID: 3995474)

SYMPTOM:
The following IO errors reported on VxVM sub-disks result in the DRL log being detached on SLES12SP3, without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
Following the Linux kernel changes since 4.4.68 (SLES12SP3), VxVM stores the bio flags, including B_ERROR, in the bi_flags_ext field instead of bi_flags, as bi_flags was reduced from long to short. Because bi_flags_ext is a hack that modifies the bio struct, it may run into unknown problems; checking bi_error instead avoids this.
RESOLUTION:
Code changes have been made in the VxIO Disk IO done routine to fix the issue.

* 4051701 (Tracking ID: 4031597)

SYMPTOM:
vradmind generates a core dump in __strncpy_sse2_unaligned.

DESCRIPTION:
The following core dump is generated:
(gdb)bt
Thread 1 (Thread 0x7fcd140b2780 (LWP 90066)):
#0 0x00007fcd12b1d1a5 in __strncpy_sse2_unaligned () from /lib64/libc.so.6
#1 0x000000000059102e in IpmServer::accept (this=0xf21168, new_handlesp=0x0) at Ipm.C:3406
#2 0x0000000000589121 in IpmHandle::events (handlesp=0xf12088, new_eventspp=0x7ffc8e80a4e0, serversp=0xf120c8, new_handlespp=0x0, ms=100) at Ipm.C:613
#3 0x000000000058940b in IpmHandle::events (handlesp=0xfc8ab8, vlistsp=0xfc8938, ms=100) at Ipm.C:645
#4 0x000000000040ae2a in main (argc=1, argv=0x7ffc8e80e8e8) at srvmd.C:722

RESOLUTION:
vradmind is updated to properly handle getpeername(), which addresses this issue.

* 4051702 (Tracking ID: 4019182)

SYMPTOM:
In case of a VxDMP configuration, an InfoScale server panics when applying a patch. The following stack trace is generated:
unix:panicsys+0x40()
unix:vpanic_common+0x78()
unix:panic+0x1c()
unix:mutex_enter() - frame recycled
vxdmp(unloaded text):0x108b987c(jmpl?)()
vxdmp(unloaded text):0x108ab380(jmpl?)(0)
genunix:callout_list_expire+0x5c()
genunix:callout_expire+0x34()
genunix:callout_execute+0x10()
genunix:taskq_thread+0x42c()
unix:thread_start+4()

DESCRIPTION:
Some VxDMP functions create callouts. The VxDMP module may already be unloaded when a callout expires, which may cause the server to panic. VxDMP should cancel any previous timeout function calls before it unloads itself.

RESOLUTION:
VxDMP is updated to cancel any previous timeout function calls before unloading itself.

* 4051705 (Tracking ID: 4049371)

SYMPTOM:
DMP is unable to display all the WWN details when running "vxdmpadm getctlr all".

DESCRIPTION:
When udev discovers any sd device, VxVM removes the old HBA info file and creates a new one. During this period, vxesd could fail to obtain the HBA information.

RESOLUTION:
Code changes have been done to avoid missing the HBA info.

* 4051706 (Tracking ID: 4046007)

SYMPTOM:
The disk private region gets corrupted after a cluster name change in an FSS environment.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update TOC(table of contents) blocks of disk private region, the allocation maps may not be initialized in memory yet. This could make allocation maps incorrect and lead to corruption of disk private region.

RESOLUTION:
Code changes have been done to avoid corruption of disk private region.

* 4053228 (Tracking ID: 4053230)

SYMPTOM:
RHEL 8.5 support is to be provided with InfoScale 7.4.1 and 7.4.2.

DESCRIPTION:
RHEL 8.5 ZDS support is being provided with InfoScale 7.4.1 and 7.4.2.

RESOLUTION:
VxVM packages are available with RHEL 8.5 compatibility.

* 4055211 (Tracking ID: 4052191)

SYMPTOM:
Any scripts or command files in the / directory may run unexpectedly at system startup, and VxVM volumes will not be available until those scripts or commands complete.

DESCRIPTION:
If this issue occurs, /var/svc/log/system-vxvm-vxvm-configure:default.log indicates that a script or command located in the / directory has been executed. For example:
ABC Script ran!!
/lib/svc/method/vxvm-configure[241] abc.sh not found
/lib/svc/method/vxvm-configure[242] abc.sh not found
/lib/svc/method/vxvm-configure[243] abc.sh not found
/lib/svc/method/vxvm-configure[244] app/ cannot execute
In this example, abc.sh is located in the / directory and just echoes "ABC script ran !!". vxvm-configure launched abc.sh.

RESOLUTION:
The issue is fixed by changing the comment format in the SunOS_5.11.vxvm-configure.sh script.

Patch ID: VRTSvxvm-7.4.2.2200

* 4018173 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.

* 4018178 (Tracking ID: 3906534)

SYMPTOM:
After enabling DMP (Dynamic Multipathing) Native support, /boot should be mounted on a DMP device.

DESCRIPTION:
Currently, /boot is mounted on top of the OS (Operating System) device. When DMP Native support is enabled, only VGs (Volume Groups) are migrated from the OS device to the DMP device; this is the reason /boot is not migrated to the DMP device. Because of this, if the OS device path is not available, the system becomes unbootable since /boot is not available. Thus it becomes necessary to mount /boot on the DMP device to provide multipathing and resiliency.

RESOLUTION:
Code changes have been made to migrate /boot on top of the DMP device when DMP native support is enabled.
Note: The code changes are currently implemented for RHEL 6 only. On other Linux platforms, /boot is still not mounted on the DMP device.

* 4031342 (Tracking ID: 4031452)

SYMPTOM:
The add-node operation fails with the error "Error found while invoking '' in the new node, and rollback done in both nodes".

DESCRIPTION:
The stack showed a valid address for the pointer ptmap2, yet a core was still generated, which suggested a possible double free. The issue lies in the freeing of a pointer.

RESOLUTION:
Such cases are now handled by assigning NULL to pointers wherever they are freed.

* 4037283 (Tracking ID: 4021301)

SYMPTOM:
A data corruption issue occurred with large I/Os that are split by the Linux kernel I/O split functions on RHEL8.

DESCRIPTION:
On RHEL8, or as of Linux kernel 3.13, the Linux kernel block layer introduced a new member of the bio iterator structure that represents the start offset of a bio or bio vector after the I/O has been processed by the kernel I/O split functions. In addition, recent versions of VxFS can generate bios larger than the size limits defined in the Linux kernel block layer and VxVM, so I/Os from VxFS can be split by the kernel. For such split I/Os, VxVM did not take the new bio iterator member into account while processing them, which caused data to be written to the wrong position on the volume/disk. Hence, data corruption.

RESOLUTION:
Code changes have been made to bypass the Linux kernel I/O split functions, which are redundant for VxVM I/O processing.

* 4042038 (Tracking ID: 4040897)

SYMPTOM:
HPE MSA 2060 is a new array, and support for claiming it needs to be added.

DESCRIPTION:
HPE MSA 2060 is a new array that the current ASL does not support, so the array is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes have been made to support the HPE MSA 2060 array.

* 4046906 (Tracking ID: 3956607)

SYMPTOM:
When removing a VxVM disk with the vxdg rmdisk operation, the following error occurs, requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: reclamation is irreversible. However, when vxdisk reclaim is issued as advised, the command dumps core.

DESCRIPTION:
In the disk-reclaim code path, memory allocation can fail at realloc(), but the failure is not detected, so an invalid address is referenced and a core dump results.

RESOLUTION:
The disk-reclaim code path now handles failure of realloc properly.
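
A hedged walk-through of the sequence from the symptom, with placeholder disk group, disk, and device names:

    # vxdg -g mydg rmdisk mydisk
    VxVM vxdg ERROR V-5-1-0 Disk mydevice is used by one or more subdisks which are pending to be reclaimed.
    # vxdisk reclaim mydevice
    # vxdg -g mydg rmdisk mydisk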

* 4046907 (Tracking ID: 4041001)

SYMPTOM:
When some nodes in the cluster are rebooted, they cannot rejoin because disk-attach transactions do not occur.

DESCRIPTION:
In VxVM, when some nodes are rebooted, some plexes of a volume are detached, and it may happen that all plexes of the volume are disabled. In this case, if all plexes of a DCO volume become inaccessible, the DCO volume state should be marked BADLOG.

If the state is not marked BADLOG, transactions fail with the following error:
VxVM ERROR V-5-1-10128  DCO experienced IO errors during the operation. Re-run the operation after ensuring that DCO is accessible

As the transactions fail, the system hangs and nodes cannot join.

RESOLUTION:
The code is fixed to mark the DCO state as BADLOG when all the plexes of the DCO become inaccessible during I/O load.

* 4046908 (Tracking ID: 4038865)

SYMPTOM:
System panic in the vxdmp module with the following call trace on the IRQ stack:
native_queued_spin_lock_slowpath
queued_spin_lock_slowpath
_raw_spin_lock_irqsave
dmp_get_shared_lock
gendmpiodone
dmpiodone
bio_endio
blk_update_request
scsi_end_request
scsi_io_completion
scsi_finish_command
scsi_softirq_done
blk_done_softirq
__do_softirq
call_softirq
do_softirq
irq_exit
do_IRQ
 <IRQ stack>

DESCRIPTION:
A deadlock can occur between inode_hash_lock and the DMP shared lock: one process that holds inode_hash_lock acquires the DMP shared lock in IRQ context, while another process that holds the DMP shared lock tries to acquire inode_hash_lock.

RESOLUTION:
Code changes have been done to avoid the deadlock issue.

* 4047588 (Tracking ID: 4044072)

SYMPTOM:
I/Os fail for NVMe disks with 4K block size on the RHEL 8.4 kernel.

DESCRIPTION:
This issue occurs only with disks of the 4K block size; I/Os complete successfully when disks of the 512-byte block size are used. When disks of the 4K block size are used, the following error messages are logged:
[ 51.228908] VxVM vxdmp V-5-0-0 [Error] i/o error occurred (errno=0x206) on dmpnode 201/0x10
[ 51.230070] blk_update_request: operation not supported error, dev nvme1n1, sector 240 op 0x0:(READ) flags 0x800 phys_seg 1 prio class 0
[ 51.240861] blk_update_request: operation not supported error, dev nvme0n1, sector 0 op 0x0:(READ) flags 0x800 phys_seg 1 prio class 0

RESOLUTION:
Updated the VxVM and the VxDMP modules to address this issue. The logical block size is now set to 4096 bytes, which is the same as the physical block size.
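
As an illustrative check (assuming the NVMe device is nvme0n1 and is formatted with a 4K sector size), the block sizes that the kernel reports can be inspected directly:

    # cat /sys/block/nvme0n1/queue/logical_block_size
    4096
    # cat /sys/block/nvme0n1/queue/physical_block_size
    4096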

* 4047590 (Tracking ID: 4045501)

SYMPTOM:
The following errors occur during the installation of the VRTSvxvm and the VRTSaslapm packages on CentOS 8.4 systems:
~
Verifying packages...
Preparing packages...
This release of VxVM is for Red Hat Enterprise Linux 8
and CentOS Linux 8.
Please install the appropriate OS
and then restart this installation of VxVM.
error: %prein(VRTSvxvm-7.4.1.2500-RHEL8.x86_64) scriptlet failed, exit status 1
error: VRTSvxvm-7.4.1.2500-RHEL8.x86_64: install failed
cat: 9: No such file or directory
~

DESCRIPTION:
The product installer reads the /etc/centos-release file to identify the Linux distribution. This issue occurs because the file has changed for CentOS 8.4.

RESOLUTION:
The product installer is updated to identify the correct Linux distribution so that the VRTSvxvm and the VRTSaslapm packages get installed successfully.
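
As an illustrative check (the exact release string varies by distribution and release), the file that the installer parses can be inspected directly:

    # cat /etc/centos-release
    CentOS Linux release 8.4.2105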

* 4047592 (Tracking ID: 3992040)

SYMPTOM:
VxFS Testing CFS Stress hits a kernel panic, f:vx_dio_bio_done:2

DESCRIPTION:
In the RHEL8.0/SLES15 kernel code, the value in bi_status is not a standard error code; it comes from a completely separate set of values that are all small positive integers (for example, BLK_STS_OK and BLK_STS_IOERR), while the actual errors sent by VM are different. Hence, VM should send the proper bi_status to FS on newer kernels. This fix avoids further kernel crashes.

RESOLUTION:
Code changes have been made to map between bi_status and bi_error values (as is done in the Linux kernel code, blk-core.c).

* 4047695 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a dmpnode.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not supported on the dmpnode, the PGR_FLAG_NOTSUPPORTED flag is set on the dmpnode. Further PGR operations check this flag and issue PGR commands only if it is NOT set. However, this flag remains set even if the hardware is later changed to support PGR.

RESOLUTION:
A new command (namely enablepr) is provided in the vxdmppr utility to clear this
flag on the specified dmpnode.
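
A hedged usage sketch of the new command (the dmpnode name is a placeholder; depending on the release, the vxdmppr utility may reside under a diagnostic directory rather than on the default PATH). Run it after the hardware has been changed to support PGR:

    # vxdmppr enablepr mydmpnode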

* 4047722 (Tracking ID: 4023390)

SYMPTOM:
vxconfigd crashes when a disk contains an invalid privoffset (160), which is smaller than the minimum required offset (VTOC: 265, GPT: 208).

DESCRIPTION:
Disk label corruption or stale information residing in the disk header may have caused an unexpected label to be written.

RESOLUTION:
An assert has been added when updating the CDS label to ensure that a valid privoffset is written to the disk header.

* 4049268 (Tracking ID: 4044583)

SYMPTOM:
A system goes into the maintenance mode when DMP is enabled to manage native devices.

DESCRIPTION:
The "vxdmpadm gettune dmp_native_support=on" command is used to enable DMP to manage native devices. After you change the value of the dmp_native_support tunable, you need to reboot the system needs for the changes to take effect. However, the system goes into the maintenance mode after it reboots. The issue occurs due to the copying of the local liblicmgr72.so file instead of the original one while creating the vx_initrd image.

RESOLUTION:
Code changes have been made to copy the correct liblicmgr72.so file. The system successfully reboots without going into maintenance mode.
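
For reference, enabling and verifying the tunable follows the usual pattern (a reboot is still required for the change to take effect):

    # vxdmpadm settune dmp_native_support=on
    # vxdmpadm gettune dmp_native_support
    # reboot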

Patch ID: VRTSvxvm-7.4.2.1900

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
The vxiod daemon with ID 128 was stuck with the following stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, it can happen that the vxiod with ID 128 starts replication while the RVG is in DCM mode; this vxiod then waits for the file system's response as to whether a given region is used by the file system. The file system triggers MDSHIP I/O on the logowner. Due to a bug in the code, MDSHIP I/O always gets queued to the vxiod with ID 128. Hence, a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP I/O in vxiods whose ID is greater than 127.

* 4039510 (Tracking ID: 4037915)

SYMPTOM:
Compilation errors occur due to changes in the RHEL source code.

DESCRIPTION:
While compiling against the RHEL 8.4 kernel (4.18.0-304), the build fails due to certain Red Hat source changes.

RESOLUTION:
The following changes have been made to work with VxVM 7.4.1:
__bdevname - deprecated
Solution: Use a struct block_device and call bdevname.

blkg_tryget_closest - placed under EXPORT_SYMBOL_GPL
Solution: Locally defined the function where the compilation error was hit.

sync_core - implicit declaration
Solution: The implementation of sync_core() has been moved to the header file sync_core.h, so including this header file fixes the error.

* 4039511 (Tracking ID: 4037914)

SYMPTOM:
Crash while running VxVM cert.

DESCRIPTION:
While running the VM cert, a panic is reported.

RESOLUTION:
The bio is now set up and submitted to the IOD layer in our own vxvm_gen_strategy() function.

* 4039512 (Tracking ID: 4017334)

SYMPTOM:
A VXIO call stack trace is generated in /var/log/messages.

DESCRIPTION:
This issue occurs due to a limitation in the way InfoScale interacts with the RHEL8.2 kernel.
 Call Trace:
 kmsg_sys_rcv+0x16b/0x1c0 [vxio]
 nmcom_get_next_mblk+0x8e/0xf0 [vxio]
 nmcom_get_hdr_msg+0x108/0x200 [vxio]
 nmcom_get_next_msg+0x7d/0x100 [vxio]
 nmcom_wait_msg_tcp+0x97/0x160 [vxio]
 nmcom_server_main_tcp+0x4c2/0x11e0 [vxio]

RESOLUTION:
Changes are made in the header files for function definitions when the RHEL version is 8.2 or later.
This kernel warning can be safely ignored as it does not have any functional impact.

* 4039517 (Tracking ID: 4012763)

SYMPTOM:
An I/O hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.

DESCRIPTION:
In VVR, if an SRL overflow happens for one rlink (R1) while another rlink (R2) is undergoing AUTOSYNC, the AUTOSYNC is aborted for R2, R2 gets detached, and DCM mode is activated on the R1 rlink.

However, due to a race condition in the code that handles the AUTOSYNC abort and the DCM activation in parallel, the DCM may not be activated properly, and the I/O that caused the DCM activation gets queued incorrectly, which results in an I/O hang.

RESOLUTION:
The code has been modified to fix the race condition in handling the AUTOSYNC abort and the DCM activation at the same time.

Patch ID: VRTSvxvm-7.4.2.1500

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has terminated for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
The vxiod daemon with ID 128 was stuck with the following stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, it can happen that the vxiod with ID 128 starts replication while the RVG is in DCM mode; this vxiod then waits for the file system's response as to whether a given region is used by the file system. The file system triggers MDSHIP I/O on the logowner. Due to a bug in the code, MDSHIP I/O always gets queued to the vxiod with ID 128. Hence, a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP I/O in vxiods whose ID is greater than 127.

* 4021238 (Tracking ID: 4008075)

SYMPTOM:
Observed with the ASL changes for NVMe. This issue is seen in a reboot scenario: the machine panicked on every reboot, in a loop.

DESCRIPTION:
The panic occurred for such split bios. The root cause is that RHEL8 introduced a new field named __bi_remaining, which maintains the count of chained bios; on every endio, __bi_remaining is atomically decremented in the bio_endio() function. While decrementing __bi_remaining, the OS checks that it is not <= 0; in our case __bi_remaining was always 0, so the OS BUG_ON was hit.

RESOLUTION:
For SCSI devices, maxsize is 4194304:
[   26.919333] DMP_BIO_SIZE(orig_bio) : 16384, maxsize: 4194304
[   26.920063] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 4194304

For NVMe devices, maxsize is 131072:
[  153.297387] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072
[  153.298057] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072

* 4021240 (Tracking ID: 4010612)

SYMPTOM:
$ vxddladm set namingscheme=ebn lowercase=no
This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), which means every NVMe/SSD disk name should be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.

DESCRIPTION:
$ vxddladm set namingscheme=ebn lowercase=no
This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), which means every NVMe/SSD disk name should be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on. For example:
smicro125_nvme0_0 <--- disk1
smicro125_nvme1_0 <--- disk2

With lowercase=no, the current code suppresses the suffix digit of the enclosure name, so multiple disks get the same name. This shows up as udid_mismatch because the UDID in the private region does not match the one in DDL; the DDL database shows wrong information because multiple disks get the same name:

smicro125_nvme_0 <--- disk1   <<<<<<<-----here the suffix digit of the nvme enclosure is suppressed
smicro125_nvme_0 <--- disk2

RESOLUTION:
The suffix integer is now appended while constructing the da_name.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU to some other thread. If the thread that then gets scheduled on the CPU requests the same spinlock that the vxstat thread had acquired, a hard lockup results.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4021359 (Tracking ID: 4010040)

SYMPTOM:
A security issue occurs during Volume Manager configuration.

DESCRIPTION:
This issue occurs during the configuration of the VRTSvxvm package.

RESOLUTION:
VVR daemon is updated so that this security issue no longer occurs.

* 4021366 (Tracking ID: 4008741)

SYMPTOM:
VxVM device files appear to have the device_t SELinux label.

DESCRIPTION:
If an unauthorized or modified device is allowed to exist on the system, the system may perform unintended or unauthorized operations.
For example, ls -LZ shows:
...
...
/dev/vx/dsk/testdg/vol1   system_u:object_r:device_t:s0
/dev/vx/dmpconfig         system_u:object_r:device_t:s0
/dev/vx/vxcloud           system_u:object_r:device_t:s0

RESOLUTION:
Code changes have been made to set the device labels to misc_device_t and fixed_disk_device_t; see the check below.
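
After the fix, the labels can be re-checked with the same command from the description (paths as in the example above; the expected type is misc_device_t or fixed_disk_device_t rather than device_t):

    # ls -LZ /dev/vx/dsk/testdg/vol1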

* 4021428 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq
Linux has deprecated the member next_rq

DESCRIPTION:
The issue was observed due to changes in the OS structures.

RESOLUTION:
Code changes have been made in the required files.

* 4021748 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable (dmp_native_support) on CentOS 8, the following error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Changes have been made in the required files to support DMP native support on CentOS 8.

Patch ID: VRTSvxvm-7.4.2.1400

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has terminated for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
The vxiod daemon with ID 128 was stuck with the following stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, it can happen that the vxiod with ID 128 starts replication while the RVG is in DCM mode; this vxiod then waits for the file system's response as to whether a given region is used by the file system. The file system triggers MDSHIP I/O on the logowner. Due to a bug in the code, MDSHIP I/O always gets queued to the vxiod with ID 128. Hence, a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP I/O in vxiods whose ID is greater than 127.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU to some other thread. If the thread that then gets scheduled on the CPU requests the same spinlock that the vxstat thread had acquired, a hard lockup results.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4021428 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq
Linux has deprecated the member next_rq

DESCRIPTION:
The issue was observed due to changes in the OS structures.

RESOLUTION:
Code changes have been made in the required files.

* 4021748 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable (dmp_native_support) on CentOS 8, the following error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Changes have been made in the required files to support DMP native support on CentOS 8.

Patch ID: VRTSvxvm-7.4.2.1300

* 4008606 (Tracking ID: 4004455)

SYMPTOM:
Snapshot restore failed on an instant snapshot created on an older-version disk group.

DESCRIPTION:
Create a disk group with an older version, create an instant snapshot, perform some I/Os on the source volume, and then try to restore the snapshot. The snapshot restore failed in this scenario.

RESOLUTION:
The root cause of this issue is conflicting flag values. The issue has been fixed, and the code has been checked in.

* 4010892 (Tracking ID: 4009107)

SYMPTOM:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth, resulting in an error during SSL initialization.

DESCRIPTION:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the verification depth. The SSL_CTX_set_verify_depth() API decides the depth of certificates (in the /etc/vx/vvr/cacert file) to be verified, which was limited to 1 in the code. Thus, the intermediate CA certificate present first in /etc/vx/vvr/cacert (the depth-1 CA/issuer certificate for the server certificate) could be obtained and verified during the connection, but the root CA certificate (the depth-2, higher CA certificate) could not be verified while connecting; hence the error.

RESOLUTION:
Removed the call to the SSL_CTX_set_verify_depth() API so that the depth is handled automatically.

* 4011866 (Tracking ID: 3976678)

SYMPTOM:
The "vxvm-recover: cat: write error: Broken pipe" error is encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog and reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, and the first subshell runs the cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is still data to be read in the cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4011971 (Tracking ID: 3991668)

SYMPTOM:
When configured with secondary logging, VVR reports data inconsistency when the "No IBC message arrived" error is hit.

DESCRIPTION:
It can happen that the secondary node has served updates with larger sequence IDs when the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Updates with larger sequence IDs cannot start data volume writes and get queued. Data loss happens when the secondary receives the atomic commit and clears the queue. Hence, vradmin verifydata reports data inconsistency.

RESOLUTION:
Code changes have been made to trigger the updates so that data volume writes can start.

* 4012485 (Tracking ID: 4000387)

SYMPTOM:
The existing VxVM module fails to load on RHEL 8.2.

DESCRIPTION:
RHEL 8.2 is a new release with a few kABI changes that break VxVM compilation.

RESOLUTION:
The VxVM code has been compiled against the 8.2 kernel, and changes have been made to make it compatible.

* 4012848 (Tracking ID: 4011394)

SYMPTOM:
While verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded.

DESCRIPTION:
On verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded because the design was single-threaded, which caused a bottleneck and performance issues.

RESOLUTION:
The code changes introduce multiple I/O queues in the kernel and a multithreaded request loop that fetches I/Os from the kernel queues into a userland global queue, and they allow curl threads to work in parallel.

* 4013155 (Tracking ID: 4010458)

SYMPTOM:
In VVR (Veritas Volume replicator), the rlink might inconsistently disconnect due to unexpected transactions with below messages:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

DESCRIPTION:
In VVR (Veritas Volume replicator), a transaction is triggered when a change in the VxVM/VVR objects needs 
to be persisted on disk. 

In some scenarios, a few unnecessary transactions were triggered in a loop, causing multiple rlink
disconnects with the following message logged frequently:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

One such unexpected transaction was happening due to open/close operations on a volume as part of SmartIO caching.
Additionally, the vradmind daemon was also issuing some open/close operations on volumes as part of I/O statistics
collection, which caused unnecessary transactions.

Some other unexpected transactions were happening due to incorrect checks in the code related
to some temporary flags on the volume.

RESOLUTION:
The code is fixed to disable SmartIO caching on the volumes if SmartIO caching is not configured on the system.
Additionally, the code is fixed to avoid the unexpected transactions caused by incorrect checks on the temporary
flags on the volume.

* 4013169 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
A high replication-related I/O load on the VVR secondary, combined with the requirement of maintaining write-order fidelity with limited memory pools, created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing CPU consumption.

RESOLUTION:
The way in which VVR consumes its resources has been limited so that a high pending I/O load does not result in high CPU consumption.

* 4013718 (Tracking ID: 4008942)

SYMPTOM:
The file system gets disabled when the cache object becomes full, and hence unmount fails.

DESCRIPTION:
When the cache object becomes full, I/O errors occur on the volume. Because I/Os are not served while the cache object is full, the I/Os become inconsistent. Because of this I/O inconsistency, VxFS gets disabled and unmount fails.

RESOLUTION:
The issue has been fixed, and the code has been checked in.

Patch ID: VRTSvlic-4.01.742.300

* 4049416 (Tracking ID: 4049416)

SYMPTOM:
Frequent security vulnerabilities are reported in the JRE.

DESCRIPTION:
Many vulnerabilities are reported in the JRE every quarter. To overcome this recurring issue, the Telemetry Collector is migrated from Java to Python.
All other behavior of the Telemetry Collector remains the same.

RESOLUTION:
The Telemetry Collector has been migrated from Java to Python.

Patch ID: VRTSpython-3.7.4.35

* 4049693 (Tracking ID: 4049692)

SYMPTOM:
In order to support the Licensing module, VRTSpython must include additional modules.

DESCRIPTION:
The Licensing module utilizes the VRTSpython package, which needs additional modules added to it.

RESOLUTION:
The VRTSpython package has been updated to include additional Python modules.

Patch ID: VRTSdbac-7.4.2.2300

* 4053178 (Tracking ID: 4053171)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
5(RHEL8.5) is now introduced.

Patch ID: VRTSdbac-7.4.2.1400

* 4037952 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSdbac-7.4.2.1200

* 4022056 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSdbac-7.4.2.1100

* 4012751 (Tracking ID: 4012742)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 2(RHEL8.2).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 1.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
2(RHEL8.2) is now introduced.

Patch ID: VRTScps-7.4.2.2500

* 4054435 (Tracking ID: 4018218)

SYMPTOM:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2

DESCRIPTION:
Secure communication between a CP Server and a CP Client cannot be established using TLSv1.2.

RESOLUTION:
This hotfix updates the VRTScps module so that InfoScale CP Client can establish secure communication with a CP server using TLSv1.2. However, to enable TLSv1.2 communication between the CP client and CP server after installing this hotfix, you must perform the following steps:

To configure TLSv1.2 for CP server
1. Stop the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -offline <vxcpserv> -sys <sysname> 
2. Check that the vxcpserv daemon is stopped using the following command:
   # ps -eaf | grep "/opt/VRTScps/bin/vxcpserv"
3. When the vxcpserv daemon is stopped, edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
4. Start the process resource that has pathname="/opt/VRTScps/bin/vxcpserv"
   # hares -online <vxcpserv> -sys <sysname>

To configure TLSv1.2 for CP Client
Edit the "/etc/vxcps_ssl.properties" file and make the following changes:
   a. Remove or comment the entry: openSSL.server.requireTLSv1 = true 
   b. Add a new entry: openSSL.server.requireTLSv1.2 = true
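
For reference, a minimal sketch of the resulting /etc/vxcps_ssl.properties fragment after either procedure (assuming '#' marks a comment in this file):

    # openSSL.server.requireTLSv1 = true
    openSSL.server.requireTLSv1.2 = true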

* 4067464 (Tracking ID: 4056666)

SYMPTOM:
The "Error writing to database" message may appear in syslogs intermittently on InfoScale CP servers.

DESCRIPTION:
Typically, when a coordination point server (CP server) is shared among multiple InfoScale clusters, the following messages may intermittently appear in syslogs:
CPS CRITICAL V-97-1400-501 Error writing to database! :database is locked.
These messages appear in the context of the CP server protocol handshake between the clients and the server.

RESOLUTION:
The CP server is updated so that, in addition to its other database write operations, the ones for the CP server protocol handshake are also synchronized.

Patch ID: VRTScps-7.4.2.1600

* 4038101 (Tracking ID: 4034933)

SYMPTOM:
After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."

DESCRIPTION:
This issue occurs due to unexpected locking of the CP server database that is related to the stale key detection feature.

RESOLUTION:
This hotfix updates the VRTScps RPM so that the unexpected database lock is cleared and the nodes can be updated successfully.

Patch ID: VRTSvcsag-7.4.2.2500

* 4061462 (Tracking ID: 4044567)

SYMPTOM:
The HostMonitor agent faults while logging the memory usage of a system.

DESCRIPTION:
The HostMonitor agent runs in the background to monitor the usage of the resources of a system. It faults and terminates unexpectedly while logging the memory usage of a system and generates a core dump.

RESOLUTION:
This patch updates the HostMonitor agent to handle the issue that it encounters while logging the memory usage data of a system.

Patch ID: VRTScavf-7.4.2.1400

* 4054857 (Tracking ID: 4035066)

SYMPTOM:
Improved instrumentation is needed in the code for better debugging of monitor and EP (entry point) timeout events.

DESCRIPTION:
The instrumentation in the code is improved for better debugging of monitor and EP timeout events.

RESOLUTION:
N/A

Patch ID: VRTScavf-7.4.2.1300

* 4007226 (Tracking ID: 4007196)

SYMPTOM:
The fsck flag gets set on the file system created on an FSS shared volume during cluster shutdown.

DESCRIPTION:
The fsck flag is set on the VxFS file system when, during cluster shutdown, the FSS shared volume is deported or disabled on one node before the cfsmount resource on another node goes down.

RESOLUTION:
Code has been added in the offline routine of the cvmvoldg agent to hold off the volume disable/deport while the file system is still mounted during cluster shutdown. This prevents the volume from becoming unavailable during cluster shutdown, so the fsck flag is not set on the file system.

Patch ID: VRTSvcswiz-7.4.2.2100

* 4049572 (Tracking ID: 4049573)

SYMPTOM:
The Veritas High Availability Configuration Wizard (HA-Plugin) is not supported on the VMware vCenter HTML-based UI.

DESCRIPTION:
The Veritas HA-Plugin was based on Adobe Flex. The HA-Plugin fails to work because Flex is now deprecated.

RESOLUTION:
The Veritas HA-Plugin now supports the VMware vCenter HTML-based UI.

Patch ID: VRTSglm-7.4.2.2100

* 4037622 (Tracking ID: 4037621)

SYMPTOM:
GLM module failed to load on RHEL8.4

DESCRIPTION:
RHEL8.4 is a new release with kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.4.

Patch ID: VRTSglm-7.4.2.1300

* 4008072 (Tracking ID: 4008071)

SYMPTOM:
Rebooting the system results in emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
The invocation of depmod has been serialized through a lock; see the sketch below.
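
A minimal sketch of the serialization technique (illustrative only; the lock-file path and the use of flock(1) are assumptions, not the actual packaging code):

    #!/bin/sh
    # Run depmod under an exclusive lock so that parallel package
    # scriptlets cannot corrupt the module dependency files.
    flock /var/lock/vx-depmod.lock depmod -a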

* 4012063 (Tracking ID: 4001382)

SYMPTOM:
GLM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release with kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.2.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-7.4.2.2500.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-7.4.2.2500.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-7.4.2.2500.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-7.4.2.2500.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # pwd
    /tmp/hf
    # ./installVRTSinfoscale742P2500 [<host1> <host2>...]

You can also install this patch together with 7.4.2 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE