infoscale-rhel8.4_x86_64-Patch-7.4.2.1800
Obsolete
The latest patch(es): infoscale-rhel8.5_x86_64-Patch-7.4.2.2000

 Basic information
Release type: Patch
Release date: 2021-06-18
OS update support: RHEL8 x86-64 Update 5 | RHEL8 x86-64 Update 6 | RHEL8 x86-64 Update 7 | RHEL8 x86-64 Update 9
Technote: None
Documentation: None
Popularity: 3224 viewed
Download size: 364.24 MB
Checksum: 359853518

 Applies to one or more of the following products:
InfoScale Availability 7.4.2 On RHEL8 x86-64
InfoScale Enterprise 7.4.2 On RHEL8 x86-64
InfoScale Foundation 7.4.2 On RHEL8 x86-64
InfoScale Storage 7.4.2 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patches (release date):
infoscale-rhel8.5_x86_64-Patch-7.4.2.2000 2021-11-09
infoscale-rhel8_x86_64-Patch-7.4.2.1900 (obsolete) 2021-10-19

This patch supersedes the following patches (release date):
infoscale-rhel8.3_x86_64-Patch-7.4.2.1400 (obsolete) 2021-02-26
infoscale-rhel8.3_x86_64-Patch-7.4.2.1300 (obsolete) 2021-01-04
infoscale-rhel8.2_x86_64-Patch-7.4.2.1100 (obsolete) 2020-10-28

 Fixes the following incidents:
4002850, 4005220, 4006982, 4007372, 4007374, 4007375, 4007376, 4007677, 4007692, 4007763, 4008072, 4008606, 4008986, 4010353, 4010892, 4011866, 4011971, 4012061, 4012062, 4012063, 4012397, 4012485, 4012522, 4012744, 4012745, 4012746, 4012751, 4012765, 4012787, 4012800, 4012801, 4012842, 4012848, 4012936, 4013034, 4013036, 4013084, 4013143, 4013144, 4013155, 4013169, 4013626, 4013718, 4013738, 4014720, 4015287, 4015835, 4016721, 4017282, 4017818, 4017820, 4017934, 4018170, 4018182, 4018770, 4019533, 4019535, 4019536, 4019877, 4020055, 4020056, 4020207, 4020912, 4021055, 4021057, 4021058, 4021059, 4021238, 4021240, 4021346, 4021359, 4021366, 4021428, 4021748, 4022049, 4022052, 4022053, 4022054, 4022056, 4027170, 4028123, 4029493, 4030882, 4037421, 4037576, 4037622, 4037806, 4037952, 4038101, 4039510, 4039511, 4039512, 4039517, 4039694, 4040831, 4040834, 4040837, 4042650

 Patch ID:
VRTSaslapm-7.4.2.1900-RHEL8
VRTSvxvm-7.4.2.1900-RHEL8
VRTSllt-7.4.2.1700-RHEL8
VRTSgab-7.4.2.1700-RHEL8
VRTSamf-7.4.2.1700-RHEL8
VRTSvxfen-7.4.2.1700-RHEL8
VRTSvcsag-7.4.2.1700-RHEL8
VRTSdbac-7.4.2.1400-RHEL8
VRTScps-7.4.2.1600-RHEL8
VRTSvxfs-7.4.2.2100-RHEL8
VRTSodm-7.4.2.2100-RHEL8
VRTSglm-7.4.2.2100-RHEL8

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.2 * * *
                         * * * Patch 1800 * * *
                         Patch Date: 2021-06-10


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 7.4.2 Patch 1800


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScps
VRTSdbac
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSvcsag
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.2
   * InfoScale Enterprise 7.4.2
   * InfoScale Foundation 7.4.2
   * InfoScale Storage 7.4.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.2.2100
* 4018170 (3902600) Contention observed on vx_worklist_lk lock in cluster 
mounted file system with ODM
* 4027170 (4021368) FSCK fails with the following error:
             "UX:vxfs fsck: ERROR: V-3-26113: bc_getfreebuf internal error"
* 4037421 (4037420) VxFS module failed to load on RHEL8.4
Patch ID: VRTSvxfs-7.4.2.1500
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4014720 (4011596) Multiple issues were observed during glmdump using hacli for communication
* 4015287 (4010255) "vfradmin promote" fails to promote target FS with selinux enabled.
* 4015835 (4015278) System panics during vx_uiomove_by_hand.
* 4016721 (4016927) For multi cloud tier scenario, system panic with NULL pointer dereference when we try to remove second cloud tier
* 4017282 (4016801) Filesystem marked for full fsck.
* 4017818 (4017817) VFR performance enhancement changes.
* 4017820 (4017819) Adding cloud tier operation fails while trying to add AWS GovCloud.
* 4017934 (4015059) VFR command can hang when job is paused
* 4018770 (4018197) VxFS module failed to load on RHEL8.3
* 4019877 (4019876) Remove license library dependency from vxfsmisc.so library
* 4020055 (4012049) Man page changes to expose "metasave" and "target" options.
* 4020056 (4012049) Documented "metasave" option and added one new option in fsck binary.
* 4020912 (4020758) Filesystem mount or fsck with -y may see hang during log replay
Patch ID: VRTSvxfs-7.4.2.1300
* 4002850 (3994123) Running fsck on a system may show LCT count mismatch errors
* 4005220 (4002222) Code changes have been done to prevent a cluster-wide hang in a scenario where the cluster filesystem is FCL enabled and the disk layout version is greater than or equal to 14.
* 4010353 (3993935) Fsck command of vxfs may hit segmentation fault.
* 4012061 (4001378) VxFS module failed to load on RHEL8.2
* 4012522 (4012243) Read/Write performance improvement in VxFS
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4012787 (4007328) VFR source keeps processing file change log(FCL) records even after connection closure from target.
* 4012800 (4008123) VFR fails to replicate named extended attributes if the job is paused.
* 4012801 (4001473) VFR fails to replicate named extended attributes set on files
* 4012842 (4006192) system panic with NULL pointer de-reference.
* 4012936 (4000465) FSCK binary loops when it detects break in sequence of log ids.
* 4013084 (4009328) In cluster filesystem, unmount hang could be observed if smap is marked bad previously.
* 4013143 (4008352) Using VxFS mount binary inside container to mount any device might result in core generation.
* 4013144 (4008274) Race between compression thread and clone remove thread while allocating reorg inode.
* 4013626 (4004181) Read the value of VxFS compliance clock
* 4013738 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
Patch ID: VRTSvxvm-7.4.2.1900
* 4020207 (4018086) system hang was observed when RVG was in DCM resync with SmartMove as ON.
* 4039510 (4037915) VxVM 7.4.1 support for RHEL 8.4 compilation errors
* 4039511 (4037914) BUG: unable to handle kernel NULL pointer dereference
* 4039512 (4017334) vxio stack trace warning message kmsg_mblk_to_msg can be seen in systemlog
* 4039517 (4012763) IO hang may happen in VVR (Veritas Volume Replicator) configuration when SRL overflows for one rlink while another one rlink is in AUTOSYNC mode.
Patch ID: VRTSvxvm-7.4.2.1500
* 4018182 (4008664) System panic when signaling the vxlogger daemon that has ended.
* 4020207 (4018086) system hang was observed when RVG was in DCM resync with SmartMove as ON.
* 4021238 (4008075) Observed with ASL changes for NVMe, in a reboot scenario: on every reboot the machine hit a panic, in a loop.
* 4021240 (4010612) This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), so every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
* 4021346 (4010207) System panicked due to hard-lockup due to a spinlock not released properly during the vxstat collection.
* 4021359 (4010040) A security issue occurs during Volume Manager configuration.
* 4021366 (4008741) VxVM device files are not correctly labeled to prevent unauthorized modification - device_t
* 4021428 (4020166) Vxvm Support on RHEL8 Update3
* 4021748 (4020260) Failed to activate/set tunable dmp native support for Centos 8
Patch ID: VRTSvxvm-7.4.2.1400
* 4018182 (4008664) System panic when signaling the vxlogger daemon that has ended.
* 4020207 (4018086) system hang was observed when RVG was in DCM resync with SmartMove as ON.
* 4021346 (4010207) System panicked due to hard-lockup due to a spinlock not released properly during the vxstat collection.
* 4021428 (4020166) Vxvm Support on RHEL8 Update3
* 4021748 (4020260) Failed to activate/set tunable dmp native support for Centos 8
Patch ID: VRTSvxvm-7.4.2.1300
* 4008606 (4004455) Instant restore failed for a snapshot created on older version DG.
* 4010892 (4009107) CA chain certificate verification fails in SSL context.
* 4011866 (3976678) vxvm-recover:  cat: write error: Broken pipe error encountered in syslog.
* 4011971 (3991668) Veritas Volume Replicator (VVR) configured with sec logging reports data inconsistency when hit "No IBC message arrived" error.
* 4012485 (4000387) VxVM support on RHEL 8.2
* 4012848 (4011394) Performance enhancement for cloud tiering.
* 4013155 (4010458) In VVR (Veritas Volume replicator), the rlink might inconsistently disconnect due to unexpected transactions.
* 4013169 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4013718 (4008942) Docker infoscale plugin is failing to unmount the filesystem, if the cache object is full
Patch ID: VRTScps-7.4.2.1600
* 4038101 (4034933) After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."
Patch ID: VRTSdbac-7.4.2.1400
* 4037952 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSdbac-7.4.2.1200
* 4022056 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSdbac-7.4.2.1100
* 4012751 (4012742) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 2(RHEL8.2).
Patch ID: VRTSvcsag-7.4.2.1700
* 4028123 (4027915) Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.
* 4037806 (4021370) The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.
Patch ID: VRTSvcsag-7.4.2.1400
* 4007372 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
* 4007374 (1837967) Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.
* 4012397 (4012396) AzureDisk agent fails to work with latest Azure Storage SDK.
* 4019536 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
Patch ID: VRTSvcsag-7.4.2.1300
* 4007692 (4006979) When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.
* 4007763 (4007764) The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.
* 4008986 (3860766) HostMonitor agent shows incorrect swap space usage in the agent logs.
Patch ID: VRTSvxfen-7.4.2.1700
* 4029493 (4029261) An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.
* 4040837 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSvxfen-7.4.2.1300
* 4021055 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSvxfen-7.4.2.1200
* 4022054 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSvxfen-7.4.2.1100
* 4006982 (3988184) The vxfen process cannot complete due to incomplete vxfentab file.
* 4007375 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4007376 (3996218) In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.
* 4007677 (3970753) Freeing uninitialized/garbage memory causes panic in vxfen.
Patch ID: VRTSamf-7.4.2.1900
* 4042650 (4041703) The system panics when the Mount and the CFSMount agents fail to register with AMF.
Patch ID: VRTSamf-7.4.2.1700
* 4040834 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSamf-7.4.2.1300
* 4019533 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
* 4021057 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSamf-7.4.2.1200
* 4022053 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSamf-7.4.2.1100
* 4012746 (4012742) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 2(RHEL8.2).
Patch ID: VRTSgab-7.4.2.1700
* 4040831 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSgab-7.4.2.1300
* 4021059 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSgab-7.4.2.1200
* 4022052 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSgab-7.4.2.1100
* 4012745 (4012742) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
* 4013034 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
Patch ID: VRTSllt-7.4.2.1700
* 4030882 (4029253) LLT may not reuse the buffer slots on which NAK is received from the earlier RDMA writes.
* 4038101 (4034933) After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."
* 4039694 (4037048) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).
Patch ID: VRTSllt-7.4.2.1300
* 4019535 (4018581) The LLT module fails to start and the system log messages indicate missing IP address.
* 4021058 (4010237) On Red Hat Enterprise Linux operating system, device files do not have correct SELinux label.
Patch ID: VRTSllt-7.4.2.1200
* 4022049 (4019674) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).
Patch ID: VRTSllt-7.4.2.1100
* 4012744 (4012742) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 2(RHEL8.2).
* 4013036 (3985775) Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.
Patch ID: VRTSglm-7.4.2.2100
* 4037622 (4037621) GLM module failed to load on RHEL8.4
Patch ID: VRTSglm-7.4.2.1300
* 4008072 (4008071) Rebooting the system results into emergency mode due to corruption of Module Dependency files.
* 4012063 (4001382) GLM module failed to load on RHEL8.2
Patch ID: VRTSodm-7.4.2.2100
* 4037576 (4037575) ODM module failed to load on RHEL8.4
Patch ID: VRTSodm-7.4.2.1300
* 4012062 (4001380) ODM module failed to load on RHEL8.2


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.2.2100

* 4018170 (Tracking ID: 3902600)

SYMPTOM:
Contention observed on vx_worklist_lk lock in cluster mounted file 
system with ODM

DESCRIPTION:
In a CFS environment, iodones for ODM async I/O reads are done immediately, calling into ODM itself from the interrupt handler. All CFS writes, however, are currently processed in a delayed fashion: the requests are queued and processed later by a worker thread. This was adding delays to ODM writes.

RESOLUTION:
Optimized the IO processing of ODM work items on CFS so that they are processed in the same context if possible.
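
As an illustration of this completion pattern (not the actual Veritas source; all names here are hypothetical), a minimal userspace model of an iodone dispatch that completes work in the calling context when it is safe and falls back to a worker queue otherwise:

    #include <stdio.h>

    /* Hypothetical work item; 'done' is the completion callback. */
    struct workitem {
        void (*done)(struct workitem *w);
        int inline_safe;    /* issuer marks completions safe to run inline */
    };

    /* Stand-in for the delayed path: queue the item for a worker thread. */
    static void worker_enqueue(struct workitem *w)
    {
        printf("queued for worker\n");
    }

    /* Completion dispatch: run in the same context when safe (the new
     * behavior for ODM writes), otherwise fall back to the queued path. */
    static void iodone(struct workitem *w)
    {
        if (w->inline_safe)
            w->done(w);     /* same-context completion, no queueing delay */
        else
            worker_enqueue(w);
    }

    static void write_done(struct workitem *w)
    {
        printf("completed inline\n");
    }

    int main(void)
    {
        struct workitem w = { write_done, 1 };
        iodone(&w);
        return 0;
    }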

* 4027170 (Tracking ID: 4021368)

SYMPTOM:
FSCK fails with the following error:
             "UX:vxfs fsck: ERROR: V-3-26113: bc_getfreebuf internal error"

DESCRIPTION:
During fsck read-ahead, read-ahead buffers were deleted before the actual asynchronous read finished. This leaked buffers in the user buffer cache maintained by fsck. When fsck then tried to find buffers in the buffer cache for subsequent operations, it failed to get free buffers because of the leak, and fsck failed with the "bc_getfreebuf internal error".

RESOLUTION:
Code is modified to delay the deletion of buffers until the asynchronous read completes.

* 4037421 (Tracking ID: 4037420)

SYMPTOM:
VxFS module failed to load on RHEL8.4

DESCRIPTION:
RHEL8.4 is a new release, and it has kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.4.

Patch ID: VRTSvxfs-7.4.2.1500

* 4012765 (Tracking ID: 4011570)

SYMPTOM:
WORM attribute replication support in VxFS.

DESCRIPTION:
WORM attribute replication is not supported in VFR. Modified code to replicate WORM attribute during attribute processing in VFR.

RESOLUTION:
Code is modified to replicate WORM attributes in VFR.

* 4014720 (Tracking ID: 4011596)

SYMPTOM:
An error is thrown saying "No such file or directory present".

DESCRIPTION:
A bug was observed during parallel communication between all the nodes: some required temp files were not present on the other nodes.

RESOLUTION:
Fixed to maintain consistency during parallel node communication; hacp is used for transferring temp files.

* 4015287 (Tracking ID: 4010255)

SYMPTOM:
"vfradmin promote" fails to promote target FS with selinux enabled.

DESCRIPTION:
During the promote operation, VxFS remounts the FS at the target. When remounting the FS to remove the "protected on" flag from the target, VxFS first fetches the current mount options. With SELinux enabled (either permissive or enforcing), the OS adds the default "seclabel" option to the mount. When VxFS fetched the current mount options, "seclabel" was not recognized by VxFS, so the FS failed to mount.

RESOLUTION:
Code is modified to remove the "seclabel" mount option during mount processing on the target.
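
A minimal sketch of this kind of option filtering, assuming a comma-separated option string; this is illustrative userspace code, not the VxFS implementation:

    #include <stdio.h>
    #include <string.h>

    /* Remove every occurrence of 'name' from a comma-separated option
     * string, rebuilding the list in place. */
    static void strip_option(char *opts, const char *name)
    {
        char out[256] = "";
        char *tok = strtok(opts, ",");

        while (tok != NULL) {
            if (strcmp(tok, name) != 0) {
                if (out[0] != '\0')
                    strcat(out, ",");
                strcat(out, tok);
            }
            tok = strtok(NULL, ",");
        }
        strcpy(opts, out);
    }

    int main(void)
    {
        char opts[256] = "rw,seclabel,largefiles";
        strip_option(opts, "seclabel");
        printf("%s\n", opts);    /* prints: rw,largefiles */
        return 0;
    }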

* 4015835 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate hugepages. Under low-memory conditions, it is possible that get_user_pages() returns compound pages to VxFS. In a compound page, only the head page has a valid mapping set; all other pages are marked as TAIL_MAPPING. If VxFS gets a compound page during uiomove, it tries to check the writable mapping for all pages of that compound page, which can dereference an illegal address (TAIL_MAPPING) and cause the panic in the stack. VxFS does not support huge pages, but a compound page can still be present on the system and VxFS might get one through get_user_pages().

RESOLUTION:
Code is modified to use the head page when tail pages of a compound page are encountered while VxFS checks for a writable mapping.
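
In the Linux kernel, PageTail() and compound_head() are the standard primitives for this: compound_head() resolves a tail page of a compound (huge) page to its head page, the only page with a valid mapping. A kernel-style fragment of the guard (illustrative; the actual VxFS code is not shown here):

    #include <linux/mm.h>    /* PageTail(), compound_head(), page_mapping() */

    /* Illustrative helper: resolve a possibly-tail page to its head page
     * before inspecting the mapping, avoiding the TAIL_MAPPING dereference. */
    static struct address_space *vx_safe_mapping(struct page *p)
    {
        if (PageTail(p))
            p = compound_head(p);
        return page_mapping(p);
    }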

* 4016721 (Tracking ID: 4016927)

SYMPTOM:
Remove tier command panics the system, crash has panic reason "BUG: unable to handle kernel NULL pointer dereference at 0000000000000150"

DESCRIPTION:
When fsvoladm removes a device, not all devices are moved, and the device count also remains the same unless it is the last device in the array. Hence a check for a free slot is needed before trying to access a device.

RESOLUTION:
In the device list, check for a free slot before accessing the device in that slot.

* 4017282 (Tracking ID: 4016801)

SYMPTOM:
Filesystem marked for full fsck.

DESCRIPTION:
In a cluster environment, some operations can be performed on the primary node only. When such operations are executed from a secondary node, a message is passed to the primary node. It is possible that the sender node has a transaction that has not yet reached disk. In such a scenario, if the sender node is rebooted, the primary node can see stale data.

RESOLUTION:
Code is modified to make sure transactions are flushed to the log disk before sending the message to the primary.

* 4017818 (Tracking ID: 4017817)

SYMPTOM:
VFR performance enhancement changes.

DESCRIPTION:
In order to increase the overall throughput of VFR, code changes have been done to replicate files in parallel.

RESOLUTION:
Code changes have been done to replicate file data and metadata in parallel over multiple socket connections.

* 4017820 (Tracking ID: 4017819)

SYMPTOM:
Cloud tier add operation fails when user is trying to add the AWS GovCloud.

DESCRIPTION:
Adding AWS GovCloud as a cloud tier was not supported in InfoScale. With these changes, the user is able to add an AWS GovCloud type of cloud.

RESOLUTION:
Added support for AWS GovCloud.

* 4017934 (Tracking ID: 4015059)

SYMPTOM:
VFR can hang when job is paused

DESCRIPTION:
When a VFR job is paused, the thread doing the actual replication is canceled. The cancelability of threads needs to be disabled while certain code paths execute; in some cases this handling was missing, which could leave the VFR command in a hang state.

RESOLUTION:
Code is added to handle all such scenarios.
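
POSIX threads expose exactly this control through pthread_setcancelstate(); whether VxFS uses this API or an equivalent internal mechanism is an assumption. A minimal sketch of disabling cancellation around a critical code path:

    #include <pthread.h>

    /* Disable cancellation around a code path that must not be interrupted
     * by a pause/cancel request, then restore the previous state. */
    static void replication_step(void)
    {
        int oldstate;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);

        /* ... bookkeeping that must run to completion ... */

        pthread_setcancelstate(oldstate, NULL);
        /* A pending cancel, if any, takes effect at the next
         * cancellation point after the state is restored. */
    }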

* 4018770 (Tracking ID: 4018197)

SYMPTOM:
VxFS module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release, and it has kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.3.

* 4019877 (Tracking ID: 4019876)

SYMPTOM:
vxfsmisc.so is a publicly shared library for Samba and doesn't require an InfoScale license for its usage.

DESCRIPTION:
vxfsmisc.so is a publicly shared library for Samba and doesn't require an InfoScale license for its usage.

RESOLUTION:
Removed the license dependency in the vxfsmisc library.

* 4020055 (Tracking ID: 4012049)

SYMPTOM:
The fsck_vxfs(1m) man page doesn't contain information related to the "metasave" and "target" options.

DESCRIPTION:
The fsck_vxfs(1m) man page doesn't contain information related to the "metasave" and "target" options.

RESOLUTION:
Changes have been done to document these two options: 1. metasave 2. target

* 4020056 (Tracking ID: 4012049)

SYMPTOM:
"fsck" supports the "metasave" option but it was not documented anywhere.

DESCRIPTION:
"fsck" supports the "metasave" option while executing with the "-y" option. but it is not documented anywhere. Also, it tries to store metasave in a particular location. The user doesn't have the option to specify the location. If that location doesn't have enough space, "fsck" fails to take the metasave and it continues to change filesystem state.

RESOLUTION:
Code changes have been done to add a new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the "usage" message of the "fsck" binary.

* 4020912 (Tracking ID: 4020758)

SYMPTOM:
Filesystem mount or fsck with -y may see hang during log replay

DESCRIPTION:
The fsck utility performs the log replay. This log replay is performed during the mount operation, or during a filesystem check with the -y option, if needed. In certain cases, if there are a lot of logs to be replayed, fsck ends up consuming the entire buffer cache. This results in an out-of-buffer scenario and a hang.

RESOLUTION:
Code is modified to make sure enough buffers are always available.

Patch ID: VRTSvxfs-7.4.2.1300

* 4002850 (Tracking ID: 3994123)

SYMPTOM:
Running fsck on a system may show LCT count mismatch errors

DESCRIPTION:
Multi-block merged extents in IFIAT inodes may have only the first block of the extent processed, leaving some references unprocessed. This leads to LCT counts not matching. Resolving the issue requires a full fsck.

RESOLUTION:
Code changes added to process merged multi-block extents in IFIAT inodes correctly.

* 4005220 (Tracking ID: 4002222)

SYMPTOM:
The cluster can hang if the cluster filesystem is FCL enabled and its disk layout version is greater than or equal to 14.

DESCRIPTION:
VxFS worker threads that are responsible for handling "File Change Log" feature-related operations can be stuck in a deadlock if the disk layout version of the FCL-enabled cluster filesystem is greater than or equal to 14.

RESOLUTION:
Code changes have been done to prevent a cluster-wide hang in a scenario where the cluster filesystem is FCL enabled and the disk layout version is greater than or equal to 14.

* 4010353 (Tracking ID: 3993935)

SYMPTOM:
The fsck command of VxFS may hit a segmentation fault with the following stack:
#0  get_dotdotlst ()
#1  find_dotino ()
#2  dir_sanity ()
#3  pass2 ()
#4  iproc_do_work ()
#5  start_thread ()
#6  sysctl ()

DESCRIPTION:
TURNON_CHUNK() and TURNOFF_CHUNK() modify the values of their arguments.

RESOLUTION:
Code has been modified to fix the issue.

* 4012061 (Tracking ID: 4001378)

SYMPTOM:
VxFS module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.2

* 4012522 (Tracking ID: 4012243)

SYMPTOM:
During IO, MM semaphore lock contention may reduce performance.

DESCRIPTION:
mmap locks taken during IO may introduce lock contention and reduce IO performance.

RESOLUTION:
A new VxFS API is introduced to skip these locks on a specific file whenever required.

* 4012765 (Tracking ID: 4011570)

SYMPTOM:
WORM attribute replication support in VxFS.

DESCRIPTION:
WORM attribute replication is not supported in VFR. Modified code to replicate WORM attribute during attribute processing in VFR.

RESOLUTION:
Code is modified to replicate WORM attributes in VFR.

* 4012787 (Tracking ID: 4007328)

SYMPTOM:
After the replication service is stopped on the target, the job fails at the source only after processing all the FCL records.

DESCRIPTION:
After the replication service is stopped on the target, the job should fail at the source immediately; instead it fails only after processing all the FCL records. When the target breaks the connection, the source receives the error and the job could fail while reading FCL records. However, although the source learns that the connection is closed, the thread processing the FCL does not receive the signal to stop, so it ends only after processing is complete.

RESOLUTION:
If the replication service is stopped at the target while FCL records are being processed, the job now fails immediately based on the return status of the connection.
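
The control flow this implies can be sketched as follows; all names (job, read_fcl_record, conn_closed, replicate_record) are hypothetical:

    /* Check the connection on every iteration of the FCL processing loop
     * and fail fast on closure, instead of draining all records first. */
    while (read_fcl_record(job, &rec) == 0) {
        if (job->conn_closed)        /* set when the target drops the link */
            return -ECONNRESET;      /* fail the job immediately */
        replicate_record(job, &rec);
    }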

* 4012800 (Tracking ID: 4008123)

SYMPTOM:
If a file has more than one named extended attribute set and the job is paused, replication of the remaining named extended attributes fails. (This behaviour is intermittent.)

DESCRIPTION:
During VFR replication, if the job is paused while a file's nxattrs are being replicated, then when the job is resumed, the seqno triplet received from the target side causes the source to miss the remaining nxattrs.

RESOLUTION:
Handling of named extended attributes is reworked to make sure the remaining attributes are not missed on resume.

* 4012801 (Tracking ID: 4001473)

SYMPTOM:
If a file has named extended attributes set, VFR fails to replicate it and the job goes into the failed state.

DESCRIPTION:
VFR tries to use open(2) on nxattr files; since these files are not visible outside VxFS, the call fails with ENOTDIR.

RESOLUTION:
The internal VxFS-specific API is used to get a valid file descriptor for nxattr files.

* 4012842 (Tracking ID: 4006192)

SYMPTOM:
system panic with NULL pointer de-reference.

DESCRIPTION:
VxFS supports checkpoints, i.e., point-in-time image copies of a filesystem, and for this it needs to keep copies of some metadata for the checkpoint. In some cases it misses making a copy. Later, while processing files corresponding to this missed metadata, it finds empty extent information (the extent information is the block map for a given file). This empty extent information causes the NULL pointer dereference.

RESOLUTION:
Code changes are made to fix this issue.

* 4012936 (Tracking ID: 4000465)

SYMPTOM:
FSCK binary loops when it detects break in sequence of log ids.

DESCRIPTION:
When an FS is not cleanly unmounted, it ends up with an unflushed intent log. This intent log is flushed either during the next mount or when fsck is run on the FS. To build the transaction list that needs to be replayed, VxFS currently uses a binary search to find the head and the tail. If there is breakage in the intent log, the current code is susceptible to looping. To avoid this loop, VxFS now uses a sequential search instead of a binary search to find the range.

RESOLUTION:
Code is modified to use a sequential search instead of a binary search to find the replayable transaction range.
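
A self-contained model of the sequential approach: a single left-to-right pass that restarts the candidate run at every break in the id sequence, leaving the last contiguous run as the replayable range (illustrative only, not the fsck source):

    #include <stdio.h>

    int main(void)
    {
        int ids[] = {100, 101, 102, 7, 8, 9, 10};    /* a break after 102 */
        int n = 7, head = 0;

        /* Sequential scan: restart the run whenever the sequence breaks. */
        for (int i = 1; i < n; i++)
            if (ids[i] != ids[i - 1] + 1)
                head = i;

        printf("replay ids[%d..%d]\n", head, n - 1);
        return 0;
    }

Unlike a binary search, the scan makes no monotonicity assumption, so a break in the sequence cannot make it loop.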

* 4013084 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad, it could cause a hang while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1() for logversion >= VX_LOGVERSION13 if we are freeing whole AUs we set VX_AU_SMAPFREE flag for those AUs. This ensures that revoke of delegation for that AU is delayed till the AU has SMAP free transaction in progress. This flag gets cleared either in post commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario when we are trying to free a whole AU and its smap is marked bad, we do not return any error to vx_extfree1() and neither do we add the subfunction to free the extent to the transaction. So, the VX_AU_SMAPFREE flag is not cleared and remains set even if there is no SMAP free transaction in progress. This could lead to hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been done to add error handling in vx_extfree1 to clear the VX_AU_SMAPFREE flag in case an error is returned due to a bad smap.

* 4013143 (Tracking ID: 4008352)

SYMPTOM:
Using VxFS mount binary inside container to mount any device might result in core generation.

DESCRIPTION:
Using the VxFS mount binary inside a container to mount any device might result in core generation. This issue is because of improper initialization of a local pointer, which is later dereferenced as a garbage value.

RESOLUTION:
This fix properly initializes all the pointers before dereferencing them.

* 4013144 (Tracking ID: 4008274)

SYMPTOM:
Race between compression thread and clone remove thread while allocating reorg inode.

DESCRIPTION:
The compression thread does the reorg inode allocation without setting i_inreuse, and it takes the HLOCK in exclusive mode; later this lock is downgraded to shared mode. While this processing is happening, the clone delete thread can do an iget on this inode and call vx_getownership without hold. If the inode is of type IFEMR or IFPTI, or is FREE, success is returned after the ownership call. Later in the same function, getownership is called with hold set before doing the processing (truncate or mark the inode as IFPTI). The first, redundant ownership call is removed.

RESOLUTION:
Delay taking ownership on inode until we check the inode mode.

* 4013626 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without this API, the user is not able to read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock; without this API, the user is not able to read its value.

RESOLUTION:
Provide an API on the mount point to read the compliance clock for that filesystem.

* 4013738 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archiver processes are running on a clustered filesystem.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation, which mainly happens when there are multiple archivers running on the same node. The allocation pattern of the Oracle archiver processes is:

1. write the header with O_SYNC
2. ftruncate-up the file to its final size (a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all allocations done in this manner go through internal allocations, i.e., allocations below the file size instead of allocations past the file size. Internal allocations are done at most 8 pages at a time, so when multiple processes do this, they all get these 8 pages alternately and the filesystem becomes very fragmented.

RESOLUTION:
Added a tunable that allocates ZFOD extents when ftruncate tries to increase the size of the file, instead of creating a hole. This eliminates the allocations internal to the file size and thus the fragmentation. Also fixed the earlier implementation of the same fix, which ran into locking issues, and fixed the performance issue while writing from the secondary node.

Patch ID: VRTSvxvm-7.4.2.1900

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, the vxiod with ID 128 can start replication while the RVG is in DCM mode; this vxiod then waits for the filesystem's response on whether a given region is used by the filesystem or not. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued on the vxiod with ID 128, hence a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO in vxiod whose ID is bigger than 127.
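
The shape of such a dispatch guard can be sketched as below; the names are hypothetical and only the reserved ID 128 comes from the description:

    /* Hypothetical sketch: worker 128 is busy with replication, so
     * filesystem-originated (MDSHIP) requests are steered to workers
     * 0..127 only, which breaks the self-deadlock. */
    #define VOL_REPLICA_IOD 128

    static int pick_mdship_worker(unsigned int hint)
    {
        return (int)(hint % VOL_REPLICA_IOD);    /* never returns >= 128 */
    }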

* 4039510 (Tracking ID: 4037915)

SYMPTOM:
Compilation errors occur due to RHEL's source code changes.

DESCRIPTION:
While compiling the RHEL 8.4 kernel (4.18.0-304) the build compilation fails due to certain RH source changes.

RESOLUTION:
The following changes have been made to work with VxVM 7.4.1:
__bdevname - deprecated
Solution: Use a struct block_device and call bdevname()

blkg_tryget_closest - placed under EXPORT_SYMBOL_GPL
Solution: Locally defined the function where the compilation error was hit

sync_core - implicit declaration
Solution: The implementation of sync_core() has been moved to the header file sync_core.h, so including this header file fixes the error
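
For the first item, the replacement API takes a struct block_device pointer; a kernel-style fragment (illustrative, with the header location varying across kernel versions):

    #include <linux/fs.h>        /* bdevname(); header varies by kernel */
    #include <linux/kernel.h>

    /* __bdevname(dev_t, buf) is deprecated on the RHEL 8.4 kernel; the
     * replacement takes the struct block_device itself. */
    static void log_bdev_name(struct block_device *bdev)
    {
        char name[BDEVNAME_SIZE];

        bdevname(bdev, name);
        printk(KERN_INFO "device: %s\n", name);
    }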

* 4039511 (Tracking ID: 4037914)

SYMPTOM:
Crash while running VxVM cert.

DESCRIPTION:
While running the VM cert, a panic is reported.

RESOLUTION:
The bio is set up and submitted to the IOD layer in our own vxvm_gen_strategy() function.

* 4039512 (Tracking ID: 4017334)

SYMPTOM:
A VXIO call stack trace is generated in /var/log/messages.

DESCRIPTION:
This issue occurs due to a limitation in the way InfoScale interacts with the RHEL8.2 kernel.
 Call Trace:
 kmsg_sys_rcv+0x16b/0x1c0 [vxio]
 nmcom_get_next_mblk+0x8e/0xf0 [vxio]
 nmcom_get_hdr_msg+0x108/0x200 [vxio]
 nmcom_get_next_msg+0x7d/0x100 [vxio]
 nmcom_wait_msg_tcp+0x97/0x160 [vxio]
 nmcom_server_main_tcp+0x4c2/0x11e0 [vxio]

RESOLUTION:
Changes are made in header files for function definitions when the RHEL version is >= 8.2.
This kernel warning can be safely ignored as it doesn't have any functionality impact.

* 4039517 (Tracking ID: 4012763)

SYMPTOM:
IO hang may happen in VVR (Veritas Volume Replicator) configuration when SRL overflows for one rlink while another one rlink is in AUTOSYNC mode.

DESCRIPTION:
In VVR, if SRL overflow happens for one rlink (R1) while another rlink (R2) is undergoing AUTOSYNC, then AUTOSYNC is aborted for R2, R2 gets detached, and DCM mode is activated on the R1 rlink.

However, due to a race condition in the code handling the AUTOSYNC abort and the DCM activation in parallel, the DCM could not be activated properly and the IO that caused the DCM activation gets queued incorrectly, which results in an IO hang.

RESOLUTION:
The code has been modified to fix the race issue in handling the AUTOSYNC abort and DCM activation at same time.

Patch ID: VRTSvxvm-7.4.2.1500

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
Vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has ended for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, the vxiod with ID 128 can start replication while the RVG is in DCM mode; this vxiod then waits for the filesystem's response on whether a given region is used by the filesystem or not. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued on the vxiod with ID 128, hence a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO in vxiod whose ID is bigger than 127.

* 4021238 (Tracking ID: 4008075)

SYMPTOM:
Observed with ASL changes for NVMe, in a reboot scenario: on every reboot the machine hit a panic, in a loop.

DESCRIPTION:
The panic occurred for such split bios. The root cause is that RHEL8 introduced a new field named __bi_remaining, which maintains the count of chained bios; for every endio, __bi_remaining gets atomically decreased in the bio_endio() function. While decreasing __bi_remaining, the OS checks that __bi_remaining is not <= 0; in our case __bi_remaining was always 0, so we hit the OS BUG_ON.

RESOLUTION:
>>> For scsi devices maxsize is 4194304,
[   26.919333] DMP_BIO_SIZE(orig_bio) : 16384, maxsize: 4194304
[   26.920063] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 4194304

>>>and for NVMe devices maxsize is 131072
[  153.297387] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072
[  153.298057] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072
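
The per-queue limit can be read with the standard block-layer accessor; a kernel-style fragment (illustrative only, not the DMP source):

    #include <linux/blkdev.h>

    /* Largest bio a device accepts; clones split to fit this keep the
     * bio_endio() accounting on chained bios (__bi_remaining) balanced. */
    static unsigned int dmp_max_bio_bytes(struct request_queue *q)
    {
        /* queue_max_hw_sectors() reports 512-byte sectors */
        return queue_max_hw_sectors(q) << 9;
    }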

* 4021240 (Tracking ID: 4010612)

SYMPTOM:
$ vxddladm set namingscheme=ebn lowercase=no
This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), so every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0....

DESCRIPTION:
$ vxddladm set namingscheme=ebn lowercase=no
This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure (nvme0, nvme1, and so on), so every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0....
eg.
smicro125_nvme0_0 <--- disk1
smicro125_nvme1_0 <--- disk2

For lowercase=no, the current code suppresses the suffix digit of the enclosure name, so multiple disks get the same name; udid_mismatch is then shown because the UDID of the private region does not match the DDL, and the DDL database shows wrong information because multiple disks get the same name.

smicro125_nvme_0 <--- disk1   <<<<<<<-----here suffix digit of nvme enclosure suppressed
smicro125_nvme_0 <--- disk2

RESOLUTION:
The suffix integer is now appended while making the da_name.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU to some other thread. If the thread that then gets scheduled on the CPU requests the same spinlock that the vxstat thread had acquired, the result is a hard lockup.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.
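
This is the standard snapshot-then-copy pattern: copy_to_user() may fault and sleep, so it must never run under a spinlock. A kernel-style sketch with hypothetical names (the real vxstat structures differ):

    #include <linux/errno.h>
    #include <linux/spinlock.h>
    #include <linux/uaccess.h>

    /* Hypothetical stats record. */
    struct io_stats { unsigned long reads, writes; };

    static int stats_read(void __user *ubuf, struct io_stats *s,
                          spinlock_t *lock)
    {
        struct io_stats snap;
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        snap = *s;                           /* snapshot under the lock */
        spin_unlock_irqrestore(lock, flags);

        /* The fault-prone copy now runs with no spinlock held. */
        return copy_to_user(ubuf, &snap, sizeof(snap)) ? -EFAULT : 0;
    }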

* 4021359 (Tracking ID: 4010040)

SYMPTOM:
A security issue occurs during Volume Manager configuration.

DESCRIPTION:
This issue occurs during the configuration of the VRTSvxvm package.

RESOLUTION:
VVR daemon is updated so that this security issue no longer occurs.

* 4021366 (Tracking ID: 4008741)

SYMPTOM:
VxVM device files appear to have the device_t SELinux label.

DESCRIPTION:
If an unauthorized or modified device is allowed to exist on the system, there is a possibility that the system may perform unintended or unauthorized operations. For example, ls -LZ shows:
...
...
/dev/vx/dsk/testdg/vol1   system_u:object_r:device_t:s0
/dev/vx/dmpconfig         system_u:object_r:device_t:s0
/dev/vx/vxcloud           system_u:object_r:device_t:s0

RESOLUTION:
Code changes were made to change the device labels to misc_device_t and fixed_disk_device_t.

* 4021428 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq

Linux has deprecated the member next_rq

DESCRIPTION:
The issue was observed due to changes in the OS structure.

RESOLUTION:
Code changes are done in the required files.

* 4021748 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable (dmp_native_support) on CentOS 8, the below error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Changes are done in the required files for DMP native support on CentOS 8.

Patch ID: VRTSvxvm-7.4.2.1400

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
Vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has ended for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, the vxiod with ID 128 can start replication while the RVG is in DCM mode; this vxiod then waits for the filesystem's response on whether a given region is used by the filesystem or not. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued on the vxiod with ID 128, hence a deadlock situation.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO in vxiod whose ID is bigger than 127.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if a page fault happens, the thread relinquishes the CPU to some other thread. If the thread that then gets scheduled on the CPU requests the same spinlock that the vxstat thread had acquired, the result is a hard lockup.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4021428 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq

Linux has deprecated the member next_rq

DESCRIPTION:
The issue was observed due to changes in the OS structure.

RESOLUTION:
Code changes are done in the required files.

* 4021748 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable (dmp_native_support) on CentOS 8, the below error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Changes are done in the required files for DMP native support on CentOS 8.

Patch ID: VRTSvxvm-7.4.2.1300

* 4008606 (Tracking ID: 4004455)

SYMPTOM:
Snapshot restore failed on an instant snapshot created on an older-version DG.

DESCRIPTION:
Create a DG with an older version, create an instant snapshot, do some IOs on the source volume, and then try to restore the snapshot; the restore failed in this scenario.

RESOLUTION:
The RCA for this issue is that flag values were conflicting. This has been fixed and the code has been checked in.

* 4010892 (Tracking ID: 4009107)

SYMPTOM:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth, so an error occurs during SSL initialization.

DESCRIPTION:
CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth. The SSL_CTX_set_verify_depth() API decides the depth of certificates (in the /etc/vx/vvr/cacert file) to be verified, which was limited to a count of 1 in the code. Thus the intermediate CA certificate present first in /etc/vx/vvr/cacert (the depth-1 CA/issuer certificate for the server certificate) could be obtained and verified during connection, but the root CA certificate (the depth-2, higher CA certificate) could not be verified while connecting, hence the error.

RESOLUTION:
Removed the call to the SSL_CTX_set_verify_depth() API so that the depth is handled automatically.
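
In OpenSSL terms, the change amounts to dropping the explicit depth cap; a minimal sketch (the surrounding VVR initialization code is an assumption):

    #include <openssl/ssl.h>

    void init_verify(SSL_CTX *ctx)
    {
        /* Before the fix (per the description above) the depth was pinned:
         *     SSL_CTX_set_verify_depth(ctx, 1);
         * which stopped verification at the first intermediate CA.
         * With the call removed, OpenSSL's default depth limit applies,
         * so the root CA at depth 2 in /etc/vx/vvr/cacert verifies. */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    }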

* 4011866 (Tracking ID: 3976678)

SYMPTOM:
vxvm-recover:  cat: write error: Broken pipe error encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog and is reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, and the first subshell is for the cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is data to be read in the created cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4011971 (Tracking ID: 3991668)

SYMPTOM:
When configured with secondary logging, VVR reports data inconsistency when it hits the "No IBC message arrived" error.

DESCRIPTION:
It might happen that the secondary node has served updates with a larger sequence ID when the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Any updates whose sequence IDs are larger cannot start data volume writes and get queued. Data loss happens when the secondary receives the atomic commit and clears the queue. Hence vradmin verifydata reports data inconsistency.

RESOLUTION:
Code changes have been made to trigger updates in order to start data volume writes.

* 4012485 (Tracking ID: 4000387)

SYMPTOM:
The existing VxVM module fails to load on RHEL 8.2.

DESCRIPTION:
RHEL 8.2 is a new release and had a few KABI changes on which VxVM compilation breaks.

RESOLUTION:
Compiled VxVM code against 8.2 kernel and made changes to make it compatible.

* 4012848 (Tracking ID: 4011394)

SYMPTOM:
As a part of verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded.

DESCRIPTION:
On verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded because the design was single threaded, which caused a bottleneck and performance issues.

RESOLUTION:
Code changes are around multiple IO queues in the kernel, a multithreaded request loop to fetch IOs from the kernel queues into a userland global queue, and allowing curl threads to work in parallel.
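
A toy model of that design, with the fetcher and uploader roles reduced to a single producer loop and a pool of consumer threads (hypothetical, self-contained; the real queues and curl plumbing are more involved):

    #include <pthread.h>
    #include <stdio.h>

    #define QCAP 64
    static int q[QCAP], qh, qt, qn, done;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    /* Uploader ("curl") threads drain the shared userland queue in parallel. */
    static void *uploader(void *arg)
    {
        for (;;) {
            pthread_mutex_lock(&m);
            while (qn == 0 && !done)
                pthread_cond_wait(&cv, &m);
            if (qn == 0 && done) {
                pthread_mutex_unlock(&m);
                return NULL;
            }
            int io = q[qh]; qh = (qh + 1) % QCAP; qn--;
            pthread_mutex_unlock(&m);
            printf("upload io %d (thread %ld)\n", io, (long)arg);
        }
    }

    int main(void)
    {
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, uploader, (void *)i);
        for (int io = 0; io < 16; io++) {    /* fetcher: kernel -> user queue */
            pthread_mutex_lock(&m);
            q[qt] = io; qt = (qt + 1) % QCAP; qn++;
            pthread_cond_signal(&cv);
            pthread_mutex_unlock(&m);
        }
        pthread_mutex_lock(&m);
        done = 1;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&m);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }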

* 4013155 (Tracking ID: 4010458)

SYMPTOM:
In VVR (Veritas Volume replicator), the rlink might inconsistently disconnect due to unexpected transactions with below messages:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

DESCRIPTION:
In VVR (Veritas Volume replicator), a transaction is triggered when a change in the VxVM/VVR objects needs 
to be persisted on disk. 

In some scenarios, a few unnecessary transactions were getting triggered in a loop. This was causing multiple rlink disconnects, with the below message logged frequently:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

One such unexpected transaction was happening due to open/close on a volume as part of SmartIO caching. Additionally, the vradmind daemon was also issuing some open/close calls on volumes as part of IO statistics collection, which was causing unnecessary transactions.

Additionally, some unexpected transactions were happening due to incorrect checks in the code related to some temporary flags on the volume.

RESOLUTION:
The code is fixed to disable SmartIO caching on the volumes if SmartIO caching is not configured on the system. Additionally, the code is fixed to avoid the unexpected transactions caused by incorrect checking of the temporary flags on the volume.

* 4013169 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
High replication-related IO load on the VVR secondary, combined with the requirement of maintaining write-order fidelity with limited memory pools, created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing the CPU consumption.

RESOLUTION:
Limited the way in which VVR consumes its resources so that a high pending IO load does not result in high CPU consumption.

* 4013718 (Tracking ID: 4008942)

SYMPTOM:
The file system gets disabled when the cache object gets full, and hence the unmount fails.

DESCRIPTION:
When the cache object gets full, IO errors occur on the volume. Because IOs are not served while the cache object is full, the IOs become inconsistent; because of this IO inconsistency, VxFS gets disabled and the unmount fails.

RESOLUTION:
The issue has been fixed and the code has been checked in.

Patch ID: VRTScps-7.4.2.1600

* 4038101 (Tracking ID: 4034933)

SYMPTOM:
After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."

DESCRIPTION:
This issue occurs due to unexpected locking of the CP server database that is related to the stale key detection feature.

RESOLUTION:
This hotfix updates the VRTScps RPM so that the unexpected database lock is cleared and the nodes can be updated successfully.

Patch ID: VRTSdbac-7.4.2.1400

* 4037952 (Tracking ID: 4037048)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 4(RHEL8.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
4(RHEL8.4) is now introduced.

Patch ID: VRTSdbac-7.4.2.1200

* 4022056 (Tracking ID: 4019674)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 3(RHEL8.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 3(RHEL8.3) is now introduced.

Patch ID: VRTSdbac-7.4.2.1100

* 4012751 (Tracking ID: 4012742)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 2(RHEL8.2).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 1.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
2(RHEL8.2) is now introduced.

Patch ID: VRTSvcsag-7.4.2.1700

* 4028123 (Tracking ID: 4027915)

SYMPTOM:
Processes configured for HA using the ProcessOnOnly agent get killed during shutdown or reboot, even if they are still in use.

DESCRIPTION:
Processes that are started by the ProcessOnOnly agent do not have any dependencies on vcs.service. Such processes can therefore get killed during shutdown or reboot, even if they are being used by other VCS processes. Consequently, issues occur while bringing down Infoscale services during shutdown or reboot.

RESOLUTION:
This hotfix addresses the issue by enhancing the ProcessOnOnly agent such that the configured processes have their own systemd service files. The service file is used to set dependencies, so that the corresponding process is not killed unexpectedly during shutdown or reboot.

* 4037806 (Tracking ID: 4021370)

SYMPTOM:
The AWSIP and EBSVol resources fail to come online when IMDSv2 is used for requesting instance metadata.

DESCRIPTION:
By default, the AWSIP and EBSVol agents use IMDSv1 for requesting instance metadata. If the AWS cloud environment is configured to use IMDSv2, the AWSIP and EBSVol resources fail to come online and go into the UNKNOWN state.

RESOLUTION:
This hotfix updates the AWSIP and EBSVol agents to access the instance metadata based on the instance configuration for IMDS.

Patch ID: VRTSvcsag-7.4.2.1400

* 4007372 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set to the following values as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2. ClearClone is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both, ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.

* 4007374 (Tracking ID: 1837967)

SYMPTOM:
Application agent falsely detects an application as faulted, due to corruption caused by non-redirected STDOUT or STDERR.

DESCRIPTION:
This issue can occur when the STDOUT and STDERR file descriptors of the program to be started and monitored are not redirected to a specific file or to /dev/null. In this case, an application that is started by the Online entry point inherits the STDOUT and STDERR file descriptors from the entry point. Therefore, both the entry point and the application read from and write to the same file, which may lead to file corruption and cause the agent entry point to behave unexpectedly.

RESOLUTION:
The Application agent is updated to identify whether STDOUT and STDERR for the configured application are already redirected. If not, the agent redirects them to /dev/null.
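
As an illustration of the failure mode that the agent now guards against, a start program can also redirect its output explicitly, so that the application never shares the entry point's file descriptors; the resource and script names below are hypothetical:
# hares -modify app_res StartProgram "/opt/app/bin/start.sh > /dev/null 2>&1"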

* 4012397 (Tracking ID: 4012396)

SYMPTOM:
The AzureDisk agent fails to work with the latest Azure Storage SDK.

DESCRIPTION:
The latest Azure Storage Python SDK does not work with the InfoScale AzureDisk agent.

RESOLUTION:
The AzureDisk agent now supports the latest Azure Storage Python SDK.

* 4019536 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for the NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes such stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: the number of days; files older than <days> days are deleted
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected value is 30 days. Thus, only the relevant files remain to be moved, and the Online operation completes in time.

Patch ID: VRTSvcsag-7.4.2.1300

* 4007692 (Tracking ID: 4006979)

SYMPTOM:
When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.

DESCRIPTION:
When an AzureDisk resource is online on one node, the status of that resource appears as UNKNOWN, instead of OFFLINE, on the other nodes in the cluster. Also, if the resource is brought online on a different node, its status on the remaining nodes appears as UNKNOWN. However, if the resource is not online on any node, its status correctly appears as OFFLINE on all the nodes.
This issue occurs when the VM name on the Azure portal does not match the local hostname of the cluster node. The monitor operation of the agent compares these two values to identify whether the VM to which the AzureDisk resource is attached is part of a cluster or not. If the values do not match, the agent incorrectly concludes that the resource is attached to a VM outside the cluster. Therefore, it displays the status of the resource as UNKNOWN.

RESOLUTION:
The AzureDisk agent is modified to compare the VM name with the appropriate attribute of the agent so that the status of an AzureDisk resource is reported correctly.

* 4007763 (Tracking ID: 4007764)

SYMPTOM:
The log file related to NFS locks is flooded with "sync_dir:copy failed for link" error messages.

DESCRIPTION:
The smsyncd daemon used by the NFSRestart agent copies the symbolic links and the NFS locks from the /var/statmon/sm directory to a specific directory. These files and links are used to track the clients that hold locks on the NFS mount points. If this directory already has a symbolic link with the same name that the smsyncd daemon is trying to copy, the /bin/cp command fails and logs an error message.

RESOLUTION:
The smsyncd daemon is enhanced to copy the symbolic links even if a link with the same name is already present.

* 4008986 (Tracking ID: 3860766)

SYMPTOM:
When the swap space is in terabytes, the HostMonitor agent shows incorrect swap space usage in the agent logs.

DESCRIPTION:
When the swap space on the system is in terabytes, the HostMonitor agent incorrectly calculates the available swap capacity, and the incorrect swap space usage is displayed in the HostMonitor agent log.

RESOLUTION:
The HostMonitor agent code is modified to correctly calculate the swap space capacity on the system.

Patch ID: VRTSvxfen-7.4.2.1700

* 4029493 (Tracking ID: 4029261)

SYMPTOM:
An entire InfoScale cluster may go down unexpectedly if one of its nodes receives a RECONFIG message during a shutdown or a restart operation.

DESCRIPTION:
If a cluster node receives a RECONFIG message while a shutdown or a restart operation is in progress, it may participate in the fencing race. The node may also win the race and then proceed to shut down. If this situation occurs, the fencing module panics the nodes that lost the race, which may cause the entire cluster to go down.

RESOLUTION:
This hotfix updates the fencing module so that a cluster node does not participate in a fencing race if it receives a RECONFIG message while a shutdown or a restart operation is in progress.

* 4040837 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSvxfen-7.4.2.1300

* 4021055 (Tracking ID: 4010237)

SYMPTOM:
On the Red Hat Enterprise Linux operating system, device files do not have the correct SELinux labels.

DESCRIPTION:
On RHEL, device files must have the correct SELinux labels to avoid unauthorized access.

RESOLUTION:
The code is modified to set the correct SELinux labels on the device files.
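
To verify the labels after applying the patch, the SELinux context of the module device files can be inspected with ls -Z; the device paths below are illustrative and may differ on a given installation:
# ls -Z /dev/vxfen /dev/llt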

Patch ID: VRTSvxfen-7.4.2.1200

* 4022054 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSvxfen-7.4.2.1100

* 4006982 (Tracking ID: 3988184)

SYMPTOM:
The vxfen process cannot complete due to an incomplete vxfentab file.

DESCRIPTION:
When I/O fencing starts, the vxfen startup script creates the /etc/vxfentab file on each node. If the coordination disk discovery is slow, the vxfen startup script fails to include all the coordination points in the vxfentab file. As a result, the vxfen startup script gets stuck in a loop.

RESOLUTION:
The vxfen startup process is modified to exit from the loop if it gets stuck while running 'vxfenconfig -c'. After the process exits the loop, systemd starts vxfen again and tries to use the updated vxfentab file.
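
If fencing still does not come up, the service and fencing states can be checked as follows (shown as a suggestion, not a required procedure):
# systemctl status vxfen
# vxfenadm -d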

* 4007375 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a long time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
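
For example, to allow five minutes for the disk group discovery, add the following entry to /etc/vxfenmode (the value shown is illustrative; the default and maximum is 600 seconds):
getdisks_timeout=300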

* 4007376 (Tracking ID: 3996218)

SYMPTOM:
In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.

DESCRIPTION:
When you configure fencing in the customized mode and run the 'vxfenconfig -c' command, the vxfenconfig utility reports the 'VXFEN ERROR V-11-1-6 vxfen already configured...' error. It also creates a new vxfend process even though VxFen is already configured. Such redundant processes may impact the performance of the system.

RESOLUTION:
The vxfenconfig utility is modified so that it does not create a new vxfend process when VxFen is already configured.

* 4007677 (Tracking ID: 3970753)

SYMPTOM:
Freeing uninitialized or garbage memory causes a panic in the vxfen module.

DESCRIPTION:
The vxfen kernel module attempts to free an object that may not have been initialized. Freeing such uninitialized or garbage memory causes a panic.

RESOLUTION:
Veritas has modified the VxFen kernel module to fix the issue by initializing the object before attempting to free it.

Patch ID: VRTSamf-7.4.2.1900

* 4042650 (Tracking ID: 4041703)

SYMPTOM:
The system panics when the Mount and the CFSMount agents fail to register with AMF.

DESCRIPTION:
This issue occurs after an operating system upgrade. The agents fail to register with AMF, which leads to a system panic.

RESOLUTION:
The AMF module is updated to support the cursor in mount structures, which is used starting with RHEL 8.4.

Patch ID: VRTSamf-7.4.2.1700

* 4040834 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSamf-7.4.2.1300

* 4019533 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

* 4021057 (Tracking ID: 4010237)

SYMPTOM:
On the Red Hat Enterprise Linux operating system, device files do not have the correct SELinux labels.

DESCRIPTION:
On RHEL, device files must have the correct SELinux labels to avoid unauthorized access.

RESOLUTION:
The code is modified to set the correct SELinux labels on the device files.

Patch ID: VRTSamf-7.4.2.1200

* 4022053 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSamf-7.4.2.1100

* 4012746 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSgab-7.4.2.1700

* 4040831 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSgab-7.4.2.1300

* 4021059 (Tracking ID: 4010237)

SYMPTOM:
On the Red Hat Enterprise Linux operating system, device files do not have the correct SELinux labels.

DESCRIPTION:
On RHEL, device files must have the correct SELinux labels to avoid unauthorized access.

RESOLUTION:
The code is modified to set the correct SELinux labels on the device files.

Patch ID: VRTSgab-7.4.2.1200

* 4022052 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSgab-7.4.2.1100

* 4012745 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

* 4013034 (Tracking ID: 4011683)

SYMPTOM:
The GAB module fails to start, and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to create the GAB device node because the command's format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, the major device numbers of all those drivers get passed to the mknod command. Instead, the command must receive the major device number of the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.
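
The underlying problem can be illustrated with /proc/devices: a substring match may return several major numbers, whereas an exact match on the driver name returns only one. The commands below are a sketch of this distinction, not the shipped startup script:
# grep gab /proc/devices                        # may also match names that merely contain "gab"
# awk '$2 == "gab" { print $1 }' /proc/devices  # matches the GAB driver name exactly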

Patch ID: VRTSllt-7.4.2.1700

* 4030882 (Tracking ID: 4029253)

SYMPTOM:
LLT may not reuse the buffer slots for which a NAK is received from earlier RDMA writes.

DESCRIPTION:
On receiving the buffer advertisement after an RDMA write, LLT also waits for the hardware/OS ACK for that RDMA write. Only after the ACK is received does LLT set the state of the buffers to free (usable). If the connection between the cluster nodes breaks after LLT receives the buffer advertisement but before it receives the ACK, the local node generates a NAK. LLT does not acknowledge this NAK, and so that specific buffer slot remains unusable. Over time, the number of buffer slots in the unusable state increases, which sets the flow control for the LLT client. This condition leads to an FSS I/O hang.

RESOLUTION:
This hotfix updates the LLT module to mark a buffer slot as free (usable) even when a NAK is received from the previous RDMA write.

* 4038101 (Tracking ID: 4034933)

SYMPTOM:
After installing VRTScps 6.2.1.002, the following error is logged in cpserver_A.log: "CPS CRITICAL V-97-1400-22017 Error executing update nodes set is_reachable..."

DESCRIPTION:
This issue occurs due to unexpected locking of the CP server database that is related to the stale key detection feature.

RESOLUTION:
This hotfix updates the VRTScps RPM so that the unexpected database lock is cleared and the nodes can be updated successfully.

* 4039694 (Tracking ID: 4037048)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 4 (RHEL8.4).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 3.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 4 (RHEL8.4) is now introduced.

Patch ID: VRTSllt-7.4.2.1300

* 4019535 (Tracking ID: 4018581)

SYMPTOM:
The LLT module fails to start, and the system log messages indicate a missing IP address.

DESCRIPTION:
When only the low priority LLT links are configured over UDP, UDPBurst mode must be disabled. UDPBurst mode must only be enabled when the high priority LLT links are configured over UDP. If the UDPBurst mode gets enabled while configuring the low priority links, the LLT module fails to start and logs the following error: "V-14-2-15795 missing ip address / V-14-2-15800 UDPburst:Failed to get link info".

RESOLUTION:
This hotfix updates the LLT module to not enable the UDPBurst mode when only the low priority LLT links are configured over UDP.
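
For reference, low-priority LLT links over UDP are configured in /etc/llttab with entries such as the following sketch; the link tags, ports, and addresses are illustrative:
link-lowpri link1 udp - udp 50001 - 192.168.10.1 192.168.10.255
link-lowpri link2 udp - udp 50002 - 192.168.20.1 192.168.20.255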

* 4021058 (Tracking ID: 4010237)

SYMPTOM:
On the Red Hat Enterprise Linux operating system, device files do not have the correct SELinux labels.

DESCRIPTION:
On RHEL, device files must have the correct SELinux labels to avoid unauthorized access.

RESOLUTION:
The code is modified to set the correct SELinux labels on the device files.

Patch ID: VRTSllt-7.4.2.1200

* 4022049 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSllt-7.4.2.1100

* 4012744 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

* 4013036 (Tracking ID: 3985775)

SYMPTOM:
Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.

DESCRIPTION:
LLT heartbeat loss messages can appear in the system log either due to actual heartbeat drops in the network or due to heartbeat packets arriving out of order. In either case, these messages are only informative and do not indicate any issue in the LLT functionality. Sometimes, the system log may get flooded with these messages, which are not useful.

RESOLUTION:
The LLT module is updated to lower the frequency of printing the LLT heartbeat loss messages. This is achieved by increasing the number of sequential heartbeat packets that must be missed before this informative message is printed.

Patch ID: VRTSglm-7.4.2.2100

* 4037622 (Tracking ID: 4037621)

SYMPTOM:
The GLM module fails to load on RHEL8.4.

DESCRIPTION:
RHEL8.4 is a new release, and it includes kernel changes that prevent the GLM module from loading on it.

RESOLUTION:
Code is added to support the GLM module on RHEL8.4.

Patch ID: VRTSglm-7.4.2.1300

* 4008072 (Tracking ID: 4008071)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to the parallel invocation of depmod.

RESOLUTION:
The invocation of depmod is serialized through a lock.
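
The general technique resembles the following sketch; the lock file path is illustrative, not necessarily the one the packaging scripts use:
# flock /var/lock/depmod.lock depmod -a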

* 4012063 (Tracking ID: 4001382)

SYMPTOM:
The GLM module fails to load on RHEL8.2.

DESCRIPTION:
RHEL8.2 is a new release, and it includes kernel changes that prevent the GLM module from loading on it.

RESOLUTION:
Code is added to support the GLM module on RHEL8.2.

Patch ID: VRTSodm-7.4.2.2100

* 4037576 (Tracking ID: 4037575)

SYMPTOM:
The ODM module fails to load on RHEL8.4.

DESCRIPTION:
RHEL8.4 is a new release, and it includes kernel changes that prevent the ODM module from loading on it.

RESOLUTION:
Code is added to support the ODM module on RHEL8.4.

Patch ID: VRTSodm-7.4.2.1300

* 4012062 (Tracking ID: 4001380)

SYMPTOM:
The ODM module fails to load on RHEL8.2.

DESCRIPTION:
RHEL8.2 is a new release, and it includes kernel changes that prevent the ODM module from loading on it.

RESOLUTION:
Code is added to support the ODM module on RHEL8.2.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8.4_x86_64-Patch-7.4.2.1800.tar.gz to /tmp
2. Untar infoscale-rhel8.4_x86_64-Patch-7.4.2.1800.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8.4_x86_64-Patch-7.4.2.1800.tar.gz
    # tar xf /tmp/infoscale-rhel8.4_x86_64-Patch-7.4.2.1800.tar
3. Install the hotfix. (Note that the installation of this P-Patch causes downtime.)
    # pwd
    /tmp/hf
    # ./installVRTSinfoscale742P1800 [<host1> <host2>...]

You can also install this patch together with the 7.4.2 base release by using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 4038257

SYMPTOM: The system panics when the Mount and the CFSMount agents register themselves with AMF.

WORKAROUND: Scenario 1: To prevent the system panic, disable IMF for the Mount and the CFSMount agents.
Run the following command before you upgrade the operating system:
# haimfconfig -disable -agent Mount CFSMount
This command disables IMF for the specified agents by changing the Mode value to 0 for each agent and for all the associated resources whose Mode values were overridden.
- If VCS is running, the command prompts you to confirm whether you want to make the configuration changes persistent. If you choose No, the command exits. If you choose Yes, it disables IMF and saves the update to the configuration by using the haconf -dump -makero command.
- If VCS is not running, the Mode value for the agents is modified in the VCS configuration file. Before it makes any changes to the configuration files, the command prompts you for confirmation. If you choose No, the command exits. If you choose Yes, the VCS configuration file is updated.

Scenario 2: To prevent the system panic, IMF for the Mount and CFSMount agents is disabled by CPI during a stack upgrade on the RHEL 8.4 OS.



SPECIAL INSTRUCTIONS
--------------------
When performing a rolling upgrade of InfoScale 7.4.1 to InfoScale 7.4.2 on a RHEL 8 system, the xprtld service fails to start after phase 1. 

Workaround: 
After the upgrade completes, perform the following steps to stop and start the xprtld service:
1. Stop the xprtld service:
# systemctl stop xprtld.service
2. Verify that the xprtld service is stopped:
# systemctl status xprtld.service
3. Start the xprtld service:
# systemctl start xprtld.service
4. After the service starts, verify that the service status is active (running):
# systemctl status xprtld.service


OTHERS
------
NONE