infoscale-rhel8_x86_64-Patch-7.4.1.2100

 Basic information
Release type: Patch
Release date: 2020-04-28
OS update support: RHEL8 x86-64 Update 2
Technote: None
Documentation: None
Popularity: 815 viewed
Download size: 201.98 MB
Checksum: 814034123

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On RHEL8 x86-64
InfoScale Enterprise 7.4.1 On RHEL8 x86-64
InfoScale Foundation 7.4.1 On RHEL8 x86-64
InfoScale Storage 7.4.1 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
3984139, 3984731, 3986794, 3988238, 3988843, 3989317, 3989413, 3989416, 3990017, 3990018, 3990019, 3990020, 3990021, 3992902, 3995201, 3997065, 3997906, 3998169, 3998394, 3999030, 4000388, 4001379, 4001381, 4001383, 4001399, 4001736, 4001745, 4001746, 4001748, 4001750, 4001752, 4001755, 4001757, 4002124, 4002151, 4002152, 4002153, 4002154, 4002155

 Patch ID:
VRTSaslapm-7.4.1.2200-RHEL8
VRTSvxvm-7.4.1.2200-RHEL8
VRTSodm-7.4.1.2100-RHEL8
VRTSglm-7.4.1.1700-RHEL8
VRTSvxfs-7.4.1.2200-RHEL8
VRTSllt-7.4.1.2100-RHEL8
VRTSgab-7.4.1.2100-RHEL8
VRTSvxfen-7.4.1.2100-RHEL8
VRTSamf-7.4.1.2100-RHEL8
VRTSdbac-7.4.1.2100-RHEL8

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 2100 * * *
                         Patch Date: 2020-04-22


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 2100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSllt-7.4.1.2100
* 4002151 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSllt-7.4.1.1600
* 3990017 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).
Patch ID: VRTSglm-7.4.1.1700
* 3999030 (3999029) GLM module failed to unload because of VCS service hold.
* 4001383 (4001382) GLM module failed to load on RHEL8.2
Patch ID: VRTSodm-7.4.1.2100
* 4001381 (4001380) ODM module failed to load on RHEL8.2
Patch ID: VRTSodm-7.4.1.1600
* 3989416 (3989415) ODM module failed to load on RHEL8.1
Patch ID: VRTSvxfs-7.4.1.2200
* 3986794 (3992718) On CFS, a read on a clone's overlay attribute inode results in an error during attribute inode removal in the inode inactivation process, if the clone is marked for removal and is the last clone being removed.
* 3989317 (3989303) During a reconfiguration, a hang is seen when fsck dumps core and the core dump gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the file system.
* 3995201 (3990257) VxFS may face a buffer overflow when I/O is performed on the File Change Log (FCL) file through the Userspace Input Output (UIO) interface.
* 3997065 (3996947) The FSCK operation may behave incorrectly or hang.
* 3998169 (3998168) vxresize operations result in a system freeze of 8-10 minutes, causing application hangs and VCS timeouts.
* 3998394 (3983958) Code changes have been done to return proper error code while performing open/read/write operations on a removed checkpoint.
* 4001379 (4001378) VxFS module failed to load on RHEL8.2
Patch ID: VRTSvxfs-7.4.1.1600
* 3989413 (3989412) VxFS module failed to load on RHEL8.1
Patch ID: VRTSvxvm-7.4.1.2200
* 3992902 (3975667) Soft lockup in the vol_ioship_sender kernel thread.
* 3997906 (3987937) VxVM command hang may happen when snapshot volume is configured.
* 4000388 (4000387) VxVM support on RHEL 8.2
* 4001399 (3995946) CVM Slave unable to join cluster - VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
* 4001736 (4000130) System panic when DMP co-exists with EMC PP on rhel8/sles12sp4.
* 4001745 (3992053) Data corruption may happen with layered volumes due to some data not re-synced while attaching a plex.
* 4001746 (3999520) vxconfigd may hang waiting for dmp_reconfig_write_lock when the DMP iostat tunable is disabled.
* 4001748 (3991580) Deadlock may happen if IO performed on both source and snapshot volumes.
* 4001750 (3976392) Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.
* 4001752 (3969487) Data corruption observed with layered volumes when mirror of the volume is detached and attached back.
* 4001755 (3980684) Kernel panic in voldrl_hfind_an_instant while accessing agenode.
* 4001757 (3969387) VxVM (Veritas Volume Manager) caused a system panic when handling a received request response in an FSS environment.
Patch ID: VRTSvxvm-7.4.1.1600
* 3984139 (3965962) No option to disable auto-recovery when a slave node joins the CVM cluster.
* 3984731 (3984730) VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted
* 3988238 (3988578) Encrypted volume creation fails on RHEL 8
* 3988843 (3989796) RHEL 8.1 support for VxVM
Patch ID: VRTSaslapm-7.4.1.2200
* 4002124 (4002123) ASLAPM rpm Support on RHEL 8.2 kernel
Patch ID: VRTSdbac-7.4.1.2100
* 4002155 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSdbac-7.4.1.1600
* 3990021 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).
Patch ID: VRTSamf-7.4.1.2100
* 4002154 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSamf-7.4.1.1600
* 3990020 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).
Patch ID: VRTSvxfen-7.4.1.2100
* 4002153 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSvxfen-7.4.1.1600
* 3990019 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).
Patch ID: VRTSgab-7.4.1.2100
* 4002152 (4002150) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSgab-7.4.1.1600
* 3990018 (3990016) Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSllt-7.4.1.2100

* 4002151 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSllt-7.4.1.1600

* 3990017 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 1 (RHEL8.1) is now introduced.

Patch ID: VRTSglm-7.4.1.1700

* 3999030 (Tracking ID: 3999029)

SYMPTOM:
GLM module failed to unload because of VCS service hold.

DESCRIPTION:
The GLM module failed to unload during systemd shutdown because the glm service was racing with the vcs service. VCS takes a hold on GLM, which prevented the module from unloading.

RESOLUTION:
The code is modified to add a vcs service dependency in glm.service so that the services are stopped in the correct order during systemd shutdown.
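
With this fix in place, the ordering between the two units can be inspected with standard systemd commands. This is an illustrative check only; it assumes the unit names glm.service and vcs.service referred to above:

    # systemctl cat glm.service                     # view the unit file, including any Before=/After= entries
    # systemctl show -p After -p Before glm.service # print the resolved ordering properties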

* 4001383 (Tracking ID: 4001382)

SYMPTOM:
GLM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has some kernel changes that caused the GLM module to fail to load on it.

RESOLUTION:
Added code to support GLM on RHEL8.2

Patch ID: VRTSodm-7.4.1.2100

* 4001381 (Tracking ID: 4001380)

SYMPTOM:
ODM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.2

Patch ID: VRTSodm-7.4.1.1600

* 3989416 (Tracking ID: 3989415)

SYMPTOM:
ODM module failed to load on RHEL8.1

DESCRIPTION:
RHEL8.1 is a new release, and it has some kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL8.1

Patch ID: VRTSvxfs-7.4.1.2200

* 3986794 (Tracking ID: 3992718)

SYMPTOM:
On CFS, due to a race between clone removal and the inode inactivation process, a read on a clone's overlay attribute inode results in an error
during attribute inode removal if the clone is marked for removal and is the last clone being removed.

DESCRIPTION:
The issue is seen when the primary fileset inode has the VX_IEREMOVE extop set and its clone inode has the VX_IEPTTRUNC extop set, the clone is marked for removal, and it is the last clone being removed. The clone inode being inactivated has an overlay attribute inode associated with it. When removal of that attribute inode is attempted during inactivation, vx_cfs_iread() returns ENOENT for the inode (because it is an overlay attribute inode and the clone is the last one marked for removal), and the error is hit. The race between vx_clone_dispose() and vx_inactive_process() also contributes to the issue.

The issue is not seen on local mounts (LM) because extop processing is done unconditionally there during the clone creation process. On CFS this case needs to be handled explicitly.

RESOLUTION:
In the CFS case, extop processing is now done during the clone creation operation for inodes that have the VX_IEPTTRUNC extop set.

* 3989317 (Tracking ID: 3989303)

SYMPTOM:
During a reconfiguration, a hang is seen when fsck dumps core and the core dump gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the file system.

DESCRIPTION:
On RHEL8 and SLES15, the OS systemd coredump utility calls vx_statvfs().
During a reconfiguration in which the primary node dies and CFS recovery is replaying the log files of the dead nodes,
vxfsckd runs fsck on a secondary node. If fsck dumps core partway through due to some error,
the coredump utility thread gets stuck in vx_statvfs(), waiting to be woken up by the new primary
to collect file system statistics. This blocks the recovery thread and results in a deadlock.

RESOLUTION:
To unblock the recovery thread in this case, older file system statistics are now returned to the coredump utility
when CFS recovery is in progress on the secondary and "vx_fsckd_process" is set, which indicates that fsck is in
progress for this file system.

* 3995201 (Tracking ID: 3990257)

SYMPTOM:
VxFS may face a buffer overflow when I/O is performed on the File Change Log (FCL) file through the Userspace Input Output (UIO) interface.

DESCRIPTION:
For the Userspace Input Output (UIO) interface, VxFS is not able to handle larger I/O requests properly, resulting in a buffer overflow.

RESOLUTION:
The VxFS code is modified to limit the length of I/O requests that come through the Userspace Input Output (UIO) interface.

* 3997065 (Tracking ID: 3996947)

SYMPTOM:
FSCK operation may behave incorrectly or hang

DESCRIPTION:
While checking a file system with the fsck utility, a hang or undefined behavior may be seen if fsck finds a specific type of corruption. This type of
corruption is visible in the presence of a checkpoint. The fsck utility fixes any corruption according to the input given (either "y" or "n"); for this specific
type of corruption, a bug caused it to unlock a mutex that was not locked.

RESOLUTION:
The code is modified to make sure the mutex is locked before it is unlocked.

* 3998169 (Tracking ID: 3998168)

SYMPTOM:
For multi-TB file systems, vxresize operations result in a system freeze of 8-10 minutes,
causing application hangs and VCS timeouts.

DESCRIPTION:
During a resize, the primary node gets the delegation of all the allocation units. For a larger file system,
the total time taken by the delegation operation is quite large. Also, flushing the summary maps takes a considerable
amount of time. This results in a file system freeze of around 8-10 minutes.

RESOLUTION:
Code changes have been done to reduce the total time taken by vxresize.

* 3998394 (Tracking ID: 3983958)

SYMPTOM:
"open" system call on a file which belongs to a removed checkpoint, returns "EPERM" which ideally should return "ENOENT".

DESCRIPTION:
"open" system call on a file which belongs to a removed checkpoint, returns "EPERM" which ideally should return "ENOENT".

RESOLUTION:
Code changes have been done so that, proper error code will be returned in those scenarios.

* 4001379 (Tracking ID: 4001378)

SYMPTOM:
VxFS module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it has some kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.2

Patch ID: VRTSvxfs-7.4.1.1600

* 3989413 (Tracking ID: 3989412)

SYMPTOM:
VxFS module failed to load on RHEL8.1

DESCRIPTION:
RHEL8.1 is a new release, and it has some kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.1.

Patch ID: VRTSvxvm-7.4.1.2200

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop.
This causes the soft lockup.

RESOLUTION:
The CPU is relinquished to allow other processes to be scheduled. The vol_ioship_sender() thread restarts after a short delay.

* 3997906 (Tracking ID: 3987937)

SYMPTOM:
A VxVM command hang happens when a heavy I/O load is performed on a VxVM volume with a snapshot; the I/O memory pool becoming full is also observed.

DESCRIPTION:
It is a deadlock situation that occurs with heavy I/O on a volume with snapshots. A multistep SIO A has acquired the ilock and its child MV write SIO is waiting for the memory pool, which is full, while another multistep SIO B has acquired memory and is waiting for the ilock held by multistep SIO A.

RESOLUTION:
Code changes have been made to fix the issue.

* 4000388 (Tracking ID: 4000387)

SYMPTOM:
The existing VxVM module fails to load on RHEL 8.2.

DESCRIPTION:
RHEL 8.2 is a new release and has a few kABI changes on which the VxVM compilation breaks.

RESOLUTION:
The VxVM code is compiled against the RHEL 8.2 kernel and changes are made to make it compatible.
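
After the patch is installed on an RHEL 8.2 system, the updated packages and kernel modules can be cross-checked with standard commands. This is an illustrative check rather than part of the official procedure; the module names listed are the usual VxVM kernel modules:

    # uname -r                              # confirm the running RHEL 8.2 kernel
    # rpm -q VRTSvxvm VRTSaslapm            # confirm the patched package versions
    # lsmod | grep -E 'vxio|vxdmp|vxspec'   # confirm the VxVM kernel modules are loaded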

* 4001399 (Tracking ID: 3995946)

SYMPTOM:
CVM Slave unable to join cluster with below error:
VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
VxVM vxconfigd ERROR V-5-1-11467 kernel_fail_join(): Reconfiguration interrupted: Reason is retry to add a node failed (13, 0)

DESCRIPTION:
vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout were introduced in 7.4.1 U1 for Linux only; they are not supported on other platforms such as Solaris and AIX. Due to a bug in the code, those two tunables were exposed on all platforms, and CVM could not get the tunable information from the master node. Hence the issue.

RESOLUTION:
Code change has been made to hide vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout for other platforms like Solaris and AIX.

* 4001736 (Tracking ID: 4000130)

SYMPTOM:
System panic when DMP co-exists with EMC PP on rhel8/sles12sp4 with below stacks:

#6 [] do_page_fault 
#7 [] page_fault 
[exception RIP: dmp_kernel_scsi_ioctl+888]
#8 [] dmp_kernel_scsi_ioctl at [vxdmp]
#9 [] dmp_dev_ioctl at [vxdmp]
#10 [] do_passthru_ioctl at [vxdmp]
#11 [] dmp_tur_temp_pgr at [vxdmp]
#12 [] dmp_pgr_set_temp_key at [vxdmp]
#13 [] dmpioctl at [vxdmp]
#14 [] dmp_ioctl at [vxdmp]
#15 [] blkdev_ioctl 
#16 [] block_ioctl 
#17 [] do_vfs_ioctl
#18 [] ksys_ioctl 

Or

 #8 [ffff9c3404c9fb40] page_fault 
 #9 [ffff9c3404c9fbf0] dmp_kernel_scsi_ioctl at [vxdmp]
#10 [ffff9c3404c9fc30] dmp_scsi_ioctl at [vxdmp]
#11 [ffff9c3404c9fcb8] dmp_send_scsireq at [vxdmp]
#12 [ffff9c3404c9fcd0] dmp_do_scsi_gen at [vxdmp]
#13 [ffff9c3404c9fcf0] dmp_pr_send_cmd at [vxdmp]
#14 [ffff9c3404c9fd80] dmp_pr_do_read at [vxdmp]
#15 [ffff9c3404c9fdf0] dmp_pgr_read at [vxdmp]
#16 [ffff9c3404c9fe20] dmpioctl at [vxdmp]
#17 [ffff9c3404c9fe30] dmp_ioctl at [vxdmp]

DESCRIPTION:
From kernel 4.10.17 onwards, there is no guarantee from the block layer or other drivers that the cmd pointer at least points to __cmd when a SCSI request is initialized. DMP directly accesses the cmd pointer after getting the SCSI request from the underlying layer, without a sanity check, hence the issue.

RESOLUTION:
Code changes have been made to do a sanity check when initializing a SCSI request.

* 4001745 (Tracking ID: 3992053)

SYMPTOM:
Data corruption may happen with layered volumes because some data is not re-synced while attaching a plex. This leaves
inconsistent data across the plexes after a plex is attached in layered volumes.

DESCRIPTION:
When a plex is detached in a layered volume, the regions which are dirty/modified are tracked in DCO (Data change object) map.
When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached.
There was a defect in the code due to which some particular regions were NOT re-synced when a plex was attached.
This issue happens only when the offset of the sub-volume is NOT aligned with the region size of the DCO (Data Change Object) volume.

RESOLUTION:
The code defect is fixed to correctly copy the data for dirty regions when the sub-volume offset is NOT aligned with the DCO region size.

* 4001746 (Tracking ID: 3999520)

SYMPTOM:
VxVM commands may hang with below stack when user tries to start or stop the DMP IO statistics collection when
the DMP iostat tunable (dmp_iostats_state) was disabled earlier.

schedule()
rwsem_down_failed_common()
rwsem_down_write_failed()
call_rwsem_down_write_failed()
dmp_reconfig_write_lock()
dmp_update_reclaim_attr()
gendmpioctl()
dmpioctl()

DESCRIPTION:
When the DMP iostat tunable (dmp_iostats_state) is disabled and user tries to 
start (vxdmpadm iostat start) or stop (vxdmpadm iostat stop) the DMP iostat collection, then 
a thread which collects the IO statistics was exiting without releasing a lock. Due to this,
further VxVM commands were getting hung while waiting for the lock.

RESOLUTION:
The code is changed to correctly release the lock when the tunable 'dmp_iostats_state' is disabled.
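
For context, the commands and the tunable mentioned above are the standard DMP ones; a typical sequence, shown here only as an illustration, is to check or enable the tunable before starting statistics collection:

    # vxdmpadm gettune dmp_iostats_state           # show whether DMP iostat collection is enabled
    # vxdmpadm settune dmp_iostats_state=enabled   # enable it if it was disabled
    # vxdmpadm iostat start                        # start collecting DMP I/O statistics
    # vxdmpadm iostat stop                         # stop collecting DMP I/O statistics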

* 4001748 (Tracking ID: 3991580)

SYMPTOM:
I/O and VxVM command hangs may happen if I/O is performed on both the source and snapshot volumes.

DESCRIPTION:
It's a deadlock situation occurring with heavy IOs on both source volume and snapshot volume. 
SIO (a), USER_WRITE, on snap volume, held ILOCK (a), waiting for memory(full).
SIO (b),  PUSHED_WRITE, on snap volume, waiting for ILOCK (a).
SIO (c),  parent of SIO (b), USER_WRITE, on the source volume, held ILOCK (c) and memory, waiting for SIO (b) done.

RESOLUTION:
A separate pool is used for I/O writes on the snapshot volume to resolve the issue.

* 4001750 (Tracking ID: 3976392)

SYMPTOM:
Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.

DESCRIPTION:
While processing a plex detach request, the VxVM volume is operated in a serial manner. During serialization it might happen that the current thread has queued the I/O and is still accessing it. In the meantime the same I/O is picked up by one of the VxVM threads for processing; the processing of the I/O is completed and the I/O is deleted after that. The current thread is then still accessing memory that has already been deleted, which might lead to memory corruption.

RESOLUTION:
The fix is to not use the same I/O in the current thread once the I/O has been queued as part of serialization; the processing is done before queuing the I/O.

* 4001752 (Tracking ID: 3969487)

SYMPTOM:
Data corruption observed with layered volumes after resynchronisation when mirror of the volume is detached and attached back.

DESCRIPTION:
In the case of a layered volume, if the I/O fails at the underlying subvolume layer, then before the mirror detach is done the top volume of the layered volume has to be serialized (I/Os run in a serial fashion). When the volume is serialized, I/Os on the volume are tracked directly in the detach map of the DCO (Data Change Object). If new I/Os occur on the volume during this period, they are not tracked in the detach map of the DCO, because detach map tracking has not yet been enabled by the failed I/Os. The new I/Os that are not tracked in the detach map are missed when the plex resynchronization happens later, which leads to corruption.

RESOLUTION:
The fix is to delay the unserialization of the volume until the failed I/Os actually detach the plex and enable detach map tracking. This makes sure that new I/Os are tracked in the detach map of the DCO.

* 4001755 (Tracking ID: 3980684)

SYMPTOM:
Kernel panic in voldrl_hfind_an_instant while accessing agenode with stack
[exception RIP: voldrl_hfind_an_instant+49]
#11 voldrl_find_mark_agenodes
#12 voldrl_log_internal_30
#13 voldrl_log_30
#14 volmv_log_drlfmr
#15 vol_mv_write_start
#16 volkcontext_process
#17 volkiostart
#18 vol_linux_kio_start
#19 vxiostrategy
...

DESCRIPTION:
Agenode corruption is hit when the per-file sequential hint is used. The agenode's linked list gets corrupted because a pointer was not set to NULL
when the agenode was reused.

RESOLUTION:
Changes are done in VxVM code to avoid Agenode list corruption.

* 4001757 (Tracking ID: 3969387)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, the system might panic with the below stack:
vol_get_ioscb [vxio]
vol_ecplex_rhandle_resp [vxio]
vol_ioship_rrecv [vxio]
gab_lrrecv [gab]
vx_ioship_llt_rrecv [llt]
vx_ioship_process_frag_packets [llt]
vx_ioship_process_data [llt]
vx_ioship_recv_data [llt]

DESCRIPTION:
In a certain scenario, it may happen that the request gets purged and the response arrives after that. The system might then panic due to accessing the freed resource.

RESOLUTION:
Code changes have been made to fix the issue.

Patch ID: VRTSvxvm-7.4.1.1600

* 3984139 (Tracking ID: 3965962)

SYMPTOM:
No option to disable auto-recovery when a slave node joins the CVM cluster.

DESCRIPTION:
In a CVM environment, when the slave node joins the CVM cluster, it is possible that the plexes may not be in sync. In such a scenario auto-recovery is triggered for the plexes.  If a node is stopped using the hastop -all command when the auto-recovery is in progress, the vxrecover operation may hang. An option to disable auto-recovery is not available.

RESOLUTION:
The VxVM module is updated to allow administrators to disable auto-recovery when a slave node joins a CVM cluster.
A new tunable, auto_recover, is introduced. By default, the tunable is set to 'on' to trigger the auto-recovery. Set its value to 'off' to disable auto-recovery. Use the vxtune command to set the tunable.
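
For example, the tunable can be queried and set with vxtune. The exact invocation below is a hedged sketch assuming the usual 'vxtune <tunable> [value]' form, not a quote from the product documentation:

    # vxtune auto_recover          # display the current value
    # vxtune auto_recover off      # disable auto-recovery when a slave node joins
    # vxtune auto_recover on       # restore the default behavior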

* 3984731 (Tracking ID: 3984730)

SYMPTOM:
VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.

DESCRIPTION:
VxVM logs these warnings because the QUEUE_FLAG_REGISTERED and QUEUE_FLAG_INIT_DONE queue flags are not cleared while registering the dmpnode.
The following stack is reported after stopping/removing VxDMP for the first time after every reboot:
kernel: WARNING: CPU: 28 PID: 33910 at block/blk-core.c:619 blk_cleanup_queue+0x1a3/0x1b0
kernel: CPU: 28 PID: 33910 Comm: modprobe Kdump: loaded Tainted: P OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
kernel: Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 10/02/2018
kernel: Call Trace:
kernel: [<ffffffff9dd63107>] dump_stack+0x19/0x1b
kernel: [<ffffffff9d697768>] __warn+0xd8/0x100
kernel: [<ffffffff9d6978ad>] warn_slowpath_null+0x1d/0x20
kernel: [<ffffffff9d944b03>] blk_cleanup_queue+0x1a3/0x1b0
kernel: [<ffffffffc0cd1f3f>] dmp_unregister_disk+0x9f/0xd0 [vxdmp]
kernel: [<ffffffffc0cd7a08>] dmp_remove_mp_node+0x188/0x1e0 [vxdmp]
kernel: [<ffffffffc0cd7b45>] dmp_destroy_global_db+0xe5/0x2c0 [vxdmp]
kernel: [<ffffffffc0cde6cd>] dmp_unload+0x1d/0x30 [vxdmp]
kernel: [<ffffffffc0d0743a>] cleanup_module+0x5a/0xd0 [vxdmp]
kernel: [<ffffffff9d71692e>] SyS_delete_module+0x19e/0x310
kernel: [<ffffffff9dd75ddb>] system_call_fastpath+0x22/0x27
kernel: --[ end trace fd834bc7817252be ]--

RESOLUTION:
The queue flags are modified to handle this situation and not to log such warning messages.

* 3988238 (Tracking ID: 3988578)

SYMPTOM:
Encrypted volume creation fails on RHEL 8

DESCRIPTION:
On the RHEL 8 platform, python3 gets installed by default. However, the Python script that is used to create encrypted volumes and to communicate with the Key Management Service (KMS) is not compatible with python3. Additionally, an 'unsupported protocol' error is reported for the SSL protocol SSLv23 that is used in the PyKMIP library to communicate with the KMS.

RESOLUTION:
The python script is made compatible with python2 and python3. A new option ssl_version is made available in the /etc/vx/enc-kms-kmip.conf file to represent the SSL version to be used by the KMIP client. The 'unsupported protocol' error is addressed by using the protocol version PROTOCOL_TLSv1.
The following is an example of the sample configuration file:
[client]
host = kms-enterprise.example.com
port = 5696
keyfile = /etc/vx/client-key.pem
certfile = /etc/vx/client-crt.pem
cacerts = /etc/vx/cacert.pem
ssl_version = PROTOCOL_TLSv1

* 3988843 (Tracking ID: 3989796)

SYMPTOM:
Existing package failed to load on RHEL 8.1 setup.

DESCRIPTION:
RHEL 8.1 is a new release, and hence the VxVM module is compiled with this new kernel, along with a few other changes.

RESOLUTION:
Changes have been done to make VxVM compatible with RHEL 8.1.

Patch ID: VRTSaslapm-7.4.1.2200

* 4002124 (Tracking ID: 4002123)

SYMPTOM:
Support for ASLAPM on the RHEL 8.2 kernel.

DESCRIPTION:
RHEL8.2 is a new release, and hence the APM module
should be recompiled with the new kernel.

RESOLUTION:
Compiled APM with new kernel.

Patch ID: VRTSdbac-7.4.1.2100

* 4002155 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSdbac-7.4.1.1600

* 3990021 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 1 (RHEL8.1) is now introduced.

Patch ID: VRTSamf-7.4.1.2100

* 4002154 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSamf-7.4.1.1600

* 3990020 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 1 (RHEL8.1) is now introduced.

Patch ID: VRTSvxfen-7.4.1.2100

* 4002153 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSvxfen-7.4.1.1600

* 3990019 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 1 (RHEL8.1) is now introduced.

Patch ID: VRTSgab-7.4.1.2100

* 4002152 (Tracking ID: 4002150)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.1.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSgab-7.4.1.1600

* 3990018 (Tracking ID: 3990016)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 8 Update 1 (RHEL8.1).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux versions later than RHEL8.0.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 8 Update 1 (RHEL8.1) is now introduced.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-7.4.1.2100.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-7.4.1.2100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.2100.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.2100.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # pwd /tmp/hf
    # ./installVRTSinfoscale741P2100 [<host1> <host2>...]
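
Optionally, after the installer completes, the installed patch levels can be cross-checked against the package list above. This is an illustrative check, not part of the documented procedure:

    # rpm -qa | grep VRTS                           # list all installed Veritas packages with versions
    # rpm -q VRTSvxvm VRTSvxfs VRTSllt VRTSgab VRTSvxfen VRTSamf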

You can also install this patch together with the 7.4.1 base release by using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path should point to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE