infoscale-rhel7_x86_64-Patch-7.4.1.1900
Obsolete
The latest patch(es): infoscale-rhel7_x86_64-Patch-7.4.1.2900

 Basic information
Release type: Patch
Release date: 2020-04-24
OS update support: RHEL7 x86-64 Update 8
Technote: None
Documentation: None
Popularity: 4467 viewed
Download size: 308.85 MB
Checksum: 2873469460
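After downloading, the bundle can be checked against the published checksum. A minimal sketch, assuming the published value is the CRC printed in the first field of the POSIX cksum utility's output (the helper name and bundle filename below are illustrative):

```shell
# Compare a file's cksum CRC against an expected value.
# Succeeds (exit 0) only when they match.
verify_checksum() {
    file="$1"; expected="$2"
    actual=$(cksum "$file" | awk '{print $1}') || return 1
    [ "$actual" = "$expected" ]
}

# Illustrative usage with this patch bundle's published checksum:
# verify_checksum infoscale-rhel7_x86_64-Patch-7.4.1.1900.tar.gz 2873469460 \
#     && echo "checksum OK"
```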

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On RHEL7 x86-64
InfoScale Enterprise 7.4.1 On RHEL7 x86-64
InfoScale Foundation 7.4.1 On RHEL7 x86-64
InfoScale Storage 7.4.1 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patch (release date shown):
infoscale-rhel7.9_x86_64-Patch-7.4.1.2400 (obsolete), released 2021-03-04

This patch supersedes the following patch (release date shown):
infoscale-rhel7.7_x86_64-Patch-7.4.1.1300 (obsolete), released 2019-08-06

 Fixes the following incidents:
3970470, 3970482, 3973076, 3975897, 3977310, 3978184, 3978195, 3978208, 3978645, 3978646, 3978649, 3978678, 3979375, 3979397, 3979398, 3979400, 3979440, 3979462, 3979471, 3979475, 3979476, 3979656, 3980044, 3980457, 3980679, 3981028, 3981548, 3981628, 3981631, 3981738, 3982214, 3982215, 3982216, 3982217, 3982218, 3983742, 3983990, 3983994, 3983995, 3984139, 3984731, 3986472, 3987010, 3989317, 3991433, 3992030, 3992902, 3993469, 3993929, 3994756, 3995201, 3995695, 3995698, 3995980, 3996332, 3996401, 3996640, 3996787, 3997064, 3997065, 3997074, 3997076, 3998394, 3998677, 3998678, 3998679, 3998680, 3998681, 3998693, 3998696

 Patch ID:
VRTSvxfs-7.4.1.1900-RHEL7
VRTSodm-7.4.1.1900-RHEL7
VRTSllt-7.4.1.1900-RHEL7
VRTSgab-7.4.1.1900-RHEL7
VRTSvxfen-7.4.1.1900-RHEL7
VRTSamf-7.4.1.1900-RHEL7
VRTSdbac-7.4.1.1900-RHEL7
VRTSvxvm-7.4.1.2100-RHEL7
VRTSaslapm-7.4.1.2100-RHEL7
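To see which of these package levels are already present on a host, the installed RPM versions can be listed. A small sketch; the helper function name is ours, and rpm availability is checked first so the loop is safe on any host:

```shell
# Print the installed version of each named package, or a note if absent.
check_pkgs() {
    for pkg in "$@"; do
        if command -v rpm >/dev/null 2>&1 && rpm -q "$pkg" >/dev/null 2>&1; then
            rpm -q "$pkg"
        else
            echo "$pkg: not installed (or rpm unavailable)"
        fi
    done
}

# Illustrative usage against this patch's package list:
# check_pkgs VRTSvxfs VRTSodm VRTSllt VRTSgab VRTSvxfen VRTSamf VRTSdbac VRTSvxvm VRTSaslapm
```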

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 1900 * * *
                         Patch Date: 2020-04-17


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 1900


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSllt
VRTSodm
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSllt-7.4.1.1900
* 3998677 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSllt-7.4.1.1200
* 3982214 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSodm-7.4.1.1900
* 3983990 (3897161) Oracle Database on Veritas filesystem with Veritas ODM
library has high log file sync wait time.
* 3995698 (3995697) ODM module failed to load on RHEL7.8.
Patch ID: VRTSodm-7.4.1.1200
* 3981631 (3981630) ODM module failed to load on RHEL7.7
Patch ID: VRTSvxfs-7.4.1.1900
* 3983742 (3983330) On RHEL6, the server panics if auditd is enabled and the file system is unmounted and mounted again.
* 3983994 (3902600) Contention observed on vx_worklist_lk lock in cluster 
mounted file system with ODM
* 3983995 (3973668) On Linux, during system startup, boot.vxfs fails to load the VxFS modules and reports an error.
* 3986472 (3978149) FIFO file's timestamps are not updated in case of writes.
* 3987010 (3985839) Cluster hang is observed during allocation of extent to file because of lost delegation of AU.
* 3989317 (3989303) During a reconfiguration, a hang is seen when fsck dumps core and the core dump gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(); this blocks the recovery thread on the FS.
* 3991433 (3990830) File system detected inconsistency with link count table and FSCK flag gets set on the file system.
* 3992030 (3982416) Data corruption observed with Sparse files
* 3993469 (3993442) File system mount hang on a node where NBUMasterWorker group is online
* 3993929 (3984750) Code changes to prevent memory leak in the "fsck" binary.
* 3994756 (3982510) System panic with a null pointer dereference.
* 3995201 (3990257) VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface
* 3995695 (3995694) VxFS module failed to load on RHEL7.8.
* 3995980 (3995978) Support utility to find the nodes from which message response is pending
* 3996332 (3879310) The file system may get corrupted after a failed vxupgrade.
* 3996401 (3984557) Fsck core dumped during sanity check of directory.
* 3996640 (3987643) CFS code may panic at function vx_cfs_siget
* 3996787 (3981190) Negative nr-inodes entries are seen on RHEL6 platform.
* 3997064 (3989311) FSCK operation may hang
* 3997065 (3996947) FSCK operation may behave incorrectly or hang
* 3997074 (3983191) Fullfsck failing on FS due to invalid attribute length.
* 3997076 (3995526) Fsck command of vxfs may coredump.
* 3998394 (3983958) Code changes have been done to return proper error code while performing open/read/write operations on a removed checkpoint.
Patch ID: VRTSvxfs-7.4.1.1300
* 3981548 (3980741) Code changes to prevent data corruption in delayed allocation enabled filesystem.
* 3981628 (3981627) VxFS module failed to load on RHEL7.7.
* 3981738 (3979693) Fix for vxupgrade failing to upgrade from DLV 7 to 8 and returning EINVAL
Patch ID: VRTSvxfs-7.4.1.1200
* 3970470 (3970480) A kernel panic occurs when writing to cloud files.
* 3970482 (3970481) A file system panic occurs if many inodes are being used.
* 3978645 (3975962) Mounting a VxFS file system with more than 64 PDTs may panic the server.
* 3978646 (3931761) Cluster wide hang may be observed in case of high workload.
* 3978649 (3978305) The vx_upgrade command causes VxFS to panic.
* 3979400 (3979297) A kernel panic occurs when installing VxFS on RHEL6.
* 3980044 (3980043) A file system corruption occurred during a filesystem mount operation.
Patch ID: VRTSvxvm-7.4.1.2100
* 3984139 (3965962) No option to disable auto-recovery when a slave node joins the CVM cluster.
* 3984731 (3984730) VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted
* 3992902 (3975667) Softlock in vol_ioship_sender kernel thread
* 3998693 (3998692) VxVM support for RHEL 7.8
Patch ID: VRTSvxvm-7.4.1.1300
* 3980679 (3980678) VxVM support on RHEL 7.7
Patch ID: VRTSvxvm-7.4.1.1200
* 3973076 (3968642) [VVR Encrypted] Intermittent vradmind hang on the new Primary
* 3975897 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3978184 (3868154) When DMP Native Support is set to ON, dmpnode with multiple VGs cannot be listed
properly in the 'vxdmpadm native ls' command
* 3978195 (3925345) /tmp/vx.* directories are frequently created due to a bug in vxvolgrp command.
* 3978208 (3969860) Event source daemon (vxesd) takes a lot of time to start when lot of LUNS (around 1700) are attached to the system.
* 3978678 (3907596) vxdmpadm setattr command gives error while setting the path attribute.
* 3979375 (3973364) I/O hang may occur when VVR Replication is enabled in synchronous mode.
* 3979397 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3979398 (3955979) I/O gets hang in case of synchronous Replication.
* 3979440 (3947265) The delay added in the vxvm-startup script to wait for InfiniBand devices to be discovered leads to various issues.
* 3979462 (3964964) A soft lockup may happen in vxnetd because a port-scan tool keeps sending invalid packets.
* 3979471 (3915523) A local disk from another node belonging to a private DG (disk group) is exported to the node when a private DG is imported on the current node.
* 3979475 (3959986) Restarting the vxencryptd daemon may cause some IOs to be lost.
* 3979476 (3972679) vxconfigd kept crashing and couldn't start up.
* 3979656 (3975405) cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.
* 3980457 (3980609) Secondary node panic in server threads
* 3981028 (3978330) The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.
Patch ID: VRTSaslapm-7.4.1.2100
* 3998696 (3998695) ASLAPM rpm Support on RHEL 7.8 kernel
Patch ID: VRTSdbac-7.4.1.1900
* 3998681 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSdbac-7.4.1.1200
* 3982218 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSamf-7.4.1.1900
* 3998680 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSamf-7.4.1.1200
* 3982217 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvxfen-7.4.1.1900
* 3998679 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSvxfen-7.4.1.1300
* 3982216 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvxfen-7.4.1.1200
* 3977310 (3974739) VxFen should be able to identify SCSI3 disks in Nutanix environments.
Patch ID: VRTSgab-7.4.1.1900
* 3998678 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSgab-7.4.1.1200
* 3982215 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSllt-7.4.1.1900

* 3998677 (Tracking ID: 3998676)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 7.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 8 (RHEL7.8) is now introduced.

Patch ID: VRTSllt-7.4.1.1200

* 3982214 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

Patch ID: VRTSodm-7.4.1.1900

* 3983990 (Tracking ID: 3897161)

SYMPTOM:
Oracle Database on Veritas filesystem with Veritas ODM library has high
log file sync wait time.

DESCRIPTION:
In odm_iodone, the ODM_IOP lock was taken with a trylock, and the I/O was deferred whenever the trylock failed. Because this lock is never held for long, it is better to take the regular (non-try) lock and finish the I/O in interrupt context. This is safe on Solaris because this "sleep" lock is actually an adaptive mutex.

RESOLUTION:
odm_iodone now calls ODM_IOP_LOCK() instead of ODM_IOP_TRYLOCK() and finishes the I/O. With this fix, no I/O is deferred.

* 3995698 (Tracking ID: 3995697)

SYMPTOM:
ODM module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release with kernel changes that prevented the ODM module from loading on it.

RESOLUTION:
Added code to support ODM on RHEL7.8.

Patch ID: VRTSodm-7.4.1.1200

* 3981631 (Tracking ID: 3981630)

SYMPTOM:
ODM module failed to load on RHEL7.7

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that prevented the ODM module from loading on it.

RESOLUTION:
Added code to support ODM on RHEL7.7.

Patch ID: VRTSvxfs-7.4.1.1900

* 3983742 (Tracking ID: 3983330)

SYMPTOM:
If auditd is enabled on a VxFS file system and the file system is unmounted, the server might panic when the file system is mounted again or when auditd is disabled.

machine_kexec at ffffffff81040f1b
crash_kexec at ffffffff810d6722
__do_page_fault at ffffffff81054f7c
do_page_fault at ffffffff8156029e 
page_fault at ffffffff8155d265
[exception RIP: pin_inotify_watch+20]		
untag_chunk at ffffffff810f3771
 prune_one at ffffffff810f3bb5
 prune_tree_thread at ffffffff810f3c3f

or

do_page_fault at ffffffff8156029e
page_fault at ffffffff8155d265
[exception RIP: vx_ireuse_clean+796]
vx_ireuse_clean at ffffffffa09492f6 [vxfs]
 vx_iget at ffffffffa094ba0b [vxfs]

DESCRIPTION:
If auditd is enabled on a VxFS file system and the file system is unmounted, inotify watches remain on the root inode. When this inode is reused, or when the OS tries to clean up its iwatch tree, the server panics.

RESOLUTION:
Code changes have been done to clear inotify watches from root inode.

* 3983994 (Tracking ID: 3902600)

SYMPTOM:
Contention observed on vx_worklist_lk lock in cluster mounted file 
system with ODM

DESCRIPTION:
In a CFS environment, iodones for ODM async read I/Os are completed immediately, calling into ODM itself from the interrupt handler. All CFS writes, however, are processed in a delayed fashion: the requests are queued and processed later by a worker thread. This added delays to ODM writes.

RESOLUTION:
Optimized the IO processing of ODM work items on CFS so that those
are processed in the same context if possible.

* 3983995 (Tracking ID: 3973668)

SYMPTOM:
Following error is thrown by modinst script:
/etc/vx/modinst-vxfs: line 251: /var/VRTS/vxfs/sort.xxx: No such file or directory

DESCRIPTION:
After the changes made through e3935401, the files created by the modinst-vxfs.sh script are placed in /var/VRTS/vxfs. If /var is a separate file system, it is mounted by the boot.localfs script, and boot.localfs starts after boot.vxfs (evident from the boot logs). The file creation therefore fails and boot.vxfs does not load the modules.

RESOLUTION:
Adding a dependency on boot.localfs in the LSB header of boot.vxfs makes localfs run before boot.vxfs, thereby fixing the issue.
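As a rough illustration, this kind of LSB dependency is declared in an init script's comment header. A minimal sketch, assuming a SUSE-style boot script; the exact contents of the shipped boot.vxfs header may differ:

```shell
### BEGIN INIT INFO
# Provides:          boot.vxfs
# Required-Start:    boot.localfs
# Required-Stop:
# Default-Start:     B
# Default-Stop:
# Description:       Load VxFS kernel modules once local file systems
#                    (including a separate /var) are mounted
### END INIT INFO
```

With boot.localfs listed under Required-Start, the boot ordering machinery runs localfs before boot.vxfs.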

* 3986472 (Tracking ID: 3978149)

SYMPTOM:
When the FIFO file is created on VXFS filesystem, its timestamps are
not updated when writes are done to it.

DESCRIPTION:
In the write context, the Linux kernel calls the update_time inode operation to update an inode's timestamp fields. This operation was not implemented in VxFS.

RESOLUTION:
Implemented the update_time inode operation in VxFS.

* 3987010 (Tracking ID: 3985839)

SYMPTOM:
Cluster hang is observed during allocation of extent which is larger than 32K blocks to file.

DESCRIPTION:
When a secondary node requests an allocation of more than 32K blocks for a file, VxFS forwards the request to the primary node. To serve it, the primary allocates an extent (AU) based on the last successfully used allocation unit number. VxFS delegates AUs to all nodes, including the primary, and releases these delegations after some time (10 sec). There is a three-way race between the delegation-release thread, the allocator thread, and the extent-removal thread. If the delegation-release thread picks an AU to release its delegation while the allocator thread picks the same AU, the allocator allocates an extent from that AU and changes the AU's state. If another thread then removes this extent, it races with the delegation-release thread. The AU's delegation is lost without the allocator engine recognizing it, and a subsequent write to that AU hangs, which later causes a system hang.

RESOLUTION:
Code is modified to serialize these operations which will avoid the race.

* 3989317 (Tracking ID: 3989303)

SYMPTOM:
During a cluster reconfiguration, a hang is seen when fsck dumps core and the core dump gets stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the file system.

DESCRIPTION:
On RHEL8 and SLES15, the OS systemd coredump utility calls vx_statvfs(). During a reconfiguration in which the primary node dies, CFS recovery replays the log files of the dead nodes; for this, vxfsckd runs fsck on a secondary node. If that fsck dumps core partway through due to some error, the coredump utility thread gets stuck in vx_statvfs(), waiting to be woken by the new primary to collect file system stats. This blocks the recovery thread, resulting in a deadlock.

RESOLUTION:
To unblock the recovery thread in this case, older file system stats are returned to the coredump utility while CFS recovery is in progress on the secondary and "vx_fsckd_process" is set, which indicates that fsck is in progress for this file system.

* 3991433 (Tracking ID: 3990830)

SYMPTOM:
File system detected inconsistency with link count table and FSCK flag gets set on the file system with following messages in the syslog

kernel: vxfs: msgcnt 259 mesg 036: V-2-36: vx_lctbad - /dev/vx/dsk/<dg>/<vol> file system link count table 0 bad
kernel: vxfs: msgcnt 473 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/<dg>/<vol> file system fullfsck flag set - vx_lctbad

DESCRIPTION:
Full FSCK flag is getting set because of inconsistency with Link count table. Inconsistency is caused because of race condition when files are being removed and created in parallel. This leads to incorrect LCT updates.

RESOLUTION:
Fixed the race condition between the file removal thread and creation thread.

* 3992030 (Tracking ID: 3982416)

SYMPTOM:
Data written to a sparse region (HOLE) of a file is lost.

DESCRIPTION:
A write to a sparse region of a file races with a putpage (data) flush on the same file. The length of the sparse region (HOLE) is cached by the VxFS putpage code and used to optimize the data flush for subsequent pages of the file. If the user writes into the same sparse region in the meantime, the putpage code may ignore the write flush based on the cached hole size.

RESOLUTION:
Code changes have been made to cache the length of the hole reduced to the page boundary.

* 3993469 (Tracking ID: 3993442)

SYMPTOM:
If FS is bind mounted in container and after unmounting FS from host, if FS is mounted on host again then it hangs.

DESCRIPTION:
While unmounting the file system, the inode watch is removed from the root inode. If this is not done in the root context, the operation to remove the watch on the root inode gets stuck.

RESOLUTION:
Code changes have been done to resolve this issue.

* 3993929 (Tracking ID: 3984750)

SYMPTOM:
"fsck" can leak memory in some error scenarios.

DESCRIPTION:
In some error scenarios, the "fsck" binary does not clean up memory, which can leak pending buffers.

RESOLUTION:
Code changes have been done in "fsck" to free this memory properly and prevent any potential memory leak.

* 3994756 (Tracking ID: 3982510)

SYMPTOM:
The system reboots due to a panic.

DESCRIPTION:
When a file system runs out of space (ENOSPC), VxFS internally tries to delete removable checkpoints to create space in the file system. Such checkpoints are created by setting the "remove" attribute at checkpoint creation time. During removal, VxFS internally sets the file operations to NULL so that further file operations on files belonging to the removed checkpoint do not reach VxFS. While doing so, another file operation can race with the removal, resulting in a NULL pointer dereference.

RESOLUTION:
Code is modified to return a specific operation for all file operations.

* 3995201 (Tracking ID: 3990257)

SYMPTOM:
VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface

DESCRIPTION:
In the case of the Userspace Input Output (UIO) interface, VxFS is not able to handle larger I/O requests properly, resulting in a buffer overflow.

RESOLUTION:
VxFS code is modified to limit the length of I/O requests that come through the Userspace Input Output (UIO) interface.

* 3995695 (Tracking ID: 3995694)

SYMPTOM:
VxFS module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release with kernel changes that prevented the VxFS module from loading on it.

RESOLUTION:
Added code to support VxFS on RHEL7.8.

* 3995980 (Tracking ID: 3995978)

SYMPTOM:
The machine is in a hang state.

DESCRIPTION:
In hang situations, this utility can be used to find whether a message response from any node is pending, which helps in debugging: instead of collecting dumps from all nodes, only the node with the pending response needs to be analyzed.

RESOLUTION:
Added support for the pendingmsg utility.

* 3996332 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system freeze during 
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
The vxupgrade requires the file system to be frozen during its functional 
operation. It may happen that the corruption can be detected while the freeze 
is in progress and the full fsck flag can be set on the file system. However, 
this doesn't stop the vxupgrade from proceeding.
At later stage of vxupgrade, after structures related to the new disk layout 
are updated on the disk, vxfs frees up and zeroes out some of the old metadata 
inodes. If any error occurs after this point (because of full fsck being set), 
the file system needs to go back completely to the previous version at the time 
of full fsck. Since the metadata corresponding to the previous version is 
already cleared, the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file 
system during vxupgrade. Also, disable the file system if an error occurs after 
writing new metadata on the disk. This will force the newly written metadata to 
be loaded in memory on the next mount.

* 3996401 (Tracking ID: 3984557)

SYMPTOM:
Fsck core dumped during sanity check of directory.

DESCRIPTION:
Fsck dumped core during the sanity check of a directory when a dentry was corrupted or invalid.

RESOLUTION:
Modified the code to validate the inode number before referencing it during sanity check.

* 3996640 (Tracking ID: 3987643)

SYMPTOM:
CFS code may panic at function vx_cfs_siget

DESCRIPTION:
In deployments with local as well as CFS mounts, a node may panic in the function vx_cfs_siget during a file system freeze:

do_page_fault at ffffffffb076f915
page_fault at ffffffffb076b758
    [exception RIP: vx_cfs_siget+460]
vx_recv_cistat at ffffffffc0add2de [vxfs]
vx_recv_rpc at ffffffffc0ae1f40 [vxfs]

RESOLUTION:
Code is modified to fix the issue.

* 3996787 (Tracking ID: 3981190)

SYMPTOM:
Negative nr-inodes entries are seen on RHEL6 platform.

DESCRIPTION:
When final_iput() is called on a VxFS inode on RHEL6, it decrements the in-core inode count (nr-inodes) through generic_delete_inode(). VxFS never keeps inodes on any global OS list, so it never increments this counter. VxFS used to adjust the counter during inode inactivation, but that adjustment was removed during an enhancement; now VxFS adjusts nr-inodes only during unmount and module unload. This resulted in negative nr-inodes values on a live machine with a mounted file system.

RESOLUTION:
Code is modified to increase nr-inodes during inode inactivation.

* 3997064 (Tracking ID: 3989311)

SYMPTOM:
The FSCK operation may hang.

DESCRIPTION:
While checking a file system with the fsck utility, a hang may be seen. The utility reads metadata from disk through a buffer cache. A race in this buffer cache allows the same buffer to be held by two different threads with an incorrect flag copy, each waiting for the other to finish its work, which can result in a deadlock.

RESOLUTION:
Code is modified to fix the race.

* 3997065 (Tracking ID: 3996947)

SYMPTOM:
FSCK operation may behave incorrectly or hang

DESCRIPTION:
While checking a file system with the fsck utility, a hang or undefined behavior may be seen if fsck finds a specific type of corruption. This type of corruption appears in the presence of checkpoints. The utility fixes corruption according to the input given (either "y" or "n"); for this specific corruption, a bug caused it to unlock a mutex that was not locked.

RESOLUTION:
Code is modified to ensure the mutex is locked before unlocking it.

* 3997074 (Tracking ID: 3983191)

SYMPTOM:
Fullfsck failing on FS due to invalid attribute length

DESCRIPTION:
Due to an invalid attribute length, full fsck fails on the file system and reports corruption.

RESOLUTION:
VxFS fsck code is modified to handle invalid attribute length.

* 3997076 (Tracking ID: 3995526)

SYMPTOM:
Fsck command of vxfs may coredump with following stack.
#0  __memmove_sse2_unaligned_erms ()
#1  check_nxattr ()
#2  check_attrbuf ()
#3  attr_walk ()
#4  check_attrs ()
#5  pass1d ()
#6  iproc_do_work()
#7  start_thread ()
#8  clone ()

DESCRIPTION:
The length passed to the bcopy operation was invalid.

RESOLUTION:
Code has been modified to allow the bcopy operation only if the length is valid; otherwise an EINVAL error is returned, which is handled by the caller.

* 3998394 (Tracking ID: 3983958)

SYMPTOM:
"open" system call on a file which belongs to a removed checkpoint, returns "EPERM" which ideally should return "ENOENT".

DESCRIPTION:
When a checkpoint is removed, opening a file that belonged to it should fail with "ENOENT" (no such file or directory); instead, VxFS returned "EPERM".

RESOLUTION:
Code changes have been done so that, proper error code will be returned in those scenarios.

Patch ID: VRTSvxfs-7.4.1.1300

* 3981548 (Tracking ID: 3980741)

SYMPTOM:
File data can be lost in a race scenario between two dalloc background flushers.

DESCRIPTION:
In a race between two dalloc background flushers, the data may be flushed to disk without the file size being updated accordingly, which creates a scenario where some bytes of data are lost.

RESOLUTION:
Code changes have been done in dalloc code path to remove the possibility of flushing the data without updating the on-disk size.

* 3981628 (Tracking ID: 3981627)

SYMPTOM:
VxFS module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that prevented the VxFS module from loading on it.

RESOLUTION:
Added code to support VxFS on RHEL7.7.

* 3981738 (Tracking ID: 3979693)

SYMPTOM:
vxupgrade fails while upgrading from DLV 7 to DLV 8 with following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/dg_share/sq_informatica - Invalid argument

DESCRIPTION:
While allocating RCQ inodes as part of vxupgrade from DLV 7 to 8, only the first RCQ inode must be allocated from the initial ilist extents; the rest can be allocated from anywhere. The special allocation for the first RCQ inode was implemented by checking the VX_UPG_IALLOC_IEXT1 flag in vx_upg_ialloc(). The code changes made through incident 3936138 removed this check, so all RCQ inodes were allocated the same way. Since vx_upg_olt_inoalloc() handles the allocation only for the first RCQ inode and not the others, it returned EINVAL.

RESOLUTION:
Added the check for the VX_UPG_IALLOC_IEXT1 flag in vx_upg_ialloc().

Patch ID: VRTSvxfs-7.4.1.1200

* 3970470 (Tracking ID: 3970480)

SYMPTOM:
A kernel panic occurs when writing to cloud files.

DESCRIPTION:
This issue occurs due to missing error value initialization.

RESOLUTION:
Initialization of the error value is done in the write code path, so that a proper error message is displayed when writing to cloud files.

* 3970482 (Tracking ID: 3970481)

SYMPTOM:
A file system panic occurs if many inodes are being used.

DESCRIPTION:
This issue occurs due to improperly managed ownership of inodes.

RESOLUTION:
The ownership of inodes in the case of a large inode count has been fixed.

* 3978645 (Tracking ID: 3975962)

SYMPTOM:
Mounting a VxFS file system with more than 64 PDTs may panic the server.

DESCRIPTION:
For large memory systems, the number of auto-tuned VMM buffers is huge. To accumulate these buffers, VxFS needs more PDTs. Currently up to 128 PDTs are supported. However, for more than 64 PDTs, VxFS fails to initialize the strategy routine and calls a wrong function in the mount code path causing the system to panic.

RESOLUTION:
VxFS has been updated to initialize strategy routine for more than 64 PDTs.

* 3978646 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze is initiated while multiple lazy isize update workitems are pending in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is set to ON and the 'ls -l' command is executed frequently from a non-writing node of the cluster, a huge number of workitems accumulate for the worker threads to process. If a workitem with active level 1 is enqueued after these workitems and a cluster-wide freeze is initiated, a deadlock results: the worker threads are exhausted processing the lazy isize update workitems, the active-level-1 workitem never gets processed, and the cluster stops responding.

RESOLUTION:
VxFS has been updated to discard the blocking lazy isize update workitems if freeze is in progress.

* 3978649 (Tracking ID: 3978305)

SYMPTOM:
The vx_upgrade command causes VxFS to panic.

DESCRIPTION:
When the vx_upgrade command is executed, VxFS incorrectly accesses the freed memory, and then it panics if the memory is paged-out.

RESOLUTION:
The code is modified to make sure that VXFS does not access the freed memory locations.

* 3979400 (Tracking ID: 3979297)

SYMPTOM:
A kernel panic occurs when installing VxFS on RHEL6.

DESCRIPTION:
During VxFS installation, the fs_supers list is not initialized. While de-referencing the fs_supers pointer, the kernel gets a NULL value for the superblock address and panics.

RESOLUTION:
VxFS has been updated to initialize fs_super during VxFS installation.

* 3980044 (Tracking ID: 3980043)

SYMPTOM:
During a filesystem mount operation, after the Intent log replay, a file system metadata corruption occurred.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay is updated to synchronously write secondary map and EAU summary to the disk.

Patch ID: VRTSvxvm-7.4.1.2100

* 3984139 (Tracking ID: 3965962)

SYMPTOM:
No option to disable auto-recovery when a slave node joins the CVM cluster.

DESCRIPTION:
In a CVM environment, when the slave node joins the CVM cluster, it is possible that the plexes may not be in sync. In such a scenario auto-recovery is triggered for the plexes.  If a node is stopped using the hastop -all command when the auto-recovery is in progress, the vxrecover operation may hang. An option to disable auto-recovery is not available.

RESOLUTION:
The VxVM module is updated to allow administrators to disable auto-recovery when a slave node joins a CVM cluster.
A new tunable, auto_recover, is introduced. By default, the tunable is set to 'on' to trigger the auto-recovery. Set its value to 'off' to disable auto-recovery. Use the vxtune command to set the tunable.
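A minimal sketch of toggling the tunable, assuming the usual `vxtune <tunable> <value>` set syntax; the wrapper function name is ours, and the command degrades to a message on hosts without VxVM installed:

```shell
# Set the auto_recover tunable if the vxtune command is available.
set_auto_recover() {
    value="$1"   # 'on' (the default) or 'off'
    if command -v vxtune >/dev/null 2>&1; then
        vxtune auto_recover "$value"
    else
        echo "vxtune not found; skipping (requested auto_recover=$value)"
    fi
}

# Disable auto-recovery before a maintenance window, then restore it:
# set_auto_recover off
# set_auto_recover on
```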

* 3984731 (Tracking ID: 3984730)

SYMPTOM:
VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.

DESCRIPTION:
VxVM logs these warnings because the QUEUE_FLAG_REGISTERED and QUEUE_FLAG_INIT_DONE queue flags are not cleared while registering the dmpnode.
The following stack is reported after stopping or removing VxDMP for the first time after every reboot:
kernel: WARNING: CPU: 28 PID: 33910 at block/blk-core.c:619 blk_cleanup_queue+0x1a3/0x1b0
kernel: CPU: 28 PID: 33910 Comm: modprobe Kdump: loaded Tainted: P OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
kernel: Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 10/02/2018
kernel: Call Trace:
kernel: [<ffffffff9dd63107>] dump_stack+0x19/0x1b
kernel: [<ffffffff9d697768>] __warn+0xd8/0x100
kernel: [<ffffffff9d6978ad>] warn_slowpath_null+0x1d/0x20
kernel: [<ffffffff9d944b03>] blk_cleanup_queue+0x1a3/0x1b0
kernel: [<ffffffffc0cd1f3f>] dmp_unregister_disk+0x9f/0xd0 [vxdmp]
kernel: [<ffffffffc0cd7a08>] dmp_remove_mp_node+0x188/0x1e0 [vxdmp]
kernel: [<ffffffffc0cd7b45>] dmp_destroy_global_db+0xe5/0x2c0 [vxdmp]
kernel: [<ffffffffc0cde6cd>] dmp_unload+0x1d/0x30 [vxdmp]
kernel: [<ffffffffc0d0743a>] cleanup_module+0x5a/0xd0 [vxdmp]
kernel: [<ffffffff9d71692e>] SyS_delete_module+0x19e/0x310
kernel: [<ffffffff9dd75ddb>] system_call_fastpath+0x22/0x27
kernel: --[ end trace fd834bc7817252be ]--

RESOLUTION:
The handling of the queue flags is modified so that such warning messages are no longer logged.

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control is set on the I/O shipping channel, there is a window in the code where the vol_ioship_sender thread can enter a tight loop.
This causes the soft lockup.

RESOLUTION:
The vol_ioship_sender() thread now relinquishes the CPU so that other processes can be scheduled, and it restarts after a short delay.

* 3998693 (Tracking ID: 3998692)

SYMPTOM:
The earlier module failed to load on RHEL 7.8.

DESCRIPTION:
RHEL 7.8 is a new release, and hence the VxVM module needs to be recompiled with the RHEL 7.8 kernel.

RESOLUTION:
The VxVM module is now compiled with the RHEL 7.8 kernel bits.

Patch ID: VRTSvxvm-7.4.1.1300

* 3980679 (Tracking ID: 3980678)

SYMPTOM:
The earlier module failed to load on RHEL 7.7.

DESCRIPTION:
RHEL 7.7 is a new release, and hence the VxVM module needs to be recompiled with the RHEL 7.7 kernel.

RESOLUTION:
The VxVM module is now compiled with the RHEL 7.7 kernel bits.

Patch ID: VRTSvxvm-7.4.1.1200

* 3973076 (Tracking ID: 3968642)

SYMPTOM:
Intermittent vradmind hangs on the new VVR Primary.

DESCRIPTION:
During race conditions seen while migrating the Primary role, vradmind tried to acquire a pthread write lock while the corresponding read lock was already held, which led to intermittent vradmind hangs on the new Primary.

RESOLUTION:
The code is changed to minimize the window for which the read lock is held and to release it early, so that subsequent attempts to grab the write lock succeed.

* 3975897 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users,
which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permission for all users, which is a
security hole.
The files are created with the default rw-rw-rw- (666) permission because the umask
is set to 0 while creating them.

RESOLUTION:
The umask is changed to 022 while creating these files, and an incorrect open
system call is fixed. The log files now have rw-r--r-- (644) permissions.
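The effect of the umask on the mode of a newly created file can be demonstrated with a small standalone sketch (illustrative Python, not VxVM code):

```python
import os
import stat
import tempfile

def create_log(path, umask_value):
    # The mode actually applied is the requested 0o666 masked by the
    # process umask:
    #   umask 0    -> rw-rw-rw- (666), world-writable (the old bug)
    #   umask 022  -> rw-r--r-- (644), the fixed behaviour
    old = os.umask(umask_value)
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o666)
        os.close(fd)
    finally:
        os.umask(old)  # restore the previous umask
    return stat.S_IMODE(os.stat(path).st_mode)

d = tempfile.mkdtemp()
loose = create_log(os.path.join(d, "loose.log"), 0o000)
tight = create_log(os.path.join(d, "tight.log"), 0o022)
```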

* 3978184 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk,
because Linux supports creating a VG on a whole disk as well as on a partition of
a disk. This possibility was not handled in the code, so the output of
'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3978195 (Tracking ID: 3925345)

SYMPTOM:
/tmp/vx.* directories are frequently created.

DESCRIPTION:
/tmp/vx.* directories are frequently created due to a bug in the vxvolgrp command.

RESOLUTION:
The source of the vxvolgrp command has been changed to fix this issue.

* 3978208 (Tracking ID: 3969860)

SYMPTOM:
The event source daemon (vxesd) takes a long time to start when a large number of LUNs (around 1700) are attached to the system.

DESCRIPTION:
The event source daemon creates a configuration file, ddlconfig.info, with the help of the HBA API libraries. The configuration file is created by a child process while the parent process waits for the child to finish. If the number of LUNs is large, creating the configuration file takes correspondingly longer, so the parent process keeps waiting for the child process to complete the configuration and exit.

RESOLUTION:
The daemon is changed to create the ddlconfig.info file in the background and let the parent exit immediately.
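The fixed behaviour can be sketched generically (illustrative Python, not the vxesd implementation; a thread stands in for the daemon's child process, and the sleep stands in for the slow HBA/LUN scan):

```python
import os
import tempfile
import threading
import time

def build_config(path):
    # Stand-in for the slow HBA/LUN scan that builds the config file
    time.sleep(0.5)
    with open(path, "w") as f:
        f.write("lun-data\n")

def start_config_build(path):
    # The fixed behaviour: kick off config-file generation in the
    # background and return immediately, instead of blocking the
    # parent until the (potentially slow) creation finishes.
    worker = threading.Thread(target=build_config, args=(path,))
    worker.start()
    return worker

cfg = os.path.join(tempfile.mkdtemp(), "ddlconfig.info")
start = time.monotonic()
worker = start_config_build(cfg)
startup_delay = time.monotonic() - start  # the parent is not held up
worker.join()  # the example waits only so the file can be verified
```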

* 3978678 (Tracking ID: 3907596)

SYMPTOM:
The vxdmpadm setattr command gives the following error while setting the path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux change after the system is rebooted, so the persistent attributes of a device are stored using its persistent
hardware path. The hardware paths are stored as symbolic links in the /dev/vx/.dmp directory and are obtained from
/dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changed to path_id_compat.
Because of the changed command, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the
persistent attributes could not be set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.

* 3979375 (Tracking ID: 3973364)

SYMPTOM:
In the case of the VVR (Veritas Volume Replicator) synchronous mode of replication with the TCP protocol, if there are any network issues,
I/Os may hang for up to 15-20 minutes.

DESCRIPTION:
In the VVR synchronous replication mode, if a node on the primary site does not receive the ACK (acknowledgement) message sent from the secondary
within the TCP timeout period, I/O may hang until the TCP layer detects a timeout, which takes about 15-20 minutes.
This issue may happen frequently in a lossy network where the ACKs cannot be delivered to the primary because of network issues.

RESOLUTION:
A hidden tunable 'vol_vvr_tcp_keepalive' is added to allow users to enable TCP 'keepalive' for VVR data ports if the TCP timeout happens frequently.
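The keepalive mechanism that the tunable enables can be shown on an ordinary socket (illustrative Python, not VVR code; the probe values are example settings, and the per-probe knobs are Linux-specific):

```python
import socket

def enable_keepalive(sock, idle=75, interval=15, count=4):
    # Turn on TCP keepalive so that a dead peer is detected by probe
    # timeouts instead of waiting ~15-20 minutes for TCP's
    # retransmission timeout to give up on its own.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The probe-tuning options are Linux-specific; guard their use.
    if (hasattr(socket, "TCP_KEEPIDLE") and
            hasattr(socket, "TCP_KEEPINTVL") and
            hasattr(socket, "TCP_KEEPCNT")):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enabled = enable_keepalive(s)
s.close()
```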

* 3979397 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop", by design, cannot stop the iostat gathering
persistently. To avoid performance and memory crunch related issues, it is
generally recommended to stop the iostat gathering. Therefore, there is a
requirement to provide the ability to stop/start the iostat gathering
persistently in those cases.

DESCRIPTION:
Today, the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
is not a persistent setting. After a reboot the setting is lost, so customers
also have to put the command in the init scripts at an appropriate place for a
persistent effect.

RESOLUTION:
The code is modified to provide a tunable, dmp_compute_iostats, which can
start/stop the iostat gathering persistently.

Notes:
Use the following command to start or stop the iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on/off

* 3979398 (Tracking ID: 3955979)

SYMPTOM:
In the case of the synchronous mode of replication with TCP, if there are any network related issues,
I/Os may hang for up to 15-30 minutes.

DESCRIPTION:
When synchronous replication is used and, because of some network issues, the secondary is not able
to send the network ACKs to the primary, I/O hangs on the primary waiting for these ACKs. In the
TCP mode, VVR depends on TCP for the timeout to happen before the I/Os are drained out; since there is no
handling from the VVR side, the I/Os hang until TCP triggers its timeout, which normally happens within 15-30 minutes.

RESOLUTION:
The code is changed to allow users to set the TCP time within which the timeout
should be triggered.
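On Linux, the socket-level building block for such a user-configurable timeout is the TCP_USER_TIMEOUT option; a generic sketch follows (illustrative Python, not VVR code, and the 60-second value is only an example):

```python
import socket

def set_tcp_timeout(sock, timeout_ms):
    # TCP_USER_TIMEOUT caps how long transmitted data may remain
    # unacknowledged before the connection is forcibly timed out,
    # instead of relying on TCP's default give-up behaviour, which
    # can take 15-30 minutes.  The option is Linux-specific, so fall
    # back gracefully on other platforms.
    if not hasattr(socket, "TCP_USER_TIMEOUT"):
        return None
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, timeout_ms)
    return sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
applied = set_tcp_timeout(s, 60000)  # 60 seconds, in milliseconds
s.close()
```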

* 3979440 (Tracking ID: 3947265)

SYMPTOM:
VxFen tends to fail, which creates split-brain issues.

DESCRIPTION:
Currently, to check whether InfiniBand devices are present, the fencing module
checks for certain kernel modules, which are present by default on RHEL 7.4
regardless of the hardware.

RESOLUTION:
To check for InfiniBand devices, the module now checks for the /sys/class/infiniband
directory, which is populated with device information when InfiniBand
devices are present.

* 3979462 (Tracking ID: 3964964)

SYMPTOM:
vxnetd gets into a soft lockup when a port scan tool keeps sending packets over the network. The call trace looks like the following:

kmsg_sys_poll+0x6c/0x180 [vxio]
? poll_initwait+0x50/0x50
? poll_select_copy_remaining+0x150/0x150
? poll_select_copy_remaining+0x150/0x150
? schedule+0x29/0x70
? schedule_timeout+0x239/0x2c0
? task_rq_unlock+0x1a/0x20
? _raw_spin_unlock_bh+0x1e/0x20
? first_packet_length+0x151/0x1d0
? udp_ioctl+0x51/0x80
? inet_ioctl+0x8a/0xa0
? kmsg_sys_rcvudata+0x7e/0x170 [vxio]
nmcom_server_start+0x7be/0x4810 [vxio]

DESCRIPTION:
When a non-NMCOM packet is received, vxnetd skips it and goes back to poll for more packets without giving up the CPU. If such packets keep arriving, vxnetd may get into a soft lockup state.

RESOLUTION:
A small delay has been added to vxnetd to fix the issue.
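The shape of the fix can be sketched generically (illustrative Python, not the vxnetd implementation; the NMCOM prefix check and the delay value are stand-ins):

```python
import time

def receive_loop(packets, handler, yield_delay=0.001):
    # Drain a burst of packets.  Foreign (non-NMCOM) packets are
    # skipped, but the loop briefly sleeps first so that a flood of
    # such packets from a port scanner cannot monopolize the CPU.
    handled = 0
    for pkt in packets:
        if not pkt.startswith(b"NMCOM"):
            time.sleep(yield_delay)  # the fix: give up the CPU briefly
            continue
        handler(pkt)
        handled += 1
    return handled

seen = []
count = receive_loop([b"NMCOM:hb", b"scan-junk", b"NMCOM:data"], seen.append)
```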

* 3979471 (Tracking ID: 3915523)

SYMPTOM:
A local disk belonging to a private DG on another node is exported to the current node
when a private DG is imported on the current node.

DESCRIPTION:
When a DG is imported, all the disks belonging to the DG are automatically
exported to the current node to make sure that the DG import succeeds. This is
done so that local disks behave the same way as SAN disks. Because all the disks
in the DG are exported based on the DG name, disks that belong to a different
private DG with the same name on another node also get exported to the current
node. This leads to the wrong disk being selected while the DG is imported.

RESOLUTION:
Instead of the DG name, the DGID (disk group ID) is now used to decide whether a
disk needs to be exported.
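The selection logic can be sketched generically (illustrative Python, not VxVM code; the disk names and DGID strings are made up):

```python
def disks_to_export(disks, dg_id):
    # Select disks by the globally unique disk group ID rather than
    # by the DG name, which can be duplicated across the private
    # disk groups of different nodes and would pull in wrong disks.
    return [d["disk"] for d in disks if d["dgid"] == dg_id]

disks = [
    {"disk": "sdb", "dgname": "datadg", "dgid": "1570000001.11.node1"},
    # Same DG name, but a different node's private DG:
    {"disk": "sdc", "dgname": "datadg", "dgid": "1580000002.12.node2"},
]
selected = disks_to_export(disks, "1570000001.11.node1")
```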

* 3979475 (Tracking ID: 3959986)

SYMPTOM:
Some I/Os may not be written to disk when the vxencryptd daemon is restarted.

DESCRIPTION:
When the vxencryptd daemon is restarted, some of the I/Os still waiting in the
pending queue are lost and are not written to the underlying disk.

RESOLUTION:
The code is changed to restart the I/Os in the pending queue once vxencryptd is started.
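The corrected behaviour can be sketched generically (illustrative Python, not the vxencryptd implementation):

```python
from collections import deque

class PendingQueue:
    # Minimal sketch of the fixed behaviour: I/Os still sitting in
    # the pending queue are re-driven when the daemon restarts,
    # instead of being silently dropped.
    def __init__(self):
        self.pending = deque()
        self.written = []

    def submit(self, io):
        self.pending.append(io)

    def restart(self):
        # Re-drive everything that was queued before the restart.
        while self.pending:
            self.written.append(self.pending.popleft())

q = PendingQueue()
for io in ("io-1", "io-2", "io-3"):
    q.submit(io)
q.restart()  # simulated daemon restart
```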

* 3979476 (Tracking ID: 3972679)

SYMPTOM:
vxconfigd kept crashing and could not start up, with the following stack:
(gdb) bt
#0 0x000000000055e5c2 in request_loop ()
#1 0x0000000000479f06 in main ()
The corresponding disassembled code is as follows:
0x000000000055e5ae <+1160>: callq 0x55be64 <vold_request_poll_unlock>
0x000000000055e5b3 <+1165>: mov 0x582aa6(%rip),%rax # 0xae1060
0x000000000055e5ba <+1172>: mov (%rax),%rax
0x000000000055e5bd <+1175>: test %rax,%rax
0x000000000055e5c0 <+1178>: je 0x55e60e <request_loop+1256>
=> 0x000000000055e5c2 <+1180>: mov 0x65944(%rax),%edx

DESCRIPTION:
vxconfigd treats the message buffer as valid as long as it is non-NULL. When vxconfigd failed to get shared memory for the message buffer, the OS returned -1.
In this case, vxconfigd accessed an invalid address and caused a segmentation fault.

RESOLUTION:
The code changes are done to check the message buffer properly before accessing it.
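The underlying pitfall is that C's shmat() reports failure by returning (void *)-1, which is non-NULL, so a NULL-only check lets an invalid address through. A generic sketch of the corrected validation (illustrative Python, not vxconfigd code; the function names are made up):

```python
def attach_message_buffer(raw_attach):
    # A C-style attach call (such as shmat()) signals failure with
    # (void *)-1 rather than NULL, so checking only for NULL/None
    # would accept an invalid address whose later dereference
    # causes the segmentation fault described above.
    addr = raw_attach()
    if addr is None or addr == -1:
        return None  # reject both failure sentinels
    return addr

ok = attach_message_buffer(lambda: 0x7F00AB10)   # valid address
bad = attach_message_buffer(lambda: -1)          # shmat-style failure
```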

* 3979656 (Tracking ID: 3975405)

SYMPTOM:
cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.

DESCRIPTION:
When a slave node initiates a write request on an RVG, the I/O is shipped to the master node (VVR write-ship). If the I/O fails, the VKE_EIO error is passed back from the master node as the response for the write-ship request. Because this error is not handled, VxVM continues to retry the I/O operation.

RESOLUTION:
VxVM is updated to handle the VKE_EIO error properly.

* 3980457 (Tracking ID: 3980609)

SYMPTOM:
The logowner node at the DR secondary site gets rebooted.

DESCRIPTION:
Freed memory is accessed in the server thread code path on the secondary site.

RESOLUTION:
Code changes have been made to fix the access to freed memory.

* 3981028 (Tracking ID: 3978330)

SYMPTOM:
The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.

DESCRIPTION:
Some changes were made in the Linux kernel from version 4.4 onwards, due to which the values of these tunables could not persist after a reboot.

RESOLUTION:
VxVM has been updated to make the tunable values persistent upon reboot.

Patch ID: VRTSaslapm-7.4.1.2100

* 3998696 (Tracking ID: 3998695)

SYMPTOM:
Support for ASLAPM on RHEL 7.8 kernel

DESCRIPTION:
RHEL 7.8 is a new release, and hence the APM module
needs to be recompiled with the new kernel.

RESOLUTION:
The APM module is now compiled with the new kernel.

Patch ID: VRTSdbac-7.4.1.1900

* 3998681 (Tracking ID: 3998676)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 7.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

Patch ID: VRTSdbac-7.4.1.1200

* 3982218 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 6.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

Patch ID: VRTSamf-7.4.1.1900

* 3998680 (Tracking ID: 3998676)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 7.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

Patch ID: VRTSamf-7.4.1.1200

* 3982217 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 6.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

Patch ID: VRTSvxfen-7.4.1.1900

* 3998679 (Tracking ID: 3998676)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 7.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

Patch ID: VRTSvxfen-7.4.1.1300

* 3982216 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 6.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

Patch ID: VRTSvxfen-7.4.1.1200

* 3977310 (Tracking ID: 3974739)

SYMPTOM:
VxFen should be able to identify SCSI3 disks in Nutanix environments.

DESCRIPTION:
The fencing module did not work in Nutanix environments, because it was not able to uniquely identify the Nutanix disks.

RESOLUTION:
VxFen has been updated to correctly identify Nutanix disks so that fencing can work in Nutanix environments.

Patch ID: VRTSgab-7.4.1.1900

* 3998678 (Tracking ID: 3998676)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 7.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 8 (RHEL 7.8).

Patch ID: VRTSgab-7.4.1.1200

* 3982215 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL 7 Update 6.

RESOLUTION:
Veritas InfoScale Availability now supports Red Hat Enterprise Linux 7
Update 7 (RHEL 7.7).



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel7_x86_64-Patch-7.4.1.1900.tar.gz to /tmp
2. Untar infoscale-rhel7_x86_64-Patch-7.4.1.1900.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel7_x86_64-Patch-7.4.1.1900.tar.gz
    # tar xf /tmp/infoscale-rhel7_x86_64-Patch-7.4.1.1900.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P1900 [<host1> <host2>...]

You can also install this patch together with the 7.4.1 base release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
1. When using VIOM, if any issue is seen related to the vxlist functionality, download and install the following VIOM patch:

 https://www.veritas.com/content/support/en_US/downloads/update.UPD890188

2. When using yum repodata for installing a patch:

   a. Run the following commands to refresh the YUM repository:

      # yum repolist
      # yum updateinfo

   b. Run the following command to update the patches:

      # yum upgrade VRTS*


OTHERS
------
NONE