infoscale-rhel7_x86_64-Patch-7.4.0.1700

 Basic information
Release type: Patch
Release date: 2020-04-24
OS update support: RHEL7 x86-64 Update 7
Technote: None
Documentation: None
Popularity: 3739 viewed
Download size: 265.95 MB
Checksum: 3180853543

 Applies to one or more of the following products:
InfoScale Availability 7.4 On RHEL7 x86-64
InfoScale Enterprise 7.4 On RHEL7 x86-64
InfoScale Foundation 7.4 On RHEL7 x86-64
InfoScale Storage 7.4 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vm-rhel7_x86_64-Patch-7.4.0.1600 (obsolete), released 2019-07-15

 Fixes the following incidents:
3948761, 3949320, 3949322, 3949500, 3949506, 3949507, 3949508, 3949509, 3949510, 3950578, 3950740, 3950760, 3950799, 3951488, 3951744, 3952340, 3953235, 3958838, 3958854, 3958867, 3958884, 3958887, 3958976, 3959065, 3959204, 3959302, 3959306, 3959433, 3959996, 3964083, 3966524, 3966896, 3966920, 3966973, 3967002, 3967004, 3967006, 3967030, 3967032, 3967089, 3967098, 3967099, 3967343, 3967344, 3967345, 3967346, 3967347, 3970687, 3974224, 3974225, 3974226, 3974227, 3974228, 3974230, 3974231, 3975899, 3976572, 3978013, 3978151, 3979382, 3979385, 3980907, 3982792, 3983033, 3983040, 3983331, 3990034, 3990047, 3990102, 3990131, 3990140, 3990245, 3990247, 3990264, 3990358, 3990359, 3996074, 3996078, 3996090, 3996290, 3996296, 3996307, 3996563, 3996662, 3998388, 3998652, 3998653, 3998654, 3998655, 3998656, 3998688, 3998689

 Patch ID:
VRTSllt-7.4.0.1500-RHEL7
VRTSgab-7.4.0.1400-RHEL7
VRTSvxfen-7.4.0.1400-RHEL7
VRTSamf-7.4.0.1400-RHEL7
VRTSdbac-7.4.0.1400-RHEL7
VRTSvxfs-7.4.0.1600-RHEL7
VRTSodm-7.4.0.1600-RHEL7
VRTSvxvm-7.4.0.1700-RHEL7
VRTSaslapm-7.4.0.1700-RHEL7

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 7.4 * * *
                         * * * Patch 1700 * * *
                         Patch Date: 2020-03-27


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4 Patch 1700


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSllt
VRTSodm
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4
   * InfoScale Enterprise 7.4
   * InfoScale Foundation 7.4
   * InfoScale Storage 7.4


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSllt-7.4.0.1500
* 3998652 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSllt-7.4.0.1400
* 3967343 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSllt-7.4.0.1200
* 3948761 (3948507) If RDMA dependencies are not fulfilled by the setup, the LLT
init/systemctl script should load the non-RDMA module.
Patch ID: VRTSodm-7.4.0.1600
* 3990359 (3981630) ODM module failed to load on RHEL7.7
* 3996296 (3897161) Oracle Database on Veritas filesystem with Veritas ODM
library has high log file sync wait time.
Patch ID: VRTSodm-7.4.0.1400
* 3958867 (3958865) ODM module failed to load on RHEL7.6.
Patch ID: VRTSodm-7.4.0.1200
* 3953235 (3953233) After installing the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux)
patch, the Oracle Disk Management (ODM) module fails to load.
Patch ID: VRTSvxfs-7.4.0.1600
* 3976572 (3973628) Panic while mounting VXFS filesystem on SLES.
* 3978151 (3978149) FIFO file's timestamps are not updated in case of writes.
* 3982792 (3980043) A file system corruption occurred during a filesystem mount operation.
* 3983033 (3955957) File data Corruption might be reported in a tight race between writer thread and dalloc background flusher thread.
* 3983040 (3980741) Code changes to prevent data corruption in delayed allocation enabled filesystem.
* 3983331 (3983330) On RHEL6, the server panics if auditd is enabled and the filesystem is unmounted and mounted again.
* 3990034 (3980754) In function vx_io_proxy_thread(), system may hit kernel panic due to general protection fault.
* 3990047 (3985839) Cluster hang is observed during allocation of extent to file because of lost delegation of AU.
* 3990102 (3969280) Buffer not getting invalidated or marked stale in transaction failure error code path for Large Directory Hash (LDH) feature
* 3990131 (3973668) On Linux, during system startup, boot.vxfs fails to load the VxFS modules and reports an error.
* 3990140 (3978305) vxupgrade command causes VxFS to panic.
* 3990245 (3979693) Fix for vxupgrade failing to upgrade from DLV 7 to 8 and returning EINVAL
* 3990247 (3981190) Negative nr-inodes entries are seen on RHEL6 platform.
* 3990264 (3975533) Kernel panic in vx_felrcy_space_reserve
* 3990358 (3981627) VxFS module failed to load on RHEL7.7.
* 3996290 (3902600) Contention observed on vx_worklist_lk lock in cluster 
mounted file system with ODM
* 3996307 (3990830) File system detected inconsistency with link count table and FSCK flag gets set on the file system.
* 3996563 (3879310) The file system may get corrupted after a failed vxupgrade.
Patch ID: VRTSvxfs-7.4.0.1400
* 3958854 (3958853) VxFS module failed to load on RHEL7.6.
* 3959065 (3957285) job promote operation from replication target node fails.
* 3959302 (3959299) Improve file creation time on systems with SELinux enabled.
* 3959306 (3959305) Fix a bug in security attribute initialisation of files with named attributes.
* 3959996 (3938256) When checking file size through seek_hole, it will return incorrect offset/size 
when delayed allocation is enabled on the file.
* 3964083 (3947421) DLV upgrade operation fails while upgrading Filesystem from DLV 9 to DLV 10.
* 3966524 (3966287) During the multiple mounts of Shared CFS mount points, more than 100MB is consumed per mount
* 3966896 (3957092) System panic with spin_lock_irqsave through splunkd.
* 3966920 (3925281) Hexdump the incore inode data and piggyback data when inode 
revalidation fails.
* 3966973 (3952837) VxFS mount fails during startup because its dependency, autofs.service, is not up.
* 3967002 (3955766) CFS hang observed during extent allocation.
* 3967004 (3958688) System panic when VxFS got force unmounted.
* 3967006 (3934175) 4-node FSS CFS experienced IO hung on all nodes.
* 3967030 (3947433) While adding a volume (part of vset) in already mounted filesystem, fsvoladm
displays error.
* 3967032 (3947648) Mistuning of vxfs_ninode and vx_bc_bufhwm to very small 
value.
* 3967089 (3908785) System panic observed because of null page address in writeback 
structure in case of 
kswapd process.
Patch ID: VRTSvxfs-7.4.0.1200
* 3949500 (3949308) In a scenario where FEL caching is enabled, application I/O on a file does not
proceed when file system utilization is 100%.
* 3949506 (3949501) When SmartIO FEL-based writeback caching is enabled, the "sfcache offline"
command hangs during Inode deinit.
* 3949507 (3949502) When SmartIO FEL-based writeback caching is enabled, memory leak of few bytes
may happen during node reconfiguration.
* 3949508 (3949503) When FEL is enabled in CFS environment, data corruption may occur after node
recovery.
* 3949509 (3949504) When SmartIO FEL-based writeback caching is enabled, memory leak of few bytes
may happen during node reconfiguration.
* 3949510 (3949505) When SmartIO FEL-based writeback caching is enabled, I/O operations on a file in
filesystem can result in panic after cluster reconfiguration.
* 3950740 (3953165) Messages in syslog are seen with message string "0000" for VxFS module.
* 3952340 (3953148) Due to space reservation mechanism, extent delegation deficit may be seen on a node, although it may not be 
using any feature involving space reservation mechanism like CFS Delayed allocation/FEL.
Patch ID: VRTSvxvm-7.4.0.1700
* 3996074 (3947178) System panicked while removing VxVM(Veritas Volume Manager) package.
* 3996078 (3852146) Shared DiskGroup (DG) fails to import when "-c" and "-o noreonline" options are specified together.
* 3996090 (3983159) System panic encountered while doing IO in FSS (Flexible Storage Sharing) Environment.
* 3996662 (3987937) VxVM command hang may happen when snapshot volume is configured.
* 3998388 (3991580) Deadlock may happen if IO performed on both source and snapshot volumes.
* 3998688 (3980678) VxVM support on RHEL 7.7
Patch ID: VRTSvxvm-7.4.0.1600
* 3951744 (3951938) Retpoline support for the ASLAPM rpm on RHEL6.10 and RHEL6.x retpoline kernels.
* 3974224 (3973364) I/O hang may occur when VVR Replication is enabled in synchronous mode.
* 3974225 (3968279) Vxconfigd dumping core for NVME disk setup.
* 3974226 (3948140) System panic can occur if size of RTPG (Report Target Port Groups) data returned
by underlying array is greater than 255.
* 3974227 (3934700) VxVM is not able to recognize AIX LVM disk with 4k sector.
* 3974228 (3969860) Event source daemon (vxesd) takes a lot of time to start when lot of LUNS (around 1700) are attached to the system.
* 3974230 (3915523) Local disk from another node belonging to a private DG (disk group) is exported to the node when a private DG is imported on the current node.
* 3974231 (3955979) I/O may hang in case of synchronous replication.
* 3975899 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3978013 (3907800) VxVM package installation will fail on SLES12 SP2.
* 3979382 (3950373) Allow a slave node to be configured as logowner in CVR configurations
* 3979385 (3953711) Panic observed while switching the logowner to slave while IO's are in progress
* 3980907 (3978343) Negative IO count while switching logowner
Patch ID: VRTSvxvm-7.4.0.1500
* 3970687 (3971046) Replication does not switch between synchronous and asynchronous mode automatically based on the network conditions.
Patch ID: VRTSvxvm-7.4.0.1400
* 3949320 (3947022) VVR: vxconfigd hang during /scripts/configuratio/assoc_datavol.tc#6
* 3958838 (3953681) Data corruption issue is seen when more than one plex of volume is detached.
* 3958884 (3954787) Data corruption may occur in GCO along with FSS environment on RHEL 7.5 Operating system.
* 3958887 (3953711) Panic observed while switching the logowner to slave while IO's are in progress
* 3958976 (3955101) Panic observed in GCO environment (cluster to cluster replication) during replication.
* 3959204 (3949954) Dumpstack messages are printed when the vxio module is loaded for the first time and blk_register_queue is called.
* 3959433 (3956134) System panic might occur when IO is in progress in VVR (veritas volume replicator) environment.
* 3967098 (3966239) IO hang observed while copying data in cloud environments.
* 3967099 (3965715) vxconfigd may core dump when VIOM(Veritas InfoScale Operation Manager) is enabled.
Patch ID: VRTSvxvm-7.4.0.1200
* 3949322 (3944259) The vradmin verifydata and vradmin ibc commands fail on private diskgroups
with Lost connection error.
* 3950578 (3953241) Messages in syslog are seen with message string "0000" for VxVM module.
* 3950760 (3946217) In a scenario where encryption over wire is enabled and secondary logging is
disabled, vxconfigd hangs and replication does not progress.
* 3950799 (3950384) In a scenario where volume encryption at rest is enabled, data corruption may
occur if the file system size exceeds 1TB.
* 3951488 (3950759) The application I/Os hang if volume-level I/O shipping is enabled and the volume layout is mirror-concat or mirror-stripe.
Patch ID: VRTSaslapm-7.4.0.1700
* 3998689 (3988286) ASLAPM rpm Support on RHEL 7.7 kernel
Patch ID: VRTSdbac-7.4.0.1400
* 3998656 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSdbac-7.4.0.1300
* 3967347 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSamf-7.4.0.1400
* 3998655 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSamf-7.4.0.1300
* 3967346 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSvxfen-7.4.0.1400
* 3998654 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvxfen-7.4.0.1300
* 3967345 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSgab-7.4.0.1400
* 3998653 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSgab-7.4.0.1300
* 3967344 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSllt-7.4.0.1500

* 3998652 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

Patch ID: VRTSllt-7.4.0.1400

* 3967343 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6 which has RETPOLINE kernel, and also released RETPOLINE kernels for older RHEL 7.x Updates. Veritas Cluster Server 
kernel modules need to be recompiled with RETPOLINE aware GCC to support RETPOLINE kernel.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSllt-7.4.0.1200

* 3948761 (Tracking ID: 3948507)

SYMPTOM:
LLT loads the RDMA module during its configuration, even if RDMA
dependencies are not fulfilled by the setup.

DESCRIPTION:
LLT loads the RDMA module during its configuration, even if RDMA
dependencies are not fulfilled by the setup.

Moreover, the user is unable to manually unload the IB modules. This issue
occurs because the LLT_RDMA module holds a use count on the ib_core module even
though LLT is not configured to work over RDMA.

RESOLUTION:
LLT now loads the non-RDMA module if RDMA dependency fails during
configuration.

Patch ID: VRTSodm-7.4.0.1600

* 3990359 (Tracking ID: 3981630)

SYMPTOM:
ODM module failed to load on RHEL7.7

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that prevented the ODM module from loading on it.

RESOLUTION:
Added code to support ODM on RHEL7.7.

* 3996296 (Tracking ID: 3897161)

SYMPTOM:
Oracle Database on Veritas filesystem with Veritas ODM library has high
log file sync wait time.

DESCRIPTION:
The ODM_IOP lock is not held for long, so instead of attempting a trylock and deferring the I/O
when the trylock fails, it is better to take the blocking lock and finish the I/O in the interrupt
context. This is safe on Solaris because this "sleep" lock is actually an adaptive mutex.

RESOLUTION:
Call ODM_IOP_LOCK() instead of ODM_IOP_TRYLOCK() in odm_iodone and finish the I/O there.
With this fix, no I/O is deferred.
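
For illustration, the trade-off between the two locking patterns can be sketched in userspace C with pthreads. The names (iop_lock, iodone_trylock, iodone_lock) are hypothetical and this is only an analogy; the actual fix operates in the kernel interrupt context with an adaptive mutex, not a pthread mutex.

/* Illustrative only: contrasts trylock-and-defer with lock-and-complete. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t iop_lock = PTHREAD_MUTEX_INITIALIZER;

struct fake_io { int id; };

static void complete_io(struct fake_io *io) { printf("io %d completed\n", io->id); }
static void defer_io(struct fake_io *io)    { printf("io %d deferred to worker\n", io->id); }

/* Old pattern: try the lock; if it is busy, hand the I/O off to a worker,
 * which adds latency to every contended completion. */
static void iodone_trylock(struct fake_io *io)
{
    if (pthread_mutex_trylock(&iop_lock) == 0) {
        complete_io(io);
        pthread_mutex_unlock(&iop_lock);
    } else {
        defer_io(io);
    }
}

/* New pattern: take the lock unconditionally (it is held only briefly)
 * and finish the I/O in place, so nothing is deferred. */
static void iodone_lock(struct fake_io *io)
{
    pthread_mutex_lock(&iop_lock);
    complete_io(io);
    pthread_mutex_unlock(&iop_lock);
}

int main(void)
{
    struct fake_io a = { 1 }, b = { 2 };
    iodone_trylock(&a);
    iodone_lock(&b);
    return 0;
}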

Patch ID: VRTSodm-7.4.0.1400

* 3958867 (Tracking ID: 3958865)

SYMPTOM:
ODM module failed to load on RHEL7.6.

DESCRIPTION:
RHEL7.6 is a new release; therefore, the ODM module failed to load on it.

RESOLUTION:
Added ODM support for RHEL7.6.

Patch ID: VRTSodm-7.4.0.1200

* 3953235 (Tracking ID: 3953233)

SYMPTOM:
After installing the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux)
patch, the Oracle Disk Management (ODM) module fails to load.

DESCRIPTION:
As part of the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux) patch,
the VxFS version has been updated to 7.4.0.1200.
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSvxfs-7.4.0.1600

* 3976572 (Tracking ID: 3973628)

SYMPTOM:
Machine panics while mounting a VxFS file system.

DESCRIPTION:
System panics during a VxFS mount with the latest SLES12 SP3 kernel (4.4.175-94.79).
Due to a recent code change in the Linux kernel, the i_sb field of struct inode is dereferenced inside
the Linux kernel function truncate_inode_pages_final(), which is called by VxFS as part of clearing
the inode. The panic stack looks something like this:
truncate_inode_pages_final()
vx_osrel_clear_inode()
vx_idetach()
vx_getnewvnode()
vx_get_dummy_inode()
vx_fill_super()

RESOLUTION:
A code change has been made to call truncate_inode_pages_final() conditionally, which avoids the panic.

* 3978151 (Tracking ID: 3978149)

SYMPTOM:
When a FIFO file is created on a VxFS file system, its timestamps are not updated when writes are done to it.

DESCRIPTION:
In the write context, the Linux kernel calls the update_time inode operation to update the timestamp
fields. This operation was not implemented in VxFS.

RESOLUTION:
Implemented the update_time inode operation in VxFS.

* 3982792 (Tracking ID: 3980043)

SYMPTOM:
During a filesystem mount operation, after the Intent log replay, a file system metadata corruption occurred.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay is updated to synchronously write secondary map and EAU summary to the disk.

* 3983033 (Tracking ID: 3955957)

SYMPTOM:
In write scenarios, due to a tight race between writer threads and background flusher threads, file corruption might be observed and no blocks may be allocated to the file.

DESCRIPTION:
It is a race between a writer thread and the dalloc background flusher threads, since multiple flusher threads are allowed to pick the same inode. Once the first
background flusher thread passes through vx_putpage_dirty and sets i_sizetime to zero, it opens the door for other flusher threads to pick the same inode.
Suppose one flusher thread has already flushed the pages, reset the IDALLOC flag, and removed the inode from the dalloc list, while another flusher thread is
sleeping in the vx_putpage_dirty code path. Meanwhile, a writer thread might adjust i_offa and i_enda to new values and set IDALLOC. Once the writer thread releases
the GLOCK and the second flusher thread wakes up and takes the GLOCK, it simply checks the IDALLOC flag, resets it, and removes the inode from the dalloc list. This can leave the inode in a state where i_offa and i_enda still indicate that it needs allocation and page flushing is still pending, but the IDALLOC flag is not set, so the inode never gets picked up again for flushing. This leads to the file corruption scenario.

RESOLUTION:
Code changes have been done to reset IDALLOC flag only in case the allocations are done completely.

* 3983040 (Tracking ID: 3980741)

SYMPTOM:
File data can be lost in a race scenario between two dalloc background flushers.

DESCRIPTION:
In a race between two dalloc background flushers, the data may be flushed to disk without updating the file size accordingly, which creates a scenario where some bytes of data are lost.

RESOLUTION:
Code changes have been done in dalloc code path to remove the possibility of flushing the data without updating the on-disk size.

* 3983331 (Tracking ID: 3983330)

SYMPTOM:
If auditd is enabled on a VxFS file system and the file system is unmounted, the server might panic when the file system is mounted again or when auditd is disabled.

machine_kexec at ffffffff81040f1b
crash_kexec at ffffffff810d6722
__do_page_fault at ffffffff81054f7c
do_page_fault at ffffffff8156029e 
page_fault at ffffffff8155d265
[exception RIP: pin_inotify_watch+20]		
untag_chunk at ffffffff810f3771
 prune_one at ffffffff810f3bb5
 prune_tree_thread at ffffffff810f3c3f

or

do_page_fault at ffffffff8156029e
page_fault at ffffffff8155d265
[exception RIP: vx_ireuse_clean+796]
vx_ireuse_clean at ffffffffa09492f6 [vxfs]
 vx_iget at ffffffffa094ba0b [vxfs]

DESCRIPTION:
If auditd is enabled on a VxFS file system and the file system is unmounted, inotify watches are still present on the root inode. When this inode is being reused, or the OS tries to clean up its iwatch tree, the server panics.

RESOLUTION:
Code changes have been done to clear inotify watches from root inode.

* 3990034 (Tracking ID: 3980754)

SYMPTOM:
In function vx_io_proxy_thread(), system may hit kernel panic due to general protection fault.

DESCRIPTION:
In function vx_io_proxy_thread(), a value is being saved into memory through the uninitialized pointer. This may result in memory corruption.

RESOLUTION:
Function vx_io_proxy_thread() is changed to use the pointer after initializing it.

* 3990047 (Tracking ID: 3985839)

SYMPTOM:
Cluster hang is observed while allocating an extent larger than 32K blocks to a file.

DESCRIPTION:
When a request for an allocation of more than 32K blocks to a file arrives from a secondary node, VxFS sends the request to the primary. To serve this request, the primary node starts allocating an extent (or AU) based on the last successful allocation unit number. VxFS delegates AUs to all nodes, including the primary, and releases these delegations after some time (10 seconds). There is a three-way race between the delegation release thread, the allocator thread, and the extent removal thread. If the delegation release thread picks an AU to release its delegation and, in the interim, the allocator thread picks the same AU, the allocator thread allocates an extent from this AU and changes the AU state. If another thread then removes this extent, it races with the delegation release thread. This causes the delegation of that AU to be lost without the allocator engine recognizing it. A subsequent write on that AU hangs, which later causes a system hang.

RESOLUTION:
Code is modified to serialize these operations which will avoid the race.

* 3990102 (Tracking ID: 3969280)

SYMPTOM:
Buffer not getting invalidated or marked stale in transaction failure error code path for Large Directory Hash (LDH) feature

DESCRIPTION:
Buffers allocated in Large Directory Hash (LDH) feature code path are not invalidated or marked stale, if the transaction commit fails.

RESOLUTION:
Code changed to ensure correct invalidation of buffers in transaction undo routine for Large Directory Hash (LDH) feature code path.

* 3990131 (Tracking ID: 3973668)

SYMPTOM:
Following error is thrown by modinst script:
/etc/vx/modinst-vxfs: line 251: /var/VRTS/vxfs/sort.xxx: No such file or directory

DESCRIPTION:
After the changes made through e3935401, the files created by the modinst-vxfs.sh script are dumped in
/var/VRTS/vxfs. If '/var' happens to be a separate file system, it is mounted by the boot.localfs script.
boot.localfs starts after boot.vxfs (evident from the boot logs).
Hence the file creation fails and boot.vxfs does not load the modules.

RESOLUTION:
Adding a dependency on boot.localfs in the LSB header of boot.vxfs causes boot.localfs to run before boot.vxfs, thereby fixing the issue.

* 3990140 (Tracking ID: 3978305)

SYMPTOM:
vxupgrade command causes VxFS to panic.

DESCRIPTION:
When vxupgrade command is executed, VxFS incorrectly accesses the freed memory, and then it panics if the memory is paged-out.

RESOLUTION:
The code is modified to make sure that VXFS does not access the freed memory locations.

* 3990245 (Tracking ID: 3979693)

SYMPTOM:
vxupgrade fails while upgrading from DLV 7 to DLV 8 with following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/dg_share/sq_informatica - Invalid argument

DESCRIPTION:
While doing allocation for RCQ inodes as part of vxupgrade from DLV 7 to 8, only the first RCQ inode 
allocation should be done from initial ilist extents and rest can be allocated from anywhere. In order to 
implement special allocation for first RCQ inode, VX_UPG_IALLOC_IEXT1 flag should be checked in vx_upg_ialloc(). 
We missed this check which resulted in all RCQ inodes being allocated in same way. 
Since, vx_upg_olt_inoalloc() code only handles allocation for first RCQ inode and 
not others, it returned EINVAL.

RESOLUTION:
Added the check of flag  VX_UPG_IALLOC_IEXT1 in vx_upg_ialloc().

* 3990247 (Tracking ID: 3981190)

SYMPTOM:
Negative nr-inodes entries are seen on RHEL6 platform.

DESCRIPTION:
When final_iput() is called on a VxFS inode, it decreases the incore inode count (nr-inodes) on RHEL6 through generic_delete_inode(). VxFS never keeps inodes on any global OS list, so it never increments this incore inode counter. To compensate, VxFS used to adjust this counter during inode inactivation, but this adjustment was removed during an enhancement. Now VxFS performs the nr-inodes adjustment only during unmount of the FS and during unload of the FS module. This results in negative values of nr-inodes on a live machine where the FS is mounted.

RESOLUTION:
Code is modified to increase nr-inodes during inode inactivation.

* 3990264 (Tracking ID: 3975533)

SYMPTOM:
Kernel panic in vx_felrcy_space_reserve

DESCRIPTION:
After a vxupgrade from DLV 13 to DLV 14, if the same file system is unmounted from the primary node, a kernel panic might occur in vx_felrcy_space_reserve on a secondary node.

RESOLUTION:
Added code to fix this issue.

* 3990358 (Tracking ID: 3981627)

SYMPTOM:
VxFS module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that prevented the VxFS module from loading on it.

RESOLUTION:
Added code to support VxFS on RHEL7.7.

* 3996290 (Tracking ID: 3902600)

SYMPTOM:
Contention observed on vx_worklist_lk lock in cluster mounted file 
system with ODM

DESCRIPTION:
In CFS environment for ODM async i/o reads, iodones are done 
immediately,  calling into ODM itself from the interrupt handler. But all 
CFS writes are currently processed in delayed fashion, where the requests
are queued and processed later by the worker thread. This was adding delays
in ODM writes.

RESOLUTION:
Optimized the IO processing of ODM work items on CFS so that those
are processed in the same context if possible.

* 3996307 (Tracking ID: 3990830)

SYMPTOM:
File system detected inconsistency with link count table and FSCK flag gets set on the file system with following messages in the syslog

kernel: vxfs: msgcnt 259 mesg 036: V-2-36: vx_lctbad - /dev/vx/dsk/<dg>/<vol> file system link count table 0 bad
kernel: vxfs: msgcnt 473 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/<dg>/<vol> file system fullfsck flag set - vx_lctbad

DESCRIPTION:
Full FSCK flag is getting set because of inconsistency with Link count table. Inconsistency is caused because of race condition when files are being removed and created in parallel. This leads to incorrect LCT updates.

RESOLUTION:
Fixed the race condition between the file removal thread and creation thread.

* 3996563 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system freeze during 
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
The vxupgrade requires the file system to be frozen during its functional 
operation. It may happen that the corruption can be detected while the freeze 
is in progress and the full fsck flag can be set on the file system. However, 
this doesn't stop the vxupgrade from proceeding.
At later stage of vxupgrade, after structures related to the new disk layout 
are updated on the disk, vxfs frees up and zeroes out some of the old metadata 
inodes. If any error occurs after this point (because of full fsck being set), 
the file system needs to go back completely to the previous version at the time 
of full fsck. Since the metadata corresponding to the previous version is 
already cleared, the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file 
system during vxupgrade. Also, disable the file system if an error occurs after 
writing new metadata on the disk. This will force the newly written metadata to 
be loaded in memory on the next mount.

Patch ID: VRTSvxfs-7.4.0.1400

* 3958854 (Tracking ID: 3958853)

SYMPTOM:
VxFS module failed to load on RHEL7.6.

DESCRIPTION:
RHEL7.6 is a new release; therefore, the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for RHEL7.6.

* 3959065 (Tracking ID: 3957285)

SYMPTOM:
job promote operation executed on replication target node fails with error message like:

# /opt/VRTS/bin/vfradmin job promote myjob1 /mnt2
UX:vxfs vfradmin: INFO: V-3-28111: Current replication direction:
<machine1>:/mnt1 -> <machine2>:/mnt2
UX:vxfs vfradmin: INFO: V-3-28112: If you continue this command, replication direction will change to:
<machine2>:/mnt2 -> <machine1>:/mnt1
UX:vxfs vfradmin: QUESTION: V-3-28101: Do you want to continue? [ynq]y
UX:vxfs vfradmin: INFO: V-3-28090: Performing final sync for job myjob1 before promoting...
UX:vxfs vfradmin: INFO: V-3-28099: Job promotion failed. If you continue, replication will be stopped and the filesystem will be made available on this host for 
use. To resume replication when <machine1> returns, use the vfradmin job recover command.
UX:vxfs vfradmin: INFO: V-3-28100: Continuing may result in data loss.
UX:vxfs vfradmin: QUESTION: V-3-28101: Do you want to continue? [ynq]y
UX:vxfs vfradmin: INFO: V-3-28227: Unable to unprotect filesystem.

DESCRIPTION:
A job promote from the target node sends a promote-related message to the source node. After this message is processed on the source side, the 'seqno' file is updated/written. The 'seqno' file is created on the target side and is not present on the source side, so the 'seqno' file update returns an error and the promote fails.

RESOLUTION:
'seqno' file write is not required as part of promote message.  Passing SKIP_SEQNO_UPDATE flag in promote message so that seqno file write is skipped on 
source side during promote processing.
Note: job should be stopped on source node before doing promote from target node.

* 3959302 (Tracking ID: 3959299)

SYMPTOM:
When a large number of files are created at once on a system with SELinux enabled, file creation
may take longer compared to a system with SELinux disabled.

DESCRIPTION:
On an SELinux-enabled system, during file creation, SELinux security labels need to be stored as
extended attributes. This requires allocation of an attribute inode and its data extent. The contents of
the extent are read synchronously into the buffer. If this is a newly allocated extent, its contents
are garbage anyway and get overwritten with the attribute data containing the SELinux security
labels. Thus it was found that, for newly allocated attribute extents, the read operation is redundant.

RESOLUTION:
As a fix, for a newly allocated attribute extent, reading the data from that extent is skipped.
However, if the allocated extent gets merged with a previously allocated extent, the extent returned
by the allocator could be a combined extent. In such cases, a read of the entire extent is allowed to ensure
that previously written data is correctly loaded in-core.

* 3959306 (Tracking ID: 3959305)

SYMPTOM:
When a large number of files with named attributes are being created/written to/deleted in a loop,
along with other operations on an SELinux-enabled system, some files may end up without
security attributes. This may lead to access being denied to such files later.

DESCRIPTION:
On an SELinux-enabled system, security initialisation happens during file creation and
security attributes are stored. However, when there are parallel create/write/delete operations
on multiple files, or on the same files multiple times, which have named attributes, a race condition
makes it possible for security attribute initialisation to be skipped for some files. Since these
files don't have security attributes set, the SELinux security module later prevents access
to such files for other operations. These operations fail with an access denied error.

RESOLUTION:
If this is a file creation context, then while writing named attributes, also attempt security
initialisation of the file by explicitly calling the security initialisation routine. This is an additional provision
(in addition to the security initialisation during the default file create code) to ensure that security initialisation
always happens (notwithstanding race conditions) in the named attribute write codepath.

* 3959996 (Tracking ID: 3938256)

SYMPTOM:
When checking file size through seek_hole, it will return incorrect offset/size when 
delayed allocation is enabled on the file.

DESCRIPTION:
From recent RHEL7 versions onwards, the grep command uses the seek_hole feature to check the
current file size and then reads data based on that size. In VxFS, when dalloc is enabled, the extent is
allocated to the file later, but the file size is incremented as soon as the write completes. When
checking the file size through seek_hole, VxFS did not completely consider the dalloc case and
returned a stale size, based on the extents allocated to the file, instead of the actual file size, which
resulted in reading less data than expected.

RESOLUTION:
Code is modified so that VxFS now returns the correct size when dalloc is enabled on a file and
seek_hole is called on that file.
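
The probing behaviour described above can be reproduced with a small, generic userspace program (not VxFS-specific): it finds the first hole with SEEK_HOLE and compares it with the size reported by fstat(), which is exactly the comparison a hole-aware reader such as grep relies on.

/* Probe a file with SEEK_HOLE and compare against the stat size.
 * On a file system that reports a stale hole offset (as in this incident),
 * the hole can appear before st_size, causing readers to stop early. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    off_t hole = lseek(fd, 0, SEEK_HOLE);   /* offset of the first hole */
    if (hole < 0) { perror("lseek(SEEK_HOLE)"); return 1; }

    printf("st_size = %lld, first hole at %lld\n",
           (long long)st.st_size, (long long)hole);
    if (hole < st.st_size)
        printf("data beyond %lld would be skipped by hole-aware readers\n",
               (long long)hole);

    close(fd);
    return 0;
}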

* 3964083 (Tracking ID: 3947421)

SYMPTOM:
DLV upgrade operation fails while upgrading Filesystem from DLV 9 to DLV 10 with
following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/metadg/metavol - Invalid argument

DESCRIPTION:
This happens if the file system was created with DLV 5 or earlier and was later successfully
upgraded from 5 to 6, 6 to 7, 7 to 8, and 8 to 9. The newly
written code tries to find the "mkfs" record in the history log. There was no
concept of logging the mkfs operation in the history log for DLV 5 or earlier, so
the upgrade operation fails while upgrading from DLV 9 to 10.

RESOLUTION:
Code changes have been done to complete upgrade operation even in case mkfs
logging is not found.

* 3966524 (Tracking ID: 3966287)

SYMPTOM:
During the multiple mounts of Shared CFS mount points, more than 100MB is consumed per mount

DESCRIPTION:
During GLM initialization, memory usage for the message queues is scaled. The original design provisioned memory based on the maximum inode number, which does not scale well
for multiple CFS mount points: memory rises linearly with the number of mount points while the maximum inode number stays constant. Moreover, the current memory usage is about
1/4 N (where N is the maximum number of inodes), which translates to about 130MB for a standard Linux mount and is an aggressive strategy.
An additional parameter is now provided to scale this usage per local scope (shared mount point),
which makes it easy to reduce the memory footprint.

RESOLUTION:
A kernel module parameter has been introduced to scale this memory usage. To use it, the parameter needs to be included in the module options. For example, to reduce the usage from about 100MB per file system to about 100KB, provide a scope factor of 1000 (a recommended value when there are more than 100 CFS mounts):

options vxfs vxfs_max_scopes_possible=1000

* 3966896 (Tracking ID: 3957092)

SYMPTOM:
System panic with spin_lock_irqsave thru splunkd in rddirahead path.

DESCRIPTION:
Per the current analysis, the spinlock appears to be getting re-initialized somehow in the rddirahead path, which causes this deadlock.

RESOLUTION:
Code changes done accordingly to avoid this situation.

* 3966920 (Tracking ID: 3925281)

SYMPTOM:
Hexdump the incore inode data and piggyback data when inode revalidation fails.

DESCRIPTION:
While assuming inode ownership, if inode revalidation fails with piggyback data, the piggyback and
incore inode data were not hexdumped. This loses the current state of the inode. An inode revalidation
failure message has been added, along with a hexdump of the incore inode data and the piggyback data.

RESOLUTION:
Code is modified to print hexdump of incore inode and piggyback data when 
revalidation of inode fails.

* 3966973 (Tracking ID: 3952837)

SYMPTOM:
If a VxFS file system is to be mounted during boot-up in a systemd environment and its
dependency service autofs.service is not up, the service trying to mount the VxFS file system
enters the failed state.

DESCRIPTION:
vxfs.service has a dependency on autofs.service. If autofs starts after vxfs.service
and another service tries to mount a VxFS file system, the mount fails and the service enters the failed state.

RESOLUTION:
The dependency of vxfs.service on autofs.service and systemd-remountfs.service has been
removed to solve the issue.

* 3967002 (Tracking ID: 3955766)

SYMPTOM:
CFS hangs during extent allocation; a thread like the following loops forever doing extent allocation:

#0 [ffff883fe490fb30] schedule at ffffffff81552d9a
#1 [ffff883fe490fc18] schedule_timeout at ffffffff81553db2
#2 [ffff883fe490fcc8] vx_delay at ffffffffa054e4ee [vxfs]
#3 [ffff883fe490fcd8] vx_searchau at ffffffffa036efc6 [vxfs]
#4 [ffff883fe490fdf8] vx_extentalloc_device at ffffffffa036f945 [vxfs]
#5 [ffff883fe490fea8] vx_extentalloc_device_proxy at ffffffffa054c68f [vxfs]
#6 [ffff883fe490fec8] vx_worklist_process_high_pri_locked at ffffffffa054b0ef [vxfs]
#7 [ffff883fe490fee8] vx_worklist_dedithread at ffffffffa0551b9e [vxfs]
#8 [ffff883fe490ff28] vx_kthread_init at ffffffffa055105d [vxfs]
#9 [ffff883fe490ff48] kernel_thread at ffffffff8155f7d0

DESCRIPTION:
In the current code of emtran_process_commit(), it is possible that the EAU summary got updated without delegation of the corresponding EAU, because we clear the VX_AU_SMAPFREE flag before updating EAU summary, which could lead to possible hang. Also, some improper error handling in case of bad map can also cause some hang situations.

RESOLUTION:
To avoid potential hang, modify the code to clear the VX_AU_SMAPFREE flag after updating the EAU summary, and improve some error handling in emtran_commit/undo.

* 3967004 (Tracking ID: 3958688)

SYMPTOM:
System panics when VxFS is force unmounted; the panic stack trace can look like the following:

#8 [ffff88622a497c10] do_page_fault at ffffffff81691fc5
#9 [ffff88622a497c40] page_fault at ffffffff8168e288
    [exception RIP: vx_nfsd_encode_fh_v2+89]
    RIP: ffffffffa0c505a9  RSP: ffff88622a497cf8  RFLAGS: 00010202
    RAX: 0000000000000002  RBX: ffff883e5c731558  RCX: 0000000000000000
    RDX: 0000000000000010  RSI: 0000000000000000  RDI: ffff883e5c731558
    RBP: ffff88622a497d48   R8: 0000000000000010   R9: 000000000000fffe
    R10: 0000000000000000  R11: 000000000000000f  R12: ffff88622a497d6c
    R13: 00000000000203d6  R14: ffff88622a497d78  R15: ffff885ffd60ec00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#10 [ffff88622a497d50] exportfs_encode_inode_fh at ffffffff81285cb0
#11 [ffff88622a497d60] show_mark_fhandle at ffffffff81243ed4
#12 [ffff88622a497de0] inotify_fdinfo at ffffffff8124411d
#13 [ffff88622a497e18] inotify_show_fdinfo at ffffffff812441b0
#14 [ffff88622a497e50] seq_show at ffffffff81273ec7
#15 [ffff88622a497e90] seq_read at ffffffff8122253a
#16 [ffff88622a497f00] vfs_read at ffffffff811fe0ee
#17 [ffff88622a497f38] sys_read at ffffffff811fecbf
#18 [ffff88622a497f80] system_call_fastpath at ffffffff816967c9

DESCRIPTION:
There is no error handling for the situation that file system gets disabled/unmounted in nfsd_encode code path, which could lead to panic.

RESOLUTION:
Added error handling in vx_nfsd_encode_fh_v2() to avoid a panic in case the file system gets unmounted/disabled.

* 3967006 (Tracking ID: 3934175)

SYMPTOM:
4-node FSS CFS experienced IO hung on all nodes.

DESCRIPTION:
I/O requests are processed in LIFO order when they are handed off to a worker thread in case of
low stack space. This is not expected; they should be processed in FIFO order.

RESOLUTION:
Modified the code to pick up the older work items from tail of the queue.
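
A minimal sketch of the queueing idea described above: if handed-off items are pushed at the head of a list, draining from the head yields LIFO order, while picking work from the tail restores FIFO order. The list layout and names are illustrative only, not the VxFS work-queue implementation.

/* Toy work queue: items are pushed at the head; popping from the tail
 * processes the oldest item first (FIFO), which is the desired behaviour. */
#include <stdio.h>
#include <stdlib.h>

struct work {
    int id;
    struct work *next;
};

static struct work *head;               /* newest item */

static void push(int id)
{
    struct work *w = malloc(sizeof(*w));
    if (!w)
        return;
    w->id = id;
    w->next = head;
    head = w;
}

/* Detach and return the oldest item (tail of the list). */
static struct work *pop_fifo(void)
{
    struct work **pp = &head;
    if (!*pp)
        return NULL;
    while ((*pp)->next)
        pp = &(*pp)->next;
    struct work *oldest = *pp;
    *pp = NULL;
    return oldest;
}

int main(void)
{
    for (int i = 1; i <= 3; i++)
        push(i);
    struct work *w;
    while ((w = pop_fifo()) != NULL) {   /* prints 1, 2, 3 */
        printf("processing work %d\n", w->id);
        free(w);
    }
    return 0;
}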

* 3967030 (Tracking ID: 3947433)

SYMPTOM:
While adding a volume (part of vset) in already mounted filesystem, fsvoladm
displays following error:
UX:vxfs fsvoladm: ERROR: V-3-28487: Could not find the volume <volume name> in vset

DESCRIPTION:
The code to find the volume in the vset requires the file descriptor of character
special device but in the concerned code path, the file descriptor that is being
passed is of block device.

RESOLUTION:
Code changes have been done to pass the file descriptor of character special device.
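
The underlying distinction can be illustrated with a generic check (this is not the fsvoladm source): code that needs the character special device can verify what kind of device node it was given before using it.

/* Report whether a path refers to a character or block special device. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <device path>\n", argv[0]);
        return 1;
    }
    struct stat st;
    if (stat(argv[1], &st) < 0) { perror("stat"); return 1; }

    if (S_ISCHR(st.st_mode))
        printf("%s is a character special device\n", argv[1]);
    else if (S_ISBLK(st.st_mode))
        printf("%s is a block special device\n", argv[1]);
    else
        printf("%s is neither a character nor a block device\n", argv[1]);
    return 0;
}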

* 3967032 (Tracking ID: 3947648)

SYMPTOM:
Due to the wrong auto tuning of vxfs_ninode/inode cache, there could be hang 
observed due to lot of memory pressure.

DESCRIPTION:
If kernel heap memory is very large (particularly observed on Solaris T7 servers), there can be an
overflow due to a smaller-sized data type.

RESOLUTION:
Changed the code to handle overflow.

* 3967089 (Tracking ID: 3908785)

SYMPTOM:
System panic observed because of null page address in writeback structure in case of 
kswapd 
process.

DESCRIPTION:
The secfs2/encryptfs layers used the write VOP as a hook when kswapd is triggered to free a page.
Ideally, kswapd should call the writepage() routine, where the writeback structures are correctly filled.
When the write VOP is called because of the hook in secfs2/encryptfs, the writeback structures are
cleared, resulting in a null page address.

RESOLUTION:
Code changes have been made to call the VxFS kswapd routine only if a valid page address is present.

Patch ID: VRTSvxfs-7.4.0.1200

* 3949500 (Tracking ID: 3949308)

SYMPTOM:
In a scenario where FEL caching is enabled, application I/O on a file does not
proceed when file system utilization is 100%.

DESCRIPTION:
When file system capacity is utilized 100%, application I/O on a file does not
proceed. This issue occurs because the ENOSPC error handling code path tries to
take the Inode ownership lock which is already held by the current thread. As a
result, any operation on that file hangs.

RESOLUTION:
This fix releases the Inode ownership lock and reclaims it after ENOSPC error
handling is complete.
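
The self-deadlock and its fix can be pictured with a small userspace pthreads sketch. The names (iown_lock, handle_enospc) are hypothetical and this is not the VxFS implementation; it only shows dropping an already-held lock before running handling code that takes it again, then reclaiming it.

/* Illustrates releasing an already-held lock before running an error
 * handler that also needs it, then reacquiring it afterwards. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t iown_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_enospc(void)
{
    /* The handler takes the ownership lock itself. */
    pthread_mutex_lock(&iown_lock);
    printf("ENOSPC handling done\n");
    pthread_mutex_unlock(&iown_lock);
}

static void write_path(int simulate_enospc)
{
    pthread_mutex_lock(&iown_lock);        /* caller owns the lock */

    if (simulate_enospc) {
        /* Fix: drop the lock before the handler and reclaim it after,
         * instead of calling the handler while still holding it. */
        pthread_mutex_unlock(&iown_lock);
        handle_enospc();
        pthread_mutex_lock(&iown_lock);
    }

    printf("write path continues\n");
    pthread_mutex_unlock(&iown_lock);
}

int main(void)
{
    write_path(1);
    return 0;
}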

* 3949506 (Tracking ID: 3949501)

SYMPTOM:
The sfcache offline command hangs.

DESCRIPTION:
During Inode inactivation, Inode with FEL dirty flag set gets included in the
cluster-wide inactive list instead of the local inactive list. This issue occurs
due to an internal error.
As a result, the FEL dirty flag does not get cleared and the "sfcache offline"
command hangs.

RESOLUTION:
This fix now includes the Inodes with FEL dirty flag in a local inactive list.

* 3949507 (Tracking ID: 3949502)

SYMPTOM:
When SmartIO FEL-based writeback caching is enabled, memory leak of few bytes
may happen during node reconfiguration.

DESCRIPTION:
FEL recovery is initiated during node reconfiguration during which some internal
data structure remains held. Due to this extra hold, the data structures are not
freed, which leads to small memory leak.

RESOLUTION:
This fix now ensures that the hold on the data structure is handled correctly.

* 3949508 (Tracking ID: 3949503)

SYMPTOM:
When FEL is enabled in CFS environment, after a node crash, stale data may be
seen on a file.

DESCRIPTION:
While revoking RWlock after node recovery, the file system ensures that the FEL
writes are flushed to the disks and a write completion record is written in FEL
log. 
In scenarios where a node crashes and the write completion record is not
updated, FEL writes get replayed during FEL recovery. This may overwrite the
writes that may have happened after the revoke, on some other cluster node,
resulting in data corruption.

RESOLUTION:
This fix now writes the completion record after FEL write flush.
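
The ordering requirement described above (cached data must be durable before the completion record is written) is the same idea as this generic sketch; the file names and record format are placeholders, not the FEL on-disk layout.

/* Generic durability ordering: flush the data first, then record completion. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char data[] = "cached write payload";
    const char record[] = "COMMIT\n";

    int dfd = open("data.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int lfd = open("fel.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (dfd < 0 || lfd < 0) { perror("open"); return 1; }

    if (write(dfd, data, sizeof(data) - 1) < 0) { perror("write data"); return 1; }
    if (fsync(dfd) < 0) { perror("fsync data"); return 1; }   /* data durable first */

    /* Only now is it safe to write the completion record; if the node
     * crashes before this point, recovery replays the cached write. */
    if (write(lfd, record, sizeof(record) - 1) < 0) { perror("write record"); return 1; }
    if (fsync(lfd) < 0) { perror("fsync record"); return 1; }

    close(dfd);
    close(lfd);
    return 0;
}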

* 3949509 (Tracking ID: 3949504)

SYMPTOM:
When SmartIO FEL-based writeback caching is enabled, memory leak of few bytes
may happen during node reconfiguration.

DESCRIPTION:
FEL recovery is initiated during node reconfiguration during which some internal
data structure remains held. Due to this extra hold, the data structures are not
freed that leads to small memory leak.

RESOLUTION:
This fix now ensures that the hold on the data structure is handled correctly.

* 3949510 (Tracking ID: 3949505)

SYMPTOM:
When SmartIO FEL-based writeback caching is enabled, I/O operations on a file in
filesystem can result in panic after cluster reconfiguration.

DESCRIPTION:
In an FEL environment, the file system may get mounted after a node recovery,
but the FEL device may remain offline. In such a scenario, the FEL related data
structures remain inaccessible and the node panics during an application I/O
that attempts to access the FEL related data structures.

RESOLUTION:
This fix checks whether the FEL device recovery has completed before accessing
the FEL related data structures.

* 3950740 (Tracking ID: 3953165)

SYMPTOM:
Customer may get generic message or warning in syslog with string as "vxfs:0000:<msg>" instead of uniquely 
numbered message id for VxFS module.

DESCRIPTION:
A few syslog messages introduced in the InfoScale 7.4 release were not given a unique message number
to identify the correct places in the product where they originated. Instead, they were marked with the
common message identification number "0000".

RESOLUTION:
This patch fixes syslog messages generated by VxFS module, containing "0000" as the message string and 
provides them with a unique numbering.

* 3952340 (Tracking ID: 3953148)

SYMPTOM:
Extra extent delegation messages exchanges may be noticed between delegation master and delegation 
client.

DESCRIPTION:
If a node is not using delayed allocation or SmartIO based write back cache (Front end Log) and if an extent 
allocation unit is being revoked while non-dalloc/non-FEL extent allocation is in progress then node may have 
delegation deficit. This is not correct as node is not using delayed allocation/FEL.

RESOLUTION:
Fix is to ignore delegation deficit if deficit is on account of non-delayed allocation/non-FEL allocation.

Patch ID: VRTSvxvm-7.4.0.1700

* 3996074 (Tracking ID: 3947178)

SYMPTOM:
System panic occurs while removing VxVM(Veritas Volume Manager) package with the following stack:

voliod_qsio+0xfb/0x160 [vxio]
vol_sample_timeout+0xbf/0xd0 [vxio]
__voluntimeout+0x144/0x190 [vxio]
scheduler_tick+0xd2/0x270
voltimercallback+0x13/0x20 [vxio]
run_timer_softirq+0x199/0x350
update_process_times+0x76/0x90
__do_softirq+0xea/0x240
call_softirq+0x1c/0x30
do_softirq+0x65/0xa0

DESCRIPTION:
The vxio kernel module is unloaded while the VxVM package is uninstalled. During unloading, the vxio I/O daemons are stopped before the sampling I/Os are stopped, so a sampling I/O can still be alive and call into the already-killed I/O daemons to finish its job, hence the panic.

RESOLUTION:
Code change has been made to fix the issue.

* 3996078 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.

* 3996090 (Tracking ID: 3983159)

SYMPTOM:
System panic encountered while doing IO in FSS (Flexible Storage Sharing) Environment with following stack:
[exception RIP: vol_get_ioscb+105]
#7 vol_disk_rhandle_read_resp [vxio]
#8 vol_ioship_rrecv [vxio]
#9 gab_lrrecv [gab]
#10 llt_multiport_rrecv_func [llt]
#11 llt_deliver_msg_direct [llt]
#12 llt_rdma_lrsrv_port [llt]
#13 llt_process_evt_nr_channel [llt]
#14 llt_process_one [llt]
#15 llt_recvdata [llt]
#16 llt_open_recv [llt]
#17 llt_msg_recv1 [llt]
#18 llt_process_intrq [llt]
#19 llt_process_events [llt]
#20 llt_rdlv_thread [llt]
...

DESCRIPTION:
A kernel panic is observed in vol_get_ioscb while accessing the response ioscb. During reconfiguration, ioship requests in the pending queue are purged. After a request is purged, its response may still be received, and accessing the purged pending request causes the panic.

RESOLUTION:
Changes are done in VxVM code to check if ioscb corresponding to the response is already purged and avoid panic.

* 3996662 (Tracking ID: 3987937)

SYMPTOM:
VxVM commands hang when a heavy I/O load is run on a VxVM volume with a snapshot; an I/O memory pool full condition is also observed.

DESCRIPTION:
It is a deadlock situation occurring with heavy I/Os on a volume with snapshots. A multistep SIO A has acquired the ilock and its child MV write SIO is waiting for a memory pool which is full, while another multistep SIO B has acquired memory and is waiting for the ilock held by multistep SIO A.

RESOLUTION:
Code changes have been made to fix the issue.

* 3998388 (Tracking ID: 3991580)

SYMPTOM:
I/O and VxVM commands may hang if I/O is performed on both the source and snapshot volumes.

DESCRIPTION:
It's a deadlock situation occurring with heavy IOs on both source volume and snapshot volume. 
SIO (a), USER_WRITE, on snap volume, held ILOCK (a), waiting for memory(full).
SIO (b),  PUSHED_WRITE, on snap volume, waiting for ILOCK (a).
SIO (c),  parent of SIO (b), USER_WRITE, on the source volume, held ILOCK (c) and memory, waiting for SIO (b) done.

RESOLUTION:
A separate memory pool is now used for I/O writes on the snapshot volume to resolve the issue.

* 3998688 (Tracking ID: 3980678)

SYMPTOM:
The earlier module failed to load on RHEL 7.7.

DESCRIPTION:
RHEL 7.7 is a new release, and hence the VxVM module is compiled with the RHEL 7.7 kernel.

RESOLUTION:
Compiled VxVM with the RHEL 7.7 kernel bits.

Patch ID: VRTSvxvm-7.4.0.1600

* 3951744 (Tracking ID: 3951938)

SYMPTOM:
Retpoline support for ASLAPM on RHEL6.10 and RHEL6.x retpoline kernels.

DESCRIPTION:
RHEL6.10 is a new release with a retpoline kernel. Red Hat has also released retpoline kernels for
older RHEL6.x releases. The APM module needs to be recompiled with a retpoline-aware GCC to support
retpoline kernels.

RESOLUTION:
Compiled APM with retpoline GCC.

* 3974224 (Tracking ID: 3973364)

SYMPTOM:
In case of the VVR (Veritas Volume Replicator) synchronous mode of replication with the TCP protocol, if there are any network issues,
I/Os may hang for up to 15-20 minutes.

DESCRIPTION:
In VVR synchronous replication mode, if a node on primary site is unable to receive ACK (acknowledgement) message sent from the secondary
within the TCP timeout period, then IO may get hung till the TCP layer detects a timeout, which is ~ 15-20 minutes.
This issue may frequently happen in a lossy network where the ACKs could not be delivered to primary due to some network issues.

RESOLUTION:
A hidden tunable 'vol_vvr_tcp_keepalive' is added to allow users to enable TCP 'keepalive' for VVR data ports if the TCP timeout happens frequently.
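
For context, TCP keepalive is the standard socket-level mechanism that such a tunable turns on for the data connections. The generic Linux calls involved are shown below; the timing values are arbitrary examples and this is not VVR code.

/* Enable TCP keepalive on a socket with example probe timings. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int on = 1, idle = 60, intvl = 10, cnt = 5;

    /* Turn keepalive on, then tune when probes start, how often they are
     * sent, and how many failures mark the peer dead. */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0) {
        perror("setsockopt");
        return 1;
    }

    printf("keepalive enabled: idle=%ds interval=%ds probes=%d\n",
           idle, intvl, cnt);
    close(fd);
    return 0;
}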

* 3974225 (Tracking ID: 3968279)

SYMPTOM:
Vxconfigd dumps core with SEGFAULT/SIGABRT on boot for NVME setup.

DESCRIPTION:
For NVME setup, vxconfigd dumps core while doing device discovery as the data structure is accessed by multiple threads and can hit a race condition. For sector size other than 512, the partition size mismatch is seen because we are doing comparison with partition size from devintf_getpart() and it is in sector size of the disk. This can lead to call of NVME device discovery.

RESOLUTION:
Added mutex lock while accessing the data structure so as to prevent core. Made calculations in terms of sector size of the disk to prevent the partition size mismatch.

* 3974226 (Tracking ID: 3948140)

SYMPTOM:
System may panic if RTPG data returned by the array is greater than 255 with
below stack:

dmp_alua_get_owner_state()
dmp_alua_get_path_state()
dmp_get_path_state()
dmp_check_path_state()
dmp_restore_callback()
dmp_process_scsireq()
dmp_daemons_loop()

DESCRIPTION:
The size of the buffer given to RTPG SCSI command is currently 255 bytes. But the
size of data returned by underlying array for RTPG can be greater than 255
bytes. As a result
incomplete data is retrieved (only the first 255 bytes) and when trying to read
the RTPG data, it causes invalid access of memory resulting in error while
claiming the devices. This invalid access of memory may lead to system panic.

RESOLUTION:
The RTPG buffer size has been increased to 1024 bytes for handling this.
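
A generic sketch of the truncation problem follows. It assumes the RTPG response begins with a 4-byte big-endian return-data-length field, so a caller can tell when the supplied buffer was too small; the buffer size of 1024 bytes mirrors the fix, and the code is an illustration rather than the DMP source.

/* Detect a truncated REPORT TARGET PORT GROUPS response by comparing the
 * length header with the space available in the supplied buffer.
 * Shown with a fake, locally filled buffer instead of a real SCSI reply. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RTPG_BUF_LEN 1024   /* enlarged buffer, as in the fix */

static uint32_t rtpg_data_len(const unsigned char *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}

int main(void)
{
    unsigned char buf[RTPG_BUF_LEN];
    memset(buf, 0, sizeof(buf));

    /* Pretend the array reported 600 bytes of descriptor data. */
    uint32_t reported = 600;
    buf[0] = reported >> 24; buf[1] = reported >> 16;
    buf[2] = reported >> 8;  buf[3] = reported;

    uint32_t avail = sizeof(buf) - 4;
    if (rtpg_data_len(buf) > avail)
        printf("response truncated: need %u bytes, have %u\n",
               rtpg_data_len(buf), avail);
    else
        printf("full response fits: %u bytes of %u available\n",
               rtpg_data_len(buf), avail);
    return 0;
}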

* 3974227 (Tracking ID: 3934700)

SYMPTOM:
VxVM is not able to recognize AIX LVM disk with 4k sector.

DESCRIPTION:
With a 4k sector disk, the AIX LVM ID is located at a different offset (4096 rather than 3584 bytes)
in the disk header. VxVM tries to read the LVM ID at the original offset, so it is not able to
recognize it.

RESOLUTION:
Code has been changed to be compatible with the new offset of 4k sector disks.

* 3974228 (Tracking ID: 3969860)

SYMPTOM:
Event source daemon (vxesd) takes a lot of time to start when lot of LUNS (around 1700) are attached to the system.

DESCRIPTION:
Event source daemon creates a configuration file ddlconfig.info with the help of HBA API libraries. The configuration file is created by child process while the parent process is waiting for child to create the configuration file. If the number of LUNS are large then time taken for creation of configuration is also more. Thus the parent process keeps on waiting for the child process to complete the configuration and exit.

RESOLUTION:
Changes have been done to create the ddlconfig.info file in the background and let the parent exit immediately.
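
The change described is essentially moving the slow work into a detached child process so the parent can continue immediately. A generic double-fork sketch of that pattern is shown below; the file name and the simulated work are placeholders, not the vxesd code.

/* Build a slow "configuration file" in a detached child so the parent
 * can continue right away. The double fork avoids leaving a zombie. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void build_config_file(void)
{
    sleep(5);                                   /* stand-in for slow discovery */
    FILE *f = fopen("/tmp/ddlconfig.example", "w");
    if (f) {
        fprintf(f, "discovery complete\n");
        fclose(f);
    }
}

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                             /* first child */
        if (fork() == 0) {                      /* grandchild does the work */
            build_config_file();
            _exit(0);
        }
        _exit(0);                               /* first child exits at once */
    }

    waitpid(pid, NULL, 0);                      /* reap the short-lived child */
    printf("parent continues without waiting for the config file\n");
    return 0;
}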

* 3974230 (Tracking ID: 3915523)

SYMPTOM:
Local disk from other node belonging to private DG is exported to the node when
a private DG is imported on current node.

DESCRIPTION:
When a DG is imported, all the disks belonging to the DG are automatically exported to the current
node to make sure that the DG gets imported. This is done to provide the same behaviour as a SAN with
local disks. Since all disks in the DG are exported, disks that belong to a DG with the same name but
a different private DG on another node also get exported to the current node. This leads to the wrong
disk getting selected while the DG gets imported.

RESOLUTION:
Instead of DG name, DGID (diskgroup ID) is used to decide whether disk needs to
be exported or not.

* 3974231 (Tracking ID: 3955979)

SYMPTOM:
In case of synchronous mode of replication with TCP, if there are any network-related issues,
I/Os may hang for up to 15-30 minutes.

DESCRIPTION:
When synchronous replication is used, and the secondary is unable to send network ACKs to the primary
because of some network issue, I/O hangs on the primary waiting for these ACKs. In TCP mode we depend
on TCP to time out before the I/Os are drained out; since there is no handling on the VVR side,
I/Os hang until TCP triggers its timeout, which normally happens within 15-30 minutes.

RESOLUTION:
Code changes have been made to allow the user to set the time within which the TCP timeout should be
triggered.

* 3975899 (Tracking ID: 3931048)

SYMPTOM:
Few VxVM log files listed below are created with write permission to all users
which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions to all users, which is a
security hole. 
The files are created with default rw-rw-rw- (666) permission because the umask
is set to 0 while creating these files.

RESOLUTION:
Changed umask to 022 while creating these files and fixed an incorrect open
system call. Log files will now have rw-r--r--(644) permissions.
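
A small illustration of the umask behaviour behind this fix: with umask 0, a file created with mode 0666 stays world-writable, while umask 022 yields 0644, matching the rw-r--r-- permissions the log files now receive. The paths are examples only.

/* Show how the process umask shapes the permissions of a newly created file. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void create_with_umask(const char *path, mode_t mask)
{
    umask(mask);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) { perror("open"); return; }
    close(fd);

    struct stat st;
    if (stat(path, &st) == 0)
        printf("%s created with mode %04o (umask %04o)\n",
               path, (unsigned)(st.st_mode & 07777), (unsigned)mask);
}

int main(void)
{
    create_with_umask("/tmp/log_umask0.txt", 0);     /* 0666: world-writable */
    create_with_umask("/tmp/log_umask022.txt", 022); /* 0644: rw-r--r--      */
    return 0;
}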

* 3978013 (Tracking ID: 3907800)

SYMPTOM:
VxVM package installation will fail on SLES12 SP2.

DESCRIPTION:
Since SLES12 SP2 has a lot of kernel changes, the package installation fails. Code changes have been
added to provide SLES12 SP2 platform support for VxVM.

RESOLUTION:
Added code changes to provide SLES12 SP2 platform support.

* 3979382 (Tracking ID: 3950373)

SYMPTOM:
In CVR environment, master node acts as default logowner and slave node cannot be assigned logowner role.

DESCRIPTION:
In a CVR environment, the master node acts as the default logowner. Under a heavy I/O load, the master node becomes a bottleneck, impacting overall cluster and replication performance. When scaling CVR configurations with multiple nodes and multiple RVGs, the master node acts as the logowner for all RVGs, which further degrades performance and restricts scaling.
In an FSS-CVR environment, a logowner node that has only remote connectivity to the SRL/data volumes adds I/O shipping overhead.

RESOLUTION:
Changes have been made to enable configuring any slave node in the cluster as the logowner on a per-RVG basis in a CVR environment.

* 3979385 (Tracking ID: 3953711)

SYMPTOM:
The system might panic while switching the logowner to a slave node while I/Os are in progress, with the following stack:

vol_rv_service_message_free()
vol_rv_replica_reconfigure()
sched_clock_cpu()
vol_rv_error_handle()
vol_rv_errorhandler_callback()
vol_klog_start()
voliod_iohandle()
voliod_loop()
voliod_kiohandle()
kthread()
insert_kthread_work()
ret_from_fork_nospec_begin()
insert_kthread_work()
vol_rv_service_message_free()

DESCRIPTION:
While processing a transaction, the I/O count on the RV object is left in place to let the transaction proceed, and the RV object pointer in the SIO is set to NULL. While freeing the message, however, the object is dereferenced without checking whether it is NULL, which can lead to a NULL-pointer dereference and a panic.

RESOLUTION:
Code changes have been made to handle a NULL value of the RV object.

* 3980907 (Tracking ID: 3978343)

SYMPTOM:
A logowner change in CVR could lead to a hang or panic due to access to freed memory.

DESCRIPTION:
During a logowner change, one of the error cases was not handled properly. I/O initiated
from a slave node must be retried only from that slave node; in this case it was getting
retried from the master node, causing inconsistency.

RESOLUTION:
On error, I/O initiated from a slave node is now retried from the same slave node.

Patch ID: VRTSvxvm-7.4.0.1500

* 3970687 (Tracking ID: 3971046)

SYMPTOM:
Replication does not switch between synchronous and asynchronous mode automatically based on the network conditions.

DESCRIPTION:
Network conditions may impact the replication performance. However, the current VVR replication does not switch between synchronous and asynchronous mode automatically based on the network conditions.

RESOLUTION:
This patch provides the adaptive synchronous mode for VVR, which is an enhancement to the existing synchronous override mode. In the adaptive synchronous mode, the replication mode switches from synchronous to asynchronous based on the cross-site network latency. Thus replication happens in the synchronous mode when the network conditions are good, and it automatically switches to the asynchronous mode when there is an increase in the cross-site network latency. You can also set alerts that notify you when the system undergoes network deterioration. 
For more details, see https://www.veritas.com/bin/support/docRepoServlet?bookId=136858821-137189101-1&requestType=pdf
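
The adaptive mode builds on the existing synchronous override mode, which is configured per RVG through vradmin. In the sketch below, the diskgroup, RVG, and secondary host names are placeholders, and the attribute that enables the adaptive behavior itself is described in the document linked above.

    # vradmin -g <diskgroup> set <local_rvg> <sec_host> synchronous=override
    # vradmin -g <diskgroup> repstatus <local_rvg>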

Patch ID: VRTSvxvm-7.4.0.1400

* 3949320 (Tracking ID: 3947022)

SYMPTOM:
The vxconfigd daemon hangs.

DESCRIPTION:
There was a window between NIOs being added to rp_port->pt_waitq and the rlink being disconnected in which NIOs were left in pt_waitq, so their parent (the ack SIO) was never completed. The ack SIO held an I/O count, which led to the vxconfigd hang.

RESOLUTION:
An NIO is no longer added to rp_port->pt_waitq if rp_port->pt_closing is set; instead, done is called on the NIO with the error ENC_CLOSING. Before the port is deleted, done is called on the NIOs in pt_waitq with the error ENC_CLOSING.

* 3958838 (Tracking ID: 3953681)

SYMPTOM:
A data corruption issue is seen when more than one plex of a volume is detached.

DESCRIPTION:
When a plex of a volume gets detached, the DETACH map gets enabled in the DCO (Data Change Object). Incoming I/Os are tracked in the DRL (Dirty Region Log) and then asynchronously copied to the DETACH map for tracking.
If one more plex gets detached, some of the new incoming regions may be missed in the DETACH map of the previously detached plex.
This leads to corruption when the disk comes back and the plex resync happens using the corrupted DETACH map.

RESOLUTION:
Code changes have been made to correctly track the I/Os in the DETACH map of the previously detached plex and avoid corruption.

* 3958884 (Tracking ID: 3954787)

SYMPTOM:
In an RHEL 7.5 FSS environment with GCO configured, NVMe devices, and an InfiniBand network, data corruption might occur when I/O is sent from the master to the slave node.

DESCRIPTION:
In the RHEL 7.5 release, Linux stopped allowing I/O on an underlying NVMe device when there are gaps between the BIO vectors. In VVR, an SRL header of 3 blocks is added to the BIO. When the BIO is sent through LLT to the other node, the LLT limitation of 32 fragments can leave the BIO vectors unaligned. When this unaligned BIO is sent to the underlying NVMe device, the last 3 blocks of the BIO are skipped and not written to disk on the slave node, which results in incomplete data on the slave node and leads to data corruption.

RESOLUTION:
Code changes have been done to handle this case and send the BIO aligned to the underlying NVMe device.

* 3958887 (Tracking ID: 3953711)

SYMPTOM:
The system might panic while switching the logowner to a slave node while I/Os are in progress, with the following stack:

vol_rv_service_message_free()
vol_rv_replica_reconfigure()
sched_clock_cpu()
vol_rv_error_handle()
vol_rv_errorhandler_callback()
vol_klog_start()
voliod_iohandle()
voliod_loop()
voliod_kiohandle()
kthread()
insert_kthread_work()
ret_from_fork_nospec_begin()
insert_kthread_work()
vol_rv_service_message_free()

DESCRIPTION:
While processing a transaction, the I/O count on the RV object is left in place to let the transaction proceed, and the RV object pointer in the SIO is set to NULL. While freeing the message, however, the object is dereferenced without checking whether it is NULL, which can lead to a NULL-pointer dereference and a panic.

RESOLUTION:
Code changes have been made to handle a NULL value of the RV object.

* 3958976 (Tracking ID: 3955101)

SYMPTOM:
Server might panic in a GCO environment with the following stack:

nmcom_server_main_tcp()
ttwu_do_wakeup()
ttwu_do_activate.constprop.90()
try_to_wake_up()
update_curr()
update_curr()
account_entity_dequeue()
 __schedule()
nmcom_server_proc_tcp()
kthread()
kthread_create_on_node()
ret_from_fork()
kthread_create_on_node()

DESCRIPTION:
Recent code changes handle dynamic port changes, i.e., deletion and addition of ports can now happen dynamically. While a port is being accessed, it may be deleted in the background by another thread, which leads to a panic because the port being accessed has already been deleted.

RESOLUTION:
Code changes have been done to take care of this situation and check if the port is available before accessing it.

* 3959204 (Tracking ID: 3949954)

SYMPTOM:
Dump-stack warning messages are printed when the vxio module is loaded for the first time and blk_register_queue() is called.

DESCRIPTION:
In RHEL 7.5, a new check was added in the kernel code of blk_register_queue(): if QUEUE_FLAG_REGISTERED
is already set on the queue, a dump-stack warning message is printed. In VxVM the flag was already set
because it was copied from the device queue that had earlier been registered by the OS.

RESOLUTION:
Changes have been made in the VxVM code to avoid copying QUEUE_FLAG_REGISTERED and thereby fix the dump-stack warnings.

* 3959433 (Tracking ID: 3956134)

SYMPTOM:
A system panic might occur when I/O is in progress in a VVR (Veritas Volume Replicator) environment, with the following stack:

page_fault()
voliomem_grab_special()
volrv_seclog_wsio_start()
voliod_iohandle()
voliod_loop()
kthread()
ret_from_fork()

DESCRIPTION:
In a memory-crunch scenario, the memory reservation for an SIO (staged I/O) in a VVR configuration might fail. The SIO is then retried later, when memory becomes available again, but while doing so some of the SIO fields are passed NULL values, which leads to a panic in the VVR code.

RESOLUTION:
Code changes have been made to pass proper values to the I/O when it is retried in a VVR environment.

* 3967098 (Tracking ID: 3966239)

SYMPTOM:
An I/O hang is observed while copying data to VxVM (Veritas Volume Manager) volumes in cloud environments.

DESCRIPTION:
In a cloud environment with a mirrored volume that has a DCO (Data Change Object) attached, I/Os issued on the volume have to be processed through the DRL (Dirty Region Log), which is used for faster recovery on reboot. While I/O processing is happening on the DRL, a condition in the code can prevent I/Os from being driven through the DRL, resulting in a hang. Further I/Os keep queuing up behind the I/Os waiting on the DRL, compounding the hang.

RESOLUTION:
Code changes have been made to resolve the condition that prevented I/Os from being driven through the DRL.

* 3967099 (Tracking ID: 3965715)

SYMPTOM:
vxconfigd may core dump when VIOM (Veritas InfoScale Operations Manager) is enabled.
The following is the stack trace:

#0  0x00007ffff7309d22 in ____strtoll_l_internal () from /lib64/libc.so.6
#1  0x000000000059c61f in ddl_set_vom_discovered_attr ()
#2  0x00000000005a4230 in ddl_find_devices_in_system ()
#3  0x0000000000535231 in find_devices_in_system ()
#4  0x0000000000535530 in mode_set ()
#5  0x0000000000477a73 in setup_mode ()
#6  0x0000000000479485 in main ()

DESCRIPTION:
The vxconfigd daemon reads JSON data generated by VIOM to dynamically update some of the VxVM disk (LUN) attributes.
While accessing this data, it wrongly parsed the LUN size attribute as a string, which returned NULL instead of the LUN size.
Accessing this NULL value caused the vxconfigd daemon to core dump with a segmentation fault.

RESOLUTION:
Appropriate changes are done to handle the LUN size attribute correctly.

Patch ID: VRTSvxvm-7.4.0.1200

* 3949322 (Tracking ID: 3944259)

SYMPTOM:
The vradmin verifydata and vradmin ibc commands fail on private diskgroups
with a 'Lost connection' error.

DESCRIPTION:
This issue occurs because of a deadlock between the IBC mechanism and the
ongoing I/Os on the secondary RVG. The IBC mechanism expects I/O to be
transferred to the secondary in sequential order; however, to improve
performance, I/Os are now written in parallel. The mismatch in IBC behavior
causes a deadlock, and the vradmin verifydata and vradmin ibc commands fail
with a timeout error.

RESOLUTION:
As a part of this fix, the IBC behavior has been improved so that it now
accounts for parallel and possibly out-of-sequence I/O writes to the secondary.
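
For reference, verifydata is run from the primary against an RVG. The diskgroup, RVG, secondary host, and cache size below are placeholders; check the vradmin(1M) manual page for the exact options supported on your release.

    # vradmin -g <diskgroup> verifydata <local_rvg> <sec_host> cachesize=500M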

* 3950578 (Tracking ID: 3953241)

SYMPTOM:
Customers may see generic messages or warnings in syslog with the string
"vxvm:0000: <msg>" instead of a uniquely numbered message ID for the VxVM
module.

DESCRIPTION:
A few syslog messages introduced in the InfoScale 7.4 release were not
assigned unique message numbers that identify where in the product they
originate. Instead, they were marked with the common message identification
number "0000".

RESOLUTION:
This patch fixes the syslog messages generated by the VxVM module that contain
"0000" as the message identifier and provides them with unique numbering.
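
To check whether a system is still logging the generic identifier, the system log can be searched for the pattern; the log path below assumes the default rsyslog location on RHEL 7.

    # grep 'vxvm:0000' /var/log/messages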

* 3950760 (Tracking ID: 3946217)

SYMPTOM:
In a scenario where encryption over wire is enabled and secondary logging is
disabled, vxconfigd hangs and replication does not progress.

DESCRIPTION:
In a scenario where encryption over wire is enabled and secondary logging is
disabled, the application I/Os are encrypted in a sequence, but are not written
to the secondary in the same order. The out-of-sequence and in-sequence I/Os are
stuck in a loop, waiting for each other to complete. Due to this, I/Os are left
incomplete and eventually hang. As a result, the vxconfigd hangs and the
replication does not progress.

RESOLUTION:
As a part of this fix, the I/O encryption and write sequence is improved such
that all I/Os are first encrypted and then sequentially written to the
secondary.

* 3950799 (Tracking ID: 3950384)

SYMPTOM:
In a scenario where volume data encryption at rest is enabled, data corruption
may occur if the file system size exceeds 1TB and the data is located in a file
extent which has an extent size bigger than 256KB.

DESCRIPTION:
In a scenario where data encryption at rest is enabled, data corruption may
occur when both the following cases are satisfied:
- File system size is over 1TB
- The data is located in a file extent which has an extent size bigger than 256KB
This issue occurs due to a bug which causes an integer overflow for the offset.

RESOLUTION:
As a part of this fix, appropriate code changes have been made to improve data
encryption behavior such that the data corruption does not occur.
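
The 1TB boundary is consistent with a signed 32-bit count of 512-byte sectors wrapping around; the arithmetic below is an illustrative sketch of that assumption, not a statement of the exact variable that overflowed. The result, 1099511627776 bytes (1 TiB), is the point beyond which such an offset no longer fits in a signed 32-bit value.

    # echo $(( 512 * 2**31 ))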

* 3951488 (Tracking ID: 3950759)

SYMPTOM:
The application I/Os hang if volume-level I/O shipping is enabled and the
volume layout is mirror-concat or mirror-stripe.

DESCRIPTION:
In a scenario where an application I/O is issued over a volume that has
volume-level I/O shipping enabled, the I/O is shipped to all target nodes.
Typically, on the target nodes, the I/O must be sent only to the local disk.
However, in the case of mirror-concat or mirror-stripe volumes, I/Os are sent
to remote disks as well. This at times leads to an I/O hang.

RESOLUTION:
As a part of this fix, I/O once shipped to the target node is restricted to
only locally connected disks, and remote disks are skipped.
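
Volume-level I/O shipping is controlled at the diskgroup level. The command below is only a sketch that assumes the ioship diskgroup attribute available in FSS-capable releases, with the diskgroup name as a placeholder.

    # vxdg -g <diskgroup> set ioship=on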

Patch ID: VRTSaslapm-7.4.0.1700

* 3998689 (Tracking ID: 3988286)

SYMPTOM:
Support for ASLAPM on the RHEL 7.7 kernel.

DESCRIPTION:
RHEL 7.7 is a new release, and hence the APM module needs to be recompiled
with the new kernel.

RESOLUTION:
The APM module is now compiled with the new kernel.

Patch ID: VRTSdbac-7.4.0.1400

* 3998656 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSdbac-7.4.0.1300

* 3967347 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6 which has RETPOLINE kernel, and also released RETPOLINE kernels for older RHEL 7.x Updates. Veritas Cluster Server 
kernel modules need to be recompiled with RETPOLINE aware GCC to support RETPOLINE kernel.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSamf-7.4.0.1400

* 3998655 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSamf-7.4.0.1300

* 3967346 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6 which has RETPOLINE kernel, and also released RETPOLINE kernels for older RHEL 7.x Updates. Veritas Cluster Server 
kernel modules need to be recompiled with RETPOLINE aware GCC to support RETPOLINE kernel.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSvxfen-7.4.0.1400

* 3998654 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSvxfen-7.4.0.1300

* 3967345 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6 which has RETPOLINE kernel, and also released RETPOLINE kernels for older RHEL 7.x Updates. Veritas Cluster Server 
kernel modules need to be recompiled with RETPOLINE aware GCC to support RETPOLINE kernel.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSgab-7.4.0.1400

* 3998653 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSgab-7.4.0.1300

* 3967344 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6 which has RETPOLINE kernel, and also released RETPOLINE kernels for older RHEL 7.x Updates. Veritas Cluster Server 
kernel modules need to be recompiled with RETPOLINE aware GCC to support RETPOLINE kernel.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel7_x86_64-Patch-7.4.0.1700.tar.gz to /tmp
2. Untar infoscale-rhel7_x86_64-Patch-7.4.0.1700.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel7_x86_64-Patch-7.4.0.1700.tar.gz
    # tar xf /tmp/infoscale-rhel7_x86_64-Patch-7.4.0.1700.tar
3. Install the hotfix (please note that the installation of this P-Patch will cause downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale740P1700 [<host1> <host2>...]

You can also install this patch together with 7.4 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]
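
After installing the patch by either method, the updated package versions can be verified against the Patch IDs listed earlier in this document:
    # rpm -qa | grep -E '^VRTS(llt|gab|vxfen|amf|dbac|vxfs|odm|vxvm|aslapm)'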

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
- When using yum repodata for installing the patch:
 
 1. Run the following commands to refresh the YUM repository
   
   # yum repolist
   # yum updateinfo
      
 2. Run the following command to update the patches
          
   # yum upgrade VRTS*

- When using VIOM, if any issue is seen related to vxlist functionality, please download and install the below VIOM patch:
   https://www.veritas.com/content/support/en_US/downloads/update.UPD827667


OTHERS
------
NONE