infoscale-rhel7.9_x86_64-Patch-7.4.1.2400

 Basic information
Release type: Patch
Release date: 2021-03-04
OS update support: RHEL7 x86-64 Update 9
Technote: None
Documentation: None
Download size: 619.72 MB
Checksum: 2090938218

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On RHEL7 x86-64
InfoScale Enterprise 7.4.1 On RHEL7 x86-64
InfoScale Foundation 7.4.1 On RHEL7 x86-64
InfoScale Storage 7.4.1 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches (release date):
infoscale-rhel7_x86_64-Patch-7.4.1.1900 (obsolete) - 2020-04-24
sig_licensing-rhel7_x86_64-Patch-7.4.1.003.01 (obsolete) - 2020-01-14
vcs-rhel7_x86_64-Patch-7.4.1.1200 (obsolete) - 2019-11-05

 Fixes the following incidents:
3969748, 3970470, 3970482, 3970852, 3972077, 3973076, 3973119, 3973227, 3975897, 3976693, 3977099, 3978184, 3978195, 3978208, 3978645, 3978646, 3978649, 3978678, 3979375, 3979397, 3979398, 3979400, 3979440, 3979462, 3979471, 3979475, 3979476, 3979596, 3979656, 3980021, 3980044, 3980457, 3980564, 3980679, 3980944, 3981028, 3981548, 3981628, 3981631, 3981738, 3982214, 3982215, 3982217, 3982218, 3982912, 3983165, 3983742, 3983989, 3983990, 3983994, 3983995, 3984139, 3984155, 3984343, 3984731, 3985584, 3986468, 3986472, 3986572, 3986960, 3987010, 3987228, 3989085, 3989099, 3989317, 3991264, 3991433, 3992030, 3992222, 3992902, 3993469, 3993898, 3993929, 3994756, 3995201, 3995684, 3995695, 3995698, 3995826, 3995980, 3996332, 3996401, 3996640, 3996787, 3997064, 3997065, 3997074, 3997076, 3998394, 3998677, 3998678, 3998680, 3998681, 3998693, 3999030, 3999398, 3999671, 4000203, 4000598, 4000746, 4002584, 4003442, 4004174, 4004182, 4004927, 4006950, 4008070, 4008253, 4009762, 4010546, 4012318, 4014715, 4014718, 4014984, 4015142, 4015824, 4016077, 4016082, 4016283, 4016291, 4016483, 4016486, 4016487, 4016488, 4016625, 4016768, 4017194, 4019003, 4019753, 4019755, 4019757, 4019758, 4019768, 4020130, 4020291

 Patch ID:
VRTSvlic-4.01.74.005-RHEL7
VRTSvxfs-7.4.1.2800-RHEL7
VRTSglm-7.4.1.2800-RHEL7
VRTSllt-7.4.1.2800-RHEL7
VRTSgab-7.4.1.2800-RHEL7
VRTSamf-7.4.1.2800-RHEL7
VRTSvxfen-7.4.1.2800-RHEL7
VRTSvcsag-7.4.1.2800-RHEL7
VRTSvcs-7.4.1.2800-RHEL7
VRTSdbac-7.4.1.2800-RHEL7
VRTSaslapm-7.4.1.2800-RHEL7
VRTSvxvm-7.4.1.2800-RHEL7
VRTSsfcpi-7.4.1.2800-GENERIC
VRTSsfmh-7.4.0.801-0
VRTSodm-7.4.1.2800-RHEL7

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 2400 * * *
                         Patch Date: 2020-11-25


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 7.4.1 Patch 2400


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSsfcpi
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.1.2800
* 4016077 (4009328) In a cluster filesystem, an unmount hang could be observed if the smap was previously marked bad.
* 3983165 (3975019) Under I/O load with NFS v4 using NFS leases, the server may panic.
* 3976693 (4016085) The fsdb command "xxxiau" refers to the wrong device when dumping information.
* 4004182 (4004181) Read the value of VxFS compliance clock
* 4004927 (3983350) Secondary may falsely assume that the ilist extent is pushed and do the allocation, even if the actual push transaction failed on primary.
* 4014718 (4011596) man page changes for glmdump
* 4015824 (4015278) System panics during vx_uiomove_by_hand.
* 4016082 (4000465) The FSCK binary loops when it detects a break in the sequence of log IDs.
Patch ID: VRTSvxfs-7.4.1.2400
* 4008253 (4008352) Using VxFS mount binary inside container to mount any device might result in core generation.
Patch ID: VRTSvxfs-7.4.1.2300
* 3983989 (3830300) Degraded CPU performance during backup of Oracle archive logs on CFS vs. a local filesystem.
Patch ID: VRTSvxfs-7.4.1.1900
* 3983742 (3983330) On RHEL6, the server panics if auditd is enabled and the filesystem is unmounted and mounted again.
* 3983994 (3902600) Contention observed on the vx_worklist_lk lock in a cluster-mounted file system with ODM.
* 3983995 (3973668) On linux, during system startup, boot.vxfs fails to load vxfs modules & throws following error:
* 3986472 (3978149) FIFO file's timestamps are not updated in case of writes.
* 3987010 (3985839) Cluster hang is observed during allocation of extent to file because of lost delegation of AU.
* 3989317 (3989303) During a reconfiguration, a hang is seen when fsck dumps core and the coredump is stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the FS.
* 3991433 (3990830) File system detected inconsistency with link count table and FSCK flag gets set on the file system.
* 3992030 (3982416) Data corruption observed with Sparse files
* 3993469 (3993442) File system mount hang on a node where NBUMasterWorker group is online
* 3993929 (3984750) Code changes to prevent memory leak in the "fsck" binary.
* 3994756 (3982510) System panic with a null pointer dereference.
* 3995201 (3990257) VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface
* 3995695 (3995694) VxFS module failed to load on RHEL7.8.
* 3995980 (3995978) Support utility to find the nodes from which message response is pending
* 3996332 (3879310) The file system may get corrupted after a failed vxupgrade.
* 3996401 (3984557) Fsck core dumped during sanity check of directory.
* 3996640 (3987643) CFS code may panic at function vx_cfs_siget.
* 3996787 (3981190) Negative nr-inodes entries are seen on RHEL6 platform.
* 3997064 (3989311) FSCK operation may hang
* 3997065 (3996947) FSCK operation may behave incorrectly or hang
* 3997074 (3983191) Fullfsck failing on FS due to invalid attribute length.
* 3997076 (3995526) The fsck command of VxFS may core dump.
* 3998394 (3983958) Code changes have been done to return proper error code while performing open/read/write operations on a removed checkpoint.
Patch ID: VRTSvxfs-7.4.1.1300
* 3981548 (3980741) Code changes to prevent data corruption in delayed allocation enabled filesystem.
* 3981628 (3981627) VxFS module failed to load on RHEL7.7.
* 3981738 (3979693) Fix for vxupgrade failing to upgrade from DLV 7 to 8 and returning EINVAL
Patch ID: VRTSvxfs-7.4.1.1200
* 3970470 (3970480) A kernel panic occurs when writing to cloud files.
* 3970482 (3970481) A file system panic occurs if many inodes are being used.
* 3978645 (3975962) Mounting a VxFS file system with more than 64 PDTs may panic the server.
* 3978646 (3931761) Cluster wide hang may be observed in case of high workload.
* 3978649 (3978305) The vx_upgrade command causes VxFS to panic.
* 3979400 (3979297) A kernel panic occurs when installing VxFS on RHEL6.
* 3980044 (3980043) A file system corruption occurred during a filesystem mount operation.
Patch ID: VRTSodm-7.4.1.2800
* 4020291 (4020290) VRTSodm-7.4.1 module unable to load on RHEL7.x.
Patch ID: VRTSodm-7.4.1.1900
* 3983990 (3897161) Oracle Database on Veritas filesystem with Veritas ODM
library has high log file sync wait time.
* 3995698 (3995697) ODM module failed to load on RHEL7.8.
Patch ID: VRTSodm-7.4.1.1200
* 3981631 (3981630) ODM module failed to load on RHEL7.7
Patch ID: VRTSsfmh-vom-HF074801
* 4020130 (4020129) VIOM Agent for InfoScale 7.4.1 Update 4
Patch ID: VRTSvlic-4.01.74.005
* 3991264 (3991265) Java version upgrade support (SDSCPE-600) and removal of the /opt/Veritas dependency (STESC-5159).
Patch ID: VRTSsfcpi-7.4.1.2800
* 3969748 (3969438) Rolling upgrade fails when the local node is specified at the system name prompt in a non-FSS environment.
* 3970852 (3970848) The CPI configures the product even if you use the -install option while installing the InfoScale Foundation version 7.x or later.
* 3972077 (3972075) Veki fails to unload during patch installation when -patch_path option is used with the installer.
* 3973119 (3973114) While upgrading to InfoScale 7.4, the installer fails to stop vxspec, vxio, and vxdmp as vxcloudd is still running.
* 3979596 (3979603) While upgrading from InfoScale 7.4.1 to 7.4.1.xxx, CPI installs the packages from the 7.4.1.xxx patch only and not the base packages of 7.4.1 GA.
* 3980564 (3980562) CPI does not perform the installation when two InfoScale patch paths are provided, and displays the following message: "CPI ERROR V-9-30-1421 The patch_path and patch2_path patches are both for the same package: VRTSinfoscale".
* 3980944 (3981519) An uninstallation of InfoScale 7.4.1 using response files fails with an error.
* 3985584 (3985583) The addnode operation fails during symmetry check of a new node with other nodes in the cluster.
* 3986468 (3987894) An InfoScale 7.4.1 installation fails even though the installer automatically downloads the appropriate platform support patch from SORT.
* 3986572 (3965602) Rolling upgrade Phase 2 fails if a patch is installed as a part of Rolling upgrade Phase 1.
* 3986960 (3986959) The installer fails to install the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch on SLES 12 SP4.
* 3987228 (3987171) The installer takes longer than expected to start the installation process.
* 3989085 (3989081) When a system is restarted after a successful VVR configuration, it becomes unresponsive.
* 3989099 (3989098) For SLES15, system clock synchronization using the NTP server fails while configuring server-based fencing.
* 3992222 (3992254) The installer fails to install InfoScale 7.4.1 on SLES12 SP5.
* 3993898 (3993897) On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1.
* 3995826 (3995825) The installer script fails to stop the vxfen service while configuring InfoScale components or applying patches.
* 3999671 (3999669) A single-node HA configuration failed on a NetBackup Appliance system because CollectorService failed to start.
* 4000598 (4000596) The 'showversion' option of the InfoScale 7.4.1 installer fails to download the available maintenance releases or patch releases.
* 4004174 (4004172) On SLES15 SP1, while installing InfoScale 7.4.1 along with product patch, the installer fails to install some of the base rpms and exits with an error.
* 4008070 (4008578) Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.
* 4014984 (4014983) The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.
* 4015142 (4015139) Product installer fails to install InfoScale on RHEL 8 systems if IPv6 addresses are provided for the system list.
Patch ID: VRTSdbac-7.4.1.2800
* 4019768 (4013953) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 9 (RHEL7.9).
Patch ID: VRTSdbac-7.4.1.1900
* 3998681 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSdbac-7.4.1.1200
* 3982218 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvcs-7.4.1.2800
* 3995684 (3995685) Discrepancy between the engine log messages of the PR and DR sites in a GCO configuration.
* 4012318 (4012518) The gcoconfig command does not accept "." in the interface name.
Patch ID: VRTSvcs-7.4.1.1200
* 3982912 (3981992) A potentially critical security vulnerability in VCS needs to be addressed.
Patch ID: VRTSvcs-7.4.1.1100
* 3973227 (3978724) CmdServer starts every time the HAD starts, and keeps one port open although the service running on that port is no longer needed.
* 3977099 (3977098) VCS does not support non-evacuation of the service groups during a system restart.
* 3980021 (3969838) A failover Service Group can be brought online on one node even when it is ONLINE on another node
Patch ID: VRTSvcsag-7.4.1.2800
* 3984343 (3982300) A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.
* 4006950 (4006979) When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.
* 4009762 (4009761) A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.
* 4016488 (4007764) The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.
* 4016625 (4016624) When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.
Patch ID: VRTSvxfen-7.4.1.2800
* 4000203 (3970753) Freeing uninitialized/garbage memory causes panic in vxfen.
* 4000746 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4019758 (4013953) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 9 (RHEL7.9).
Patch ID: VRTSamf-7.4.1.2800
* 4019003 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
* 4019757 (4013953) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 9 (RHEL7.9).
Patch ID: VRTSamf-7.4.1.1900
* 3998680 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSamf-7.4.1.1200
* 3982217 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSgab-7.4.1.2800
* 4016486 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
* 4016487 (4007726) When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.
* 4019755 (4013953) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 9 (RHEL7.9).
Patch ID: VRTSgab-7.4.1.1900
* 3998678 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSgab-7.4.1.1200
* 3982215 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSllt-7.4.1.2800
* 3999398 (3989440) The dash (-) in the device name may cause the LLT link configuration to fail.
* 4002584 (3994996) Added the -H miscellaneous flag to lltconfig to support new functionality, and added a tunable to allow skb allocation with the SLEEP flag.
* 4003442 (3983418) In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.
* 4010546 (4018581) The LLT module fails to start and the system log messages indicate missing IP address.
* 4016483 (4016484) The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.
* 4019753 (4013953) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 9 (RHEL7.9).
Patch ID: VRTSllt-7.4.1.1900
* 3998677 (3998676) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 8 (RHEL7.8).
Patch ID: VRTSllt-7.4.1.1200
* 3982214 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvxvm-7.4.1.2800
* 3984155 (3976678) vxvm-recover: "cat: write error: Broken pipe" error encountered in syslog.
* 4016283 (3973202) A VVR primary node may panic due to accessing already freed memory.
* 4016291 (4002066) Panic and hang seen during reclaim.
* 4016768 (3989161) A system panic occurs while handling log requests from vxloggerd.
* 4017194 (4012681) If the vradmind process terminates for any reason, the VCS RVG agent does not restart it properly.
Patch ID: VRTSvxvm-7.4.1.2100
* 3984139 (3965962) No option to disable auto-recovery when a slave node joins the CVM cluster.
* 3984731 (3984730) VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted
* 3992902 (3975667) Softlock in vol_ioship_sender kernel thread
* 3998693 (3998692) VxVM support for RHEL 7.8
Patch ID: VRTSvxvm-7.4.1.1300
* 3980679 (3980678) VxVM support on RHEL 7.7
Patch ID: VRTSvxvm-7.4.1.1200
* 3973076 (3968642) [VVR Encrypted] Intermittent vradmind hang on the new Primary.
* 3975897 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission for all users.
* 3978184 (3868154) When DMP Native Support is set to ON, a dmpnode with multiple VGs cannot be listed properly in the 'vxdmpadm native ls' command.
* 3978195 (3925345) /tmp/vx.* directories are frequently created due to a bug in vxvolgrp command.
* 3978208 (3969860) The event source daemon (vxesd) takes a long time to start when many LUNs (around 1700) are attached to the system.
* 3978678 (3907596) vxdmpadm setattr command gives error while setting the path attribute.
* 3979375 (3973364) I/O hang may occur when VVR Replication is enabled in synchronous mode.
* 3979397 (3899568) Added the tunable dmp_compute_iostats to persistently start/stop iostat gathering.
* 3979398 (3955979) I/O may hang in case of synchronous replication.
* 3979440 (3947265) The delay added in the vxvm-startup script to wait for InfiniBand devices to be discovered leads to various issues.
* 3979462 (3964964) Soft lockup may happen in vxnetd because of invalid packets continuously sent by a port scan tool.
* 3979471 (3915523) A local disk from another node belonging to a private DG (disk group) is exported to the node when a private DG is imported on the current node.
* 3979475 (3959986) Restarting the vxencryptd daemon may cause some IOs to be lost.
* 3979476 (3972679) vxconfigd kept crashing and couldn't start up.
* 3979656 (3975405) cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.
* 3980457 (3980609) Secondary node panic in server threads
* 3981028 (3978330) The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.
Patch ID: VRTSglm-7.4.1.2800
* 4014715 (4011596) Multiple issues were observed during glmdump using hacli for communication
Patch ID: VRTSglm-7.4.1.1700
* 3999030 (3999029) GLM module failed to unload because of VCS service hold.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.1.2800

* 4016077 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad, a hang can occur while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1() for logversion >= VX_LOGVERSION13, if whole AUs are being freed, the VX_AU_SMAPFREE flag is set for those AUs. This ensures that the revoke of delegation for an AU is delayed while that AU has an SMAP free transaction in progress. The flag is cleared either in the post-commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario, when a whole AU is being freed and its smap is marked bad, no error is returned to vx_extfree1() and the subfunction to free the extent is not added to the transaction. As a result, the VX_AU_SMAPFREE flag is not cleared and remains set even though no SMAP free transaction is in progress. This can lead to a hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been made to add error handling in vx_extfree1() so that the VX_AU_SMAPFREE flag is cleared when an error is returned due to a bad smap.
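
The fix above is an instance of a general cleanup pattern: a state flag set before an operation must be cleared on every exit path, including the error path, or later waiters stall on it. The following is a minimal illustrative sketch (plain Python, not VxFS code; all names are hypothetical):

```python
# Sketch of the bug class and its fix: a flag set before an operation
# leaked on the error path, leaving a stale "free in progress" marker.
AU_SMAPFREE = set()  # stand-in for the per-AU VX_AU_SMAPFREE flags

def free_whole_au(au, smap_is_bad):
    """Free a whole AU; the flag must not survive a failed attempt."""
    AU_SMAPFREE.add(au)  # delay delegation revoke while the free is pending
    try:
        if smap_is_bad:
            raise IOError("bad smap")  # previously, no error surfaced here
        # ... add the free-extent subfunction to the transaction here ...
        return True
    except IOError:
        AU_SMAPFREE.discard(au)  # the fix: clear the flag on the error path
        return False

print(free_whole_au(7, smap_is_bad=True))  # False
print(AU_SMAPFREE)                         # set() -- no stale flag left
```

In the real code the flag is otherwise cleared by the transaction's commit/undo processing; the sketch only shows why the error path also needs to clear it.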

* 3983165 (Tracking ID: 3975019)

SYMPTOM:
Under I/O load with NFS v4 using NFS leases, the system may panic with the following message:
Kernel panic - not syncing: GAB: Port h halting system due to client process failure

DESCRIPTION:
NFS v4 uses a lease per file. This delegation can be taken in RD or RW mode and released conditionally. For CFS, we release such a delegation from a specific node while the inode is being normalized (i.e., losing ownership). This can race with another setlease operation on the same node and end up in a deadlock on ->i_lock.

RESOLUTION:
Code changes are made to disable lease.

* 3976693 (Tracking ID: 4016085)

SYMPTOM:
The fsdb "xxxiau" command shows garbage values instead of the correct information.

DESCRIPTION:
The command dumps information for device 0 even when information for another device (for example, device 1 or 2) was requested.

RESOLUTION:
Updated the curpos pointer to point to the correct device.

* 4004182 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock, but no API was available for reading its value.

RESOLUTION:
Provided an API on the mount point to read the compliance clock for that filesystem.

* 4004927 (Tracking ID: 3983350)

SYMPTOM:
Inodes are allocated without pushing the ilist extent

DESCRIPTION:
Multiple messages can be sent from vx_cfs_ilist_push for inodes that are in the same block. On the receiver side, i.e., the primary node, vx_find_iposition() may return bno VX_OVERLAY and btranid 0 until someone actually does the push. All of these serialize in vx_ilist_pushino() on VX_IRWLOCK_RANGE and VX_CFS_IGLOCK. The first one does the push and sets btranid to the last commit ID. As btranid is non-null, vx_recv_ilist_push_msg() waits for vx_tranidflush() to flush the transaction to disk. The other receiver threads do not do the push and have tranid 0, so they return success without waiting for the transactions to be flushed to disk. Now, if the filesystem gets disabled while flushing, we end up in an inconsistent state because some of the inodes have actually returned success and marked the block as pushed in-core on the secondary.

RESOLUTION:
If the block is pushed or pulled and tranid is 0, look up the ilist extent containing the inode again. This populates the correct tranid from ilptranid, and the thread then waits for the transaction flush.

* 4014718 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a supported feature.

DESCRIPTION:
The new "-h" option of glmdump, which uses the hacli utility for communication across the nodes in the cluster, needs to be documented in the man page.

RESOLUTION:
Added the details about this glmdump feature to the man page.

* 4015824 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate huge pages. Under low-memory conditions, it is possible that get_user_pages() returns compound pages. In a compound page, only the head page has a valid mapping set; all other pages are marked TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages of that compound page. This can dereference an illegal address (TAIL_MAPPING), which was causing the panic. VxFS does not support huge pages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().

RESOLUTION:
Code is modified to use the head page when tail pages of a compound page are encountered while VxFS checks for a writable mapping.

* 4016082 (Tracking ID: 4000465)

SYMPTOM:
The FSCK binary loops when it detects a break in the sequence of log IDs.

DESCRIPTION:
When an FS is not cleanly unmounted, it ends up with an unflushed intent log. This intent log is flushed either during the next mount or when fsck is run on the FS. Currently, to build the list of transactions that need to be replayed, VxFS uses a binary search to find the head and tail. But if there is a break in the intent log, the current code can loop indefinitely. To avoid this loop, VxFS now uses a sequential search instead of a binary search to find the range.

RESOLUTION:
Code is modified to use a sequential search instead of a binary search to find the replayable transaction range.
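
To illustrate why a sequential scan is robust here, the sketch below (plain Python, not VxFS code; log-ID values are hypothetical) finds the run of consecutive log IDs ending at the newest entry. A midpoint probe assumes the sequence is monotonic and can bounce forever across a gap, whereas a sequential walk simply stops at the break:

```python
def replay_range_sequential(log_ids):
    """Find the longest run of consecutive IDs ending at the newest entry.

    A break in the sequence (e.g. a gap left by unflushed records) just
    terminates the scan instead of confusing a binary-search probe.
    """
    if not log_ids:
        return None
    end = len(log_ids) - 1
    start = end
    # Walk backward from the tail until the consecutive sequence breaks.
    while start > 0 and log_ids[start - 1] == log_ids[start] - 1:
        start -= 1
    return (start, end)

# A clean log, IDs 10..15: the whole log is replayable.
print(replay_range_sequential([10, 11, 12, 13, 14, 15]))   # (0, 5)
# A broken log with a gap after 12: only the tail run 20..22 replays.
print(replay_range_sequential([10, 11, 12, 20, 21, 22]))   # (3, 5)
```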

Patch ID: VRTSvxfs-7.4.1.2400

* 4008253 (Tracking ID: 4008352)

SYMPTOM:
Using VxFS mount binary inside container to mount any device might result in core generation.

DESCRIPTION:
Using the VxFS mount binary inside a container to mount any device might result in core generation.
This happens because a local pointer is not properly initialized and a garbage value is dereferenced later.

RESOLUTION:
This fix properly initialises all the pointers before dereferencing them.

Patch ID: VRTSvxfs-7.4.1.2300

* 3983989 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archive processes are running on a clustered filesystem.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation, which mainly happens when multiple archivers run on the same node. The allocation pattern of the Oracle archiver processes is:

1. write header with O_SYNC
2. ftruncate-up the file to its final size ( a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all allocations done in this manner go through
internal allocations, i.e., allocations below the file size instead of
allocations past the file size. Internal allocations are done at most 8 pages
at a time, so if multiple processes do this, they all get these 8 pages
alternately and the filesystem becomes very fragmented.

RESOLUTION:
Added a tunable, which will allocate zfod extents when ftruncate
tries to increase the size of the file, instead of creating a hole. This will
eliminate the allocations internal to file size thus the fragmentation. Fixed
the earlier implementation of the same fix, which ran into
locking issues. Also fixed the performance issue while writing from secondary node.
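
The archiver pattern in steps 1-3 can be sketched as follows (a hedged illustration with hypothetical sizes; step 3's lio_listio is omitted and O_SYNC is not used here). It shows that a truncate-up merely records a larger file size without allocating blocks, so the later bulk writes land inside the hole, i.e. the "internal" allocations described above:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"H" * 512)             # 1. write a small header
os.fsync(fd)
os.ftruncate(fd, 64 * 1024 * 1024)   # 2. truncate up to the "final" size
st = os.fstat(fd)
print(st.st_size)                    # 67108864
# The truncate-up created a hole: almost no blocks are allocated yet, so
# the subsequent 1 MB writes (step 3) fill in below the file size and
# fragment when several archivers contend on one node.
print(st.st_blocks * 512 < st.st_size)  # True on filesystems with sparse files
os.close(fd)
os.remove(path)
```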

Patch ID: VRTSvxfs-7.4.1.1900

* 3983742 (Tracking ID: 3983330)

SYMPTOM:
If auditd is enabled on a VxFS filesystem and the filesystem gets unmounted, the server might panic when either the filesystem is mounted again or auditd is disabled.

machine_kexec at ffffffff81040f1b
crash_kexec at ffffffff810d6722
__do_page_fault at ffffffff81054f7c
do_page_fault at ffffffff8156029e 
page_fault at ffffffff8155d265
[exception RIP: pin_inotify_watch+20]		
untag_chunk at ffffffff810f3771
 prune_one at ffffffff810f3bb5
 prune_tree_thread at ffffffff810f3c3f

or

do_page_fault at ffffffff8156029e
page_fault at ffffffff8155d265
[exception RIP: vx_ireuse_clean+796]
vx_ireuse_clean at ffffffffa09492f6 [vxfs]
 vx_iget at ffffffffa094ba0b [vxfs]

DESCRIPTION:
If auditd is enabled on a VxFS filesystem and the filesystem is unmounted, inotify watches are still present on the root inode; when this inode is being reused or the OS tries to clean up its iwatch tree, the server panics.

RESOLUTION:
Code changes have been done to clear inotify watches from root inode.

* 3983994 (Tracking ID: 3902600)

SYMPTOM:
Contention observed on the vx_worklist_lk lock in a cluster-mounted file system with ODM.

DESCRIPTION:
In a CFS environment, for ODM async I/O reads, iodones are done immediately,
calling into ODM itself from the interrupt handler. But all CFS writes are
currently processed in a delayed fashion, where the requests are queued and
processed later by a worker thread. This was adding delays to ODM writes.

RESOLUTION:
Optimized the IO processing of ODM work items on CFS so that those
are processed in the same context if possible.

* 3983995 (Tracking ID: 3973668)

SYMPTOM:
The following error is thrown by the modinst script:
/etc/vx/modinst-vxfs: line 251: /var/VRTS/vxfs/sort.xxx: No such file or directory

DESCRIPTION:
After the changes done through e3935401, the files created by the modinst-vxfs.sh
script are dumped in /var/VRTS/vxfs. If '/var' happens to be a separate
filesystem, it is mounted by the boot.localfs script, and boot.localfs starts
after boot.vxfs (evident from the boot logs). Hence the file creation fails and
boot.vxfs doesn't load the modules.

RESOLUTION:
Adding a dependency on boot.localfs in the LSB header of boot.vxfs causes
localfs to run before boot.vxfs, thereby fixing the issue.
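
A minimal sketch of the kind of LSB init-header dependency described above (field values are assumptions for illustration, not taken from the shipped script). Declaring boot.localfs in Required-Start makes the init system order it before boot.vxfs:

```shell
### BEGIN INIT INFO
# Provides:          boot.vxfs
# Required-Start:    boot.localfs
# Default-Start:     2 3 5
# Description:       Load VxFS kernel modules at boot
### END INIT INFO
```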

* 3986472 (Tracking ID: 3978149)

SYMPTOM:
When a FIFO file is created on a VxFS filesystem, its timestamps are not
updated when writes are done to it.

DESCRIPTION:
In the write context, the Linux kernel calls the update_time inode operation
to update the timestamp fields. This operation was not implemented in VxFS.

RESOLUTION:
Implemented the update_time inode operation in VxFS.
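
The fixed behavior can be observed from user space with a sketch like the one below (hypothetical paths; run on any Linux filesystem that updates FIFO timestamps on write). Opening both ends non-blocking in one process keeps the write from blocking:

```python
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)
before = os.stat(path).st_mtime

# Open the read end first so the non-blocking write open does not fail.
rfd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
wfd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
time.sleep(1.1)  # let a coarse-granularity timestamp clock tick
os.write(wfd, b"ping")

after = os.stat(path).st_mtime
print(after > before)  # expected True when FIFO timestamps are updated
os.close(rfd)
os.close(wfd)
os.remove(path)
```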

* 3987010 (Tracking ID: 3985839)

SYMPTOM:
Cluster hang is observed during the allocation of an extent larger than 32K blocks to a file.

DESCRIPTION:
When a request to allocate more than 32K blocks to a file comes from a secondary node, VxFS sends the request to the primary. To serve it, the primary node starts allocating an extent (or AU) based on the last successfully allocated allocation unit number. VxFS delegates AUs to all nodes, including the primary, and releases these delegations after some time (10 sec). There is a 3-way race between the delegation release thread, the allocator thread, and the extent removal thread. If the delegation release thread picks up an AU to release its delegation and, in the interim, the allocator thread picks the same AU to allocate from, the allocator thread allocates the extent from this AU and changes the state of the AU. If another thread then removes this extent, it races with the delegation release thread. This causes the delegation of that AU to be lost without the allocator engine recognizing it. A subsequent write on that AU hangs, which later causes a system hang.

RESOLUTION:
Code is modified to serialize these operations which will avoid the race.

* 3989317 (Tracking ID: 3989303)

SYMPTOM:
During a reconfiguration, a hang is seen when fsck dumps core and the coredump is stuck in vx_statvfs() on RHEL8 and SLES15, where the OS systemd coredump utility calls vx_statvfs(). This blocks the recovery thread on the FS.

DESCRIPTION:
On RHEL8 and SLES15, the OS systemd coredump utility calls vx_statvfs().
During a reconfiguration where the primary node dies, CFS recovery replays the
log files of the dead nodes, for which vxfsckd runs fsck on a secondary node.
If fsck dumps core in between due to some error, the coredump utility thread
gets stuck in vx_statvfs(), waiting to be woken up by the new primary to
collect fs stats. This blocks the recovery thread, and we land in a deadlock.

RESOLUTION:
To unblock the recovery thread in this case, older fs stats are sent to the
coredump utility when CFS recovery is in progress on the secondary and
"vx_fsckd_process" is set, which indicates that fsck is in progress for this
filesystem.

* 3991433 (Tracking ID: 3990830)

SYMPTOM:
File system detected an inconsistency with the link count table, and the FSCK flag gets set on the file system with the following messages in the syslog:

kernel: vxfs: msgcnt 259 mesg 036: V-2-36: vx_lctbad - /dev/vx/dsk/<dg>/<vol> file system link count table 0 bad
kernel: vxfs: msgcnt 473 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/<dg>/<vol> file system fullfsck flag set - vx_lctbad

DESCRIPTION:
The full FSCK flag is set because of an inconsistency in the link count table. The inconsistency is caused by a race condition when files are removed and created in parallel, which leads to incorrect LCT updates.

RESOLUTION:
Fixed the race condition between the file removal thread and creation thread.

* 3992030 (Tracking ID: 3982416)

SYMPTOM:
Data written to a sparse region (HOLE) of a file may be lost.

DESCRIPTION:
A write to a sparse region of a file races with a putpage (data) flush on the same file. The length of the sparse region (HOLE) is cached by the VxFS putpage
code and used to optimize the data flush for subsequent pages in the file. Meanwhile, if a user writes to the same sparse region, the VxFS putpage code may
ignore the write flush based on the cached HOLE size.

RESOLUTION:
Code changes have been made to cache the hole length rounded down to a page boundary.
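
A minimal sketch of the rounding, assuming a 4 KiB page size (the constant and function name are illustrative):

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def cache_hole_length(hole_len):
    """Round the cached hole length down to a page boundary so a
    racing write into the tail partial page is never skipped by
    the flush optimization."""
    return (hole_len // PAGE_SIZE) * PAGE_SIZE
```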

* 3993469 (Tracking ID: 3993442)

SYMPTOM:
If a file system is bind-mounted in a container, and the file system is unmounted from the host and then mounted on the host again, the mount hangs.

DESCRIPTION:
While unmounting the file system, the inode watch is removed from the root inode. If this is not done in the root context, the operation to remove the watch on
the root inode gets stuck.

RESOLUTION:
Code changes have been done to resolve this issue.

* 3993929 (Tracking ID: 3984750)

SYMPTOM:
"fsck" can leak memory in some error scenarios.

DESCRIPTION:
There are some error scenarios in which the "fsck" binary does not clean up allocated memory. Because of this, some pending buffers can be leaked.

RESOLUTION:
Code changes have been done in "fsck" to free this memory properly and prevent potential memory leaks.
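
The pattern of the fix, releasing pending buffers on every exit path including errors, can be sketched as follows (the function and its arguments are hypothetical, not fsck internals):

```python
def check_metadata(blocks, read_block):
    """Illustrative error-path cleanup: pending buffers are released
    on success, failure, and exception alike, instead of being leaked
    when a check bails out early."""
    pending = []
    try:
        for blk in blocks:
            pending.append(read_block(blk))  # acquire a buffer per block
        return all(pending)                  # the "check" result
    finally:
        pending.clear()  # release pending buffers on every path
```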

* 3994756 (Tracking ID: 3982510)

SYMPTOM:
The system panics and reboots.

DESCRIPTION:
When the file system runs out of space (ENOSPC), VxFS internally tries to delete removable checkpoints to create space in the file system. Such
checkpoints are created by setting the "remove" attribute at checkpoint creation. During such a removal, VxFS internally sets the file operations to NULL so that
further file operations on files belonging to the removed checkpoint do not reach VxFS. While doing so, another file operation can race with it. This
results in a NULL pointer access.

RESOLUTION:
Code is modified to provide a specific handler for all file operations instead of setting them to NULL.

* 3995201 (Tracking ID: 3990257)

SYMPTOM:
VxFS may hit a buffer overflow when I/O is performed on the File Change Log (FCL) file through the Userspace Input Output (UIO) interface.

DESCRIPTION:
Through the Userspace Input Output (UIO) interface, VxFS does not handle larger I/O requests properly, resulting in a buffer overflow.

RESOLUTION:
The VxFS code is modified to limit the length of I/O requests that come through the Userspace Input Output (UIO) interface.
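
The essence of the fix is a length clamp. The limit below is an assumed value for illustration; the release notes do not state the actual cap.

```python
FCL_MAX_IO = 64 * 1024  # assumed per-request cap, for illustration only

def clamp_uio_length(requested, limit=FCL_MAX_IO):
    """Limit a UIO request length to the buffer it must fit in,
    preventing the overflow described above."""
    return min(requested, limit)
```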

* 3995695 (Tracking ID: 3995694)

SYMPTOM:
VxFS module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release with kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL7.8.

* 3995980 (Tracking ID: 3995978)

SYMPTOM:
The machine is in a hang state.

DESCRIPTION:
In hang scenarios, this utility can be used to find whether a message response from any node is pending, which helps in debugging. There is no need to collect dumps of all nodes; only the node with the pending response needs to be analyzed.

RESOLUTION:
Added support for the pendingmsg utility.

* 3996332 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system freeze during 
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
The vxupgrade command requires the file system to be frozen during its
operation. Corruption may be detected while the freeze is in progress, and
the full fsck flag may be set on the file system. However, this does not
stop vxupgrade from proceeding.
At a later stage of vxupgrade, after structures related to the new disk layout
are updated on the disk, VxFS frees up and zeroes out some of the old metadata
inodes. If any error occurs after this point (because of full fsck being set),
the file system needs to go back completely to the previous version as of the
time of the full fsck. Since the metadata corresponding to the previous version
is already cleared, the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file 
system during vxupgrade. Also, disable the file system if an error occurs after 
writing new metadata on the disk. This will force the newly written metadata to 
be loaded in memory on the next mount.

* 3996401 (Tracking ID: 3984557)

SYMPTOM:
Fsck dumped core during the sanity check of a directory.

DESCRIPTION:
Fsck dumps core during the sanity check of a directory when a dentry is corrupted or invalid.

RESOLUTION:
Modified the code to validate the inode number before referencing it during sanity check.

* 3996640 (Tracking ID: 3987643)

SYMPTOM:
CFS code may panic in the function vx_cfs_siget.

DESCRIPTION:
In deployments with local as well as CFS mounts, a node may panic in the function vx_cfs_siget during a file system freeze:

do_page_fault at ffffffffb076f915
page_fault at ffffffffb076b758
    [exception RIP: vx_cfs_siget+460]
vx_recv_cistat at ffffffffc0add2de [vxfs]
vx_recv_rpc at ffffffffc0ae1f40 [vxfs]

RESOLUTION:
Code is modified to fix the issue.

* 3996787 (Tracking ID: 3981190)

SYMPTOM:
Negative nr-inodes entries are seen on the RHEL6 platform.

DESCRIPTION:
When final_iput() is called on a VxFS inode, it decreases the incore inode count (nr-inodes) on RHEL6 through generic_delete_inode(). VxFS never keeps inodes on any global OS list, so it never increments this incore inode counter. VxFS used to adjust this counter during inode inactivation, but that adjustment was removed during an enhancement. Now VxFS adjusts nr-inodes only during unmount of the file system and during module unload. This resulted in negative nr-inodes values on a live machine where the file system is mounted.

RESOLUTION:
Code is modified to increase nr-inodes during inode inactivation.
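
The counter balance can be modeled as below; the class and method names are illustrative, not kernel symbols.

```python
import threading

class InodeCounter:
    """Toy nr-inodes counter: the fix increments at inactivation so
    the OS-side decrement in final_iput() can never drive the count
    negative on a live machine."""
    def __init__(self):
        self.nr_inodes = 0
        self._lock = threading.Lock()

    def inactivate(self):
        # VxFS-side adjustment (the fix): count the inode before the
        # OS-side path decrements it.
        with self._lock:
            self.nr_inodes += 1

    def final_iput(self):
        # OS-side decrement, as done through generic_delete_inode().
        with self._lock:
            self.nr_inodes -= 1
```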

* 3997064 (Tracking ID: 3989311)

SYMPTOM:
The fsck operation may hang.

DESCRIPTION:
The fsck utility may hang while checking a file system. To check the file system, the utility reads metadata from disk, and for
that it uses a buffer cache. This buffer cache has a race due to which the same buffer can be held by two different threads with an incorrect copy of the flags,
each waiting for the other to finish its work. This can result in a deadlock.

RESOLUTION:
Code is modified to fix the race.

* 3997065 (Tracking ID: 3996947)

SYMPTOM:
The fsck operation may behave incorrectly or hang.

DESCRIPTION:
The fsck utility may hang or show undefined behavior if it finds a specific type of corruption. This type of
corruption is visible in the presence of a checkpoint. The fsck utility fixes corruption as per the input given (either "y" or "n"). For this specific
type of corruption, due to a bug, it ends up unlocking a mutex that is not locked.

RESOLUTION:
Code is modified to fix the bug and make sure the mutex is locked before unlocking it.
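
A small sketch of an ownership-checked unlock, which avoids the undefined behavior of unlocking an unheld mutex (the class is illustrative, not fsck code):

```python
import threading

class CheckedMutex:
    """Unlocks only if the calling thread actually holds the lock,
    instead of blindly releasing an unheld mutex."""
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def lock(self):
        self._lock.acquire()
        self._owner = threading.get_ident()

    def unlock(self):
        if self._owner != threading.get_ident():
            return False  # not held by us; skip the unlock
        self._owner = None
        self._lock.release()
        return True
```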

* 3997074 (Tracking ID: 3983191)

SYMPTOM:
Full fsck fails on the file system due to an invalid attribute length.

DESCRIPTION:
Due to an invalid attribute length, full fsck fails on the file system and reports corruption.

RESOLUTION:
VxFS fsck code is modified to handle invalid attribute length.

* 3997076 (Tracking ID: 3995526)

SYMPTOM:
The VxFS fsck command may dump core with the following stack:
#0  __memmove_sse2_unaligned_erms ()
#1  check_nxattr ()
#2  check_attrbuf ()
#3  attr_walk ()
#4  check_attrs ()
#5  pass1d ()
#6  iproc_do_work()
#7  start_thread ()
#8  clone ()

DESCRIPTION:
The length passed to the bcopy operation was invalid.

RESOLUTION:
Code has been modified to allow the bcopy operation only if the length is valid. Otherwise, an EINVAL error is returned, which is handled by the caller.
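
The validation pattern can be sketched as follows; the function name is hypothetical and only models the guard, not the fsck code:

```python
import errno

def safe_bcopy(src, dst, length):
    """Copy only when the length is valid for both buffers;
    otherwise return -EINVAL for the caller to handle."""
    if length < 0 or length > len(src) or length > len(dst):
        return -errno.EINVAL
    dst[:length] = src[:length]
    return 0
```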

* 3998394 (Tracking ID: 3983958)

SYMPTOM:
An "open" system call on a file that belongs to a removed checkpoint returns "EPERM", whereas it should return "ENOENT".

DESCRIPTION:
An "open" system call on a file that belongs to a removed checkpoint returns "EPERM" instead of the expected "ENOENT".

RESOLUTION:
Code changes have been done so that the proper error code is returned in this scenario.

Patch ID: VRTSvxfs-7.4.1.1300

* 3981548 (Tracking ID: 3980741)

SYMPTOM:
File data can be lost in a race between two dalloc background flushers.

DESCRIPTION:
In a race between two dalloc background flushers, the data may be flushed to disk without updating the file size accordingly, creating a scenario where some bytes of data are lost.

RESOLUTION:
Code changes have been done in dalloc code path to remove the possibility of flushing the data without updating the on-disk size.
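
One way to picture the fix is to make the flush and the on-disk size update a single atomic step, so that neither flusher can write data whose extent lies beyond the recorded size. This model is an assumption for illustration; the real dalloc path is kernel code.

```python
import threading

class DallocFile:
    """Illustrative model: the data flush and the on-disk size
    update happen under one lock, so concurrent flushers cannot
    leave flushed data unaccounted for in the file size."""
    def __init__(self):
        self._lock = threading.Lock()
        self.on_disk_size = 0
        self.flushed = bytearray()

    def flush(self, data, end_offset):
        with self._lock:
            self.flushed += data
            if end_offset > self.on_disk_size:
                self.on_disk_size = end_offset  # size updated with the data
```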

* 3981628 (Tracking ID: 3981627)

SYMPTOM:
VxFS module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL7.7.

* 3981738 (Tracking ID: 3979693)

SYMPTOM:
vxupgrade fails while upgrading from DLV 7 to DLV 8 with following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/dg_share/sq_informatica - Invalid argument

DESCRIPTION:
While allocating RCQ inodes as part of a vxupgrade from DLV 7 to 8, only the first RCQ inode
should be allocated from the initial ilist extents; the rest can be allocated from anywhere. To
implement the special allocation for the first RCQ inode, the VX_UPG_IALLOC_IEXT1 flag was checked in vx_upg_ialloc().
However, the code changes done through incident 3936138 removed this check, which resulted in all RCQ inodes
being allocated the same way. Since vx_upg_olt_inoalloc() only handles allocation for the first RCQ inode and
not the others, it returned EINVAL.

RESOLUTION:
Added the check for the VX_UPG_IALLOC_IEXT1 flag in vx_upg_ialloc().

Patch ID: VRTSvxfs-7.4.1.1200

* 3970470 (Tracking ID: 3970480)

SYMPTOM:
A kernel panic occurs when writing to cloud files.

DESCRIPTION:
This issue occurs due to a missing error-value initialization.

RESOLUTION:
Initialization of the error value is done in the write code path, so that a proper error message is displayed when writing to cloud files.

* 3970482 (Tracking ID: 3970481)

SYMPTOM:
A file system panic occurs if many inodes are being used.

DESCRIPTION:
This issue occurs due to improperly managed ownership of inodes.

RESOLUTION:
The ownership of inodes in the case of a large inode count has been fixed.

* 3978645 (Tracking ID: 3975962)

SYMPTOM:
Mounting a VxFS file system with more than 64 PDTs may panic the server.

DESCRIPTION:
For large-memory systems, the number of auto-tuned VMM buffers is huge. To accommodate these buffers, VxFS needs more PDTs. Currently, up to 128 PDTs are supported. However, for more than 64 PDTs, VxFS fails to initialize the strategy routine and calls a wrong function in the mount code path, causing the system to panic.

RESOLUTION:
VxFS has been updated to initialize the strategy routine for more than 64 PDTs.

* 3978646 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze gets initiated while there are multiple pending lazy isize update workitems in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is set to ON and the 'ls -l' command is executed frequently from a non-writing node of the cluster, a huge number of workitems accumulate to be processed by the worker threads. If any workitem with active level 1 is enqueued after these workitems and a cluster-wide freeze gets initiated, it leads to a deadlock: the worker threads are exhausted processing the lazy isize update workitems, and the workitem with active level 1 never gets processed. This causes the cluster to stop responding.

RESOLUTION:
VxFS has been updated to discard the blocking lazy isize update workitems if freeze is in progress.
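
The discard rule can be sketched as a worklist filter; the item shape and kind names below are assumptions for illustration only.

```python
from collections import deque

def drain_worklist(worklist, freeze_in_progress):
    """Illustrative model: while a freeze is in progress, lazy isize
    update workitems are discarded instead of occupying the worker
    threads, so higher-level workitems still get processed."""
    processed = []
    while worklist:
        item = worklist.popleft()
        if freeze_in_progress and item["kind"] == "lazy_isize":
            continue  # discard instead of blocking a worker on it
        processed.append(item)
    return processed
```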

* 3978649 (Tracking ID: 3978305)

SYMPTOM:
The vx_upgrade command causes VxFS to panic.

DESCRIPTION:
When the vx_upgrade command is executed, VxFS incorrectly accesses freed memory, and then it panics if the memory is paged out.

RESOLUTION:
The code is modified to make sure that VxFS does not access the freed memory locations.

* 3979400 (Tracking ID: 3979297)

SYMPTOM:
A kernel panic occurs when installing VxFS on RHEL6.

DESCRIPTION:
During VxFS installation, the fs_supers list is not initialized. While de-referencing the fs_supers pointer, the kernel gets a NULL value for the superblock address and panics.

RESOLUTION:
VxFS has been updated to initialize fs_supers during VxFS installation.

* 3980044 (Tracking ID: 3980043)

SYMPTOM:
During a file system mount operation, after the intent log replay, file system metadata corruption may occur.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay is updated to synchronously write the secondary map and EAU summaries to the disk.

Patch ID: VRTSodm-7.4.1.2800

* 4020291 (Tracking ID: 4020290)

SYMPTOM:
The VRTSodm-7.4.1 module fails to load on RHEL7.x.

DESCRIPTION:
VRTSodm needed recompilation due to recent changes in the VRTSvxfs
header files, because of which some symbols were not being resolved.

RESOLUTION:
Recompiled VRTSodm with the new changes in the VRTSvxfs header files.

Patch ID: VRTSodm-7.4.1.1900

* 3983990 (Tracking ID: 3897161)

SYMPTOM:
An Oracle Database on a Veritas file system with the Veritas ODM library has high
log file sync wait times.

DESCRIPTION:
The ODM_IOP lock would not be held for long. So instead of attempting a
trylock and deferring the I/O when the trylock fails, it is better to take
the regular (non-try) lock and finish the I/O in the interrupt context.
This is safe on Solaris because this "sleep" lock is actually an adaptive
mutex.

RESOLUTION:
odm_iodone now calls ODM_IOP_LOCK() instead of ODM_IOP_TRYLOCK() and
finishes the I/O. With this fix, no I/O is deferred.

* 3995698 (Tracking ID: 3995697)

SYMPTOM:
ODM module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release with kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL7.8.

Patch ID: VRTSodm-7.4.1.1200

* 3981631 (Tracking ID: 3981630)

SYMPTOM:
ODM module failed to load on RHEL7.7

DESCRIPTION:
RHEL7.7 is a new release with kernel changes that caused the ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL7.7.

Patch ID: VRTSsfmh-vom-HF074801

* 4020130 (Tracking ID: 4020129)

SYMPTOM:
N/A

DESCRIPTION:
VIOM Agent for InfoScale 7.4.1 Update 4

RESOLUTION:
N/A

Patch ID: VRTSvlic-4.01.74.005

* 3991264 (Tracking ID: 3991265)

SYMPTOM:
Security vulnerabilities in the old version of Java.

DESCRIPTION:
The bundled JRE is updated to version 1.8.0_271 in the VRTSvlic package.

RESOLUTION:
A VRTSvlic patch for the Java upgrade is provided in the IS 7.4.1 U4 patch release.

Patch ID: VRTSsfcpi-7.4.1.2800

* 3969748 (Tracking ID: 3969438)

SYMPTOM:
Rolling upgrade fails with the error:
CPI ERROR V-9-0-0
0 Cannot execute _cmd_vxdg list | _cmd_grep enabled | _cmd_awk '{print $1}' on system_name with undefined sys->{padv}

DESCRIPTION:
The following prompt appears as part of the rolling upgrade process: "Enter the system name of the cluster on which you would like to perform rolling upgrade [q,?]". After the prompt, the installer suggests the names of systems on which to perform the upgrade. You may select one of the suggested system names, or type a system name, and then press Enter. Either way, if you specify the system name of the local node, the installer displays the following error and exits.
CPI ERROR V-9-0-0
0 Cannot execute _cmd_vxdg list | _cmd_grep enabled | _cmd_awk '{print $1}' on system_name with undefined sys->{padv}

RESOLUTION:
The installer is enhanced to verify whether the FSS disk group is available. This enhancement was made for the Linux, Solaris, and AIX platforms.

* 3970852 (Tracking ID: 3970848)

SYMPTOM:
The CPI configures the product even if you use the -install option while installing the InfoScale Foundation version 7.x or later.

DESCRIPTION:
When you use the CPI to install InfoScale Foundation, the ./installer option performs the installation and configurations by default. If you use the -install option, the installer must only install the packages and not perform the configuration. However, in case of InfoScale Foundation version 7.x or later, when you use the -install option, the installer continues to configure the product after installation.

RESOLUTION:
For InfoScale Foundation, the CPI is modified so that the -install option only installs the packages. Users can then use the -configure option to configure the product.

* 3972077 (Tracking ID: 3972075)

SYMPTOM:
When the -patch_path option is used, the installer fails to install the VRTSveki patch.

DESCRIPTION:
When the VRTSveki patch is present in a patch bundle, if the GA installer is run with the -patch_path option, it fails to install the VRTSveki patch. Consequently, the installation of any dependent VxVM patches also fails. This issue occurs because the veki module, and any other modules that depend on it, are loaded immediately after the packages are installed. When a patch installation starts, the Veki patch attempts to unload the veki module, but fails, because the dependent modules have a hold on it.

RESOLUTION:
This hotfix updates CPI so that if the -patch_path option is used when a VRTSveki patch is present in the bundle, it first unloads the dependent modules and then unloads the veki module.

* 3973119 (Tracking ID: 3973114)

SYMPTOM:
While upgrading InfoScale 7.4, the installer fails to stop vxspec, vxio, and vxdmp because
vxcloudd is still running.

DESCRIPTION:
Because vxcloudd is running, the installer fails to stop vxspec, vxio, and vxdmp.

RESOLUTION:
Modified the CPI code to stop vxcloudd before stopping vxspec, vxio and vxdmp.

* 3979596 (Tracking ID: 3979603)

SYMPTOM:
CPI assumes that the third digit in an InfoScale 7.4.1 version indicates a patch version, and not a GA version. Therefore, it upgrades the packages from the patch only and does not upgrade the base packages.

DESCRIPTION:
To compare product versions and to set the type of installation, CPI compares the currently installed version with the target version to be installed. However, instead of comparing all the digits in a version, it incorrectly compares only the first two digits. In this case, CPI compares 7.4 with 7.4.1.xxx, and finds that the first 2 digits match exactly. Therefore, it assumes that the base version is already installed and then installs the patch packages only.

RESOLUTION:
This hotfix updates the CPI to recognize InfoScale 7.4.1 as a base version and 7.4.1.xxx (for example) as a patch version. After you apply this patch, CPI can properly upgrade the base packages first, and then proceed to upgrade the packages that are in the patch.
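
The corrected comparison can be sketched as follows; the function names and the three-digit base convention are illustrative assumptions based on the description above.

```python
def _version_tuple(v):
    """Split a dotted version string into an integer tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def is_patch_on_installed_base(installed, target):
    """True when 'target' (e.g. '7.4.1.2400') is a patch on the
    already-installed base (e.g. '7.4.1'). Comparing only the first
    two digits, as the old code did, wrongly matched a plain '7.4'
    installation too."""
    return _version_tuple(installed)[:3] == _version_tuple(target)[:3]
```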

* 3980564 (Tracking ID: 3980562)

SYMPTOM:
CPI does not perform the installation when two InfoScale patch paths are provided, and displays the following message: "CPI ERROR V-9-30-1421 The patch_path and patch2_path patches are both for the same package: VRTSinfoscale".

DESCRIPTION:
CPI checks whether the patches mentioned with the patch_path option are the same. Even though the patch bundles are different, they have the same starting string, VRTSinfoscale, followed by the version number. CPI does not compare the version part of the patch bundle name, and so it incorrectly assumes them to be the same. Therefore, CPI does not install the patch bundles simultaneously, and instead, displays the inappropriate error.

RESOLUTION:
CPI is updated to check the complete patch bundle name including the version, for example: VRTSinfoscale740P1100. Now, CPI can properly differentiate between the patches and begin the installation without displaying the inappropriate error.
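
The difference between the old and fixed checks can be modeled as below; the bundle names and the prefix-extraction rule are illustrative.

```python
import re

def old_patches_conflict(a, b):
    """The faulty check: compares only the leading package string,
    so any two InfoScale bundles looked identical."""
    def prefix(s):
        return re.match(r"[A-Za-z]+", s).group(0)
    return prefix(a) == prefix(b)

def patches_conflict(a, b):
    """The fixed check compares the complete bundle name,
    including the trailing version."""
    return a == b
```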

* 3980944 (Tracking ID: 3981519)

SYMPTOM:
An uninstallation of InfoScale 7.4.1 using response files fails with an error:
CPI ERROR V-9-40-1582 Following edge server related entries are missing in response file. Please correct.
edgeserver_host
edgeserver_port

DESCRIPTION:
When you uninstall InfoScale using a response file, the installer checks whether the response file contains the edge server related entries. If these entries are not available in the response file, the uninstall operation fails with the following error:
CPI ERROR V-9-40-1582 Following edge server related entries are missing in response file. Please correct.
edgeserver_host
edgeserver_port
This issue occurs because the installer checks the availability of these entries in the response file while performing any operations using the response files. However, these entries are required only for the configure and upgrade operations.

RESOLUTION:
The installer is enhanced to check the availability of the edge server related entries in the response file only for the configure or upgrade operations.

* 3985584 (Tracking ID: 3985583)

SYMPTOM:
For InfoScale 7.4.1 Patch 1200 onwards, the addnode operation fails with the following error message: 
"Cluster protocol version mismatch was detected between cluster <<cluster_name>> and <<new node name>>. Refer to the Configuration and Upgrade Guide for more details on how to set or upgrade cluster protocol version."

DESCRIPTION:
In InfoScale 7.4.1 Patch 1200 the cluster protocol version was changed from 220 to 230 in the VxVM component. However, the same update is not made in the installer, which continues to set the default cluster protocol version to 220. During the addnode operation, when the new node is compared with the other cluster nodes, the cluster protocol versions do not match, and the error is displayed.

RESOLUTION:
The common product installer is enhanced to check the installed VRTSvxvm package version on the new node and accordingly set the cluster protocol version.

* 3986468 (Tracking ID: 3987894)

SYMPTOM:
An InfoScale 7.4.1 installation fails even though the installer automatically downloads the appropriate platform support patch from SORT.

DESCRIPTION:
During an InfoScale installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. Thereafter however, it fails to correctly identify the base version of the product package on the system, and hence fails to complete the installation even though the appropriate patch is available.

RESOLUTION:
The installer is updated to correctly identify the base version of the product package.

* 3986572 (Tracking ID: 3965602)

SYMPTOM:
When a patch is installed as a part of rolling upgrade Phase1, the rolling
upgrade Phase2 fails with the error:
A more recent version of InfoScale Enterprise, 7.3.1.xxx, is already
installed.

DESCRIPTION:
When a patch is installed as a part of rolling upgrade Phase1, the kernel
package version might get upgraded to a version that is higher than the version
considered for upgrade. 

This results in failure of the rolling upgrade Phase2 with the error:  A
more recent version of InfoScale Enterprise, 7.3.1.xxx, is already
installed.

RESOLUTION:
The CPI rolling upgrade prerequisites are modified to continue even if a patch
for InfoScale product is installed as part of Rolling upgrade Phase1.

* 3986960 (Tracking ID: 3986959)

SYMPTOM:
The installer fails to install the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch on SLES 12 SP4.

DESCRIPTION:
InfoScale support for SUSE Linux Enterprise Server 12 SP4 is introduced with the 'infoscale-sles12.4_x86_64-Patch-7.4.1.100' patch. However, the path of the 'ethtool' command is changed in SLES 12 SP4. Therefore, the installer does not recognize the changed path and fails to install. The following error is logged:
CPI ERROR V-9-30-1570 The following required OS commands are missing on <<node name>>:
/bin/ls: cannot access '/sbin/ethtool': No such file or directory

RESOLUTION:
The path of the 'ethtool' command is updated in the installer for SLES12 SP4.

* 3987228 (Tracking ID: 3987171)

SYMPTOM:
The installer takes longer than expected time to start installation process.

DESCRIPTION:
This issue occurs because, before it begins the installation, the installer incorrectly attempts to download latest CPI patches using a private IPv6 address until the process times out.

RESOLUTION:
The installer is updated to use the first available public IP address to download the latest CPI patches so that it can complete this activity and start the installation within the expected amount of time.

* 3989085 (Tracking ID: 3989081)

SYMPTOM:
When a system is restarted after a successful VVR configuration, it becomes unresponsive.

DESCRIPTION:
The installer starts the vxconfigd service in the user slice instead of the system slice, and does not make the vxvm-boot service active. When the system is restarted after VVR configuration, the services from the user slice get killed and the CVMVxconfigd agent cannot bring vxconfigd online. As a result, the system becomes unresponsive and fails to come up at the primary site.

RESOLUTION:
The installer is updated to start and stop the vxconfigd service using the systemctl commands.

* 3989099 (Tracking ID: 3989098)

SYMPTOM:
For SLES15, system clock synchronization using the NTP server fails while configuring server-based fencing.

DESCRIPTION:
While configuring server-based fencing, when multiple CP servers are provided in the configuration, and the ssh connection is not already established, then the installer does not create the sys object with the required values on the first CP server. As a result, the system clock synchronization always fails on the first CP server.

RESOLUTION:
The installer is enhanced to create the sys object with the appropriate values.

* 3992222 (Tracking ID: 3992254)

SYMPTOM:
On SLES12 SP5, the installer fails to install InfoScale 7.4.1 and displays the following message: "CPI ERROR V-9-0-0 This release is intended to operate on SLES x86_64 version 3.12.22  but <<hostname>> is running version 4.12.14-120-default".

DESCRIPTION:
On SLES12 SP5, the installer fails to install InfoScale 7.4.1. During the release compatibility check, it displays the following message: "CPI ERROR V-9-0-0 This release is intended to operate on SLES x86_64 version 3.12.22  but <<hostname>> is running version 4.12.14-120-default". This issue occurs because the kernel version included in the installer is different from the kernel version required by SLES12 SP5.

RESOLUTION:
The installer is enhanced to include the kernel version that is required to support SLES12 SP5.

* 3993898 (Tracking ID: 3993897)

SYMPTOM:
On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1.

DESCRIPTION:
On SLES12 SP4, if the kernel version is not 4.12.14-94.41-default, the installer fails to install InfoScale 7.4.1 and displays the following message: "CPI ERROR V-9-0-0 0 No padv object defined for padv SLES15x8664 for system". When installing InfoScale, the product installer performs a release compatibility check and defines the padv object. However, during this check, the installer fails if the kernel version included in the installer is different from the kernel version installed on the system, and the padv object cannot be defined.

RESOLUTION:
The installer is enhanced to support different kernel versions for SLES12 SP4.

* 3995826 (Tracking ID: 3995825)

SYMPTOM:
The installer script fails to stop the vxfen service while configuring InfoScale components or applying patches.

DESCRIPTION:
When InfoScale Enterprise is installed using the CPI, the value of the START_<<COMP>> and the STOP_<<COMP>> variables is set to '1' in the configuration files for some components like vxfen, amf, and llt. During the post-installation phase, the CPI script sets these values back to '0'.
When InfoScale Enterprise is installed using Yum or Red Hat Satellite, the values of these variables remain set to '1' even after the installation is completed. Later, if CPI is used to install patches or to configure any of the components on such an installation, the installer script fails to stop the vxfen service.

RESOLUTION:
To avoid such issues, the installer script is updated to check the values of the START_<<COMP>> and the STOP_<<COMP>> variables and set them to '0' during the pre-configuration phase.

* 3999671 (Tracking ID: 3999669)

SYMPTOM:
A single-node HA configuration failed on a NetBackup Appliance system because CollectorService failed to start.

DESCRIPTION:
A single-node HA setup fails on a NetBackup Appliance system, and the following error is logged: "Failed to Configure the VCS". This issue occurs because the CollectorService fails to start. The CollectorService is not directly related to a cluster setup, so its failure should not impact the HA configuration.

RESOLUTION:
The InfoScale product installer addresses this issue by blocking the start of CollectorService on any appliance-based system.

* 4000598 (Tracking ID: 4000596)

SYMPTOM:
The 'showversion' option of the InfoScale 7.4.1 installer fails to download the available maintenance releases or patch releases.

DESCRIPTION:
The "showversion" option of installer lets you view the currently installed InfoScale version, and to download the available maintenance or patch releases. However, the InfoScale 7.4.1 installer fails to identify the path of the repository where the maintenance or the patch releases should be downloaded.

RESOLUTION:
The installer is updated to either use /opt/VRTS/repository as the default repository or accept a different location to download the suggested releases.

* 4004174 (Tracking ID: 4004172)

SYMPTOM:
On SLES15 SP1, while installing InfoScale 7.4.1 along with product patch, the installer fails to install some of the base rpms and exits with an error.

DESCRIPTION:
While installing InfoScale 7.4.1 along with the product patch 'infoscale-sles15_x86_64-Patch-7.4.1.1800', the installer first installs the base packages and then the other patches available in the patch bundle. However, the installer fails to install the VRTSvxvm, VRTSaslapm, and VRTScavf base rpms and exits with the following error, although the patches available in the patch bundle install successfully:
The following rpms failed to install on <<system name>>:
VRTSvxvm
VRTSaslapm
VRTScavf

RESOLUTION:
The installer is enhanced to ignore the failures in the base package installation when InfoScale 7.4.1 is installed along with the available product patch.

* 4008070 (Tracking ID: 4008578)

SYMPTOM:
Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.

DESCRIPTION:
The name of a cluster node may be set to a fully qualified hostname, for example, somehost.example.com. However, by default, the product installer trims this value and uses the shorter hostname (for example, somehost) for the cluster configuration.

RESOLUTION:
This hotfix updates the installer to allow the use of the new "-fqdn" option. If this option is specified, the installer uses the fully qualified hostname for cluster configuration. Otherwise, the installer continues with the default behavior.

* 4014984 (Tracking ID: 4014983)

SYMPTOM:
The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.

DESCRIPTION:
The product installer prompts you to provide the telemetry details of cluster nodes after upgrading the InfoScale packages but before starting the services. If you cancel the installation at this stage, the Cluster Server resources cannot be brought online. Therefore, a warning message is required during the pre-upgrade checks to remind you to keep these details ready.

RESOLUTION:
The product installer is updated to notify you at the time of the pre-upgrade check, that if the cluster nodes are not registered with TES or VCR, you will need to provide these telemetry details later on.

* 4015142 (Tracking ID: 4015139)

SYMPTOM:
If IPv6 addresses are provided for the system list on a RHEL 8 system, the product installer fails to verify the network communication with the remote systems and cannot proceed with the installation. The following error is logged:
CPI ERROR V-9-20-1104 Cannot ping <IPv6_address>. Please make sure that:
        - Provided hostname is correct
        - System <IPv6_address> is in same network and reachable
        - 'ping' command is available to use (provided by 'iputils' package)

DESCRIPTION:
This issue occurs because the installer uses the ping6 command to verify the communication with the remote systems if IPv6 addresses are provided for the system list. For RHEL 8 and its minor versions, the path for ping6 has changed from /bin/ping6 to /sbin/ping6, but the installer uses the old path.

RESOLUTION:
This hotfix updates the installer to use the correct path for the ping6 command.
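A path-tolerant lookup of the kind the fix implies can be sketched as below; the helper takes its candidate list as arguments so the demonstration can use paths that exist on any system (the real candidates would be /sbin/ping6 and /bin/ping6).

```shell
# Print the first existing path from a candidate list.
first_existing() {
    for p in "$@"; do
        if [ -e "$p" ]; then
            echo "$p"
            return 0
        fi
    done
    return 1
}
# Real usage would be: first_existing /sbin/ping6 /bin/ping6
demo=$(first_existing /nonexistent/ping6 /bin/sh)
echo "found: $demo"
```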

Patch ID: VRTSdbac-7.4.1.2800

* 4019768 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSdbac-7.4.1.1900

* 3998681 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSdbac-7.4.1.1200

* 3982218 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSvcs-7.4.1.2800

* 3995684 (Tracking ID: 3995685)

SYMPTOM:
A discrepancy was observed between the VCS engine log messages at the primary site and those at the DR site in a GCO configuration.

DESCRIPTION:
If a resource that was online at the primary site is taken offline outside VCS control, the VCS engine logs messages about the unexpected change in the state of the resource, the successful Clean entry point execution, and so on. These messages clearly indicate that the resource is faulted. However, the VCS engine does not log any debugging error messages about the fault at the primary site; instead, it logs them at the DR site. Consequently, there is a discrepancy between the engine log messages at the primary site and those at the DR site.

RESOLUTION:
The VCS engine module is updated to log the appropriate debugging error messages at the primary site when a resource goes into the Faulted state.

FILE / VERSION:
had.exe / 7.4.10004.0
hacf.exe / 7.4.10004.0
haconf.exe  / 7.4.10004.0

* 4012318 (Tracking ID: 4012518)

SYMPTOM:
The gcoconfig command does not accept "." in the interface name.

DESCRIPTION:
The naming guidelines for network interfaces allow the "." character to be included as part of the name string. However, if this character is included, the gcoconfig command returns an error stating that the NIC name is invalid.

RESOLUTION:
This hotfix updates the gcoconfig command code to allow the inclusion of the "." character when providing interface names.

Patch ID: VRTSvcs-7.4.1.1200

* 3982912 (Tracking ID: 3981992)

SYMPTOM:
A potentially critical security vulnerability in VCS needs to be addressed.

DESCRIPTION:
A potentially critical security vulnerability in VCS needs to be addressed.

RESOLUTION:
This hotfix addresses the security vulnerability. For details, refer to the security advisory at: https://www.veritas.com/content/support/en_US/security/VTS19-003.html

Patch ID: VRTSvcs-7.4.1.1100

* 3973227 (Tracking ID: 3978724)

SYMPTOM:
CmdServer starts every time the HAD starts, and keeps one port open although the service running on that port is no longer needed.

DESCRIPTION:
Each time the HAD starts, the CmdServer process also starts and runs as a background service. If the service provided by CmdServer is no longer needed, you must manually stop that service every time the HAD starts. Also, there is no automated way to disable the CmdServer daemon.

RESOLUTION:
A new environment variable, STARTCMDSERVER, is now available, which lets you specify whether CmdServer daemon should be started when HAD starts. The default value of STARTCMDSERVER is 1. Make sure to set the value to 0 if you do not want the CmdServer service to be started when HAD starts. This environment variable is available in the /etc/sysconfig/vcs file for Linux and in the /etc/default/vcs file for Solaris and AIX.
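For reference, a minimal fragment of the Linux configuration file named in this entry, using the variable and values it describes:

```
# /etc/sysconfig/vcs (Linux); use /etc/default/vcs on Solaris and AIX.
# 0 = do not start CmdServer when HAD starts (default is 1).
STARTCMDSERVER=0
```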

* 3977099 (Tracking ID: 3977098)

SYMPTOM:
VCS does not support non-evacuation of the service groups during a system restart.

DESCRIPTION:
When VCS is stopped as part of a system restart operation, the active service groups on the node are migrated to another cluster node. In some cases, for example, to avoid administrative intervention during a manual shutdown, you may not want to evacuate the service groups during a system restart. However, VCS does not support such non-evacuation of the service groups.

RESOLUTION:
A new environment variable, NOEVACUATE, is now available, which lets you specify whether to evacuate service groups or not. The default value of NOEVACUATE is 0. Make sure to set the value to 1 if you do not want VCS to evacuate the service groups. This environment
variable is available in the /etc/sysconfig/vcs file for Linux and in the /etc/default/vcs file for Solaris and AIX.
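For reference, a minimal fragment of the Linux configuration file named in this entry, using the variable and values it describes:

```
# /etc/sysconfig/vcs (Linux); use /etc/default/vcs on Solaris and AIX.
# 1 = do not evacuate service groups during a system restart (default is 0).
NOEVACUATE=1
```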

* 3980021 (Tracking ID: 3969838)

SYMPTOM:
A failover Service Group can be brought online on one node even when it is ONLINE on another node

DESCRIPTION:
The flush operation clears the internal state of a resource but does not stop the entry points that are already running. In this situation, an entry point may report a pseudo fault even when the service group is already offline on that particular node. When such a fault is reported, the value of CurrentCount is decremented to zero although the service group is active in the cluster. The zero value signifies that the group is completely offline, and hence VCS inadvertently allows any subsequent online request.

RESOLUTION:
Additional checks are introduced to ensure that this incorrect decrement in CurrentCount is prevented when the failover service group is active on any other node in the cluster.

Patch ID: VRTSvcsag-7.4.1.2800

* 3984343 (Tracking ID: 3982300)

SYMPTOM:
A warning message related to the process priority is logged in the ProcessOnOnly agent log every minute.

DESCRIPTION:
This issue occurs because the value of the Priority attribute of the processes monitored by the ProcessOnOnly agent does not match the actual process priority. As part of the Monitor function, if the priority of a process is found to be different from the value that is configured for the Priority attribute, warning messages are logged in the following scenarios:
1. The process is started outside VCS control with a different priority.
2. The priority of the process is changed after it is started by VCS.

RESOLUTION:
The ProcessOnOnly agent is updated to set the current value of the priority of a process to the Priority attribute if these values are found to be different.

* 4006950 (Tracking ID: 4006979)

SYMPTOM:
When the AzureDisk resource comes online on a cluster node, it goes into the UNKNOWN state on all the other nodes.

DESCRIPTION:
When an AzureDisk resource is online on one node, the status of that resource appears as UNKNOWN, instead of OFFLINE, on the other nodes in the cluster. Also, if the resource is brought online on a different node, its status on the remaining nodes appears as UNKNOWN. However, if the resource is not online on any node, its status correctly appears as OFFLINE on all the nodes.
This issue occurs when the VM name on the Azure portal does not match the local hostname of the cluster node. The monitor operation of the agent compares these two values to identify whether the VM to which the AzureDisk resource is attached is part of a cluster or not. If the values do not match, the agent incorrectly concludes that the resource is attached to a VM outside the cluster. Therefore, it displays the status of the resource as UNKNOWN.

RESOLUTION:
The AzureDisk agent is modified to compare the VM name with the appropriate attribute of the agent so that the status of an AzureDisk resource is reported correctly.

* 4009762 (Tracking ID: 4009761)

SYMPTOM:
A lower NFSRestart resource fails to come online within the duration specified in OnlineTimeout when the share directory for NFSv4 lock state information contains millions of small files.

DESCRIPTION:
As part of the Online operation, the NFSRestart agent copies the NFSv4 state data of clients from the shared storage to the local path. However, if the source location contains millions of files, some of which may be stale, their movement may not be completed before the operation times out.

RESOLUTION:
A new action entry point named "cleanup" is provided, which removes stale files. The usage of the entry point is as follows:
$ hares -action <resname> cleanup -actionargs <days> -sys <sys>
  <days>: number of days, deleting files that are <days> old
Example:
$ hares -action NFSRestart_L cleanup -actionargs 30 -sys <sys>
The cleanup action ensures that files older than the number of days specified in the -actionargs option are removed; the minimum expected duration is 30 days. Thus, only the relevant files to be moved remain, and the Online operation is completed in time.

* 4016488 (Tracking ID: 4007764)

SYMPTOM:
The NFS locks related log file is flooded with the "sync_dir:copy failed for link" error messages.

DESCRIPTION:
The smsyncd daemon used by the NFSRestart agent copies the symbolic links and the NFS locks from the /var/statmon/sm directory to a specific directory. These files and links are used to track the clients who have set a lock on the NFS mount points. If this directory already contains a symbolic link with the same name as one that the smsyncd daemon is trying to copy, the /bin/cp command fails and logs an error message.

RESOLUTION:
The smsyncd daemon is enhanced to copy the symbolic links even if a link with the same name is already present.

* 4016625 (Tracking ID: 4016624)

SYMPTOM:
When a disk group is forcibly imported with ClearClone enabled, different DGIDs are assigned to the associated disks.

DESCRIPTION:
When the ForceImport option is used, a disk group gets imported with the available disks, regardless of whether all the required disks are available or not. In such a scenario, if the ClearClone attribute is enabled, the available disks are successfully imported, but their DGIDs are updated to new values. Thus, the disks within the same disk group end up with different DGIDs, which may cause issues with the functioning of the storage configuration.

RESOLUTION:
The DiskGroup agent is updated to allow the ForceImport and the ClearClone attributes to be set to the following values as per the configuration requirements. ForceImport can be set to 0 or 1. ClearClone can be set to 0, 1, or 2. ClearClone is disabled when set to 0 and enabled when set to 1 or 2. ForceImport is disabled when set to 0 and is ignored when ClearClone is set to 1. To enable both, ClearClone and ForceImport, set ClearClone to 2 and ForceImport to 1.
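Assuming the standard hares attribute-modification syntax, enabling both behaviors as described above might look like the following; "dg_res" is a hypothetical DiskGroup resource name.

```
# Enable both ClearClone and ForceImport for a DiskGroup resource:
hares -modify dg_res ClearClone 2
hares -modify dg_res ForceImport 1
```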

Patch ID: VRTSvxfen-7.4.1.2800

* 4000203 (Tracking ID: 3970753)

SYMPTOM:
Freeing uninitialized/garbage memory causes panic in vxfen.

DESCRIPTION:
Freeing uninitialized/garbage memory causes panic in vxfen.

RESOLUTION:
Veritas has modified the VxFen kernel module to fix the issue by initializing the object before attempting to free it.

* 4000746 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out, the VxFEN process fails to start, and the following error is reported: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
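For reference, a fragment of the configuration file named in this entry, using the tunable and the value range it describes:

```
# /etc/vxfenmode -- VxFEN disk group discovery timeout in seconds
# (default and maximum are 600 per this entry).
getdisks_timeout=300
```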

* 4019758 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSamf-7.4.1.2800

* 4019003 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

* 4019757 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSamf-7.4.1.1900

* 3998680 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSamf-7.4.1.1200

* 3982217 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSgab-7.4.1.2800

* 4016486 (Tracking ID: 4011683)

SYMPTOM:
The GAB module fails to start, and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its arguments are malformed. If the names of multiple drivers in the environment contain the value "gab" as a substring, the major device numbers of all of them get passed to the mknod command. Instead, the command must receive the major device number of the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.
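The substring-versus-exact-match problem can be sketched with a fabricated /proc/devices excerpt; "gabx" below is a made-up second driver name used only for illustration.

```shell
# Fabricated /proc/devices-style content: two drivers whose names both
# contain "gab" as a substring.
devices="142 gabx
143 gab"
# Substring match picks up both majors -> a malformed mknod invocation.
substr=$(printf '%s\n' "$devices" | awk '/gab/ {print $1}')
# Exact field match returns only the GAB driver's major number.
exact=$(printf '%s\n' "$devices" | awk '$2 == "gab" {print $1}')
echo "substring match returns: $substr"
echo "exact match returns:     $exact"
```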

* 4016487 (Tracking ID: 4007726)

SYMPTOM:
When a GAB message that is longer than the value specified by GAB_MAX_MSGSIZE is transferred, an error message is added to the VCS logs. However, the error message is not sufficiently descriptive.

DESCRIPTION:
The current error message does not mention the type of the GAB message that was transferred and the port that was used to transfer the message. Thus, the error message is not useful for troubleshooting.

RESOLUTION:
This hotfix addresses the issue by enhancing the error message that is logged. The message now mentions whether the message type was DIRECTED or BROADCAST, and also the port number that was used to transfer the GAB message.

* 4019755 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSgab-7.4.1.1900

* 3998678 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSgab-7.4.1.1200

* 3982215 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSllt-7.4.1.2800

* 3999398 (Tracking ID: 3989440)

SYMPTOM:
The dash (-) in the device name may cause the LLT link configuration to fail.

DESCRIPTION:
While configuring LLT links, if the LLT module finds a dash in the device name, it assumes that the device name is in the 'eth-<mac-address>' format and considers the string after the dash as the mac address. However, if the user specifies an interface name that includes a dash, the string after the dash is not intended to be a MAC address. In such a case, the LLT link configuration fails.

RESOLUTION:
The LLT module is updated to check for the string 'eth-' before validating the device name with the 'eth-<mac-address>' format. If the string 'eth-' is not found, LLT assumes the name to be an interface name.
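The corrected validation can be sketched as follows: only names that actually start with "eth-" are parsed as the 'eth-<mac-address>' format; anything else is treated as a plain interface name. The names below are made up.

```shell
# Classify a link name the way the fixed check does: require the literal
# "eth-" prefix before treating the text after the dash as a MAC address.
classify_link() {
    case "$1" in
        eth-*) echo "mac-format" ;;
        *)     echo "interface-name" ;;
    esac
}
classify_link "eth-00:04:23:AC:24:2D"   # mac-format
classify_link "lan-prod0"               # interface-name
```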

* 4002584 (Tracking ID: 3994996)

SYMPTOM:
A -H miscellaneous flag is needed in lltconfig to group new functionalities, along with a tunable that allows skb allocation with the SLEEP flag.

DESCRIPTION:
A new -H miscellaneous flag is added to lltconfig to group new functionalities under a single option, because very few single-letter option names remain to assign one to each new functionality.

RESOLUTION:
The following functionalities are added under the -H flag:
1. A tunable that allows skb allocation with the SLEEP flag, in case memory is scarce.
2. An skb_alloc failure count in the lltstat output.

* 4003442 (Tracking ID: 3983418)

SYMPTOM:
In a rare case, after a panic or a reboot of a node, it may fail to join the CVM master due to an inconsistent LLT port state on the master.

DESCRIPTION:
When a node tries to join the cluster after a reboot or a panic, in a rare case, on one of the remaining nodes the port state of CVM or any other port may be in an inconsistent state with respect to LLT.

RESOLUTION:
This hotfix updates the LLT module to fix the issue by not accepting a particular type of packet when not connected to the remote node, and it also logs more states into the LLT circular buffer.

* 4010546 (Tracking ID: 4018581)

SYMPTOM:
The LLT module fails to start and the system log messages indicate missing IP address.

DESCRIPTION:
When only the low priority LLT links are configured over UDP, UDPBurst mode must be disabled. UDPBurst mode must only be enabled when the high priority LLT links are configured over UDP. If the UDPBurst mode gets enabled while configuring the low priority links, the LLT module fails to start and logs the following error: "V-14-2-15795 missing ip address / V-14-2-15800 UDPburst:Failed to get link info".

RESOLUTION:
This hotfix updates the LLT module to not enable the UDPBurst mode when only the low priority LLT links are configured over UDP.

* 4016483 (Tracking ID: 4016484)

SYMPTOM:
The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.

DESCRIPTION:
The vxexplorer utility panics the node on which it runs if the LLT version on the node is llt-rhel8_x86_64-Patch-7.4.1.2100 or later.

RESOLUTION:
This hotfix addresses the issue so that the vxexplorer utility does not panic nodes that run on the RHEL 8 platform.

* 4019753 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSllt-7.4.1.1900

* 3998677 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSllt-7.4.1.1200

* 3982214 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSvxvm-7.4.1.2800

* 3984155 (Tracking ID: 3976678)

SYMPTOM:
The error "cat: write error: Broken pipe" is encountered in syslog multiple times under vxvm-recover.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, "cat: write error: Broken pipe" is encountered in syslog and reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, and the first subshell runs a cat command. When a particular 'if' condition is satisfied, 'return' is called, exiting the later subshells even when there is still data to be read in the cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4016283 (Tracking ID: 3973202)

SYMPTOM:
A VVR primary node may panic with the following stack due to access to freed memory:
nmcom_throttle_send()
nmcom_sender()
kthread ()
kernel_thread()

DESCRIPTION:
After sending the data to the VVR (Veritas Volume Replicator) secondary site, the code accessed some variables for which the memory was already released because the data ACK was processed quite early. This was a rare race condition that could result in access to freed memory.

RESOLUTION:
Code changes have been made to avoid the incorrect memory access.

* 4016291 (Tracking ID: 4002066)

SYMPTOM:
The system panics with the following stack during a reclaim operation:
__wake_up_common_lock+0x7c/0xc0
sbitmap_queue_wake_all+0x43/0x60
blk_mq_tag_wakeup_all+0x15/0x30
blk_mq_wake_waiters+0x3d/0x50
blk_set_queue_dying+0x22/0x40
blk_cleanup_queue+0x21/0xd0
vxvm_put_gendisk+0x3b/0x120 [vxio]
volsys_unset_device+0x1d/0x30 [vxio]
vol_reset_devices+0x12b/0x180 [vxio]
vol_reset_kernel+0x16c/0x220 [vxio]
volconfig_ioctl+0x866/0xdf0 [vxio]

DESCRIPTION:
With recent kernels, the kernel returns a pre-allocated sense buffer. These sense buffer pointers are expected to remain unchanged across multiple uses of a request; they are pre-allocated and must not change until the request memory is freed. DMP overwrote the original sense buffer, hence the issue.

RESOLUTION:
Code changes have been made to avoid tampering with the pre-allocated sense buffer.

* 4016768 (Tracking ID: 3989161)

SYMPTOM:
The system panics because of a hard lockup with the following stack:

#13 [ffff9467ff603860] native_queued_spin_lock_slowpath at ffffffffb431803e
#14 [ffff9467ff603868] queued_spin_lock_slowpath at ffffffffb497a024
#15 [ffff9467ff603878] _raw_spin_lock_irqsave at ffffffffb4988757
#16 [ffff9467ff603890] vollog_logger at ffffffffc105f7fa [vxio]
#17 [ffff9467ff603918] vol_rv_update_childdone at ffffffffc11ab0b1 [vxio]
#18 [ffff9467ff6039f8] volsiodone at ffffffffc104462c [vxio]
#19 [ffff9467ff603a88] vol_subdisksio_done at ffffffffc1048eef [vxio]
#20 [ffff9467ff603ac8] volkcontext_process at ffffffffc1003152 [vxio]
#21 [ffff9467ff603b10] voldiskiodone at ffffffffc0fd741d [vxio]
#22 [ffff9467ff603c40] voldiskiodone_intr at ffffffffc0fda92b [vxio]
#23 [ffff9467ff603c80] voldmp_iodone at ffffffffc0f801d0 [vxio]
#24 [ffff9467ff603c90] bio_endio at ffffffffb448cbec
#25 [ffff9467ff603cc0] gendmpiodone at ffffffffc0e4f5ca [vxdmp]
... ...
#50 [ffff9497e99efa60] do_page_fault at ffffffffb498d975
#51 [ffff9497e99efa90] page_fault at ffffffffb4989778
#52 [ffff9497e99efb40] conv_copyout at ffffffffc10005da [vxio]
#53 [ffff9497e99efbc8] conv_copyout at ffffffffc100044e [vxio]
#54 [ffff9497e99efc50] volioctl_copyout at ffffffffc1032db3 [vxio]
#55 [ffff9497e99efc80] vol_get_logger_data at ffffffffc105e4ce [vxio]
#56 [ffff9497e99efcf8] voliot_ioctl at ffffffffc105e66b [vxio]
#57 [ffff9497e99efd78] volsioctl_real at ffffffffc10aee82 [vxio]
#58 [ffff9497e99efe50] vols_ioctl at ffffffffc0646452 [vxspec]
#59 [ffff9497e99efe70] vols_unlocked_ioctl at ffffffffc06464c1 [vxspec]
#60 [ffff9497e99efe80] do_vfs_ioctl at ffffffffb4462870
#61 [ffff9497e99eff00] sys_ioctl at ffffffffb4462b21

DESCRIPTION:
The vxio kernel module sends a signal to vxloggerd to flush the log as it is almost full. Vxloggerd calls into the vxio kernel module to copy the log buffer out. Because vxio copies the log data from kernel to user space while holding a spinlock, a page fault during the copy-out causes a hard lockup and panic.

RESOLUTION:
Code changes have been made to fix the problem.

* 4017194 (Tracking ID: 4012681)

SYMPTOM:
If the vradmind process terminates for some reason, it is not properly restarted by the RVG agent of VCS.

DESCRIPTION:
The RVG (Replicated Volume Group) agent of VCS (Veritas Cluster Server) restarts the vradmind process if it gets killed or terminated for some reason. This was not working properly on systemd-enabled platforms such as RHEL 7. On systemd-enabled platforms, after the vradmind process died, the vras-vradmind service remained in the active/running state. As a result, even after the RVG agent issued a command to start the vras-vradmind service, the vradmind process was not started.

RESOLUTION:
The code is modified to fix the parameters of the vras-vradmind service, so that the service status changes to failed/faulted if the vradmind process gets killed. The service can then be started manually, or the RVG agent of VCS can start the service, which starts the vradmind process as well.
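The exact unit parameters are not listed in this entry; as a purely hypothetical illustration, a systemd drop-in of the following shape is the kind of change that makes a unit stop reporting active after its main process dies.

```
# /etc/systemd/system/vras-vradmind.service.d/override.conf (hypothetical)
[Service]
# Do not keep the unit in the active state after the main process exits.
RemainAfterExit=no
```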

Patch ID: VRTSvxvm-7.4.1.2100

* 3984139 (Tracking ID: 3965962)

SYMPTOM:
No option to disable auto-recovery when a slave node joins the CVM cluster.

DESCRIPTION:
In a CVM environment, when the slave node joins the CVM cluster, it is possible that the plexes may not be in sync. In such a scenario auto-recovery is triggered for the plexes.  If a node is stopped using the hastop -all command when the auto-recovery is in progress, the vxrecover operation may hang. An option to disable auto-recovery is not available.

RESOLUTION:
The VxVM module is updated to allow administrators to disable auto-recovery when a slave node joins a CVM cluster.
A new tunable, auto_recover, is introduced. By default, the tunable is set to 'on' to trigger the auto-recovery. Set its value to 'off' to disable auto-recovery. Use the vxtune command to set the tunable.
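Assuming the vxtune <tunable> <value> usage applies to this tunable, disabling and restoring auto-recovery would look like:

```
# Disable auto-recovery when a slave node joins the CVM cluster:
vxtune auto_recover off
# Restore the default behavior:
vxtune auto_recover on
```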

* 3984731 (Tracking ID: 3984730)

SYMPTOM:
VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.

DESCRIPTION:
VxVM logs these warnings because the QUEUE_FLAG_REGISTERED and QUEUE_FLAG_INIT_DONE queue flags are not cleared while registering the dmpnode.
The following stack is reported after stopping or removing VxDMP for the first time after every reboot:
kernel: WARNING: CPU: 28 PID: 33910 at block/blk-core.c:619 blk_cleanup_queue+0x1a3/0x1b0
kernel: CPU: 28 PID: 33910 Comm: modprobe Kdump: loaded Tainted: P OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
kernel: Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 10/02/2018
kernel: Call Trace:
kernel: [<ffffffff9dd63107>] dump_stack+0x19/0x1b
kernel: [<ffffffff9d697768>] __warn+0xd8/0x100
kernel: [<ffffffff9d6978ad>] warn_slowpath_null+0x1d/0x20
kernel: [<ffffffff9d944b03>] blk_cleanup_queue+0x1a3/0x1b0
kernel: [<ffffffffc0cd1f3f>] dmp_unregister_disk+0x9f/0xd0 [vxdmp]
kernel: [<ffffffffc0cd7a08>] dmp_remove_mp_node+0x188/0x1e0 [vxdmp]
kernel: [<ffffffffc0cd7b45>] dmp_destroy_global_db+0xe5/0x2c0 [vxdmp]
kernel: [<ffffffffc0cde6cd>] dmp_unload+0x1d/0x30 [vxdmp]
kernel: [<ffffffffc0d0743a>] cleanup_module+0x5a/0xd0 [vxdmp]
kernel: [<ffffffff9d71692e>] SyS_delete_module+0x19e/0x310
kernel: [<ffffffff9dd75ddb>] system_call_fastpath+0x22/0x27
kernel: --[ end trace fd834bc7817252be ]--

RESOLUTION:
The queue flags are modified to handle this situation and not to log such warning messages.

* 3992902 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop. This causes the soft lockup.

RESOLUTION:
The thread now relinquishes the CPU so that other processes can be scheduled; the vol_ioship_sender() thread restarts after some delay.

* 3998693 (Tracking ID: 3998692)

SYMPTOM:
Earlier, the module failed to load on RHEL 7.8.

DESCRIPTION:
RHEL 7.8 is a new release, and hence the VxVM module is compiled with the RHEL 7.8 kernel.

RESOLUTION:
Compiled VxVM with the RHEL 7.8 kernel bits.

Patch ID: VRTSvxvm-7.4.1.1300

* 3980679 (Tracking ID: 3980678)

SYMPTOM:
Earlier, the module failed to load on RHEL 7.7.

DESCRIPTION:
RHEL 7.7 is a new release, and hence the VxVM module is compiled with the RHEL 7.7 kernel.

RESOLUTION:
Compiled VxVM with the RHEL 7.7 kernel bits.

Patch ID: VRTSvxvm-7.4.1.1200

* 3973076 (Tracking ID: 3968642)

SYMPTOM:
Intermittent vradmind hang on the new VVR Primary

DESCRIPTION:
Vradmind was trying to hold a pthread write lock with the corresponding read lock already held, during race conditions seen while migrating the role to the new primary, which led to intermittent vradmind hangs.

RESOLUTION:
Changes were made to minimize the window for which the read lock is held and to ensure that the read lock is released early, so that further attempts to grab the write lock succeed.

* 3975897 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users, which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions to all users, which is a
security hole. 
The files are created with default rw-rw-rw- (666) permission because the umask
is set to 0 while creating these files.

RESOLUTION:
Changed umask to 022 while creating these files and fixed an incorrect open
system call. Log files will now have rw-r--r--(644) permissions.
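The permission change described above can be demonstrated directly in the shell: a file created under umask 0 gets mode 666, while one created under umask 022 gets 644.

```shell
# Create one file under each umask and compare the resulting modes.
tmpdir=$(mktemp -d)
( umask 0;   : > "$tmpdir/wide" )   # rw-rw-rw- (666)
( umask 022; : > "$tmpdir/safe" )   # rw-r--r-- (644)
wide=$(stat -c %a "$tmpdir/wide" 2>/dev/null || stat -f %Lp "$tmpdir/wide")
safe=$(stat -c %a "$tmpdir/safe" 2>/dev/null || stat -f %Lp "$tmpdir/safe")
echo "umask 0   -> $wide"   # 666
echo "umask 022 -> $safe"   # 644
rm -rf "$tmpdir"
```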

* 3978184 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk, as
Linux supports creating a VG on a whole disk as well as on a partition of
a disk. This possibility was not handled in the code, hence the output of
'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3978195 (Tracking ID: 3925345)

SYMPTOM:
/tmp/vx.* directories are frequently created.

DESCRIPTION:
/tmp/vx.* directories are frequently created due to a bug in the vxvolgrp command.

RESOLUTION:
A source change has been made in the vxvolgrp command to fix this bug.

* 3978208 (Tracking ID: 3969860)

SYMPTOM:
The event source daemon (vxesd) takes a long time to start when many LUNs (around 1700) are attached to the system.

DESCRIPTION:
The event source daemon creates a configuration file, ddlconfig.info, with the help of the HBA API libraries. The file is created by a child process while the parent process waits for it to finish. If the number of LUNs is large, creating the configuration file takes correspondingly longer, so the parent process keeps waiting for the child process to complete the configuration and exit.

RESOLUTION:
Changes have been done to create the ddlconfig.info file in the background and let the parent exit immediately.
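The handoff can be sketched generically (a hypothetical illustration; the real daemon builds ddlconfig.info via the HBA APIs): spawn a background process to generate the file and return to the caller immediately instead of blocking on it.

```python
import os, subprocess, sys, tempfile

def start_config_build(path):
    """Kick off config-file generation in a background process and
    return immediately, rather than blocking until the file exists."""
    script = (
        "import sys, time\n"
        "time.sleep(0.3)  # stand-in for slow device discovery\n"
        "open(sys.argv[1], 'w').write('config')\n"
    )
    return subprocess.Popen([sys.executable, "-c", script, path])

cfg = os.path.join(tempfile.mkdtemp(), "ddlconfig.info")
proc = start_config_build(cfg)
# The caller is free to continue immediately; the file appears later.
proc.wait()
assert open(cfg).read() == "config"
```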

* 3978678 (Tracking ID: 3907596)

SYMPTOM:
The vxdmpadm setattr command fails with the following error while setting a path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux change once the system is rebooted, so the persistent attributes of a device are stored using its persistent
hardware path. The hardware paths are stored as symbolic links in the directory /dev/vx/.dmp and are obtained from
/dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changed to path_id_compat. Because
the command changed, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the persistent
attributes were not set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.

* 3979375 (Tracking ID: 3973364)

SYMPTOM:
In VVR (Veritas Volume Replicator) synchronous replication mode with the TCP protocol, if there are any network issues,
I/Os may hang for up to 15-20 minutes.

DESCRIPTION:
In VVR synchronous replication mode, if a node on the primary site does not receive the ACK (acknowledgement) message sent from the secondary
within the TCP timeout period, I/O may hang until the TCP layer detects a timeout, which takes roughly 15-20 minutes.
This issue may happen frequently in a lossy network where ACKs cannot be delivered to the primary.

RESOLUTION:
A hidden tunable 'vol_vvr_tcp_keepalive' is added to allow users to enable TCP 'keepalive' for VVR data ports if the TCP timeout happens frequently.
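The mechanism the tunable enables can be sketched generically (an illustration of TCP keepalive on a socket, not VVR's actual implementation; the idle/interval/probe values here are assumptions, not VVR defaults):

```python
import socket

def enable_keepalive(sock, idle=10, interval=5, probes=3):
    """Enable TCP keepalive so a dead peer is detected by probe
    timeouts instead of waiting out the full TCP retransmit timeout."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs; guarded because they are not portable.
    for opt, val in (("TCP_KEEPIDLE", idle),
                     ("TCP_KEEPINTVL", interval),
                     ("TCP_KEEPCNT", probes)):
        if hasattr(socket, opt):
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, opt), val)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
s.close()
```

With keepalive on, an unreachable peer is declared dead after roughly idle + interval * probes seconds instead of the default TCP timeout.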

* 3979397 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop" by design cannot stop iostat gathering
persistently. To avoid performance and memory crunch issues, it is
generally recommended to stop iostat gathering, so there is a requirement
for the ability to stop/start iostat gathering persistently
in those cases.

DESCRIPTION:
Today the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
is not a persistent setting. It is lost after a reboot, so the customer
also has to put the command in an init script at an appropriate place for a
persistent effect.

RESOLUTION:
The code is modified to provide a tunable, "dmp_compute_iostats", which can
start/stop iostat gathering persistently.

Notes:
Use the following command to start/stop iostat gathering persistently.
# vxdmpadm settune dmp_compute_iostats=on/off.

* 3979398 (Tracking ID: 3955979)

SYMPTOM:
In synchronous replication mode with TCP, if there are any network-related issues,
I/Os may hang for up to 15-30 minutes.

DESCRIPTION:
When synchronous replication is used and, because of network issues, the secondary is unable
to send network ACKs to the primary, I/O hangs on the primary waiting for those ACKs. In
TCP mode, VVR depends on TCP to time out before the I/Os are drained out; since there is no
handling on the VVR side, I/Os stay hung until TCP triggers its timeout, which normally happens within 15-30 minutes.

RESOLUTION:
Code changes were made to allow the user to set the TCP time within which the timeout should
be triggered.
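On Linux, the generic socket-level mechanism for bounding this wait is the TCP_USER_TIMEOUT option; the sketch below illustrates that mechanism only and is not VVR's actual code (the 60-second value is an arbitrary example).

```python
import socket

def set_tcp_timeout(sock, millis):
    """Bound how long transmitted data may remain unacknowledged
    before the kernel aborts the connection. Returns False when the
    platform lacks TCP_USER_TIMEOUT (it is Linux-specific)."""
    opt = getattr(socket, "TCP_USER_TIMEOUT", None)
    if opt is None:
        return False
    sock.setsockopt(socket.IPPROTO_TCP, opt, millis)
    return sock.getsockopt(socket.IPPROTO_TCP, opt) == millis

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ok = set_tcp_timeout(s, 60_000)  # abort after 60 s without an ACK
s.close()
```

With such a bound in place, a lossy link fails the connection after the configured interval instead of the default 15-30 minute retransmit timeout.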

* 3979440 (Tracking ID: 3947265)

SYMPTOM:
vxfen tends to fail, creating split-brain issues.

DESCRIPTION:
Currently, to check whether InfiniBand devices are present,
we check for certain modules, which on RHEL 7.4 are loaded by default.

RESOLUTION:
To check for InfiniBand devices, we now check for the /sys/class/infiniband
directory, in which device information is populated only when InfiniBand
devices are present.
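The check can be sketched as follows (illustrative only; the entry name "mlx5_0" in the demo is a hypothetical HCA, and a scratch directory stands in for sysfs):

```python
import os, tempfile

def infiniband_present(sysfs_root="/sys/class/infiniband"):
    """InfiniBand HCAs register under /sys/class/infiniband; the
    directory has entries only when devices actually exist, unlike
    the module list, which can be populated by default on RHEL 7.4."""
    return os.path.isdir(sysfs_root) and bool(os.listdir(sysfs_root))

# Demo against a scratch directory standing in for sysfs:
d = tempfile.mkdtemp()
assert infiniband_present(d) is False            # directory empty
os.mkdir(os.path.join(d, "mlx5_0"))              # hypothetical HCA entry
assert infiniband_present(d) is True
```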

* 3979462 (Tracking ID: 3964964)

SYMPTOM:
Vxnetd gets into a soft lockup when a port-scan tool keeps sending packets over the network. The call trace looks like the following:

kmsg_sys_poll+0x6c/0x180 [vxio]
? poll_initwait+0x50/0x50
? poll_select_copy_remaining+0x150/0x150
? poll_select_copy_remaining+0x150/0x150
? schedule+0x29/0x70
? schedule_timeout+0x239/0x2c0
? task_rq_unlock+0x1a/0x20
? _raw_spin_unlock_bh+0x1e/0x20
? first_packet_length+0x151/0x1d0
? udp_ioctl+0x51/0x80
? inet_ioctl+0x8a/0xa0
? kmsg_sys_rcvudata+0x7e/0x170 [vxio]
nmcom_server_start+0x7be/0x4810 [vxio]

DESCRIPTION:
When a non-NMCOM packet is received, vxnetd skips it and goes back to poll for more packets without giving up the CPU, so if such packets keep arriving, vxnetd may get into a soft-lockup state.

RESOLUTION:
A small delay has been added to vxnetd to fix the issue.

* 3979471 (Tracking ID: 3915523)

SYMPTOM:
A local disk from another node, belonging to a different private DG, is exported to the
current node when a private DG is imported on it.

DESCRIPTION:
When we try to import a DG, all the disks belonging to the DG are automatically
exported to the current node to make sure that the DG gets imported; this gives
local disks the same behaviour as SAN disks. Because all disks in the DG are
exported, disks that belong to a different private DG with the same name on
another node get exported to the current node as well. This leads to the wrong
disk being selected while the DG is imported.

RESOLUTION:
Instead of the DG name, the DGID (diskgroup ID) is now used to decide whether a disk needs to
be exported.
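The ambiguity and the fix can be sketched generically (hypothetical disk records and dgid strings, not real VxVM data structures): selecting by name matches disks from every same-named private DG, while selecting by the unique dgid matches only the intended one.

```python
# Two hosts each have a private DG named "datadg", but each DG
# carries a unique dgid (values below are made up).
disks = [
    {"disk": "sda", "dg_name": "datadg", "dgid": "1612345678.10.hostA"},
    {"disk": "sdb", "dg_name": "datadg", "dgid": "1698765432.12.hostB"},
]

def export_by_name(name):
    # Buggy selection: the name matches disks from BOTH private DGs.
    return [d["disk"] for d in disks if d["dg_name"] == name]

def export_by_dgid(dgid):
    # Fixed selection: dgid is unique, so only the intended DG matches.
    return [d["disk"] for d in disks if d["dgid"] == dgid]

assert export_by_name("datadg") == ["sda", "sdb"]          # over-exports
assert export_by_dgid("1612345678.10.hostA") == ["sda"]    # exact
```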

* 3979475 (Tracking ID: 3959986)

SYMPTOM:
Some I/Os may not be written to disk when the vxencryptd daemon is restarted.

DESCRIPTION:
When the vxencryptd daemon is restarted, some of the I/Os still waiting in the
pending queue are lost and never written to the underlying disk.

RESOLUTION:
Code changes were made to restart the I/Os in the pending queue once vxencryptd is started.

* 3979476 (Tracking ID: 3972679)

SYMPTOM:
vxconfigd kept crashing and couldn't start up, with the stack below:
(gdb) bt
#0 0x000000000055e5c2 in request_loop ()
#1 0x0000000000479f06 in main ()
Its disassembled code looks like the following:
0x000000000055e5ae <+1160>: callq 0x55be64 <vold_request_poll_unlock>
0x000000000055e5b3 <+1165>: mov 0x582aa6(%rip),%rax # 0xae1060
0x000000000055e5ba <+1172>: mov (%rax),%rax
0x000000000055e5bd <+1175>: test %rax,%rax
0x000000000055e5c0 <+1178>: je 0x55e60e <request_loop+1256>
=> 0x000000000055e5c2 <+1180>: mov 0x65944(%rax),%edx

DESCRIPTION:
vxconfigd treats the message buffer as valid as long as it is non-NULL. When vxconfigd failed to get shared memory for the message buffer, the OS returned -1.
In this case vxconfigd accessed an invalid address and caused a segmentation fault.

RESOLUTION:
The code changes are done to check the message buffer properly before accessing it.

* 3979656 (Tracking ID: 3975405)

SYMPTOM:
cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.

DESCRIPTION:
When a slave node initiates a write request on an RVG, the I/O is shipped to the master node (VVR write-ship). If the I/O fails, the VKE_EIO error is passed back from the master node as the response to the write-ship request. Because this error is not handled, VxVM continues to retry the I/O operation.

RESOLUTION:
VxVM is updated to handle VKE_EIO error properly.

* 3980457 (Tracking ID: 3980609)

SYMPTOM:
The logowner node at the DR secondary site gets rebooted.

DESCRIPTION:
Freed memory was being accessed in the server thread code path on the secondary site.

RESOLUTION:
Code changes have been made to fix the access to freed memory.

* 3981028 (Tracking ID: 3978330)

SYMPTOM:
The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.

DESCRIPTION:
Some changes were made in the Linux kernel from version 4.4 onwards, due to which the values of these tunables could not persist after a reboot.

RESOLUTION:
VxVM has been updated to make the tunable values persistent upon reboot.

Patch ID: VRTSglm-7.4.1.2800

* 4014715 (Tracking ID: 4011596)

SYMPTOM:
An error is thrown saying "No such file or directory present".

DESCRIPTION:
A bug was observed during parallel communication between all the nodes: some required temporary files were not present on the other nodes.

RESOLUTION:
Fixed to maintain consistency during parallel node communication; hacp is now used for transferring the temporary files.

Patch ID: VRTSglm-7.4.1.1700

* 3999030 (Tracking ID: 3999029)

SYMPTOM:
The GLM module failed to unload because of a VCS service hold.

DESCRIPTION:
The GLM module failed to unload during systemd shutdown because the glm service was racing with the vcs service. VCS takes a hold on GLM, which resulted in the module failing to unload.

RESOLUTION:
The code is modified to add a vcs service dependency in glm.service so that the ordering is honored during systemd shutdown.
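The shape of such a dependency can be illustrated with a unit-file fragment (illustrative only; the unit names glm.service and vcs.service and the directive chosen here are assumptions, not the shipped files). systemd stops units in the reverse of their start ordering, so declaring that GLM starts before VCS guarantees GLM is stopped only after VCS has released its hold:

```ini
# Hypothetical fragment of glm.service (not the shipped unit file).
[Unit]
Description=GLM kernel module service (sketch)
# Start GLM before VCS; at shutdown, systemd reverses this ordering,
# so GLM is stopped only after vcs.service has stopped.
Before=vcs.service
```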



INSTALLING THE PATCH
--------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 4020203

SYMPTOM: While starting/stopping the glm service using the /etc/init.d/vxglm script, the following error can be observed:

/etc/init.d/vxglm: No such file or directory

WORKAROUND: Use 'systemctl [start/stop] vxglm' to start or stop the service.



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE

