infoscale-rhel8.3_x86_64-Patch-7.4.2.1300

 Basic information
Release type: Patch
Release date: 2021-01-04
OS update support: None
Technote: None
Documentation: None
Download size: 275.79 MB
Checksum: 1385943334

 Applies to one or more of the following products:
InfoScale Availability 7.4.2 On RHEL8 x86-64
InfoScale Enterprise 7.4.2 On RHEL8 x86-64
InfoScale Foundation 7.4.2 On RHEL8 x86-64
InfoScale Storage 7.4.2 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
4002850, 4005220, 4006982, 4007375, 4007376, 4007677, 4008072, 4008606, 4010353, 4010892, 4011866, 4011971, 4012061, 4012062, 4012063, 4012485, 4012522, 4012744, 4012745, 4012746, 4012751, 4012765, 4012787, 4012800, 4012801, 4012842, 4012848, 4012936, 4013034, 4013036, 4013084, 4013143, 4013144, 4013155, 4013169, 4013626, 4013718, 4013738, 4014719, 4014720, 4015287, 4015835, 4016721, 4017282, 4017818, 4017820, 4017934, 4018182, 4018770, 4018771, 4018772, 4019877, 4020055, 4020056, 4020207, 4020912, 4021346, 4021428, 4021748, 4022049, 4022052, 4022053, 4022054, 4022056, 4022069

 Patch ID:
VRTSsfcpi-7.4.1.2800-GENERIC
VRTSllt-7.4.2.1200-RHEL8
VRTSgab-7.4.2.1200-RHEL8
VRTSamf-7.4.2.1200-RHEL8
VRTSdbac-7.4.2.1200-RHEL8
VRTSvxfen-7.4.2.1200-RHEL8
VRTSaslapm-7.4.2.1400-RHEL8
VRTSvxvm-7.4.2.1400-RHEL8
VRTSvxfs-7.4.2.1500-RHEL8
VRTSglm-7.4.2.1400-RHEL8
VRTSodm-7.4.2.1400-RHEL8
VRTSgms-7.4.2.1100-RHEL8

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.2 * * *
                         * * * Patch 1300 * * *
                         Patch Date: 2020-12-22


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.2 Patch 1300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSsfcpi
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.2
   * InfoScale Enterprise 7.4.2
   * InfoScale Foundation 7.4.2
   * InfoScale Storage 7.4.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSllt-7.4.2.1200
* 4022049 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSllt-7.4.2.1100
* 4012744 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
* 4013036 (3985775) Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.
Patch ID: VRTSgms-7.4.2.1100
* 4022069 (4022067) Unable to load the vxgms module.
Patch ID: VRTSglm-7.4.2.1400
* 4014719 (4011596) Multiple issues were observed during glmdump using hacli for communication
* 4018772 (4018213) GLM module failed to load on RHEL8.3
Patch ID: VRTSglm-7.4.2.1300
* 4008072 (4008071) Rebooting the system results into emergency mode due to corruption of Module Dependency files.
* 4012063 (4001382) GLM module failed to load on RHEL8.2
Patch ID: VRTSodm-7.4.2.1400
* 4018771 (4018200) ODM module failed to load on RHEL8.3
Patch ID: VRTSodm-7.4.2.1300
* 4012062 (4001380) ODM module failed to load on RHEL8.2
Patch ID: VRTSvxfs-7.4.2.1500
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4014720 (4011596) man page changes for glmdump
* 4015287 (4010255) "vfradmin promote" fails to promote target FS with selinux enabled.
* 4015835 (4015278) System panics during vx_uiomove_by_hand.
* 4016721 (4016927) In a multi-cloud-tier scenario, the system panics with a NULL pointer dereference when trying to remove the second cloud tier.
* 4017282 (4016801) Filesystem marked for full fsck.
* 4017818 (4017817) VFR performance enhancement changes.
* 4017820 (4017819) Adding cloud tier operation fails while trying to add AWS GovCloud.
* 4017934 (4015059) VFR command can hang when job is paused
* 4018770 (4018197) VxFS module failed to load on RHEL8.3
* 4019877 (4019876) Remove license library dependency from vxfsmisc.so library
* 4020055 (4012049) Man page changes to expose "metasave" and "target" options.
* 4020056 (4012049) Documented "metasave" option and added one new option in fsck binary.
* 4020912 (4020758) Filesystem mount or fsck with -y may see hang during log replay
Patch ID: VRTSvxfs-7.4.2.1300
* 4002850 (3994123) Running fsck on a system may show LCT count mismatch errors
* 4005220 (4002222) Code changes have been done to prevent a cluster-wide hang in a scenario where the cluster filesystem is FCL enabled and the disk layout version is greater than or equal to 14.
* 4010353 (3993935) Fsck command of vxfs may hit segmentation fault.
* 4012061 (4001378) VxFS module failed to load on RHEL8.2
* 4012522 (4012243) Read/Write performance improvement in VxFS
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4012787 (4007328) VFR source keeps processing file change log(FCL) records even after connection closure from target.
* 4012800 (4008123) VFR fails to replicate named extended attributes if the job is paused.
* 4012801 (4001473) VFR fails to replicate named extended attributes set on files
* 4012842 (4006192) system panic with NULL pointer de-reference.
* 4012936 (4000465) FSCK binary loops when it detects break in sequence of log ids.
* 4013084 (4009328) In cluster filesystem, unmount hang could be observed if smap is marked bad previously.
* 4013143 (4008352) Using VxFS mount binary inside container to mount any device might result in core generation.
* 4013144 (4008274) Race between compression thread and clone remove thread while allocating reorg inode.
* 4013626 (4004181) Read the value of VxFS compliance clock
* 4013738 (3830300) Degraded CPU performance during backup of Oracle archive logs on CFS compared to a local filesystem.
Patch ID: VRTSvxvm-7.4.2.1400
* 4018182 (4008664) System panic when signal vxlogger daemon that has ended.
* 4020207 (4018086) system hang was observed when RVG was in DCM resync with SmartMove as ON.
* 4021346 (4010207) System panicked due to hard-lockup due to a spinlock not released properly during the vxstat collection.
* 4021428 (4020166) Vxvm Support on RHEL8 Update3
* 4021748 (4020260) Failed to activate/set tunable dmp native support for Centos 8
Patch ID: VRTSvxvm-7.4.2.1300
* 4008606 (4004455) Instant restore failed for a snapshot created on older version DG.
* 4010892 (4009107) CA chain certificate verification fails in SSL context.
* 4011866 (3976678) vxvm-recover:  cat: write error: Broken pipe error encountered in syslog.
* 4011971 (3991668) Veritas Volume Replicator (VVR) configured with sec logging reports data inconsistency when hit "No IBC message arrived" error.
* 4012485 (4000387) VxVM support on RHEL 8.2
* 4012848 (4011394) Performance enhancement for cloud tiering.
* 4013155 (4010458) In VVR (Veritas Volume replicator), the rlink might inconsistently disconnect due to unexpected transactions.
* 4013169 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4013718 (4008942) Docker infoscale plugin is failing to unmount the filesystem, if the cache object is full
Patch ID: VRTSvxfen-7.4.2.1200
* 4022054 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSvxfen-7.4.2.1100
* 4006982 (3988184) The vxfen process cannot complete due to incomplete vxfentab file.
* 4007375 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
* 4007376 (3996218) In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.
* 4007677 (3970753) Freeing uninitialized/garbage memory causes panic in vxfen.
Patch ID: VRTSdbac-7.4.2.1200
* 4022056 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSdbac-7.4.2.1100
* 4012751 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSamf-7.4.2.1200
* 4022053 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSamf-7.4.2.1100
* 4012746 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
Patch ID: VRTSgab-7.4.2.1200
* 4022052 (4019674) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).
Patch ID: VRTSgab-7.4.2.1100
* 4012745 (4012742) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).
* 4013034 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSllt-7.4.2.1200

* 4022049 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSllt-7.4.2.1100

* 4012744 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

* 4013036 (Tracking ID: 3985775)

SYMPTOM:
Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.

DESCRIPTION:
LLT heartbeat loss messages can appear in the system log either due to actual heartbeat drops in the network or due to heartbeat packets arriving out of order. In either case, these messages are only informative and do not indicate any issue in the LLT functionality. Sometimes, the system log may get flooded with these messages, which are not useful.

RESOLUTION:
The LLT module is updated to lower the frequency of printing LLT heartbeat loss messages. This is achieved by increasing the number of missed sequential HB packets required to print this informative message.

Patch ID: VRTSgms-7.4.2.1100

* 4022069 (Tracking ID: 4022067)

SYMPTOM:
Unable to load the vxgms module.

DESCRIPTION:
The VRTSgms module needed to be recompiled due to recent changes.

RESOLUTION:
Recompiled the VRTSgms module.

Patch ID: VRTSglm-7.4.2.1400

* 4014719 (Tracking ID: 4011596)

SYMPTOM:
glmdump fails with a "No such file or directory" error.

DESCRIPTION:
The issue was observed during parallel communication between all the nodes; some required temporary files were not present on the other nodes.

RESOLUTION:
The fix maintains consistency during parallel node communication, using hacp to transfer the temporary files.

* 4018772 (Tracking ID: 4018213)

SYMPTOM:
GLM module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release, and it includes kernel changes that prevented the GLM module from loading on it.

RESOLUTION:
Added code to support GLM on RHEL8.3.

Patch ID: VRTSglm-7.4.2.1300

* 4008072 (Tracking ID: 4008071)

SYMPTOM:
Rebooting the system results in the system entering emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a lock.
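
The serialization approach can be sketched with the standard flock(1) utility; the lock-file path below is an illustrative assumption, not the one the patch actually uses:

```shell
#!/bin/sh
# Sketch: take an exclusive lock before regenerating module dependency
# files, so parallel invocations cannot interleave writes and corrupt
# modules.dep. The lock path is for illustration only.
LOCKFILE=/tmp/depmod-demo.lock
(
    flock -x 9                # block until the exclusive lock is held
    # depmod -a               # the real invocation would run here
    echo "depmod serialized"
) 9>"$LOCKFILE"
```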

* 4012063 (Tracking ID: 4001382)

SYMPTOM:
GLM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it includes kernel changes that prevented the GLM module from loading on it.

RESOLUTION:
Added code to support GLM on RHEL8.2

Patch ID: VRTSodm-7.4.2.1400

* 4018771 (Tracking ID: 4018200)

SYMPTOM:
ODM module failed to load on RHEL8.3

DESCRIPTION:
RHEL8.3 is a new release, and it includes kernel changes that prevented the ODM module from loading on it.

RESOLUTION:
Added code to support ODM on RHEL8.3.

Patch ID: VRTSodm-7.4.2.1300

* 4012062 (Tracking ID: 4001380)

SYMPTOM:
ODM module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it includes kernel changes that prevented the ODM module from loading on it.

RESOLUTION:
Added code to support ODM on RHEL8.2

Patch ID: VRTSvxfs-7.4.2.1500

* 4012765 (Tracking ID: 4011570)

SYMPTOM:
WORM attribute replication support in VxFS.

DESCRIPTION:
WORM attribute replication was not supported in VFR. The code is modified to replicate the WORM attribute during attribute processing in VFR.

RESOLUTION:
Code is modified to replicate WORM attributes in VFR.

* 4014720 (Tracking ID: 4011596)

SYMPTOM:
The man page is missing details about a supported feature.

DESCRIPTION:
The glmdump man page needed to include the new "-h" option, which uses the hacli utility for communicating across the nodes in the cluster.

RESOLUTION:
Added the details about the feature supported by glmdump to the man page.

* 4015287 (Tracking ID: 4010255)

SYMPTOM:
"vfradmin promote" fails to promote target FS with selinux enabled.

DESCRIPTION:
During the promote operation, VxFS remounts the filesystem at the target. When remounting to remove the "protected on" flag from the target, VxFS first fetches the current mount options. With SELinux enabled (either permissive or enforcing mode), the OS adds a default "seclabel" option to the mount. When VxFS fetched the current mount options, "seclabel" was not recognized by VxFS, so the filesystem failed to mount.

RESOLUTION:
Code is modified to remove "seclabel" mount option during mount processing on target.
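
The target-side handling can be illustrated with a short shell sketch that strips the OS-added option from a fetched mount-option string (the sample option string is made up for illustration):

```shell
#!/bin/sh
# Sketch: drop the SELinux-added "seclabel" option from a mount-option
# string before re-mounting, mirroring what the fix does on the target.
opts="rw,relatime,seclabel,largefiles"       # illustrative sample options
cleaned=$(printf '%s\n' "$opts" | tr ',' '\n' | grep -vx seclabel | paste -sd, -)
echo "$cleaned"
```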

* 4015835 (Tracking ID: 4015278)

SYMPTOM:
System panics during vx_uiomove_by_hand.

DESCRIPTION:
During uiomove, VxFS gets pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate huge pages. Under low-memory conditions, get_user_pages() may return compound pages to VxFS. In a compound page, only the head page has a valid mapping set; all other pages are marked TAIL_MAPPING. If VxFS receives a compound page during uiomove, it tries to check the writable mapping for every page in that compound page, which can dereference an illegal address (TAIL_MAPPING) and cause the panic. VxFS does not support huge pages, but a compound page can be present on the system and VxFS might receive one through get_user_pages().

RESOLUTION:
Code is modified to use the head page when tail pages of a compound page are encountered while VxFS checks for a writable mapping.

* 4016721 (Tracking ID: 4016927)

SYMPTOM:
Remove tier command panics the system, crash has panic reason "BUG: unable to handle kernel NULL pointer dereference at 0000000000000150"

DESCRIPTION:
When fsvoladm removes a device, not all devices are shifted in the device array, and the device count remains the same unless the removed device is the last one in the array. Therefore, a free slot must be checked for before trying to access a device.

RESOLUTION:
In the device list check for free slot before accessing the device in that slot.

* 4017282 (Tracking ID: 4016801)

SYMPTOM:
The filesystem is marked for full fsck.

DESCRIPTION:
In a cluster environment, some operations can be performed only on the primary node. When such an operation is executed from a secondary node, a message is passed to the primary node. During this, it is possible that the sender node has a transaction that has not yet reached disk; if the sender node is then rebooted, the primary node can see stale data.

RESOLUTION:
Code is modified to make sure transactions are flushed to the log disk before sending the message to the primary.

* 4017818 (Tracking ID: 4017817)

SYMPTOM:
VFR performance enhancement changes.

DESCRIPTION:
To increase the overall throughput of VFR, code changes have been done to replicate files in parallel.

RESOLUTION:
Code changes have been done to replicate a file's data and metadata in parallel over multiple socket connections.

* 4017820 (Tracking ID: 4017819)

SYMPTOM:
Cloud tier add operation fails when user is trying to add the AWS GovCloud.

DESCRIPTION:
Adding AWS GovCloud as a cloud tier was not supported in InfoScale. With these changes, users can add an AWS GovCloud cloud tier.

RESOLUTION:
Added support for AWS GovCloud.

* 4017934 (Tracking ID: 4015059)

SYMPTOM:
The VFR command can hang when a job is paused.

DESCRIPTION:
When a VFR job is paused, the thread doing the actual replication is canceled. The cancelable state of threads needs to be disabled while executing certain code paths; in some cases this handling was missing, which could leave the VFR command hanging.

RESOLUTION:
Code is added to handle all such scenarios.

* 4018770 (Tracking ID: 4018197)

SYMPTOM:
VxFS module failed to load on RHEL8.3

DESCRIPTION:
The RHEL8.3 is new release and it has some changes in kernel which caused VxFS module failed to load
on it.

RESOLUTION:
Added code to support VxFS on RHEL8.3.

* 4019877 (Tracking ID: 4019876)

SYMPTOM:
vxfsmisc.so is a publicly shared library for Samba and should not require an InfoScale license for its usage.

DESCRIPTION:
vxfsmisc.so is a publicly shared library for Samba, yet it carried a dependency on the InfoScale license library.

RESOLUTION:
Removed the license library dependency from the vxfsmisc library.

* 4020055 (Tracking ID: 4012049)

SYMPTOM:
The fsck_vxfs(1m) man page does not contain information about the "metasave" and "target" options.

DESCRIPTION:
The fsck_vxfs(1m) man page does not document the "metasave" and "target" options.

RESOLUTION:
Man page changes have been done to document the two options: 1. metasave 2. target

* 4020056 (Tracking ID: 4012049)

SYMPTOM:
"fsck" supports the "metasave" option but it was not documented anywhere.

DESCRIPTION:
"fsck" supports the "metasave" option when executed with the "-y" option, but it was not documented anywhere. Also, it stores the metasave in a fixed location, and the user had no option to specify a different one. If that location did not have enough space, "fsck" failed to take the metasave yet continued to change the filesystem state.

RESOLUTION:
Code changes have been done to add a new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the "usage" message of the "fsck" binary.
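
A hypothetical invocation of the new options might look like the following; the device path, target location, and exact option syntax are assumptions for illustration, so consult fsck_vxfs(1m) on a patched system:

```shell
# Run a VxFS fsck with -y and direct the metasave image to a location
# chosen by the user (illustrative paths, not from this document):
# fsck -t vxfs -o metasave,target=/var/tmp/vol1.metasave -y /dev/vx/rdsk/mydg/vol1
```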

* 4020912 (Tracking ID: 4020758)

SYMPTOM:
Filesystem mount or fsck with -y may see hang during log replay

DESCRIPTION:
The fsck utility performs the log replay. The replay runs during the mount operation, or during a filesystem check with the -y option, if needed. In certain cases, if there are a lot of logs to replay, the replay can end up consuming the entire buffer cache, resulting in an out-of-buffer scenario and a hang.

RESOLUTION:
Code is modified to make sure enough buffers are always available.

Patch ID: VRTSvxfs-7.4.2.1300

* 4002850 (Tracking ID: 3994123)

SYMPTOM:
Running fsck on a system may show LCT count mismatch errors

DESCRIPTION:
While processing multi-block merged extents in IFIAT inodes, only the first block of an extent may be processed, leaving some references unprocessed. This leads to LCT count mismatches. Resolving the issue requires a full fsck.

RESOLUTION:
Code changes added to process merged multi-block extents in IFIAT inodes correctly.

* 4005220 (Tracking ID: 4002222)

SYMPTOM:
The cluster can hang if the cluster filesystem is FCL enabled and its disk layout version is greater than or equal to 14.

DESCRIPTION:
VxFS worker threads that handle "File Change Log" (FCL) operations can get stuck in a deadlock if the disk layout version of the FCL-enabled cluster filesystem is greater than or equal to 14.

RESOLUTION:
Code changes have been done to prevent a cluster-wide hang in a scenario where the cluster filesystem is FCL enabled and the disk layout version is greater than or equal to 14.

* 4010353 (Tracking ID: 3993935)

SYMPTOM:
The fsck command of VxFS may hit a segmentation fault with the following stack:
#0  get_dotdotlst ()
#1  find_dotino ()
#2  dir_sanity ()
#3  pass2 ()
#4  iproc_do_work ()
#5  start_thread ()
#6  sysctl ()

DESCRIPTION:
TURNON_CHUNK() and TURNOFF_CHUNK() modify the values of their arguments.

RESOLUTION:
Code has been modified to fix the issue.

* 4012061 (Tracking ID: 4001378)

SYMPTOM:
VxFS module failed to load on RHEL8.2

DESCRIPTION:
RHEL8.2 is a new release, and it includes kernel changes that prevented the VxFS module from loading on it.

RESOLUTION:
Added code to support VxFS on RHEL8.2

* 4012522 (Tracking ID: 4012243)

SYMPTOM:
During IO, mmap lock contention may reduce performance.

DESCRIPTION:
The mmap locks taken during IO may introduce lock contention and reduce IO performance.

RESOLUTION:
A new VxFS API is introduced to skip these locks whenever required on a specific file.

* 4012765 (Tracking ID: 4011570)

SYMPTOM:
WORM attribute replication support in VxFS.

DESCRIPTION:
WORM attribute replication was not supported in VFR. The code is modified to replicate the WORM attribute during attribute processing in VFR.

RESOLUTION:
Code is modified to replicate WORM attributes in VFR.

* 4012787 (Tracking ID: 4007328)

SYMPTOM:
After the replication service is stopped on the target, the job fails at the source only after processing all the FCL records.

DESCRIPTION:
When the target breaks the connection, the source should fail the job immediately while reading FCL records. Instead, the source received the connection-closed notification, but the thread processing the FCL records did not receive a signal to stop, so the job failed only after all FCL records had been processed.

RESOLUTION:
If the replication service is stopped at the target while FCL records are being processed, the job now fails immediately based on the return status of the connection.

* 4012800 (Tracking ID: 4008123)

SYMPTOM:
If a file has more than one named extended attribute set and the job is paused, VFR fails to replicate the remaining named extended attributes. (This behaviour is intermittent.)

DESCRIPTION:
If a VFR job is paused while a file's named extended attributes (nxattrs) are being replicated, then when the job is resumed, the sequence-number triplet received from the target side causes the source to miss the remaining nxattrs.

RESOLUTION:
Handling of named extended attributes is reworked to make sure the remaining attributes are not missed on resume.

* 4012801 (Tracking ID: 4001473)

SYMPTOM:
If a file has named extended attributes set, VFR fails to replicate it and the job goes into a failed state.

DESCRIPTION:
VFR tries to use open(2) on nxattr files; since these files are not visible outside the filesystem, the call fails with ENOTDIR.

RESOLUTION:
Use the internal VxFS-specific API to get a valid file descriptor for nxattr files.

* 4012842 (Tracking ID: 4006192)

SYMPTOM:
System panic with a NULL pointer dereference.

DESCRIPTION:
VxFS supports checkpoints, i.e. point-in-time images of a filesystem, and for this it needs to keep a copy of some metadata for the checkpoint. In some cases it missed making the copy. Later, while processing files corresponding to this missed metadata, it found empty extent information (the extent information is the block map for a given file). This empty extent information caused the NULL pointer dereference.

RESOLUTION:
Code changes are made to fix this issue.

* 4012936 (Tracking ID: 4000465)

SYMPTOM:
FSCK binary loops when it detects break in sequence of log ids.

DESCRIPTION:
When a filesystem is not cleanly unmounted, it ends up with an unflushed intent log, which is flushed either during the next mount or when fsck is run on the filesystem. To build the transaction list that needs to be replayed, VxFS used a binary search to find the head and tail; if there are breaks in the intent log, that code is susceptible to looping. To avoid the loop, VxFS now uses a sequential search instead of a binary search to find the range.

RESOLUTION:
Code is modified to incorporate sequential search instead of binary search to find out replayable transaction range.

* 4013084 (Tracking ID: 4009328)

SYMPTOM:
In a cluster filesystem, if smap corruption is seen and the smap is marked bad then it could cause hang while unmounting the filesystem.

DESCRIPTION:
While freeing an extent in vx_extfree1() for logversion >= VX_LOGVERSION13 if we are freeing whole AUs we set VX_AU_SMAPFREE flag for those AUs. This ensures that revoke of delegation for that AU is delayed till the AU has SMAP free transaction in progress. This flag gets cleared either in post commit/undo processing of the transaction or during error handling in vx_extfree1(). In one scenario when we are trying to free a whole AU and its smap is marked bad, we do not return any error to vx_extfree1() and neither do we add the subfunction to free the extent to the transaction. So, the VX_AU_SMAPFREE flag is not cleared and remains set even if there is no SMAP free transaction in progress. This could lead to hang while unmounting the cluster filesystem.

RESOLUTION:
Code changes have been done to add error handling in vx_extfree1 to clear VX_AU_SMAPFREE flag in case where error is returned due to bad smap.

* 4013143 (Tracking ID: 4008352)

SYMPTOM:
Using VxFS mount binary inside container to mount any device might result in core generation.

DESCRIPTION:
The issue is caused by improper initialisation of a local pointer, which is later dereferenced while holding a garbage value.

RESOLUTION:
This fix properly initialises all the pointers before dereferencing them.

* 4013144 (Tracking ID: 4008274)

SYMPTOM:
Race between compression thread and clone remove thread while allocating reorg inode.

DESCRIPTION:
The compression thread allocates the reorg inode without setting i_inreuse and takes HLOCK in exclusive mode; the lock is later downgraded to shared mode. While this is happening, the clone delete thread can do an iget on this inode and call vx_getownership without hold. If the inode is of type IFEMR, IFPTI, or FREE, success is returned after the ownership call. Later in the same function, getownership is called again with hold set before doing the processing (truncate, or marking the inode IFPTI). The first, redundant ownership call is removed.

RESOLUTION:
Delay taking ownership on inode until we check the inode mode.

* 4013626 (Tracking ID: 4004181)

SYMPTOM:
VxFS internally maintains a compliance clock; without an API, the user cannot read its value.

DESCRIPTION:
VxFS internally maintains a compliance clock, but there was no interface exposed for reading its value.

RESOLUTION:
Provide an API on the mount point to read the compliance clock for that filesystem.

* 4013738 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archive processes are running on a clustered filesystem.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation, which mainly happens when there are multiple archivers running on the same node. The allocation pattern of the Oracle archiver processes is:

1. write header with O_SYNC
2. ftruncate-up the file to its final size ( a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all the allocations in this manner go through
internal allocations i.e. allocations below file size instead of allocations
past the file size. Internal allocations are done at max 8 Pages at once. So if
there are multiple processes doing this, they all get these 8 Pages alternately
and the fs becomes very fragmented.

RESOLUTION:
Added a tunable that allocates ZFOD extents when ftruncate tries to increase the size of the file, instead of creating a hole. This eliminates the allocations internal to the file size and thus the fragmentation. The earlier implementation of the same fix, which ran into locking issues, is corrected, and a performance issue while writing from a secondary node is also fixed.

Patch ID: VRTSvxvm-7.4.2.1400

* 4018182 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
Vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has exited for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature ON, vxiod with ID 128 can start replication while the RVG is in DCM mode; this vxiod then waits for the filesystem's response about whether a given region is used by the filesystem. The filesystem triggers MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued in the vxiod with ID 128, resulting in a deadlock.

RESOLUTION:
Code changes have been made to avoid handling MDSHIP IO in vxiod whose ID is bigger than 127.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to user space. If a page fault happens during the data copy, the thread relinquishes the CPU to another thread. If the newly scheduled thread requests the same spinlock that the vxstat thread had acquired, a hard lockup results.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4021428 (Tracking ID: 4020166)

SYMPTOM:
Build issue because of "struct request":

error: struct request has no member named next_rq
Linux has deprecated the member next_rq

DESCRIPTION:
The issue was observed due to changes in the OS structure.

RESOLUTION:
Code changes are done in the required files.

* 4021748 (Tracking ID: 4020260)

SYMPTOM:
While enabling the DMP native support tunable dmp_native_support on CentOS 8, the below-mentioned error was observed:

[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=on
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file 

 cl
[root@dl360g9-4-vm2 ~]#

DESCRIPTION:
The issue was observed due to missing code check-ins for CentOS 8 in the required files.

RESOLUTION:
Code changes have been made in the required files to enable DMP native support on CentOS 8.

Patch ID: VRTSvxvm-7.4.2.1300

* 4008606 (Tracking ID: 4004455)

SYMPTOM:
Snapshot restore fails on an instant snapshot created on a disk group with an older version.

DESCRIPTION:
Create a disk group with an older version, create an instant snapshot,
perform some IOs on the source volume, and then try to restore the snapshot.
The snapshot restore fails in this scenario.

RESOLUTION:
The root cause of this issue was conflicting flag values.
The issue has been fixed and the code has been checked in.

* 4010892 (Tracking ID: 4009107)

SYMPTOM:
CA chain certificate verification fails in VVR when the number of intermediate certificates exceeds the verification depth, resulting in an SSL initialization error.

DESCRIPTION:
CA chain certificate verification fails in VVR when the number of intermediate certificates exceeds the verification depth. The SSL_CTX_set_verify_depth() API decides the depth of certificates (in the /etc/vx/vvr/cacert file) to be verified, which was limited to 1 in the code. Thus the intermediate CA certificate listed first in /etc/vx/vvr/cacert (depth 1: the CA/issuer certificate for the server certificate) could be obtained and verified during the connection, but the root CA certificate (depth 2: the higher CA certificate) could not be verified, hence the error.

RESOLUTION:
Removed the call of SSL_CTX_set_verify_depth() API so as to handle the depth automatically.
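A toy model of why a hard-coded depth fails (hypothetical Python for illustration only, not the OpenSSL API; verify_chain and the chain entries are invented names): a fixed depth of 1 stops the walk before the root certificate is reached, while an unlimited walk verifies the whole chain.

```python
def verify_chain(chain, max_depth=None):
    # chain: issuer names from the server's issuer up to the root.
    # A fixed max_depth stops verification early, mirroring the old
    # SSL_CTX_set_verify_depth(ctx, 1) behaviour; None walks the whole
    # chain, as the fixed code now lets OpenSSL do automatically.
    depth_limit = len(chain) if max_depth is None else max_depth
    verified = chain[:depth_limit]
    return "root" in verified
```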

* 4011866 (Tracking ID: 3976678)

SYMPTOM:
The "vxvm-recover: cat: write error: Broken pipe" error is encountered in syslog multiple times.

DESCRIPTION:
Due to a bug in the vxconfigbackup script, which is started by vxvm-recover, the "cat: write error: Broken pipe" error is encountered in syslog
and is reported under vxvm-recover. In the vxconfigbackup code, multiple subshells are created in a function call, the first of which runs a cat command. When a particular if condition is satisfied, return is called, exiting the later subshells even when there is still data to be read by the cat subshell, which results in the broken pipe error.

RESOLUTION:
Changes are done in VxVM code to handle the broken pipe error.

* 4011971 (Tracking ID: 3991668)

SYMPTOM:
When configured with secondary logging, VVR reports data inconsistency after hitting the "No IBC message arrived" error.

DESCRIPTION:
It may happen that the secondary node has already served updates with larger sequence IDs when the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Any updates with larger sequence IDs cannot start data volume writes, so they get queued. Data loss occurs when the secondary receives an atomic commit and clears the queue. Hence, vradmin verifydata reports data inconsistency.

RESOLUTION:
Code changes have been made to trigger updates in order to start data volume writes.

* 4012485 (Tracking ID: 4000387)

SYMPTOM:
The existing VxVM module fails to load on RHEL 8.2.

DESCRIPTION:
RHEL 8.2 is a new release with a few kABI changes that break VxVM compilation.

RESOLUTION:
Compiled VxVM code against 8.2 kernel and made changes to make it compatible.

* 4012848 (Tracking ID: 4011394)

SYMPTOM:
While verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded.

DESCRIPTION:
On verifying the performance of CFS cloud tiering versus scale-out file system tiering in Access, it was found that CFS cloud tiering performance was degraded because the design was single-threaded, which caused a bottleneck and performance issues.

RESOLUTION:
The code changes introduce multiple IO queues in the kernel and a multithreaded request loop that fetches IOs from the kernel queues into a userland global queue, and they allow the curl threads to work in parallel.
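The described design can be sketched with Python's queue and threading modules (a simplified userspace analogy; the function names and the placeholder "work" are assumptions, since the real change lives in the VxVM kernel code and curl threads): a request loop drains multiple per-kernel queues into one global queue, which several workers consume in parallel.

```python
import queue
import threading

def drain_into_global(kernel_queues, global_q):
    # Request loop: pull IOs from multiple per-kernel queues into one
    # userland global queue.
    for kq in kernel_queues:
        while not kq.empty():
            global_q.put(kq.get())

def run_workers(global_q, results, nworkers=4):
    # Parallel consumers, standing in for the curl threads that now
    # work concurrently instead of a single-threaded loop.
    def worker():
        while True:
            try:
                item = global_q.get_nowait()
            except queue.Empty:
                return
            results.append(item * 2)  # placeholder for the cloud IO work
    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```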

* 4013155 (Tracking ID: 4010458)

SYMPTOM:
In VVR (Veritas Volume Replicator), the rlink might intermittently disconnect due to unexpected transactions, with the following message:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

DESCRIPTION:
In VVR (Veritas Volume Replicator), a transaction is triggered when a change in the VxVM/VVR objects needs
to be persisted on disk.

In some scenarios, a few unnecessary transactions were triggered in a loop. This caused multiple rlink
disconnects with the following message logged frequently:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

One such unexpected transaction was caused by open/close operations on a volume as part of SmartIO caching.
Additionally, the vradmind daemon was also issuing open/close operations on volumes as part of IO statistics collection,
which caused unnecessary transactions.

Some unexpected transactions also occurred due to incorrect checks in the code related
to certain temporary flags on a volume.

RESOLUTION:
The code is fixed to disable the SmartIO caching on the volumes if the SmartIO caching is not configured on the system.
Additionally code is fixed to avoid the unexpected transactions due to incorrect checking on the temporary flags
on volume.

* 4013169 (Tracking ID: 4011691)

SYMPTOM:
Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.

DESCRIPTION:
High replication-related IO load on the VVR secondary, together with the requirement of maintaining write-order fidelity with limited memory pools, created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing CPU consumption.

RESOLUTION:
Limited the way in which VVR consumes its resources so that a high pending IO load does not result in high CPU consumption.

* 4013718 (Tracking ID: 4008942)

SYMPTOM:
The file system gets disabled when the cache object becomes full, and hence the unmount fails.

DESCRIPTION:
When the cache object becomes full, IO errors occur on the volume.
Because IOs are not served while the cache object is full, the IOs become inconsistent.
Due to this IO inconsistency, VxFS gets disabled and the unmount fails.

RESOLUTION:
The issue has been fixed and the code has been checked in.

Patch ID: VRTSvxfen-7.4.2.1200

* 4022054 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSvxfen-7.4.2.1100

* 4006982 (Tracking ID: 3988184)

SYMPTOM:
The vxfen process cannot complete due to an incomplete vxfentab file.

DESCRIPTION:
When I/O fencing starts, the vxfen startup script creates the /etc/vxfentab file on each node. If the coordination disk discovery is slow, the vxfen startup script fails to include all the coordination points in the vxfentab file. As a result, the vxfen startup script gets stuck in a loop.

RESOLUTION:
The vxfen startup process is modified to exit from the loop if it gets stuck while running 'vxfenconfig -c'. On exiting the loop, systemctl starts vxfen again and tries to use the updated vxfentab file.

* 4007375 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
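For illustration, a getdisks_timeout entry in /etc/vxfenmode might look like this (the 300-second value is only an example; the default and maximum are 600 seconds):

```
# /etc/vxfenmode (fragment)
# Timeout, in seconds, for VxFEN disk group discovery (default/max: 600)
getdisks_timeout=300
```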

* 4007376 (Tracking ID: 3996218)

SYMPTOM:
In a customized fencing mode, the 'vxfenconfig -c' command creates a new vxfend process even if VxFen is already configured.

DESCRIPTION:
When you configure fencing in the customized mode and run the 'vxfenconfig -c' command, the vxfenconfig utility reports the 'VXFEN ERROR V-11-1-6 vxfen already configured...' error. Moreover, it also creates a new vxfend process even if VxFen is already configured. Such redundant processes may impact the performance of the system.

RESOLUTION:
The vxfenconfig utility is modified so that it does not create a new vxfend process when VxFen is already configured.

* 4007677 (Tracking ID: 3970753)

SYMPTOM:
Freeing uninitialized/garbage memory causes panic in vxfen.

DESCRIPTION:
Freeing uninitialized/garbage memory causes panic in vxfen.

RESOLUTION:
Veritas has modified the VxFen kernel module to fix the issue by initializing the object before attempting to free it.

Patch ID: VRTSdbac-7.4.2.1200

* 4022056 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSdbac-7.4.2.1100

* 4012751 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSamf-7.4.2.1200

* 4022053 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSamf-7.4.2.1100

* 4012746 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

Patch ID: VRTSgab-7.4.2.1200

* 4022052 (Tracking ID: 4019674)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 3 (RHEL8.3).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 2.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 3 (RHEL8.3) is now introduced.

Patch ID: VRTSgab-7.4.2.1100

* 4012745 (Tracking ID: 4012742)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 8 Update 2 (RHEL8.2).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL8 Update 1.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 8 Update 2 (RHEL8.2) is now introduced.

* 4013034 (Tracking ID: 4011683)

SYMPTOM:
The GAB module failed to start and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, all their major device numbers get passed on to the mknod command. Instead, the command must contain the major device number for the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.
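The exact-name matching can be illustrated with a small parser of a /proc/devices-style listing (a hypothetical Python sketch; the actual fix is in the GAB startup logic): only an exact driver-name match yields a major number, so other drivers containing "gab" as a substring are ignored.

```python
def major_for_driver(proc_devices_text, name):
    # Exact-match the driver name so that e.g. "gab" does not also match
    # "gabx" or "mygab"; a substring match would return multiple major
    # numbers and produce an invalid mknod invocation.
    majors = []
    for line in proc_devices_text.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1] == name:
            majors.append(int(fields[0]))
    return majors
```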



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8.3_x86_64-Patch-7.4.2.1300.tar.gz to /tmp
2. Untar infoscale-rhel8.3_x86_64-Patch-7.4.2.1300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8.3_x86_64-Patch-7.4.2.1300.tar.gz
    # tar xf /tmp/infoscale-rhel8.3_x86_64-Patch-7.4.2.1300.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # pwd
    /tmp/hf
    # ./installVRTSinfoscale742P1300 [<host1> <host2>...]

You can also install this patch together with the 7.4.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE