infoscale-rhel7.5_x86_64-Patch-7.3.1.100

 Basic information
Release type: Patch
Release date: 2018-04-10
OS update support: RHEL7 x86-64 Update 5
Technote: None
Documentation: None
Download size: 190.51 MB
Checksum: 3155120230

 Applies to one or more of the following products:
InfoScale Availability 7.3.1 On RHEL7 x86-64
InfoScale Enterprise 7.3.1 On RHEL7 x86-64
InfoScale Foundation 7.3.1 On RHEL7 x86-64
InfoScale Storage 7.3.1 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
fs-rhel7_x86_64-Patch-7.3.1.100 (obsolete)   Release date: 2018-03-14
odm-rhel7_x86_64-Patch-7.3.1.100 (obsolete)  Release date: 2018-03-14
vm-rhel7_x86_64-Patch-7.3.1.100 (obsolete)   Release date: 2018-03-13

 Fixes the following incidents:
3932464, 3933810, 3933819, 3933820, 3933824, 3933828, 3933834, 3933843, 3933844, 3933874, 3933875, 3933876, 3933877, 3933878, 3933880, 3933882, 3933883, 3933884, 3933889, 3933890, 3933897, 3933898, 3933900, 3933904, 3933907, 3933910, 3933911, 3933912, 3934841, 3935903, 3936286, 3937536, 3937540, 3937541, 3937542, 3937549, 3937550, 3937808, 3937811, 3938258, 3939406, 3939411, 3940039, 3940143, 3940266, 3940368, 3940652, 3940830, 3941773, 3942697, 3943620, 3943731, 3943732, 3944181, 3944182, 3944195, 3944196, 3944197, 3944310

 Patch ID:
VRTSvxfs-7.3.1.200-RHEL7
VRTSodm-7.3.1.200-RHEL7
VRTSllt-7.3.1.200-RHEL7
VRTSgab-7.3.1.100-RHEL7
VRTSvxfen-7.3.1.200-RHEL7
VRTSamf-7.3.1.100-RHEL7
VRTSdbac-7.3.1.100-RHEL7
VRTSvxvm-7.3.1.200-RHEL7
VRTSaslapm-7.3.1.110-RHEL7

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.3.1 * * *
                         * * * Patch 100 * * *
                         Patch Date: 2018-04-03


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.3.1 Patch 100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSllt
VRTSodm
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.3.1
   * InfoScale Enterprise 7.3.1
   * InfoScale Foundation 7.3.1
   * InfoScale Storage 7.3.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm 7.3.1.200
* 3943620 (3938549) Volume creation fails with error "Unexpected kernel error 
in configuration update" on RHEL 7.5.
* 3932464 (3926976) Frequent loss of VxVM functionality due to vxconfigd unable to validate license.
* 3933874 (3852146) Shared DiskGroup (DG) fails to import when the "-c" and 
"-o noreonline" options are specified together.
* 3933875 (3872585) System panics with storage key exception.
* 3933876 (3894657) VxVM commands may hang when using space optimized snapshot.
* 3933877 (3914789) System may panic when reclaiming on secondary in VVR environment.
* 3933878 (3918408) Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.
* 3933880 (3864063) Application I/O hangs because of a race between the Master Pause SIO (Staging
I/O) and the Error Handler SIO.
* 3933882 (3865721) Vxconfigd may hang while pausing the replication in CVR(cluster Veritas Volume 
Replicator) environment.
* 3933883 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3933884 (3868154) When DMP Native Support is set to ON, a dmpnode with multiple VGs cannot be
listed properly by the 'vxdmpadm native ls' command.
* 3933889 (3879234) dd read on the Veritas Volume Manager (VxVM) character device fails with 
Input/Output error while accessing end of device.
* 3933890 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3933897 (3907618) vxdisk resize leads to data corruption on filesystem
* 3933898 (3908987) False vxrelocd messages being generated by joining CVM slave.
* 3933900 (3915523) A local disk from another node belonging to a private DG (diskgroup) is
exported to the node when a private DG is imported on the current node.
* 3933904 (3921668) vxrecover command with -m option fails when executed on the slave
nodes.
* 3933907 (3873123) If a disk with the CDS EFI label is used as a remote
disk on a cluster node, restarting the vxconfigd
daemon on that node causes vxconfigd
to go into the disabled state.
* 3933910 (3910228) Registration of GAB(Global Atomic Broadcast) port u fails on slave nodes after 
multiple new devices are added to the system.
* 3933911 (3925377) Not all disks could be discovered by DMP after first startup.
* 3937540 (3906534) After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device.
* 3937541 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED on the device instead of using
exclude/include commands
* 3937542 (3917636) Filesystems from /etc/fstab file are not mounted automatically on boot 
through systemd on RHEL7 and SLES12.
* 3937549 (3934910) DRL map leaks during snapshot creation/removal cycle with dg reimport.
* 3937550 (3935232) Replication and IO hang during master takeover because of racing between log 
owner change and master switch.
* 3937808 (3931936) VxVM (Veritas Volume Manager) commands hang on the master node after
restarting a slave node.
* 3937811 (3935974) When a client process shuts down abruptly or resets the connection during
communication with the vxrsyncd daemon, it may terminate the vxrsyncd daemon.
* 3940039 (3897047) Filesystems are not mounted automatically on boot through systemd on RHEL7 and
SLES12.
* 3940143 (3941037) VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. These directories could be modified by non-root users and
will affect the Veritas Volume Manager Functioning.
Patch ID: VRTSaslapm 7.3.1.110
* 3944310 (3944312) VRTSaslapm package (rpm) doesn't function correctly on RHEL7.5.
Patch ID: VRTSodm 7.3.1.200
* 3943732 (3938546) ODM module failed to load on RHEL7.5.
* 3939411 (3941018) VRTSodm driver will not load with 7.3.1.100 VRTSvxfs patch.
Patch ID: VRTSvxfs 7.3.1.200
* 3935903 (3933763) Oracle Hang in VxFS.
* 3940266 (3940235) A hang might be observed if the filesystem gets disabled while ENOSPC
handling is being performed by inactive processing.
* 3941773 (3928046) VxFS kernel panic BAD TRAP: type=34 in vx_assemble_hdoffset().
* 3942697 (3940846) vxupgrade fails while upgrading the filesystem from disk
layout version (DLV) 9 to DLV 10.
* 3943731 (3938544) VxFS module failed to load on RHEL7.5.
* 3933810 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3933819 (3879310) The file system may get corrupted after a failed vxupgrade.
* 3933820 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3933824 (3908785) System panic observed because of a null page address in the writeback
structure in case of the kswapd process.
* 3933828 (3921152) Performance drop caused by vx_dalloc_flush().
* 3933834 (3931761) Cluster wide hang may be observed in case of high workload.
* 3933843 (3926972) A recovery event can result in a cluster wide hang.
* 3933844 (3922259) Force umount hang in vx_idrop
* 3933912 (3922986) Dead lock issue with buffer cache iodone routine in CFS.
* 3934841 (3930267) Deadlock between fsq flush threads and writer threads.
* 3936286 (3936285) fscdsconv command may fail the conversion for disk layout version (DLV) 12 and above.
* 3937536 (3940516) File resize thread loops infinitely for file resize operation crossing 32 bit
boundary.
* 3938258 (3938256) When checking file size through seek_hole, it will return incorrect offset/size 
when delayed allocation is enabled on the file.
* 3939406 (3941034) VxFS worker thread may continuously spin on a CPU
* 3940368 (3940268) File system might get disabled in case the size of the directory surpasses the
vx_dexh_sz value.
* 3940652 (3940651) The vxupgrade command might fail while upgrading Disk Layout Version (DLV) 10 to
any upper DLV version.
* 3940830 (3937042) Data corruption seen when issuing writev with mixture of named page and 
anonymous page buffers.
Patch ID: VRTSdbac 7.3.1.100
* 3944197 (3944179) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).
Patch ID: VRTSamf 7.3.1.100
* 3944196 (3944179) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).
Patch ID: VRTSvxfen 7.3.1.200
* 3944195 (3944179) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).
Patch ID: VRTSgab 7.3.1.100
* 3944182 (3944179) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).
Patch ID: VRTSllt 7.3.1.200
* 3944181 (3944179) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm 7.3.1.200

* 3943620 (Tracking ID: 3938549)

SYMPTOM:
The command "vxassist -g <dg> make vol <size>" fails with the error:
Unexpected kernel error in configuration update, on RHEL 7.5.

DESCRIPTION:
Due to source code changes in RHEL 7.5, the vxassist make volume command
failed to create the volume and returned the error "Unexpected kernel error
in configuration update".

RESOLUTION:
Changes have been made in the VxVM code to fix volume creation.

* 3932464 (Tracking ID: 3926976)

SYMPTOM:
Excessive number of connections are found in open state causing FD leak and
eventually reporting license errors.

DESCRIPTION:
vxconfigd reports license errors because it fails to open the license files.
The failure to open is due to FD exhaustion, caused by excessive FIFO
connections left in the open state.

FIFO connections are used by clients (vx commands) to communicate with
vxconfigd. Usually these are closed once the client exits. One such client,
the "vxdclid" daemon, connects frequently and leaves the connection open,
causing the FD leak.

This issue is applicable to the Solaris platform only.

RESOLUTION:
A library call in one of the APIs left the connection open on exit; this has
been fixed.

* 3933874 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.
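For reference, a minimal sketch of the import invocation this fix addresses
(assumptions: "mydg" is a placeholder disk group name, and the command is
guarded so the snippet is a no-op on hosts without VxVM installed):

```shell
# Shared import that both rewrites disk IDs (-c) and skips the disk
# re-online step (-o noreonline) -- the combination that used to fail.
# Guarded: vxdg ships with VxVM, so skip cleanly where it is absent.
dg=mydg   # placeholder name
if command -v vxdg >/dev/null 2>&1; then
    vxdg -s import -c -o noreonline "$dg"
else
    echo "vxdg not installed; skipping illustrative import of $dg"
fi
```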

* 3933875 (Tracking ID: 3872585)

SYMPTOM:
System running with VxFS and VxVM panics with storage key exception with the 
following stack:

simple_lock
dispatch
flih_util
touchrc
pin_seg_range
pin_com
pinx_plock
plock_pinvec
plock
mfspurr_sc_flih01

DESCRIPTION:
The xntpd process, whose binary resides on a VxFS filesystem, could panic with
a storage key exception. The xntpd binary page-faulted and performed an I/O,
after which the OS detected the storage key exception because it could not
locate its keyset. Code review found that in a few error cases in VxVM, the
storage keys may not be restored after they are replaced.

RESOLUTION:
The storage keys are now restored even in the error cases in the vxio and DMP
layers.

* 3933876 (Tracking ID: 3894657)

SYMPTOM:
VxVM commands may hang when using space optimized snapshot.

DESCRIPTION:
If a volume with DRL (Dirty Region Logging) enabled has a space-optimized
snapshot whose cache object volume is mirrored and also DRL-enabled, VxVM
commands may hang. If the I/O load on the volume is high, it can lead to a
memory crunch, because memory stabilization is performed when DRL is enabled.
The I/Os in the queue may wait for memory to become free. In the meantime,
other VxVM commands that need to change the configuration of the volumes may
hang because the I/O cannot proceed.

RESOLUTION:
Memory stabilization is not required for VxVM-generated internal I/Os on the
cache object volume. Code changes have been done to eliminate memory
stabilization for cache object I/Os.

* 3933877 (Tracking ID: 3914789)

SYMPTOM:
System may panic when reclaiming on the secondary in a VVR (Veritas Volume
Replicator) environment. It is caused by accessing an invalid address; the
error message is similar to "data access MMU miss".

DESCRIPTION:
VxVM maintains a linked list to keep memory segment information. When
accessing its content at a certain offset, the linked list is traversed. Due
to a code defect, when the offset equals the segment chunk size, the end of
that segment is returned instead of the start of the next segment. This can
result in silent memory corruption, because memory outside the segment
boundary is accessed. The system can panic when the out-of-boundary address is
not yet allocated.

RESOLUTION:
Code changes have been made to fix the out-of-boundary access.

* 3933878 (Tracking ID: 3918408)

SYMPTOM:
Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.

DESCRIPTION:
When the space in the volume is freed by deleting some data or subdisks, the corresponding subdisks are marked for 
reclamation. It might take some time for the periodic reclaim task to start if not issued manually. In the meantime, if 
same disks are used for growing another volume, it can happen that reclaim task will go ahead and overwrite the data 
written on the new volume. Because of this race condition between reclaim and volume grow operation, data corruption 
occurs.

RESOLUTION:
Code changes are done to handle the race condition between the reclaim and
volume grow operations. Reclaim is also skipped for those disks that have
already become part of a new volume.

* 3933880 (Tracking ID: 3864063)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in VVR
(Veritas Volume Replicator) kernel are not cleared because of a race between the
Master Pause SIO and the Error Handler SIO. This causes the RU (Replication
Update) SIO to fail to proceed, which leads to I/O hang.

RESOLUTION:
The code is modified to handle the race condition.

* 3933882 (Tracking ID: 3865721)

SYMPTOM:
vxconfigd hangs in a transaction while pausing the replication in a Clustered
VVR environment.

DESCRIPTION:
In a Clustered VVR (CVM VVR) environment, while pausing replication that is in
DCM (Data Change Map) mode, the master pause SIO (staging I/O) cannot finish
serialization because there are metadata shipping SIOs in the throttle queue
with the activesio count added. Meanwhile, because the master pause SIO's
SERIALIZE flag is set, the DCM flush SIO cannot be started to flush the
throttle queue. This leads to a deadlocked hang state. Since the master pause
routine needs to sync up with the transaction routine, vxconfigd hangs in the
transaction.

RESOLUTION:
Code changes were made to flush the metadata shipping throttle queue if the
master pause SIO cannot finish serialization.

* 3933883 (Tracking ID: 3867236)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
The flag VOL_RIFLAG_REQUEST_PENDING in the VVR (Veritas Volume Replicator)
kernel is not cleared because of a race between the Master Pause SIO and the
RVWRITE1 SIO, which prevents the RU (Replication Update) SIO from proceeding,
thereby causing the I/O hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3933884 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk,
as Linux supports creating a VG on a whole disk as well as on a partition of
a disk. This possibility was not handled in the code, so the output of
'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3933889 (Tracking ID: 3879234)

SYMPTOM:
dd read on the Veritas Volume Manager (VxVM) character device fails with 
Input/Output error while accessing end of device like below:

[root@dn pmansukh_debug]# dd if=/dev/vx/rdsk/hfdg/vol1 of=/dev/null bs=65K
dd: reading `/dev/vx/rdsk/hfdg/vol1': Input/output error
15801+0 records in
15801+0 records out
1051714560 bytes (1.1 GB) copied, 3.96065 s, 266 MB/s

DESCRIPTION:
The issue occurs because of changes in the Linux API generic_file_aio_read,
which no longer properly handles end-of-device reads/writes. Linux itself
switched to blkdev_aio_read, but that is a GPL-only symbol and hence cannot be
used by VxVM.

RESOLUTION:
Made changes in the code to handle end of device reads/writes properly.
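The end-of-device read pattern from the symptom can be reproduced safely
against a regular file: a block size that does not evenly divide the total
size forces a short final read, which the fixed code must handle (the file is
a temp file and the sizes are arbitrary examples, not from the patch notes):

```shell
# Build a 1 MiB "device" image, then read it with bs=65K (66560 bytes),
# which does not divide 1 MiB evenly -- the final read is a short read.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=2048 2>/dev/null   # 1 MiB image
dd if="$img" of=/dev/null bs=65K 2>/dev/null              # short read at EOF
dd_status=$?
```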

* 3933890 (Tracking ID: 3879324)

SYMPTOM:
VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool fails to 
handle busy device problem while LUNs are removed from OS

DESCRIPTION:
OS devices may still be busy after they are removed from the OS. This makes
the 'luxadm -e offline <disk>' operation fail and leaves stale entries in the
'vxdisk list' output, like:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3933897 (Tracking ID: 3907618)

SYMPTOM:
vxdisk resize leads to data corruption on filesystem with MSDOS labelled disk having VxVM sliced format.

DESCRIPTION:
vxdisk resize changes the geometry on the device if required. When vxdisk resize is in progress, absolute offsets i.e offsets starting 
from start of the device are used. For MSDOS labelled disk, the full disk is devoted on Slice 4 but not slice 0. Thus when IO is 
scheduled on the device an extra 32 sectors gets added to the IO which is not required since we are already starting the IO from start of 
the device. This leads to data corruption since the IO on the device shifted by 32 sectors.

RESOLUTION:
Code changes have been made to not add 32 sectors to the IO when vxdisk resize is in progress to avoid corruption.

* 3933898 (Tracking ID: 3908987)

SYMPTOM:
The following unnecessary message is printed to inform the customer that hot
relocation will be performed on the master node:

VxVM vxrelocd INFO V-5-2-6551
hot-relocation operation for shared disk group will be performed on master 
node.

DESCRIPTION:
The message should be printed only when there are failed disks. Because the
related code is not placed in the right position, it is printed even when
there are no failed disks.

RESOLUTION:
Code changes have been made to fix the issue.

* 3933900 (Tracking ID: 3915523)

SYMPTOM:
Local disk from other node belonging to private DG is exported to the node when
a private DG is imported on current node.

DESCRIPTION:
When a DG is imported, all the disks belonging to the DG are automatically
exported to the current node to make sure that the DG gets imported; this
gives local disks the same behaviour as SAN disks. Since all disks in the DG
are exported, disks that belong to a DG with the same name but a different
private DG on another node can get exported to the current node as well. This
leads to the wrong disk being selected while the DG is imported.

RESOLUTION:
Instead of DG name, DGID (diskgroup ID) is used to decide whether disk needs to
be exported or not.

* 3933904 (Tracking ID: 3921668)

SYMPTOM:
Running the vxrecover command with -m option fails when run on the
slave node with message "The command can be executed only on the master."

DESCRIPTION:
The issue occurs as currently vxrecover -g <dgname> -m command on shared
disk groups is not shipped using the command shipping framework from CVM
(Cluster Volume Manager) slave node to the master node.

RESOLUTION:
Implemented a code change to ship the vxrecover -m command to the master
node when it is triggered from a slave node.

* 3933907 (Tracking ID: 3873123)

SYMPTOM:
When the remote disk on a node is an EFI disk, vold enable fails, and the
following message gets logged, eventually causing vxconfigd to go into the
disabled state:
Kernel and on-disk configurations don't match; transactions are disabled.

DESCRIPTION:
This is because one of the cases of an EFI remote disk is not properly handled
in the disk recovery part when vxconfigd is enabled.

RESOLUTION:
Code changes have been done to set the EFI flag on darec in the recovery code.

* 3933910 (Tracking ID: 3910228)

SYMPTOM:
Registration of GAB (Global Atomic Broadcast) port u fails on slave nodes
after multiple new devices are added to the system.

DESCRIPTION:
vxconfigd sends a command to GAB for port u registration and waits for a
response from GAB. If, during this timeframe, vxconfigd is interrupted by any
module other than GAB, it will not receive the signal from GAB indicating
successful registration. Since the signal is not received, vxconfigd believes
the registration did not succeed and treats it as a failure.

RESOLUTION:
The signals that vxconfigd can receive are now masked before it waits for the
registration signal from GAB for port u.
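The resolution amounts to blocking signal delivery around a critical wait. A
loose shell analogy (illustrative only; vxconfigd does this in its own code,
not via shell):

```shell
# Ignore USR1 for the duration of a critical section, so its delivery
# cannot interrupt the wait; restore the default disposition afterwards.
trap '' USR1
kill -USR1 $$              # would normally terminate this shell
masked_result="still running"
trap - USR1                # restore default handling
echo "$masked_result"
```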

* 3933911 (Tracking ID: 3925377)

SYMPTOM:
Not all disks could be discovered by Dynamic Multi-Pathing (DMP) after the
first startup.

DESCRIPTION:
DMP is started too early in the boot process when iSCSI and raw packages are
not installed. At that point the FC devices are not yet recognized by the OS,
so DMP misses the FC devices.

RESOLUTION:
The code is modified to make sure DMP gets started after OS disk discovery.

* 3937540 (Tracking ID: 3906534)

SYMPTOM:
After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device.

DESCRIPTION:
Currently /boot is mounted on top of the OS (Operating System) device. When
DMP Native support is enabled, only VGs (Volume Groups) are migrated from the
OS device to the DMP device; this is why /boot is not migrated to the DMP
device. As a result, if the OS device path is not available, the system
becomes unbootable because /boot is not available. It is therefore necessary
to mount /boot on the DMP device to provide multipathing and resiliency.

RESOLUTION:
Code changes have been done to migrate /boot on top of the DMP device when DMP
Native support is enabled.
Note: The code changes are currently implemented for RHEL-6 only. On other
Linux platforms, /boot will still not be mounted on the DMP device.

* 3937541 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a dmpnode.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the dmpnode, the flag PGR_FLAG_NOTSUPPORTED is set on the
dmpnode. Further PGR operations check this flag and issue PGR commands only if
this flag is NOT set. This flag remains set even if the hardware is changed so
as to support PGR.

RESOLUTION:
A new command (namely enablepr) is provided in the vxdmppr utility to clear this
flag on the specified dmpnode.
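A hedged example of the new sub-command (the dmpnode name is a placeholder and
the invocation syntax is an assumption based on the resolution text; guarded
so the snippet is a no-op on hosts without VxVM):

```shell
dmpnode=emc0_1234   # placeholder dmpnode name
if command -v vxdmppr >/dev/null 2>&1; then
    # Assumed syntax: clear PGR_FLAG_NOTSUPPORTED so PGR commands are retried
    vxdmppr enablepr "$dmpnode"
else
    echo "vxdmppr not installed; skipping"
fi
```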

* 3937542 (Tracking ID: 3917636)

SYMPTOM:
Filesystems from /etc/fstab file are not mounted automatically on boot 
through systemd on RHEL7 and SLES12.

DESCRIPTION:
During boot, when systemd tries to mount filesystems using the devices
mentioned in the /etc/fstab file, the devices are not accessible, leading to
the failure of the mount operation. As device discovery happens through the
udev infrastructure, the udev rules for those devices need to be run when the
volumes are created so that the devices get registered with systemd. In this
case, the udev rules were executed even before the devices in the
"/dev/vx/dsk" directory were created. Since the devices were not created, they
were not registered with systemd, leading to the failure of the mount
operation.

RESOLUTION:
Run "udevadm trigger" to execute all the udev rules once all volumes are 
created so that devices are registered.
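The resolution can be sketched as a post-volume-creation step (illustrative;
guarded so it only runs where a udev daemon is actually present and the caller
is root):

```shell
# After creating volumes, re-run udev rules so the new /dev/vx/dsk nodes
# get registered with systemd. /run/udev/control exists only when the
# udev daemon is running, so this is a no-op in minimal environments.
if [ -e /run/udev/control ] && [ "$(id -u)" -eq 0 ] \
       && command -v udevadm >/dev/null 2>&1; then
    udevadm trigger --subsystem-match=block --action=add
    trigger_done=yes
else
    trigger_done=skipped
fi
echo "udev trigger: $trigger_done"
```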

* 3937549 (Tracking ID: 3934910)

SYMPTOM:
IO errors on data volume or file system happen after some cycles of snapshot 
creation/removal with dg reimport.

DESCRIPTION:
When a snapshot of the data volume is removed and the DG is reimported, the
DRL map stays active rather than being inactivated. When a new snapshot is
created, DRL is re-enabled and a new DRL map is allocated on the first write
to the data volume. The original active DRL map is never used and leaks. After
some such cycles, the extents of the DCO volume are exhausted by the active
but unused DRL maps; no more DRL maps can be allocated, and the I/Os fail or
cannot be issued on the data volume.

RESOLUTION:
Code changes are done to inactivate the DRL map if DRL is disabled during the
volume start, so that it can be reused safely later.

* 3937550 (Tracking ID: 3935232)

SYMPTOM:
Replication and IO hang may happen on new master node during master 
takeover.

DESCRIPTION:
If a log owner change kicks in while a master switch is in progress, the flag
VOLSIO_FLAG_RVC_ACTIVE is set by the log owner change SIO. The RVG (Replicated
Volume Group) recovery initiated by the master switch clears
VOLSIO_FLAG_RVC_ACTIVE once the RVG recovery is done. When the log owner
change completes, because VOLSIO_FLAG_RVC_ACTIVE has already been cleared,
resetting the flag VOLOBJ_TFLAG_VVR_QUIESCE is skipped. The presence of
VOLOBJ_TFLAG_VVR_QUIESCE leaves replication and application I/O on the RVG
pending forever.

RESOLUTION:
Code changes have been done to make the log owner change wait until the master
switch has completed.

* 3937808 (Tracking ID: 3931936)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, after restarting a slave
node, VxVM commands on the master node hang, with the result that failed disks
on the slave node cannot rejoin the disk group.

DESCRIPTION:
When lost remote disks on the slave node come back, the operations to online
these disks and add them to the disk group are performed on the master node.
Disk online includes operations on both the master and the slave node. On the
slave node these disks should be offlined and then re-onlined, but due to a
code defect the re-online is missed, so the disks are left in the re-onlining
state. The subsequent add-disk-to-disk-group operation needs to issue private
region I/Os on the disk. These I/Os are shipped to the slave node to complete.
As the disks are in the re-online state, a busy error is returned and the
remote I/Os keep retrying; hence the VxVM command hangs on the master node.

RESOLUTION:
Code changes have been made to fix the issue.

* 3937811 (Tracking ID: 3935974)

SYMPTOM:
While communicating with a client process, the vxrsyncd daemon terminates;
after some time it gets restarted, or may require a reboot to start.

DESCRIPTION:
When the client process shuts down abruptly and the vxrsyncd daemon attempts
to write on the client socket, a SIGPIPE signal is generated. The default
action for this signal is to terminate the process; hence vxrsyncd gets
terminated.

RESOLUTION:
The SIGPIPE signal is now handled in order to prevent the termination of
vxrsyncd.

* 3940039 (Tracking ID: 3897047)

SYMPTOM:
Filesystems are not mounted automatically on boot through systemd on RHEL7 and
SLES12.

DESCRIPTION:
When the systemd service tries to start all the filesystems in /etc/fstab, the
Veritas Volume Manager (VxVM) volumes are not started, because vxconfigd is
still not up. The VxVM volumes are started a little later in the boot process.
Since the volumes are not available, the filesystems are not mounted
automatically at boot.

RESOLUTION:
Registered the VxVM volumes with UDEV daemon of Linux so that the FS would be 
mounted when the VxVM volumes are started and discovered by udev.
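For illustration, an /etc/fstab entry of the kind affected (the device path
and mount point are placeholders, and the x-systemd.requires unit name is an
assumption that can vary by release; the fix makes such entries mount once
udev discovers the started volume):

```
# Example fstab line: a VxFS filesystem on a VxVM volume. The mount can only
# succeed after the volume device node exists and udev has registered it.
/dev/vx/dsk/mydg/datavol  /data  vxfs  _netdev,x-systemd.requires=vxvm-boot.service  0 0
```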

* 3940143 (Tracking ID: 3941037)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some .lock files under the /etc/vx
directory. Non-root users have access to these .lock files, and they may
accidentally modify, move, or delete them. Such actions may interfere with the
normal functioning of Veritas Volume Manager.

RESOLUTION:
This fix addresses the issue by masking the write permission on these .lock
files for non-root users.
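The permission masking described above can be demonstrated generically (on a
temp file, not on /etc/vx itself):

```shell
# Strip group/other write bits from a lock file so non-root users cannot
# modify it through the file's own permissions.
lockdir=$(mktemp -d)
touch "$lockdir/example.lock"
chmod 666 "$lockdir/example.lock"     # world-writable to begin with
chmod go-w "$lockdir/example.lock"    # mask write permission for non-root users
```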

Patch ID: VRTSaslapm 7.3.1.110

* 3944310 (Tracking ID: 3944312)

SYMPTOM:
VRTSaslapm package (rpm) doesn't function correctly on RHEL7.5.

DESCRIPTION:
Due to changes in the RHEL7.5 update, there are breakages in the APM (Array
Policy Module) kernel modules present in the VRTSaslapm package. Hence the
currently available VRTSaslapm doesn't function with RHEL7.5; the VRTSaslapm
code needs to be recompiled with the RHEL7.5 kernel.

RESOLUTION:
VRTSaslapm is recompiled with the RHEL7.5 kernel.

Patch ID: VRTSodm 7.3.1.200

* 3943732 (Tracking ID: 3938546)

SYMPTOM:
ODM module failed to load on RHEL7.5.

DESCRIPTION:
Since RHEL7.5 is new release therefore ODM module failed to load
on it.

RESOLUTION:
Added ODM support for RHEL7.5.

* 3939411 (Tracking ID: 3941018)

SYMPTOM:
VRTSodm driver will not load with 7.3.1.100 VRTSvxfs patch.

DESCRIPTION:
Need recompilation of VRTSodm due to recent changes in VRTSvxfs 
header files due to which some symbols are not being resolved.

RESOLUTION:
Recompiled the VRTSodm with new changes in VRTSvxfs header files.

Patch ID: VRTSvxfs 7.3.1.200

* 3935903 (Tracking ID: 3933763)

SYMPTOM:
Oracle was hung; both the PL/SQL session and ssh were hanging.

DESCRIPTION:
There is a case in the reuse code path where a dead loop is hit while
processing the inactivation thread.

RESOLUTION:
To fix this loop, inactivation of inodes is now attempted only once before a
new inode is allocated for structural/attribute inode processing.

* 3940266 (Tracking ID: 3940235)

SYMPTOM:
A hang might be observed if the filesystem gets disabled while ENOSPC handling
is being performed by inactive processing.
The stack trace might look like:

 cv_wait+0x3c()
 delay_common+0x70()
 vx_extfree1+0xc08()
 vx_extfree+0x228()
 vx_te_trunc_data+0x125c()
 vx_te_trunc+0x878()
 vx_trunc_typed+0x230()
 vx_trunc_tran2+0x104c()
 vx_trunc_tran+0x22c()
 vx_trunc+0xcf0()
 vx_inactive_remove+0x4ec()
 vx_inactive_tran+0x13a4()
 vx_local_inactive_list+0x14()
 vx_inactive_list+0x6e4()
 vx_workitem_process+0x24()
 vx_worklist_process+0x1ec()
 vx_worklist_thread+0x144()
 thread_start+4()

DESCRIPTION:
In the smapchange function, it is possible in case of races that the SMAP
records the old state as VX_EAU_FREE or VX_EAU_ALLOCATED, but the
corresponding EMAP is not updated. This happens if the concerned flag gets
reset to 0 by some other thread in between. It leads to an fm_dirtycnt leak,
which causes a hang some time afterwards.

RESOLUTION:
Code changes have been done to fix the issue by using a local variable instead
of the global dflag variable directly, which can get reset to 0.

* 3941773 (Tracking ID: 3928046)

SYMPTOM:
VxFS panic in the stack like below due to memory address not aligned:
void vxfs:vx_assemble_hdoffset+0x18
void vxfs:vx_assemble_opts+0x8c
void vxfs:vx_assemble_rwdata+0xf4
void vxfs:vx_gather_rwdata+0x58
void vxfs:vx_rwlock_putdata+0x2f8
void vxfs:vx_glm_cbfunc+0xe4
void vxfs:vx_glmlist_thread+0x164
unix:thread_start+4

DESCRIPTION:
The panic happened while copying piggyback data from the inode to the data
buffer for the rwlock under revoke processing. After some data had been
copied, the destination reached a 32-bit aligned address, but the next value
(the large directory freespace offset), which is defined as a 64-bit data
type, was accessed directly at that address. This caused a system panic due
to the misaligned memory access.

RESOLUTION:
The code has been changed to copy the data to the 32-bit aligned address
through bcopy() rather than accessing it directly.
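As a hedged illustration (invented helper names, not the VxFS source), the
difference between a direct 64-bit load and a byte-wise copy looks like this;
the byte-wise copy is what switching to bcopy() achieves, since bcopy()/memcpy()
have no alignment requirement:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: read a 64-bit offset from a buffer position that
 * is only guaranteed to be 32-bit aligned. */

/* Unsafe on strict-alignment CPUs (e.g. SPARC): a direct 64-bit load from
 * a 4-byte-aligned address faults with "memory address not aligned". */
uint64_t read_offset_direct(const unsigned char *p)
{
    return *(const uint64_t *)p;
}

/* Safe everywhere: copy byte-wise into an aligned local variable. */
uint64_t read_offset_copy(const unsigned char *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof v);   /* memcpy/bcopy never requires alignment */
    return v;
}
```

On x86 the direct load happens to work, which is why such bugs often surface
only on strict-alignment architectures.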

* 3942697 (Tracking ID: 3940846)

SYMPTOM:
While upgrading the file system from DLV 9 to DLV 10, vxupgrade fails
with the following error:

# vxupgrade -n 10 <mount point>
UX:vxfs vxupgrade: ERROR: V-3-22567: cannot upgrade <volname> - Not owner

DESCRIPTION:
While upgrading from DLV 9 to DLV 10 or later, the upgrade code path searches
for the mkfs version in the histlog. This change was introduced to perform
some specific operations while upgrading the file system. The issue occurs
only if mkfs was done on the file system with version 6 and the upgrade was
then performed, because the histlog conversion from DLV 6 to DLV 7 does not
propagate the mkfs version field. This issue occurs only on InfoScale 7.3.1
onwards.

RESOLUTION:
Code changes have been done to allow DLV upgrade even in cases where mkfs
version is not present in the histlog.

* 3943731 (Tracking ID: 3938544)

SYMPTOM:
VxFS module failed to load on RHEL7.5.

DESCRIPTION:
The VxFS module did not yet include support for the new RHEL7.5 release, so
it failed to load.

RESOLUTION:
Added VxFS support for RHEL7.5.

* 3933810 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archiver processes are running on a clustered
file system.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation, which
mainly happens when there are multiple archivers running on the same node.
The allocation pattern of the Oracle archiver processes is:

1. write the header with O_SYNC
2. ftruncate-up the file to its final size (a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all the allocations done in this manner are
internal allocations, i.e. allocations below the file size instead of
allocations past the file size. Internal allocations are done at most 8 pages
at a time, so if multiple processes do this, they all get these 8-page chunks
alternately and the file system becomes very fragmented.

RESOLUTION:
Added a tunable which, when set, allocates ZFOD extents when ftruncate tries
to increase the size of the file, instead of creating a hole. This eliminates
the allocations internal to the file size and thus the fragmentation. Also
fixed the earlier implementation of the same fix, which ran into locking
issues, and fixed a performance issue while writing from the secondary node.
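The first two steps of that allocation pattern can be sketched in user-space
C as below; this is an illustrative reproduction shape (the path and sizes
are made up), not Veritas code. The ftruncate-up in step 2 is what makes
every later write land below the file size:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative only: synchronous header write (step 1), then ftruncate-up
 * to the final size (step 2). Subsequent writes into the pre-truncated
 * range are allocations below the file size, i.e. the "internal"
 * allocations described above. Returns the resulting file size or -1. */
long archiver_prologue(const char *path, off_t final_size)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_SYNC | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    char header[512] = "archive header";
    if (write(fd, header, sizeof header) != (ssize_t)sizeof header) {
        close(fd);
        return -1;
    }
    if (ftruncate(fd, final_size) != 0) {  /* grow without writing data */
        close(fd);
        return -1;
    }

    struct stat st;
    fstat(fd, &st);
    close(fd);
    return (long)st.st_size;
}
```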

* 3933819 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after it is frozen during vxupgrade. A
full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its operation.
Corruption may be detected while the freeze is in progress, and the full fsck
flag may be set on the file system; however, this does not stop vxupgrade
from proceeding. At a later stage of vxupgrade, after structures related to
the new disk layout have been updated on disk, VxFS frees up and zeroes out
some of the old metadata inodes. If any error occurs after this point
(because of the full fsck flag being set), the file system needs to go back
completely to the previous version as of the time the full fsck flag was
set. Since the metadata corresponding to the previous version has already
been cleared, the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file
system during vxupgrade, and to disable the file system if an error occurs
after writing the new metadata to disk. This forces the newly written
metadata to be loaded into memory on the next mount.

* 3933820 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the
directory inode's ownership is switched between nodes. When ownership of the
directory inode comes back to the node that previously abdicated it, ACL
permissions were not inherited correctly for newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3933824 (Tracking ID: 3908785)

SYMPTOM:
System panic observed because of a null page address in the writeback
structure during kswapd processing.

DESCRIPTION:
The secfs2/encryptfs layers used the write VOP as a hook for when kswapd is
triggered to free a page. Ideally, kswapd should call the writepage()
routine, where the writeback structure is filled in correctly. When the
write VOP is called because of the hook in secfs2/encryptfs, the writeback
structures are cleared, resulting in a null page address.

RESOLUTION:
Code changes have been made to call the VxFS kswapd routine only if a valid
page address is present.

* 3933828 (Tracking ID: 3921152)

SYMPTOM:
Performance drop; a core dump shows threads in vx_dalloc_flush().

DESCRIPTION:
An implicit typecast error in vx_dalloc_flush() can cause this performance issue.

RESOLUTION:
The code is modified to do an explicit typecast.
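The readme does not show the exact expression, but the general hazard can be
sketched as below (hypothetical names, not the vx_dalloc_flush() source): an
implicit conversion silently narrows a 64-bit quantity to 32 bits, changing
the value the caller acts on:

```c
#include <stdint.h>

/* Hypothetical sketch: the return type implicitly truncates a 64-bit
 * byte count to 32 bits, so callers see a wrapped, much smaller value
 * for anything at or beyond 4 GiB. */
uint32_t dirty_bytes_implicit(uint64_t total, uint64_t flushed)
{
    return total - flushed;              /* implicit narrowing conversion */
}

/* The corrected shape: keep the full width, with any cast made explicit
 * so the intent (and the data loss, if any) is visible in the source. */
uint64_t dirty_bytes_explicit(uint64_t total, uint64_t flushed)
{
    return (uint64_t)(total - flushed);  /* explicit, no data loss */
}
```

Compilers typically accept the narrowing silently unless warnings such as
-Wconversion are enabled, which is why an explicit cast policy helps.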

* 3933834 (Tracking ID: 3931761)

SYMPTOM:
cluster wide hang may be observed in a race scenario in case freeze gets
initiated and there are multiple pending workitems in the worklist related to
lazy isize update workitems.

DESCRIPTION:
If the lazy_isize_enable tunable is ON and "ls -l" is executed frequently
from a non-writing node of the cluster, a huge number of workitems accumulate
for the worker threads to process. If a workitem that holds active level 1 is
enqueued after these workitems and a cluster-wide freeze is initiated, a
deadlock results: the worker threads get exhausted processing the lazy isize
update workitems, and the thread enqueued in the worklist never gets a
chance to be processed.

RESOLUTION:
Code changes have been made to handle this race condition.

* 3933843 (Tracking ID: 3926972)

SYMPTOM:
Once a node reboots or goes out of the cluster, the whole cluster can hang.

DESCRIPTION:
This is a three-way deadlock: a glock grant can block recovery while trying
to cache the grant against an inode. When it tries for the ilock, if that
lock is held by an hlock revoke that is waiting for a GLM lock (in this case
the cbuf lock), it cannot get it because a recovery is in progress. The
recovery cannot proceed because the glock grant thread blocked it.

Hence the whole cluster hangs.

RESOLUTION:
The fix is to avoid taking the ilock in GLM context if it is not available.

* 3933844 (Tracking ID: 3922259)

SYMPTOM:
A force umount hangs with a stack like this:
- vx_delay
- vx_idrop
- vx_quotaoff_umount2
- vx_detach_fset
- vx_force_umount
- vx_aioctl_common
- vx_aioctl
- vx_admin_ioctl
- vxportalunlockedkioctl
- vxportalunlockedioctl
- do_vfs_ioctl
- SyS_ioctl
- system_call_fastpath

DESCRIPTION:
An opened external quota file was preventing the force umount from continuing.

RESOLUTION:
Code has been changed so that an opened external quota file will be processed
properly during the force umount.

* 3933912 (Tracking ID: 3922986)

SYMPTOM:
System panic since Linux NMI Watchdog detected LOCKUP in CFS.

DESCRIPTION:
The VxFS buffer cache iodone routine interrupted the inode flush thread,
which was trying to acquire the CFS buffer hash lock while releasing the CFS
buffer. The iodone routine was in turn blocked by other threads on acquiring
the free list lock, and in that cycle the other threads were contending for
the CFS buffer hash lock with the inode flush thread. On Linux, the spinlock
is a FIFO ticket lock, so once the inode flush thread took a ticket on the
spinlock, the other threads could not acquire the lock. This caused a
deadlock.

RESOLUTION:
Code changes are made to ensure the CFS buffer hash lock is acquired with
interrupts disabled.

* 3934841 (Tracking ID: 3930267)

SYMPTOM:
Deadlock between fsq flush threads and writer threads.

DESCRIPTION:
In Linux, under certain circumstances (i.e. to account for dirty pages), a
writer thread takes a lock on the inode and starts flushing dirty pages,
which requires the page lock. If the fsq flush thread then starts flushing a
transaction on the same inode, it needs the inode lock held by the writer
thread. The page lock, in turn, is held by another writer thread that is
waiting for transaction space, which can only be freed by the fsq flush
thread. This leads to a deadlock among these three threads.

RESOLUTION:
Code is modified to add a new flag which will skip dirty page accounting.

* 3936286 (Tracking ID: 3936285)

SYMPTOM:
The fscdsconv command may fail the conversion for disk layout version 12 and
above. After exporting the file system for use on the specified target, it
fails to mount on that target with the error below:

# /opt/VRTS/bin/mount <vol> <mount-point>
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version

When importing the file system on the target for use on the same system, it
asks for a 'fullfsck' during mount. After the 'fullfsck', the file system
mounts successfully, but fsck gives the messages below:

# /opt/VRTS/bin/fsck -y -o full /dev/vx/rdsk/mydg/myvol
log replay in progress
intent log does not contain valid log entries
pass0 - checking structural files
fileset 1 primary-ilist inode 34 (SuperBlock)
                failed validation clear? (ynq)y
pass1 - checking inode sanity and blocks
rebuild structural files? (ynq)y
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
corrupted CUT entries, clear? (ynq)y
au 0 emap incorrect - fix? (ynq)y
OK to clear log? (ynq)y
flush fileset headers? (ynq)y
set state to CLEAN? (ynq)y

DESCRIPTION:
While checking the file system version in fscdsconv, the check for DLV 12
and above was missing, which triggered this issue.

RESOLUTION:
Code changes have been done to handle filesystem version 12 and above for
fscdsconv command.

* 3937536 (Tracking ID: 3940516)

SYMPTOM:
The file resize thread loops infinitely when a file is resized to a size
greater than 4TB.

DESCRIPTION:
Because of a vx_u32_t typecast in the vx_odm_resize function, resize threads
get stuck in an infinite loop.

RESOLUTION:
Removed the vx_u32_t typecast in vx_odm_resize() to handle such scenarios.
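A minimal sketch of the failure mode (invented names; the real loop lives in
vx_odm_resize()): if the running offset is held in a 32-bit type, it wraps
at 4 GiB and a `while (off < target)` condition can never be satisfied for
larger targets:

```c
#include <stdint.h>

/* Hypothetical sketch: grow toward 'target' in 'chunk'-sized steps with a
 * 32-bit offset. Returns 1 if the loop was detected to spin forever. */
int resize_loops_forever_32(uint64_t target, uint32_t chunk)
{
    uint32_t off = 0;                    /* the vx_u32_t-style offset */
    uint64_t guard = 0;
    while (off < target) {               /* off wraps to 0 at 4 GiB */
        off += chunk;
        if (++guard > 1000000)
            return 1;                    /* infinite loop detected */
    }
    return 0;
}

/* With a 64-bit offset (the fix removed the 32-bit cast), the same loop
 * terminates; here we count the iterations it takes. */
uint64_t resize_iterations_64(uint64_t target, uint64_t chunk)
{
    uint64_t off = 0, iters = 0;
    while (off < target) {
        off += chunk;
        iters++;
    }
    return iters;
}
```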

* 3938258 (Tracking ID: 3938256)

SYMPTOM:
When checking the file size through seek_hole, an incorrect offset/size is
returned if delayed allocation is enabled on the file.

DESCRIPTION:
From recent RHEL7 versions onwards, the grep command uses the seek_hole
feature to check the current file size and then reads data based on that
size. In VxFS, when dalloc is enabled, the extent is allocated to the file
later, but the file size is incremented as soon as the write completes. When
checking the file size in seek_hole, VxFS did not fully consider the dalloc
case and returned a stale size, based on the extents allocated to the file
rather than the actual file size, which resulted in reading less data than
expected.

RESOLUTION:
Code is modified so that VxFS now returns the correct size when dalloc is
enabled on a file and seek_hole is called on that file.
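The interface involved can be exercised from user space as below; a small
sketch, not VxFS code (the helper name and path are made up). For a fully
written file with no holes, lseek(fd, 0, SEEK_HOLE) reports end-of-file,
which is the size that grep-style readers rely on:

```c
#define _GNU_SOURCE          /* SEEK_HOLE/SEEK_DATA on glibc */
#include <fcntl.h>
#include <unistd.h>

/* Return the offset of the first hole at or after offset 0, or -1 on
 * error. For a file without holes this equals the file size (EOF). */
long first_hole_offset(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    long off = (long)lseek(fd, 0, SEEK_HOLE);
    close(fd);
    return off;
}
```

A filesystem that cannot enumerate holes is permitted to report the whole
file as data, in which case SEEK_HOLE still returns the file size.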

* 3939406 (Tracking ID: 3941034)

SYMPTOM:
During a forced umount, a VxFS worker thread may continuously spin on a CPU.

DESCRIPTION:
During a forced unmount, a VxFS worker thread needs a semaphore to drop the
super block reference, but that semaphore is held by the vxumount thread,
which is itself waiting for an event. This causes a softlockup panic on the
system, because the VxFS worker thread spins continuously on a CPU trying to
grab the semaphore.

RESOLUTION:
Code changes are done to fix this issue.

* 3940368 (Tracking ID: 3940268)

SYMPTOM:
A file system with disk layout version 13 might get disabled if the size of
the directory surpasses the vx_dexh_sz value.

DESCRIPTION:
When the LDH (Large Directory Hash) directory fills up and its buckets are
full, the size of the hash directory is extended. For this, a reorg inode is
created and the extent map of the LDH attribute inode is copied into the
reorg inode using the extent map reorg function. That function checks whether
the extent reorg structure was passed for the same inode; if not, the extent
copy does not proceed. The extent reorg structure is set up accordingly, but
while setting up the fileset index, the inode's i_fsetindex is used. From
disk layout version 13 onwards, the attribute inode is overlaid, and because
of these changes i_fsetindex is no longer set in the attribute inode and
remains 0. Hence the check in the extent map reorg function fails, resulting
in the file system being disabled.

RESOLUTION:
Code has been modified to pass correct fileset.

* 3940652 (Tracking ID: 3940651)

SYMPTOM:
A hang might be observed during a Disk Layout Version (DLV) upgrade with the vxupgrade command.

DESCRIPTION:
vxupgrade does a lookup on the history inode to identify the mkfs version.
In the CFS case, the lookup requires the RWLOCK or GLOCK on the inode.

RESOLUTION:
Code changes have been done to take RWLOCK and GLOCK on inode.

* 3940830 (Tracking ID: 3937042)

SYMPTOM:
Data corruption is seen when issuing writev with a mixture of named page and
anonymous page buffers.

DESCRIPTION:
During writes, VxFS prefaults all of the user buffers into the kernel and
decides the write length based on this prefault length. In the case of mixed
page buffers, VxFS issues the prefault separately for each page type, i.e.
for named pages and anonymous pages. This reduces the length to be written
and triggers the page create optimization. Since VxFS erroneously enables the
page create optimization, data corruption is seen on disk.

RESOLUTION:
Code is modified so that VxFS does not enable the page create optimization
when a short prefault is seen.
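The I/O shape that triggers the bug can be illustrated with plain writev();
this is a sketch with invented paths, not a reproduction of the VxFS
internals. One iovec entry is backed by a file mapping (a named page) and
the other by anonymous heap memory:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

/* Write 4 KiB from a file-backed (named) mapping plus 4 KiB from an
 * anonymous buffer in a single writev() call. Returns bytes written. */
long mixed_writev(const char *src, const char *dst)
{
    int sfd = open(src, O_RDONLY);
    int dfd = open(dst, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (sfd < 0 || dfd < 0)
        return -1;

    char *named = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, sfd, 0);
    char *anon = malloc(4096);
    if (named == MAP_FAILED || anon == NULL)
        return -1;
    memset(anon, 'A', 4096);

    struct iovec iov[2] = {
        { .iov_base = named, .iov_len = 4096 },  /* named (file-backed) page */
        { .iov_base = anon,  .iov_len = 4096 },  /* anonymous page */
    };
    long n = (long)writev(dfd, iov, 2);

    munmap(named, 4096);
    free(anon);
    close(sfd);
    close(dfd);
    return n;
}
```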

Patch ID: VRTSdbac-7.3.1.100

* 3944197 (Tracking ID: 3944179)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update
5 (RHEL7.5) is now introduced.

Patch ID: VRTSamf-7.3.1.100

* 3944196 (Tracking ID: 3944179)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update
5 (RHEL7.5) is now introduced.

Patch ID: VRTSvxfen-7.3.1.200

* 3944195 (Tracking ID: 3944179)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update
5 (RHEL7.5) is now introduced.

Patch ID: VRTSgab-7.3.1.100

* 3944182 (Tracking ID: 3944179)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update
5 (RHEL7.5) is now introduced.

Patch ID: VRTSllt-7.3.1.200

* 3944181 (Tracking ID: 3944179)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update
5 (RHEL7.5) is now introduced.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel7_x86_64-Patch-7.3.1.100.tar.gz to /tmp
2. Untar infoscale-rhel7_x86_64-Patch-7.3.1.100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel7_x86_64-Patch-7.3.1.100.tar.gz
    # tar xf /tmp/infoscale-rhel7_x86_64-Patch-7.3.1.100.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # pwd /tmp/hf
    # ./installVRTSinfoscale731P100 [<host1> <host2>...]

You can also install this patch together with the 7.3.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.3.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
#Manual installation is not supported


REMOVING THE PATCH
------------------
#Manual uninstallation is not supported


SPECIAL INSTRUCTIONS
--------------------
In case of an OS upgrade, it is recommended to install the RHEL7.5 support patch first and then carry out the OS upgrade.


OTHERS
------
NONE