infoscale-rhel7.7_x86_64-Patch-7.4.1.1300
Obsolete
The latest patch(es): infoscale-rhel7_x86_64-Patch-7.4.1.3100

 Basic information
Release type: Patch
Release date: 2019-08-06
OS update support: RHEL7 x86-64 Update 7
Technote: None
Documentation: None
Popularity: 10657 viewed
Download size: 260.08 MB
Checksum: 1504628598

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On RHEL7 x86-64
InfoScale Enterprise 7.4.1 On RHEL7 x86-64
InfoScale Foundation 7.4.1 On RHEL7 x86-64
InfoScale Storage 7.4.1 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patches:   Release date
infoscale-rhel7_x86_64-Patch-7.4.1.3100              2022-01-16
infoscale-rhel7_x86_64-Patch-7.4.1.2900              2021-10-27
infoscale-rhel7_x86_64-Patch-7.4.1.1900 (obsolete)   2020-04-24

This patch supersedes the following patches:         Release date
infoscale-rhel7_x86_64-Patch-7.4.1.1200 (obsolete)   2019-07-09

 Fixes the following incidents:
3970470, 3970482, 3973076, 3975897, 3977310, 3978184, 3978195, 3978208, 3978645, 3978646, 3978649, 3978678, 3979375, 3979397, 3979398, 3979400, 3979440, 3979462, 3979471, 3979475, 3979476, 3979656, 3980044, 3980457, 3980679, 3981028, 3981548, 3981628, 3981631, 3981738, 3982214, 3982215, 3982216, 3982217, 3982218

 Patch ID:
VRTSvxvm-7.4.1.1300-RHEL7
VRTSvxfs-7.4.1.1300-RHEL7
VRTSodm-7.4.1.1200-RHEL7
VRTSllt-7.4.1.1200-RHEL7
VRTSgab-7.4.1.1200-RHEL7
VRTSvxfen-7.4.1.1300-RHEL7
VRTSamf-7.4.1.1200-RHEL7
VRTSdbac-7.4.1.1200-RHEL7

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 1300 * * *
                         Patch Date: 2019-08-01


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 1300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSdbac
VRTSgab
VRTSllt
VRTSodm
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSllt-7.4.1.1200
* 3982214 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSodm-7.4.1.1200
* 3981631 (3981630) ODM module failed to load on RHEL7.7
Patch ID: VRTSvxfs-7.4.1.1300
* 3981548 (3980741) Code changes to prevent data corruption in delayed allocation enabled filesystem.
* 3981628 (3981627) VxFS module failed to load on RHEL7.7.
* 3981738 (3979693) Fix for vxupgrade failing to upgrade from DLV 7 to 8 and returning EINVAL
* 3970470 (3970480) A kernel panic occurs when writing to cloud files.
* 3970482 (3970481) A file system panic occurs if many inodes are being used.
* 3978645 (3975962) Mounting a VxFS file system with more than 64 PDTs may panic the server.
* 3978646 (3931761) Cluster wide hang may be observed in case of high workload.
* 3978649 (3978305) The vx_upgrade command causes VxFS to panic.
* 3979400 (3979297) A kernel panic occurs when installing VxFS on RHEL6.
* 3980044 (3980043) A file system corruption occurred during a filesystem mount operation.
Patch ID: VRTSvxvm-7.4.1.1300
* 3980679 (3980678) VxVM support on RHEL 7.7
* 3973076 (3968642) [VVR Encrypted] Intermittent vradmind hang on the new Primary
* 3975897 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3978184 (3868154) When DMP Native Support is set to ON, dmpnode with multiple VGs cannot be listed
properly in the 'vxdmpadm native ls' command
* 3978195 (3925345) /tmp/vx.* directories are frequently created due to a bug in vxvolgrp command.
* 3978208 (3969860) Event source daemon (vxesd) takes a long time to start when many LUNs (around 1700) are attached to the system.
* 3978678 (3907596) vxdmpadm setattr command gives error while setting the path attribute.
* 3979375 (3973364) I/O hang may occur when VVR Replication is enabled in synchronous mode.
* 3979397 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3979398 (3955979) I/O gets hang in case of synchronous Replication.
* 3979440 (3947265) Delay added in vxvm-startup script to wait for infiniband devices to get 
discovered leads to various issues.
* 3979462 (3964964) Soft lockup may happen in vxnetd because of invalid packets kept sending from port scan tool.
* 3979471 (3915523) Local disk from other node belonging to private DG(diskgroup) is exported to the
node when a private DG is imported on current 
node.
* 3979475 (3959986) Restarting the vxencryptd daemon may cause some IOs to be lost.
* 3979476 (3972679) vxconfigd kept crashing and couldn't start up.
* 3979656 (3975405) cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.
* 3980457 (3980609) Secondary node panic in server threads
* 3981028 (3978330) The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.
Patch ID: VRTSdbac-7.4.1.1200
* 3982218 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSamf-7.4.1.1200
* 3982217 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
Patch ID: VRTSvxfen-7.4.1.1300
* 3982216 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).
* 3977310 (3974739) VxFen is unable to identify SCSI3 disks in Nutanix environments.
Patch ID: VRTSgab-7.4.1.1200
* 3982215 (3982213) Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSllt-7.4.1.1200

* 3982214 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

Patch ID: VRTSodm-7.4.1.1200

* 3981631 (Tracking ID: 3981630)

SYMPTOM:
ODM module failed to load on RHEL7.7

DESCRIPTION:
RHEL7.7 is a new release, and it includes kernel changes that caused the ODM module to fail to load.

RESOLUTION:
Added code to support ODM on RHEL7.7.

Patch ID: VRTSvxfs-7.4.1.1300

* 3981548 (Tracking ID: 3980741)

SYMPTOM:
File data can be lost in a race scenario between two dalloc background flushers.

DESCRIPTION:
In a race between two dalloc background flushers, the data may be flushed to disk without the file size being updated accordingly, which can cause some bytes of data to be lost.

RESOLUTION:
Code changes have been done in dalloc code path to remove the possibility of flushing the data without updating the on-disk size.

* 3981628 (Tracking ID: 3981627)

SYMPTOM:
VxFS module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release, and it includes kernel changes that caused the VxFS module to fail to load.

RESOLUTION:
Added code to support VxFS on RHEL7.7.

* 3981738 (Tracking ID: 3979693)

SYMPTOM:
vxupgrade fails while upgrading from DLV 7 to DLV 8 with the following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/dg_share/sq_informatica - Invalid argument

DESCRIPTION:
While allocating RCQ inodes as part of a vxupgrade from DLV 7 to 8, only the first RCQ inode allocation should be done from the initial ilist extents; the rest can be allocated from anywhere. To implement the special allocation for the first RCQ inode, the VX_UPG_IALLOC_IEXT1 flag was checked in vx_upg_ialloc(). However, the code changes made through incident 3936138 removed this check, so all RCQ inodes were allocated in the same way. Since vx_upg_olt_inoalloc() handles the allocation only for the first RCQ inode and not the others, it returned EINVAL.

RESOLUTION:
Restored the check of the VX_UPG_IALLOC_IEXT1 flag in vx_upg_ialloc().
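
Notes:
With this fix, the upgrade completes without the EINVAL error. A minimal usage sketch (the mount point below is hypothetical): running vxupgrade with no options reports the current disk layout version, and '-n 8' performs the upgrade.
# vxupgrade /mnt/sq_informatica
# vxupgrade -n 8 /mnt/sq_informatica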

* 3970470 (Tracking ID: 3970480)

SYMPTOM:
A kernel panic occurs when writing to cloud files.

DESCRIPTION:
This issue occurs due to missing error value initialization.

RESOLUTION:
Initialization of the error value is done in the write code path, so that a proper error message is displayed when writing to cloud files.

* 3970482 (Tracking ID: 3970481)

SYMPTOM:
A file system panic occurs if many inodes are being used.

DESCRIPTION:
This issue occurs due to improperly managed ownership of inodes.

RESOLUTION:
The ownership of inodes in the case of a large inode count has been fixed.

* 3978645 (Tracking ID: 3975962)

SYMPTOM:
Mounting a VxFS file system with more than 64 PDTs may panic the server.

DESCRIPTION:
For large memory systems, the number of auto-tuned VMM buffers is huge. To accumulate these buffers, VxFS needs more PDTs. Currently up to 128 PDTs are supported. However, for more than 64 PDTs, VxFS fails to initialize the strategy routine and calls a wrong function in the mount code path causing the system to panic.

RESOLUTION:
VxFS has been updated to initialize strategy routine for more than 64 PDTs.

* 3978646 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze is initiated while there are multiple pending lazy isize update workitems in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is set to ON and the 'ls -l' command is executed frequently from a non-writing node of the cluster, a huge number of workitems accumulate for the worker threads to process. If any workitem with active level 1 is enqueued after these workitems and a cluster-wide freeze is initiated, a deadlock results: the worker threads get exhausted processing the lazy isize update workitems, and the workitem with active level 1 never gets processed. This causes the cluster to stop responding.

RESOLUTION:
VxFS has been updated to discard the blocking lazy isize update workitems if freeze is in progress.
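
Notes:
As an illustration, the tunable can be inspected and changed per file system with vxtunefs (a sketch; /mnt1 is a hypothetical VxFS mount point). Running vxtunefs with only the mount point prints the current tunable values, including lazy_isize_enable; the second command turns lazy isize updates off for that mount.
# vxtunefs /mnt1
# vxtunefs -o lazy_isize_enable=0 /mnt1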

* 3978649 (Tracking ID: 3978305)

SYMPTOM:
The vx_upgrade command causes VxFS to panic.

DESCRIPTION:
When the vx_upgrade command is executed, VxFS incorrectly accesses freed memory, and it panics if that memory has been paged out.

RESOLUTION:
The code is modified to make sure that VxFS does not access the freed memory locations.

* 3979400 (Tracking ID: 3979297)

SYMPTOM:
A kernel panic occurs when installing VxFS on RHEL6.

DESCRIPTION:
During VxFS installation, the fs_supers list is not initialized. While de-referencing the fs_supers pointer, the kernel gets a NULL value for the superblock address and panics.

RESOLUTION:
VxFS has been updated to initialize the fs_supers list during VxFS installation.

* 3980044 (Tracking ID: 3980043)

SYMPTOM:
During a filesystem mount operation, after the Intent log replay, a file system metadata corruption occurred.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay is updated to synchronously write secondary map and EAU summary to the disk.
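
Notes:
If such metadata corruption is suspected, a full fsck can be run on the unmounted file system; a sketch (the volume path is hypothetical):
# fsck -t vxfs -o full -y /dev/vx/rdsk/mydg/vol1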

Patch ID: VRTSvxvm-7.4.1.1300

* 3980679 (Tracking ID: 3980678)

SYMPTOM:
The VxVM module failed to load on RHEL 7.7.

DESCRIPTION:
RHEL 7.7 is a new release, so the VxVM module needs to be compiled against the RHEL 7.7 kernel.

RESOLUTION:
VxVM is now compiled with the RHEL 7.7 kernel bits.

* 3973076 (Tracking ID: 3968642)

SYMPTOM:
Intermittent vradmind hang on the new VVR Primary

DESCRIPTION:
vradmind tried to acquire a pthread write lock while the corresponding read lock was already held, due to race conditions seen while migrating the role on the new Primary; this led to intermittent vradmind hangs.

RESOLUTION:
Changes have been made to minimize the window for which the read lock is held and to ensure that it is released early, so that subsequent attempts to acquire the write lock succeed.

* 3975897 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users, which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions to all users, which is a
security hole. 
The files are created with default rw-rw-rw- (666) permission because the umask
is set to 0 while creating these files.

RESOLUTION:
Changed the umask to 022 while creating these files and fixed an incorrect open system call. Log files now have rw-r--r-- (644) permissions.
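
Notes:
The corrected permissions can be verified with a listing; each file should now show rw-r--r-- (644):
# ls -l /etc/vx/log/vxloggerd.log /var/adm/vx/logger.txt /var/adm/vx/kmsg.log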

* 3978184 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk, because Linux supports creating a VG on a whole disk as well as on a partition of a disk. This possibility was not handled in the code, so the output of 'vxdmpadm native ls' was garbled.

RESOLUTION:
The code now handles multiple VGs on a single disk.
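
Notes:
For reference, DMP Native Support is enabled and the dmpnode-to-VG mapping is listed with the following commands:
# vxdmpadm settune dmp_native_support=on
# vxdmpadm native ls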

* 3978195 (Tracking ID: 3925345)

SYMPTOM:
/tmp/vx.* directories are frequently created.

DESCRIPTION:
/tmp/vx.* directories are frequently created due to a bug in the vxvolgrp command.

RESOLUTION:
The vxvolgrp command has been fixed so that these directories are no longer created.

* 3978208 (Tracking ID: 3969860)

SYMPTOM:
Event source daemon (vxesd) takes a long time to start when many LUNs (around 1700) are attached to the system.

DESCRIPTION:
The event source daemon creates a configuration file, ddlconfig.info, with the help of the HBA API libraries. The configuration file is created by a child process while the parent process waits for the child to finish. If the number of LUNs is large, creating the configuration file takes correspondingly longer, so the parent process keeps waiting for the child process to complete the configuration and exit.

RESOLUTION:
Changes have been made to create the ddlconfig.info file in the background and let the parent exit immediately.

* 3978678 (Tracking ID: 3907596)

SYMPTOM:
vxdmpadm setattr command gives the below error while setting the path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux can change when the system is rebooted, so the persistent attributes of a device are stored using its persistent hardware path. The hardware paths are stored as symbolic links in the /dev/vx/.dmp directory and are obtained from /dev/disk/by-path using the path_id command. On SLES12, the command to extract the hardware path changed to path_id_compat. Because the command changed, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the persistent attributes were not being set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.
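
Notes:
The regenerated hardware-path links can be checked directly; if the symbolic links are present, setting persistent path attributes should succeed:
# ls -l /dev/vx/.dmp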

* 3979375 (Tracking ID: 3973364)

SYMPTOM:
In the case of the VVR (Veritas Volume Replicator) synchronous mode of replication with the TCP protocol, if there are any network issues, I/Os may hang for up to 15-20 minutes.

DESCRIPTION:
In VVR synchronous replication mode, if a node on the primary site does not receive the ACK (acknowledgement) message sent from the secondary within the TCP timeout period, I/O may hang until the TCP layer detects a timeout, which takes approximately 15-20 minutes.
This issue may happen frequently in a lossy network where the ACKs cannot be delivered to the primary due to network issues.

RESOLUTION:
A hidden tunable 'vol_vvr_tcp_keepalive' is added to allow users to enable TCP 'keepalive' for VVR data ports if the TCP timeout happens frequently.
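
Notes:
A usage sketch, assuming the hidden tunable is set through vxtune like other VxVM tunables; enable it only if TCP timeouts occur frequently, and consult Veritas support before changing hidden tunables:
# vxtune vol_vvr_tcp_keepalive 1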

* 3979397 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop" as per design cannot stop the iostat gathering
persistently. To avoid Performance & Memory crunch related issues, it is
generally recommended to stop the iostat gathering.There is a requirement
to provide such ability to stop/start the iostat gathering persistently
in those cases.

DESCRIPTION:
Today, the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this setting is not persistent. It is lost after a reboot, so the customer also needs to add the command to the init scripts at an appropriate place for a persistent effect.

RESOLUTION:
The code is modified to provide a tunable, "dmp_compute_iostats", which can start or stop iostat gathering persistently.

Notes:
Use the following command to start or stop iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on/off

* 3979398 (Tracking ID: 3955979)

SYMPTOM:
In the case of synchronous replication over TCP, if there are any network issues, I/Os may hang for up to 15-30 minutes.

DESCRIPTION:
When synchronous replication is used and, because of some network issue, the secondary cannot send network ACKs to the primary, I/O hangs on the primary waiting for those ACKs. In TCP mode, VVR depends on TCP to detect the timeout before the I/Os are drained; since there is no handling on the VVR side, I/Os hang until TCP triggers its timeout, which normally happens within 15-30 minutes.

RESOLUTION:
Code changes have been made to allow the user to set the TCP interval within which the timeout should be triggered.

* 3979440 (Tracking ID: 3947265)

SYMPTOM:
vxfen tends to fail, which creates split-brain issues.

DESCRIPTION:
Currently, to check whether InfiniBand devices are present, we check for certain modules, which on RHEL 7.4 are present by default.

RESOLUTION:
To check for InfiniBand devices, we now check for the /sys/class/infiniband directory, in which the device information is populated when InfiniBand devices are present.
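
Notes:
The same check can be performed manually; the directory exists and is populated with device entries only when InfiniBand devices are present:
# ls /sys/class/infiniband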

* 3979462 (Tracking ID: 3964964)

SYMPTOM:
vxnetd gets into a soft lockup when a port scan tool keeps sending packets over the network. The call trace looks like the following:

kmsg_sys_poll+0x6c/0x180 [vxio]
? poll_initwait+0x50/0x50
? poll_select_copy_remaining+0x150/0x150
? poll_select_copy_remaining+0x150/0x150
? schedule+0x29/0x70
? schedule_timeout+0x239/0x2c0
? task_rq_unlock+0x1a/0x20
? _raw_spin_unlock_bh+0x1e/0x20
? first_packet_length+0x151/0x1d0
? udp_ioctl+0x51/0x80
? inet_ioctl+0x8a/0xa0
? kmsg_sys_rcvudata+0x7e/0x170 [vxio]
nmcom_server_start+0x7be/0x4810 [vxio]

DESCRIPTION:
When a non-NMCOM packet is received, vxnetd skips it and goes back to poll for more packets without giving up the CPU, so if such packets keep arriving, vxnetd may get into a soft-lockup state.

RESOLUTION:
A small delay has been added to vxnetd to fix the issue.

* 3979471 (Tracking ID: 3915523)

SYMPTOM:
When a private DG (disk group) is imported on the current node, a local disk from another node that belongs to a different private DG is exported to the current node.

DESCRIPTION:
When we try to import a DG, all the disks belonging to the DG are automatically exported to the current node to make sure that the DG gets imported. This is done so that local disks behave the same way as SAN disks. Because all the disks in the DG are exported, disks that belong to a DG with the same name but to a different private DG on another node also get exported to the current node. This leads to the wrong disk being selected when the DG is imported.

RESOLUTION:
Instead of the DG name, the DGID (disk group ID) is used to decide whether a disk needs to be exported.

* 3979475 (Tracking ID: 3959986)

SYMPTOM:
Some I/Os may not be written to disk when the vxencryptd daemon is restarted.

DESCRIPTION:
If the vxencryptd daemon is restarted, some of the I/Os still waiting in the pending queue are lost and are not written to the underlying disk.

RESOLUTION:
Code changes have been made to restart the I/Os in the pending queue once vxencryptd is started.

* 3979476 (Tracking ID: 3972679)

SYMPTOM:
vxconfigd kept crashing and could not start up, with the following stack:
(gdb) bt
#0 0x000000000055e5c2 in request_loop ()
#1 0x0000000000479f06 in main ()
The disassembled code looks like the following:
0x000000000055e5ae <+1160>: callq 0x55be64 <vold_request_poll_unlock>
0x000000000055e5b3 <+1165>: mov 0x582aa6(%rip),%rax # 0xae1060
0x000000000055e5ba <+1172>: mov (%rax),%rax
0x000000000055e5bd <+1175>: test %rax,%rax
0x000000000055e5c0 <+1178>: je 0x55e60e <request_loop+1256>
=> 0x000000000055e5c2 <+1180>: mov 0x65944(%rax),%edx

DESCRIPTION:
vxconfigd treats the message buffer as valid as long as it is non-NULL. When vxconfigd failed to get shared memory for the message buffer, the OS returned -1. In this case, vxconfigd accessed an invalid address, which caused a segmentation fault.

RESOLUTION:
The code changes are done to check the message buffer properly before accessing it.

* 3979656 (Tracking ID: 3975405)

SYMPTOM:
cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.

DESCRIPTION:
When a slave node initiates a write request on an RVG, the I/O is shipped to the master node (VVR write-ship). If the I/O fails, the VKE_EIO error is passed back from the master node as the response to the write-ship request. Because this error is not handled, VxVM continues to retry the I/O operation.

RESOLUTION:
VxVM is updated to handle VKE_EIO error properly.

* 3980457 (Tracking ID: 3980609)

SYMPTOM:
The logowner node at the DR secondary site is rebooted.

DESCRIPTION:
Freed memory is accessed in the server thread code path on the secondary site.

RESOLUTION:
Code changes have been made to fix the access to freed memory.

* 3981028 (Tracking ID: 3978330)

SYMPTOM:
The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.

DESCRIPTION:
Some changes were made in the Linux kernel from version 4.4 onwards, due to which the values of these tunables could not persist after a reboot.

RESOLUTION:
VxVM has been updated to make the tunable values persistent upon reboot.
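
Notes:
As a sketch, a tunable value can be set and then verified after a reboot, for example with the DMP tunable mentioned earlier in this document:
# vxdmpadm settune dmp_compute_iostats=on
# vxdmpadm gettune dmp_compute_iostats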

Patch ID: VRTSdbac-7.4.1.1200

* 3982218 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

Patch ID: VRTSamf-7.4.1.1200

* 3982217 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

Patch ID: VRTSvxfen-7.4.1.1300

* 3982216 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.

* 3977310 (Tracking ID: 3974739)

SYMPTOM:
VxFen is unable to identify SCSI3 disks in Nutanix environments.

DESCRIPTION:
The fencing module did not work in Nutanix environments, because it was not able to uniquely identify the Nutanix disks.

RESOLUTION:
VxFen has been updated to correctly identify Nutanix disks so that fencing can work in Nutanix environments.

Patch ID: VRTSgab-7.4.1.1200

* 3982215 (Tracking ID: 3982213)

SYMPTOM:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux 7 Update 7 (RHEL7.7).

DESCRIPTION:
Veritas InfoScale Availability does not support Red Hat Enterprise Linux versions later than RHEL7 Update 6.

RESOLUTION:
Veritas InfoScale Availability support for Red Hat Enterprise Linux 7 Update 7 (RHEL7.7) is now introduced.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel7_x86_64-Patch-7.4.1.1300.tar.gz to /tmp
2. Untar infoscale-rhel7_x86_64-Patch-7.4.1.1300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel7_x86_64-Patch-7.4.1.1300.tar.gz
    # tar xf /tmp/infoscale-rhel7_x86_64-Patch-7.4.1.1300.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale741P1300 [<host1> <host2>...]

You can also install this patch together with the 7.4.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE