infoscale-rhel6_x86_64-Patch-7.4.1.1200

 Basic information
Release type: Patch
Release date: 2019-07-09
OS update support: None
Technote: None
Documentation: None
Popularity: 667 viewed    119 downloaded
Download size: 495.32 MB
Checksum: 482119789

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On RHEL6 x86-64
InfoScale Enterprise 7.4.1 On RHEL6 x86-64
InfoScale Foundation 7.4.1 On RHEL6 x86-64
InfoScale Storage 7.4.1 On RHEL6 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
3962607, 3966474, 3968449, 3970470, 3970482, 3971555, 3973076, 3973227, 3975142, 3975500, 3975897, 3976707, 3977099, 3977310, 3978184, 3978195, 3978208, 3978645, 3978646, 3978649, 3978678, 3979375, 3979397, 3979398, 3979400, 3979440, 3979462, 3979471, 3979475, 3979476, 3979656, 3980021, 3980044, 3980457, 3981028, 3981234

 Patch ID:
VRTSvlic-4.01.74.003-RHEL6
VRTSvxfs-7.4.1.1200-RHEL6
VRTSvcsag-7.4.1.1100-RHEL6
VRTSvcsea-7.4.1.1100-RHEL6
VRTSvcs-7.4.1.1100-RHEL6
VRTSsfmh-7.4.0.401-0
VRTSvxvm-7.4.1.1200-RHEL6
VRTSaslapm-7.4.1.1200-RHEL6
VRTSvxfen-7.4.1.1200-RHEL6

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 1200 * * *
                         Patch Date: 2019-07-04


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 1200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL6 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSaslapm
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
InfoScale Availability 7.4.1
InfoScale Enterprise 7.4.1
InfoScale Foundation 7.4.1
InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.4.1.1200
* 3970470 (3970480) A kernel panic occurs when writing to cloud files.
* 3970482 (3970481) A file system panic occurs if many inodes are being used.
* 3978645 (3975962) Mounting a VxFS file system with more than 64 PDTs may panic the server.
* 3978646 (3931761) Cluster wide hang may be observed in case of high workload.
* 3978649 (3978305) The vx_upgrade command causes VxFS to panic.
* 3979400 (3979297) A kernel panic occurs when installing VxFS on RHEL6.
* 3980044 (3980043) A file system corruption occurred during a filesystem mount operation.
Patch ID: VRTSvxvm-7.4.1.1200
* 3973076 (3968642) [VVR Encrypted] Intermittent vradmind hang on the new Primary.
* 3975897 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3978184 (3868154) When DMP Native Support is set to ON, dmpnode with multiple VGs cannot be listed
properly in the 'vxdmpadm native ls' command
* 3978195 (3925345) /tmp/vx.* directories are frequently created due to a bug in vxvolgrp command.
* 3978208 (3969860) Event source daemon (vxesd) takes a long time to start when a large number of LUNs (around 1700) are attached to the system.
* 3978678 (3907596) vxdmpadm setattr command gives error while setting the path attribute.
* 3979375 (3973364) I/O hang may occur when VVR Replication is enabled in synchronous mode.
* 3979397 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3979398 (3955979) I/O gets hang in case of synchronous Replication.
* 3979440 (3947265) The delay added in the vxvm-startup script to wait for InfiniBand devices to be discovered leads to various issues.
* 3979462 (3964964) A soft lockup may happen in vxnetd because a port scan tool keeps sending invalid packets.
* 3979471 (3915523) A local disk from another node, belonging to a private DG (disk group), is exported to the current node when a private DG is imported on the current node.
* 3979475 (3959986) Restarting the vxencryptd daemon may cause some IOs to be lost.
* 3979476 (3972679) vxconfigd kept crashing and couldn't start up.
* 3979656 (3975405) cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.
* 3980457 (3980609) Secondary node panic in server threads
* 3981028 (3978330) The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.
* 3976707 (3976132) Veritas Volume Manager (VxVM) support for Nutanix storage devices.
Patch ID: vom-HF074401
* 3981234 (3981235) VRTSsfmh package for InfoScale 7.4.1U1
Patch ID: VRTSvlic-4.01.74.003
* 3975500 (3975501) Providing Patch Release for VRTSvlic
Patch ID: VRTSvcsea-7.4.1.1100
* 3968449 (3967732) An incomplete error code is logged while bringing up the Oracle agent.
Patch ID: VRTSvcsag-7.4.1.1100
* 3962607 (3962498) Unable to start multiple SMB server instances.
* 3966474 (3929982) Registration keys fail to refresh on the CP servers.
* 3971555 (3969951) RVGSharedPri failed to come online on CVM slave.
* 3975142 (3974577) AMF fails to register disk group online events.
Patch ID: VRTSvcs-7.4.1.1100
* 3973227 (3978724) CmdServer starts every time the HAD starts, and keeps one port open although the service running on that port is no longer needed.
* 3977099 (3977098) VCS does not support non-evacuation of the service groups during a system restart.
* 3980021 (3969838) A failover Service Group can be brought online on one node even when it is ONLINE on another node
Patch ID: VRTSvxfen-7.4.1.1200
* 3977310 (3974739) VxFen should be able to identify SCSI3 disks in Nutanix environments.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.4.1.1200

* 3970470 (Tracking ID: 3970480)

SYMPTOM:
A kernel panic occurs when writing to cloud files.

DESCRIPTION:
This issue occurs due to missing error value initialization.

RESOLUTION:
Initialization of the error value is done in the write code path, so that a proper error message is displayed when writing to cloud files.

* 3970482 (Tracking ID: 3970481)

SYMPTOM:
A file system panic occurs if many inodes are being used.

DESCRIPTION:
This issue occurs due to improperly managed ownership of inodes.

RESOLUTION:
The ownership of inodes in the case of a large inode count has been fixed.

* 3978645 (Tracking ID: 3975962)

SYMPTOM:
Mounting a VxFS file system with more than 64 PDTs may panic the server.

DESCRIPTION:
For large memory systems, the number of auto-tuned VMM buffers is huge. To accumulate these buffers, VxFS needs more PDTs. Currently up to 128 PDTs are supported. However, for more than 64 PDTs, VxFS fails to initialize the strategy routine and calls a wrong function in the mount code path causing the system to panic.

RESOLUTION:
VxFS has been updated to initialize strategy routine for more than 64 PDTs.

* 3978646 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze is initiated while multiple lazy isize update workitems are pending in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is set to ON and the 'ls -l' command is executed frequently from a non-writing node of the cluster, a huge number of workitems accumulate for the worker threads to process. If a workitem with active level 1 is enqueued after these workitems and a cluster-wide freeze is initiated, a deadlock occurs: the worker threads get exhausted processing the lazy isize update workitems, and the workitem with active level 1 never gets processed. This causes the cluster to stop responding.

RESOLUTION:
VxFS has been updated to discard the blocking lazy isize update workitems if freeze is in progress.

* 3978649 (Tracking ID: 3978305)

SYMPTOM:
The vx_upgrade command causes VxFS to panic.

DESCRIPTION:
When the vx_upgrade command is executed, VxFS incorrectly accesses the freed memory, and then it panics if the memory is paged-out.

RESOLUTION:
The code is modified to make sure that VxFS does not access the freed memory locations.

* 3979400 (Tracking ID: 3979297)

SYMPTOM:
A kernel panic occurs when installing VxFS on RHEL6.

DESCRIPTION:
During VxFS installation, the fs_supers list is not initialized. While de-referencing the fs_supers pointer, the kernel gets a NULL value for the superblock address and panics.

RESOLUTION:
VxFS has been updated to initialize the fs_supers list during VxFS installation.

* 3980044 (Tracking ID: 3980043)

SYMPTOM:
During a filesystem mount operation, after the Intent log replay, a file system metadata corruption occurred.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay has been updated to write the secondary map and the EAU summaries to the disk synchronously.

Patch ID: VRTSvxvm-7.4.1.1200

* 3973076 (Tracking ID: 3968642)

SYMPTOM:
Intermittent vradmind hang on the new VVR Primary

DESCRIPTION:
During a race condition seen while migrating the Primary role, vradmind attempted to acquire a pthread write lock while the corresponding read lock was already held, which led to intermittent vradmind hangs on the new Primary.

RESOLUTION:
Changes have been made to minimize the window for which the read lock is held and to release it early, so that subsequent attempts to acquire the write lock succeed.

* 3975897 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users, which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions for all users, which is a security hole. The files are created with the default rw-rw-rw- (666) permissions because the umask is set to 0 while creating these files.

RESOLUTION:
The umask has been changed to 022 while creating these files, and an incorrect open system call has been fixed. Log files now have rw-r--r-- (644) permissions.
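The permission change can be illustrated with a generic shell sketch (the file names below are placeholders, not the actual VxVM log paths):

```shell
# With umask 0 a newly created file gets rw-rw-rw- (666); with umask 022 the
# group/other write bits are masked off, yielding rw-r--r-- (644).
tmp=$(mktemp -d)
(umask 0; touch "$tmp/old-behavior.log")       # simulates the unpatched daemon
(umask 022; touch "$tmp/patched-behavior.log") # simulates the patched daemon
stat -c '%a' "$tmp/old-behavior.log"           # prints 666
stat -c '%a' "$tmp/patched-behavior.log"       # prints 644
rm -rf "$tmp"
```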

* 3978184 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk, because Linux supports creating a VG on a whole disk as well as on a partition of a disk. This possibility was not handled in the code, so the output of 'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3978195 (Tracking ID: 3925345)

SYMPTOM:
/tmp/vx.* directories are frequently created.

DESCRIPTION:
/tmp/vx.* directories are frequently created due to a bug in vxvolgrp command.

RESOLUTION:
The vxvolgrp command has been fixed so that these directories are no longer created.

* 3978208 (Tracking ID: 3969860)

SYMPTOM:
Event source daemon (vxesd) takes a long time to start when a large number of LUNs (around 1700) are attached to the system.

DESCRIPTION:
The event source daemon creates a configuration file, ddlconfig.info, with the help of the HBA API libraries. The configuration file is created by a child process while the parent process waits for the child to finish. If the number of LUNs is large, creating the configuration file takes correspondingly longer, so the parent process keeps waiting for the child process to complete the configuration and exit.

RESOLUTION:
Changes have been made to create the ddlconfig.info file in the background and to let the parent exit immediately.
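The approach in the resolution, detaching the slow child task so the parent is not blocked, can be sketched generically in shell. The function name and output file below are stand-ins, not the actual vxesd code:

```shell
out=$(mktemp)
slow_task() {
    sleep 0.3                # stands in for the lengthy HBA scan
    echo done > "$out"       # stands in for writing ddlconfig.info
}
slow_task &                  # run in the background; the parent is not blocked
echo "parent continues immediately"
wait                         # only for this demo; the real parent exits instead
cat "$out"                   # prints: done
rm -f "$out"
```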

* 3978678 (Tracking ID: 3907596)

SYMPTOM:
vxdmpadm setattr command gives the below error while setting the path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux change when the system is rebooted, so the persistent attributes of a device are stored using its persistent hardware path. The hardware paths are stored as symbolic links in the /dev/vx/.dmp directory and are obtained from /dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changed to path_id_compat. Because the command changed, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the persistent attributes were not being set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.
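The persistent-path mechanism relies on symbolic links that map a stable hardware path to the (unstable) device name. A minimal illustration with made-up names, mimicking the layout under /dev/disk/by-path and /dev/vx/.dmp:

```shell
tmp=$(mktemp -d)
touch "$tmp/sdc"                                # stands in for an unstable device name
ln -s "$tmp/sdc" "$tmp/pci-0000:00:1f.2-ata-1"  # persistent hardware-path symlink
readlink "$tmp/pci-0000:00:1f.2-ata-1"          # resolves back to the device name
rm -rf "$tmp"
```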

* 3979375 (Tracking ID: 3973364)

SYMPTOM:
In the VVR (Veritas Volume Replicator) synchronous mode of replication with the TCP protocol, if there are any network issues, I/Os may hang for up to 15-20 minutes.

DESCRIPTION:
In the VVR synchronous replication mode, if a node on the Primary site does not receive the ACK (acknowledgement) message sent from the Secondary within the TCP timeout period, I/O may hang until the TCP layer detects a timeout, which takes approximately 15-20 minutes. This issue may happen frequently in a lossy network where the ACKs cannot be delivered to the Primary.

RESOLUTION:
A hidden tunable 'vol_vvr_tcp_keepalive' is added to allow users to enable TCP 'keepalive' for VVR data ports if the TCP timeout happens frequently.
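For context, the kernel's default keepalive idle time can be inspected as shown below. This is the system-wide Linux TCP setting; the vol_vvr_tcp_keepalive tunable enables keepalive specifically for the VVR data ports:

```shell
# Read the default TCP keepalive idle time (seconds) on a Linux system.
f=/proc/sys/net/ipv4/tcp_keepalive_time
if [ -r "$f" ]; then
    cat "$f"    # commonly 7200 (2 hours) by default
else
    echo "tcp_keepalive_time not available on this system"
fi
```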

* 3979397 (Tracking ID: 3899568)

SYMPTOM:
By design, "vxdmpadm iostat stop" cannot stop iostat gathering persistently. To avoid performance and memory crunch issues, it is generally recommended to stop iostat gathering, so the ability to start or stop iostat gathering persistently is required.

DESCRIPTION:
Today DMP iostat daemon is stopped using - "vxdmpadm iostat stop". but this 
is not persistent setting. After reboot this would be lost and hence 
customer
needs to also have to put this in init scripts at appropriate place for
persistent effect.

RESOLUTION:
The code is modified to provide a tunable, dmp_compute_iostats, which can start or stop iostat gathering persistently.

Notes:
Use the following command to start or stop iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on/off

* 3979398 (Tracking ID: 3955979)

SYMPTOM:
In the synchronous mode of replication with TCP, if there are any network-related issues, I/Os may hang for up to 15-30 minutes.

DESCRIPTION:
When synchronous replication is used and, because of network issues, the Secondary is unable to send network ACKs to the Primary, I/O hangs on the Primary waiting for those ACKs. In TCP mode, VVR depends on TCP to time out before the I/Os are drained; because there is no handling on the VVR side, I/Os hang until TCP triggers its timeout, which normally happens within 15-30 minutes.

RESOLUTION:
Code changes have been made to allow the user to set the interval within which the TCP timeout is triggered.

* 3979440 (Tracking ID: 3947265)

SYMPTOM:
vxfen tends to fail, which creates split-brain issues.

DESCRIPTION:
Currently, to check whether InfiniBand devices are present, the startup script checks for certain kernel modules, which are present by default on RHEL 7.4.

RESOLUTION:
To check for InfiniBand devices, the script now checks for the /sys/class/infiniband directory, which is populated with device information when InfiniBand devices are present.
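The new detection logic can be approximated in shell as follows (a sketch of the check described above, not the actual vxvm-startup code):

```shell
# InfiniBand devices are treated as present only if the sysfs directory exists.
if [ -d /sys/class/infiniband ]; then
    echo "InfiniBand devices present"
else
    echo "no InfiniBand devices detected"
fi
```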

* 3979462 (Tracking ID: 3964964)

SYMPTOM:
vxnetd gets into a soft lockup when a port scan tool keeps sending packets over the network. A call trace like the following is observed:

kmsg_sys_poll+0x6c/0x180 [vxio]
? poll_initwait+0x50/0x50
? poll_select_copy_remaining+0x150/0x150
? poll_select_copy_remaining+0x150/0x150
? schedule+0x29/0x70
? schedule_timeout+0x239/0x2c0
? task_rq_unlock+0x1a/0x20
? _raw_spin_unlock_bh+0x1e/0x20
? first_packet_length+0x151/0x1d0
? udp_ioctl+0x51/0x80
? inet_ioctl+0x8a/0xa0
? kmsg_sys_rcvudata+0x7e/0x170 [vxio]
nmcom_server_start+0x7be/0x4810 [vxio]

DESCRIPTION:
When a non-NMCOM packet is received, vxnetd skips it and goes back to poll for more packets without yielding the CPU. If such packets keep arriving, vxnetd may enter a soft lockup.

RESOLUTION:
A small delay has been added to the vxnetd polling loop to fix the issue.
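The fix can be sketched generically: a receive loop that skips unwanted input should yield the CPU briefly before polling again, instead of spinning. The loop below is an illustration, not the vxnetd source:

```shell
polls=0
while [ "$polls" -lt 3 ]; do
    # ...receive a packet; if it is not an NMCOM packet, skip it...
    sleep 0.01               # the small delay that prevents a soft lockup
    polls=$((polls + 1))
done
echo "polled $polls times without monopolizing the CPU"
```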

* 3979471 (Tracking ID: 3915523)

SYMPTOM:
A local disk from another node, belonging to a private DG (disk group), is exported to the current node when a private DG is imported on the current node.

DESCRIPTION:
When a DG is imported, all the disks belonging to the DG are automatically exported to the current node to ensure that the import succeeds. This is done so that local disks behave the same way as SAN disks. Because all disks in the DG are exported, disks that belong to a different private DG with the same name on another node may also get exported to the current node. This can cause the wrong disk to be selected during the import.

RESOLUTION:
Instead of the DG name, the DGID (disk group ID) is now used to decide whether a disk needs to be exported.

* 3979475 (Tracking ID: 3959986)

SYMPTOM:
Some I/Os may not be written to disk when the vxencryptd daemon is restarted.

DESCRIPTION:
When the vxencryptd daemon is restarted, some of the I/Os still waiting in the pending queue are lost and not written to the underlying disk.

RESOLUTION:
Code changes have been made to restart the I/Os in the pending queue once vxencryptd is started.

* 3979476 (Tracking ID: 3972679)

SYMPTOM:
vxconfigd kept crashing and could not start up, with the following stack:
(gdb) bt
#0 0x000000000055e5c2 in request_loop ()
#1 0x0000000000479f06 in main ()
The corresponding disassembled code is as follows:
0x000000000055e5ae <+1160>: callq 0x55be64 <vold_request_poll_unlock>
0x000000000055e5b3 <+1165>: mov 0x582aa6(%rip),%rax # 0xae1060
0x000000000055e5ba <+1172>: mov (%rax),%rax
0x000000000055e5bd <+1175>: test %rax,%rax
0x000000000055e5c0 <+1178>: je 0x55e60e <request_loop+1256>
=> 0x000000000055e5c2 <+1180>: mov 0x65944(%rax),%edx

DESCRIPTION:
vxconfigd treats the message buffer as valid if it is non-NULL. When vxconfigd fails to get shared memory for the message buffer, the OS returns -1. In this case, vxconfigd accesses an invalid address and causes a segmentation fault.

RESOLUTION:
The code changes are done to check the message buffer properly before accessing it.

* 3979656 (Tracking ID: 3975405)

SYMPTOM:
cvm_clus fails to stop even after "hastop -all" is triggered, and so the cluster nodes get stuck in the LEAVING state.

DESCRIPTION:
When a slave node initiates a write request on an RVG, the I/O is shipped to the master node (VVR write-ship). If the I/O fails, the VKE_EIO error is passed back from the master node as the response to the write-ship request. Because this error is not handled, VxVM continues to retry the I/O operation.

RESOLUTION:
VxVM is updated to handle the VKE_EIO error properly.

* 3980457 (Tracking ID: 3980609)

SYMPTOM:
The logowner node on the DR secondary site is rebooted.

DESCRIPTION:
Freed memory is accessed in the server-thread code path on the secondary site.

RESOLUTION:
Code changes have been made to prevent access to freed memory.

* 3981028 (Tracking ID: 3978330)

SYMPTOM:
The values of the VxVM and the VxDMP tunables do not persist after reboot with 4.4 and later versions of the Linux kernel.

DESCRIPTION:
Some changes were made in the Linux kernel from version 4.4 onwards, due to which the values of these tunables could not persist after a reboot.

RESOLUTION:
VxVM has been updated to make the tunable values persistent upon reboot.

* 3976707 (Tracking ID: 3976132)

SYMPTOM:
VxVM does not have support for Nutanix storage devices.

DESCRIPTION:
An Array Support Library (ASL) has been added to support Nutanix storage devices with VxVM.

RESOLUTION:
A new ASL (libvxnutanix.so) has been released to support Nutanix devices.

Patch ID: vom-HF074401

* 3981234 (Tracking ID: 3981235)

SYMPTOM:
N/A

DESCRIPTION:
VRTSsfmh package for InfoScale 7.4.1U1

RESOLUTION:
N/A

Patch ID: VRTSvlic-4.01.74.003

* 3975500 (Tracking ID: 3975501)

SYMPTOM:
Providing Patch Release for VRTSvlic 7.4.1

DESCRIPTION:
Providing Patch Release for VRTSvlic 7.4.1.

RESOLUTION:
Providing Patch Release for VRTSvlic

Patch ID: VRTSvcsea-7.4.1.1100

* 3968449 (Tracking ID: 3967732)

SYMPTOM:
While bringing up the Oracle agent, the following error is logged: "VCS ERROR V-16-200 Oracle:oraerror.dat did not have records that could be parsed."

DESCRIPTION:
The log category in the error code indicates the component or feature for which messages are logged. If the oraerror.dat file that is available at "/opt/VRTSagents/ha/bin/Oracle" is renamed or removed, the log category is not set for the Oracle errors.  Therefore, an incomplete error code is logged. For example, error with code V-16-200 is logged instead of V-16-20002-200.

RESOLUTION:
The VCS agent for Oracle is updated to set log category for Oracle errors.

Patch ID: VRTSvcsag-7.4.1.1100

* 3962607 (Tracking ID: 3962498)

SYMPTOM:
Unable to start multiple SMB server instances.

DESCRIPTION:
The systemctl command uses the default location for creating the PID file. When multiple Samba servers are started, the command cannot create the PID file at custom locations. Therefore, multiple Samba server instances cannot start on the same node.

RESOLUTION:
The Samba Agent is enhanced to allow multiple instances of SMB server to start.

* 3966474 (Tracking ID: 3929982)

SYMPTOM:
Registration keys fail to refresh on the CP servers.

DESCRIPTION:
When CP server registration keys are missing and the ActionOnCoordPointFault attribute is set to RefreshRegistrations, the CoordPoint agent uses the vxfenswap utility to refresh the registration keys. However, vxfenswap performs certain operations that may not be supported in some customer environments. As a result, the registration keys fail to refresh on the CP servers.

RESOLUTION:
The CoordPoint agent is enhanced to support the direct refresh functionality. The agent provides the ActionOnCoordPointFault attribute that you can set to the value DirectRefreshRegistrations. If you set the attribute to this value, the agent bypasses the vxfenswap utility and refreshes the missing registration keys directly. It does so by using the cpsadm commands on the coordination points, irrespective of the value of FaultTolerance. This attribute is designed to be used in CP server-based fencing where all coordination points are servers.

* 3971555 (Tracking ID: 3969951)

SYMPTOM:
RVGSharedPri failed to come online on CVM slave.

DESCRIPTION:
The RVGSharedPri agent checks the master-slave configuration. If the node is a CVM master, RVGSharedPri agent performs migrate operation that in turn changes the replication role from secondary to primary. If the node is a CVM slave, it comes online only if the replication role is primary; otherwise, it faults the corresponding resource. The issue occurs if the agent attempts to come online on the CVM slave node first, before the CVM master node changes its replication role.

RESOLUTION:
This hotfix addresses the issue by adding a coordination mechanism between the CVM master and the CVM slave. After the hotfix is applied, the CVM slave waits to change its own replication role until the CVM master changes its role from secondary to primary.

* 3975142 (Tracking ID: 3974577)

SYMPTOM:
AMF fails to register disk group online events.

DESCRIPTION:
AMF fails to register disk group online events when the disk group name is longer than 24 characters. This issue occurs due to the small buffer used for checking the disk group state during registration.

RESOLUTION:
The buffer size is increased to accommodate disk group names that are longer than 24 characters.
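The underlying defect can be illustrated generically: a fixed 24-character buffer silently truncates a longer name, so subsequent lookups no longer match. The disk group name below is hypothetical:

```shell
dgname="averylongdiskgroupname_production_01"   # hypothetical, longer than 24 chars
truncated=$(printf '%.24s' "$dgname")           # what a 24-byte buffer would hold
echo "$truncated"
if [ "$truncated" != "$dgname" ]; then
    echo "name truncated: state lookup would fail"
fi
```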

Patch ID: VRTSvcs-7.4.1.1100

* 3973227 (Tracking ID: 3978724)

SYMPTOM:
CmdServer starts every time the HAD starts, and keeps one port open although the service running on that port is no longer needed.

DESCRIPTION:
Each time the HAD starts, the CmdServer process also starts and runs as a background service. If the service provided by CmdServer is no longer needed, you must manually stop that service every time the HAD starts. Also, there is no automated way to disable the CmdServer daemon.

RESOLUTION:
A new environment variable, STARTCMDSERVER, is now available, which lets you specify whether the CmdServer daemon should be started when HAD starts. The default value of STARTCMDSERVER is 1. Set the value to 0 if you do not want the CmdServer service to be started when HAD starts. This environment variable is available in the /etc/sysconfig/vcs file for Linux and in the /etc/default/vcs file for Solaris and AIX.
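For example, the variable can be set as follows. The edit is shown against a temporary copy for safety; on an actual node the file is /etc/sysconfig/vcs (Linux) or /etc/default/vcs (Solaris and AIX):

```shell
cfg=$(mktemp)
echo "STARTCMDSERVER=1" > "$cfg"                 # the default value
# Disable CmdServer startup by changing the value to 0.
sed 's/^STARTCMDSERVER=.*/STARTCMDSERVER=0/' "$cfg" > "$cfg.new" && mv "$cfg.new" "$cfg"
grep STARTCMDSERVER "$cfg"                       # prints: STARTCMDSERVER=0
rm -f "$cfg"
```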

* 3977099 (Tracking ID: 3977098)

SYMPTOM:
VCS does not support non-evacuation of the service groups during a system restart.

DESCRIPTION:
When VCS is stopped as part of a system restart operation, the active service groups on the node are migrated to another cluster node. In some cases (for example, to avoid administrative intervention during a manual shutdown), you may not want to evacuate the service groups during a system restart. However, VCS does not support such non-evacuation of the service groups.

RESOLUTION:
A new environment variable, NOEVACUATE, is now available, which lets you specify whether or not to evacuate the service groups. The default value of NOEVACUATE is 0. Set the value to 1 if you do not want VCS to evacuate the service groups. This environment variable is available in the /etc/sysconfig/vcs file for Linux and in the /etc/default/vcs file for Solaris and AIX.

* 3980021 (Tracking ID: 3969838)

SYMPTOM:
A failover Service Group can be brought online on one node even when it is ONLINE on another node

DESCRIPTION:
The flush operation clears the internal state of a resource but does not stop the entry points that are already running. In this situation, an entry point may report a pseudo fault even when the service group is already offline on that particular node. When such a fault is reported, the value of CurrentCount is decremented to zero even though the service group is active in the cluster. The zero value signifies that the group is completely offline, so VCS inadvertently allows any subsequent online request.

RESOLUTION:
Additional checks are introduced to ensure that this incorrect decrement in CurrentCount is prevented when the failover service group is active on any other node in the cluster.

Patch ID: VRTSvxfen-7.4.1.1200

* 3977310 (Tracking ID: 3974739)

SYMPTOM:
VxFen should be able to identify SCSI3 disks in Nutanix environments.

DESCRIPTION:
The fencing module did not work in Nutanix environments, because it was not able to uniquely identify the Nutanix disks.

RESOLUTION:
VxFen has been updated to correctly identify Nutanix disks so that fencing can work in Nutanix environments.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel6_x86_64-Patch-7.4.1.1200.tar.gz to /tmp
2. Untar infoscale-rhel6_x86_64-Patch-7.4.1.1200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel6_x86_64-Patch-7.4.1.1200.tar.gz
    # tar xf /tmp/infoscale-rhel6_x86_64-Patch-7.4.1.1200.tar
3. Install the hotfix. (Note that the installation of this P-Patch causes downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P1200 [<host1> <host2>...]

You can also install this patch together with the 7.4.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE