vcsag-sles12_x86_64-Patch-6.2.1.100

 Basic information
Release type: Patch
Release date: 2017-06-15
OS update support: None
Technote: None
Documentation: None
Popularity: 526 viewed
Download size: 12.2 MB
Checksum: 2580386215

 Applies to one or more of the following products:
Application HA 6.2.1 On SLES12 x86-64
Cluster Server 6.2.1 On SLES12 x86-64
Storage Foundation Cluster File System 6.2.1 On SLES12 x86-64
Storage Foundation for Oracle RAC 6.2.1 On SLES12 x86-64
Storage Foundation HA 6.2.1 On SLES12 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
3662745, 3699148, 3807627, 3852346, 3852524, 3859708, 3869158, 3876001, 3877717, 3892587, 3905062, 3908111, 3916871, 3917204

 Patch ID:
VRTSvcsag-6.2.1.100-SLES12

Readme file
                          * * * READ ME * * *
        * * * Veritas Cluster Server Bundled Agents 6.2.1 * * *
                         * * * Patch 100 * * *
                         Patch Date: 2017-05-25


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Cluster Server Bundled Agents 6.2.1 Patch 100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvcsag


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Application HA 6.2.1
   * Symantec Cluster Server 6.2.1
   * Symantec Storage Foundation Cluster File System HA 6.2.1
   * Symantec Storage Foundation for Oracle RAC 6.2.1
   * Symantec Storage Foundation HA 6.2.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvcsag-6.2.1.100
* 3807627 (3521067) If a VMwareDisks resource is online on a virtual machine, the virtual machine shutdown operation hangs.
* 3852346 (3852345) DiskGroup agent fails to log a message before powering off the cluster node if 
the PanicSystemOnDGLoss attribute is set and the DiskGroup is disabled.
* 3852524 (3852521) The VCS cluster becomes unavailable after a virtual 
machine shutdown is initiated through the vSphere UI.
* 3859708 (3870031) diff_sync is not invoked for additional secondaries.
* 3869158 (3869156) When a virtual machine (VM) loses network connection to the ESX 
host, the VMwareDisks agent is unable to detach the disks from the VM. The VMwareDisk resources 
go into Administrative intervention state.
* 3876001 (3876000) The VMwareDisks agent always attaches disks in the 
persistent mode.
* 3877717 (3881394) The RVGSharedPri agent does not support multiple-secondary
configurations in a CVM environment.
* 3892587 (3880663) After the VVR Primary role is switched from one geography 
to another, replication might take some time to start after the role migration.
* 3905062 (3905061) vxvm-recover is unable to find the disk group and displays an
error.
* 3908111 (3908109) Mount agent support for NFS over IPv6.
* 3916871 (3916870) The LVMVolumeGroup agent reports the ONLINE state even if
the LUN is removed from the cluster node.
* 3917204 (3919377) Enable auto sync as an option in the VVR RVGSharedPri and
RVGPrimary agents after the Primary takes over.
Patch ID: VRTSvcsag-6.2.1.000
* 3662745 (3520211) The HostMonitor agent reports incorrect memory usage.
* 3699148 (3699146) The Application agent reports an application resource as
offline even when the resource is online.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvcsag-6.2.1.100

* 3807627 (Tracking ID: 3521067)

SYMPTOM:
When a VMwareDisks resource is online on a virtual machine, if you initiate a shutdown for that virtual machine from the vSphere UI, the shutdown operation hangs.

DESCRIPTION:
During a virtual machine (VM) shutdown, Veritas Cluster Server (VCS) tries to
take the VMwareDisks resources offline. As part of the offline operation, the
VMwareDisks agent attempts to detach the configured disks. VMware does not allow
disk operations during a VM shutdown. As a result, the detach operation fails,
and effectively the resource offline operation also fails. The VM continues to
wait until the offline operation completes (through manual intervention).

RESOLUTION:
The VMwareDisks agent is modified to fix this issue. The agent now allows
shutdown of a virtual machine even if the detach of the configured disks
fails. The disks are later detached when VCS brings the resource online on the
failover VM.

* 3852346 (Tracking ID: 3852345)

SYMPTOM:
DiskGroup agent fails to log a message before powering off the cluster node if 
the PanicSystemOnDGLoss attribute is set and the DiskGroup is disabled.

DESCRIPTION:
If the disk group goes into the disabled state when the PanicSystemOnDGLoss 
attribute is set for the DiskGroup resource, the DiskGroup agent fails to log 
a message as it powers off the cluster node.

RESOLUTION:
The code is modified to log a message in the system log after the reboot of 
the cluster node.

* 3852524 (Tracking ID: 3852521)

SYMPTOM:
After a virtual machine on which a VMwareDisks resource was online is 
shut down through the vSphere UI, the VCS cluster may become unavailable 
because cluster nodes may go down.

DESCRIPTION:
During the shutdown of a virtual machine (VM) on which a VMwareDisks 
resource was online, VCS tries to bring the VMwareDisks resource online on 
the failover cluster node. The VMware attach operation during online may 
take a long time if the disks are still attached to the source node, and it 
can potentially hang the failover system, thereby triggering Low Latency 
Transport (LLT) heartbeat loss. This loss can initiate a fencing action that 
may panic the failover node, thus jeopardizing the cluster.

RESOLUTION:
The VMwareDisks agent is modified to call VMware attach 
operation only when disks are not attached to any other node. This new 
behavior eliminates the situation where attach operation may hang the system 
and cause heartbeat loss.

* 3859708 (Tracking ID: 3870031)

SYMPTOM:
Additional secondaries are in the disconnected state.

DESCRIPTION:
In the Perl module RVGPrimaryAgent.pm we are closing both STDERR 
and
STDOUT 
before executing a Perl script within the Perl module. 
 
    close STDOUT;
    close STDERR;

Due to closing STDOUT and STDERR perl script(startrep CLI in diff_sync) is not
invoked which causes additional secondaries in disconnect state.

RESOLUTION:
RVGPrimaryAgent.pm is updated to redirect STDOUT to /dev/null instead of
closing it. With this change, the additional secondaries remain in the
connected state.
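
For illustration, the fix amounts to redirecting the handle rather than
closing it, along these lines (a sketch, not the exact agent code; STDERR
can be treated the same way):

    # Redirect instead of close, so that child commands such as the
    # startrep CLI still inherit a valid standard handle.
    open STDOUT, '>', '/dev/null' or die "cannot redirect STDOUT: $!";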

* 3869158 (Tracking ID: 3869156)

SYMPTOM:
When a virtual machine (VM) loses network connection to the ESX 
host, the VMwareDisks agent is unable to detach the disks from the VM. The VMwareDisk resources 
go into Administrative intervention state.

DESCRIPTION:
When a VM loses its network connection to the ESX host, the VMwareDisks 
agent is unable to log in to the ESX host to check the state of the disks. As a 
result, the VMwareDisks resources go into the UNKNOWN state. The agent is also 
unable to take the resources offline, because the connection with the ESX host 
is lost, making it impossible to receive instructions to detach the disks from 
the VM. The resources go into the Administrative intervention state.

RESOLUTION:
The code is modified so that it allows service group failover in such 
cases because other nodes in the cluster might be able to communicate with the ESX 
host.

* 3876001 (Tracking ID: 3876000)

SYMPTOM:
The VMwareDisks agent always attaches disks in the persistent mode.

DESCRIPTION:
The VMwareDisks agent does not support attaching disks in the independent 
persistent or independent nonpersistent mode. If the disk mode is not set to 
persistent outside of VCS, the VMwareDisks agent does not preserve the disk 
mode across the cluster.

RESOLUTION:
The code is modified so that the VMwareDisks agent can support 
different disk modes.

* 3877717 (Tracking ID: 3881394)

SYMPTOM:
The RVGSharedPri agent does not support multiple-secondary configurations
in a CVM environment.

DESCRIPTION:
The RVGSharedPri agent supports only one secondary, so if the configuration
includes multiple secondaries, the agent fails to change the role from
secondary to primary.

RESOLUTION:
The RVGSharedPri agent is updated to support multiple-secondary
environments.

* 3892587 (Tracking ID: 3880663)

SYMPTOM:
After the VVR Primary role is switched from one geography to another, the role 
migration itself completes quickly; however, replication from the new Primary 
to the other secondaries might take some time to start.

DESCRIPTION:
In a multiple-geography cluster environment, after the global service group is 
switched to another site, the global service groups and the NetBackup service 
come online and become available quickly, but it takes time to start 
replication from the new VVR Primary site to the other secondary sites. This 
is because the RVGPrimary agent performs a differential-based sync operation, 
which might take some time to finish depending on the size of the data volumes 
configured for replication.

RESOLUTION:
The RVGPrimary agent code is changed to avoid using diff sync for the migrate 
operation.

* 3905062 (Tracking ID: 3905061)

SYMPTOM:
The DiskGroup agent attempts to stop all the volumes and simultaneously
deport the disk group. If the disk group is deported before all the
volumes have been stopped, the following error message is displayed: 
VxVM vxdg ERROR V-5-1-582 Disk group a01dg: No such disk group

DESCRIPTION:
The DiskGroup agent's stopall command internally triggers events to vxvm-recover,
but because the disk group has already been deported, vxvm-recover is unable to
find the disk group and displays an error.

RESOLUTION:
The DiskGroup agent code is modified to check whether all volumes have been
stopped before deporting the disk group.
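
As a manual analogue of this check (a sketch only, using the disk group name
a01dg from the error message above), you can verify that no volumes are still
enabled before the deport:

        # vxvol -g a01dg stopall
        # vxprint -g a01dg -v
        # vxdg deport a01dg

If the vxprint output still lists volumes with a kernel state of ENABLED,
wait for them to stop before running the deport.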

* 3908111 (Tracking ID: 3908109)

SYMPTOM:
If the NFS server is configured over IPv6, the Mount agent does not work for NFS
share mount points.

DESCRIPTION:
If the NFS server is configured over IPv6, the Mount agent is unable to monitor
mount resources because of a parsing error in the agent's monitor program.

RESOLUTION:
The Mount agent is modified to fix this issue.
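
For example, an NFS-over-IPv6 block device is typically specified with the
server address in brackets. A hypothetical Mount resource (the resource name
nfs_mnt and the address are illustrative) might be configured as follows:

        # hares -modify nfs_mnt BlockDevice "[fd00::10]:/export/data"
        # hares -modify nfs_mnt FSType nfs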

* 3916871 (Tracking ID: 3916870)

SYMPTOM:
The LVMVolumeGroup agent reports the resource state as ONLINE even after
the LUN is removed from the cluster node.

DESCRIPTION:
When a LUN is removed, the LVMVolumeGroup agent cannot identify the
device-level metadata inconsistencies. As a result, it displays an incorrect
state for the resource.

RESOLUTION:
The LVMVolumeGroup agent code is modified to identify metadata
inconsistencies and correct the issue.
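
To inspect such inconsistencies manually (a sketch; the volume group name
datavg is illustrative), the standard LVM tools report missing physical
volumes:

        # vgs -o vg_name,vg_attr,pv_count datavg
        # pvs

A "p" (partial) flag in the vg_attr output, or physical volumes reported as
missing or as an unknown device, indicates that a LUN backing the group has
disappeared.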

* 3917204 (Tracking ID: 3919377)

SYMPTOM:
The RVGSharedPri and RVGPrimary agents currently support only diff sync. When
the VVR (Veritas Volume Replicator) Primary role changes, diff sync has no
convenient way to track its progress and takes much longer than auto sync to
resynchronize all data volumes.

DESCRIPTION:
When the VVR (Veritas Volume Replicator) Primary role changes, the data
volumes must be resynchronized between the new Primary node and the other
secondary nodes. The VCS (Veritas Cluster Server) RVGSharedPri and RVGPrimary
agents currently support only diff sync for this resync. In most situations,
auto sync is smarter and performs better than diff sync, and VVR provides
commands to track auto sync progress.

RESOLUTION:
The RVGSharedPri and RVGPrimary agent scripts are enhanced to support auto
sync as an option. A new attribute, ResyncType, is introduced in the
RVGSharedPri and RVGPrimary agents to let the user choose auto sync (1) or
diff sync (0). By default, diff sync is used for resync. To use auto sync:
        # haconf -makerw
        # hares -modify <RVGSharedPri_Resource_name> ResyncType 1
        # hares -modify <RVGPrimary_Resource_name> ResyncType 1
        # haconf -dump -makero
        # hares -value <RVGSharedPri_Resource_name> ResyncType
        # hares -value <RVGPrimary_Resource_name> ResyncType

To track auto sync progress:
        # vxrlink -g <dg name> -i <time interval> status <rlk name>
        # vradmin -g <dg name> repstatus <rvg name>


Patch ID: VRTSvcsag-6.2.1.000

* 3662745 (Tracking ID: 3520211)

SYMPTOM:
The HostMonitor agent reports incorrect memory usage.

DESCRIPTION:
The issue was observed because the HostMonitor agent for Linux did not consider the buffer and cached memory while calculating the available free memory for the system.

RESOLUTION:
The code is modified to calculate the available free memory considering the available buffer and cache memory.
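
The corrected calculation can be approximated from /proc/meminfo (a sketch of
the idea, not the agent's exact code):

        # awk '/^MemFree:|^Buffers:|^Cached:/ {sum += $2} END {print sum " kB available"}' /proc/meminfo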

* 3699148 (Tracking ID: 3699146)

SYMPTOM:
Occasionally, the Application agent reports an application resource as 
offline even when the resource is online.

DESCRIPTION:
The Application agent compared running processes incorrectly. As a result, 
the application resource is occasionally reported as offline even when the 
resource is online.

RESOLUTION:
The code is modified to report the correct state of the application.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch vcsag-sles12_x86_64-Patch-6.2.1.100.tar.gz to /tmp
2. Untar vcsag-sles12_x86_64-Patch-6.2.1.100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vcsag-sles12_x86_64-Patch-6.2.1.100.tar.gz
    # tar xf /tmp/vcsag-sles12_x86_64-Patch-6.2.1.100.tar
3. Install the hotfix (this step causes the downtime mentioned above):
    # cd /tmp/hf
    # ./installVRTSvcsag621P1 [<host1> <host2>...]

You can also install this patch together with the 6.2.1 maintenance release by using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Perform the following steps on all the nodes:
1. Take a backup of your configurations.
2. Stop VCS on all nodes.
3. Install the patch.
4. Restart VCS.
The following sections describe these steps in detail.

Stopping VCS on all the cluster nodes
--------------------------------

1. Ensure that the "/opt/VRTSvcs/bin" directory is included in your PATH
   environment variable so that you can execute all the VCS commands. Refer
   to Veritas Cluster Server Installation Guide for more information.

2. Freeze all the service groups persistently.
        # haconf -makerw
        # hagrp -freeze [group] -persistent
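
   To freeze every group in one pass, a loop such as the following can be
   used (a sketch, assuming that hagrp -list prints one group name in its
   first column per line):

        # for grp in $(hagrp -list | awk '{print $1}' | sort -u); do hagrp -freeze $grp -persistent; done

   The same loop with -unfreeze reverses this after the patch is installed.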

3. If the cluster configuration is writable, save and close it before
   stopping the cluster:
        # haconf -dump -makero

   Run the following command on each node to stop VCS locally:
        # hastop -local -force

   Verify that the cluster is stopped by running the ha command:
        # hasys -state

   Make sure that both had and hashadow processes are stopped. Also, stop the VCS CmdServer:
        # CmdServer -stop
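
   You can confirm that the daemons are gone with a process listing, for
   example:

        # ps -ef | egrep 'had|hashadow|CmdServer' | grep -v grep

   No output means the processes are stopped.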

Installing the Patch
--------------------

1. Uncompress the downloaded patch and change to the directory containing the
   uncompressed patch. Install the patch by using the following command:

   # rpm -Uvh VRTSvcsag-6.2.1.100-SLES12.i686.rpm

2. Run the following command to verify whether the new patch is installed:

   # rpm -q VRTSvcsag

   If the proper patch is installed, the following output is displayed:

VRTSvcsag-6.2.1.100-SLES12.i686

Re-starting the VCS service:
--------------------------------

1. To start the cluster service, run the following commands:
        # hastart
        # /opt/VRTSvcs/bin/CmdServer

2. Make VCS cluster writable:
         # haconf -makerw

3. Unfreeze all the groups:
         # hagrp -unfreeze [group] -persistent
         # haconf -dump -makero


REMOVING THE PATCH
------------------
Removing the patch removes the whole package from the system/node. To go
back to a previously installed version of the package, you may need to
re-install the package.
Perform the following steps on the node from which the patch is being removed:

To remove the patch from a cluster node:
---------------------------------------------
1. Freeze all the service groups persistently.
        # haconf -makerw
        # hagrp -freeze [group] -persistent

2. Stop VCS on the node by running the stop sequence described in the INSTALLING THE PATCH section.

3. Remove the patch by using the following command:

        # rpm -e --nodeps VRTSvcsag

4. Verify that the patch has been removed from the system:

        # rpm -qa|grep VRTSvcsag

   Ensure that the package is not displayed.

5. Install the relevant package from the VCS base media.
    Also refer to SORT for any applicable patches.

6. To start the cluster service, follow the start sequence described in the INSTALLING THE PATCH section.

7. Unfreeze all the groups.
         # hagrp -unfreeze [group] -persistent
         # haconf -dump -makero


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE