vcs-sles10_x86_64-VRTSvcsag_6.0.3.200
Obsolete
The latest patch(es): sfha-sles10_x86_64-6.0.5

 Basic information
Release type: Patch
Release date: 2013-03-28
OS update support: None
Technote: None
Documentation: None
Popularity: 992 viewed
Download size: 14.29 MB
Checksum: 2457052769

 Applies to one or more of the following products:
VirtualStore 6.0.1 On SLES10 x86-64
Cluster Server 6.0.1 On SLES10 x86-64
Storage Foundation Cluster File System 6.0.1 On SLES10 x86-64
Storage Foundation for Oracle RAC 6.0.1 On SLES10 x86-64
Storage Foundation for Sybase ASE CE 6.0.1 On SLES10 x86-64
Storage Foundation HA 6.0.1 On SLES10 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:
  sfha-sles10_x86_64-6.0.5 (released 2014-04-15)

This patch supersedes the following patches:
  vcs-sles10_x86_64-VRTSvcsag-6.0.3.100 (obsolete; released 2013-02-19)

This patch requires:
  sfha-sles10_x86_64-6.0.3 (obsolete; released 2013-02-01)

 Fixes the following incidents:
3066018, 3093828

 Patch ID:
VRTSvcsag-6.0.300.200-GA_SLES10

Readme file
                          * * * READ ME * * *
                * * * Veritas Cluster Server 6.0.3 * * *
                      * * * Public Hot Fix 2 * * *
                         Patch Date: 2013-03-27


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Cluster Server 6.0.3 Public Hot Fix 2


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL5 x86-64
SLES10 x86-64
SLES11 x86-64
RHEL6 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvcsag


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Cluster Server 6.0.1
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation Cluster File System 6.0.1
   * Veritas Storage Foundation High Availability 6.0.1
   * Symantec VirtualStore 6.0.1
   * Veritas Storage Foundation for Sybase ASE CE 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.0.300.200
* 3093828 (3125918) AMF driver panics the node when vxconfigd is unresponsive.
Patch ID: 6.0.300.100
* 3066018 (3065929) VMwareDisks agent functions fail if the ESX host and the virtual machine belong to different networks.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: 6.0.300.200

* 3093828 (Tracking ID: 3125918)

SYMPTOM:
AMF driver panics the node when vxconfigd is unresponsive.

DESCRIPTION:
The libusnp_vxnotify.so library, which is used for disk group notifications, goes
into an infinite loop when the vxconfigd daemon is unresponsive. This causes AMF
to enter an inconsistent state; as a result, the AMF driver panics the node.

RESOLUTION:
Symantec has modified the AMF driver and libusnp_vxnotify.so
library to address this issue.

Patch ID: 6.0.300.100

* 3066018 (Tracking ID: 3065929)

SYMPTOM:
VCS agent for VMwareDisks fails to detach or attach the VMware disks 
if the ESX host and the virtual machine are not in the same network.

DESCRIPTION:
The non-shared virtual disks in a VMware virtual environment 
reside on a shared datastore and are attached to a single virtual machine at 
any given point of time. In event of a failure, the VCS agent for VMwareDisks 
detaches these disks from the failed virtual machine and attaches it to the 
target virtual machine. The VMwareDisks agent communicates with the host ESX to 
perform the disk detach and attach operations between virtual machines.

If the virtual machines and the ESX hosts are on different networks, then the 
VMwareDisks agent is unable to communicate with the ESX host. As a result, in 
event of a failure, the VMwareDisks agent cannot move the disks between virtual 
machines. This issue results in loss of application availability.

RESOLUTION:
The VMwareDisks agent is modified so that, in addition to the ESX hosts, it can
also communicate with the vCenter Server to which the virtual machines belong.

Earlier, the "ESX Details" attribute of the VMwareDisks agent used the host
names or IP addresses and the user account details of the ESX hosts on which
the virtual machines were configured. Instead of the ESX details, you can now
choose to provide the host name or IP address and the user account details of
the vCenter Server to which the virtual machines belong, as illustrated in the
sketch below.
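
For illustration only, a VMwareDisks resource that uses the vCenter Server
details might look roughly as follows in main.cf. The resource name, host name,
user name, and encrypted password are hypothetical placeholders, and any other
attributes your resource requires are omitted:

    VMwareDisks vmdisk_res (
        ESXDetails = { "vcenter01.example.com" = "vcsadmin=<encrypted_password>" }
        )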

As a result of this change, in the event of a failure, the VMwareDisks agent
sends the disk detach and attach requests to the vCenter Server (instead of the
ESX hosts). The vCenter Server then notifies the ESX host to perform these
operations. Because the communication is directed through the vCenter Server,
the agent can now successfully detach and attach the disks even if the ESX host
and the virtual machines reside on different networks.

In a scenario where the ESX/ESXi host itself faults, the VMwareDisks agent on
the target virtual machine sends a request to the vCenter Server to detach the
disks from the failed virtual machine. However, because the ESX host has
faulted, the request to detach the disks fails. The VMwareDisks agent on the
target virtual machine then sends the disk attach request. The vCenter Server
processes this request and the disks are attached to the target virtual
machine. Application availability is therefore not affected.

Limitation:

If VMHA is not enabled and the ESX host faults, then even after the disks are
attached to the target virtual machine, they remain attached to the failed
virtual machine. This issue occurs because the request to detach the disks
fails, since the ESX host itself has faulted. The agent then sends the disk
attach request to the vCenter Server and attaches the disks to the target
virtual machine.

Even though application availability is not impacted, the subsequent power ON
of the faulted virtual machine fails. This issue occurs because of the stale
link between the virtual machine and the attached disks. Even though the disks
are now attached to the target virtual machine, the stale link with the failed
virtual machine still exists.

As a workaround, you must manually detach the disks from the failed virtual
machine and then power ON the machine.

About the vCenter Server user account privileges:

Verify that the vCenter Server user account has administrative privileges or is a root user. If the vCenter Server user account does not have administrative privileges and is not a root user, the disk detach and attach operations may fail in the event of a failure.

If you do not want to use the administrator user account or the root user, then you must create a role and add the following privileges to the created role:
- "Low level file operations" on datastore
- "Add existing disk" on virtual machine
- "Change resource" on virtual machine
- "Remove disk" on virtual machine
After you create a role and add the required privileges, you must add a local user to the created role. You can choose to add an existing user or create a new user.
Refer to the VMware product documentation for details on creating a role and adding a user to the created role.

Modifying/Specifying the ESX Details attribute:
Use the Cluster Manager (Java Console) or the Command Line to modify the attribute values.

To modify/specify the attribute from Cluster Manager 
1) From the Cluster Manager configuration tree, select the VMwareDisks resource and then select the Properties tab.
2) On the Properties tab, click the Edit icon next to the ESX Details attribute.
3) On the Edit Attribute dialogue box, select all the entries specified under the Key-Value column and press "-" to delete them.
4) Encrypt the password of the vCenter Server user account.
   a) From the command prompt, run the following command:
      # vcsencrypt -agent
   b) Enter the vCenter Server user account password.
   c) Re-enter the specified password.
      The encrypted value for the specified password is displayed.
5) On the Edit Attribute dialogue box, click "+" to specify the values under the Key-Value column.
6) Under the Key column, specify the vCenter Server hostname or the IP address.
7) Under the Value column, specify the encrypted password of the vCenter Server user account (from step 4).
   The value should be specified in the [UserName=EncryptedPassword] format. 
8) Click Ok to confirm the changes.
9) Repeat the steps for all VMwareDisks resources from the Cluster Manager configuration tree.
10) Save and close the configuration.
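
For example (with hypothetical values), a completed Key-Value entry might look like:
   Key:   vcenter01.example.com
   Value: vcsadmin=<encrypted_password>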

To modify/specify the attribute from Command Line 
1) Change the VCS configuration to read/write mode.
   # haconf -makerw
2) Delete the existing details of the ESX Server.
   # hares -modify [VMwareDisksResourceName] ESXDetails -delete -keys 
3) Encrypt the password of the vCenter Server user account.
   a) From the command prompt, run the following command:
      # vcsencrypt -agent
   b) Enter the vCenter Server user account password.
   c) Re-enter the specified password.
       The encrypted value for the specified password is displayed.
4) Specify the vCenter Server details.
   # hares -modify [VMwareDisksResourceName] ESXDetails -add [vCenterIPAddress_or_hostname] [UserName]=[EncryptedPassword]
5) Repeat the steps for all VMwareDisks resources in the cluster configuration.
6) Save and close the configuration.
   # haconf -dump -makero
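
For example, for a hypothetical resource named vmdisk_res, a vCenter Server
vcenter01.example.com, and a user vcsadmin, the sequence might look like this
(the encrypted password is a placeholder for the output of "vcsencrypt -agent"):
   # haconf -makerw
   # hares -modify vmdisk_res ESXDetails -delete -keys
   # vcsencrypt -agent
   # hares -modify vmdisk_res ESXDetails -add vcenter01.example.com vcsadmin=<encrypted_password>
   # haconf -dump -makero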

Note:
1) At any given point in time, you can specify either the ESX/ESXi details or the
   vCenter Server details as the "ESXDetails" attribute values of VMwareDisks resources.
   You cannot specify both the ESX/ESXi details and the vCenter Server details at the same time.
2) For raw devices (RDM devices) managed through a VMwareDisks resource, you must have the following additional privilege:
   All Privileges-> Virtual Machine-> Configuration-> Raw Device



INSTALLING THE PATCH
--------------------
The VRTSvcsag-6.0.300.200 patch may be installed either manually (see below)
or using the included installer.

 *  To apply the patch using the installer, enter the following command:
        # cd <hotfix_directory>  && ./installVCSAG603P2  [ <node1> <node2>... ]
    
    where 
      <hotfix_directory>
                is the directory where you unpacked this hotfix
      <node1>, <node2>, ...
                are the nodes to be patched.  If none are specified, you
                will be prompted to enter them interactively.  All nodes
                must belong to a single cluster.
    
    For information about installer options, run  './installVCSAG603P2 -help'.
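
    For example, to patch a two-node cluster with hypothetical node names
    node01 and node02, after unpacking the hotfix to /tmp/hotfix you might run:
        # cd /tmp/hotfix && ./installVCSAG603P2 node01 node02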
    

 *  To apply the patch manually, perform the following steps on all nodes in
    the VCS cluster:

 1. Take a backup of your configuration
 2. Stop VCS on the cluster
 3. Install the patch
 4. Restart VCS on the cluster

 Stopping VCS on the cluster
 ---------------------------
 Perform the following steps:
 1. Ensure that the "/opt/VRTSvcs/bin" directory is included in your PATH
    environment variable so that you can execute all the VCS commands. Refer
    to Veritas Cluster Server Installation Guide for more information.

 2. Ensure that the VRTSvcsag package version for Linux is 6.0.2 or 6.0.3  

 3. Freeze all the service groups persistently.
        # haconf -makerw
        # hagrp -freeze [group] -persistent

 4. Stop the cluster on all nodes.  If the cluster is writable, you may
    close the configuration before stopping the cluster.
        # haconf -dump -makero

    From any node, execute the following command.
        # hastop -all
    or
        # hastop -all -force

    Verify that you have stopped the cluster on all nodes by entering the following command: 
        # hasys -state
    
    After you run the command, the following output must be displayed:
        VCS ERROR V-16-1-10600 Cannot connect to VCS engine

    On all nodes, make sure that both had and hashadow processes are stopped.
    Also, stop the VCS CmdServer on all nodes.
        # CmdServer -stop
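
    For example, on a cluster with a single (hypothetical) service group named
    grp_app1, the stop sequence might look like this:
        # haconf -makerw
        # hagrp -freeze grp_app1 -persistent
        # haconf -dump -makero
        # hastop -all           (run from any one node)
        # hasys -state          (verify the engine is stopped on all nodes)
        # CmdServer -stop       (run on every node)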

 Installing the Patch
 --------------------
 Perform the following steps:
 1. Uncompress the downloaded patch from Symantec.
    Change the directory to the uncompressed patch location. 
    Install the VRTSvcsag patch using one of the following commands.
      For RHEL5:
        # rpm -Uvh VRTSvcsag-6.0.300.200-GA_RHEL5.i686.rpm
      For RHEL6:
        # rpm -Uvh VRTSvcsag-6.0.300.200-GA_RHEL6.i686.rpm
      For SLES10:
        # rpm -Uvh VRTSvcsag-6.0.300.200-GA_SLES10.i586.rpm
      For SLES11:
        # rpm -Uvh VRTSvcsag-6.0.300.200-GA_SLES11.i686.rpm

 2. Run the following command to verify that the new patch is installed:
        # rpm -q VRTSvcsag
    
    If the proper patch is installed, the following output is displayed:
      For RHEL5:
        VRTSvcsag-6.0.300.200-GA_RHEL5.i686
      For RHEL6:
        VRTSvcsag-6.0.300.200-GA_RHEL6.i686
      For SLES10:
        VRTSvcsag-6.0.300.200-GA_SLES10.i586
      For SLES11:
        VRTSvcsag-6.0.300.200-GA_SLES11.i686
    

 Restarting VCS on the cluster
 -----------------------------

 1. Execute the following command on one node:
        # hastart
    
    After this node goes to LOCAL_BUILD or RUNNING state, execute the
    hastart command on all other nodes in the cluster.

 2. Make VCS cluster writable:
        # haconf -makerw

 3. Unfreeze all the groups:
        # hagrp -unfreeze [group] -persistent
        # haconf -dump -makero
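
    For example, continuing with the hypothetical service group grp_app1:
        # hastart                               (on the first node, then on the remaining nodes)
        # haconf -makerw
        # hagrp -unfreeze grp_app1 -persistent
        # haconf -dump -makero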


REMOVING THE PATCH
------------------
Removing the patch removes the whole package from the system/node. To go back
to a previously installed version of the package, you need to re-install the
package.

 To remove the patch from a cluster node
 ---------------------------------------
 1. Freeze all the service groups persistently:
        # haconf -makerw
        # hagrp -freeze [group] -persistent

 Run steps 2 - 5 on each of the VCS cluster nodes.

 2. Stop VCS on the node by following the steps provided in the section 
    "Stopping VCS on the cluster node".

 3. Remove the patch by using the following command:
        # rpm -e VRTSvcsag

 4. To verify that the patch has been removed from the system, execute the
    command 
        # rpm -qa | grep VRTSvcsag
    
    and confirm that no output is produced.

 5. Install the VRTSvcsag 6.0.2 or 6.0.3 package from the installation media.

 6. Restart the cluster by following the steps in the section "Restarting
    VCS on the cluster" above.


SPECIAL INSTRUCTIONS
--------------------
Install VRTSamf 6.0.300.100 (or later) patch before installing this patch.
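
You can check the currently installed VRTSamf version with, for example:
        # rpm -q VRTSamf
The reported version should be 6.0.300.100 or later before you apply this patch.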


OTHERS
------
NONE