This page lists publicly released patches for Veritas Enterprise products.
For product GA builds, see the Veritas Entitlement Management System (VEMS) by clicking the 'Licensing' option on the Veritas Support site.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vcs-sles10_x86_64-VRTSvcsag-6.0.3.100
Obsolete
The latest patch(es): sfha-sles10_x86_64-6.0.5

 Basic information
Release type: Patch
Release date: 2013-02-19
OS update support: None
Technote: None
Documentation: None
Popularity: 984 viewed    5 downloaded
Download size: 14.27 MB
Checksum: 964413723

 Applies to one or more of the following products:
VirtualStore 6.0.1 On SLES10 x86-64
Cluster Server 6.0.1 On SLES10 x86-64
Storage Foundation Cluster File System 6.0.1 On SLES10 x86-64
Storage Foundation for Oracle RAC 6.0.1 On SLES10 x86-64
Storage Foundation for Sybase ASE CE 6.0.1 On SLES10 x86-64
Storage Foundation HA 6.0.1 On SLES10 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
sfha-sles10_x86_64-6.0.5 2014-04-15
vcs-sles10_x86_64-VRTSvcsag_6.0.3.200 (obsolete) 2013-03-28

This patch requires: Release date
sfha-sles10_x86_64-6.0.3 (obsolete) 2013-02-01

 Fixes the following incidents:
3066018

 Patch ID:
VRTSvcsag-6.0.300.100-GA_SLES10

 Readme file
                          * * * READ ME * * *
          * * * Veritas Cluster Server VRTSvcsag-6.0.300 * * *
                      * * * Public Hot Fix 1 * * *
                         Patch Date: 2013-02-18


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Cluster Server VRTSvcsag-6.0.300 Public Hot Fix 1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL5 x86-64
RHEL6 x86-64
SLES10 x86-64
SLES11 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvcsag


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Cluster Server 6.0.1
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation Cluster File System 6.0.1
   * Veritas Storage Foundation High Availability 6.0.1
   * Symantec VirtualStore 6.0.1
   * Veritas Storage Foundation for Sybase ASE CE 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
* 3066018 (3065929): VMwareDisks agent functions fail if the ESX host and the virtual machine belong to different networks


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: 6.0.300.100

* 3066018 (Tracking ID: 3065929)

SYMPTOM:
VCS agent for VMwareDisks fails to detach or attach the VMware disks 
if the ESX host and the virtual machine are not in the same network.

DESCRIPTION:
The non-shared virtual disks in a VMware virtual environment 
reside on a shared datastore and are attached to a single virtual machine at 
any given point in time. In the event of a failure, the VCS agent for 
VMwareDisks detaches these disks from the failed virtual machine and attaches 
them to the target virtual machine. The VMwareDisks agent communicates with 
the ESX host to perform the disk detach and attach operations between virtual 
machines.

If the virtual machines and the ESX hosts are on different networks, the 
VMwareDisks agent is unable to communicate with the ESX host. As a result, in 
the event of a failure, the VMwareDisks agent cannot move the disks between 
virtual machines, resulting in loss of application availability.

RESOLUTION:
The VMwareDisks agent is now modified so that, in addition to the ESX hosts, 
it can also communicate with the vCenter Server to which the virtual machines 
belong.

Earlier, the "ESX Details" attribute of the VMwareDisks agent used the 
hostnames or IP addresses and the user account details of the ESX hosts on 
which the virtual machines were configured. Instead of the ESX details, you 
can now choose to provide the hostname or IP address and the user account 
details of the vCenter Server to which the virtual machines belong.

As a result of this change, in the event of a failure, the VMwareDisks agent 
now sends the disk detach and attach requests to the vCenter Server (instead 
of the ESX hosts). The vCenter Server then notifies the ESX host of these 
operations. Because the communication is directed through the vCenter Server, 
the agent can now successfully detach and attach the disks even if the ESX 
host and the virtual machines reside in different networks.

In a scenario where the ESX/ESXi host itself faults, the VMwareDisks agent on 
the target virtual machine sends a request to the vCenter Server to detach the 
disks from the failed virtual machine. However, since the ESX host has faulted, 
the request to detach the disks fails. The VMwareDisks agent on the target 
virtual machine then sends the disk attach request. The vCenter Server 
processes this request and the disks are attached to the target virtual 
machine. Application availability is thus not affected.

Limitation:

If VMHA is not enabled and the host ESX faults, then even after the disks are 
attached to the target virtual machine they remain attached to the failed 
virtual machine. This issue occurs because the request to detach the disks 
fails since the host ESX itself has faulted. The agent then sends the disk 
attach request to the vCenter Server and attaches the disks to the target 
virtual machine. 

Even though application availability is not impacted, the subsequent power ON 
of the faulted virtual machine fails. This issue occurs because of the stale 
link between the failed virtual machine and the attached disks. Even though 
the disks are now attached to the target virtual machine, the stale link with 
the failed virtual machine still exists.

As a workaround, you must manually detach the disks from the failed virtual machine and then power ON the machine.

About the vCenter Server user account privileges:

Verify that the vCenter Server user account has administrative privileges or is a root user. If the account lacks administrative privileges and is not a root user, the disk detach and attach operations may fail in the event of a failure.

If you do not want to use the administrator account or the root user, you must create a role and add the following privileges to the created role:
- "Low level file operations" on the datastore
- "Add existing disk" on the virtual machine
- "Change resource" on the virtual machine
- "Remove disk" on the virtual machine
After you create a role and add the required privileges, you must add a local user to the created role. You can choose to add an existing user or create a new user.
Refer to the VMware product documentation for details on creating a role and adding a user to the created role.

Modifying/Specifying the ESX Details attribute:
Use the Cluster Manager (Java Console) or the Command Line to modify the attribute values.

To modify/specify the attribute from Cluster Manager 
1) From the Cluster Manager configuration tree, select the VMwareDisks resource and then select the Properties tab.
2) On the Properties tab, click the Edit icon next to the ESX Details attribute.
3) On the Edit Attribute dialog box, select all the entries specified under the Key-Value column and press "-" to delete them.
4) Encrypt the password of the vCenter Server user account.
   a) From the command prompt, run the following command:
      # vcsencrypt -agent
   b) Enter the vCenter Server user account password.
   c) Re-enter the specified password.
      The encrypted value for the specified password is displayed.
5) On the Edit Attribute dialog box, click "+" to specify the values under the Key-Value column.
6) Under the Key column, specify the vCenter Server hostname or the IP address.
7) Under the Value column, specify the encrypted password of the vCenter Server user account (from step 4).
   The value should be specified in the [UserName=EncryptedPassword] format. 
8) Click OK to confirm the changes.
9) Repeat the steps for all VMwareDisks resources from the Cluster Manager configuration tree.
10) Save and close the configuration.

To modify/specify the attribute from Command Line 
1) Change the VCS configuration to read/write mode.
   # haconf -makerw
2) Delete the existing details of the ESX Server.
   # hares -modify [VMwareDisksResourceName] ESXDetails -delete -keys
3) Encrypt the password of the vCenter Server user account.
   a) From the command prompt, run the following command:
      # vcsencrypt -agent
   b) Enter the vCenter Server user account password.
   c) Re-enter the specified password.
       The encrypted value for the specified password is displayed.
4) Specify the vCenter Server details.
   # hares -modify [VMwareDisksResourceName] ESXDetails -add [vCenterIPAddress_or_hostname] [UserName]=[EncryptedPassword]
5) Repeat the steps for all VMwareDisks resources in the cluster configuration.
6) Save and close the configuration.
   # haconf -dump -makero
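
Each Key-Value entry added above must follow the UserName=EncryptedPassword
pattern. As a quick sanity check before running hares -modify, a small
hypothetical shell helper (not part of VCS) can validate the value string:

```shell
# Hypothetical helper (not shipped with VCS): check that a value string
# matches the UserName=EncryptedPassword format expected by ESXDetails.
check_esxdetails_value() {
    # exactly one '=' separating a non-empty user name from a
    # non-empty encrypted password, with no embedded whitespace
    printf '%s' "$1" | grep -Eq '^[^=[:space:]]+=[^=[:space:]]+$'
}

check_esxdetails_value 'vcadmin=fakeEncryptedValue123' && echo "format ok"
check_esxdetails_value 'missing-separator' || echo "bad format"
```

Both the user name and the encrypted password shown are placeholders; substitute the values produced by vcsencrypt.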

Note:
1) At any given point in time, you can specify either the ESX/ESXi details or the 
   vCenter Server details as the "ESXDetails" attribute values of VMwareDisks resources.
   You cannot specify both the ESX/ESXi and the vCenter Server details at the same time.
2) For raw devices (RDM devices) managed through a VMwareDisks resource, you must have the following additional privilege:
   All Privileges-> Virtual Machine-> Configuration-> Raw Device



INSTALLING THE PATCH
--------------------
Perform the following steps on all nodes in the VCS cluster:
1. Take a backup of your configuration. 
2. Stop VCS on the cluster node.
3. Install the patch.
4. Restart VCS on the node.

Stopping VCS on the cluster node
--------------------------------
Perform the following steps:
1. Ensure that the "/opt/VRTSvcs/bin" directory is included in your PATH
   environment variable so that you can execute all the VCS commands. Refer
   to Veritas Cluster Server Installation Guide for more information.

2. Ensure that the VRTSvcsag package version for Linux is 6.0.2 or 6.0.3  

3. Freeze all the service groups persistently.
    # haconf -makerw
    # hagrp -freeze [group] -persistent

4. Stop the cluster on all nodes. If the cluster is writable, you may
   close the configuration before stopping the cluster.
    # haconf -dump -makero

   From any node, execute the following command.
    # hastop -all
    or
    # hastop -all -force

   Verify that you have stopped the cluster on all nodes by entering the following command: 
    # hasys -state
   
   After you run the command, the following output must be displayed:
   VCS ERROR V-16-1-10600 Cannot connect to VCS engine

   On all nodes, make sure that both had and hashadow processes are
   stopped.
   Also, stop the VCS CmdServer on all nodes.
    # CmdServer -stop
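
To confirm that the stop sequence succeeded, a hypothetical helper like the
following (assuming pgrep is available; not part of VCS) reports whether the
relevant daemons are still running on the node:

```shell
# Hypothetical check (assumes pgrep is available): report the state of the
# VCS daemons that must be down before the patch is installed.
check_vcs_daemons() {
    for daemon in had hashadow CmdServer; do
        if pgrep -x "$daemon" >/dev/null 2>&1; then
            echo "$daemon is still running"
        else
            echo "$daemon is stopped"
        fi
    done
}

check_vcs_daemons
```

All three daemons should report "stopped" before you proceed to the installation steps below.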

Installing the Patch
--------------------
Perform the following steps:
1. Uncompress the downloaded patch from Symantec.
   Change the directory to the uncompressed patch location. 
   Install the VRTSvcsag patch using the following command:
      i. For RHEL5:
                # rpm -Uvh VRTSvcsag-6.0.300.100-GA_RHEL5.i686.rpm
     ii. For RHEL6:
                # rpm -Uvh VRTSvcsag-6.0.300.100-GA_RHEL6.i686.rpm
    iii. For SLES10:
                # rpm -Uvh VRTSvcsag-6.0.300.100-GA_SLES10.i586.rpm
     iv. For SLES11:
                # rpm -Uvh VRTSvcsag-6.0.300.100-GA_SLES11.i686.rpm

2. Run the following command to verify that the new patch has been installed:
    # rpm -q VRTSvcsag
   If the patch is installed correctly, the following output is displayed:

      i. For RHEL5:
                # VRTSvcsag-6.0.300.100-GA_RHEL5.i686
     ii. For RHEL6:
                # VRTSvcsag-6.0.300.100-GA_RHEL6.i686
    iii. For SLES10:
                # VRTSvcsag-6.0.300.100-GA_SLES10.i586
     iv. For SLES11:
                # VRTSvcsag-6.0.300.100-GA_SLES11.i686
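
   The query output can also be checked non-interactively, for example from a
   post-install script. The sketch below is a hypothetical helper (not part of
   the patch) that matches the output of rpm -q VRTSvcsag against the expected
   patch version:

```shell
# Hypothetical helper: succeed if the queried package string carries
# the 6.0.300.100 patch version, regardless of the platform suffix.
is_patched() {
    case "$1" in
        VRTSvcsag-6.0.300.100-*) return 0 ;;
        *)                       return 1 ;;
    esac
}

# Example usage on a node (prints one line either way):
if is_patched "$(rpm -q VRTSvcsag 2>/dev/null)"; then
    echo "patch 6.0.300.100 installed"
else
    echo "patch not installed"
fi
```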

    
Re-starting VCS on the cluster node
-----------------------------------

1. To start the cluster services on all cluster nodes, execute the following 
   command first on one node:
    # hastart

   On all the other nodes, start VCS by executing the hastart command after the 
   first node goes to LOCAL_BUILD or RUNNING state.

2. Make VCS cluster writable.
     # haconf -makerw

3. Unfreeze all the groups.
     # hagrp -unfreeze [group] -persistent
     # haconf -dump -makero


REMOVING THE PATCH
------------------
Removing the patch removes the whole package from the system/node. To go back 
to a previously installed version of the package, you may need to re-install 
it. Perform the following steps on all the VCS cluster nodes:

To remove the patch from a cluster node:
---------------------------------------------
1. Freeze all the service groups persistently.
    # haconf -makerw
    # hagrp -freeze [group] -persistent

2. Stop VCS on the node by following the steps provided in the section 
   "Stopping VCS on the cluster node".

3. Remove the patch by using the following command:
    # rpm -e VRTSvcsag

4. Verify that the patch has been removed from the system:
    # rpm -qa | grep VRTSvcsag
   Ensure that the VRTSvcsag package is not displayed. This confirms 
   that the package is removed.

5. Install the VRTSvcsag 6.0.2 or 6.0.3 package from the installation media.

6. To start the cluster services on all cluster 
   nodes, execute the following command first on one node:
    # hastart

   On all the other nodes, start VCS by executing hastart after the 
   first node goes to LOCAL_BUILD or RUNNING state.

7. Unfreeze all the groups.
     # hagrp -unfreeze [group] -persistent
     # haconf -dump -makero


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE




