vm-sles11_x86_64-5.1SP1RP1P1
Obsolete
The latest patch(es) : sfha-sles11_x86_64-5.1SP1RP4 

 Basic information
Release type: P-patch
Release date: 2011-02-24
OS update support: None
Technote: None
Documentation: None
Popularity: 876 viewed    downloaded
Download size: 17.28 MB
Checksum: 4164901892

 Applies to one or more of the following products:
Dynamic Multi-Pathing 5.1SP1 On SLES11 x86-64
Storage Foundation 5.1SP1 On SLES11 x86-64
Storage Foundation Cluster File System 5.1SP1 On SLES11 x86-64
Storage Foundation Cluster File System for Oracle RAC 5.1SP1 On SLES11 x86-64
Storage Foundation for Oracle RAC 5.1SP1 On SLES11 x86-64
Storage Foundation HA 5.1SP1 On SLES11 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
sfha-sles11_x86_64-5.1SP1RP4 2013-08-21
sfha-sles11_x86_64-5.1SP1RP3 (obsolete) 2012-10-02
vm-sles11_x86_64-5.1SP1RP2P3 (obsolete) 2012-06-13
vm-sles11_x86_64-5.1SP1RP2P2 (obsolete) 2011-10-28
vm-sles11_x86_64-5.1SP1RP2P1 (obsolete) 2011-10-19
sfha-sles11_x86_64-5.1SP1RP2 (obsolete) 2011-09-28

This patch supersedes the following patches: Release date
vm-sles11_x86_64-5.1SP1P1 (obsolete) 2010-12-01

This patch requires: Release date
sfha-sles11_x86_64-5.1SP1RP1 (obsolete) 2011-02-11

 Fixes the following incidents:
2254624, 2254625, 2254626, 2254627, 2254629, 2254630, 2254631, 2268691

 Patch ID:
VRTSvxvm-5.1.101.100-SP1RP1P1_SLES11

Readme file
                          * * * READ ME * * *
             * * * Veritas Volume Manager 5.1 SP1 RP1 * * *
                          * * * P-patch * * *
                         Patch Date: 2011.02.23


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 5.1 SP1 RP1 P-patch


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Volume Manager 5.1 SP1 RP1
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1 RP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1 RP1
   * Veritas Storage Foundation 5.1 SP1 RP1
   * Veritas Storage Foundation High Availability 5.1 SP1 RP1
   * Veritas Storage Foundation Cluster File System for Oracle RAC 5.1 SP1 RP1
   * Veritas Dynamic Multi-Pathing 5.1 SP1 RP1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL5 x86-64
SLES10 x86-64
SLES11 x86-64


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

   * 2254624 (Tracking ID: 2080730)                                     

     SYMPTOM: On Linux, exclusion of devices using the "vxdmpadm        
     exclude" CLI is not persistent across reboots.                     

     DESCRIPTION: On Linux, names of OS devices (/dev/sd*) are not      
     persistent. The "vxdmpadm exclude" CLI uses the OS device names to 
     keep track of devices to be excluded by VxVM/DMP. As a result, on  
     reboot, if the OS device names change, then the devices which are  
     intended to be excluded will be included again.                    

     RESOLUTION: The resolution is to use persistent physical path names
     to keep track of the devices that have been excluded.              
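
     Device exclusion is performed with the "vxdmpadm exclude" CLI; the
     following invocation is only illustrative (the device name "sdc" is
     a placeholder, and the exact attributes accepted may vary -- see
     the vxdmpadm(1M) man page for your release):

     # vxdmpadm exclude vxvm path=sdc

     With this fix, the exclusion is recorded against the persistent
     physical path of the device rather than the OS device name, so it
     continues to apply even if the device is renamed after a reboot.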

   * 2254625 (Tracking ID: 2152830)                                     

     SYMPTOM: Storage administrators sometimes create multiple
     copies/clones of the same device. Disk group import fails with a
     non-descriptive error message when multiple copies (clones) of the
     same device exist and the original device(s) are either offline or
     not available:

     # vxdg import mydg
     VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import failed:
     No valid disk found containing disk group

     DESCRIPTION: If the original devices are offline or unavailable,
     "vxdg import" picks up cloned disks for the import. By design, the
     import fails unless the clones are tagged and the tag is specified
     during the import. While the import failure is expected, the error
     message is non-descriptive and does not suggest any corrective
     action to the user.

     RESOLUTION: A fix has been added to report a correct error message
     when duplicate clones exist during import. Details of the duplicate
     clones are also reported in the syslog. Example:

     [At CLI level]
     # vxdg import testdg
     VxVM vxdg ERROR V-5-1-10978 Disk group testdg: import failed:
     DG import duplcate clone detected

     [In syslog]
     vxvm:vxconfigd: warning V-5-1-0 Disk Group import failed: Duplicate
     clone disks are detected, please follow the vxdg (1M) man page to
     import disk group with duplicate clone disks. Duplicate clone disks
     are: c2t20210002AC00065Bd0s2 : c2t50060E800563D204d1s2
     c2t50060E800563D204d0s2 : c2t50060E800563D204d1s2
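
     As described above, importing from cloned disks requires tagging
     the clones and naming the tag during the import. The commands below
     are a minimal sketch (disk group, disk and tag names are
     placeholders; refer to the vxdg(1M) and vxdisk(1M) man pages for
     the exact options in your release):

     # vxdisk settag myclonetag=on c2t50060E800563D204d1s2
     # vxdg -o useclonedev=on -o tag=myclonetag -o updateid import mydg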

   * 2254626 (Tracking ID: 2202710)                                     

     SYMPTOM: Transactions on an rlink are not allowed while an SRL to
     DCM flush is in progress.

     DESCRIPTION: The existing implementation does not allow an rlink
     transaction to go through while an SRL to DCM flush is in progress.
     When the SRL overflows, VVR starts reading from the SRL and marks
     the dirty regions in the corresponding DCMs of the data volumes;
     this is called an SRL to DCM flush. While the flush is in progress,
     transactions on the rlink are not allowed. The time to complete the
     flush depends on the SRL size and can range from minutes to many
     hours. If the user initiates any transaction on the rlink, it hangs
     until the SRL flush completes.

     RESOLUTION: The code behavior was changed to allow rlink
     transactions during an SRL flush. The fix stops the SRL flush so
     that the transaction can go ahead, and restarts the flush after the
     transaction completes.
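
     An example of a transaction-triggering operation that could
     previously hang while an SRL to DCM flush was in progress is
     pausing an rlink (disk group and rlink names are placeholders):

     # vxrlink -g mydg pause rlk_secondary

     With this fix, such a command proceeds by temporarily stopping the
     flush, and the flush resumes once the transaction completes.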

   * 2254627 (Tracking ID: 2233889)                                     

     SYMPTOM: The volume recovery happens in a serial fashion when any  
     of the volumes has a log volume attached to it.                    

     DESCRIPTION: When recovery is initiated on a disk group, vxrecover
     creates a list for each type of volume, such as cache volumes, data
     volumes and log volumes. The log volumes are recovered serially by
     design. Due to a bug, the data volumes were added to the log volume
     list whenever a log volume existed, so the data volumes were also
     recovered serially if any of the volumes had a log volume attached.

     RESOLUTION: The code was fixed so that the data volume list, cache
     volume list and log volume list are maintained separately and the
     data volumes are not added to the log volume list. The recovery for
     the volumes in each list is done in parallel.
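
     Recovery is typically initiated with vxrecover; an illustrative
     invocation for a disk group whose volumes have log volumes attached
     (the disk group name is a placeholder):

     # vxrecover -b -g mydg -s

     With this fix, the data volumes in the group are recovered in
     parallel even though the log volumes themselves are still recovered
     serially.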

   * 2254629 (Tracking ID: 2197254)                                     

     SYMPTOM: vxassist, the VxVM volume creation utility, does not work
     as expected when creating a volume with "logtype=none".

     DESCRIPTION: While creating volumes on thinrclm disks, a Data
     Change Object (DCO) version 20 log is attached to every volume by
     default. If the user does not want this default behavior, the
     "logtype=none" option can be specified as a parameter to the
     vxassist command. However, with VxVM on HP 11.31 this option does
     not work and the DCO version 20 log is still created by default.
     The reason for this inconsistency is that when "logtype=none" is
     specified, the utility sets a flag to prevent creation of the log,
     but VxVM was not checking whether the flag was set before creating
     the DCO log, which led to this issue.

     RESOLUTION: This logical issue is addressed by a code fix. The
     solution is to check the corresponding "logtype=none" flag before
     creating the DCO version 20 log by default.
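
     An illustrative use of the option (disk group and volume names are
     placeholders):

     # vxassist -g mydg make datavol 10g logtype=none

     With the fix, the resulting volume is created without the default
     DCO version 20 log; this can be confirmed with
     "vxprint -g mydg -ht datavol".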

   * 2254630 (Tracking ID: 2234821)                                     

     SYMPTOM: The host fails to re-enable the device after the device   
     comes back online.                                                 

     DESCRIPTION: When the device goes offline, the OS device gets
     deleted, and when the same device comes back online with a new
     device name, DMP fails to re-enable the old dmpnodes. The code
     enhancements made this work on SLES11; however, the same changes
     failed on RHEL5 because the udev environment variable {DEVTYPE}
     and the $name substitution were not set correctly on RHEL5.

     RESOLUTION: The resolution is to remove ENV{DEVTYPE}=="disk" and
     replace $name with %k in the udev rules; this is certified to work
     on both SLES11 and RHEL5.
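
     The udev rule fragment below is purely illustrative; the helper
     program name is a hypothetical placeholder, not the literal rule
     shipped in the VRTSvxvm package. It only shows the nature of the
     change: dropping the ENV{DEVTYPE} match and substituting %k (the
     kernel device name) instead of $name.

     # Before: relied on ENV{DEVTYPE} and $name, which are not set
     # correctly by the older udev on RHEL5
     ACTION=="add", KERNEL=="sd*", ENV{DEVTYPE}=="disk", RUN+="/path/to/dmp_helper $name"

     # After: match on the kernel name only and substitute %k
     ACTION=="add", KERNEL=="sd*", RUN+="/path/to/dmp_helper %k"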

   * 2254631 (Tracking ID: 2240056)                                     

     SYMPTOM: 'vxdg move/split/join' may fail during high I/O load.     

     DESCRIPTION: During heavy I/O load, a 'dg move' transaction may
     fail because of an open/close assertion, in which case it is
     retried. Because the retry limit is set to 30, 'dg move' fails if
     the retries hit the limit.

     RESOLUTION: The default transaction retry count was changed to be
     unlimited, and a new option was introduced for 'vxdg
     move/split/join' to set the transaction retry limit, as follows:

     vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit]
          move src_diskgroup dst_diskgroup objects ...

     vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit]
          split src_diskgroup dst_diskgroup objects ...

     vxdg [-f] [-o verify|override] [-o transretry=retrylimit]
          join src_diskgroup dst_diskgroup
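
     For example, to allow up to 100 transaction retries during a move
     (disk group and volume names are placeholders):

     # vxdg -o expand -o transretry=100 move srcdg dstdg vol01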

   * 2268691 (Tracking ID: 2248730)                                     

     SYMPTOM: The command hangs if "vxdg import" is called from a script
     with STDERR redirected.

     DESCRIPTION: If a script invokes "vxdg import" with STDERR
     redirected, the script does not finish until the disk group import
     and recovery are complete. The pipe between the script and
     vxrecover is not closed properly, which keeps the calling script
     waiting for vxrecover to complete.

     RESOLUTION: STDERR is closed in vxrecover and its output is
     redirected to /dev/console.
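
     A minimal sketch of the kind of script that used to hang (the disk
     group and log file names are placeholders):

     #!/bin/sh
     # Prior to this fix, the redirected STDERR kept the pipe to the
     # background vxrecover open, so the script waited here until
     # recovery completed.
     vxdg import mydg 2>> /var/log/myimport.log
     echo "import done"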


INSTALLING THE PATCH
--------------------
rhel4_x86_64 : 
# rpm -Uhv VRTSvxvm-5.1.101.100-SP1RP1P1_RHEL4.x86_64.rpm

rhel5_x86_64 : 
# rpm -Uhv VRTSvxvm-5.1.101.100-SP1RP1P1_RHEL5.x86_64.rpm 

sles10_x86_64 : 
# rpm -Uhv VRTSvxvm-5.1.101.100-SP1RP1P1_SLES10.x86_64.rpm 

sles11_x86_64 : 
# rpm -Uhv VRTSvxvm-5.1.101.100-SP1RP1P1_SLES11.x86_64.rpm
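
To verify that the patch package is installed, query the rpm; the
reported version string is expected to match the Patch ID above, for
example on SLES11:

# rpm -q VRTSvxvm
VRTSvxvm-5.1.101.100-SP1RP1P1_SLES11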


REMOVING THE PATCH
------------------
# rpm -e  <rpm-name>
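
For this patch the affected package is VRTSvxvm, so for example:

# rpm -e VRTSvxvm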


SPECIAL INSTRUCTIONS
--------------------
NONE