This page lists publicly released patches for Veritas Enterprise Products.
For the product GA build, see the Veritas Entitlement Management System (VEMS) by clicking the 'Licensing' option on Veritas Support.
For information on private patches, contact Veritas Technical Support.
For NetBackup Enterprise Server and NetBackup Server patches, see the NetBackup Downloads.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vm-aix-5.1SP1RP1P1
Obsolete
The latest patch(es): sfha-aix-5.1SP1RP4

 Basic information
Release type: P-patch
Release date: 2011-02-24
OS update support: None
Technote: None
Documentation: None
Popularity: 1092 viewed    66 downloaded
Download size: 25.71 MB
Checksum: 1825341845

 Applies to one or more of the following products:
Dynamic Multi-Pathing 5.1SP1 On AIX 5.3
Dynamic Multi-Pathing 5.1SP1 On AIX 6.1
Storage Foundation 5.1SP1 On AIX 5.3
Storage Foundation 5.1SP1 On AIX 6.1
Storage Foundation Cluster File System 5.1SP1 On AIX 5.3
Storage Foundation Cluster File System 5.1SP1 On AIX 6.1
Storage Foundation for Oracle RAC 5.1SP1 On AIX 5.3
Storage Foundation for Oracle RAC 5.1SP1 On AIX 6.1
Storage Foundation HA 5.1SP1 On AIX 5.3
Storage Foundation HA 5.1SP1 On AIX 6.1

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
sfha-aix-5.1SP1RP4 2013-08-21
vm-aix-5.1SP1RP3P1 (obsolete) 2012-11-16
sfha-aix-5.1SP1RP3 (obsolete) 2012-10-02
vm-aix-5.1SP1RP2P3 (obsolete) 2012-06-13
vm-aix-5.1SP1RP2P2 (obsolete) 2011-10-28
vm-aix-5.1SP1RP2P1 (obsolete) 2011-10-19
sfha-aix-5.1SP1RP2 (obsolete) 2011-09-28

This patch supersedes the following patches: Release date
vm-aix-5.1SP1P2 (obsolete) 2011-01-12
vm-aix-5.1SP1P1 (obsolete) 2010-12-01

This patch requires: Release date
sfha-aix-5.1SP1RP1 (obsolete) 2011-02-11

 Fixes the following incidents:
2257568, 2257570, 2257571, 2257575, 2257644, 2257681, 2267384

 Patch ID:
VRTSvxvm-05.01.0101.0100

 Readme file
                          * * * READ ME * * *
             * * * Veritas Volume Manager 5.1 SP1 RP1 * * *
                         * * * P-patch 1 * * *
                         Patch Date: 2011.05.25


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 5.1 SP1 RP1 P-patch 1


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Volume Manager 5.1 SP1 RP1
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1 RP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1 RP1
   * Veritas Storage Foundation 5.1 SP1 RP1
   * Veritas Storage Foundation High Availability 5.1 SP1 RP1
   * Veritas Dynamic Multi-Pathing 5.1 SP1 RP1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
AIX


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 5.1.101.100

* 2257568 (Tracking ID: 2152830)

SYMPTOM:
Storage administrators sometimes create multiple copies (clones) of the same
device. Disk group import fails with a non-descriptive error message when
multiple clones of the same device exist and the original device(s) are
offline or unavailable.

# vxdg import mydg
VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import failed: 
No valid disk found containing disk group

DESCRIPTION:
If the original devices are offline or unavailable, vxdg import picks up
cloned disks for the import. By design, the DG import fails unless the
clones are tagged and the tag is specified during the import. While the
import failure is expected, the error message is non-descriptive and does
not suggest any corrective action for the user.

RESOLUTION:
A fix has been added to give the correct error message when duplicate clones
exist during import. Details of the duplicate clones are also reported in
the syslog.

Example:

[At CLI level]
# vxdg import testdg             
VxVM vxdg ERROR V-5-1-10978 Disk group testdg: import failed:
DG import duplcate clone detected

[In syslog]
vxvm:vxconfigd: warning V-5-1-0 Disk Group import failed: Duplicate clone disks are
detected, please follow the vxdg (1M) man page to import disk group with
duplicate clone disks. Duplicate clone disks are: c2t20210002AC00065Bd0s2 :
c2t50060E800563D204d1s2  c2t50060E800563D204d0s2 : c2t50060E800563D204d1s2
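As the new syslog message suggests, such a disk group can still be imported by using the clone-import options documented in the vxdg(1M) man page. A hedged sketch follows; "testdg" is the example disk group from above, and the commands run only where the VxVM CLI is actually installed:

```shell
#!/bin/sh
# Hedged sketch of importing a disk group from duplicate clone disks,
# per the vxdg(1M) man page referenced in the syslog message above.
# "testdg" is a placeholder; flags are the documented clone-import
# options, not behavior introduced by this patch.
if command -v vxdg >/dev/null 2>&1; then
    # Import using the clone device set and regenerate the DG ID so
    # the clones do not collide with the original disks' identity.
    vxdg -o useclonedev=on -o updateid import testdg
    result_clone="import attempted"
else
    result_clone="vxdg not present; command shown for reference only"
fi
echo "$result_clone"
```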

* 2257570 (Tracking ID: 2202710)

SYMPTOM:
Transactions on Rlink are not allowed during SRL to DCM flush.

DESCRIPTION:
The current implementation does not allow an rlink transaction to proceed
while an SRL-to-DCM flush is in progress. When the SRL overflows, VVR starts
reading from the SRL and marks the dirty regions in the corresponding DCMs
of the data volumes; this is called an SRL-to-DCM flush. Transactions on the
rlink are not allowed during the flush. The time to complete the flush
depends on the SRL size and can range from minutes to many hours. If the
user initiates any transaction on the rlink, it hangs until the flush
completes.

RESOLUTION:
Changed the code to allow rlink transactions during an SRL flush. The fix
pauses the SRL flush to let the transaction go ahead and restarts the flush
after the transaction completes.

* 2257571 (Tracking ID: 2197254)

SYMPTOM:
vxassist, the VxVM volume creation utility, does not function as expected
when creating a volume with "logtype=none".

DESCRIPTION:
While creating volumes on thinrclm disks, a Data Change Object (DCO) version
20 log is attached to every volume by default. If the user does not want
this default behavior, the "logtype=none" option can be specified as a
parameter to the vxassist command. However, with VxVM on HP-UX 11.31, this
option does not work and a DCO version 20 log is created by default. The
reason for this inconsistency is that when "logtype=none" is specified, the
utility sets a flag to prevent creation of the log, but VxVM was not
checking whether the flag was set before creating the DCO log, which led to
this issue.

RESOLUTION:
This logical issue is addressed by a code fix: the flag corresponding to
"logtype=none" is now checked before the DCO version 20 log is created by
default.
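For reference, the option in question is passed on the vxassist command line. A hedged sketch, using placeholder disk group and volume names and guarded so it only runs where VxVM is installed:

```shell
#!/bin/sh
# Hedged example of the "logtype=none" vxassist parameter discussed
# above. "mydg" and "vol01" are placeholder names; the syntax follows
# standard vxassist(1M) usage.
if command -v vxassist >/dev/null 2>&1; then
    # Create a 1 GB volume without attaching the default DCO v20 log.
    vxassist -g mydg make vol01 1g logtype=none
    result_vxassist="volume creation attempted"
else
    result_vxassist="vxassist not present; command shown for reference only"
fi
echo "$result_vxassist"
```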

* 2257575 (Tracking ID: 2240056)

SYMPTOM:
'vxdg move/split/join' may fail during high I/O load.

DESCRIPTION:
During heavy I/O load, a 'dg move' transaction may fail because of an
open/close assertion, and a retry is attempted. Because the retry limit is
set to 30, 'dg move' fails if the retry count hits the limit.

RESOLUTION:
Changed the default transaction retry to unlimited and introduced a new
option to 'vxdg move/split/join' to set the transaction retry limit, as
follows:

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] move 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] split 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o transretry=retrylimit] join src_diskgroup 
dst_diskgroup
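A hedged usage sketch of the new option; the disk group and object names are placeholders and the retry limit of 50 is arbitrary, chosen only for illustration:

```shell
#!/bin/sh
# Hedged example of the -o transretry option described above.
# "srcdg", "dstdg", and "vol01" are placeholders; 50 is an arbitrary
# retry limit. Runs only where the VxVM CLI is installed.
if command -v vxdg >/dev/null 2>&1; then
    # Move vol01 from srcdg to dstdg, retrying the transaction at
    # most 50 times under I/O load instead of the old fixed 30.
    vxdg -o expand -o transretry=50 move srcdg dstdg vol01
    result_move="move attempted"
else
    result_move="vxdg not present; command shown for reference only"
fi
echo "$result_move"
```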

* 2257644 (Tracking ID: 2233889)

SYMPTOM:
The volume recovery happens in a serial fashion when any of the volumes has a
log volume attached to it.

DESCRIPTION:
When recovery is initiated on a disk group, vxrecover creates lists of each type
of volumes such as cache volume, data volume, log volume etc. The log volumes
are recovered in a serial fashion by design. Due to a bug the data volumes are
added to the log volume list if there exists a log volume. Hence even the data
volumes were recovered in a serial fashion if any of the volumes has a log
volume attached.

RESOLUTION:
The code was fixed such that the data volume list, cache volume list and the log
volume list are maintained separately and the data volumes are not added to the
log volumes list. The recovery for the volumes in each list is done in parallel.

* 2257681 (Tracking ID: 2245121)

SYMPTOM:
Rlinks do not connect for NAT (Network Address Translations) configurations.

DESCRIPTION:
When VVR (Veritas Volume Replicator) is replicating over a Network Address 
Translation (NAT) based firewall, rlinks fail to connect resulting in 
replication failure.

Rlinks do not connect as there is a failure during exchange of VVR heartbeats.
For NAT based firewalls, conversion of mapped IPV6 (Internet Protocol Version 
6) address to IPV4 (Internet Protocol Version 4) address is not handled which 
caused VVR heartbeat exchange with incorrect IP address leading to VVR 
heartbeat failure.

RESOLUTION:
Code fixes have been made to appropriately handle the exchange of VVR 
heartbeats under NAT based firewall.

* 2267384 (Tracking ID: 2248730)

SYMPTOM:
A command hangs if "vxdg import" is called from a script with STDERR
redirected.

DESCRIPTION:
If a script runs "vxdg import" with STDERR redirected, the script does not
finish until the DG import and recovery are complete. The pipe between the
script and vxrecover is not closed properly, which keeps the calling script
waiting for vxrecover to complete.

RESOLUTION:
Closed STDERR in vxrecover and redirected the output to
/dev/console.
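The mechanics of this hang can be sketched in generic shell, with no VxVM needed: a caller that captures a command's output waits for all writers of the pipe to close it, so a background child (standing in for vxrecover) that inherits the script's descriptors keeps the caller blocked. Detaching the child's output, as the fix does for vxrecover's stderr, lets the caller return at once. Function names here are hypothetical stand-ins:

```shell
#!/bin/sh
# Generic-shell sketch of the hang described above. import_dg stands
# in for "vxdg import"; the background subshell stands in for the
# vxrecover it spawns.
import_dg() {
    echo "import done"
    # The long-running child detaches its stdout/stderr (the actual
    # patch redirects vxrecover's output to /dev/console), so the
    # caller's pipe is no longer held open for its 5-second lifetime.
    ( exec >/dev/null 2>&1; sleep 5 ) &
}
start=$(date +%s)
out=$(import_dg 2>&1)   # returns immediately, not after 5 seconds
end=$(date +%s)
elapsed=$((end - start))
echo "$out (returned after ${elapsed}s)"
```

Without the `exec >/dev/null 2>&1` line, the command substitution would block until the background child exits, which is the behavior this incident fixes.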


INSTALLING THE PATCH
--------------------
If the currently installed VRTSvxvm is below 5.1.101.0 level,
upgrade VRTSvxvm to 5.1.101.0 level before installing this patch.

AIX maintenance levels and APARs can be downloaded from the IBM web site:

 http://techsupport.services.ibm.com

1. Since the patch process will configure the new kernel extensions, ensure that no VxVM volumes are in use, open, or mounted before starting the installation procedure.

2. Check whether root support or DMP native support is enabled. If it is enabled, it will be retained after patch upgrade.

# vxdmpadm gettune dmp_native_support


If the current value is "on", DMP native support is enabled on this machine.

# vxdmpadm native list vgname=rootvg

If the output lists hdisks, root support is enabled on this machine.
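The two checks in step 2 can be bundled into a small helper script; this is a hedged convenience sketch using only the commands shown above, guarded so it merely reports when the VxVM DMP CLI is not installed:

```shell
#!/bin/sh
# Hedged helper bundling the step-2 checks for DMP native support and
# root support; runs the documented vxdmpadm commands only where they
# exist.
if command -v vxdmpadm >/dev/null 2>&1; then
    vxdmpadm gettune dmp_native_support
    vxdmpadm native list vgname=rootvg
    result_check="checks run"
else
    result_check="vxdmpadm not present; commands shown for reference only"
fi
echo "$result_check"
```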

3.
a. Before applying this VxVM 5.1 SP1RP1P1 patch, stop the VEA Server's vxsvc process:
     # /opt/VRTSob/bin/vxsvcctrl stop

b. To apply this patch, use the following command:
      # installp -ag -d ./VRTSvxvm.bff VRTSvxvm

c. To apply and commit this patch, use the following command:
     # installp -acg -d ./VRTSvxvm.bff VRTSvxvm
NOTE: Refer to the installp(1M) man page for an explanation of the APPLY and COMMIT states of the package/patch.

d. Reboot the system to complete the patch upgrade.
     # reboot

e. Confirm that the point patch is installed:
# lslpp -hac VRTSvxvm | tail -1
/etc/objrepos:VRTSvxvm:5.1.101.100::APPLY:COMPLETE:01/02/11:10;56;11

f. If root support or DMP native support was enabled in step 2, verify that it is retained after completing the patch upgrade:
# vxdmpadm gettune dmp_native_support
# vxdmpadm native list vgname=rootvg


REMOVING THE PATCH
------------------
1. Check whether root support or DMP native support is enabled or not:

      # vxdmpadm gettune dmp_native_support

If the current value is "on", DMP native support is enabled on this machine.

      # vxdmpadm native list vgname=rootvg

If the output lists hdisks, root support is enabled on this machine.

If disabled: go to step 3.
If enabled: go to step 2.

2. If root support or DMP native support is enabled:

 a. If DMP native support is enabled, it is essential to disable it.
Run the following command to disable DMP native support as well as root support:
      # vxdmpadm settune dmp_native_support=off

b. If only root support is enabled, run the following command to disable root support:
      # vxdmpadm native disable vgname=rootvg

c. Reboot the system
      # reboot

d. Before backing out the patch, stop the VEA server's vxsvc process:
     # /opt/VRTSob/bin/vxsvcctrl stop

3. To reject the patch if it is in the "APPLIED" state, use the following command, and then re-enable DMP native support if it was enabled earlier:
     # installp -r VRTSvxvm 5.1.101.100

4. Reboot the system:
     # reboot


SPECIAL INSTRUCTIONS
--------------------
NONE


