vm-aix-6.0.1.200
Obsolete
The latest patch(es): sfha-aix-6.0.5

 Basic information
Release type: Patch
Release date: 2012-10-10
OS update support: None
Technote: None
Documentation: None
Popularity: 1824 viewed
Download size: 79.36 MB
Checksum: 436257761

 Applies to one or more of the following products:
Dynamic Multi-Pathing 6.0.1 On AIX 6.1
Dynamic Multi-Pathing 6.0.1 On AIX 7.1
Storage Foundation 6.0.1 On AIX 6.1
Storage Foundation 6.0.1 On AIX 7.1
Storage Foundation Cluster File System 6.0.1 On AIX 6.1
Storage Foundation Cluster File System 6.0.1 On AIX 7.1
Storage Foundation for Oracle RAC 6.0.1 On AIX 6.1
Storage Foundation for Oracle RAC 6.0.1 On AIX 7.1
Storage Foundation HA 6.0.1 On AIX 6.1
Storage Foundation HA 6.0.1 On AIX 7.1

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:    Release date
vm-aix-6.0.3.200 (obsolete)                     2013-12-18
vm-aix-6.0.3.100 (obsolete)                     2013-09-13
sfha-aix-6.0.3 (obsolete)                       2013-01-31

 Fixes the following incidents:
2860207, 2876865, 2892499, 2892621, 2892643, 2892650, 2892660, 2892689, 2892702, 2909847, 2911273, 2922798, 2924117, 2924188, 2924207, 2933468, 2933469, 2933905, 2934259, 2942166

 Patch ID:
VRTSvxvm-06.00.0100.0200

Readme file
                          * * * READ ME * * *
                * * * Veritas Volume Manager 6.0.1 * * *
                      * * * Public Hot Fix 2 * * *
                         Patch Date: 2012-10-09


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 6.0.1 Public Hot Fix 2


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
AIX 6.1 ppc
AIX 7.1 ppc


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation Cluster File System 6.0.1
   * Veritas Storage Foundation 6.0.1
   * Veritas Storage Foundation High Availability 6.0.1
   * Veritas Dynamic Multi-Pathing 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.0.100.200
* 2860207 (2859470) An EMC SRDF (Symmetrix Remote Data Facility) R2 disk with an EFI label is not 
recognized by VxVM (Veritas Volume Manager) and is shown in the error state.
* 2876865 (2510928) The extended attributes for the EMC SRDF LUNs are reported by "vxdisk -e list" 
as "tdev mirror" instead of "tdev srdf-r1".
* 2892499 (2149922) Disk group import and deport events are now recorded in syslog.
* 2892621 (1903700) Removing a mirror using vxassist does not work.
* 2892643 (2801962) Growing a volume takes significantly longer when the volume has a version 20 
DCO attached to it.
* 2892650 (2826125) The VxVM script daemon is terminated abnormally on its invocation.
* 2892660 (2000585) vxrecover does not start the remaining volumes if one of the volumes is removed
while the vxrecover command is running.
* 2892689 (2836798) In VxVM, resizing a simple-format EFI disk fails and causes a system panic/hang.
* 2892702 (2567618) VRTSexplorer dumps core in checkhbaapi/print_target_map_entry.
* 2909847 (2882908) The machine fails to boot with the error "PReP-BOOT : Unable to load full PReP image".
* 2911273 (2879248) 'vxdisk scandisks' hangs on a VIO client with dmp_native_support enabled.
* 2922798 (2878876) vxconfigd dumps core in vol_cbr_dolog() due to a race between two threads processing requests from the same client.
* 2924117 (2911040) A restore from a cascaded snapshot leaves the volume in an
unusable state if any cascaded snapshot is in the detached state.
* 2924188 (2858853) After a master switch, vxconfigd dumps core on the old master.
* 2924207 (2886402) When devices are re-configured, a vxconfigd hang is observed.
* 2933468 (2916094) Enhancements have been made to the Dynamic Reconfiguration Tool (DR Tool) to 
create a separate log file every time the DR Tool is started, to display a message if 
a command takes a long time, and to not list devices controlled by a TPD 
(Third Party Driver) in the 'Remove Luns' option of the DR Tool.
* 2933469 (2919627) The Dynamic Reconfiguration Tool has been enhanced to remove LUNs in bulk.
* 2933905 (2934729) VxVM functionality gets enabled in the Virtual I/O Server (VIOS) after a
backup image is restored.
* 2934259 (2930569) LUNs in the 'error' state in the output of 'vxdisk list' cannot be removed through
the DR (Dynamic Reconfiguration) Tool.
* 2942166 (2942609) The message displayed when the user quits a Dynamic Reconfiguration operation is
shown as an error message.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: 6.0.100.200

* 2860207 (Tracking ID: 2859470)

SYMPTOM:
The EMC SRDF-R2 disk may go into the error state when you create an EFI label on 
the R1 disk. For example:

R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1

R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2

DESCRIPTION:
Since R2 disks are in write protected mode, the default open() call (made for 
read-write mode) fails for the R2 disks, and the disk is marked as invalid.

RESOLUTION:
As a fix, DMP was changed to be able to read the EFI label even on a write 
protected SRDF-R2 disk.

* 2876865 (Tracking ID: 2510928)

SYMPTOM:
The extended attributes reported by "vxdisk -e list" for the EMC SRDF luns are 
reported as "tdev mirror", instead of "tdev srdf-r1". Example,

# vxdisk -e list 
DEVICE       TYPE           DISK        GROUP        STATUS              
OS_NATIVE_NAME   ATTR        
emc0_028b    auto:cdsdisk   -            -           online thin         
c3t5006048AD5F0E40Ed190s2 tdev mirror

DESCRIPTION:
The attributes of the EMC SRDF LUNs were not extracted properly. Hence, the EMC 
SRDF LUNs are erroneously reported as "tdev mirror" instead of "tdev srdf-r1".

RESOLUTION:
Code changes have been made to extract the correct values.

* 2892499 (Tracking ID: 2149922)

SYMPTOM:
Disk group import and deport events should be recorded in 
the /var/adm/ras/vxconfigd.log file.
The following type of message can be logged in syslog:
VxVM vxconfigd NOTICE  V-5-1-16252 Disk group deport of <dgname> succeeded.

DESCRIPTION:
On a disk group import or deport, an appropriate success message, or a failure 
message with the cause of the failure, should be logged.

RESOLUTION:
Code changes are made to log diskgroup import and deport events in 
syslog.
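
As an illustration only (not part of the official verification steps), you can
confirm that a deport was logged by searching the vxconfigd log for the message
ID shown above:

     # grep "V-5-1-16252" /var/adm/ras/vxconfigd.log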

* 2892621 (Tracking ID: 1903700)

SYMPTOM:
'vxassist remove mirror' does not work if nmirror and alloc are specified, and
gives the error "Cannot remove enough mirrors".
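
For illustration only, an invocation of the kind described above might look like
the following; the disk group, volume, and disk names are placeholders, and the
exact attribute syntax accepted by vxassist may differ from this sketch:

     # vxassist -g testdg remove mirror vol01 nmirror=1 alloc=testdg01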

DESCRIPTION:
During the remove mirror operation, VxVM does not perform a correct
analysis of the plexes, which causes this issue.

RESOLUTION:
Necessary code changes have been done so that vxassist works properly.

* 2892643 (Tracking ID: 2801962)

SYMPTOM:
Operations that grow a volume, including 'vxresize' and 'vxassist 
growby/growto', take significantly longer if the volume has a version 20 
DCO (Data Change Object) attached to it, compared to a volume that has no 
DCO attached.

DESCRIPTION:
When a volume with a DCO is grown, the existing map in the DCO needs to be copied 
and updated to track the grown regions. The algorithm was such that, for each 
region in the map, it would search for the page that contains that region in 
order to update the map. The number of regions and the number of pages containing 
them are both proportional to the volume size, so the search complexity is 
amplified and is observed primarily when the volume size is of the order of 
terabytes. In the reported instance, it took more than 12 minutes to grow a 2.7TB 
volume by 50G.

RESOLUTION:
The code has been enhanced to find the regions that are contained within a page 
and then avoid looking up the page again for each of those regions.
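
For reference, a grow operation of the kind affected is shown below with
placeholder names; with this fix, such a command on a multi-terabyte volume with
a version 20 DCO completes considerably faster:

     # vxresize -g testdg vol01 +50g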

* 2892650 (Tracking ID: 2826125)

SYMPTOM:
VxVM script daemons are not up after they are invoked with the vxvm-recover 
script.

DESCRIPTION:
When the VxVM script daemon starts, it terminates any stale instance that 
exists. If the script daemon is invoked with exactly the same process ID as the 
previous invocation, the daemon kills itself through a false-positive 
stale-instance detection and terminates abnormally.

RESOLUTION:
Code changes are made to handle the same process id situation correctly.
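
To confirm that the script daemons stay up after they are invoked, a check such
as the following can be used; the daemon names listed here are the usual VxVM
script daemons started by vxvm-recover and are an assumption, not taken from
this readme:

     # ps -ef | egrep 'vxrelocd|vxattachd|vxcached|vxconfigbackupd' | grep -v grep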

* 2892660 (Tracking ID: 2000585)

SYMPTOM:
If 'vxrecover -sn' is run and one volume is removed at the same time, vxrecover 
exits with the error 'Cannot refetch volume'; the exit status code is zero, but 
no volumes are started.

DESCRIPTION:
vxrecover assumes that the volume is missing because the disk group must have 
been deported while vxrecover was in progress, so it exits without starting the 
remaining volumes. vxrecover should be able to start the other volumes if the 
disk group is not deported.

RESOLUTION:
The code has been modified to skip the missing volume and proceed with the remaining volumes.
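
As a sketch of the expected behaviour after the fix (the disk group name is a
placeholder): even if a volume is removed while the command runs, the remaining
volumes in the disk group should still be started.

     # vxrecover -g testdg -sn
     # vxprint -g testdg -v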

* 2892689 (Tracking ID: 2836798)

SYMPTOM:
'vxdisk resize' fails with the following error on a simple-format EFI 
(Extensible Firmware Interface) disk that has been expanded from the array side, 
and the system may panic or hang after a few minutes.
 
# vxdisk resize disk_10
VxVM vxdisk ERROR V-5-1-8643 Device disk_10: resize failed:
Configuration daemon error -1

DESCRIPTION:
Because VxVM does not support Dynamic LUN Expansion on simple or sliced EFI 
disks, the last usable LBA (Logical Block Address) in the EFI header is not 
updated while the LUN is expanded. Since the header is not updated, the 
partition end entry is regarded as illegal and cleared as part of the partition 
range check. This inconsistent partition information between the kernel and the 
disk causes the system panic/hang.

RESOLUTION:
Added checks in VxVM code to prevent DLE on simple/sliced EFI disk.

* 2892702 (Tracking ID: 2567618)

SYMPTOM:
VRTSexplorer dumps core in checkhbaapi/print_target_map_entry with a stack that looks like:
print_target_map_entry()
check_hbaapi()
main()
_start()

DESCRIPTION:
The checkhbaapi utility uses the HBA_GetFcpTargetMapping() API, which returns the 
current set of mappings between operating system and Fibre Channel Protocol (FCP) 
devices for a given HBA port. The maximum limit for mappings was set to 512, and 
only that much memory was allocated. When the number of mappings returned was 
greater than 512, the function that prints this information tried to access 
entries beyond that limit, which resulted in core dumps.

RESOLUTION:
The code has been changed to allocate enough memory for all the mappings 
returned by HBA_GetFcpTargetMapping().

* 2909847 (Tracking ID: 2882908)

SYMPTOM:
While booting the machine with DMP support enabled for rootvg, the boot process 
stops and displays the following message on the console:
"PReP-BOOT : Unable to load full PReP image"

DESCRIPTION:
With newer AIX TL levels, the default boot image size is increasing, and after 
DMP support is enabled for rootvg, the boot image size exceeds the AIX limit of 
32 MB. In such cases AIX does not boot.

RESOLUTION:
The same vxdmpboot binary is now used for both AIX 6.1 and 7.1 to reduce the 
boot image size.

* 2911273 (Tracking ID: 2879248)

SYMPTOM:
With dmp_native_support enabled, when cfgmgr is run in parallel with 'vxdisk 
scandisks', vxconfigd hangs with the following stack:

ddl_native_enable_dmpnodes()
ddl_native_update_list()
dmp_native_auto_update()
ddl_scan_devices()
req_scan_disks()
request_loop()
main()

while the DMP device method cfgdmpdisk, which is invoked by cfgmgr, hangs with 
the following stack:

read()
breadv()
getresp()
retry()
vol_open()
build_devstruct()
main()

DESCRIPTION:
This is a deadlock: cfgdmpdisk holds the ODM (Object Data Manager) lock and is 
waiting for vxconfigd to respond, while vxconfigd is also waiting for the ODM 
lock.

RESOLUTION:
To avoid such a deadlock, code changes have been made in the DMP device methods 
to release the ODM lock before communicating with vxconfigd.

* 2922798 (Tracking ID: 2878876)

SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core with the following stack:

vol_cbr_dolog ()
vol_cbr_translog ()
vold_preprocess_request () 
request_loop ()
main     ()

DESCRIPTION:
This core dump is the result of a race between two threads that are processing 
requests from the same client. While one thread has completed processing a 
request and is releasing the memory it used, the other thread is processing a 
"DISCONNECT" request from the same client. Due to the race condition, the second 
thread attempts to access the memory that is being released and dumps core.

RESOLUTION:
The issue is resolved by protecting the common data of the client by a mutex.

* 2924117 (Tracking ID: 2911040)

SYMPTOM:
A restore operation from a cascaded snapshot succeeds even when one of its
sources is inaccessible. Subsequently, if the primary volume is made
accessible for operations, I/O operations may fail on the volume because the
source of the volume is inaccessible. Deleting the snapshots also fails because
the primary volume depends on them. In such a case, the following error is
thrown when you try to remove any snapshot using the 'vxedit rm' command:
"VxVM vxedit ERROR V-5-1-XXXX Volume YYYYYY has dependent volumes"

DESCRIPTION:
When a restore is performed from a snapshot, that snapshot becomes the source
of data for the regions on the primary volume that differ between the two
volumes. If the snapshot itself depends on some other volume and that volume is
not accessible, the primary volume effectively becomes inaccessible after the
restore operation. In such a case, the snapshots cannot be deleted because the
primary volume depends on them.

RESOLUTION:
If a snapshot or any later cascaded snapshot is inaccessible,
restore from that snapshot is prevented.
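
For context, a restore of the kind covered by this fix is typically issued as
shown below, with placeholder names; after the fix, the command is rejected if
the chosen snapshot, or any later cascaded snapshot it depends on, is
inaccessible:

     # vxsnap -g testdg restore vol01 source=snapvol01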

* 2924188 (Tracking ID: 2858853)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after a master switch, vxconfigd 
dumps core on the slave node (the old master) when a disk is removed from the 
disk group, with the following stack:

dbf_fmt_tbl()
voldbf_fmt_tbl()
voldbsup_format_record()
voldb_format_record()
format_write()
ddb_update()
dg_set_copy_state()
dg_offline_copy()
dasup_dg_unjoin()
dapriv_apply()
auto_apply()
da_client_commit()
client_apply()
commit()
dg_trans_commit()
slave_trans_commit()
slave_response()
fillnextreq()
vold_getrequest()
request_loop()
main()

DESCRIPTION:
During a master switch, the disk group configuration copy related flags are not 
cleared on the old master. Hence, when a disk is removed from a disk group, 
vxconfigd dumps core.

RESOLUTION:
Necessary code changes have been made to clear configuration copy related flags 
during master switch.

* 2924207 (Tracking ID: 2886402)

SYMPTOM:
When DMP devices are re-configured, typically using the 'vxdisk scandisks' 
command, a vxconfigd hang is observed. Since vxconfigd is hung, no VxVM (Veritas 
Volume Manager) commands are able to respond.

The following process stack of vxconfigd was observed:

dmp_unregister_disk
dmp_decode_destroy_dmpnode
dmp_decipher_instructions
dmp_process_instruction_buffer
dmp_reconfigure_db
gendmpioctl
dmpioctl
dmp_ioctl
dmp_compat_ioctl
compat_blkdev_ioctl
compat_sys_ioctl
cstar_dispatch

DESCRIPTION:
When a DMP (Dynamic Multi-Pathing) node is about to be destroyed, a flag is set 
to hold any I/O (read/write) on it. I/Os that come in between the setting of this 
flag and the actual destruction of the DMP node are placed in the DMP queue and 
are never served, so the hang is observed.

RESOLUTION:
An appropriate flag is now set for the node that is to be destroyed, so that any 
I/O issued after the flag is set is rejected, which avoids the hang condition.

* 2933468 (Tracking ID: 2916094)

SYMPTOM:
These are the issues for which the enhancements are made:
1. All DR operation logs accumulate in one log file, 'dmpdr.log', and 
this file grows very large.
2. If a command takes a long time, the user may think the DR operation is stuck.
3. Devices controlled by a TPD appear in the list of LUNs that can be removed
in the 'Remove Luns' operation.

DESCRIPTION:
1. The logs of all DR operations accumulate into one large log file, which 
makes it difficult for the user to find the logs of the current DR operation.
2. If a command takes time, the user has no way to know whether the command is 
stuck.
3. Devices controlled by a TPD are visible to the user, which suggests that those 
devices can be removed without first removing them from TPD control.

RESOLUTION:
1. Every time the user opens the DR Tool, a new log file of the form
dmpdr_yyyymmdd_HHMM.log is generated.
2. A message is displayed to inform the user if a command takes longer than 
expected.
3. Changes are made so that devices controlled by a TPD are not visible during DR
operations.

* 2933469 (Tracking ID: 2919627)

SYMPTOM:
While performing the 'Remove Luns' operation of the Dynamic Reconfiguration Tool,
there is no feasible way to remove a large number of LUNs, since the only way to
do so is to enter all the LUN names separated by commas.

DESCRIPTION:
When removing LUNs in bulk through the 'Remove Luns' option of the Dynamic
Reconfiguration Tool, it is not feasible to enter all the LUNs separated
by commas.

RESOLUTION:
Code changes have been made in the Dynamic Reconfiguration scripts to accept, as
input, a file containing the LUNs to be removed.
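
As an illustration of preparing such an input file (the file path is arbitrary,
and the exact file format expected by the DR Tool is not spelled out in this
readme), you could collect device names with vxdisk and then trim the list down
to the LUNs you intend to remove:

     # vxdisk -q list | awk '{print $1}' > /tmp/luns_to_remove.txt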

* 2933905 (Tracking ID: 2934729)

SYMPTOM:
After a backup image is restored on a Virtual I/O Server partition, VxVM
functionality gets enabled in the Virtual I/O Server (VIOS).

For example,

# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
emc0_0       auto:none       -            -            online invalid
emc0_3       auto:LVM        -            -            LVM
emc0_4       auto:cdsdisk    -            -            online
emc0_5       auto:cdsdisk    -            -            online
emc0_6       auto:none       -            -            online invalid
emc0_7       auto:cdsdisk    -            -            online
emc0_8       auto:cdsdisk    -            -            online
emc0_9       auto:cdsdisk    -            -            online
emc0_10      auto:cdsdisk    -            -            online

DESCRIPTION:
By default, Veritas Volume Manager (VxVM) functionality is disabled in the Virtual
I/O Server. When the VxVM package is installed in a Virtual I/O Server, it creates
the vol_disable_vm and dmp_aix_vios CuAt attributes. These attributes are used to
disable VxVM functionality in the Virtual I/O Server. After a backup image is
restored, these CuAt attributes need to be recreated. In the absence of these
attributes, VxVM functionality gets enabled and vxconfigd may not work correctly.

RESOLUTION:
Changes have been added to create the vol_disable_vm and dmp_aix_vios CuAt
attributes from the VxVM config methods if they are not already present. While a
backup image is being restored, the VxVM config methods get invoked and they
populate the vol_disable_vm and dmp_aix_vios CuAt attributes.
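
If needed, you can check whether these attributes exist in the ODM after a
restore; the odmget queries below follow standard AIX ODM usage and are an
assumption, not part of this patch's documented procedure:

     # odmget -q "attribute=vol_disable_vm" CuAt
     # odmget -q "attribute=dmp_aix_vios" CuAt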

* 2934259 (Tracking ID: 2930569)

SYMPTOM:
The LUNs in 'error' state in output of 'vxdisk list' cannot be removed through
DR(Dynamic Reconfiguration) Tool.

DESCRIPTION:
The LUNs seen in 'error' state in VM(Volume Manager) tree are not listed by
DR(Dynamic Reconfiguration) Tool while doing 'Remove LUNs' operation.

RESOLUTION:
Necessary changes have been made to display LUNs in error state while doing
'Remove LUNs' operation in DR(Dynamic Reconfiguration) Tool.
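
To see which LUNs are currently in the 'error' state before starting a 'Remove
LUNs' operation, a simple filter on the vxdisk output is sufficient:

     # vxdisk list | grep -w error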

* 2942166 (Tracking ID: 2942609)

SYMPTOM:
The following message is displayed as an error message when you quit the Dynamic
Reconfiguration Tool:
"FATAL: Exiting the removal operation."

DESCRIPTION:
When the user quits an operation, the Dynamic Reconfiguration Tool displays the
fact that it is quitting as an error message.

RESOLUTION:
Made changes to display the message as Info.



INSTALLING THE PATCH
--------------------
If the currently installed VRTSvxvm is below 6.0.100.0 level, 
upgrade VRTSvxvm to 6.0.100.0 level before installing this patch.

AIX maintenance levels and APARs can be downloaded from the IBM web site:

 http://techsupport.services.ibm.com

Patching may be performed either manually or with the use of the included 
installer.  To continue, select one of the methods below:

  Patch using installer --      Complete step 1, then go to step 4.
  Patch manually --     Complete step 1, then continue with steps 2-3.

1. Since the patch process will configure the new kernel extensions:
        a) Stop I/Os to all the VxVM volumes.
        b) Ensure that no VxVM volumes are in use, open, or mounted before starting the installation procedure (see the example checks after this step).
        c) Stop applications using any VxVM volumes.
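
The following commands are one suggested way to verify that nothing is still
using VxVM volumes before you proceed; they are a sketch rather than an official
checklist, and the mount point /data1 is only a placeholder:

        # mount | grep vxfs            (list mounted VxFS file systems, then unmount them)
        # fuser -cu /data1             (show processes still using a mounted file system)
        # vxprint -Aht -e v_open       (list any volumes that are still open)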

[METHOD name="Patch manually"]
2. Check whether root support or DMP native support is enabled. If it is enabled, it will be retained after the patch upgrade.

# vxdmpadm gettune dmp_native_support


If the current value is "on", DMP native support is enabled on this machine.

# vxdmpadm native list vgname=rootvg

If the output lists hdisks, root support is enabled on this machine.

3.
a. Before applying this VxVM 6.0.1.200 patch, stop the VEA Server's vxsvc process:
     # /opt/VRTSob/bin/vxsvcctrl stop

b. To apply this patch, use the following command:
      # installp -ag -d ./VRTSvxvm.bff VRTSvxvm

c. To apply and commit this patch, use the following command:
     # installp -acg -d ./VRTSvxvm.bff VRTSvxvm
NOTE: Refer to the installp(1M) man page for a clear understanding of the APPLY and COMMIT states of the package/patch.
d. Reboot the system to complete the patch upgrade.
     # reboot

e. Confirm that the point patch is installed:
# lslpp -hac VRTSvxvm | tail -1
/etc/objrepos:VRTSvxvm:6.0.100.200::APPLY:COMPLETE:07/10/11:10;56;11
f. If root support or DMP native support was enabled in step 2, verify that it is retained after completing the patch upgrade:
# vxdmpadm gettune dmp_native_support
# vxdmpadm native list vgname=rootvg
[/METHOD]

[METHOD name="Patch using installer"]
4. To apply the patch using the installer, enter the following commands: 

        # cd <hotfix_directory>  && ./installVM601P2  [ <node1> <node2>... ]
    
    where 
      <hotfix_directory>
                is the directory where you unpacked this hotfix
      <node1>, <node2>, ...
                are the nodes to be patched.  If none are specified, you
                will be prompted to enter them interactively.  All nodes
                must belong to a single cluster.
    
    For information about installer options, run  './installVM601P2 -help'.
    
[/METHOD]


REMOVING THE PATCH
------------------
If the patch was installed using the installer, you may choose to use the 
patch uninstall script which was created at that time.

[METHOD name="Patch removal using uninstaller"]

To remove the patch using the patch uninstall script, enter the command 

        # /opt/VRTS/install/uninstallVM601P2  [ <node1> <node2>... ]

[/METHOD]

[METHOD name="Manual patch removal"]

1. Check whether root support or DMP native support is enabled or not:

      # vxdmpadm gettune dmp_native_support

If the current value is "on", DMP native support is enabled on this machine.

      # vxdmpadm native list vgname=rootvg

If the output lists hdisks, root support is enabled on this machine.

If disabled, go to step 3.
If enabled, go to step 2.

2. If root support or DMP native support is enabled:

        a. It is essential to disable DMP native support.
        Run the following command to disable DMP native support as well as root support:
              # vxdmpadm settune dmp_native_support=off

        b. If only root support is enabled, run the following command to disable root support:
              # vxdmpadm native disable vgname=rootvg

        c. Reboot the system
              # reboot

3.
   a. Before backing out the patch, stop the VEA Server's vxsvc process:
              # /opt/VRTSob/bin/vxsvcctrl stop

    b. To reject the patch if it is in the "APPLIED" state, use the following command, and then re-enable DMP native support or root support if it was previously enabled:
              # installp -r VRTSvxvm 6.0.100.200

    c. Reboot the system:
              # reboot
[/METHOD]


SPECIAL INSTRUCTIONS
--------------------
NONE