This page lists publicly released patches for Veritas Enterprise Products.
For the product GA build, see the Veritas Entitlement Management System (VEMS) by clicking the 'Licensing' option on Veritas Support.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vm-rhel6_x86_64-6.0.3.100
Obsolete
The latest patch(es): sfha-rhel6_x86_64-6.0.5

 Basic information
Release type: Patch
Release date: 2013-09-13
OS update support: None
Technote: None
Documentation: None
Popularity: 2838 viewed    291 downloaded
Download size: 21.13 MB
Checksum: 2121645564

 Applies to one or more of the following products:
VirtualStore 6.0.1 On RHEL6 x86-64
Dynamic Multi-Pathing 6.0.1 On RHEL6 x86-64
Storage Foundation 6.0.1 On RHEL6 x86-64
Storage Foundation Cluster File System 6.0.1 On RHEL6 x86-64
Storage Foundation for Oracle RAC 6.0.1 On RHEL6 x86-64
Storage Foundation HA 6.0.1 On RHEL6 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:
sfha-rhel6_x86_64-6.0.5 (release date: 2014-04-15)
vm-rhel6_x86_64-6.0.3.200 (obsolete; release date: 2013-12-18)

This patch supersedes the following patches:
vm-rhel6_x86_64-6.0.1.200 (obsolete; release date: 2012-10-10)

This patch requires:
sfha-rhel6_x86_64-6.0.3 (obsolete; release date: 2013-02-01)

 Fixes the following incidents:
2853712, 2860207, 2863672, 2876865, 2892499, 2892590, 2892621, 2892643, 2892650, 2892660, 2892682, 2892684, 2892689, 2892698, 2892702, 2892716, 2922798, 2924117, 2924188, 2924207, 2933468, 2933469, 2934259, 2940447, 2941193, 2941226, 2941234, 2941237, 2941252, 2942166, 2942259, 2942336, 2944708, 2944710, 2944714, 2944717, 2944722, 2944724, 2944725, 2944727, 2944729, 2944741, 2962257, 2965542, 2973659, 2974870, 2976946, 2978189, 2979767, 2983679, 2988017, 2988018, 3004823, 3004852, 3005921, 3006262, 3011391, 3011444, 3020087, 3025973, 3026288, 3027482, 3090670, 3140411, 3150893, 3156719, 3159096, 3209160, 3210759, 3254132, 3254227, 3254229, 3254427, 3280555, 3283644, 3283668, 3294641, 3294642

 Patch ID:
VRTSvxvm-6.0.300.100-RHEL6

 Readme file
                          * * * READ ME * * *
                * * * Veritas Volume Manager 6.0.3 * * *
                      * * * Public Hot Fix 1 * * *
                         Patch Date: 2013-09-13


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 6.0.3 Public Hot Fix 1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL6 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation Cluster File System 6.0.1
   * Veritas Storage Foundation 6.0.1
   * Veritas Storage Foundation High Availability 6.0.1
   * Veritas Dynamic Multi-Pathing 6.0.1
   * Symantec VirtualStore 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.0.300.100
* 2892702 (2567618) The VRTSexplorer dumps core in vxcheckhbaapi/print_target_map_entry.
* 3090670 (3090667) The system panics or hangs while executing the
"vxdisk -o thin,fssize list" command as part of Veritas Operations Manager
(VOM) Storage Foundation (SF) discovery.
* 3140411 (2959325) The vxconfigd(1M) daemon dumps core while performing the disk group move 
operation.
* 3150893 (3119102) Support LDOM Live Migration with fencing enabled
* 3156719 (2857044) System crash on voldco_getalloffset when trying to resize filesystem.
* 3159096 (3146715) 'Rlinks' do not connect with NAT configurations on Little Endian Architecture.
* 3209160 (2750782) The Veritas Volume Manager (VxVM) upgrade process fails
because it incorrectly assumes that the root disk is encapsulated.
* 3210759 (3177758) Performance degradation is seen with upgrade from 5.1SP1RP3 to 6.0 on Linux
* 3254132 (3186971) The LVM (Logical Volume Manager) configuration file is not set correctly after turning on DMP native support, which leaves the system unbootable.
* 3254227 (3182350) VxVM volume creation or size increase hangs
* 3254229 (3063378) VM commands are slow when Read Only disks are presented
* 3254427 (3182175) The "vxdisk -o thin,fssize list" command can report incorrect File System usage data
* 3280555 (2959733) Handle device path reconfiguration in case the device paths are moved across LUNs or enclosures, to prevent a vxconfigd core dump.
* 3283644 (2945658) If the Disk label is modified for an Active/Passive LUN, then
the current passive paths don't reflect this modification after a failover.
* 3283668 (3250096) /dev/raw/raw# devices vanish on reboot when selinux enabled.
* 3294641 (3107741) vxrvg snapdestroy fails with "Transaction aborted waiting for io drain" error and vxconfigd hangs for around 45 minutes
* 3294642 (3019684) IO hang on master while SRL is about to overflow
Patch ID: 6.0.300.0
* 2853712 (2815517) vxdg adddisk allows mixing of clone & non-clone disks in a DiskGroup.
* 2863672 (2834046) NFS migration failed due to device reminoring.
* 2892590 (2779580) Secondary node gives configuration error (no Primary RVG) after reboot of master node on Primary site.
* 2892682 (2837717) "vxdisk(1M) resize" command fails if 'da name' is specified.
* 2892684 (1859018) "link detached from volume" warnings are displayed when a linked-breakoff snapshot is created
* 2892698 (2851085) DMP doesn't detect implicit LUN ownership changes for some of the dmpnodes
* 2892716 (2753954) When a cable is disconnected from one port of a dual-port FC 
HBA, the paths via another port are marked as SUSPECT PATH.
* 2940447 (2940446) Full fsck hangs on I/O in VxVM when cache object size is very large
* 2941193 (1982965) vxdg import fails if da-name is based on naming scheme which is different from the prevailing naming scheme on the host
* 2941226 (2915063) Rebooting VIS array having mirror volumes, master node panicked and other nodes CVM FAULTED
* 2941234 (2899173) vxconfigd hang after executing command "vradmin stoprep"
* 2941237 (2919318) The I/O fencing key value of data disk are different and abnormal in a VCS cluster with I/O fencing.
* 2941252 (1973983) vxunreloc fails when dco plex is in DISABLED state
* 2942259 (2839059) vxconfigd logged warning "cannot open /dev/vx/rdmp/cciss/c0d device to check for ASM disk format".
* 2942336 (1765916) VxVM socket files don't have proper write protection
* 2944708 (1725593) The 'vxdmpadm listctlr' command has to be enhanced to print the count of device
paths seen through the controller.
* 2944710 (2744004) vxconfigd is hung on the VVR secondary node during VVR configuration.
* 2944714 (2833498) vxconfigd hangs while reclaim operation is in progress on volumes having instant snapshots
* 2944717 (2851403) System panic is seen while unloading "vxio" module. This happens whenever VxVM uses SmartMove feature and the "vxportal" module gets reloaded (for e.g. during VxFS package upgrade)
* 2944722 (2869594) Master node panics due to corruption if space optimized snapshots are refreshed and 'vxclustadm setmaster' is used to select master.
* 2944724 (2892983) vxvol dumps core if new links are added while the operation is in progress.
* 2944725 (2910043) Avoid order 8 allocation by vxconfigd in node reconfig.
* 2944727 (2919720) vxconfigd core in rec_lock1_5()
* 2944729 (2933138) panic in voldco_update_itemq_chunk() due to accessing invalid buffer
* 2944741 (2866059) Improving error messages hit during vxdisk resize operation
* 2962257 (2898547) vradmind on VVR Secondary Site dumps core, when Logowner Service Group on VVR 
(Veritas Volume Replicator) Primary Site is shuffled across its CVM (Clustered 
Volume Manager) nodes.
* 2965542 (2928764) SCSI3 PGR registrations fail when dmp_fast_recovery is disabled.
* 2973659 (2943637) DMP IO statistic thread may cause out of memory issue so that OOM(Out Of 
Memory) killer is invoked and causes system panic.
* 2974870 (2935771) In VVR environment, RLINK disconnects after master switch.
* 2976946 (2919714) On a THIN lun, vxevac returns 0 without migrating unmounted VxFS volumes.
* 2978189 (2948172) Executing the "vxdisk -o thin,fssize list" command can result in panic.
* 2979767 (2798673) System panics in voldco_alloc_layout() while creating volume with instant DCO
* 2983679 (2970368) Enhance handling of SRDF-R2 Write-Disabled devices in DMP.
* 2988017 (2971746) For a single-path device, the bdget() function is called for each I/O, which causes high CPU usage and leads to I/O performance degradation.
* 2988018 (2964169) In multiple CPUs environment, I/O performance degradation is seen when I/O is done through VxFS and VxVM specific private interface.
* 3004823 (2692012) vxevac move error message needs to be enhanced to be less generic and give clear message for failure.
* 3004852 (2886333) vxdg join command should not allow mixing clone & non-clone disks in a DiskGroup
* 3005921 (1901838) Incorrect setting of Nolicense flag can lead to dmp database inconsistency.
* 3006262 (2715129) Vxconfigd hangs during Master takeover in a CVM (Clustered Volume Manager) 
environment.
* 3011391 (2965910) Volume creation with vxassist using "-o ordered alloc=<disk-class>" dumps core.
* 3011444 (2398416) vxassist dumps core while creating volume after adding attribute "wantmirror=ctlr" in default vxassist rulefile
* 3020087 (2619600) Live migration of virtual machine having SFHA/SFCFSHA stack with data disks fencing enabled, causes service groups configured on virtual machine to fault.
* 3025973 (3002770) Accessing NULL pointer in dmp_aa_recv_inquiry() caused system panic.
* 3026288 (2962262) Uninstall of dmp fails in presence of other multipathing solutions
* 3027482 (2273190) Incorrect setting of UNDISCOVERED flag can lead to database inconsistency
Patch ID: 6.0.100.200
* 2860207 (2859470) An EMC SRDF (Symmetrix Remote Data Facility) R2 disk with 
an EFI label is not recognized by VxVM (Veritas Volume Manager) and is shown 
in the error state.
* 2876865 (2510928) The extended attributes reported by "vxdisk -e list" for the EMC SRDF luns are 
reported as "tdev mirror", instead of "tdev srdf-r1".
* 2892499 (2149922) Record the diskgroup import and deport events in syslog
* 2892621 (1903700) Removing mirror using vxassist does not work.
* 2892643 (2801962) Growing a volume takes significantly large time when the volume has version 20 
DCO attached to it.
* 2892650 (2826125) VxVM script daemon is terminated abnormally on its invocation.
* 2892660 (2000585) vxrecover doesn't start remaining volumes if one of the volumes is removed
during vxrecover command run.
* 2892689 (2836798) In VxVM, resizing simple EFI disk fails and causes system panic/hang.
* 2892702 (2567618) VRTSexplorer coredumps in checkhbaapi/print_target_map_entry.
* 2922798 (2878876) vxconfigd dumps core in vol_cbr_dolog() due to race between two threads processing requests from the same client.
* 2924117 (2911040) Restore from a cascaded snapshot leaves the volume in
unusable state if any cascaded snapshot is in detached state.
* 2924188 (2858853) After master switch, vxconfigd dumps core on old master.
* 2924207 (2886402) When re-configuring devices, vxconfigd hang is observed.
* 2933468 (2916094) Enhancements have been made to the Dynamic Reconfiguration Tool(DR Tool) to 
create a separate log file every time DR Tool is started, display a message if 
a command takes longer time, and not to list the devices controlled by TPD 
(Third Party Driver) in 'Remove Luns' option of DR Tool.
* 2933469 (2919627) Dynamic Reconfiguration tool should be enhanced to remove LUNs feasibly in bulk.
* 2934259 (2930569) The LUNs in 'error' state in output of 'vxdisk list' cannot be removed through
DR(Dynamic Reconfiguration) Tool.
* 2942166 (2942609) The message displayed when the user quits from Dynamic
Reconfiguration operations is shown as an error message.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: 6.0.300.100

* 2892702 (Tracking ID: 2567618)

SYMPTOM:
VRTSexplorer dumps core with a segmentation fault in 
checkhbaapi/print_target_map_entry. The stack trace is observed as follows:
 print_target_map_entry()
check_hbaapi()
main()
_start()

DESCRIPTION:
The checkhbaapi utility uses the HBA_GetFcpTargetMapping() API which returns 
the current set of mappings between the OS and the Fiber Channel Protocol (FCP) 
devices for a given Host Bus Adapter (HBA) port. The maximum limit for mappings 
is set to 512 and only that much memory is allocated. When the number of 
mappings returned is greater than 512, the function that prints this 
information tries to access the entries beyond that limit, which results in 
core dumps.

RESOLUTION:
The code is modified to allocate enough memory for all the mappings returned by 
the HBA_GetFcpTargetMapping() API.
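
The fix pattern here is generic: size the allocation from the count the API
reports instead of a hard-coded cap. A minimal sketch in C, using a made-up
'struct mapping' and 'copy_mappings()' helper rather than the real
HBA_GetFcpTargetMapping() structures:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for one FCP target-mapping entry. */
struct mapping { int target_id; };

/* Buggy pattern (for contrast): a fixed cap such as
 *     struct mapping buf[512];
 * overruns when the API reports more than 512 entries.
 *
 * Fixed pattern: allocate exactly 'count' entries, so the caller can
 * safely walk every mapping the API returned. */
struct mapping *copy_mappings(const struct mapping *src, size_t count)
{
    struct mapping *dst = malloc(count * sizeof(*dst));
    if (dst == NULL)
        return NULL;
    memcpy(dst, src, count * sizeof(*dst));
    return dst;
}
```

Walking, say, 600 entries through such a copy stays within bounds, whereas a
fixed 512-entry buffer would not.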

* 3090670 (Tracking ID: 3090667)

SYMPTOM:
The "vxdisk -o thin,fssize list" command can cause the system to hang or panic 
due to kernel memory corruption. This command is also issued internally by 
Veritas Operations Manager (VOM) during Storage Foundation (SF) discovery. 
The following stack trace is observed:
 panic string:   kernel heap corruption detected
 vol_objioctl
vol_object_ioctl
voliod_ioctl - frame recycled
volsioctl_real

DESCRIPTION:
Veritas Volume Manager (VxVM) allocates data structures and invokes thin 
Logical Unit Numbers (LUNs) specific function handlers, to determine the disk 
space that is actively used by the file system. One of the function handlers 
wrongly accesses the system memory beyond the allocated data structure, which 
results in the kernel memory corruption.

RESOLUTION:
The code is modified so that the problematic function handler accesses only the 
allocated memory.

* 3140411 (Tracking ID: 2959325)

SYMPTOM:
The vxconfigd(1M) daemon dumps core while performing the disk group move 
operation with the following stack trace:
 dg_trans_start ()
 dg_configure_size ()
 config_enable_copy ()
 da_enable_copy ()
 ncopy_set_disk ()
 ncopy_set_group ()
 ncopy_policy_some ()
 ncopy_set_copies ()
 dg_balance_copies_helper ()
 dg_transfer_copies ()
 in vold_dm_dis_da ()
 in dg_move_complete ()
 in req_dg_move ()
 in request_loop ()
 in main ()

DESCRIPTION:
The core dump occurs when the disk group move operation tries to reduce the 
size of the configuration records in the disk group, when the size is large and 
the disk group move operation needs more space for the new configuration-record 
entries. Since the reduction of the size of the configuration records 
(compaction) and the configuration change by the disk group move operation 
cannot co-exist, this results in the core dump.

RESOLUTION:
The code is modified to perform the compaction before the configuration change 
by the disk group move operation.

* 3150893 (Tracking ID: 3119102)

SYMPTOM:
Live migration of virtual machine having Storage Foundation stack with data
disks fencing enabled, causes service groups configured on virtual machine to 
fault.

DESCRIPTION:
After live migration of a virtual machine running the Storage Foundation stack
with data-disk fencing enabled, I/O fails on the shared SAN devices with a
reservation conflict and causes service groups to fault.

Live migration changes the SCSI initiator. Hence, I/O coming from the migrated
server to the shared SAN storage fails with a reservation conflict.

RESOLUTION:
Code changes are added to check whether the host is fenced off from the
cluster. If the host is not fenced off, the registration key is re-registered
for the dmpnode through the migrated server and I/O is restarted.

The administrator needs to manually invoke 'vxdmpadm pgrrereg' from the guest
after it has been live migrated.

* 3156719 (Tracking ID: 2857044)

SYMPTOM:
The system crashes with the following stack when resizing a volume with DCO version 30.

PID: 43437  TASK: ffff88402a70aae0  CPU: 17  COMMAND: "vxconfigd"
 #0 [ffff884055a47600] machine_kexec at ffffffff8103284b
 #1 [ffff884055a47660] crash_kexec at ffffffff810ba972
 #2 [ffff884055a47730] oops_end at ffffffff81501860
 #3 [ffff884055a47760] no_context at ffffffff81043bfb
 #4 [ffff884055a477b0] __bad_area_nosemaphore at ffffffff81043e85
 #5 [ffff884055a47800] bad_area at ffffffff81043fae
 #6 [ffff884055a47830] __do_page_fault at ffffffff81044760
 #7 [ffff884055a47950] do_page_fault at ffffffff8150383e
 #8 [ffff884055a47980] page_fault at ffffffff81500bf5
    [exception RIP: voldco_getalloffset+38]
    RIP: ffffffffa0bcc436  RSP: ffff884055a47a38  RFLAGS: 00010046
    RAX: 0000000000000001  RBX: ffff883032f9eac0  RCX: 000000000000000f
    RDX: ffff88205613d940  RSI: ffff8830392230c0  RDI: ffff883fd1f55800
    RBP: ffff884055a47a38   R8: 0000000000000000   R9: 0000000000000000
    R10: 000000000000000e  R11: 000000000000000d  R12: ffff882020e80cc0
    R13: 0000000000000001  R14: ffff883fd1f55800  R15: ffff883fd1f559e8
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#9 [ffff884055a47a40] voldco_get_map_extents at ffffffffa0bd09ab [vxio]
#10 [ffff884055a47a90] voldco_update_extents_info at ffffffffa0bd8494 [vxio]
#11 [ffff884055a47ab0] voldco_instant_resize_30 at ffffffffa0bd8758 [vxio]
#12 [ffff884055a47ba0] volfmr_instant_resize at ffffffffa0c03855 [vxio]
#13 [ffff884055a47bb0] voldco_process_instant_op at ffffffffa0bcae2f [vxio]
#14 [ffff884055a47c30] volfmr_process_instant_op at ffffffffa0c03a74 [vxio]
#15 [ffff884055a47c40] vol_mv_precommit at ffffffffa0c1ad02 [vxio]
#16 [ffff884055a47c90] vol_commit_iolock_objects at ffffffffa0c1244f [vxio]
#17 [ffff884055a47cf0] vol_ktrans_commit at ffffffffa0c131ce [vxio]
#18 [ffff884055a47d70] volconfig_ioctl at ffffffffa0c8451f [vxio]
#19 [ffff884055a47db0] volsioctl_real at ffffffffa0c8c9b8 [vxio]
#20 [ffff884055a47e90] vols_ioctl at ffffffffa0040126 [vxspec]
#21 [ffff884055a47eb0] vols_compat_ioctl at ffffffffa004034d [vxspec]
#22 [ffff884055a47ee0] compat_sys_ioctl at ffffffff811ce0ed
#23 [ffff884055a47f80] sysenter_dispatch at ffffffff8104a880

DESCRIPTION:
While updating DCO TOC (Table Of Contents) entries into the in-core TOC, a TOC 
entry is wrongly freed and zeroed out. As a result, traversing the TOC entries 
leads to a NULL pointer dereference, causing the panic.

RESOLUTION:
Code changes have been made to appropriately update the TOC entries.

* 3159096 (Tracking ID: 3146715)

SYMPTOM:
The 'rlinks' do not connect with the Network Address Translation (NAT) 
configurations on Little Endian Architecture (LEA).

DESCRIPTION:
On LEAs, the Internet Protocol (IP) address configured with the NAT mechanism 
is not converted from the host-byte order to the network-byte order. As a 
result, the address used for the rlink connection mechanism gets distorted and 
the 'rlinks' fail to connect.

RESOLUTION:
The code is modified to convert the IP address to the network-byte order before 
it is used.
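
The conversion the fix adds corresponds to the standard htonl() call. An
illustrative sketch (the function name 'addr_to_wire' is ours, not VxVM's):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>

/* Convert an IPv4 address from host byte order to network byte order
 * before using it on the wire.  On big-endian hosts this is a no-op;
 * on little-endian hosts it swaps the bytes, which is exactly the
 * step that was missing. */
uint32_t addr_to_wire(uint32_t host_order_addr)
{
    return htonl(host_order_addr);
}
```

ntohl() undoes the conversion on either endianness, so round-tripping
192.168.0.1 (0xC0A80001) yields the original value.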

* 3209160 (Tracking ID: 2750782)

SYMPTOM:
The Veritas Volume Manager (VxVM) upgrade process fails because it
incorrectly assumes that the root disk is encapsulated. The VxVM upgrade package
fails with the following error:
"Please un-encapsulate the root disk and then upgrade.
error: %pre(VRTSvxvm) scriptlet failed, exit status 1
error:   install: %pre scriptlet failed (2), skipping VRTSvxvm"

DESCRIPTION:
The VxVM upgrade process fails when the root disk under the Logical
Volume Manager (LVM) has a customized name as rootvg-rootvol for the root
volume. The customized name causes the VxVM's preinstall script to incorrectly
identify the root disk as encapsulated. The script also assumes that VxVM
controls the encapsulated disk.

RESOLUTION:
The code is modified such that VxVM now handles all the customized
LVM names.

* 3210759 (Tracking ID: 3177758)

SYMPTOM:
Performance degradation is seen after upgrade from SF 5.1SP1RP3 to SF 6.0.1 on 
Linux

DESCRIPTION:
The degradation in performance is seen because the I/Os are not unplugged 
before being delivered to the lower layers in the I/O path. The OS unplugs 
these I/Os after a default interval of 3 milliseconds, which adds overhead to 
I/O completion.

RESOLUTION:
Code changes are made to explicitly unplug the I/Os before sending them to the 
lower layers.

* 3254132 (Tracking ID: 3186971)

SYMPTOM:
The system becomes unbootable after turning on DMP native support if all root 
file systems except /boot are under LVM.

DESCRIPTION:
Because the filter in the LVM configuration file is set incorrectly by the DMP 
native support function, the LVM root VG (Volume Group) cannot find its 
underlying PVs (Physical Volumes) during boot, which makes the system 
unbootable.

RESOLUTION:
Changes have been made to set the filter in the LVM configuration file correctly.

* 3254227 (Tracking ID: 3182350)

SYMPTOM:
If there are more than 8192 paths in the system, the vxassist command hangs 
while creating a new VxVM volume or increasing an existing volume's size.

DESCRIPTION:
The vxassist command creates a hash table with a maximum of 8192 entries, so
any additional paths hash to buckets that are already occupied. In such a case,
the multiple paths that hash to the same bucket are linked in a chain.

In order to find a particular path in a specified bucket, the vxassist command 
needs to traverse the entire linked chain. However, vxassist only searches the
first element, and hangs.

RESOLUTION:
The code is modified to traverse the entire linked chain.
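
The corrected lookup can be pictured as an ordinary chained hash search; this
sketch uses our own simplified 'struct path', not vxassist's real data
structures:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NBUCKETS 8192

struct path {
    const char  *name;
    struct path *next;   /* paths whose hash lands in the same bucket */
};

/* The fix: walk the whole chain in the target bucket instead of
 * inspecting only the first element. */
struct path *lookup(struct path *table[NBUCKETS], unsigned hash,
                    const char *name)
{
    struct path *p;
    for (p = table[hash % NBUCKETS]; p != NULL; p = p->next)
        if (strcmp(p->name, name) == 0)
            return p;
    return NULL;   /* genuinely absent, rather than hanging */
}
```

With more than 8192 paths, two distinct hash values (for example 5 and 8197)
land in the same bucket, so the second entry is only found by traversing the
chain.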

* 3254229 (Tracking ID: 3063378)

SYMPTOM:
Some VxVM (Volume Manager) commands run slowly when "read only" devices (e.g. 
EMC SRDF-WD, BCV-NR) are presented and managed by EMC PowerPath.

DESCRIPTION:
When an I/O write is performed on a "read only" device, the I/O fails and is 
retried if it is on a TPD (Third Party Driver) device whose path status is 
okay. Owing to the retries, the I/O does not return until the timeout is 
reached, which gives the perception that VxVM commands run slowly.

RESOLUTION:
Code changes have been made to return the I/O immediately with a disk media 
failure if the I/O fails on a TPD device whose path status is okay.

* 3254427 (Tracking ID: 3182175)

SYMPTOM:
The "vxdisk -o thin,fssize list" command can report incorrect File System 
usage data.

DESCRIPTION:
An integer overflow in an internal calculation can cause this command to report 
incorrect per-disk FS usage.

RESOLUTION:
Code changes are made so that the command reports the correct File System 
usage data.
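
Overflows of this kind typically come from doing a blocks-times-block-size
multiplication in 32-bit arithmetic; widening before the multiply is the usual
fix. A self-contained sketch (the function names and field widths are
illustrative, not VxVM's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* Buggy form: the product wraps modulo 2^32 for file systems larger
 * than 4 GiB, so the reported usage is far too small. */
uint32_t usage_bytes_narrow(uint32_t blocks, uint32_t block_size)
{
    return blocks * block_size;
}

/* Fixed form: widen one operand first so the multiply happens in
 * 64 bits and is exact for any 32-bit inputs. */
uint64_t usage_bytes_wide(uint32_t blocks, uint32_t block_size)
{
    return (uint64_t)blocks * block_size;
}
```

For example, 5242880 blocks of 1024 bytes is 5 GiB: the narrow version wraps
to 1073741824 (1 GiB), while the wide version reports 5368709120 bytes.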

* 3280555 (Tracking ID: 2959733)

SYMPTOM:
When device paths are moved across LUNs or enclosures, vxconfigd daemon can dump
core or data corruption can occur due to internal data structure inconsistencies.

DESCRIPTION:
When the device path configuration is changed after a planned or unplanned
disconnection by moving only a subset of the device paths across LUNs or other
storage arrays (enclosures), DMP's internal data structures become
inconsistent, which can lead to the vxconfigd daemon dumping core and, in some
situations, to data corruption due to incorrect LUN-to-path mappings.

RESOLUTION:
To resolve this issue, the vxconfigd code is modified to detect such situations
gracefully and adjust the internal data structures accordingly, thus avoiding a
vxconfigd core dump and data corruption.

* 3283644 (Tracking ID: 2945658)

SYMPTOM:
If you modify the disk label for an Active/Passive LUN on Linux
platforms, the current passive paths don't reflect this modification after failover.

DESCRIPTION:
Whenever a failover happens for an Active/Passive LUN, the disk label
information is updated only on the active paths, so the passive paths continue
to hold stale label information.

RESOLUTION:
The BLKRRPART ioctl is issued on the passive paths during the
failover in order to update the disk label or partition information.

* 3283668 (Tracking ID: 3250096)

SYMPTOM:
When using DMP-ASM (dmp_native_support enabled) with SELinux 
enabled, '/dev/raw/raw#' devices vanish on reboot.

DESCRIPTION:
At the time of system boot with DMP-ASM (dmp_native_support enabled), DMP 
(Dynamic Multi-Pathing) creates dmp devices under '/dev/vx/dmp' before vxvm 
(vxconfigd) startup. At this point, if SELinux is disabled, the system remounts 
'/dev/vx' on 'tmpfs' and cleans up the information under '/dev/vx'. If SELinux 
is enabled, no such mount is done, so the information under '/dev/vx/' 
continues to exist. DMP does not bind the dmp devices to raw devices if the 
information already exists under '/dev/vx/' at the time of vxconfigd startup. 
As a result, no devices appear under '/dev/raw/' when SELinux is enabled.

RESOLUTION:
Code changes are done to bind the dmp devices to raw devices as per the records 
in '/etc/vx/.vxdmprawdev' during vxconfigd startup, irrespective of whether the 
dmp devices are present under '/dev/vx/dmp' or not.

* 3294641 (Tracking ID: 3107741)

SYMPTOM:
The "vxrvg snapdestroy" command fails with the error message "Transaction
aborted waiting for io drain", and a vxconfigd hang is observed. The vxconfigd
stack trace is:

vol_commit_iowait_objects
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
...

DESCRIPTION:
The SmartMove query of VxFS depends on some reads and writes. If a transaction
in VxVM blocks the new reads and writes, the API hangs waiting for the
response. This creates a deadlock-like situation: the SmartMove API waits for
the transaction to complete, while the transaction waits for the SmartMove API,
and hence the hang.

RESOLUTION:
Transactions are disallowed while the SmartMove API is in progress.

* 3294642 (Tracking ID: 3019684)

SYMPTOM:
An I/O hang is observed when the SRL is about to overflow after the logowner is
switched from the slave to the master. The stack trace looks like:

biowait
default_physio
volrdwr
fop_write
write
syscall_trap32

DESCRIPTION:
The sequence of steps is as follows: with the slave as the logowner, the SRL
overflows and is followed by a DCM resync. Then, after switching the logowner
back to the master, trying to overflow the SRL again manifests the I/O hang on
the master when the SRL is about to overflow. This happens because the master
has a stale flag set with an incorrect value related to the last SRL overflow.

RESOLUTION:
The stale flag is reset, and it is ensured that the flag is reset whether the
logowner is the master or the slave.

Patch ID: 6.0.300.0

* 2853712 (Tracking ID: 2815517)

SYMPTOM:
The vxdg adddisk command succeeds in adding a clone disk to a non-clone disk 
group and a non-clone disk to a clone disk group, resulting in a mixed disk group.

DESCRIPTION:
vxdg import fails for a disk group which has a mix of clone and non-clone 
disks, so vxdg adddisk should not allow the creation of a mixed disk group.

RESOLUTION:
The vxdg adddisk code is modified to return an error on an attempt to add a 
clone disk to a non-clone disk group, or a non-clone disk to a clone disk 
group, thus preventing the creation of a mixed disk group.

* 2863672 (Tracking ID: 2834046)

SYMPTOM:
VxVM dynamically reminors all the volumes during DG import if the DG base minor
numbers are not in the correct pool. This behaviour causes NFS clients to have
to re-mount all NFS file systems in an environment where CVM is used on the NFS
server side.

DESCRIPTION:
Starting from 5.1, the minor number space is divided into two pools, one for
private disk groups and another for shared disk groups. During DG import, the
DG base minor numbers are adjusted automatically if they are not in the correct
pool, as are the volumes in the disk groups. This behaviour reduces the
minor-number conflicts during DG import. But in an NFS environment, it makes
all file handles on the client side stale. Customers had to unmount file
systems and restart applications.

RESOLUTION:
A new tunable, "autoreminor", is introduced, with a default value of "on". Most
customers do not care about auto-reminoring and can leave it as it is. In
environments where autoreminoring is not desirable, customers can turn it off.
Another major change is that during DG import, VxVM does not change minor 
numbers as long as there are no minor conflicts, including the cases where the 
minor numbers are in the wrong pool.

* 2892590 (Tracking ID: 2779580)

SYMPTOM:
The Secondary node gives the configuration error 'no Primary RVG' when the 
primary master node (the default logowner) is rebooted and a slave becomes the 
new master.

DESCRIPTION:
After reboot of primary master, new master sends handshake request for vradmind
communication to secondary. As a part of handshake request, secondary deletes 
the old configuration including primary RVG. During this phase, secondary 
receives configuration update message from primary for old configuration. 
Secondary does not find old primary RVG configuration for processing this 
message. Hence, it cannot proceed with the pending handshake request and gives 
'no Primary RVG' configuration error.

RESOLUTION:
Code changes are done such that during handshake request phase, configuration 
messages of old primary RVG are discarded.

* 2892682 (Tracking ID: 2837717)

SYMPTOM:
"vxdisk(1M) resize" command fails if 'da name' is specified.

DESCRIPTION:
The scenario for 'da name' is not handled in the resize code path.

RESOLUTION:
The code is modified such that if a 'dm name' is not specified for the resize, 
the 'da name'-specific operation is performed.

* 2892684 (Tracking ID: 1859018)

SYMPTOM:
"Link <link-name> link detached from volume <volume-name>" warnings are 
displayed when a linked-breakoff snapshot is created.

DESCRIPTION:
The purpose of these messages is to let users and administrators know about the 
detach of a link due to I/O errors. These messages get displayed unnecessarily 
whenever a linked-breakoff snapshot is created.

RESOLUTION:
Code changes are made to display the messages only when a link is detached due 
to I/O errors on the volumes involved in the link relationship.

* 2892698 (Tracking ID: 2851085)

SYMPTOM:
DMP doesn't detect implicit LUN ownership changes

DESCRIPTION:
DMP does ownership monitoring for ALUA arrays to detect implicit LUN ownership
changes. This helps DMP to always use the Active/Optimized path for sending
down I/O. This feature is controlled using the dmp_monitor_ownership tunable
and is enabled by default.

In the case of a partial discovery triggered through the event source daemon
(vxesd), the ALUA information kept in the kernel data structure for ownership
monitoring was getting wiped. This caused ownership monitoring to stop working
for these dmpnodes.

RESOLUTION:
The source has been updated to handle this case.

* 2892716 (Tracking ID: 2753954)

SYMPTOM:
When a cable is disconnected from one port of a dual-port FC HBA, only the 
paths going through that port should be marked as SUSPECT. However, the paths 
going through the other port are also marked as SUSPECT.

DESCRIPTION:
Disconnection of a cable from an HBA port generates an FC event. 
When the event is generated, the paths of all ports of the corresponding HBA 
are marked as SUSPECT.

RESOLUTION:
The code changes are done to mark only the paths going through the 
port on which the FC event is generated.

* 2940447 (Tracking ID: 2940446)

SYMPTOM:
I/O can hang on a volume with a space-optimized snapshot if the underlying 
cache object is of very large size. This can also lead to data corruption in 
the cache object.

DESCRIPTION:
The cache volume maintains a B+ tree for mapping an offset to its actual 
location in the cache object. Copy-on-write I/O generated on snapshot volumes 
needs to determine the offset of a particular I/O in the cache object. Due to 
incorrect typecasting, the value calculated for a large offset overflows and 
truncates to a smaller value, leading to data corruption.

RESOLUTION:
Code changes are made to avoid overflow during offset calculation in the cache 
object.
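
The class of typecasting bug described above can be illustrated with a small
user-space sketch (the types and names are illustrative, not the actual VxVM
code). The product of two 32-bit values wraps before it is widened to 64 bits;
the fix is to widen an operand first:

```c
#include <stdint.h>

/* Buggy: the multiply happens in 32 bits and wraps for large offsets;
 * the cast to 64 bits comes too late. */
uint64_t cache_offset_buggy(uint32_t region, uint32_t region_size) {
    return (uint64_t)(region * region_size);
}

/* Fixed: widen an operand first so the multiply happens in 64 bits. */
uint64_t cache_offset_fixed(uint32_t region, uint32_t region_size) {
    return (uint64_t)region * region_size;
}
```

For a 64 KB region size, region number 5,000,000 corresponds to an offset of
327,680,000,000 bytes, which no longer fits in 32 bits; the buggy version
silently returns the wrapped value.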

* 2941193 (Tracking ID: 1982965)

SYMPTOM:
"vxdg import DGNAME <da-name..>" fails when the "da-name" given as input to the
vxdg command is based on a naming scheme different from the prevailing naming
scheme on the host. The error message seen is:

VxVM vxdg ERROR V-5-1-530 Device c6t50060E801002BC73d240 not found in 
configuration
VxVM vxdg ERROR V-5-1-10978 Disk group x86dg: import failed:
Not a disk access record

DESCRIPTION:
vxconfigd stores Disk Access (DA) records based on DMP names. If "vxdg" passes a
name other than the DMP name for the device, vxconfigd cannot map it to a DA
record. As vxconfigd cannot locate a DA record corresponding to the input name
passed from vxdg, it fails the import operation.

RESOLUTION:
The vxdg command now converts the input name to the DMP name before passing it
to vxconfigd for further processing.

* 2941226 (Tracking ID: 2915063)

SYMPTOM:
The system panics with the following stack while detaching a plex of a volume 
in a CVM environment.

vol_klog_findent()
vol_klog_detach()
vol_mvcvm_cdetsio_callback()
vol_klog_start()
voliod_iohandle()
voliod_loop()

DESCRIPTION:
During a plex-detach operation, VxVM searches in the kernel for the plex 
object to be detached. If a transaction is in progress on any disk group in 
the system, an incorrect plex object can sometimes be selected, which results 
in dereferencing an invalid address and panics the system.

RESOLUTION:
Code changes are made to ensure that the correct plex object is selected.

* 2941234 (Tracking ID: 2899173)

SYMPTOM:
In a CVR environment, an SRL failure may result in a vxconfigd hang, eventually
causing the 'vradmin stoprep' command to hang.

DESCRIPTION:
The 'vradmin stoprep' command hangs because vxconfigd waits indefinitely in a
transaction, which in turn waits for I/O completion on the SRL. An error
handler is generated to handle I/O failure on the SRL, but when a transaction
is in progress, this error is not handled properly, resulting in a transaction
hang.

RESOLUTION:
The fix ensures that when an SRL failure is encountered, the transaction 
itself handles the I/O error on the SRL.

* 2941237 (Tracking ID: 2919318)

SYMPTOM:
In a CVM environment with fencing enabled, wrong fencing keys are registered for
opaque disks during node join or dg import operations.

DESCRIPTION:
In the CVM node join and shared disk group import code paths, when opaque disk 
registration happens, the fencing keys in internal disk group records are not 
in sync with the actual keys generated. This causes wrong fencing keys to be 
registered for opaque disks. For the remaining disks, fencing key registration 
happens correctly.

RESOLUTION:
The fix copies the correctly generated key to the internal disk group record
for the current disk group import/node join scenario and uses it for disk
registration.

* 2941252 (Tracking ID: 1973983)

SYMPTOM:
Relocation fails with the following error when a DCO (data change 
object) plex is in the disabled state.
VxVM vxrelocd ERROR V-5-2-600 Failure recovering <DCO> in disk group <diskgroup>

DESCRIPTION:
When a mirror plex is added to a volume using "vxassist 
snapstart", the attached DCO plex can be in the DISABLED/DCOSNP state. While 
recovering such DCO plexes, if the enclosure is disabled, the plex can get 
into the DETACHED/DCOSNP state and relocation fails.

RESOLUTION:
Code changes are made to handle DCO plexes in the disabled state during 
relocation.

* 2942259 (Tracking ID: 2839059)

SYMPTOM:
When a system connected to HP Smart Array "CCISS" disks boots, it logs the
following warning messages on the console:

VxVM vxconfigd WARNING V-5-1-16737 cannot open /dev/vx/rdmp/cciss/c0d to check
for ASM disk format
VxVM vxconfigd WARNING V-5-1-16737 cannot open /dev/vx/rdmp/cciss/c0d to check
for ASM disk format
VxVM vxconfigd WARNING V-5-1-16737 cannot open /dev/vx/rdmp/cciss/c0d to check
for ASM disk format
VxVM vxconfigd WARNING V-5-1-16737 cannot open /dev/vx/rdmp/cciss/c0d to check
for ASM disk format

DESCRIPTION:
HP Smart Array CCISS disk names end with a digit, which is not parsed correctly.
The last digit of the disk name is truncated, leading to invalid disk names.
This leads to the warnings, as the file cannot be opened to check for the ASM
format due to an invalid device path.

RESOLUTION:
Code changes are made to handle the parsing correctly and get the valid device
names.
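
A minimal sketch of this kind of name-parsing bug (illustrative only, not the
actual VxVM parser): stripping a trailing digit to derive a base disk name
works for "sda1"-style names, but corrupts CCISS-style names, whose base names
legitimately end in a digit.

```c
#include <string.h>
#include <ctype.h>
#include <stddef.h>

/* Buggy: unconditionally strips a trailing digit, truncating
 * "cciss/c0d0" to the invalid name "cciss/c0d". */
void base_name_buggy(const char *dev, char *out, size_t n) {
    strncpy(out, dev, n - 1);
    out[n - 1] = '\0';
    size_t len = strlen(out);
    if (len > 0 && isdigit((unsigned char)out[len - 1]))
        out[len - 1] = '\0';
}

/* Fixed: recognize CCISS-style names, whose base names end in a digit,
 * and leave them intact. */
void base_name_fixed(const char *dev, char *out, size_t n) {
    strncpy(out, dev, n - 1);
    out[n - 1] = '\0';
    if (strncmp(out, "cciss/", 6) == 0)
        return;
    size_t len = strlen(out);
    if (len > 0 && isdigit((unsigned char)out[len - 1]))
        out[len - 1] = '\0';
}
```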

* 2942336 (Tracking ID: 1765916)

SYMPTOM:
VxVM socket files don't have write protection from other users.
The following files are writeable by all users:
srwxrwxrwx root root /etc/vx/vold_diag/socket
srwxrwxrwx root root /etc/vx/vold_inquiry/socket
srwxrwxrwx root root /etc/vx/vold_request/socket

DESCRIPTION:
These sockets are used by the admin/support commands to 
communicate with vxconfigd. They are created by vxconfigd during its startup.

RESOLUTION:
Proper write permissions are given to the VxVM socket files. Only 
the vold_inquiry/socket file remains writable by all, because it is used by 
many VxVM commands, such as vxprint, which are run by all users.
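
The underlying technique can be sketched in a few lines (a generic
illustration, not the vxconfigd source): force a restrictive umask around
bind() so the socket's filesystem node is created without group/other
permission bits.

```c
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

/* Creates a UNIX-domain socket whose node is accessible only by its
 * owner. Returns the socket fd, or -1 on error. */
int make_private_socket(const char *path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    snprintf(sa.sun_path, sizeof sa.sun_path, "%s", path);
    unlink(path);                      /* remove any stale node */
    mode_t old = umask(0077);          /* strip group/other permission bits */
    int rc = bind(fd, (struct sockaddr *)&sa, sizeof sa);
    umask(old);                        /* restore the caller's umask */
    if (rc < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```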

* 2944708 (Tracking ID: 1725593)

SYMPTOM:
The 'vxdmpadm listctlr' command does not show the count of device paths seen
through each controller.

DESCRIPTION:
The 'vxdmpadm listctlr' output currently does not show the number of device
paths seen through each controller. The CLI has been enhanced to provide this
information as an additional column at the end of each line of output.

RESOLUTION:
The number of paths under each controller is counted and the value is displayed
as the last column in the 'vxdmpadm listctlr' CLI output

* 2944710 (Tracking ID: 2744004)

SYMPTOM:
When VVR is configured, vxconfigd on the secondary hangs. Any vx 
commands issued during this time do not complete.

DESCRIPTION:
vxconfigd waits for I/Os to drain before allowing a 
configuration change command to proceed. The I/Os never drain completely, 
resulting in the hang: a deadlock prevents the pending I/Os from starting 
while vxconfigd keeps waiting for their completion.

RESOLUTION:
The code is changed so that this deadlock does not arise. The I/Os can 
start and complete properly, allowing vxconfigd to function correctly.

* 2944714 (Tracking ID: 2833498)

SYMPTOM:
The vxconfigd daemon hangs in vol_ktrans_commit() while a reclaim operation is 
in progress on volumes having instant snapshots. The stack trace is given below:

vol_ktrans_commit
volconfig_ioctl

DESCRIPTION:
Storage reclamation generates special I/Os (termed reclaim I/Os), which can be 
very large (>4 GB) and, unlike application I/Os, are not broken into smaller 
I/Os. Reclaim I/Os need to be tracked in snapshot maps if the volume has full 
snapshots configured. The mechanism to track reclaim I/Os is not capable of 
handling such large I/Os, causing the hang.

RESOLUTION:
Code changes are made to use an alternative mechanism in Volume Manager to 
track the reclaim I/Os.

* 2944717 (Tracking ID: 2851403)

SYMPTOM:
The system panics while unloading the 'vxio' module when the VxVM SmartMove
feature is used and the 'vxportal' module gets reloaded (e.g., during a VxFS
package upgrade). The stack trace looks like:

vxportalclose()
vxfs_close_portal()
vol_sr_unload()
vol_unload()

DESCRIPTION:
During a smart-move operation like plex attach, VxVM opens the 'vxportal' module
to read in-use file system maps information. This file descriptor gets closed
only when 'vxio' module is unloaded. If the 'vxportal' module is unloaded and
reloaded before 'vxio', the file descriptor with 'vxio' becomes invalid and
results in a panic.

RESOLUTION:
Code changes are made to close the file descriptor for 'vxportal' after reading
free/invalid file system map information. This ensures that stale file
descriptors don't get used for 'vxportal'.

* 2944722 (Tracking ID: 2869594)

SYMPTOM:
The master node panics with the following stack after a space-optimized
snapshot is refreshed or deleted and the master node is switched using
'vxclustadm setmaster':

volilock_rm_from_ils
vol_cvol_unilock
vol_cvol_bplus_walk
vol_cvol_rw_start
voliod_iohandle
voliod_loop
thread_start

In addition to this, all space optimized snapshots on the corresponding cache
object may be corrupted.

DESCRIPTION:
In CVM, the master node owns the responsibility of maintaining the cache object
indexing structure that provides the space-optimized functionality. When a space
optimized snapshot is refreshed or deleted, the indexing structure is rebuilt
in the background after the operation returns. When the master node is switched
using 'vxclustadm setmaster' before the index rebuild is complete, both the old
master and the new master rebuild the index in parallel, which results in
index corruption. Since the index is corrupted, the data stored on space 
optimized snapshots should not be trusted. I/Os issued on the corrupted index 
lead to the panic.

RESOLUTION:
When the master role is switched using 'vxclustadm setmaster', the index rebuild
on the old master node is safely aborted. Only the new master node is 
allowed to rebuild the index.

* 2944724 (Tracking ID: 2892983)

SYMPTOM:
The vxvol command dumps core with the following stack trace if executed 
in parallel with the vxsnap addmir command:

strcmp()
do_link_recovery
trans_resync_phase1()
vxvmutil_trans()
trans()
common_start_resync()
do_noderecover()
main()

DESCRIPTION:
If vxrecover is triggered during creation of a link between two volumes, the 
vxvol command may not have information about the newly created links. This 
leads to a NULL pointer dereference and a core dump.

RESOLUTION:
The code has been modified to check whether the link information is 
present with the vxvol command, and to fail the operation with an appropriate 
error message if it is not.

* 2944725 (Tracking ID: 2910043)

SYMPTOM:
Frequent swap-in/swap-out is seen due to higher-order memory requests.

DESCRIPTION:
VxVM operations such as plex attach and snapshot resync/reattach issue 
ATOMIC_COPY ioctls. The default I/O size for these operations is 1 MB, and 
VxVM allocates this memory from the operating system. Memory allocations of 
such a large size can result in swap-in/swap-out of pages and are not very 
efficient. In the presence of many such operations, the system may not work 
efficiently.

RESOLUTION:
VxVM has its own I/O memory management module, which allocates pages from 
the operating system and manages them efficiently. The ATOMIC_COPY code is 
modified to use VxVM's internal I/O memory pool instead of directly allocating 
memory from the operating system.
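
The idea behind such a pool can be sketched as follows (the sizes and names
are illustrative, not VxVM's internals): buffers are allocated once and
recycled, so a burst of large copies does not go to the OS allocator for
every request.

```c
#include <stdlib.h>

#define POOL_BUFS 4
#define BUF_SIZE  (1024 * 1024)        /* 1 MB, as in the ATOMIC_COPY case */

static void *pool[POOL_BUFS];
static int   pool_top = 0;

/* Returns a buffer from the pool, falling back to the OS when empty. */
void *iobuf_get(void) {
    if (pool_top > 0)
        return pool[--pool_top];
    return malloc(BUF_SIZE);
}

/* Recycles a buffer into the pool, or frees it when the pool is full. */
void iobuf_put(void *buf) {
    if (pool_top < POOL_BUFS)
        pool[pool_top++] = buf;
    else
        free(buf);
}
```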

* 2944727 (Tracking ID: 2919720)

SYMPTOM:
vxconfigd dumps core in rec_lock1_5() function.

rec_lock1_5()
rec_lock1()
rec_lock()
client_trans_start()
req_vol_trans()
request_loop()
main()

DESCRIPTION:
During any configuration change in VxVM, vxconfigd locks all objects involved 
in the operation to avoid unexpected modification. Some objects that do not 
belong to the context of the current transaction are not handled properly, 
which results in the core dump. This case is particularly seen during snapshot 
operations on cross-disk-group linked volume snapshots.

RESOLUTION:
Code changes are done to avoid locking of records which are not yet part of the 
committed VxVM configuration.

* 2944729 (Tracking ID: 2933138)

SYMPTOM:
System panics with stack trace given below:

voldco_update_itemq_chunk()
voldco_chunk_updatesio_start()
voliod_iohandle()
voliod_loop()

DESCRIPTION:
While tracking I/Os in snapshot maps, information is stored in in-memory 
pages. For large I/Os (such as reclaim I/Os), this information can span 
multiple pages. Sometimes the pages are not properly referenced in the map 
update for larger I/Os, which leads to a panic because of invalid page 
addresses.

RESOLUTION:
Code is modified to properly reference the pages during map updates for 
large I/Os.

* 2944741 (Tracking ID: 2866059)

SYMPTOM:
When a disk resize fails, the following messages can appear on the screen:
1. "VxVM vxdisk ERROR V-5-1-8643 Device <device-name>: resize failed: One or 
more subdisks do not fit in pub reg"
or
2. "VxVM vxdisk ERROR V-5-1-8643 Device <disk-name>: resize failed: Cannot 
remove last disk in disk group"

DESCRIPTION:
In first message extra information should be provided like which 
subdisk is under consideration and what are subdisk and public region lengths 
etc. After vxdisk resize fails with the second message, if -f(force) option is 
used, resize operation succeeds. This message can be improved by suggesting the 
user to use -f (force) option for resizing

RESOLUTION:
Code changes are made to improve the error messages.

* 2962257 (Tracking ID: 2898547)

SYMPTOM:
vradmind dumps core on VVR (Veritas Volume Replicator) Secondary site in a CVR 
(Clustered Volume Replicator) environment. 
Stack trace would look like:

__kernel_vsyscall
raise
abort
fmemopen
malloc_consolidate
delete
delete[]
IpmHandle::~IpmHandle
IpmHandle::events
main

DESCRIPTION:
When the Logowner Service Group is moved across nodes on the Primary site, it 
induces deletion of the IpmHandle of the old Logowner node as the IpmHandle of 
the new Logowner node gets created. During destruction of the IpmHandle 
object, the pointer '_cur_rbufp' is not set to NULL, which can lead to freeing 
memory that is already freed, causing 'vradmind' to dump core.

RESOLUTION:
The destructor of IpmHandle is modified to set the pointer to NULL after the 
memory is deleted.
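
The pattern behind the fix can be shown with a small C analogue (the structure
and field are illustrative stand-ins for the C++ IpmHandle and its
'_cur_rbufp' member): resetting the pointer after freeing makes a second
cleanup pass harmless, since free(NULL) is a no-op.

```c
#include <stdlib.h>
#include <stddef.h>

struct handle {
    char *cur_rbuf;                    /* stand-in for '_cur_rbufp' */
};

/* Frees the buffer and resets the pointer, so calling this twice
 * does not free the same memory twice. */
void handle_destroy(struct handle *h) {
    free(h->cur_rbuf);
    h->cur_rbuf = NULL;
}
```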

* 2965542 (Tracking ID: 2928764)

SYMPTOM:
If the tunable dmp_fast_recovery is set to off, PGR (Persistent Group
Reservation) key registration fails for all but the first path; that is, the
PGR key gets registered only for the first path.

Consider registering keys as follows:

# vxdmpadm settune dmp_fast_recovery=off

# vxdmpadm settune dmp_log_level=9

# vxdmppr read -t REG /dev/vx/rdmp/hitachi_r7000_00d9

Node: /dev/vx/rdmp/hitachi_r7000_00d9
    ASCII-KEY           HEX-VALUE
    -----------------------------

# vxdmppr register -s BPGR0000 /dev/vx/rdmp/hitachi_r7000_00d9

# vxdmppr read -t REG /dev/vx/rdmp/hitachi_r7000_00d9
    Node: /dev/vx/rdmp/hitachi_r7000_00d9
    ASCII-KEY           HEX-VALUE
    -----------------------------
    BPGR0000            0x4250475230303030

This being a multipathed disk, only the first path gets the PGR key registered 
through it.


Log messages similar to the following are seen:

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-0-0 SCSI error opcode=0x5f
returned rq_status=0x12 cdb_status=0x1 key=0x6 asc=0x2a ascq=0x3 on path 8/0x90
Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x0 rq_status = 0x8
Sep  6 11:29:41 clabcctlx04 kernel: sd 4:0:0:4: reservation conflict

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x11 rq_status = 0x17

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-0-0 SCSI error opcode=0x5f
returned rq_status=0x17 cdb_status=0x0 key=0x0 asc=0x0 ascq=0x0 on path 8/0xb0

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_pr_send_cmd failed
with transport error: uscsi_rqstatus = 23ret = -1 status = 0 on dev 8/0xb0


Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x0 rq_status = 0x8

DESCRIPTION:
After the key for the first path is registered successfully, the second path
gets a reservation conflict, which is expected. However, in synchronous mode,
i.e. when dmp_fast_recovery is off, the proper reservation flag is not set,
due to which the registration command fails with a transport error and the
PGR keys on the other paths do not get registered. In asynchronous mode the
flag is set correctly, so the issue is not seen there.

RESOLUTION:
The proper reservation flag is now set so that the key can be registered for
the other paths as well.

* 2973659 (Tracking ID: 2943637)

SYMPTOM:
The system panics while expanding the DMP I/O statistics queue. The 
following stack can be observed in syslog before the panic:

oom_kill_process
select_bad_process
out_of_memory
__alloc_pages_nodemask
alloc_pages_current
__vmalloc_area_node
dmp_alloc
__vmalloc_node
dmp_alloc
vmalloc_32
dmp_alloc
dmp_zalloc
dmp_iostatq_add
dmp_iostatq_op
dmp_process_stats
dmp_daemons_loop

DESCRIPTION:
While expanding the DMP I/O statistics queue, memory is allocated in a 
sleeping/blocking way. When the Linux kernel cannot satisfy the memory 
allocation request, for example when the system is under high load and the 
per-CPU memory chunks are large because of a high CPU count, it invokes the 
OOM killer to kill other processes/threads to free more memory, which may 
cause a system panic.

RESOLUTION:
Code changes are made to allocate memory in a non-sleeping way while expanding 
the DMP I/O statistics queue, so the allocation fails quickly if the system 
cannot satisfy the request instead of invoking the OOM killer.

* 2974870 (Tracking ID: 2935771)

SYMPTOM:
Rlinks disconnect after switching the master.

DESCRIPTION:
Sometimes switching the master on the primary can cause the Rlinks to
disconnect: 'vradmin repstatus' shows "paused due to network disconnection" as
the replication status. VVR uses a connection to check whether the secondary is
alive; the secondary responds to these requests, indicating that it is alive. 
On a master switch, the old master fails to close this connection with 
the secondary. Thus, after the master switch, the old master as well as the 
new master sends requests to the secondary. This causes a mismatch of 
connection numbers on the secondary, and the secondary does not reply to the 
requests of the new master. This causes the Rlinks to disconnect.

RESOLUTION:
The solution is to close the connection of the old master with the secondary, so
that it does not keep sending connection requests to the secondary.

* 2976946 (Tracking ID: 2919714)

SYMPTOM:
On a thin LUN, vxevac returns 0 without migrating unmounted VxFS volumes. The
following error messages are displayed when an unmounted VxFS volume is
processed:

 VxVM vxsd ERROR V-5-1-14671 Volume v2 is configured on THIN luns and not mounted.
Use 'force' option, to bypass smartmove. To take advantage of smartmove for
supporting thin luns, retry this operation after mounting the volume.
 VxVM vxsd ERROR V-5-1-407 Attempting to cleanup after failure ...

DESCRIPTION:
On a thin LUN, VxVM does not move or copy data on an unmounted VxFS volume
unless smartmove is bypassed. The vxevac command needs to be enhanced to detect
unmounted VxFS volumes on thin LUNs and to support a force option that allows
the user to bypass smartmove.

RESOLUTION:
The vxevac script has been modified to check for unmounted VxFS volumes on thin
LUNs prior to performing the migration. If an unmounted VxFS volume is detected,
the command fails with a non-zero return code and displays a message notifying
the user to mount the volumes or bypass smartmove by specifying the force option: 

 VxVM vxevac ERROR V-5-2-0 The following VxFS volume(s) are configured
 on THIN luns and not mounted:

         v2

 To take advantage of smartmove support on thin luns, retry this operation
 after mounting the volume(s).  Otherwise, bypass smartmove by specifying
 the '-f' force option.

* 2978189 (Tracking ID: 2948172)

SYMPTOM:
Execution of the command "vxdisk -o thin,fssize list" can cause a hang or panic.

Hang stack trace might look like:
pse_block_thread
pse_sleep_thread
.hkey_legacy_gate
volsiowait
vol_objioctl
vol_object_ioctl
voliod_ioctl
volsioctl_real
volsioctl

Panic stack trace might look like:
voldco_breakup_write_extents
volfmr_breakup_extents
vol_mv_indirect_write_start
volkcontext_process
volsiowait
vol_objioctl
vol_object_ioctl
voliod_ioctl
volsioctl_real
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
sysenter_dispatch

DESCRIPTION:
The command "vxdisk -o thin,fssize list" triggers reclaim I/Os to get file
system usage from Veritas File System on Veritas Volume Manager mounted
volumes. Reclamation is currently not supported on volumes with space-optimized
(SO) snapshots, but because of a bug, reclaim I/Os continue to execute for
volumes with SO snapshots, leading to a system panic/hang.

RESOLUTION:
Code changes are made to prevent reclamation I/Os from proceeding on volumes
with SO snapshots.

* 2979767 (Tracking ID: 2798673)

SYMPTOM:
A system panic is observed with the stack trace given below:

voldco_alloc_layout
voldco_toc_updatesio_done
voliod_iohandle
voliod_loop

DESCRIPTION:
A DCO (data change object) contains the metadata required to start the DCO 
volume and decode further information from it. This information is stored in 
the first block of the DCO volume. If this metadata is incorrect or corrupted, 
further processing of the volume start results in a panic due to a 
divide-by-zero error in the kernel.

RESOLUTION:
Code changes are made to verify the correctness of the DCO volume's 
metadata during startup. If the information read is incorrect, the 
volume start operation fails.
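
The defensive check can be sketched like this (the structure and constants are
hypothetical, not the VxVM on-disk format): metadata read from disk is
validated before it is used as a divisor, so corruption produces a clean
failure instead of a divide-by-zero in the kernel.

```c
#include <stdint.h>

struct dco_meta {
    uint32_t region_size;              /* must be non-zero to be usable */
};

/* Returns the number of regions covering vol_len bytes, or -1 when the
 * metadata is unusable (previously: divide-by-zero). */
long regions_for(const struct dco_meta *m, uint64_t vol_len) {
    if (m->region_size == 0)
        return -1;
    return (long)((vol_len + m->region_size - 1) / m->region_size);
}
```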

* 2983679 (Tracking ID: 2970368)

SYMPTOM:
SRDF-R2 WD (write-disabled) devices are shown in the error state, and many path
enable/disable messages are generated in the /etc/vx/dmpevents.log file.

DESCRIPTION:
DMP (the dynamic multi-pathing driver) disables the paths of write-protected
devices, so these devices are shown in the error state. The vxattachd daemon
tries to bring these devices online and executes partial device discovery for
them. As part of partial device discovery, enabling and disabling the paths of
such write-protected devices generates many path enable/disable messages in
the /etc/vx/dmpevents.log file.

RESOLUTION:
This issue is addressed by not disabling the paths of write-protected devices in DMP.

* 2988017 (Tracking ID: 2971746)

SYMPTOM:
For a single-path device, the bdget() function is called for each I/O, which
causes high CPU usage and leads to I/O performance degradation.

DESCRIPTION:
For each I/O on a single-path DMP device, the OS function bdget() is called
while swapping the DMP whole device with its subpath OS whole device. The
whole-device lookup in the block device database consumes CPU cycles.

RESOLUTION:
Code changes are made to cache the OS whole block device pointer during device
open, to avoid calling the bdget() function on each I/O operation.
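
A user-space analogue of the caching fix (all names here are hypothetical):
the expensive lookup is resolved once at open time and stored in the
per-device state, so each I/O reuses the cached pointer instead of
re-resolving it.

```c
#include <stddef.h>

struct blockdev { int id; };

struct blockdev devices[4];
int lookup_calls = 0;                  /* counts the expensive lookups */

/* Stand-in for bdget()'s walk of the block device database. */
struct blockdev *expensive_lookup(int id) {
    lookup_calls++;
    return &devices[id];
}

struct dmpdev {
    int id;
    struct blockdev *cached;           /* filled in at open, reused per I/O */
};

void dev_open(struct dmpdev *d, int id) {
    d->id = id;
    d->cached = expensive_lookup(id);  /* one lookup per device open */
}

struct blockdev *dev_io(struct dmpdev *d) {
    return d->cached;                  /* no per-I/O lookup */
}
```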

* 2988018 (Tracking ID: 2964169)

SYMPTOM:
In multiple CPUs environment, I/O performance degradation is seen when I/O is
done through VxFS and VxVM specific private interface.

DESCRIPTION:
I/O performance degradation is seen when I/O is done through the VxFS- and
VxVM-specific private interface: the 'bi_comp_cpu' (CPU ID) field of 'struct
bio', which is set by the VxFS module, is not honoured within the VxVM module
and hence not propagated to the underlying device driver. As a result, iodone
routines are called mostly on CPU 0, causing performance degradation.
    The VxFS/VxVM private interface is used for smartmove, Oracle resilvering,
smartsync, etc.

RESOLUTION:
Code changes are made to propagate the 'bi_comp_cpu' (CPU ID) field of struct
bio, which is set by VxFS (the struct bio owner), within VxVM so that VxVM can
pass the CPU ID down to the underlying module and iodone is called on the
specified CPU.

* 3004823 (Tracking ID: 2692012)

SYMPTOM:
When moving subdisks using vxassist move (or the vxevac command, which
in turn calls vxassist move), if the disk tags are not the same for the source
and destination, the command fails with a generic message that does not convey
exactly why the operation failed.

The following generic message is seen:
VxVM vxassist ERROR V-5-1-438 Cannot allocate space to replace subdisks

DESCRIPTION:
When moving subdisks, vxassist move uses available disks 
from the disk group if no target disk is specified. If these disks have a site 
tag set and the values of the site tag attribute differ, vxassist move is 
expected to fail. However, it fails with a generic message that does not 
specify why the operation failed. A message is needed that precisely conveys 
to the user why the operation failed.

RESOLUTION:
A new message is introduced that precisely conveys that the failure 
is due to a site tag attribute mismatch.

The following message is seen along with the generic message, conveying the
actual reason for the failure:
VxVM vxassist ERROR V-5-1-0 Source and/or target disk belongs to site, can not 
move over sites

* 3004852 (Tracking ID: 2886333)

SYMPTOM:
The "vxdg(1M) join" command allowed mixing clone and non-clone disk groups. 
A subsequent import of the newly joined disk group fails.

DESCRIPTION:
Mixing clone and non-clone disk groups is not allowed. The code performing the
join operation did not validate against a mix of clone and non-clone disk
groups and went ahead with the operation. This resulted in the newly joined
disk group having a mix of clone and non-clone disks, so a subsequent import
of it fails.

RESOLUTION:
During a disk group join operation, both disk groups are checked; if a mix of 
clone and non-clone disk groups is found, the join operation fails.

* 3005921 (Tracking ID: 1901838)

SYMPTOM:
After the addition of a license key that enables multi-pathing, the state of the
controller is still shown as DISABLED in the vxdmpadm CLI output.

DESCRIPTION:
When the multi-pathing license key is added, the state of the active paths of a
LUN is changed to ENABLED, but the state of the controller is not updated.

RESOLUTION:
As a fix, whenever multipathing license key is installed, the operation updates
the state of the controller in addition to that of the LUN paths.

* 3006262 (Tracking ID: 2715129)

SYMPTOM:
vxconfigd hangs during master takeover in a CVM (Clustered Volume Manager) 
environment. This results in vx commands hanging.

DESCRIPTION:
During master takeover, the VxVM (Veritas Volume Manager) kernel signals 
vxconfigd with the information about the new master. vxconfigd then proceeds 
with a vxconfigd-level handshake with the nodes across the cluster. If the 
handshake mechanism starts before the kernel signals vxconfigd, the hang 
results.

RESOLUTION:
Code changes are made to ensure that the vxconfigd handshake starts only upon 
receipt of the signal from the kernel.

* 3011391 (Tracking ID: 2965910)

SYMPTOM:
vxassist dumps core with the following stack:
setup_disk_order()
volume_alloc_basic_setup()
fill_volume()
setup_new_volume()
make_trans()
vxvmutil_trans()
trans()
transaction()
do_make()
main()

DESCRIPTION:
When -o ordered is used, vxassist handles non-disk parameters 
differently. This scenario may result in an invalid comparison, leading to a 
core dump.

RESOLUTION:
Code changes are made to handle the parameter comparison logic 
properly.

* 3011444 (Tracking ID: 2398416)

SYMPTOM:
vxassist dumps core with the following stack:
 merge_attributes()
 get_attributes()
 do_make()
 main()
 _start()

DESCRIPTION:
vxassist dumps core while creating a volume when the 
attribute 'wantmirror=ctlr' is added to the '/etc/default/vxassist' file. 
vxassist reads this defaults file initially and uses the specified attributes 
to allocate storage during volume creation. However, while merging the 
attributes specified in the defaults file, it accesses a NULL attribute 
structure, causing the core dump.

RESOLUTION:
Necessary code changes have been done to check the attribute
structure pointer before accessing it.
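
The defensive check can be sketched as follows (the structure and field are
illustrative, not the vxassist source): the attribute pointers are validated
before the merge dereferences them.

```c
#include <stddef.h>
#include <string.h>

struct attr {
    char wantmirror[16];
};

/* Copies default attributes into 'dst'; returns -1 instead of crashing
 * when either structure is missing (previously: NULL dereference). */
int merge_attributes(struct attr *dst, const struct attr *defaults) {
    if (dst == NULL || defaults == NULL)
        return -1;
    strncpy(dst->wantmirror, defaults->wantmirror,
            sizeof dst->wantmirror - 1);
    dst->wantmirror[sizeof dst->wantmirror - 1] = '\0';
    return 0;
}
```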

* 3020087 (Tracking ID: 2619600)

SYMPTOM:
Live migration of a virtual machine running the SFHA/SFCFSHA stack with data 
disk fencing enabled causes service groups configured on the virtual machine 
to fault.

DESCRIPTION:
After live migration of a virtual machine running the SFHA/SFCFSHA stack with 
data disk fencing enabled, I/O on shared SAN devices fails with a reservation 
conflict and causes service groups to fault. 

Live migration changes the SCSI initiator. Hence, I/O coming from the migrated 
server to the shared SAN storage fails with a reservation conflict.

RESOLUTION:
Code changes are added to check whether the host is fenced off from the 
cluster. If it is, the registration key is re-registered for the dmpnode 
through the migrated server and the I/O is restarted.

* 3025973 (Tracking ID: 3002770)

SYMPTOM:
The system panics with the following stack trace:

vxdmp:dmp_aa_recv_inquiry
vxdmp:dmp_process_scsireq
vxdmp:dmp_daemons_loop
unix:thread_start

DESCRIPTION:
The panic happens while handling the SCSI response for the SCSI inquiry 
command. To determine whether the path on which the SCSI inquiry command was 
issued is read-only, the code needs to check the error buffer. However, the 
error buffer is not always prepared, so the code should verify that the error 
buffer is valid before checking it further. Without such examination, the 
system may panic on a NULL pointer.

RESOLUTION:
The source code is modified to verify that the error buffer is valid before it is examined.

* 3026288 (Tracking ID: 2962262)

SYMPTOM:
When DMP Native Stack support is enabled and some devices are managed by 
a multipathing solution other than DMP, uninstalling DMP fails with an 
error because DMP Native Stack support cannot be turned off.

 Performing DMP prestop tasks ...................................... Done
The following errors were discovered on the systems:
CPI ERROR V-9-40-3436 Failed to turn off dmp_native_support tunable on
pilotaix216. Refer to Dynamic Multi-Pathing Administrator's guide to determine
the reason for the failure and take corrective action.
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups


The CLI 'vxdmpadm settune dmp_native_support=off' also fails with the following 
error:

# vxdmpadm settune dmp_native_support=off
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

DESCRIPTION:
With DMP Native Stack support, devices used by LVM are expected to be 
multipathed by DMP. Co-existence with other multipathing solutions in 
such cases is not supported, and having another multipathing solution results 
in this error.

RESOLUTION:
Code changes have been made to not error out while turning off DMP Native 
Stack support if a device is not being managed by DMP.

* 3027482 (Tracking ID: 2273190)

SYMPTOM:
The device discovery commands 'vxdisk scandisks' or 'vxdctl enable' issued just
after license key installation may fail and abort.

DESCRIPTION:
After the addition of a license key that enables multi-pathing, the state of
the paths maintained at the user level is incorrect.

RESOLUTION:
As a fix, whenever a multi-pathing license key is installed, the operation 
updates the state of the paths both at the user level and the kernel level.

Patch ID: 6.0.100.200

* 2860207 (Tracking ID: 2859470)

SYMPTOM:
The EMC SRDF-R2 disk may go into the error state when you create an EFI label 
on the R1 disk. For example:

R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1

R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2

DESCRIPTION:
Since R2 disks are in write-protected mode, the default open() call (made for 
read-write mode) fails for the R2 disks, and the disk is marked as invalid.

RESOLUTION:
As a fix, DMP was changed to be able to read the EFI label even on a write 
protected SRDF-R2 disk.

* 2876865 (Tracking ID: 2510928)

SYMPTOM:
The extended attributes reported by "vxdisk -e list" for EMC SRDF LUNs are 
"tdev mirror" instead of "tdev srdf-r1". Example:

# vxdisk -e list 
DEVICE       TYPE           DISK        GROUP        STATUS              
OS_NATIVE_NAME   ATTR        
emc0_028b    auto:cdsdisk   -            -           online thin         
c3t5006048AD5F0E40Ed190s2 tdev mirror

DESCRIPTION:
The attributes of EMC SRDF LUNs were not extracted properly. Hence,
EMC SRDF LUNs are erroneously reported as "tdev mirror" instead of
"tdev srdf-r1".

RESOLUTION:
Code changes have been made to extract the correct values.

* 2892499 (Tracking ID: 2149922)

SYMPTOM:
Record the diskgroup import and deport events in 
the /var/log/messages file.
Following type of message can be logged in syslog:
vxvm: vxconfigd: V-5-1-16254 Disk group import of <dgname> succeeded.

DESCRIPTION:
With the diskgroup import or deport, appropriate success message 
or failure message with the cause for failure should be logged.

RESOLUTION:
Code changes are made to log diskgroup import and deport events in 
syslog.
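Once this fix is in place, the logged events can be checked in syslog. A
quick check (a sketch only; the exact message text follows the example above)
might look like:

```shell
# Search /var/log/messages for disk group import/deport events logged by
# vxconfigd (message IDs such as V-5-1-16254 in the example above).
grep "vxconfigd" /var/log/messages | grep -i "disk group"
```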

* 2892621 (Tracking ID: 1903700)

SYMPTOM:
'vxassist remove mirror' does not work if both nmirror and alloc are
specified, and fails with the error "Cannot remove enough mirrors".

DESCRIPTION:
During the remove mirror operation, VxVM does not analyze the plexes
correctly, which causes the failure.

RESOLUTION:
Necessary code changes have been done so that vxassist works properly.
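For illustration, a remove-mirror invocation of the form that triggered this
issue looks like the following (the disk group 'testdg', volume 'vol01', and
disk 'disk01' names are hypothetical):

```shell
# Remove one mirror of vol01, restricting the choice to plexes on disk01.
# After the fix this succeeds instead of failing with
# "Cannot remove enough mirrors".
vxassist -g testdg remove mirror vol01 nmirror=1 alloc=disk01
```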

* 2892643 (Tracking ID: 2801962)

SYMPTOM:
Operations that grow a volume, including 'vxresize' and 'vxassist
growby/growto', take significantly longer if the volume has a version 20
DCO (Data Change Object) attached to it, compared with a volume that has no
DCO attached.

DESCRIPTION:
When a volume with a DCO is grown, the existing map in the DCO must be copied
and updated to track the grown regions. The algorithm searched, for each
region in the map, for the page that contains that region in order to update
the map. The number of regions and the number of pages containing them are
proportional to the volume size, so the search complexity is amplified,
primarily when the volume size is of the order of terabytes. In the reported
instance, growing a 2.7TB volume by 50G took more than 12 minutes.

RESOLUTION:
Code has been enhanced to find the regions that are contained within a page
and then avoid looking up the page for each of those regions.

* 2892650 (Tracking ID: 2826125)

SYMPTOM:
VxVM script daemons are not up after they are invoked with the vxvm-recover 
script.

DESCRIPTION:
When the VxVM script daemon starts, it terminates any stale instance that
exists. If the script daemon is invoked with exactly the same process ID as
its previous invocation, the stale-instance check produces a false positive
and the daemon abnormally terminates by killing itself.

RESOLUTION:
Code changes are made to handle the same process id situation correctly.

* 2892660 (Tracking ID: 2000585)

SYMPTOM:
If 'vxrecover -sn' is run and a volume is removed at the same time, vxrecover
exits with the error 'Cannot refetch volume'; the exit status code is zero,
but no volumes are started.

DESCRIPTION:
vxrecover assumes that the volume is missing because the disk group must have
been deported while vxrecover was in progress. Hence, it exits without
starting the remaining volumes. vxrecover should be able to start the other
volumes if the disk group is not deported.

RESOLUTION:
The source has been modified to skip the missing volume and proceed with the
remaining volumes.
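As a sketch, the scenario above involves an invocation like the following
(the disk group name 'testdg' is hypothetical):

```shell
# Start all startable volumes in the disk group without resynchronization;
# with the fix, a volume removed mid-run is skipped rather than aborting
# the remaining volume starts.
vxrecover -g testdg -sn
```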

* 2892689 (Tracking ID: 2836798)

SYMPTOM:
'vxdisk resize' fails with the following error on a simple-format EFI
(Extensible Firmware Interface) disk expanded from the array side, and the
system may panic/hang after a few minutes.
 
# vxdisk resize disk_10
VxVM vxdisk ERROR V-5-1-8643 Device disk_10: resize failed:
Configuration daemon error -1

DESCRIPTION:
Because VxVM does not support Dynamic LUN Expansion on simple/sliced EFI
disks, the last usable LBA (Logical Block Address) in the EFI header is not
updated while expanding the LUN. Since the header is not updated, the
partition end entry is regarded as illegal and cleared as part of the
partition range check. This inconsistent partition information between the
kernel and the disk causes the system panic/hang.

RESOLUTION:
Added checks in VxVM code to prevent DLE on simple/sliced EFI disk.

* 2892702 (Tracking ID: 2567618)

SYMPTOM:
VRTSexplorer dumps core in checkhbaapi/print_target_map_entry with a stack
like:
print_target_map_entry()
check_hbaapi()
main()
_start()

DESCRIPTION:
checkhbaapi utility uses HBA_GetFcpTargetMapping() API which returns the current 
set of mappings between operating system and fibre channel protocol (FCP) 
devices for a given HBA port. The maximum limit for mappings was set to 512 and 
only that much memory was allocated. When the number of mappings returned was
greater than 512, the function that prints this information tried to access
entries beyond that limit, which resulted in core dumps.

RESOLUTION:
The code has been changed to allocate enough memory for all the mappings 
returned by HBA_GetFcpTargetMapping().

* 2922798 (Tracking ID: 2878876)

SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core with the following stack:

vol_cbr_dolog ()
vol_cbr_translog ()
vold_preprocess_request () 
request_loop ()
main     ()

DESCRIPTION:
This core is the result of a race between two threads processing requests
from the same client. While one thread has completed processing a request and
is releasing the memory it used, the other thread is processing a
"DISCONNECT" request from the same client. Due to the race condition, the
second thread attempted to access memory that was being released, and dumped
core.

RESOLUTION:
The issue is resolved by protecting the common data of the client by a mutex.

* 2924117 (Tracking ID: 2911040)

SYMPTOM:
A restore operation from a cascaded snapshot succeeds even when one of its
sources is inaccessible. Subsequently, if the primary volume is made
accessible for operation, I/O operations may fail on the volume because the
source of the volume is inaccessible. Deletion of the snapshots also fails
due to the dependency of the primary volume on the snapshots. In such a case,
the following error is thrown when trying to remove any snapshot with the
'vxedit rm' command:
"VxVM vxedit ERROR V-5-1-XXXX Volume YYYYYY has dependent volumes"

DESCRIPTION:
When a volume is restored from a snapshot, the snapshot becomes the source of
data for regions on the primary volume that differ between the two volumes.
If the snapshot itself depends on some other volume and that volume is not
accessible, the primary volume effectively becomes inaccessible after the
restore operation. In such a case, the snapshots cannot be deleted because
the primary volume depends on them.

RESOLUTION:
If a snapshot or any later cascaded snapshot is inaccessible, a restore from
that snapshot is prevented.
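For reference, a snapshot restore of the kind described above might be
attempted with a command such as the following (the disk group, volume, and
snapshot names are hypothetical):

```shell
# Restore vol01 from its snapshot; with the fix, the restore is refused if
# snap-vol01, or any snapshot it cascades from, is inaccessible.
vxsnap -g testdg restore vol01 source=snap-vol01
```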

* 2924188 (Tracking ID: 2858853)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after a master switch,
vxconfigd dumps core on the slave node (the old master) when a disk is
removed from the disk group.

dbf_fmt_tbl()
voldbf_fmt_tbl()
voldbsup_format_record()
voldb_format_record()
format_write()
ddb_update()
dg_set_copy_state()
dg_offline_copy()
dasup_dg_unjoin()
dapriv_apply()
auto_apply()
da_client_commit()
client_apply()
commit()
dg_trans_commit()
slave_trans_commit()
slave_response()
fillnextreq()
vold_getrequest()
request_loop()
main()

DESCRIPTION:
During a master switch, the disk group configuration copy related flags are
not cleared on the old master. Hence, when a disk is removed from a disk
group, vxconfigd dumps core.

RESOLUTION:
Necessary code changes have been made to clear configuration copy related flags 
during master switch.

* 2924207 (Tracking ID: 2886402)

SYMPTOM:
When DMP devices are reconfigured, typically using the command
'vxdisk scandisks', a vxconfigd hang is observed. While vxconfigd hangs, no
VxVM (Veritas Volume Manager) commands are able to respond.

Following process stack of vxconfigd was observed.

dmp_unregister_disk
dmp_decode_destroy_dmpnode
dmp_decipher_instructions
dmp_process_instruction_buffer
dmp_reconfigure_db
gendmpioctl
dmpioctl
dmp_ioctl
dmp_compat_ioctl
compat_blkdev_ioctl
compat_sys_ioctl
cstar_dispatch

DESCRIPTION:
When a DMP (dynamic multi-pathing) node is about to be destroyed, a flag is
set to hold any I/O (read/write) on it. I/Os that arrive between the setting
of the flag and the actual destruction of the DMP node are placed in the DMP
queue and are never served, so the hang is observed.

RESOLUTION:
An appropriate flag is now set on the node to be destroyed so that any I/O
issued after the flag is set is rejected, avoiding the hang condition.

* 2933468 (Tracking ID: 2916094)

SYMPTOM:
These are the issues for which enhancements have been made:
1. All the DR operation logs accumulate in one log file, 'dmpdr.log', and
this file grows very large.
2. If a command takes a long time, the user may think the DR operation is
stuck.
3. Devices controlled by TPD are seen in the list of LUNs that can be removed
in the 'Remove Luns' operation.

DESCRIPTION:
1. All the DR operation logs accumulate into one big log file, which makes it
difficult for the user to get to the logs of the current DR operation.
2. If a command takes time, the user has no way to know whether the command
is stuck.
3. Devices controlled by TPD are visible to the user, which suggests that
they can be removed without first removing them from TPD control.

RESOLUTION:
1. Every time the user opens the DR Tool, a new log file of the form
dmpdr_yyyymmdd_HHMM.log is generated.
2. A message is displayed to inform the user when a command takes longer than
expected.
3. Changes are made so that devices controlled by TPD are not visible during DR
operations.

* 2933469 (Tracking ID: 2919627)

SYMPTOM:
While performing the 'Remove Luns' operation of the Dynamic Reconfiguration
Tool, there is no feasible way to remove a large number of LUNs, since the
only way to do so is to enter all LUN names separated by commas.

DESCRIPTION:
When removing LUNs in bulk with the 'Remove Luns' option of the Dynamic
Reconfiguration Tool, it is not feasible to enter all the LUNs separated by
commas.

RESOLUTION:
Code changes have been made in the Dynamic Reconfiguration scripts to accept
a file containing the LUNs to be removed as input.

* 2934259 (Tracking ID: 2930569)

SYMPTOM:
The LUNs in 'error' state in output of 'vxdisk list' cannot be removed through
DR(Dynamic Reconfiguration) Tool.

DESCRIPTION:
The LUNs seen in the 'error' state in the VM (Volume Manager) tree are not
listed by the DR (Dynamic Reconfiguration) Tool during the 'Remove LUNs'
operation.

RESOLUTION:
Necessary changes have been made to display LUNs in error state while doing
'Remove LUNs' operation in DR(Dynamic Reconfiguration) Tool.
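LUNs in the error state can be identified beforehand from the STATUS column
of 'vxdisk list' output; for example:

```shell
# List devices whose STATUS column shows 'error'; with this fix, such LUNs
# are also offered by the DR Tool's 'Remove LUNs' operation.
vxdisk list | grep -w error
```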

* 2942166 (Tracking ID: 2942609)

SYMPTOM:
The following message is displayed as an error when quitting the Dynamic
Reconfiguration Tool:
"FATAL: Exiting the removal operation."

DESCRIPTION:
When the user quits an operation, the Dynamic Reconfiguration Tool displays
the quit message as an error.

RESOLUTION:
Changes have been made to display the message as informational.



INSTALLING THE PATCH
--------------------
o Before the upgrade:
  (a) Stop I/O to all the VxVM volumes.
  (b) Unmount any file systems with VxVM volumes.
  (c) Stop applications using any VxVM volumes.
o Select the appropriate RPMs for your system, and upgrade to the new patch.
# rpm -Uhv VRTSvxvm-6.0.300.100-RHEL6.x86_64.rpm
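After installing, the patch level can be confirmed with a standard rpm query
(a sketch; the reported version for this patch should be 6.0.300.100):

```shell
# Verify that the patched VRTSvxvm package is installed;
# the query reports the installed name-version-release.arch.
rpm -q VRTSvxvm
```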


REMOVING THE PATCH
------------------
# rpm -e  <rpm-name>
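For example, to remove this patch's VxVM package (the package name matches
the RPM installed above, without the .rpm suffix):

```shell
# Erase the installed VRTSvxvm patch package.
rpm -e VRTSvxvm-6.0.300.100-RHEL6.x86_64
```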


KNOWN ISSUES
------------
* Tracking ID: 3028155

SYMPTOM: When running an I/O load, some DMP paths are disabled. Error
messages like the following are found in /var/log/messages:
sd 1:0:0:14: SCSI error: return code = 0x08070002
sdp: Current: sense key: Aborted Command
     ASC=0x4b <<vendor>> ASCQ=0xc4
end_request: I/O error, dev sdp, sector 10633904
qla2xxx 0000:0b:00.0: scsi(1:0:14) Dropped frame(s) detected (0x80000 of 0x80000
bytes), firmware reported underrun.
qla2xxx 0000:0b:00.0: scsi(1:0:14) FCP command status: 0x15-0x302 (0x70002)
portid=060100 oxid=0x29d ser=0x1048e cdb=2a0000 len=0x80000 rsp_info=0x8
resid=0x0 fw_resid=0x80000
VxVM vxdmp V-5-0-112 [Info] disabled path 8/0xf0 belonging to the dmpnode
201/0xf0 due to...

WORKAROUND: Use a newer firmware version for the disk array and the HBA card.

* Tracking ID: 3037620

SYMPTOM: If the VCS engine is stopped for the first time, the SCSI
registration keys are removed. But if the VCS engine is stopped a second
time, the keys are not removed.

WORKAROUND: None

* Tracking ID: 3051102

SYMPTOM: On SLES11, after an in-place upgrade on a system with an
encapsulated root disk, the user is prompted with the following message on
reboot.

Waiting for device /dev/vx/dsk/bootdg/rootvol to appear:..............could
not find /dev/vx/dsk/bootdg/rootvol.
Want me to fall back to <resume device>?(Y/n)

WORKAROUND: Perform the following steps on the SLES11 machine with the
encapsulated root disk to upgrade VxVM.

1. Unroot the encapsulated root disk.
Use command:
#/etc/vx/bin/vxunroot

2. Upgrade VxVM.
You can perform this with the CPI installer or using the rpm command:
#rpm -Uvh VRTSvxvm-<version>.rpm

3. Reboot.

4. Re-encapsulate the root disk.
Use command:
#/etc/vx/bin/vxencap -c -g <root_diskgroup> rootdisk=<root_disk>

* Tracking ID: 3292689

SYMPTOM: Nodes with root encapsulated disks fail to restart after you perform an 
operating system upgrade for SLES11SPx on 6.0.1 systems.

WORKAROUND: Perform the following procedure on the system with the encapsulated root disk to 
upgrade the kernel or operating system.

To upgrade the kernel or operating system on a system with an encapsulated root 
disk

1 Unroot the encapsulated root disk:

# /etc/vx/bin/vxunroot

2 Upgrade the kernel or the operating system.

You may upgrade the kernel using the following rpm command:

# rpm -Uvh Kernel-upgrade_version

OR

You may upgrade the operating system using tools like yast or zypper.

3 Reboot the system.

4 Re-encapsulate the root disk:

# /etc/vx/bin/vxencap -c -g root_diskgroup rootdisk=root_disk

* Tracking ID: 3293136

SYMPTOM: On a root-encapsulated SLES11 machine, a reboot after upgrading VxVM
from 6.0.3 to 6.0.4 may prompt to fall back on the resume device.

WORKAROUND: 1) Select 'Y' at the prompt to continue booting.
2) Back up the existing /boot/VxVM_initrd.img file.
3) Regenerate the VxVM_initrd.img file using the following command:
#/etc/vx/bin/vxinitrd /boot/VxVM_initrd.img `uname -r`



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE


