vm-hpux1131-5.1SP1RP3P10

 Basic information
Release type: P-patch
Release date: 2019-03-12
OS update support: None
Technote: None
Documentation: None
Popularity: 1194 viewed
Download size: 376.04 MB
Checksum: 1251311272

 Applies to one or more of the following products:
Dynamic Multi-Pathing 5.1SP1 On HP-UX 11i v3 (11.31)
Storage Foundation 5.1SP1 On HP-UX 11i v3 (11.31)
Storage Foundation Cluster File System 5.1SP1 On HP-UX 11i v3 (11.31)
Storage Foundation for Oracle RAC 5.1SP1 On HP-UX 11i v3 (11.31)
Storage Foundation HA 5.1SP1 On HP-UX 11i v3 (11.31)

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches (release date follows each entry):
vm-hpux1131-5.1SP1RP3P9 (obsolete) 2018-11-23
vm-hpux1131-5.1SP1RP3P8 (obsolete) 2017-10-13
vm-hpux1131-5.1SP1RP3P7 (obsolete) 2017-04-19
vm-hpux1131-5.1SP1RP3P6 (obsolete) 2016-11-08
vm-hpux1131-5.1SP1RP3P5 (obsolete) 2016-01-12
vm-hpux1131-5.1SP1RP3P4 (obsolete) 2015-02-15
vm-hpux1131-5.1SP1RP3P3 (obsolete) 2014-11-11
vm-hpux1131-5.1SP1RP3P2 (obsolete) 2014-09-19
vm-hpux1131-5.1SP1RP3P1 (obsolete) 2014-02-26

 Fixes the following incidents:
2070079, 2163809, 2169348, 2198041, 2204146, 2205574, 2211971, 2214184, 2220064, 2227908, 2232829, 2233987, 2234292, 2241149, 2245645, 2247645, 2248354, 2253269, 2256728, 2280285, 2316309, 2323999, 2328219, 2328268, 2328286, 2337091, 2349653, 2353325, 2353327, 2353328, 2353403, 2353404, 2353410, 2353421, 2353425, 2353427, 2353464, 2353922, 2357579, 2357820, 2360415, 2360419, 2360719, 2364700, 2366130, 2377317, 2379034, 2382705, 2382710, 2382714, 2382717, 2383705, 2384473, 2384844, 2386763, 2389095, 2390804, 2390815, 2390822, 2397663, 2405446, 2408209, 2408864, 2409212, 2411052, 2411053, 2413077, 2413908, 2415566, 2415577, 2417184, 2417205, 2421100, 2421491, 2423086, 2427560, 2428179, 2435050, 2436283, 2436287, 2436288, 2437840, 2440015, 2440031, 2440351, 2442751, 2442827, 2442850, 2477272, 2477291, 2479746, 2480006, 2483476, 2484466, 2484695, 2485230, 2485252, 2485261, 2485278, 2485288, 2488042, 2491856, 2492016, 2492568, 2493635, 2495590, 2497637, 2497796, 2507120, 2507124, 2508294, 2508418, 2511928, 2513457, 2515137, 2517819, 2525333, 2528144, 2531224, 2531983, 2531987, 2531993, 2532440, 2552402, 2553391, 2560539, 2563291, 2567623, 2568208, 2570739, 2570988, 2574840, 2576605, 2583307, 2603605, 2613584, 2613596, 2616006, 2621549, 2622029, 2622032, 2626742, 2626745, 2626900, 2626915, 2626920, 2627000, 2627004, 2636094, 2641932, 2643651, 2646417, 2652161, 2663673, 2666175, 2676703, 2677025, 2690959, 2695225, 2695226, 2695227, 2695228, 2695231, 2701152, 2702110, 2703035, 2703370, 2703373, 2703384, 2705101, 2706024, 2706027, 2706038, 2711758, 2713862, 2730149, 2737373, 2737374, 2741105, 2744219, 2747340, 2750454, 2750455, 2752178, 2756069, 2759895, 2763211, 2768492, 2774907, 2800774, 2802370, 2804911, 2817926, 2821137, 2821143, 2821452, 2821515, 2821537, 2821678, 2821695, 2826129, 2826607, 2827791, 2827794, 2827939, 2832887, 2836910, 2836974, 2845984, 2847333, 2851354, 2852270, 2858859, 2859390, 2860208, 2860281, 2860445, 2860449, 2860451, 2860812, 2862024, 2876116, 2881862, 2882488, 2883606, 2886083, 2906832, 2911009, 2911010, 2916915, 2919718, 2929003, 2934484, 2940448, 2946948, 2950826, 2950829, 2957608, 2960650, 2973632, 2979692, 2982085, 2983901, 2986939, 2990730, 3000033, 3012938, 3041018, 3043203, 3047474, 3047803, 3057916, 3059145, 3069507, 3072890, 3077756, 3083188, 3083189, 3087113, 3087777, 3100378, 3139302, 3140407, 3140735, 3142325, 3144794, 3147666, 3158099, 3158780, 3158781, 3158790, 3158793, 3158794, 3158798, 3158799, 3158800, 3158802, 3158804, 3158809, 3158813, 3158818, 3158819, 3158821, 3164583, 3164596, 3164601, 3164610, 3164611, 3164612, 3164613, 3164615, 3164616, 3164617, 3164618, 3164619, 3164620, 3164624, 3164626, 3164627, 3164628, 3164629, 3164631, 3164633, 3164637, 3164639, 3164643, 3164645, 3164646, 3164647, 3164650, 3164759, 3164790, 3164792, 3164793, 3164874, 3164880, 3164881, 3164883, 3164884, 3164911, 3164916, 3175698, 3178903, 3181315, 3181318, 3182750, 3183145, 3184253, 3189869, 3224030, 3227719, 3235365, 3238094, 3240788, 3242839, 3245608, 3247983, 3249204, 3253306, 3256806, 3261603, 3264167, 3273164, 3326516, 3363231, 3387294, 3393288, 3393457, 3407766, 3412839, 3482535, 3490621, 3500353, 3502788, 3507677, 3507679, 3507683, 3510359, 3519134, 3524141, 3527676, 3528614, 3596439, 3597496, 3612710, 3618287, 3623802, 3674615, 3677738, 3680388, 3694561, 3699305, 3699311, 3699313, 3741442, 3741460, 3741485, 3741553, 3741620, 3741725, 3741882, 3742610, 3756607, 3762780, 3814123, 3814134, 3851289, 3851574, 3895164, 3895494, 3895678, 3895777, 3897044, 3898670, 3902801, 
3908808, 3913086, 3922019, 3922021, 3928660, 3932586, 3938065, 3945014, 3946329, 3952857, 3953926, 3955990, 3956609

 Patch ID:
PHCO_44774
PHKL_44775

Readme file
                          * * * READ ME * * *
             * * * Veritas Volume Manager 5.1 SP1 RP3 * * *
                         * * * P-patch 10 * * *
                         Patch Date: 2019-03-06


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 5.1 SP1 RP3 P-patch 10


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
HP-UX 11i v3 (11.31)


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Dynamic Multi-Pathing 5.1 SP1
   * Veritas Storage Foundation 5.1 SP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1
   * Veritas Storage Foundation HA 5.1 SP1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: PHKL_44775, PHCO_44774
* 3946329 (3946328) HPUX installer may select a VxVM version that may lead to an
unsupported configuration.
* 3953926 (3953925) The /opt/VRTS/bin/vxpfto utility may sometimes fail.
* 3955990 (2052659) Any I/O issued on a non-existent partition runs the risk of a system panic due to a double free of the same I/O buffer.
* 3956609 (3956607) vxdisk reclaim dumps core.
Patch ID: PHCO_44714
* 3680388 (3680386) The 'vxconfigd' process is stopped after upgrading the command-only patch from 'VxVM 5.1SP1RP3P8' to 'VxVM 5.1SP1RP3P9'.
* 3932586 (3934324) Optimize the VxVM device discovery time by reducing a few unnecessary io_search
API calls.
* 3938065 (3903441) When Ignite-UX is used to restore a system image from an Ignite archive
created with HP native multipathing (NMP) mode enabled, the recovered system hangs while
booting.
* 3945014 (3945375) The "vxassist" command with the "remove mirror" option can fail on a site-configured cluster.
* 3946329 (3946328) HPUX installer may select a VxVM version that may lead to an
unsupported configuration.
* 3952857 (3926067) The vxassist relayout and related vxassist commands may fail in a Campus Cluster environment.
Patch ID: PHCO_44660, PHKL_44661
* 3913086 (3898090) VxVM vxvmconvert utility may corrupt HP-UX LVM logical volume due to incorrect 
subdisk length during conversion.
* 3922019 (3922018) Shared DG (disk group) import fails when the minor number of the shared DG conflicts with the minor number of a private DG on the slave.
* 3922021 (3922020) After performing 'vxdisk resize', the size of the disk is not updated on slave 
nodes.
* 3928660 (3928658) While failing over the primary paths of a DMP node, an IO hang might be observed due to
a deadlock between two vxiod threads.
Patch ID: PHKL_44587, PHCO_44586
* 3902801 (3902793) The VxVM diagnostic utility vxdmpdbprint may fail with error 71.
* 3908808 (3857120) Commands like vxdg deport which try to close a VxVM volume might hang.
Patch ID: PHCO_44553, PHKL_44554
* 3895164 (3893167) When VxVM objects are displayed using the glance command, it returns a SIGSEGV
error.
* 3895494 (3895493) More history of DDL reconfiguration is desired to help isolate problems
concerning dynamic reconfiguration.
* 3895678 (3892775) When installing patches from QPK [Quality patch bundle] on a system with a VxVM
version before VxVM51SP1, the installation fails with an error message.
* 3895777 (2165920) The vxrelocd(1M) daemon creates a defunct (zombie) process.
* 3897044 (3880212) The vxreattach(1M) command always performs full resync operation despite
FastResync license.
* 3898670 (3898171) The system hangs during boot while being restored from an Ignite archive
that was created with HP's native multipathing (NMP) mode enabled.
Patch ID: PHCO_44243, PHKL_44244
* 3273164 (3281160) While importing a disk group that has the same base-minor number as an
existing imported disk group, and the auto-reminor setting is turned off, 
no warning message is issued.
* 3694561 (2990967) Minor numbers are incorrectly assigned after the volumes are created.
* 3699305 (3317125) A disk group may get imported as a shared disk group on a CVM (Clustered Volume
Manager) slave node when its 'base_minor' number is the same as that of an existing
private disk group on that slave node.
* 3699311 (3139180) The node join operation fails if the device-minor numbers used by the node-join 
operation conflict with the device-minor numbers of the currently imported 
disk groups in the cluster.
* 3699313 (3028704) When a disk group that has cloned disks is imported, warning messages related 
to minor number conflicts are displayed.
* 3741442 (3450758) The slave node is not able to join the Cluster Volume Manager (CVM) cluster and 
this results in panic.
* 3741460 (2721644) The vxconfigd(1M) daemon dumps core.
* 3741485 (3162418) The vxconfigd(1M) command dumps core when VxVM tries to find certain devices by 
their device numbers.
* 3741553 (3237503) The system hangs after a space-optimized snapshot is created when a large 
cache volume is used.
This issue is likely to occur when the size of the cache volume is in the range of 
terabytes.
* 3741620 (2869594) Master node panics after a space optimized snapshot is refreshed or deleted, 
and the master node is selected using the 'vxclustadm setmaster' command.
* 3741725 (3247040) The vxdisk(1M) scandisks command inadvertently re-enables PowerPath (PP) enclosures 
that were disabled earlier.
* 3741882 (3134118) The 'vxdmpadm -q iostat show interval=<interval>' command reports 
incorrect QUEUED I/Os and PENDING I/Os after the first interval.
* 3742610 (3134882) Disk re-online operation is not performed on slave node for auto-imported disk
groups containing clone disks.
* 3756607 (3090097) While the disk group is imported in the shared mode, if its base_minor number 
conflicts with base_minor number of an existing private disk group on slave 
node, then the minor numbers for the disk group may get allocated from a 
private pool.
* 3762780 (3605612) When auto-reminor setting is turned on, a disk group may get imported with the same base-minor number as that of an existing imported disk group.
* 3814123 (3045033) The "vxdg init" command should not create a disk group on the clone disks.
* 3814134 (3734606) As per the current design, all disks are re-onlined for every clone or tagged 
disk group to be imported. As a result, the node join operation takes time.
* 3851289 (3851287) While installing patches from QPK [Quality patch bundle], installation may get 
aborted if the VxVM command patch is part of QPK, but base VxVM package is not 
installed.
* 3851574 (1726349) In the Cluster Volume Manager (CVM) environment, the shared disk group creation 
may fail if a private disk group was previously created on the same disks.
Patch ID: PHCO_44219, PHKL_44218
* 3597496 (2965910) Volume creation with the vxassist(1M) command dumps core when the non-disk
parameters are specified along with '-o ordered' option.
* 3674615 (3674614) Restarting the vxconfigd(1M) daemon on the slave (joiner) node, when the node-join
operation is in progress, may cause the vxconfigd(1M) daemon to hang on 
the master and the joiner node.
* 3677738 (3199056) Veritas Volume Replicator (VVR) primary system panics in the vol_cmn_err()
function, due to the corrupted VVR queue.
Patch ID: PHCO_44194, PHKL_44195
* 3519134 (3518470) The 'vxdg adddisk' command fails with the following message:
 "<disk name> previously removed using -k option" 
even though the disk was not removed using the -k option.
* 3612710 (2190632) High memory consumption is observed with the vxconfigd(1M) daemon when a large 
number of events are generated.
* 3618287 (3526855) VxVM causes a system panic during I/O failures.
* 3623802 (3640061) System may panic/hang in Veritas Volume Replicator (VVR) while flushing the log 
header.
Patch ID: PHCO_44058, PHKL_44059
* 3249204 (3145359) On the IA architecture systems, the vxres_lvmroot -v -b <disk_name> command 
fails with the following error message:
"ERROR V-5-2-4699 Error creating Physical Volume /dev/vx/rdmp/c#t#d#s2"
* 3261603 (3261601) System panics when the dmp_destroy_dmpnode() function attempts to free a 
virtual address that is already set free.
* 3264167 (3254311) The system panics when the site is reattached to a site-consistent disk group 
that has a volume larger than 1.05 TB.
* 3387294 (2152830) A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
* 3393288 (2253210) When a device tree is added after a LUN is removed or added, the 
command "vxdisk scandisks" hangs.
* 3393457 (3390959) The vxconfigd(1M) daemon hangs in the kernel while 
processing the I/O request.
* 3407766 (3263105) The disk evacuation operation (vxevac()) fails for a volume with Data Change 
Object (DCO).
* 3412839 (2515070) When the I/O fencing is enabled, the Cluster Volume Manager (CVM) slave node 
may fail to join the cluster node.
* 3482535 (3482001) The 'vxddladm addforeign' command renders the system unbootable, after the 
reboot, for a few cases.
* 3490621 (2857341) The system panics during the normal I/O operations, with 'adaptive' I/O 
schedule policy in DMP.
* 3500353 (3500350) An option is not available to identify if the disk group (DG) is imported using 
the CLONE flag or not, when the clone flag on the disk is reset using the vxdisk
(1M) command.
* 3502788 (3485907) Panic occurs in the I/O code path.
* 3507677 (3293139) The verifydco utility fails with an error.
* 3507679 (3455460) The vxfmrshowmap and verify_dco_header utilities fail with an error.
* 3507683 (3281004) With the DMP minimum queue I/O policy and a large number of CPUs, a couple of issues 
are observed.
* 3510359 (3438271) The vxconfigd(1M) daemon may hang when new LUNs are added.
* 3524141 (3524139) The checkinstall script for the PHCO_43824 fails, even when the dependencies 
are in the depot.
* 3527676 (3527674) After upgrading to 5.1SP1RP3P1, the configuration daemon (vxconfigd(1M)) dumps 
core because the unique identifier of the disk (UDID) is not found for EMC 
LUNs, when the older VRTSaslapm is installed on the system.
* 3528614 (3461383) The vxrlink(1M) command fails when the vxrlink -g <DGNAME> -a att <RLINK> 
command is executed.
* 3596439 (3596425) During patch installation, the checkinstall script fails with error
"FC-COMMON.FC-SNIA filesets with revision B.11.31.1311 or Higher is required for
VxVM to work correctly.".
Patch ID: PHCO_43824, PHKL_43779
* 2982085 (2976130) Multithreading of the vxconfigd (1M) daemon for HP-UX 11i v3 causes the DMP 
database to be deleted as part of the device-discovery commands.
* 3326516 (2665207) Improve messaging on "vxdisk updateudid" failure on an imported disk group.
* 3363231 (3325371) Panic occurs in the vol_multistepsio_read_source() function when snapshots are 
used.
Patch ID: PHCO_43526, PHKL_43527
* 2233987 (2233225) Growing a volume to more than a limit, the default being 1G, does not 
synchronize plexes for the newly allocated regions of the volume.
* 2245645 (2255018) The vxplex(1M) command core dumps during the relayout operation from concat to 
RAID 5.
* 2366130 (2270593) Shared disk group enters the disabled state when the vxconfigd(1M) daemon is 
restarted on the master node followed by the node join operation.
* 2437840 (2283588) Initialization of the mirror on the root disk gives an error message on the IA 
machine.
* 2485252 (2910043) Avoid order 8 allocation by the vxconfigd(1M) daemon while the node is 
reconfigured.
* 2567623 (2567618) The VRTSexplorer dumps core in vxcheckhbaapi/print_target_map_entry.
* 2570739 (2497074) The "Configuration daemon error 441" error occurs while trying to stop a volume 
by using the vxvol(1M) command on the Cross Platform Data Sharing - Extensible 
Firmware Interface (CDS-EFI) disks.
* 2703384 (2692012) When moving the subdisks by using the vxassist(1M) command or the vxevac(1M) 
command, if the disk tags are not the same for the source and the destination, 
the command fails with a generic error message.
* 2832887 (2488323) Write on volumes with links could hang if the volume also has snapshots.
* 2836974 (2743926) The DMP restore daemon fails to restart during the system boot.
* 2847333 (2834046) NFS migration failed due to device re-minoring.
* 2851354 (2837717) The vxdisk(1M) resize command fails if the DA (disk access) name is specified.
* 2860208 (2859470) The Symmetrix Remote Data Facility R2 (SRDF-R2) with the Extensible Firmware 
Interface (EFI) label is not recognized by Veritas Volume Manager (VxVM) and 
goes in an error state.
* 2881862 (2878876) The vxconfigd daemon dumps core in vol_cbr_dolog() due to a race between two 
threads processing requests from the same client.
* 2883606 (2189812) When the 'vxdisk updateudid' command is executed on a disk which is in 
the 'online invalid' state, the vxconfigd(1M) daemon dumps core.
* 2906832 (2398954) The system panics while performing I/O on a VxFS mounted instant snapshot with 
the Oracle Disk Manager (ODM) SmartSync enabled.
* 2916915 (2916911) The vxconfigd(1M) daemon sends a VOL_DIO_READ request before the device is 
open. This may result in a scenario where the open operation fails but the disk 
read or write operations proceed.
* 2919718 (2919714) On a thin Logical Unit Number (LUN), the vxevac(1M) command returns 0 without 
migrating the unmounted-VxFS volumes.
* 2929003 (2928987) DMP (Dynamic Multi-Pathing) retried IO infinitely, causing vxconfigd to hang in 
case of an OS (operating system) layer failure.
* 2934484 (2899173) The vxconfigd(1M) daemon hangs after executing the "vradmin stoprep" command.
* 2940448 (2940446) A full file system check (fsck) hangs on I/O in Veritas Volume Manager (VxVM) 
when the cache object size is very large.
* 2946948 (2406096) The vxconfigd daemon dumps core in vol_cbr_oplistfree() function.
* 2950826 (2915063) During the detachment of a plex of a volume in the Cluster Volume Manager (CVM) 
environment, the system panics.
* 2950829 (2921816) System panics while starting replication after disabling the DCM volumes.
* 2957608 (2671241) When the Dirty Region Logging (DRL) log plex is configured in a volume, the 
vxnotify(1M) command does not report the volume enabled message.
* 2960650 (2932214) The "vxdisk resize" operation may cause the disk to go into the "online invalid" state.
* 2973632 (2973522) At cable connect on port1 of dual-port Fibre Channel Host Bus Adapters (FC 
HBA), paths via port2 are marked as SUSPECT.
* 2979692 (2575051) In a Cluster Volume Manager (CVM) environment, the master switch or master 
node takeover operations result in a panic.
* 2983901 (2907746) File Descriptor leaks are observed with the device-discovery command of VxVM.
* 2986939 (2530536) When any path of the ASM disk is disabled, this results in multiple DMP 
reconfigurations.
* 2990730 (2970368) Enhance handling of SRDF-R2 Write-Disabled devices in DMP.
* 3000033 (2631369) In a Cluster Volume Manager (CVM) environment, when the vxconfigd(1M) daemon is 
started in the single-threaded mode, a cluster reconfiguration such as a node 
join and Veritas Volume Manager(VxVM) operations on shared disk group take more 
time to complete.
* 3012938 (3012929) The vxconfigbackup(1M) command gives errors when disk names are changed.
* 3041018 (3041014) Beautify error messages seen during relayout operation.
* 3043203 (3038684) The restore daemon enables the paths of Business Continuance Volumes-Not Ready 
(BCV-NR) devices.
* 3047474 (3047470) The device /dev/vx/esd is not recreated on reboot with the latest major number, 
if it is already present on the system.
* 3047803 (2969844) The device discovery failure should not cause the DMP database to be destroyed 
completely.
* 3059145 (2979824) The vxdiskadm(1M) utility bug results in the exclusion of the unintended paths.
* 3069507 (3002770) While issuing a SCSI inquiry command, NULL pointer dereference in DMP causes 
system panic.
* 3072890 (2352517) The system panics while excluding a controller from Veritas Volume Manager 
(VxVM) view.
* 3077756 (3077582) A Veritas Volume Manager (VxVM) volume may become inaccessible causing the read/write operations to fail.
* 3083188 (2622536) Under a heavy I/O load, write I/Os on the Veritas Volume Replicator (VVR) 
Primary logowner take a very long time to complete.
* 3083189 (3025713) In a Veritas Volume Replicator (VVR) environment, adddisk/rmdisk operations take 
a long time and I/O hangs during the command execution.
* 3087113 (3087250) In a CVM environment, the node join operation takes a long time to execute 
when the host node joins the cluster.
* 3087777 (3076093) The patch upgrade script installrp can panic the system while doing a patch 
upgrade.
* 3100378 (2921147) udid_mismatch flag is absent on a clone disk when source disk
is unavailable.
* 3139302 (3139300) Memory leaks are observed in the device discovery code path of VxVM.
* 3140407 (2959325) The vxconfigd(1M) daemon dumps core while performing the disk group move 
operation.
* 3140735 (2861011) The vxdisk -g <dgname> resize <diskname> command fails with an error for the 
Cross-platform Data Sharing(CDS)  formatted disk.
* 3142325 (3130353) Continuous disable or enable path messages are seen on the console for
EMC Not Ready (NR) devices.
* 3144794 (3136272) The disk group import operation with the -o noreonline option takes 
additional import time.
* 3147666 (3139983) Failed I/Os from SCSI are retried only on very few paths to a LUN instead of
utilizing all the available paths, and may result in DMP sending I/O failures to
the application bounded by the recovery option tunable.
* 3158099 (3090667) The system panics or hangs while executing the "vxdisk -o thin,fssize list" 
command as part of Veritas Operations Manager (VOM) Storage Foundation (SF) 
discovery.
* 3158780 (2518067) The disabling of a switch port of the last-but-one active path to a Logical 
Unit Number (LUN) disables the Dynamic Multi-Pathing (DMP) node, and results in 
I/O failures on the DMP node even when an active path is available for the I/O.
* 3158781 (2495338) Disks having the hpdisk format cannot be initialized with a private region 
offset other than 128.
* 3158790 (2779580) Secondary node gives configuration error (no Primary RVG) after reboot of 
master node on Primary site.
* 3158793 (2911040) The restore operation from a cascaded snapshot leaves the volume in unusable 
state if any cascaded snapshot is in the detached state.
* 3158794 (1982965) The vxdg(1M) command import operation fails if the disk access (DA) name is based 
on a naming scheme which is different from the prevailing naming scheme on 
the host.
* 3158798 (2825102) CVM reconfiguration and VxVM transaction code paths can simultaneously access 
volume device list resulting in data corruption.
* 3158799 (2735364) The "clone_disk" disk flag attribute is not cleared when a cloned disk group is 
removed by the "vxdg destroy <dg-name>" command.
* 3158800 (2886333) The vxdg(1M) join operation should not allow mixing of clone and non-clone 
disks in a disk group.
* 3158802 (2091520) The ability to move the configdb placement from one disk to another 
using "vxdisk set <disk> keepmeta=[always|skip|default]" command.
* 3158804 (2236443) Disk group import failure should be made fencing aware, in place of the VxVM vxdmp 
V-5-0-0 I/O error message.
* 3158809 (2969335) A node that leaves the cluster while an instant snapshot operation is in 
progress hangs in the kernel and cannot join back to the cluster unless 
it is rebooted.
* 3158813 (2845383) The site gets detached if the plex detach operation is performed with the 
site-consistency set to off.
* 3158818 (2909668) In case of multiple sets of the cloned disks of the same source disk group, the 
import operation on the second set of the clone disk fails, if the first set of 
the clone disks were imported with updateid.
* 3158819 (2685230) In a Cluster Volume Replicator (CVR) environment, if the SRL is resized and the 
logowner is switched to and from the master node to the slave node, then there 
could be a SRL corruption that leads to the Rlink detach.
* 3158821 (2910367) When the SRL on the secondary site is disabled, the secondary panics.
* 3164583 (3065072) Data loss occurs during the import of a clone disk group, when some of the 
disks are missing and the import useclonedev and updateid options are 
specified.
* 3164596 (2882312) If an SRL fault occurs in the middle of an I/O load, and you immediately issue
a read operation on data written during the SRL fault, the system returns old
data.
* 3164601 (2957555) The vxconfigd(1M)  daemon on the CVM master node hangs in the userland during 
the vxsnap(1M) restore operation.
* 3164610 (2966990) In a Veritas Volume Replicator (VVR) environment, the I/O hangs at the primary 
side after multiple cluster reconfigurations are triggered in parallel.
* 3164611 (2962010) The replication hangs when the Storage Replicator Log (SRL) is resized.
* 3164612 (2746907) The vxconfigd(1M) daemon can hang under the heavy I/O load on the master node 
during the reconfiguration.
* 3164613 (2814891) The vxconfigrestore(1M) utility does not work properly if the SCSI page 83 
inquiry returns more than one FPCH name identifier for a single LUN.
* 3164615 (3102114) A system crash during the 'vxsnap restore' operation can cause the vxconfigd
(1M) daemon to dump core after the system reboots.
* 3164616 (2992667) When new disks are added to the SAN framework of the Virtual Intelligent System 
(VIS) appliance and the Fibre Channel (FC) switcher is changed to the direct 
connection, the vxdisk list command does not show the newly added disks even 
after the vxdisk scandisks command is executed.
* 3164617 (2938710) The vxassist(1M) command dumps core during the relayout operation.
* 3164618 (3058746) When the DMP disks of one RAID volume group are disabled, the I/O of the other 
volume group hangs.
* 3164619 (2866059) The error messages displayed during the resize operation using the vxdisk(1M) 
command need to be enhanced.
* 3164620 (2993667) Veritas Volume Manager (VxVM) allows setting the Cross-platform Data Sharing 
(CDS) attribute for a disk group even when a disk is missing, because it 
experienced I/O errors.
* 3164624 (3101419) In a CVR environment, I/Os to the data volumes in an RVG may experience a temporary 
hang during SRL overflow under a heavy I/O load.
* 3164626 (3067784) The grow and shrink operations by the vxresize(1M) utility may dump core in 
vfprintf() function.
* 3164627 (2787908) The vxconfigd(1M) daemon dumps core when the slave node joins the CVM cluster 
node.
* 3164628 (2952553) Refresh of a snapshot should not be allowed from a different source volume 
without force option.
* 3164629 (2855707) I/O hangs with the SUN6540 array during the path fault injection test.
* 3164631 (2933688) When the 'Data corruption protection' check is activated by Dynamic Multi-
Pathing (DMP), the device-discovery operation aborts, but the I/O to the 
affected devices continues; this results in data corruption.
* 3164633 (3022689) The vxbrk_rootmir(1M) utility succeeds with the following error message:  
ioscan: /dev/rdsk/eva4k6k0_48s2: No such file or directory.
* 3164637 (2054606) During the DMP driver unload operation the system panics.
* 3164639 (2898324) UMR errors reported by Purify tool in "vradmind migrate" command.
* 3164643 (2815441) The file system mount operation fails when the volume is resized and the volume 
has a link to a volume.
* 3164645 (3091916) The Small Computer System Interface (SCSI) I/O errors overflow the syslog.
* 3164646 (2893530) With no VVR configuration, the system panics when it is rebooted.
* 3164647 (3006245) While executing a snapshot operation on a volume which has snappoints 
configured, the system panics infrequently.
* 3164650 (2812161) In a Veritas Volume Replicator (VVR) environment, after the Rlink is detached, 
the vxconfigd(1M) daemon on the secondary host may  hang.
* 3164759 (2635640) The "vxdisksetup(1M) -ifB" command fails on Enclosure Based Naming (EBN) with 
the legacy tree removed.
* 3164790 (2959333) The Cross-platform Data Sharing (CDS) flag is not listed for disabled CDS disk 
groups.
* 3164792 (1783763) In a Veritas Volume Replicator (VVR) environment, the vxconfigd(1M) daemon may 
hang during a configuration change operation.
* 3164793 (3015181) I/O hangs on both the nodes of the cluster when the disk array is disabled.
* 3164874 (2986596) Disk groups imported with a mix of standard and clone logical unit numbers
(LUNs) may lead to data corruption.
* 3164880 (3031796) Snapshot reattach operation fails if any other snapshot of the primary volume 
is not accessible.
* 3164881 (2919720) The vxconfigd(1M) command dumps core in the rec_lock1_5() function.
* 3164883 (2933476) The vxdisk(1M) command resize fails with a generic error message. Failure 
messages need to be more informative.
* 3164884 (2935771) In a Veritas Volume Replicator (VVR) environment, the rlinks disconnect after 
switching the master node.
* 3164911 (1901838) After installation of a license key that enables multi-pathing, the state of 
the controller is shown as DISABLED in the command-line interface (CLI) output 
for the vxdmpadm(1M) command.
* 3164916 (1973983) The vxunreloc(1M) command fails when the Data Change Object (DCO) plex is in 
DISABLED state.
* 3178903 (2270686) In a CVM environment, the node join operation takes a long time to execute 
when the host node joins the cluster.
* 3181315 (2898547) The vradmind process dumps core on the Veritas Volume Replicator (VVR) 
secondary site in a Clustered Volume Replicator (CVR) environment, when 
Logowner Service Group on VVR Primary Site is shuffled across its CVM 
(Clustered Volume Manager) nodes.
* 3181318 (3146715) Rlinks do not connect with Network Address Translation (NAT) configurations
on Little Endian Architecture.
* 3182750 (2257733) Memory leaks are observed in the device discovery code path of VxVM.
* 3183145 (2477418) In VVR environment, logowner node on the secondary panics in low memory 
situations.
* 3189869 (2959733) Handling the device path reconfiguration in case the device paths are moved
across LUNs or enclosures to prevent the vxconfigd(1M) daemon coredump.
* 3224030 (2433785) In a CVM environment, the node join operation to the cluster node fails 
intermittently.
* 3227719 (2588771) The system panics when the multi-controller enclosure is disabled.
* 3235365 (2438536) Reattaching a site after it was either manually detached or detached due to 
storage inaccessibility, causes data corruption.
* 3238094 (3243355) The vxres_lvmroot(1M) utility which restores the Logical Volume Manager (LVM) 
root disk from the VxVM root disk fails.
* 3240788 (3158323) In a VVR environment with multiple secondaries, if the SRL overflows for rlinks at 
different times, it may cause the vxconfigd(1M) daemon to hang on the 
primary node.
* 3242839 (3194358) Continuous messages are displayed in the syslog file with EMC
not-ready (NR) LUNs.
* 3245608 (3261485) The vxcdsconvert(1M) utility failed with the error "Unable to initialize the 
disk as a CDS disk".
* 3247983 (3248281) When the vxdisk scandisks or vxdctl enable commands are run consecutively, 
the 'VxVM vxdisk ERROR V-5-1-0 Device discovery failed.' error is encountered.
* 3253306 (2876256) The vxdisk set mediatype command fails with the new naming scheme.
* 3256806 (3259926) The vxdmpadm(1M) command fails to enable the paths when the '-f'  option is 
provided.
Patch ID: PHCO_43295, PHKL_43296
* 3057916 (3056311) For release < 5.1 SP1, allow disk initialization with CDS format using raw geometry.
* 3175698 (3175778) Support for listener in 51SP1
* 3184253 (3184250) creating dummy incident for kernel patch
Patch ID: PHCO_43065, PHKL_43064
* 2070079 (1903700) Removing mirror using vxassist does not work.
* 2205574 (1291519) After multiple VVR migrate operations, vrstat fails to output statistics.
* 2227908 (2227678) Second rlink goes into DETACHED STALE state in multiple secondaries
environment when SRL has overflowed for multiple rlinks.
* 2427560 (2425259) vxdg join operation fails with VE_DDL_PROPERTY: Property not 
found in the list
* 2442751 (2104887) vxdg import error message needs improvement for cloned diskgroup import failure.
* 2442827 (2149922) Record the diskgroup import and deport events in syslog
* 2485261 (2354046) man page for dgcfgrestore is incorrect.
* 2492568 (2441937) vxconfigrestore precommit fails with awk errors
* 2495590 (2492451) vxvm-startup2 launches vxesd without checking install-db.
* 2513457 (2495351) The vxvmconvert utility was unable to migrate data across platforms from native LVM configurations.
* 2515137 (2513101) User data corrupted with disk label information
* 2531224 (2526623) Memory leak detected in CVM code.
* 2560539 (2252680) vxtask abort does not clean up tasks properly.
* 2567623 (2567618) The VRTSexplorer dumps core in vxcheckhbaapi/print_target_map_entry.
* 2570988 (2560835) I/Os and vxconfigd hung on master node after slave is rebooted under heavy I/O load.
* 2576605 (2576602) 'vxdg listtag' should give error message and display correct usage when executed with wrong syntax
* 2613584 (2606695) Machine panics in CVR (Clustered Volume Replicator) environment while performing I/O Operations.
* 2613596 (2606709) IO hang is seen when SRL overflows and one of the nodes reboots
* 2616006 (2575172) I/Os are hung on master node after rebooting the slave node.
* 2622029 (2620556) IO hung after SRL overflow
* 2622032 (2620555) IO hang due to SRL overflow & CVM reconfig
* 2626742 (2626741) Using the vxassist -o ordered and mediatype:hdd options together does not work as expected
* 2626745 (2626199) "vxdmpadm list dmpnode" printing incorrect path-type
* 2627000 (2578336) Failed to online the cdsdisk.
* 2627004 (2413763) Uninitialized memory read results in a vxconfigd coredump
* 2641932 (2348199) vxconfigd dumps core while importing a disk group
* 2646417 (2556781) In a cluster environment, an import attempt on an already imported disk group may 
return the wrong error.
* 2652161 (2647975) Serial Split Brain (SSB) condition caused Cluster Volume Manager(CVM) Master Takeover to fail.
* 2663673 (2656803) Race between vxnetd start and stop operations causes panic.
* 2677025 (2677016) The Veritas Event Manager (vxesd(1M)) daemon dumps core when the main thread tries to close one of its threads (which holds a connection with the HP Event Manager).
* 2690959 (2688308) When re-import of disk group fails during master takeover, other shared disk
groups should not be disabled.
* 2695226 (2648176) Performance difference on Master vs Slave during recovery via DCO.
* 2695231 (2689845) A data disk can go into an error state when data at the end of the 
first sector of the disk is the same as the MBR signature
* 2703035 (925653) Node join fails for higher CVMTimeout value.
* 2705101 (2216951) vxconfigd dumps core because chosen_rlist_delete() hits NULL pointer in linked list of clone disks
* 2706024 (2664825) DiskGroup import fails when disk contains no valid UDID tag on config copy and config copy is disabled.
* 2706027 (2657797) Starting 32TB RAID5 volume fails with V-5-1-10128 Unexpected kernel error in configuration update
* 2706038 (2516584) startup scripts use 'quit' instead of 'exit', causing empty directories in /tmp
* 2730149 (2515369) vxconfigd(1M) can hang in the presence of EMC BCV devices
* 2737373 (2556467) DMP-ASM: disabling all paths and rebooting the host causes loss of /etc/vx/.vxdmprawdev records
* 2737374 (2735951) Uncorrectable write error is seen on subdisk when SCSI device/bus reset occurs.
* 2747340 (2739709) Disk group rebuild fails as the links between volume and 
vset were missing from 'vxprint -D -' output.
* 2750455 (2560843) In VVR(Veritas Volume Replicator) setup I/Os can hang in slave nodes after one of the slave node is rebooted.
* 2756069 (2756059) System may panic when large cross-dg mirrored volume is 
started at boot.
* 2759895 (2753954) When a cable is disconnected from one port of a dual-port FC 
HBA, the paths via another port are marked as SUSPECT PATH.
* 2763211 (2763206) The 'vxdisk rm' command dumps core when a disk name of very large 
length is given
* 2768492 (2277558) vxassist outputs a misleading error message during snapshot related operations.
* 2800774 (2566174) Null pointer dereference in volcvm_msg_rel_gslock() results 
in panic.
* 2802370 (2585239) VxVM commands run very slow on setup with tape devices
* 2804911 (2637217) Document new storage allocation attribute support in vradmin man page for 
resizevol/resizesrl.
* 2817926 (2423608) panic in vol_dev_strategy() following FC problems
* 2821137 (2774406) System may panic while accessing data change map volume
* 2821143 (1431223) "vradmin syncvol" and "vradmin syncrvg" commands do not work if the remote
diskgroup and vset names are specified when synchronizing vsets.
* 2821452 (2495332) vxcdsconvert fails if the private region of the disk to be converted is less than 1 MB.
* 2821515 (2617277) Man pages for the vxautoanalysis and vxautoconvert commands are missing from the base package.
* 2821537 (2792748) Node join fails due to closing the wrong file descriptor.
* 2821678 (2389554) vxdg listssbinfo output is incorrect.
* 2821695 (2599526) IO Hang seen when DCM is zero.
* 2826129 (2826125) VxVM script daemon is terminated abnormally on its invocation.
* 2826607 (1675482) "vxdg list <dgname>" command shows configuration copy in new failed state.
* 2827791 (2760181) Panic hit on secondary slave during logowner operation.
* 2827794 (2775960) In secondary CVR case, IO hang seen on a DG during SRL disable activity on 
other DG.
* 2827939 (2088426) Re-onlining of disks in DG during DG deport/destroy.
* 2836910 (2818840) Enhance the vxdmpasm utility so that various permissions and "root:non-system" 
ownership can be set persistently.
* 2845984 (2739601) vradmin repstatus output occasionally reports abnormal 
timestamp.
* 2852270 (2715129) Vxconfigd hangs during Master takeover in a CVM (Clustered Volume Manager) 
environment.
* 2858859 (2858853) After master switch, vxconfigd dumps core on old master.
* 2859390 (2000585) vxrecover doesn't start remaining volumes if one of the volumes is removed
during vxrecover command run.
* 2860281 (2838059) VVR Secondary panic in vol_rv_update_expected_pos.
* 2860445 (2627126) IO hang seen due to IOs stuck at DMP level.
* 2860449 (2836798) In VxVM, resizing simple EFI disk fails and causes system panic/hang.
* 2860451 (2815517) vxdg adddisk allows mixing of clone & non-clone disks in a DiskGroup.
* 2860812 (2801962) Growing a volume takes significantly large time when the volume has version 20 
DCO attached to it.
* 2862024 (2680343) Manual disable/enable of paths to an enclosure leads to system panic
* 2876116 (2729911) IO errors seen during controller reboot or array port disable/enable.
* 2882488 (2754819) Diskgroup rebuild through 'vxmake -d' loops infinitely if 
the diskgroup configuration has multiple objects on a single cache object.
* 2886083 (2257850) vxdiskadm leaks memory while performing operations related to enclosures.
* 2911009 (2535716) The vxvmconvert utility doesn't support migration of data for a large number of LVM/VG configurations, since it always creates the private region at a static offset (the 128th block)
* 2911010 (2627056) 'vxmake -g DGNAME -d desc-file' fails with very large configuration due to memory leaks
Patch ID: PHCO_42992, PHKL_42993
* 2280285 (2365486) In a 2-node SFRAC configuration, after enabling ports the system panics due to an improper order of acquiring and releasing locks.
* 2532440 (2495186) With TCP protocol used for replication, I/O throttling happens due to memory flow control.
* 2563291 (2527289) Site consistency: Both sites become detached after a data/DCO plex failure at each site, leading to a cluster-wide I/O outage
* 2621549 (2621465) Reattaching a detached disk after connectivity restoration gives a 'Tagid conflict' error
* 2626900 (2608849) VVR Logowner: local I/O starved with heavy I/O load from Logclient
* 2626915 (2417546) Raw devices are lost after reboot and cause permissions problem.
* 2626920 (2061082) vxddladm -c assign names should work for devices with native support not enabled (VxVM labeled or TPD)
* 2636094 (2635476) Volume Manager does not recover a failed path
* 2643651 (2643634) Message enhancement for a mixed (non-cloned and cloned) dg import
* 2666175 (2666163) A small memory leak is possible in case of a mixed (clone and non-cloned) diskgroup import
* 2695225 (2675538) vxdisk resize may cause data corruption
* 2695227 (2674465) Adding/removing new LUNs causes data corruption
* 2695228 (2688747) Logowner local sequential I/Os starved with heavy I/O load on logclient.
* 2701152 (2700486) vradmind coredumps when Primary and Secondary have the same hostname and an active Stats session exists on Primary.
* 2702110 (2700792) The VxVM volume configuration daemon may dump a core during the Cluster Volume Manager(CVM) startup.
* 2703370 (2700086) EMC BCV (NR) established devices are resulting in multiple dmp events messages (paths being disabled/enabled)
* 2703373 (2698860) vxassist mirror failed for thin LUN because statvfs failed.
* 2711758 (2710579) Data corruption is observed on a Cross-platform Data Sharing (CDS) disk, as a 
part of the operations like LUN resize, Disk FLUSH, Disk ONLINE, and so on.
* 2713862 (2390998) System panicked during SAN reconfiguration because of the inconsistency in dmp
device open count.
* 2741105 (2722850) DMP fail over hangs when the primary controller is disabled while I/O activity
is ongoing.
* 2744219 (2729501) vxdmpadm exclude vxvm path=<> results in excluding unexpected set of paths.
* 2750454 (2423701) Upgrade of VxVM caused change in permissions.
* 2752178 (2741240) Invoking "vxdg join" operation during heavy IO load results in a transaction 
failure and leaves disks in an intermediate state.
* 2774907 (2771452) IO hung because of hung port deletion.
Patch ID: PHCO_42807, PHKL_42808
* 2440015 (2428170) IO hung on a mirror volume with an error returned on the DMP disk, but the physical disk (/dev/sdbw) is OK.
* 2477272 (2169726) CLONE: A disk group imported using a mix of non-cloned and cloned disks can lead to data corruption
* 2493635 (2419803) Secondary Site panics in VVR (Veritas Volume Replicator)
* 2497637 (2489350) Memory leak in VVR
* 2497796 (2235382) IO hung in DMP while restoring a path in presence of pending IOs on local A/P class LUN
* 2507120 (2438426) VxVM is failing to correctly discover ZFS LUNs presented via PP after excluding/including libvxpp.so
* 2507124 (2484334) panic in dmp_stats_is_matching_group()
* 2508294 (2419486) Data corruption occurs on changing the naming scheme
* 2508418 (2390431) VVR: system crash during DCM flush, not finding the parent_sio (volsiodone+1066)
* 2511928 (2420386) Data corruption occurs when creating data in a VxFS file system while it is being grown with vxresize on EFI thinrclm disks.
* 2515137 (2513101) User data corrupted with disk label information
* 2517819 (2530279) vxesd has been built without any thread locking mechanism.
* 2525333 (2148851) vxdisk resize failed to resize a disk that was expanded physically from the array console.
* 2528144 (2528133) vxdisk ERROR V-5-1-0 - Record in multiple disk groups
* 2531983 (2483053) Primary Slave node runs out of memory, system hang on VRTSvxvm
* 2531987 (2510523) ls -l command hang during RMAN backup on VVR/RAC cluster
* 2531993 (2524936) DG disabled after vold found the process file table is full.
* 2552402 (2432006) pending read count with kio cache is not decremented when read object is locked in transaction
* 2553391 (2536667) Slave node panics when private region I/O and a dg deport operation are executed 
simultaneously.
* 2568208 (2431448) CVR:I/O hang while transitioning to DCM mode.
* 2574840 (2344186) Volume recovery is not clearing the need sync flag from volumes with DCO in BADLOG state. Thus nodes are unable to join the cluster.
* 2583307 (2185069) panic in vol_rv_mdship_srv_start()
* 2603605 (2419948) Race between the SRL flush due to SRL overflow and the kernel logging code, leads to a panic.
* 2676703 (2553729) Disk groups do not get imported and the 'clone_disk' flag is seen on non-clone disks after an upgrade of VxVM.
Patch ID: PHCO_42245, PHKL_42246
* 2163809 (2151894) vmtest/tc/scripts/switchout/raid5/volume/start/wlog.tc#2 fails in the ss_tot nightly TC run
* 2169348 (2094672) CVR: vxconfigd on master hangs while reconfig is running in cvr stress with 8 users
* 2198041 (2196918) Snapshot creation with cachesize fails, as it doesn't take into account diskgroup alignment.
* 2204146 (2200670) vxattachd does not recover disks if disk group not imported
* 2211971 (2190020) SUSE complains that dmp_daemon requesting 1m of continuous memory paging is too large
* 2214184 (2202710) VVR:During SRL to DCM flush, commands should not hang and come out with proper error.
* 2220064 (2228531) Vradmind hangs in vol_klog_lock() on VVR Secondary site.
* 2232829 (2232789) NetApp Metro Cluster DMP failed and vxio write errors occurred
* 2234292 (2152830) A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
* 2241149 (2240056) 'vxdg move' transaction not completing and backups fail.
* 2247645 (2243044) VxVM vxislvm ERROR V-5-1-2604 cannot open /dev/rdsk/c64t0d1s2 error seen on running vxdisksetup
* 2248354 (2245121) Rlinks do not connect for NAT configurations.
* 2253269 (2263317) CLONE: Diskgroup import with dgid needs to be clearly documented in manual for the case in which original dg was destroyed and cloned disks are present.
* 2256728 (2248730) vxdg import command hangs as vxrecover daemon (spawned by vxdg) doesn't close standard error stream
* 2316309 (2316297) FMI: After applying 5.1SP1RP1 error message "Device is in use" appears during boot time
* 2323999 (2323925) If rootdisk is encapsulated and if install-db is present, clear warning should be displayed on system boot.
* 2328219 (2253552) Leak in vxsfdefault_parse.y at function vxsf_getdefault (*val)
* 2328268 (2285709) [QXCR1001108231] : guest hung on boot after vxconfigd started due to lack of dmpdaemons for error processing.
* 2328286 (2244880) hxrt5.1sp1:sfcfs:HP-UX11.31:initialize (vxdisksetup -fi) 1TB thin reclaimable luns failed
* 2337091 (2255182) Handling misconfiguration of a CLARiiON array reporting one failovermode value through one HBA and a different value through another HBA
* 2349653 (2349352) New LUN addition on the host caused Data corruption.
* 2353325 (1791397) VVR:RU thread keeps spinning sending START_UPDATE message repeatedly to the secondary
* 2353327 (2179259) DMP SCSI bypass needs to be enhanced to handle I/O > 2TB
* 2353328 (2194685) LxRT, 5.1SP1PR2, SFCFSRAC, RHEL6: vxconfigd daemon core dump during array side switch ports disable and re-enable.
* 2353403 (2337694) TP/lxrt5.1sp1: "vxdisk -o thin list" showing size 0 for over 2TB LUNs on RHEL5
* 2353404 (2334757) memory consumption for the vxconfigd grows because of a lot of DMP_IDLE, DMP_UNIDLE events.
* 2353410 (2286559) System panics in Vxdmp with kernel heap corruption message.
* 2353421 (2334534) In CVM environment, vxconfigd level join is hung when Master returns error "VE_NO_JOINERS" to a joining node and cluster nidmap is changed in new reconfiguration
* 2353425 (2320917) VXVM 5.1RP2 Solaris10: Lost disk groups - lost two disk group during routine operation:vxconfigd also crashed.
* 2353427 (2337353) vxdmpadm include vxvm dmpnodename=<emcpower#> includes all excluded dmpnodes along with the requested one
* 2353464 (2322752) Duplicate DA records seen for NR devices upon restart of vxconfigd
* 2353922 (2300195) vxdiskunsetup fails on hp-ia64 for cds-efi
* 2357579 (2357507) In presence of large number of NR (Not-Ready) devices, server panics due to NMI triggered and when DMP continuously generates large no of path disable/enable events.
* 2357820 (2357798) CVR:Memory leak due to unfreed vol_ru_update structure
* 2360415 (2242268) panic in voldrl_unlog
* 2360419 (2237089) vxrecover might start the recovery of data volumes before the recovery of the associated cache volume completes.
* 2360719 (2359814) vxconfigbackup doesn't handle errors well
* 2364700 (2364253) In case of Space Optimized snapshots at secondary site, VVR leaks kernel memory.
* 2377317 (2408771) vxconfigd does not scan and discover all the storage device; some storage devices are skipped.
* 2379034 (2379029) Changing the enclosure name does not work for all devices in the enclosure
* 2382705 (1675599) The vxconfigd daemon leaks memory while excluding and 
including a Third party Driver-controlled Logical Unit 
Number (LUN) in a loop. As a part of this, vxconfigd loses
its license information and an error is seen 
in the system log.
* 2382710 (2139179) SSB check is invalid when a LUN is copied
* 2382714 (2154287) [Product Enhancements] Improve handling of Not-Ready(NR)devices which are triggering "VxVM vxdmp V-5-3-1062 dmp_restore_node: Unstable path" messages
* 2382717 (2197254) While creating volumes on thinrclm disks, the option "logtype=none" does not work with vxassist command.
* 2383705 (2204752) getting error message "VxVM ERROR V-5-3-12240: GPT entries checksum mismatch" after creating a diskgroup
* 2384473 (2064490) Ensure vxcdsconvert works with support for > 1 TB CDS disks
* 2384844 (2356744) VxVM script daemons should not allow duplicate instances of themselves
* 2386763 (2346470) Excluding and including a LUN in a loop triggers a huge memory leak
* 2389095 (2387993) vxconfigd -k goes into disabled state in case of NR devices
* 2390804 (2249113) vol_ru_recover_primlog_done return the same start address to be read from SRL, if the dummy update is > MAX_WRITE
* 2390815 (2383158) VVR: vxio panic in vol_rv_mdship_srv_done+680
* 2390822 (2369786) VVR: A dead loop involving NM_ERR_HEADR_IO
* 2397663 (2165394) CLONE: dg imported by selecting wrong disks. After the original dg is destroyed, an attempt to import the clone devices by dgname without the useclonedev option imports the dg with the original disks.
* 2405446 (2253970) Support per-disk maxiosize for private region I/Os
* 2408209 (2291226) Multiple Oracle databases experienced block-level corruption
* 2408864 (2346376) scripts/kernel/vxdmp/misc/dmp_verify_stats_dbg.tc#3 failing consistently since past few days
* 2409212 (2316550) DMP: warning messages seen on setup with ALUA array "VxVM vxconfigd WARNING V-5-1-0 ddl_add_disk_instr: Turning off NMP Alua mode failed for dmpnode 0xffffffff with ret = 13 "
* 2411052 (2268408) suppressing a powerpath disk's path using vxdiskadm 17-2 causes the disk to go in error state
* 2411053 (2410845) Lots of 'reservation conflict' messages seen on 5.1SP1RP1P1 clusters with XIV arrays.
* 2413077 (2385680) vol_rv_async_childdone+1147
* 2413908 (2413904) Multiple issues are seen with AxRT5.0MP3RP3 while performing Dynamic LUN reconfiguration.
* 2415566 (2369177) DDL: do_diskio function should be able to handle offset > 2TB
* 2415577 (2193429) IO policy not getting preserved when vold is restarted and migration from one devlist to other is taking place.
* 2417184 (2407192) [VxVM][600-705-265][TURK TELEKOMUNIKASYON A.S] fsck threads hung in biowait
* 2417205 (2407699) vxassist dumps core if /etc/default/vxassist has wantmirror=ctlr
* 2421100 (2419348) DMP panic; race between dmp reconfig and dmp pass through ioctl
* 2421491 (2396293) HxRT SF6.0 : I/Os loaded, sanboot failed with vxconfigd core dump.
* 2423086 (2033909) In SF-RAC configuration, IO hung after disable secondary path of A/PG array Fujitsu ETERNUS3000
* 2428179 (2425722) vxsd move operation failed for disk size >= 2TB
* 2435050 (2421067) Vxconfigd hung in both nodes of primary
* 2436283 (2425551) IO hung for 6 minutes when rebooting the slave node, if there is I/O on both master and slave.
* 2436287 (2428875) I/O on both nodes (wait for the DCM flush started), and crash the slave node, lead to the master reconfiguration hang
* 2436288 (2411698) I/Os hang in CVR (Clustered Volume Replicator) environment.
* 2440031 (2426274) LM-Conformance/cio test hits assert "f:_volsio_mem_free:1a".
* 2440351 (2440349) DCO volume may grow into any 'site' even when 'alloc=site:xxxx' is specified by a list of 'site' to be limited.
* 2442850 (2317703) Vxesd/Vxconfigd leaks file descriptors.
* 2477291 (2428631) Allow same fence key to be used for all Disk groups
* 2479746 (2406292) Panic in vol_subdisksio_delete()
* 2480006 (2400654) Stale array.info file can cause vxdmpadm commands to hang.
* 2483476 (2491091) vxdisksetup failing with stale EFI entries
* 2484466 (2480600) I/O of large sizes like 512k and 1024k hang in CVR. (Clustered Volume Replicator)
* 2484695 (2484685) Race between two vol_subdisk sios during 'done' processing, which causes one thread to free sio_fsvm_priv before the other thread accesses it.
* 2485230 (2481938) QXCR1001120138: vxconfigbackup throwing an error when DG contains a sectioned disk
* 2485278 (2386120) Enhancement request to add diagnostic logging to help triage a CVM master takeover failure situation
* 2485288 (2431470) "vxdisk set" command operates on a wrong VxVM device and does not work with DA
(Disk Access) name correctly.
* 2488042 (2431423) CVR: Panic in vol_mv_commit_check after I/O error on DCM
* 2491856 (2424833) Pinnacle while autosync_deport#2 primary logowner hits ted assert nmcom_send_msg_tcp
* 2492016 (2232411) Pinnacle[NIGHTLY]:vmcert:/vmtest/tc/scripts/admin/volassist/align/subdisk.tc#2 & #4 failed in 01_02 nightly build


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: PHKL_44775, PHCO_44774

* 3946329 (Tracking ID: 3946328)

SYMPTOM:
"update-ux  -s <1805 OE Depot> !Base-VxFS-61 !Base-VxVM-61 !Base-VxTools-61  
!B3929IB" command may lead to unsupported configuration.

DESCRIPTION:
The above "update-ux" may fail if system is not at the latest VxVM 5.1SP1 patch 
level (PHCO_44243 & PHKL_44244  or superseding patches). "update-ux"  would auto 
select  the VxVM patches (PHCO_44243 & PHKL_44244). These patches have a 
dependency on VRTSaslapm whose revision string should be  r>=5.1.103.001.
The 1805 OE depot has 6.1 VRTSaslapm product  whose revision string is 
6.1.0.000.  Therefore SD automatically selects 6.1 VRTSaslapm  as 6.1.0.000 > 
5.1.103.001. System would have VRTSvxfs 5.1 SP1 & VRTSvxvm 6.1 (incompatible & 
unsupported configuration) The system would not boot if it is VxVM rooted.

RESOLUTION:
Changes have been done in VRTSaslapm dependencies to handle this scenario.
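
For illustration, the auto-selection described above follows from a component-wise
numeric comparison of dotted revision strings. The sketch below is a minimal,
hypothetical rendering of that comparison (it is not SD's actual implementation),
showing why 6.1.0.000 satisfies a dependency of r>=5.1.103.001:

/* Minimal sketch (not SD's actual code): component-wise numeric
 * comparison of dotted revision strings. 6.1.0.000 compares greater
 * than 5.1.103.001 because the first component 6 > 5, so a package
 * at 6.1.0.000 satisfies "r>=5.1.103.001" and gets auto-selected. */
#include <stdio.h>
#include <stdlib.h>

static int revcmp(const char *a, const char *b)  /* <0, 0, >0 like strcmp */
{
    while (*a || *b) {
        char *ea, *eb;
        long na = strtol(a, &ea, 10);   /* next numeric component of a */
        long nb = strtol(b, &eb, 10);   /* next numeric component of b */
        if (na != nb)
            return (na < nb) ? -1 : 1;  /* first difference decides */
        a = ea;
        b = eb;
        if (*a == '.') a++;             /* skip the separators */
        if (*b == '.') b++;
    }
    return 0;
}

int main(void)
{
    printf("%d\n", revcmp("6.1.0.000", "5.1.103.001"));  /* prints 1 */
    return 0;
}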

* 3953926 (Tracking ID: 3953925)

SYMPTOM:
"/opt/VRTS/bin/vxpfto -d -g <diskgroup_name> -o pftostate=disable/enable" can fail with below error:
VxVM vxdisk ERROR V-5-1-0 Device <disk_name> not in configuration or associated with DG <disk_group>

DESCRIPTION:
If the "disk media"(dm) name of a disk is same as the "hpnewname" of the other disk then we are likely to hit the issue.
If these names are same, VxVM is not able to differentiate between the disks and hence "vxpfto" fails with error.

RESOLUTION:
Appropriate code changes have been done to differentiate between disks even if the "dm" and "hpnewname" are the same.
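
As a rough illustration of the disambiguation involved, the sketch below (all
types and names are hypothetical, not VxVM source) resolves a disk name by
checking for an exact dm-name match within the requested disk group before
falling back to the hpnewname alias, so a dm name that happens to equal another
disk's hpnewname no longer resolves to the wrong disk:

/* Illustrative sketch only (hypothetical types, not VxVM source). */
#include <stddef.h>
#include <string.h>

struct disk {
    const char *dg;        /* owning disk group              */
    const char *dm_name;   /* disk media name                */
    const char *hpnewname; /* HP-UX "new" device name alias  */
};

const struct disk *find_disk(const struct disk *tbl, size_t n,
                             const char *dg, const char *name)
{
    size_t i;
    /* Pass 1: exact dm-name match inside the requested disk group. */
    for (i = 0; i < n; i++)
        if (strcmp(tbl[i].dg, dg) == 0 &&
            strcmp(tbl[i].dm_name, name) == 0)
            return &tbl[i];
    /* Pass 2: only then consider the hpnewname alias. */
    for (i = 0; i < n; i++)
        if (strcmp(tbl[i].dg, dg) == 0 &&
            strcmp(tbl[i].hpnewname, name) == 0)
            return &tbl[i];
    return NULL;  /* no such disk in this disk group */
}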

* 3955990 (Tracking ID: 2052659)

SYMPTOM:
Any I/O issued on a non-existent partition of an Extensible Firmware Interface
(EFI) disk can cause a system panic with a stack like the one below.

panic+0x240 () 
bad_kern_reference+0xa0 ()                                                     
$cold_vfault+0x470 ()
vm_hndlr+0x4b0 ()
bubbleup+0x880 ()                                                              
dmp_iostatq_wr+0x140 ()
gendmpiodone+0x130 ()
dmpiodone+0x20 () 
$cold_kmem_arena_free+0x20 ()
vxdmp_hp_kmem_free+0x30 ()
dmp_freerbuf+0x40 ()
dmp_strategy_handle_errio+0x3d0 ()                                             
dmp_bypass_strategy+0x310 ()                                                   
dmp_path_okay+0x4a0 ()
dmp_error_action+0x100 ()                                                      
dmp_aa_recv_inquiry+0x350 ()                                                   
dmp_process_scsireq+0x40 ()                                                    
dmp_daemons_loop+0x470 ()      
OR
bad_news+0x3b0
bubbledown+0x0
kmem_arena_free+0xb0
vxdmp_hp_kmem_free+0x30
dmp_freerbuf+0x40
gendmpiodone+0x410
dmpiodone+0x20
biodone+0x120

DESCRIPTION:
The strategy routine of the underlying driver returns an error when the
Dynamic Multi-Pathing driver hands over the I/O for a non-existent partition.
The Dynamic Multi-Pathing driver then processes the I/O completion on the
buffer and returns the failed I/O to the caller. At the same time, the
underlying driver is also processing the I/O completion on the same buffer,
which results in calling the io-done routine of the Dynamic Multi-Pathing
driver. Thus, Dynamic Multi-Pathing processes the same buffer twice, causing
the system panic.

RESOLUTION:
To avoid processing the same buffer twice, the Dynamic Multi-Pathing driver
code is fixed to ignore the I/O error coming from a non-existent partition.

* 3956609 (Tracking ID: 3956607)

SYMPTOM:
When removing a VxVM disk with the "vxdg rmdisk" operation, the following error occurs, requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: reclamation is irreversible.
However, when "vxdisk reclaim" is issued as advised, the command dumps core.

DESCRIPTION:
In the disk-reclaim code path, memory allocation can fail at realloc(), but the
failure is not detected, causing an invalid address to be referenced and a core
dump to result.

RESOLUTION:
The disk-reclaim code path now handles realloc() failure properly.
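
With the fix, the advised sequence completes as expected; a minimal sketch,
using hypothetical device, disk group, and disk names:

# vxdisk reclaim emc0_1234
# vxdg -g mydg rmdisk mydg01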

Patch ID: PHCO_44714

* 3680388 (Tracking ID: 3680386)

SYMPTOM:
'vxconfigd' process gets stopped after a command-only patch upgrade.

DESCRIPTION:
At the start of a command patch upgrade, the 'vxconfigd' process is stopped in order to build a new vxconfigd binary.
However, a new 'vxconfigd' process is not started after the patch upgrade completes.

RESOLUTION:
Code changes are implemented to restart 'vxconfigd' during the command patch upgrade.
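
After a command patch upgrade, one hedged way to confirm that vxconfigd is
running and enabled again is:

# vxdctl mode
mode: enabled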

* 3932586 (Tracking ID: 3934324)

SYMPTOM:
The VxVM device discovery time needs to be optimized by removing a few
unnecessary io_search() API calls.

DESCRIPTION:
During VxVM (Veritas Volume Manager) device discovery, the io_search() API
provided by HP was called multiple times, which increases the overall device
discovery time. To optimize the device discovery process, a few of the
unnecessary io_search() API calls have been removed.

RESOLUTION:
A code fix is implemented to reduce the device discovery time.

* 3938065 (Tracking ID: 3903441)

SYMPTOM:
When restoring an Ignite-UX system recovery archive created from a system
having a VxVM root disk that bypasses DMP (i.e. using only HP-UX native
multipathing or NMP), the restored system hangs while booting with the
following error message:

UX:vxfs fsck: ERROR: V-3-20945: /dev/vx/dsk/rootdg/standvol:cannot stat
/dev/vx/dsk/rootdg/standvol

DESCRIPTION:
The bootdg entry in the /etc/vx/volboot file was not properly restored. In the
case of a VxVM root disk which bypasses DMP, the issue can cause failures when
starting the volumes in rootdg and eventually lead to a hang during booting.

RESOLUTION:
The code is modified to update the missing bootdg entry in the volboot file
while booting the system after recovery.

* 3945014 (Tracking ID: 3945375)

SYMPTOM:
"vxassist" command with  "remove mirror" can fail on node on site configured cluster.
VxVM vxassist ERROR V-5-1-3242  Subvolume <Subvolume-name>: Cannot remove enough  mirrors

DESCRIPTION:
For each mirror record, a check is made that there is at least one active mirror on the site apart from the mirror being marked for removal.
Due to a bug in the code, the number of active mirrors configured on the site is reported incorrectly.
Hence the "vxassist remove" option fails, displaying the above error message.

RESOLUTION:
Appropriate code changes are done to fix the bug.

* 3946329 (Tracking ID: 3946328)

SYMPTOM:
"update-ux  -s <1805 OE Depot> !Base-VxFS-61 !Base-VxVM-61 !Base-VxTools-61  
!B3929IB" command may lead to an unsupported configuration.

DESCRIPTION:
The above "update-ux" command may fail if the system is not at the latest VxVM
5.1SP1 patch level (PHCO_44243 & PHKL_44244, or superseding patches).
"update-ux" would auto-select the VxVM patches (PHCO_44243 & PHKL_44244).
These patches have a dependency on VRTSaslapm, whose revision string should be
r>=5.1.103.001. The 1805 OE depot has the 6.1 VRTSaslapm product, whose
revision string is 6.1.0.000. Therefore SD automatically selects the 6.1
VRTSaslapm, as 6.1.0.000 > 5.1.103.001. The system would then have VRTSvxfs
5.1SP1 & VRTSvxvm 6.1 (an incompatible & unsupported configuration). The
system would not boot if it is VxVM rooted.

RESOLUTION:
Changes have been done in VRTSaslapm dependencies to handle this scenario.

* 3952857 (Tracking ID: 3926067)

SYMPTOM:
In a Campus Cluster environment, the vxassist relayout command may fail with
the following error:
VxVM vxassist ERROR V-5-1-13124  Site  offline or detached
VxVM vxassist ERROR V-5-1-4037 Relayout operation aborted. (20)

The vxassist convert command might also fail with the following error:
VxVM vxassist ERROR V-5-1-10128  No complete plex on the site.

DESCRIPTION:
For the vxassist "relayout" and "convert" operations in a Campus Cluster
environment, VxVM (Veritas Volume Manager) needs to sort the plexes of the
volume according to sites. When the number of plexes of a volume is greater
than 100, the sorting of plexes fails due to a bug in the code. Because of
this sorting failure, the vxassist relayout/convert operations fail.

RESOLUTION:
Code changes are done to properly sort the plexes according to site.

Patch ID: PHCO_44660, PHKL_44661

* 3913086 (Tracking ID: 3898090)

SYMPTOM:
VxVM (Veritas Volume Manager) vxvmconvert utility may corrupt an HP-UX LVM
logical volume due to an incorrect subdisk length during conversion.

DESCRIPTION:
When converting an HP-UX LVM logical volume to a VxVM volume with the
vxvmconvert utility, if the volume is very large, the multiplication of the
logical volume's PE (Physical Extent) size, block size, and extent count can
overflow a 32-bit value, making the subdisk length incorrect. Consequently,
the converted VxVM volume content is corrupted.

RESOLUTION:
Code changes have been done to address the issue.

* 3922019 (Tracking ID: 3922018)

SYMPTOM:
Shared DG import fails when the minor number of the shared DG conflicts with the minor number of a private DG on a slave, with the below error:
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed:  
Error in cluster processing

DESCRIPTION:
During shared DG import, the minor number is first checked on the master node and then the import request is sent to the slave node. 
If there is a private DG on the slave node with the same minor number as the shared DG being imported, the slave node fails the import 
with a minor-conflict error. The master node then retries the import with a new minor number and sends the retry request to the slave 
node. However, some stale entries from the first failed import attempt remain in the vxconfigd in-core database, which is not cleaned 
up. These stale entries lead to a vxconfigd core dump on the slave, which in turn results in DG import failure.

RESOLUTION:
Code changes are done to clear the entries created as part of the first DG import attempt.

* 3922021 (Tracking ID: 3922020)

SYMPTOM:
The 'vxdisk resize' operation does not propagate the new DA sizes to slave
nodes in a cluster. The volume and file system sizes are correctly propagated,
but the 'vxdisk list' output on the slaves shows an unchanged size.

DESCRIPTION:
For propagating the new DA sizes to slave nodes, one of the functions reads
the required magic string at a wrong offset, because the magic string is
overwritten by the local disk attributes offset. The overwriting happens
because config copies are disabled; they are explicitly disabled for disks
that require VTOC to GPT conversion and have a size greater than 1 TB. Hence
the issue.

RESOLUTION:
Code changes have been done to refresh the DA records on the slaves by reading
the private region.
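
A minimal sketch of the operation, assuming a hypothetical shared disk group
mydg with disk mydg01; with the fix, the slaves should report the new DA size:

(on the master)
# vxdisk -g mydg resize mydg01

(on a slave node, to verify the propagated DA size)
# vxdisk -g mydg list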

* 3928660 (Tracking ID: 3928658)

SYMPTOM:
While failing over the primary paths of a DMP node, an I/O hang might be
observed due to a deadlock between two vxiod threads.

The following two threads might be involved in the deadlock:
Thread 1 :
wait_for_lock()
spinlock()
volget_rwspinlock()
svm_adminio_wakeup()
voliod_restart_commitwait_io()
dmp_unquiesce_free()
cvm_dmp_failover_path()

Thread 2 :
wait_for_lock()
spinlock()
vol_failover_drain()
quiescesio_start()
voliod_iohandle()
voliod_loop()
kthread_daemon_startup()

DESCRIPTION:
During DMP path failover, a deadlock might happen between two vxiod threads
because of incorrect order of acquiring the spinlock.

RESOLUTION:
Code changes have been done to correct the locking order leading to deadlock
between two vxiod threads.

Patch ID: PHKL_44587, PHCO_44586

* 3902801 (Tracking ID: 3902793)

SYMPTOM:
The VxVM diagnostic utility vxdmpdbprint fails with the following 'error 71'
message:

# /etc/vx/diag.d/vxdmpdbprint 
dmp_get_dmp_nodes_on_da() failed with error 71

DESCRIPTION:
This error occurs when the disk array name is longer than 15 characters. The
array-name field is in an internal data structure pre-allocated by the
vxdmpdbprint utility. If the disk array name is longer than 15 characters, it
overflows and overwrites the subsequent fields in the data structure, leading
to error 71.

RESOLUTION:
The code is modified to enlarge the pre-allocated array-name space in the VxVM structure used by the vxdmpdbprint utility.

* 3908808 (Tracking ID: 3857120)

SYMPTOM:
If the volume is shared in a CVM configuration, the following stack traces will
be seen under a vxiod daemon suggesting an attempt to drain I/O. In this case,
CVM halt will be blocked and eventually time out.
The stack trace may appear as:
sleep+0x3f0  
vxvm_delay+0xe0  
volcvm_iodrain_dg+0x150  
volcvmdg_abort_complete+0x200  
volcvm_abort_sio_start+0x140  
voliod_iohandle+0x80

or 

cv_wait+0x3c() 
delay_common+0x6c 
vol_mv_close_check+0x68 
vol_close_device+0x1e4
vxioclose+0x24
spec_close+0x14c
fop_close+0x8c 
closef2+0x11c
closeall+0x3c 
proc_exit+0x46c
exit+8
post_syscall+0x42c
syscall_trap+0x188

Since vxconfigd would be busy in a transaction trying to close the volume or
drain the I/O, all other threads that send requests to vxconfigd will hang.

DESCRIPTION:
VxVM maintains an I/O count of the in-progress I/Os on the volume. When two
threads from VxVM asynchronously manipulate the I/O count on the volume, the
race between these threads might leave a stale I/O count on the volume even
though the volume has actually completed all I/Os. Since an invalid pending
I/O count remains on the volume due to the race condition, the volume cannot
be closed.

RESOLUTION:
This issue has been fixed in the VxVM code manipulating the I/O count to avoid
the race condition between the two threads.

Patch ID: PHCO_44553, PHKL_44554

* 3895164 (Tracking ID: 3893167)

SYMPTOM:
When the VxVM objects are displayed using the glance command, it returns a
SIGSEGV error.
The glance command fails with the following error message:

Memory fault(coredump)

DESCRIPTION:
Currently, VxVM by default uses a 32-bit conversion table for converting
structures passed back to the application. The glance command has been changed
to a 64-bit binary. Hence, when the glance command tries to read the 32-bit
structures returned by VxVM, it fails with the SIGSEGV error.

RESOLUTION:
The code is modified such that VxVM will convert the structure to 64-bit
compatible structure if the calling application is 64-bit.

* 3895494 (Tracking ID: 3895493)

SYMPTOM:
More history of DDL reconfiguration is desired to help isolate problems
concerning dynamic reconfiguration.

DESCRIPTION:
In the 5.1SP1 release, 10 ddl.log files are kept to record the dynamic
reconfiguration activity of the vxconfigd(1M) command. In some cases, the
ddl.log files are overwritten, which leads to loss of important data required
to complete the Root Cause Analysis (RCA) of the issue encountered.

RESOLUTION:
For the 5.1SP1 release, in order to safeguard important data for RCA efforts,
the number of ddl.log files retained before being overwritten has been
increased to 50.

* 3895678 (Tracking ID: 3892775)

SYMPTOM:
When installing patches from the QPK [Quality Patch Bundle] on a system with a
VxVM version earlier than VxVM 5.1SP1, the installation fails with the
following error message:

ERROR:   The "checkinstall" script for "PHCO_44243" failed (exit code
         "1"). The script location was
         "/var/tmp/BAA005510/catalog/PHCO_44243/pfiles/checkinstall".

DESCRIPTION:
The VxVM 5.1SP1 command patch is included in the QPK bundle, and the patch's
checkinstall script verifies the installation of the required versions of the
CommonIO and FC drivers. Such verification is needed only if the system is
running VxVM 5.1SP1. Due to faulty logic in the checkinstall script, a system
having a VxVM version other than 5.1SP1 can still fail the QPK installation if
the required versions of the CommonIO and FC drivers are not present.

RESOLUTION:
The checkinstall script is modified to return an error message for the missing
pre-requisite drivers only when the base 5.1SP1 VxVM package is installed on
the system.

* 3895777 (Tracking ID: 2165920)

SYMPTOM:
The vxrelocd(1M) daemon creates a defunct (zombie) process.

DESCRIPTION:
The vxrelocd(1M) daemon creates child processes to help in reclaiming disks,
but it does not check the exit status of its child processes. As a child
process is forked in the background, it becomes a defunct child process of the
vxrelocd(1M) daemon.

RESOLUTION:
The code is modified such that the child process is forked only when necessary,
and vxrelocd(1M) waits for the child process to complete.

* 3897044 (Tracking ID: 3880212)

SYMPTOM:
The vxreattach(1M) command always performs a full resync operation, despite a
FastResync license being installed.

DESCRIPTION:
By default, the vxreattach(1M) command uses the "nofmr" option while
reattaching the disk. To use FastResync with the vxreattach operation, it
needs to be enabled in the /etc/default/vxreattach file (usefmr=yes). This
requirement was not previously documented.

RESOLUTION:
The man page for command vxreattach(1M) is modified to include usage of
/etc/default/vxreattach file.
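
A minimal sketch of the defaults file mentioned above:

# cat /etc/default/vxreattach
usefmr=yes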

* 3898670 (Tracking ID: 3898171)

SYMPTOM:
When using Ignite-UX to restore an image of a system having VxVM foreign disks
(i.e. VxVM disks bypassing DMP) as root disks, the recovered system hangs while
booting with the following error message:
UX:vxfs fsck: ERROR: V-3-20945: /dev/vx/dsk/rootdg/standvol:cannot stat
/dev/vx/dsk/rootdg/standvol

DESCRIPTION:
When restoring the Ignite archive, the bootdg entry is found to be missing
from /etc/vx/volboot. At the same time, due to a bug, vxconfigd cannot recover
the missing bootdg entry at rootdg import.

RESOLUTION:
The code is modified to correct the missing bootdg entry while booting the system.

Patch ID: PHCO_44243, PHKL_44244

* 3273164 (Tracking ID: 3281160)

SYMPTOM:
While importing a disk group that has a base-minor number same as that of an
existing imported disk group and when auto-reminor setting is turned off, 
the import proceeds successfully without issuing a warning suggesting disk 
group should be re-minored whenever possible.
Thus two imported disk groups may be assigned same base_minor, as shown below.

	vxprint -g <dg1> -F "%base_minor" <dg1>
10000
	vxprint -g <dg2> -F "%base_minor" <dg2>
10000

DESCRIPTION:
When auto-reminor is set to off, the base-minor number conflict 
is not properly detected when importing the disk group. 
As a result, multiple disk groups may be imported using the same 
base_minor number. After volumes are created in such disk groups, 
available minor numbers in the range base_minor to (base_minor+1000) 
will be allocated to these volumes. This may cause potential minor number 
conflicts when such disk groups are deported and imported again.

RESOLUTION:
The code is modified to issue following warning while importing the disk group
 having same base-minor as that of existing disk groups: 
VxVM vxconfigd WARNING V-5-1-0 Disk group <dgname> has base-minor number 
conflicts, it should be reminored when volumes are not in use.

* 3694561 (Tracking ID: 2990967)

SYMPTOM:
The volume device numbers are not assigned as per the disk group base-minor 
range while the volumes are created.

For example, in the below output, the base_minor number is 6812000, but minor 
number 5 is assigned to the volume. Minor numbers are expected to be assigned 
within the range base_minor to base_minor+1000.

vxprint -l synpdg | grep minor
minors:   >= 6812000
device: minor=5 bdev=2/5 cdev=134/5 path=/dev/vx/dsk/synpdg/ora_synp_log_vol

During import, the following warning message is displayed:
VxVM vxdg WARNING V-5-1-1328 Volume ora_synp_log_vol: 
Temporarily renumbered due to conflict

DESCRIPTION:
To avoid the minor number conflicts, minor-number space is divided into two 
pools, one for the private-disk groups, and the other for the shared-disk 
groups. Further, a base-minor number is assigned to a disk group while 
importing it and the volumes created in the disk group are assigned device 
numbers based on the disk group base-minor number, the minor number range falls 
between base-minor to base-minor +1000. 
Now, if a shared disk group is created and later if it is imported back as the 
private disk group, the volumes minor numbers are expected to be assigned 
starting with the disk group base-minor number. However, the minor numbers were 
incorrectly getting assigned from the private pool of minor number
space. 
The issue occurred because an internal map was incorrectly used for assigning 
the minor numbers to the volume.

RESOLUTION:
The code is modified to correctly assign the device numbers to the volumes 
based on the disk group base-minor range. Further, the vxdg (1M) command is 
enhanced to provide a way to import the disk group if there are minor number 
conflicts.
The disk group is permanently re-minored using the following command: 
vxdg [-t] -o reminor[=new_base_minor] import <dgname>
In the above command, if a new_base_minor is specified, then disk group will be
imported and re-minored using the new_base_minor.
If new_base_minor is not specified, then VxVM will assign an appropriate
new_base_minor.
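
For example (the disk group name and base minor shown are hypothetical), a
conflicting disk group could be imported and permanently re-minored as
follows:

# vxdg -o reminor=30000 import mydg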

* 3699305 (Tracking ID: 3317125)

SYMPTOM:
A disk group may get imported as a shared disk group on a CVM (Clustered
Volume Manager) slave node such that its 'base_minor' number is the same as
that of an existing private disk group on that slave node, as shown below:
> vxprint -g <priv_dg> -F "%base_minor" <priv_dg>
	20000
> vxprint -g <shared_dg> -F "%base_minor" <shared_dg>  
	20000

DESCRIPTION:
When an earlier private disk group is imported in shared mode and its
base_minor number conflicts with the base_minor number of an existing 
private disk group on one of the slave nodes, the import should issue 
a warning message so that the user can re-minor the disk group whenever
the volumes are not in use. However, the disk group was getting imported with
a base_minor number equal to that of the existing private disk group on the
slave node, without any warning message being issued. Resolving such
base-minor number conflicts may help to avoid minor number conflicts in the
future.

RESOLUTION:
The code has been fixed to issue the following warning message while 
importing the disk group on the slave node: 
VxVM vxconfigd WARNING V-5-1-0 Disk group <dgname> has base-minor number 
conflicts, it should be reminored when volumes are not in use.

* 3699311 (Tracking ID: 3139180)

SYMPTOM:
The node join operation fails if the device-minor numbers of the disk groups 
on the joining node conflict with the device-minor numbers of the currently 
imported disk groups in the cluster.
The following messages are displayed in the system logs:
vxvm:vxconfigd: V-5-1-8066 minor number <minor_number> disk group <dgname> in 
use
vxvm:vxconfigd: V-5-1-11092 cleanup_client: (Cannot assign minor number)
<minor_number>
vmunix: WARNING: VxVM vxio V-5-0-164 Failed to join cluster <cluster_name>, 
aborting

DESCRIPTION:
VxVM does not re-minor a disk group that is already imported, regardless of 
whether "autoreminor" is set to "on". In this case, a node attempts to join 
the cluster, but the joining node has minor numbers that conflict with a 
disk group in the cluster. 
As a result, the join operation fails. The disk group that has the conflicting 
minor numbers needs to be re-minored manually before performing the node-join 
operation.

RESOLUTION:
The behavior and the work-around are documented in the vxdg(1M) commands manual page.
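
As a hedged sketch of the documented work-around (names and numbers are
hypothetical), the conflicting disk group could be deported and re-imported
with the reminor option described earlier, before retrying the node join:

# vxdg deport mydg
# vxdg -o reminor=40000 import mydg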

* 3699313 (Tracking ID: 3028704)

SYMPTOM:
When a disk group that has cloned disks is imported, warning messages related 
to minor number conflicts are displayed.
The warning message that is displayed is as follows:
"VxVM vxdg WARNING V-5-1-1328 Volume <volume_name>: Temporarily renumbered due 
to conflict"

DESCRIPTION:
When a disk group that has clone disks is imported using the '-o useclonedev' 
and '-o updateid' options, a minor number conflict occurs, since the volumes 
in the cloned disk group have the same minor numbers. If the autoreminor 
setting is turned on, this conflict is avoided by performing dynamic 
reminoring during the disk group import operation.
However, the function which handles the 'updateid' task does not check if the 
disk group has already been reminored due to a minor number conflict earlier 
during the disk group import process, and subsequently assigns a new 
base-minor number to the disk group. 
As a result, a warning message is displayed.

RESOLUTION:
The code is modified to fix the function that handles disk group reminoring 
when the 'updateid' option is provided. Reminoring is not performed when the 
minor number has already been changed due to a minor number conflict.

* 3741442 (Tracking ID: 3450758)

SYMPTOM:
The slave node panics when it tries to join the CVM cluster. The following 
stack trace is observed:
vol_plex_iogen()
volobject_iogen()
vol_cvol_volobject_iogen()
vol_cvol_init_iogen()
vol_cache_linkdone()
cvm_msg_cfg_end()
vol_kmsg_request_receive()
vol_kmsg_receiver()
kernel_thread()

This panic only occurs if the cluster contains a shared disk group having cache 
volumes.

DESCRIPTION:
The panic occurs when the node generates a Staged I/O (SIO) for the plex object 
in the process of validating the cache object (parent), and the plex object 
(child) associations. Some fields of the plex object, which were not populated 
are accessed as part of the SIO generation. This access to the NULL fields 
leads to panic.

RESOLUTION:
The code is modified to avoid accessing the NULL fields of the plex object, 
while the slave node joins the cluster.

* 3741460 (Tracking ID: 2721644)

SYMPTOM:
The vxconfigd(1M) daemon dumps core with the following stack trace:

#0  strstr () from /lib/libc.so.6
#1  ddl_get_tgttype ()
#2  ddl_update_port_prop ()
#3  ddl_set_config ()
#4  req_set_ddl_config ()
#5  request_loop ()
#6  main ()

The core dump may be observed after the disks are removed from the storage 
layer.

DESCRIPTION:
The vxconfigd(1M) daemon sets the information about the entities discovered by 
the Device Discovery Layer (DDL). While reading the entity database, if disks 
are removed from the storage layer, an internal unsigned variable 
corresponding to the number of sub-paths under a DMP node gets assigned a 
negative value as an error condition. Due to the incorrect data type, the 
program fails to identify the error in retrieving the number of sub-paths and 
incorrectly proceeds. As a result, the vxconfigd(1M) daemon subsequently dumps 
core due to an illegal operation.

RESOLUTION:
The code change defines the local variable as a signed type to prevent the 
core dump.

* 3741485 (Tracking ID: 3162418)

SYMPTOM:
When handling a device discovery request, such as vxdisk scandisks,
vxconfigd(1M) may fail the discovery due to some issue, leaving the device
list empty. Subsequently, vxconfigd(1M) dumps core when trying to access the
device list. A probable stack trace is as follows.

ddi_hash_devno()
ddl_find_cdevno()
ddl_find_path_cdevno()
req_daname_get()
vold_process_request()
start_thread()

DESCRIPTION:
In trying to find a device from the emptied device list, NULL values are 
returned, but vxconfigd(1M) attempts to dereference those NULL values, leading 
to a core dump. Ideally, vxconfigd(1M) should detect the NULL values before 
accessing them to avoid the core dump.

RESOLUTION:
The code changes are made to detect the NULL value correctly.

* 3741553 (Tracking ID: 3237503)

SYMPTOM:
The system hangs after a space-optimized snapshot is created when a large 
cache volume (in the range of terabytes) is used.
Following stack trace is observed:
volilock_release()
vol_cvol_unilock()
vol_cvol_bplus_walk()
vol_cvol_rw_start()
voliod_iohandle()
voliod_loop()
vol_kernel_thread _init()
threadentry()

DESCRIPTION:
For all the changes written to the cache volume after the snapshot volume is 
created, a translation map with a B+ tree data structure is used to accelerate 
the search, insert, and delete operations. While inserting a node into the 
tree, type casting of the page offset to 'unsigned int' truncates any offset 
value beyond the maximum 32-bit integer. 
The value truncation corrupts the B+ tree data structure. This results in a 
VxVM staged I/O (SIO) hang. 
In addition to this, all space-optimized snapshots on the corresponding cache 
object may be corrupted.

RESOLUTION:
The code is modified to remove all the type castings to 'unsigned int' in 
the cache volume code.

* 3741620 (Tracking ID: 2869594)

SYMPTOM:
The master node panics after a space-optimized snapshot is refreshed or 
deleted, and the master node is switched using the 'vxclustadm setmaster' 
command. In addition to this, all the space-optimized snapshots on the 
corresponding cache object may get corrupted. The following stack trace is 
observed:

volilock_rm_from_ils()
vol_cvol_unilock()	
vol_cvol_bplus_walk()
vol_cvol_rw_start()
voliod_iohandle()
voliod_loop()
thread_start()

DESCRIPTION:
In Clustered Volume Manager (CVM), the master node owns the responsibility to 
maintain the cache object index structure for providing the space-optimized 
functionality. When a space-optimized snapshot is refreshed or deleted, the 
index structure gets rebuilt in the background after the operation returns. 
If the master node is switched using the 'vxclustadm setmaster' command 
before the index rebuild is complete, both the old master node and the new 
master node rebuild the index in parallel. This subsequently results in 
index corruption. 
Since the index is corrupted, the data stored on the space-optimized snapshots 
should not be trusted. I/Os issued on the corrupted index lead to a panic.
Space-optimized snapshots created on the corresponding cache object may be 
corrupted. It may be a silent corruption, because the panic may be observed 
only at the time of accessing the corrupt index structures.

RESOLUTION:
The code is modified such that when the master role is switched using 
the 'vxclustadm setmaster' command, the index rebuild on the old master 
node is safely aborted. Only the new master node is allowed to rebuild the 
index.

* 3741725 (Tracking ID: 3247040)

SYMPTOM:
The execution of the vxdisk scandisks command inadvertently re-enables 
PowerPath (PP) enclosures earlier disabled using the following command:
"vxdmpadm disable enclosure=<pp_enclosure_name>"

DESCRIPTION:
During the device discovery process, due to a wrong check for the PP enclosure, 
Dynamic Multi-pathing (DMP) destroys the old PP enclosure from the device 
discovery layer (DDL) database, and adds it as a new enclosure. 
This process removes all the old flags that are set on the PP enclosure, and 
then DMP treats the enclosure as enabled due to the absence of required flags.

RESOLUTION:
The code is modified to keep the PP enclosure in the DDL database during the 
device discovery, and to verify that the existing flags on the paths of the PP 
enclosure are not reset.

* 3741882 (Tracking ID: 3134118)

SYMPTOM:
The 'vxdmpadm -q iostat show interval=<interval>' command reports 
incorrect QUEUED I/Os and PENDING I/Os after the first interval.

Consider the example below. PENDING I/Os should display 200 instead of 0 for 
the second interval, because those I/Os are still pending since the first 
interval.
$ vxdmpadm -qz iostat show interval=1

                    cpu usage = 0us    per cpu memory = 831488b
                      QUEUED I/Os        PENDING I/Os
PATHNAME             READS    WRITES
<device_path_1>        0         0           200
<device_path_1>        0         0           0

DESCRIPTION:
QUEUED I/Os and PENDING I/Os should report the outstanding I/Os for each 
interval. But the code incorrectly reports the difference between the I/O 
counts for the current interval and the previous one. As a result, if no I/O 
occurred between the two intervals, the difference, which is 0, will be printed 
instead of the actual number of outstanding I/Os.

RESOLUTION:
The code is modified so that for any time interval, the DMP driver would always 
report the 
outstanding I/O count for that time frame.

* 3742610 (Tracking ID: 3134882)

SYMPTOM:
For a node join scenario, the disk re-online operation is not performed on the 
slave node for auto-imported disk groups containing clone disks.

DESCRIPTION:
During a node join, the re-online operation is not done on the slave node if 
the disk group containing clone disks is auto-imported on the master node.
For a disk group containing clone disks, the disk re-online operation must be 
performed before importing the disk group. If the re-online is not performed, 
there is a possibility that the disk group is imported with stale private 
region data. The re-online operation ensures that the disk private region is 
updated with correct information.

RESOLUTION:
Code changes are done to re-online the disks on the slave node.

* 3756607 (Tracking ID: 3090097)

SYMPTOM:
When a disk group with auto-reminor enabled is imported in shared mode and 
its base_minor number conflicts with the base_minor number of an existing 
private disk group on one of the slave nodes in the cluster, the disk group is 
imported with base_minor=0, and the minor numbers assigned to its associated 
volumes then come from the private pool, as shown below. 
	vxdg -s import <dg>
	vxprint -g <dg> -m 
     	base_minor=0
	vol  vol_concat1
        	minor=6
	vol  vol_stripe1
        	minor=5

DESCRIPTION:
When the auto_reminor setting is enabled, a base-minor conflict should be 
resolved by retrying the disk group import operation using a different base 
minor. Due to an internal flag not being properly set for the retry, 
auto-reminoring does not happen, leaving the base_minor at 0. Minor numbers 
assigned to associated volumes then fall into the private pool of minor 
numbers, from 0 to 32999.

RESOLUTION:
The code is modified to set the 'auto-reminor' flag, while it retries the disk 
group import operation.

* 3762780 (Tracking ID: 3605612)

SYMPTOM:
When the auto-reminor setting is enabled, a disk group may get imported such that its 'base_minor' number is the same as that of another imported disk group, as shown below:
	vxprint -g <dg_1> -F "%base_minor" <dg_1>
	20000
	vxprint -g <dg_2> -F "%base_minor" <dg_2>
	20000

DESCRIPTION:
If the 'autoreminor' tunable is enabled and the 'base_minor' number of the 
disk group to be imported is the same as that of another disk group, then the 
disk group should ideally be re-minored automatically during import. The 
re-minoring helps to avoid minor number conflicts in the future. However, the 
disk group to be imported was not getting re-minored if there were no volume 
records in the disk group. As a result, multiple disk groups with the same 
'base_minor' number were imported.

RESOLUTION:
The code is fixed to correctly re-minor the disk group during disk group import in order to handle the base_minor number conflicts.

* 3814123 (Tracking ID: 3045033)

SYMPTOM:
The "vxdg init" command should not create a disk group on the clone disks.

DESCRIPTION:
If the disk contains a copy of data from some other VxVM disk, an attempt to 
create disk groups (dgs) on such disks may result in data loss. Hence, 
the vxdg init command fails and a disk group is not created on the clone 
disks. If the disk has the 'udid_mismatch' or 'clone_disk' flag but is not a 
clone disk, the flag can be manually cleared using the following commands:

vxdisk updateudid <diskname>
vxdisk set <diskname> clone=off

For more information, see the vxdg(1M) commands manual page.

RESOLUTION:
The code is modified to fail the "vxdg init" command for the clone disks. 
Additionally, the behavior of the vxdg init command for the clone disks and 
steps to clear the 'udid_mismatch' or 'clone_disk' flags are documented in the 
vxdg(1M) commands manual page.

* 3814134 (Tracking ID: 3734606)

SYMPTOM:
As per the current design, all disks are re-onlined for every clone or tagged 
disk group to be imported. As a result, the node join operation takes time.

DESCRIPTION:
During the slave node join operation, all the disks are re-onlined for every 
clone or tagged disk group to be imported. As a result, the node join time may 
increase if a large number of disk groups (dgs) have cloned or tagged disks on 
the system.

RESOLUTION:
The code is modified to reduce the redundant disk re-onlines during the node 
join operation. Earlier, all disks were re-onlined while importing each clone 
or tagged disk group during the node join operation. With the fix, all disks 
will be re-onlined once during the node join operation, if there exists at 
least one clone or tagged disk group.

* 3851289 (Tracking ID: 3851287)

SYMPTOM:
While installing patches from the QPK [Quality Patch Bundle], the installation 
may get aborted if the VxVM command patch is part of the QPK but the base VxVM 
package is not installed.

DESCRIPTION:
If the VxVM command patch is included in the QPK bundle, then it gets marked 
for installation. While installing the VxVM patch, the checkinstall script 
verifies whether the pre-requisites, like the CommonIO bundle and FC drivers 
of the required versions, are installed on the system. 

If the pre-requisites of the required versions are not installed on the 
system, the checkinstall script returns failure irrespective of whether the 
base VxVM package is installed. As a result, installation of the other patches 
from the QPK bundle also gets aborted.

RESOLUTION:
The checkinstall script is modified to return an error for the missing 
pre-requisite packages only when the base VxVM package is installed on the 
system.

* 3851574 (Tracking ID: 1726349)

SYMPTOM:
In the CVM environment, the shared disk group creation may fail if a private 
disk group was previously created and destroyed using the same disks. The 
following error message is displayed:

  vxdg -s init <dgname> <diskname> 
  VxVM vxdg ERROR V-5-1-585 Disk group <dgname>: cannot create: Disk in use by 
another cluster

DESCRIPTION:
When a shared disk group is created, the disks are not compulsorily re-onlined 
on the slave node. As a result, if the in-core database of the slave node 
contains the stale disk information, the disk group creation may fail. 

The problem was observed when a private disk group was created on the master 
node and 'vxdisk scandisks' was performed on the slave node. Later, the private 
disk group was destroyed from the master node and a new shared disk group was 
created on the same disks. However, the disk group creation failed due to the 
stale disk information on the slave node.

RESOLUTION:
The code is modified to re-online the required disks on the slave node during 
the disk group creation.

Patch ID: PHCO_44219, PHKL_44218

* 3597496 (Tracking ID: 2965910)

SYMPTOM:
Volume creation with the vxassist(1M) command dumps core when the non-disk
parameters like enclosures are specified along with "-o ordered" option. The
following stack trace is observed: 
setup_disk_order()
volume_alloc_basic_setup()
fill_volume()
setup_new_volume()
make_trans()
vxvmutil_trans()
trans()
transaction()
do_make()
main()

DESCRIPTION:
When volume creation is triggered using the vxassist(1M) command with the "-o 
ordered" option, the vxassist(1M) command sets up the disk order so as to 
allocate storage in the order specified by the vxassist(1M) command 
arguments. Before setting up the disk order, the vxassist(1M) command filters 
out all the non-disk parameters in order to avoid mixing disk classes such as 
mediatype. As a result, valid parameters like controllers or enclosures are 
also filtered out. Due to this, an invalid pointer gets accessed while setting 
up the disk order, which results in a segfault.

RESOLUTION:
The code is modified to remove the filtering of the disk classes while setting
up the disk order. The steps to use mediatype for ordered allocation have been
documented in the tech-note:
http://www.symantec.com/business/support/index?page=content&id=HOWTO107386

* 3674615 (Tracking ID: 3674614)

SYMPTOM:
Restarting the vxconfigd(1M) daemon on the slave (joiner) node when the 
node-join operation is in progress may cause the vxconfigd(1M) daemon to hang 
on the master and the joiner node.

DESCRIPTION:
When a node is in the process of joining the cluster and the vxconfigd(1M) 
daemon of this node is restarted for some reason, the vxconfigd(1M) daemon may 
hang on the master and the joiner node. When the vxconfigd(1M) daemon is 
restarted in the middle of the node-join process, it wrongly assumes that this 
is a slave-rejoin case and sets the rejoin flag. Because the rejoin flag is 
wrongly set, the import of the disk groups on the slave node fails, and the 
join process is not terminated smoothly. As a result, the vxconfigd(1M) daemon 
hangs on the master and the slave node.

RESOLUTION:
The code is modified to differentiate between the rejoin scenario and the 
vxconfigd(1M) daemon restart scenario.

* 3677738 (Tracking ID: 3199056)

SYMPTOM:
The Veritas Volume Replicator (VVR) primary system panics in the vol_cmn_err()
function due to a corrupted VVR queue. The following stack trace is observed:
panic_trap()
kernel_add_gate()
vol_cmn_err()
.kernel_add_gate()
skey_kmode()
nmcom_deliver_ack()
nmcom_ack_tcp()
nmcom_server_proc_tcp()
nmcom_server_proc_enter()
vxvm_start_thread_enter()

DESCRIPTION:
If the primary system receives the data acknowledgement prior to the network 
acknowledgement, VVR fabricates the network acknowledgement for the message 
and keeps the acknowledgement in a queue. When the real network 
acknowledgement arrives at the primary system, VVR removes the acknowledgement 
from the queue. Only one thread is supposed to access this queue. However, 
because the locking is not proper, there is a race condition where two threads 
can simultaneously update the queue. This leads to queue corruption. 
Subsequently, the system panic occurs when the corrupted queue is accessed.

RESOLUTION:
The code is modified to take the proper lock before entering the critical 
region.

Patch ID: PHCO_44194, PHKL_44195

* 3519134 (Tracking ID: 3518470)

SYMPTOM:
The 'vxdg adddisk' command fails with the following message:
 "<disk name> previous removed using -k option" 
even though the disk was not removed using the -k option.

DESCRIPTION:
This is a unique kind of setup where disk access (DA) names of one node are 
same 
as the DA names of the other physical devices on other node in Cluster 
Volume 
Manager (CVM). When the disk group that has one of the DA, and the master 
switch to the node that has the same DA name(different physical device) is 
not 
under any disk group, and the user tries to add the disk to the disk group 
the 'adddisk' operation fails.
This occurs because one of the condition gets success, which is only a 
partial 
check to the scenario. That is the DA name matches with the last_da_name. As 
this condition is successful it returns the error message "<disk name> 
previous 
removed using -k option.

RESOLUTION:
The code is modified to add another condition, 'DA name should be null', along 
with the existing check. This resolves the issue at the time of the 'adddisk' 
operation.

* 3612710 (Tracking ID: 2190632)

SYMPTOM:
High memory consumption is observed in the vxconfigd(1M) daemon when a large 
number of events are generated.

DESCRIPTION:
Under certain conditions, a large number of events can be generated for the 
vxconfigd clients. There is no limit on the number of events queued for the 
clients at a time. This can result in high memory consumption by the 
vxconfigd(1M) daemon, as memory gets allocated for every event.

RESOLUTION:
The code is modified to limit the number of events queued for a client at a time.

* 3618287 (Tracking ID: 3526855)

SYMPTOM:
VxVM causes system panic during I/O failures.
The following stack trace is observed:

_vol_dev_strategy()
volsp_strategy()
vol_dev_strategy()
voldiosio_start()
voliod_iohandle()
voliod_loop()
kthread_daemon_startup()

DESCRIPTION:
Panic occurs as the device number gets incorrectly populated while sending the 
I/O to the SCSI layer.

RESOLUTION:
The code is modified to assign/re-assign the valid device number.

* 3623802 (Tracking ID: 3640061)

SYMPTOM:
The system may panic or hang in Veritas Volume Replicator (VVR) while flushing 
the log header. 

The following stack trace is observed:
<IRQ>
[...]
volsync_wait()
vol_rv_flush_sio_ltq()
vol_rv_flush_loghdr_done()
[...]

DESCRIPTION:
The system panic/hang occurs because a VxVM kernel thread fails to check for 
the interrupt context while flushing the VVR log header, an operation which 
needs to be executed in the process context.

RESOLUTION:
The code is modified to switch to the process context while flushing the VVR 
log header.

Patch ID: PHCO_44058, PHKL_44059

* 3249204 (Tracking ID: 3145359)

SYMPTOM:
On the IA architecture systems, the vxres_lvmroot -v -b <disk_name> command 
fails with the following error message:
"ERROR V-5-2-4699 Error creating Physical Volume /dev/vx/rdmp/c#t#d#s2"

DESCRIPTION:
The vxres_lvmroot utility fails on the IA architecture for the HP-UX 11.31 
1003 update, because HP's pvcreate command fails to create LVM volumes on DMP 
DSFs for the 11.31 1003 OS release, and only allows it on the new NMP node.

RESOLUTION:
The code is modified to create the LVM physical volumes on the new NMP node, 
instead of the legacy DMP DSF.

* 3261603 (Tracking ID: 3261601)

SYMPTOM:
The system panics when the dmp_destroy_dmpnode() function attempts to free a 
virtual address that has already been freed. The following stack trace is 
observed:

mt_pause_trigger()
cold_wait_for_lock()
spinlock_usav()
kmem_arena_free()
vxdmp_hp_kmem_free()
dmp_destroy_dmpnode()
dmp_decode_destroy_dmpnode()
dmp_decipher_instructions()
dmp_process_instruction_buffer()
dmp_reconfigure_db()
gendmpioct ()
dmpioctl()

DESCRIPTION:
Due to a race condition, the dmp_destroy_dmpnode() function attempts to free 
a virtual address that has already been freed; the system panics because it 
tries to access the stale memory address.

RESOLUTION:
The code is modified to avoid the occurrence of the race condition.

* 3264167 (Tracking ID: 3254311)

SYMPTOM:
In a campus cluster environment, either a manual detach or a detach caused by 
loss of storage connectivity, followed by a site reattach, results in a system 
panic. The stack trace observed is as follows:
voldco_or_acmbuf_to_pvmbuf()
voldco_recover_detach_map()
volmv_recover_dcovol()
vol_mv_precommit()
vol_commit_iolock_objects()
vol_ktrans_commit()
volconfig_ioctl()
volsioctl_real()
volsioctl()

DESCRIPTION:
When a site is reattached, possibly after a split-brain, it is possible that a 
site-consistent volume is updated on each site independently. For such 
instances, the maps that track this need to be recovered from each site to 
take care of the updates performed by both sites. These maps are stored in 
a Data Change Object (DCO). 
During the recovery of a DCO, it uses a contiguous chunk of memory to read and 
update the DCO map. This chunk of memory is able to handle the DCO recovery as 
long as the volume size is less than 1.05 TB. When the volume size is larger 
than 1.05 TB, the map size grows larger than the statically allocated memory 
buffer. In such a case, it overruns the memory buffer. This causes the system 
to panic.

RESOLUTION:
The code is modified to ensure that the buffer is accessed within the limits. 
Also, if required, another iteration of the DCO recovery is performed.

* 3387294 (Tracking ID: 2152830)

SYMPTOM:
A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
For example:
# vxdg import mydg
VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import
failed:
No valid disk found containing disk group

DESCRIPTION:
If the original devices are offline or unavailable, the vxdg(1M) command picks 
up cloned disks for import. The DG import fails unless the clones are tagged 
and the tag is specified during the DG import. The import failure is expected, 
but the error message is non-descriptive and does not specify the corrective 
action to be taken by the user.

RESOLUTION:
The code is modified to give the correct error message when duplicate clones 
exist during import. Also, details of the duplicate clones are reported in the 
system log.
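
For reference, a hedged sketch of importing a disk group from tagged clone
devices (the tag and disk group names are hypothetical; see the vxdg(1M)
manual page for exact usage):

# vxdg -o useclonedev=on -o tag=snap1 import mydg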

* 3393288 (Tracking ID: 2253210)

SYMPTOM:
When the device tree changes after a LUN is removed or added, the 
"vxdisk scandisks" command hangs. The following stack trace is observed:
    slpq_swtch_core()
    inline real_sleep()
    sleep()
    vxvm_delay()
    vol_disk_change_iopolicy()
    volconfig_ioctl()
    volsioctl_real()
    volsioctl()
    vols_ioctl()
    spec_ioctl()
    vno_ioctl()
    ioctl()
    syscall()

DESCRIPTION:
When an I/O fails in DMP with a specific error code, the pending I/O count on 
the DMP node, which was incremented earlier, is decremented. In this 
particular hang, the error code is not set properly because of the handling of 
EFI devices that do not have an s2 partition. As a result, the pending I/O 
count on the DMP node is not decremented. This leads to the vxconfigd hang, 
which in turn leads to the "vxdisk scandisks" hang.

RESOLUTION:
The code is modified to set the appropriate error code that takes care of 
decrementing the pending I/O count.

* 3393457 (Tracking ID: 3390959)

SYMPTOM:
The vxconfigd(1M) daemon hangs in the kernel while processing the I/O request. 
The following stack trace is observed:
slpq_swtch_core()
sleep_pc()
biowait()
physio()
dmpread()
spec_rdwr()
vno_rw()
read()
syscall()

DESCRIPTION:
The vxconfigd(1M) daemon hangs while processing the I/O request. 
The dmp_close_path failure message is displayed in the syslog before the 
hang. Based on the probable cause analysis, the failure message displayed in 
the syslog during the path closure is related to the hang observed. 
Also, if the I/O fails on a path, the iocount is not decremented properly.

RESOLUTION:
The code is modified to add some debug messages, to confirm the current 
probable cause analysis, when this issue occurs. Also, if the I/O fails on a 
path, the iocount is decremented properly.

* 3407766 (Tracking ID: 3263105)

SYMPTOM:
The disk evacuation operation (vxevac()) fails for a volume with Data Change 
Object (DCO).

DESCRIPTION:
During the vxevac() operation, all sub-disks of a volume that reside on the 
specified source disk are moved to the destination disk. As part of the disk 
evacuation of the volume, its corresponding Data Change Log volume's (DCL's) 
sub-disks are also moved.
The vxevac() operation retries moving the sub-disks of the DCL volume. This 
fails because the sub-disks have already been moved.

RESOLUTION:
The code is modified to exclude the DCL volumes, for which the disk evacuation 
is already completed as a part of the parent-volume-disk evacuation.

* 3412839 (Tracking ID: 2515070)

SYMPTOM:
When I/O fencing is enabled, the Cluster Volume Manager (CVM) slave node 
may fail to join the cluster. The following error message is displayed:

In Veritas Cluster Server(VCS) engine log:
VCS ERROR V-16-20006-1005 (abc) CVMCluster:cvm_clus:monitor:node - state: out 
of cluster reason: SCSI-3 PR operation failed: retry to add a node failed 

In syslog:
V-5-1-15908 Import failed for dg ebap01dg. Local node has data disk fencing 
enabled, but master does not have PGR key set

DESCRIPTION:
Whenever a fix transaction is performed during disk group (DG) import, the 
SCSI-3 Persistent Group Reservation (PGR) key is not uploaded from vxconfigd 
to the kernel DG record. This leads to a NULL PGR key in the kernel DG record. 
If vxconfigd subsequently gets restarted, it reads the DG configuration record 
from the kernel. This leads to the PGR key being NULL in vxconfigd as well. 
This configuration record is sent to a node when it wants to join the cluster. 
The slave node fails to import the shared DG because of the missing PGR key, 
and therefore it fails to join the cluster.

RESOLUTION:
The code is modified to copy the disk group PGR key from vxconfigd to the 
kernel, when a new disk group record is loaded to the kernel.

* 3482535 (Tracking ID: 3482001)

SYMPTOM:
The 'vxddladm addforeign' command renders the system unbootable after the 
reboot in a few cases.

DESCRIPTION:
As part of the execution of the 'vxddladm addforeign' command, VxVM incorrectly 
identifies the specified disk as the root disk. As a result, it replaces all 
the entries pertaining to the 'root disk', with the entries of the specified 
disk, thus rendering the system unbootable.

RESOLUTION:
The code is modified to detect the root disk appropriately when it is 
specified, as a part of the 'vxddladm addforeign' command.

* 3490621 (Tracking ID: 2857341)

SYMPTOM:
The system panics during normal I/O operations with the 'adaptive' I/O 
scheduling policy in DMP. The following stack trace is observed:

gen_pr_select_path()
dmp_select_path()
gendmpstrategy()
generic_make_request()
voldiskiostart()
vol_subdisksio_start()
volkcontext_process ()
volkiostart()
vxvm_process_request_queue()
vxvm_unplug_fn()
vx_dio_physio()
vx_dio_rdwri()
vx_write_direct()
vx_write1()
vx_write_common_slow()
vx_write_common()
vx_vop_writ()
vx_write()
vfs_write()
sys_write()
tracesys()

DESCRIPTION:
With DMP, I/Os are scheduled on the paths based on the configured scheduling 
policy. Due to a bug in the "adaptive" I/O scheduling policy, DMP may choose a 
path beyond the total number of paths available for a LUN. As a result, a NULL 
pointer is accessed, which results in a panic.

RESOLUTION:
The code is modified in DMP's "adaptive" I/O scheduling routine to choose 
only the available paths to a LUN.

* 3500353 (Tracking ID: 3500350)

SYMPTOM:
No option is available to identify whether the disk group (DG) was imported 
using the CLONE flag, once the clone flag on the disk is reset using the 
command:

vxdisk set <disk-name> clone=off

DESCRIPTION:
When all the disks in the DG are clone disks (udid_mismatch or clone_disk flag 
set), the DG is imported with the clone flag, and the clone_disk flag is set 
on the disks. The clone flag can be cleared on a disk using the "vxdisk set" 
command, but the DG remains imported with the clone flag. Once the clone 
flag is reset on the disk, there is no option to determine whether the DG was 
imported with the clone flag.
This results in confusion.

RESOLUTION:
The code is modified so that the 'STATE' field of the vxdg list command 
displays the "clone" attribute if the disk group is imported using the '-o 
useclonedev=on' option.
An example is as follows:
# vxdg list
NAME                                        STATE               ID
<disk group name>                 enabled, cds, clone          <disk group id>

* 3502788 (Tracking ID: 3485907)

SYMPTOM:
Panic occurs in the I/O code path. The following stack trace is observed:
...
volkcontext_process()
volkiostart()
vxiostrategy()
...

or
...
voliod_iohandle()
voliod_loop()
...

DESCRIPTION:
When the snapshot reattach operation is in progress on a volume, the metadata 
of the snapshot gets updated. If any parallel I/O during this operation sees 
an incorrect state of the metadata, I/Os of zero size are created. This leads 
to a system panic.

RESOLUTION:
The code is modified to avoid the generation of I/Os of zero length, on volumes 
which are under the snapshot operations.

* 3507677 (Tracking ID: 3293139)

SYMPTOM:
The verifydco utility fails with the following error message:
VxVM vxprint ERROR V-5-1-924 Record <dco-name> not found

DESCRIPTION:
If the name of the volume is not the prefix for the corresponding Data Change 
Object (DCO) volume, the name of the DCO volume cannot be obtained. This 
results in the failure.

RESOLUTION:
The code is modified to handle the conditions where the name of the volume is 
not the prefix for the corresponding DCO, and get the appropriate DCO volume 
name.

* 3507679 (Tracking ID: 3455460)

SYMPTOM:
The vxfmrshowmap and verify_dco_header utilities fail with the following error 
message:
vxfmrshowmap:
VxVM  ERROR V-5-1-15443 seek to <offset-address> for <device-name> FAILED:Error 
0

verify_dco_header:
Cannot lseek to offset:<offset-address>

DESCRIPTION:
The issue occurs because the large offsets are not handled properly while 
seeking using 'lseek', as a part of the vxfmrshowmap and verify_dco_header 
utilities.

RESOLUTION:
The code is modified to properly handle large offsets in the vxfmrshowmap and 
verify_dco_header utilities.

* 3507683 (Tracking ID: 3281004)

SYMPTOM:
For the DMP minimum queue I/O policy with a large number of CPUs, the 
following issues have been observed since the VxVM 5.1SP1 release: 
1. CPU usage is high. 
2. I/O throughput is degraded if there are many concurrent I/Os.

DESCRIPTION:
The earlier minimum queue I/O policy considered the host controller I/O load 
to select the least loaded path. In the VxVM 5.1SP1 version, an addition was 
made to also consider the I/O load of the underlying paths of the selected 
host-based controllers. However, this resulted in performance issues, as there 
was lock contention between the I/O processing functions and the DMP 
statistics daemon.

RESOLUTION:
The code is modified such that the I/O load of the host controller's paths is 
not considered, in order to avoid the lock contention.

* 3510359 (Tracking ID: 3438271)

SYMPTOM:
The VxVM commands may hang when new LUNs are added and device discovery is 
performed. Subsequent VxVM commands that request information from the 
vxconfigd(1M) daemon may also hang. The vxconfigd stack trace is as follows:

swtch_to_thread () 
slpq_swtch_core () 
sleep_pc () 
biowait_rp () 
biowait () 
dmp_indirect_io () 
gendmpioctl () 
dmpioctl () 
spec_ioctl () 
vno_ioctl () 
ioctl () 
syscall () 
syscallinit ()

DESCRIPTION:
When new LUNs are added, device discovery is performed, where certain 
operations are requested from the vxconfigd(1M) daemon. If the LUN to be added 
is found to be non-SCSI or faulty, the vxconfigd(1M) daemon may hang.

RESOLUTION:
The code is modified to avoid the hang in the vxconfigd(1M) daemon and the 
subsequent VxVM commands.

* 3524141 (Tracking ID: 3524139)

SYMPTOM:
The checkinstall script for PHCO_43824 fails in the case of a combined 
installation of the patch and the dependent package, even when the dependent 
package has a revision greater than or equal to 1311. The following message is 
displayed:

ERROR:   FC-TACHYON-TL.FC-TL-RUN, FC-FCLP.FC-FCLP-RUN,
         FC-FCOC.FC-FCOC-RUN, FC-FCQ.FC-FCQ-RUN filesets with revision
         B.11.31.1311 or Higher is required for VxVM to work correctly.

DESCRIPTION:
The checkinstall script checks for the list of filesets that have a version 
greater than or equal to 1311. An incorrect field filter is used when 
extracting the version information from the dependent filesets. For example, 
applying the cut(1) command
"cut -d . -f 4" 
to the fileset string
"FC-COMMON.FC-SNIA, r=B.11.31.1403"
returns "31" instead of "1403".
This leads to the installation failure.

RESOLUTION:
The code is modified to extract the field "5" by using the cut(1) command, 
instead of extracting the field "4", from the dependent filesets.
An example of the cut(1) command executed is as following:
"cut -d . -f 5"

* 3527676 (Tracking ID: 3527674)

SYMPTOM:
After upgrading to 5.1SP1RP3P1, the vxconfigd(1M) daemon dumps core, because 
the UDID is not found for EMC LUNs. The following stack trace is observed:

ddi_hash_devno()
ddl_find_cdevno()
ddl_find_disk_cdevno()
daname_get()
req_daname_get()
request_loop()
main()

DESCRIPTION:
The 5.1SP1RP3P1 patch release depends on the VRTSaslapm package with version 
5.1.103.001 or above. When this dependency is not met, a number of disks are 
found uninitialized. The reason for the un-initialization is a failure in the 
reconfiguration due to the change in the array type from 'CRL-ALUA' 
to 'ALUA' for the Clariion array, while upgrading to 5.1SP1RP3P1. This causes 
the core dump.

RESOLUTION:
The code is modified to introduce the 'co-requisites VRTSaslapm.VXASLAPM-
KRN, r>=5.1.103.001' on the VxVM patches. This allows patch installation only 
when the current VRTSaslapm is installed or is being installed on the system 
with version '5.1.103.001' or greater. Otherwise, the VxVM patch fails to 
install.

* 3528614 (Tracking ID: 3461383)

SYMPTOM:
The vxrlink(1M) command fails when the 'vxrlink -g <DGNAME> -a att <RLINK>' 
command is executed. On PA machines, the following error message is displayed:

VxVM VVR vxrlink ERROR V-5-1-5276 Cannot open shared 
library /usr/lib/libvrascmd.sl, error: Can't dlopen() a library containing 
Thread Local Storage: /usr/lib/libvrascmd.sl

DESCRIPTION:
To make vxconfigd and other VxVM binaries thread safe, these binaries are now 
linked with HP's thread safe libIOmt library. The vxrlink(1M) command opens a 
shared library, which is linked with the thread safe libIOmt library. There 
is a limitation on HP-UX that a shared library that contains Thread Local 
Storage (TLS) cannot be loaded dynamically. This results in an error.

RESOLUTION:
The code is modified, so that the library that is dynamically loaded through 
the vxrlink(1M) command, is not linked with the libIOmt library, as the 
vxrlink(1M) command and the library do not invoke any routines from 
the libIOmt library.

* 3596439 (Tracking ID: 3596425)

SYMPTOM:
While upgrading from HP-UX September 2010 release to OE containing PHCO_43824,
following error may be seen in /var/adm/sw/swm.log file:
       * Running "checkinstall" for "PHCO_43824, r=1.0".
NOTE:    Command output:
         -
/var/opt/swm/tmp/swmZ3Hj3uh/SDcat2tGfRnd/catalog/PHCO_43824/pfiles/checkinstall[8]:
1009
01: Syntax error
ERROR:   FC-COMMON.FC-SNIA filesets with revision B.11.31.1311 or
         Higher is required for VxVM to work correctly.
ERROR:   The "checkinstall" for "PHCO_43824, r=1.0" failed (exit code "1"). The
         script location was "/var/opt/swm/tmp/swmZ3Hj3uh/SDcat2tGfRnd/catalog/
         PHCO_43824/pfiles/checkinstall".

DESCRIPTION:
The checkinstall script wrongly parsed the revision string for the FC-COMMON 
product, which resulted in this syntax error.

RESOLUTION:
The checkinstall script has been modified to correctly parse the revision string.

Patch ID: PHCO_43824, PHKL_43779

* 2982085 (Tracking ID: 2976130)

SYMPTOM:
The device-discovery commands such as vxdisk scandisks and vxdctl enable 
may cause the entire DMP database to be deleted. This causes the VxVM I/O 
errors and file systems to get disabled. For instances where VxVM manages the 
root disk(s), a system hang occurs. In a Serviceguard/SGeRAC environment 
integrated with CVM and/or CFS, VxVM I/O failures would typically lead to a 
Serviceguard INIT and/or a CRS TOC (if the voting disks sit on VxVM volumes). 
Syslog shows the removal of arrays from the DMP database as following: 
vmunix: NOTICE: VxVM vxdmp V-5-0-0 removed disk array 000292601518, datype = EMC
This is in addition to messages that indicate VxVM I/O errors and that file 
systems are disabled.

DESCRIPTION:
VxVM's vxconfigd(1M) daemon uses HP's libIO(3X) APIs such as the io_search() 
and io_search_array() functions to claim devices that are attached to the 
host. Although vxconfigd(1M) is multithreaded, it uses a non-thread-safe 
version of the libIO(3X) APIs. A race condition may occur when multiple 
vxconfigd threads perform device discovery. This results in a NULL value 
returned to the libIO(3X) API call. VxVM interprets the NULL return value as 
an indication that none of the devices are attached and proceeds to delete all 
the devices previously claimed from the DMP database.

RESOLUTION:
The vxconfigd(1M) daemon, as well as the event source daemon vxesd(1M), is now 
linked with HP's thread-safe libIO(3X) library. This prevents the race 
condition among multiple vxconfigd threads that perform device discovery. 
Please refer to HP's customer bulletin c03585923 for a list of other software 
components required for a complete fix.

* 3326516 (Tracking ID: 2665207)

SYMPTOM:
When a user tries to update the udid on a disk that is part of an imported 
disk group, no message is displayed to convey that the operation is not 
allowed.

DESCRIPTION:
The code exits without displaying the error message, and the user is unaware of 
the restriction imposed on an imported disk group.

RESOLUTION:
The code has been modified to display a message that conveys to the user that 
the operation is not allowed. The following error message is displayed:

"VxVM vxdisk ERROR V-5-1-17080 The UDID for device emc0_0f36
cannot be updated because the disk is part of an imported diskgroup
VxVM vxdisk INFO V-5-1-17079 If you require the UDID to be updated, please
deport the diskgroup. The DDL and on-disk UDID content can be inspected
by issuing the 'vxdisk -o udid list' command"
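
For example, following the guidance in the message (the disk group name mydg 
is illustrative):
# vxdg deport mydg
# vxdisk updateudid emc0_0f36
# vxdisk -o udid list
# vxdg import mydg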

* 3363231 (Tracking ID: 3325371)

SYMPTOM:
Panic occurs in the vol_multistepsio_read_source() function when VxVM's 
FastResync feature is used. The stack trace observed is as following:
vol_multistepsio_read_source()
vol_multistepsio_start()
volkcontext_process()
vol_rv_write2_start()
voliod_iohandle()
voliod_loop()
kernel_thread()

DESCRIPTION:
When a volume is resized, Data Change Object (DCO) also needs to be resized. 
However, the old accumulator contents are not copied into the new accumulator. 
Thereby, the respective regions are marked as invalid. Subsequent I/O on these 
regions triggers the panic.

RESOLUTION:
The code is modified to appropriately copy the accumulator contents during the 
resize operation.
Along with this, any previously corrupted DCO objects need to be recreated to
avoid any future panics. Please follow the steps from the following tech-note
to recreate the DCO: 
http://www.symantec.com/business/support/index?page=content&id=TECH215939
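
In outline, recreating the DCO amounts to removing and re-adding the 
FastResync preparation on the volume; the tech-note above is the authoritative 
procedure, and the disk group and volume names below are illustrative:
# vxsnap -g mydg unprepare vol01
# vxsnap -g mydg prepare vol01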

Patch ID: PHCO_43526, PHKL_43527

* 2233987 (Tracking ID: 2233225)

SYMPTOM:
Growing a volume beyond a limit (1 GB by default) does not synchronize the 
plexes for the newly allocated regions of the volume.

DESCRIPTION:
There was a coding issue that skipped setting the hint to re-synchronize the 
plexes for certain scenarios. Any further reads on the newly allocated regions 
may return different results, depending upon the plex that is considered for 
the read operation.

RESOLUTION:
The code is modified to set the re-synchronize hint correctly for any missed 
scenarios.

* 2245645 (Tracking ID: 2255018)

SYMPTOM:
The vxplex(1M) command dumps core during the relayout operation from concat 
to RAID 5.
The following stack trace is observed:

_int_malloc () from /lib/libc.so.6
malloc () from /lib/libc.so.6
vxvmutil_xmalloc ()
xmalloc ()
raid5_clear_logs ()
do_att ()
main ()

DESCRIPTION:
During the relayout operation, when the vxplex(1M) utility builds the 
configuration, it retries the transaction. The transaction gets restarted. 
However, the cleanup is not done correctly before the restart. Thus, the 
vxplex(1M) command dumps core.

RESOLUTION:
The code is modified so that the proper cleanup is done before the transaction 
is restarted.

* 2366130 (Tracking ID: 2270593)

SYMPTOM:
In a CVM environment, during the node join operation, if the vxconfigd(1M) 
daemon on the master node is restarted, the shared disk groups may get 
disabled. The following error is displayed in syslog:
"Error in cluster processing"

DESCRIPTION:
When the vxconfigd(1M) daemon is restarted on the CVM master, all the shared 
disk groups are re-imported. During this process, if a cluster reconfiguration 
happens and not all the slave nodes have re-established the connection to the 
master's vxconfigd, the re-import of the shared disk groups may fail with an 
error, leaving the disk groups in the disabled state.

RESOLUTION:
The code is modified so that during the vxconfigd(1M) daemon restart operation, 
if the re-import of the disk groups encounters the "Error in cluster 
processing" error, then the re-import operation of the disk group is deferred 
until all the slave nodes re-establish the connection to the master node.

* 2437840 (Tracking ID: 2283588)

SYMPTOM:
The vxdisksetup (1M) command used for the initialization of the mirror on the 
root disk fails with the following error message on the IA machine:
VxVM vxislvm ERROR V-5-1-2604 cannot open /dev/rdisk/disk*_p2

DESCRIPTION:
The reported error occurs due to the unavailability of the OS disk-device node 
corresponding to the disk slice. The situation can occur when the Logical 
Volume Manager (LVM) rooted disk is copied as the VxVM rooted disk, using the 
vxcp_lvmroot(1M) command. Later, a root mirror is created using the vxrootmir
(1M) command when the system is booted from the created VxVM rootdisk. The 
vxrootmir(1M) command is executed on the VxVM rooted disk. After creating 
slices for the target disk, the device-node files are created on the VxVM 
rooted disk. However, the OS on the LVM rooted disk is unaware of the newly 
created slices of the disk initialized during vxrootmir on the VxVM rooted 
disk. Thus, the OS on the LVM rooted disk does not contain or create the device 
nodes for the new slices of the vxrootmir'ed disk. When booted from the LVM 
rooted disk, the vxdisksetup(1M) command uses the idisk(1M) command to detect 
if the disk is a sliced disk, and tries to access the corresponding slices 
through the OS device nodes, assuming that the device nodes are present. 
Thereby, an error is encountered.

RESOLUTION:
The code is modified such that the vxdisksetup(1M) command uses the DMP devices 
to access the disk slices. Since DMP creates the device nodes by reading the 
disk format after the boot, the DMP disk nodes are always created for the disk 
slices.

* 2485252 (Tracking ID: 2910043)

SYMPTOM:
When VxVM operations like the plex attach, snapshot resync or reattach  are 
performed, frequent swap-in and swap-out activities are observed due to the 
excessive memory allocation by the vxiod daemons.

DESCRIPTION:
When the VxVM operations such as the plex attach, snapshot resync, or reattach 
operations are performed, the default I/O size of the operation is 1 MB and 
VxVM allocates this memory from the OS. Such huge memory allocations can result 
in swap-in and swap-out of pages and are not very efficient. When many 
operations are performed, the system may not work very efficiently.

RESOLUTION:
The code is modified to make use of VxVM's internal I/O memory pool, instead of 
directly allocating the memory from the OS.

* 2567623 (Tracking ID: 2567618)

SYMPTOM:
The VRTSexplorer utility dumps core with a segmentation fault in 
checkhbaapi/print_target_map_entry. The stack trace is observed as follows:
 print_target_map_entry()
check_hbaapi()
main()
_start()

DESCRIPTION:
The checkhbaapi utility uses the HBA_GetFcpTargetMapping() API which returns 
the current set of mappings between the OS and the Fiber Channel Protocol (FCP) 
devices for a given Host Bus Adapter (HBA) port. The maximum limit for mappings 
is set to 512 and only that much memory is allocated. When the number of 
mappings returned is greater than 512, the function that prints this 
information tries to access the entries beyond that limit, which results in 
core dumps.

RESOLUTION:
The code is modified to allocate enough memory for all the mappings returned by 
the HBA_GetFcpTargetMapping() API.

* 2570739 (Tracking ID: 2497074)

SYMPTOM:
If the resize operation is performed on Cross Platform Data Sharing - 
Extensible Firmware Interface (CDS-EFI) disks of size greater than 1 TB, and 
the "vxvol stop all" command is executed, the following error message is 
displayed:
"VxVM vxvol ERROR V-5-1-10128  Configuration daemon error 441"

DESCRIPTION:
The issue is observed only on the CDS-EFI disks. The correct EFI flags are not 
set when the resize operation is in the commit phase. As a result, the disk 
offset, which is 24 K for the CDS-EFI disks, is not taken into account for the 
private region I/Os. All the write I/Os are shifted by 24 K. This results 
in the private region corruption.

RESOLUTION:
The code is modified to set the required flags for the CDS-EFI disks.

* 2703384 (Tracking ID: 2692012)

SYMPTOM:
When moving the subdisks by using the vxassist(1M) command or the vxevac(1M) 
command, if the disk tags are not the same for the source and the destination, 
the command fails with a generic error message. The error message does not 
provide the reason for the failure of the operation. The following error 
message is displayed:
VxVM vxassist ERROR V-5-1-438 Cannot allocate space to replace subdisks

DESCRIPTION:
When moving the subdisks using the vxassist move command, if no target disk 
is specified, it uses the available disks from the disk group for the move. If 
the disks have the site tag set and the value of the site-tag attribute is not 
the same, the subsequent move operation using the vxassist(1M) command is 
expected to fail. However, it fails with a generic message that does not 
provide the reason for the failure of the operation.

RESOLUTION:
The code is modified so that a new error message is introduced to specify that 
the disk failure is due to the mismatch in the site-tag attribute. 
The enhanced error message is as follows:
VxVM vxassist ERROR V-5-1-0 Source and/or target disk belongs to site, can not 
move over sites
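
For reference, the site tags on the candidate disks can be inspected before 
the move; the disk group name mydg is illustrative:
# vxdisk -g mydg listtag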

* 2832887 (Tracking ID: 2488323)

SYMPTOM:
The application gets stuck when it waits for I/O on a volume which has links 
and snapshots configured.

DESCRIPTION:
When the configured volume has links and snapshots, the write I/Os which are 
larger than the region size, need to be split. With linked volumes, it is 
possible to lose track of logging done for individual split I/Os. So, the I/O 
can wait indefinitely for logging to complete for the individual split I/Os, 
causing the application to hang.

RESOLUTION:
The code is modified to split the I/Os in such a way that each individual split 
I/O tracks its own logging status.

* 2836974 (Tracking ID: 2743926)

SYMPTOM:
During the system boot, the DMP restore daemon fails to restart. The following 
error message is displayed:
"VxVM vxdmpadm ERROR V-5-1-2111  Invalid argument".

DESCRIPTION:
During the system boot, vxvm-sysboot performs the 'vxdmpadm stop restore' 
operation and then the 'vxdmpadm start restore' operation. Without 
the /etc/vx/dmppolicy.info file, this restart fails. As the file system is 
read-only during the system boot, creation of the /etc/vx/dmppolicy.info file 
fails with invalid return values. These invalid return values are used as 
arguments for starting the restore daemon. This results in the invalid 
argument error, and the restore daemon fails to start.

RESOLUTION:
The code is modified to return the appropriate value so that the restore daemon 
is started with the default values.

* 2847333 (Tracking ID: 2834046)

SYMPTOM:
Beginning from the 5.1 release, VxVM automatically re-minors the disk group and 
its objects in certain cases, and displays the following error message:
vxvm:vxconfigd: V-5-1-14563 Disk group mydg: base minor was in private pool, 
will be change to shared pool.
NFS clients that attempt to reconnect to a file system on the disk group's 
volume fail because the file handle becomes stale. The NFS client needs to 
re-mount the file system, and probably requires a reboot to clear this. The 
issue is observed under the following situations:
1. When a private disk group is imported as shared, or a shared disk group is 
imported as private.
2. After upgrading from a version of VxVM prior to 5.1.

DESCRIPTION:
Since the HxRT 5.1 SP1 release, the minor-number space is divided into two 
pools, one for the private disk groups and the other for the shared disk 
groups. During the disk group import operation, the disk group base-minor 
numbers are adjusted automatically, if not in the correct pool. In a similar 
manner, the volumes in the disk groups are also adjusted. This behavior 
reduces minor-number conflicts during the disk group import operation. 
However, in the NFS environment, it makes all the file handles on the client 
side stale. Subsequently, customers had to unmount file systems and restart 
the applications.

RESOLUTION:
The code is modified to add a new tunable "autoreminor". The default value of 
the autoreminor tunable is "on". Use 'vxdefault set autoreminor off' to turn 
it off for NFS server environments. If the NFS server is in a CVM (Cluster 
Volume Manager) cluster, make the same change on all the nodes.
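
For example, on an NFS server (the 'vxdefault list' form for verifying the 
current value is assumed here):
# vxdefault list
# vxdefault set autoreminor off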

* 2851354 (Tracking ID: 2837717)

SYMPTOM:
The vxdisk(1M) resize command fails if the disk access (da) name is specified. 
An example is as following:
 # vxdisk list eva4k6k1_46 | grep ^pub
pubpaths:  block=/dev/vx/dmp/eva4k6k1_46 char=/dev/vx/rdmp/eva4k6k1_46
public:    slice=0 offset=32896 len=680736 disk_offset=0

# vxdisk resize eva4k6k1_46 length=813632

# vxdisk list eva4k6k1_46 | grep ^pub
pubpaths:  block=/dev/vx/dmp/eva4k6k1_46 char=/dev/vx/rdmp/eva4k6k1_46
public:    slice=0 offset=32896 len=680736 disk_offset=0
 
After the resize operation, len=680736 remains unchanged.

DESCRIPTION:
The scenario where a da name is specified is not handled in the resize code 
path. As a result, the vxdisk(1M) resize command fails if the da name is 
specified.

RESOLUTION:
The code is modified such that if the dm name is not specified for the resize 
operation, the da name specific operation is performed.

* 2860208 (Tracking ID: 2859470)

SYMPTOM:
The EMC SRDF-R2 disk may go into the error state when the Extensible Firmware 
Interface (EFI) label is created on the R1 disk. For example:
R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1

R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2

DESCRIPTION:
Since R2 disks are in write-protected mode, the default open() call made for 
the read-write mode fails for the R2 disks, and the disk is marked as invalid.

RESOLUTION:
The code is modified to change Dynamic Multi-Pathing (DMP) to be able to read 
the EFI label even on a write-protected SRDF-R2 disk.

* 2881862 (Tracking ID: 2878876)

SYMPTOM:
The vxconfigd(1M) daemon dumps core with the following stack trace:
vol_cbr_dolog ()
vol_cbr_translog ()
vold_preprocess_request () 
request_loop ()
main     ()

DESCRIPTION:
This core dump is a result of a race between two threads which process 
requests from the same client. While one thread completes processing a 
request and is in the phase of releasing the memory used, the other thread 
processes a DISCONNECT request from the same client. Due to the race 
condition, the second thread attempts to access the released memory and dumps 
core.

RESOLUTION:
The code is modified to protect the common client data with a mutex.

* 2883606 (Tracking ID: 2189812)

SYMPTOM:
When the 'vxdisk updateudid' command is executed on a disk which is in 
the 'online invalid' state, the vxconfigd(1M) daemon dumps core with the 
following stack trace:
priv_join()
req_disk_updateudid()
request_loop()
main()

DESCRIPTION:
When the UDID is updated, a null check is not performed on an internal data 
structure. As a result, the vxconfigd(1M) daemon dumps core.

RESOLUTION:
The code is modified to add null checks for the internal data structure.

* 2906832 (Tracking ID: 2398954)

SYMPTOM:
The system panics while doing I/O on a Veritas File System (VxFS) mounted 
instant snapshot with the Oracle Disk Manager (ODM) SmartSync enabled. The 
following stack trace is observed:
panic: post_hndlr(): Unresolved kernel interruption
cold_vm_hndlr
bubbledown
as_ubcopy
privlbcopy
volkio_to_kio_copy
vol_multistepsio_overlay_data
vol_multistepsio_start
voliod_iohandle
voliod_loop
kthread_daemon_startup

DESCRIPTION:
Veritas Volume Manager (VxVM) uses the av_back and av_forw fields of the io 
buf structure to store its private information. VxFS also uses these fields to 
chain I/O buffers before passing I/O to VxVM. When an I/O is received at the 
VxVM layer, it always resets these fields. But if the ODM SmartSync is 
enabled, VxFS uses a special strategy routine to pass on hints to VxVM. Due to 
a bug in the special strategy routine, the av_back and av_forw fields are not 
reset and point to a valid buffer in the VxFS I/O buffer chain. VxVM 
interprets these fields (av_back, av_forw) wrongly and modifies their 
contents, which in turn corrupts the next buffer in the chain, leading to the 
panic.

RESOLUTION:
The av_back and av_forw fields of the io buf structure are now reset in the 
special strategy routine.

* 2916915 (Tracking ID: 2916911)

SYMPTOM:
The vxconfigd(1M) daemon triggers a Data TLB Fault panic with the following 
stack trace:
 _vol_dev_strategy
volsp_strategy
vol_dev_strategy
voldiosio_start
volkcontext_process
volsiowait
voldio
vol_voldio_read
volconfig_ioctl
volsioctl_real
volsioctl
vols_ioctl
spec_ioctl
vno_ioctl
ioctl
syscall

DESCRIPTION:
The kernel_force_open_disk() function checks if the disk device is open. The 
device is opened only if it was not opened earlier. When the device is opened, 
it calls the kernel_disk_load() function, which in turn issues the 
VOL_NEW_DISK ioctl. If the VOL_NEW_DISK ioctl fails, the error is not handled 
correctly as the return values are not checked. This may result in a scenario 
where the open operation fails but the disk read or write operation proceeds.

RESOLUTION:
The code is modified to handle the VOL_NEW_DISK ioctl. If the ioctl fails 
during the open operation of the device that does not exist, then the read or 
write operations are not allowed on the disk.

* 2919718 (Tracking ID: 2919714)

SYMPTOM:
On a thin Logical Unit Number (LUN), the vxevac(1M) command returns 0 without 
migrating the unmounted-VxFS volumes.  The following error messages are 
displayed when the unmounted-VxFS volumes are processed:
VxVM vxsd ERROR V-5-1-14671 Volume v2 is configured on THIN luns and not 
mounted.
Use 'force' option, to bypass smartmove. To take advantage of smartmove for
supporting thin luns, retry this operation after mounting the volume.
 VxVM vxsd ERROR V-5-1-407 Attempting to clean up after failure ...

DESCRIPTION:
On a thin LUN, VxVM does not move or copy data on the unmounted-VxFS volumes 
unless the smartmove is bypassed.  The vxevac(1M) command error messages need 
to be enhanced to detect the unmounted-VxFS volumes on thin LUNs, and to 
support a force option that allows the user to bypass the smartmove.

RESOLUTION:
The vxevac script is modified to check for the unmounted-VxFS volumes on thin 
LUNs, before the migration is performed. If the unmounted-VxFS volume is 
detected, the vxevac(1M) command fails with a non-zero return code. 
Subsequently, a message is displayed that notifies the user to mount the 
volumes or bypass the smartmove by specifying the force option. The 
rectified error message is displayed as following:
VxVM vxevac ERROR V-5-2-0 The following VxFS volume(s) are configured on THIN 
luns and not mounted:
  v2
To take advantage of smartmove support on thin luns, retry this operation after 
mounting the volume(s).  Otherwise, bypass smartmove by specifying the '-f' 
force option.
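
For example, a disk can then be evacuated either after mounting the volume, or 
by knowingly bypassing smartmove; the disk group and disk names are 
illustrative:
# vxevac -g mydg -f disk01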

* 2929003 (Tracking ID: 2928987)

SYMPTOM:
A vxconfigd hang is observed when an I/O fails at the OS layer.

DESCRIPTION:
DMP is supposed to retry an I/O the number of times defined by the user. When 
it receives an I/O failure from the OS layer, due to a bug it restarts the I/O 
without checking the I/O retry count; thus the I/O gets stuck in an infinite 
loop.

RESOLUTION:
The code is modified in DMP to use the I/O retry count defined by the user.

* 2934484 (Tracking ID: 2899173)

SYMPTOM:
In a Clustered Volume Replicator (CVR) environment, a Storage Replicator Log 
(SRL) failure may cause the vxconfigd(1M) daemon to hang. This eventually 
causes the 'vradmin stoprep' command to hang.

DESCRIPTION:
The 'vradmin stoprep' command hangs because of the vxconfigd(1M) daemon that 
waits indefinitely during a transaction. The transaction waits for I/O 
completion on SRL. An error handler is generated to handle the I/O failure on 
the SRL. But if there is an ongoing transaction, the error is not handled 
properly. This causes the transaction to hang.

RESOLUTION:
The code is modified so that when an SRL failure is encountered, the 
transaction itself handles the I/O error on SRL.

* 2940448 (Tracking ID: 2940446)

SYMPTOM:
The I/O may hang on a volume with a space-optimized snapshot if the underlying 
cache object is of a very large size (for example, 30 TB). It can also lead to 
data corruption in the cache object. The following stack is observed:
pvthread()
et_wait()
uphyswait()
uphysio()
vxvm_physio()
volrdwr()
volwrite()
vxio_write()
rdevwrite()
cdev_rdwr()
spec_erdwr() 
spec_rdwr() 
vnop_rdwr() 
vno_rw() 
rwuio() 
rdwr() 
kpwrite() 
ovlya_addr_sc_flih_main()

DESCRIPTION:
The cache volume maintains a B+ tree for mapping the offset and its actual 
location in the cache object. Copy-on-write I/O generated on snapshot volumes 
needs to determine the offset of the particular I/O in the cache object. Due 
to incorrect type-casting, the value calculated for a large offset overflows 
and truncates to a smaller value, leading to data corruption.

RESOLUTION:
The code is modified to avoid overflow during the offset calculation in the 
cache object. It is advised to create multiple cache objects of around 10 TB, 
rather than creating a single cache object of a very large size.

* 2946948 (Tracking ID: 2406096)

SYMPTOM:
The vxconfigd(1M) daemon dumps core with the following stack trace:
vol_cbr_oplistfree()
vol_clntaddop()
vol_cbr_translog()
vold_preprocess_request()
request_loop()
main()

DESCRIPTION:
The vxsnap utility forks a child process and the parent process exits. The 
child process continues the remaining work as a background process. It does 
not create a new connection with the vxconfigd(1M) daemon and continues to use 
the parent's connection. Since the parent is dead, the vxconfigd(1M) daemon 
cleans up the client structure. For further requests from the child process, 
the vxconfigd(1M) daemon tries to access the client structure that is already 
freed. As a result, the vxconfigd daemon dumps core.

RESOLUTION:
The code is modified by initiating a separate connection with the vxconfigd(1M) 
daemon from the forked child.

* 2950826 (Tracking ID: 2915063)

SYMPTOM:
During the detachment of a plex of a volume in the Cluster Volume Manager (CVM) 
environment, the master node panics with the following stack trace:
vol_klog_findent()
vol_klog_detach()
vol_mvcvm_cdetsio_callback()
vol_klog_start()
voliod_iohandle()
voliod_loop()

DESCRIPTION:
During the plex-detach operation, VxVM searches in the kernel for the plex 
object to be detached. If any transaction is in progress on any disk group in 
the system, an incorrect plex object may be selected. This results in the 
dereferencing of invalid addresses and causes the system to panic.

RESOLUTION:
The code is modified to make sure that the correct plex object is selected.

* 2950829 (Tracking ID: 2921816)

SYMPTOM:
In a VVR environment, if there is a Storage Replicator Log (SRL) overflow, the 
Data Change Map (DCM) logging mode is enabled. For such instances, if there is 
an I/O failure on the DCM volume, the system panics with the following stack 
trace:

vol_dcm_set_region()
vol_rvdcm_log_update()
vol_rv_mdship_srv_done()
volsync_wait()
voliod_loop()
...

DESCRIPTION:
There is a race condition where the DCM information is accessed at the same 
time as the DCM I/O failure is handled. This results in the panic.

RESOLUTION:
The code is modified to handle the race condition.

* 2957608 (Tracking ID: 2671241)

SYMPTOM:
When the Dirty Region Logging (DRL) log plex is configured in a volume, the 
vxnotify(1M) command does not report the volume enabled message.

DESCRIPTION:
When the DRL log plex is configured in a volume, a two-phase start of the 
volume is performed. First, the plexes are started and the volume state is 
marked as DETACHED. In the second phase, after the log recovery, the volume 
state is marked as ENABLED. However, during the notification of the 
configuration change to the interested client, only the status change from 
DISABLED to ENABLED is checked.

RESOLUTION:
The code is modified to generate a notification on the state change of a 
volume from any state to ENABLED and from any state to DISABLED.

* 2960650 (Tracking ID: 2932214)

SYMPTOM:
After the "vxdisk resize" operation is performed from less than 1 TB to greater 
than or equal to 1 TB on a disk with SIMPLE or SLICED format, that has the Sun 
Microsystems Incorporation (SMI) label, the disk enters the "online invalid" 
state.

DESCRIPTION:
When a SIMPLE or SLICED disk that has the Sun Microsystems Incorporation 
(SMI) label is resized from less than 1 TB to greater than or equal to 1 TB by 
the "vxdisk resize" operation, the disk enters the "online invalid" state.

RESOLUTION:
The code is modified to prevent the resize of the SIMPLE or SLICED disks with 
the SMI label from less than 1 TB to greater than or equal to 1 TB.

* 2973632 (Tracking ID: 2973522)

SYMPTOM:
At cable connect on one of the ports of a dual-port Fibre Channel Host Bus 
Adapter (FC HBA), paths that go through the other port are marked as SUSPECT. 
DMP does not issue I/O on such paths until the next restore daemon cycle 
confirms that the paths are functioning.

DESCRIPTION:
When a cable is connected at one of the ports of a dual-port FC HBA, a 
Registered State Change Notification (RSCN) event occurs on the other port. 
When the RSCN event occurs, DMP marks the paths that go through that port as 
SUSPECT.

RESOLUTION:
The code is modified so that on an RSCN event, the paths that go through the 
other port are not marked as SUSPECT.

* 2979692 (Tracking ID: 2575051)

SYMPTOM:
In a CVM environment, master switch or master node takeover operations result 
in a panic with the following stack trace:		
volobject_iogen 
vol_cvol_volobject_iogen 
vol_cvol_recover3_start
voliod_iohandle 
voliod_loop 
kernel_thread

DESCRIPTION:
The panic occurs while accessing the fields of a stale cache object which is 
part of a shared disk group. The cache-object recovery process gets initiated 
during a master node takeover or master switch operation. If some operation is 
performed on the cache object, like creating a space-optimized snapshot over 
it, while the recovery is in progress, the cache object gets changed. A panic 
can occur while accessing the changed fields in the cache object.

RESOLUTION:
The code is modified to block the operations on a cache object while a cache-
object recovery is in progress.

* 2983901 (Tracking ID: 2907746)

SYMPTOM:
At device discovery, the vxconfigd(1M) daemon allocates file descriptors for 
open instances of /dev/config, but does not always close them after use. This 
results in a file descriptor leak over time.

DESCRIPTION:
Before any API of the libIO library is called, the io_init() function needs 
to be called. This function opens the /dev/config device file. Each io_init() 
function call should be paired with the io_end() function call. This function 
closes the /dev/config device file. However, the io_end() function call is 
missing at some places in the device discovery code path. As a result, the 
file descriptor leaks are observed with the device-discovery commands of VxVM.

RESOLUTION:
The code is modified to pair each io_init() function call with the io_end() 
function call in every possible code path.

* 2986939 (Tracking ID: 2530536)

SYMPTOM:
Disabling any path of the ASM disk causes multiple DMP reconfigurations.
Multiple occurrences of the following messages are seen in the dmpevent log:

Reconfiguration is in progress
Reconfiguration has finished

DESCRIPTION:
After enabling/disabling any path/controller of a DMP node, the VxVM daemon 
processes the events generated for that DMP node. If the DMP node is already 
online, the VxVM daemon avoids the re-online. Otherwise, it tries to online 
the DMP node four times with some delay. If all attempts to online the DMP 
node fail, the VxVM daemon fires 'vxdisk scandisks' on all the paths of that 
DMP node. In case of the ASM disk, all attempts to online the DMP node fail, 
so the VxVM daemon fires 'vxdisk scandisks' on all the paths of the DMP node. 
This in turn results in the multiple DMP reconfigurations.

RESOLUTION:
The code is modified to skip the event processing of the ASM disks.

* 2990730 (Tracking ID: 2970368)

SYMPTOM:
The SRDF-R2 WD (write-disabled) devices are shown in an error state, and many 
path enable and disable messages are generated in 
the /etc/vx/dmpevents.log file.

DESCRIPTION:
DMP driver disables the paths of the write-protected devices. Therefore, these 
devices are shown in an error state. The vxattachd(1M) daemon tries to online 
these devices and executes the partial-device discovery for these devices. As 
part of the partial-device discovery, the paths of such write-protected 
devices are enabled and disabled, which generates many path enable and disable 
messages in the /etc/vx/dmpevents.log file.

RESOLUTION:
The code is modified so that the paths of the write-protected devices in DMP 
are not disabled.

* 3000033 (Tracking ID: 2631369)

SYMPTOM:
In a CVM environment, when the vxconfigd(1M) daemon is started in the single-
threaded mode by specifying the -x nothreads option, a cluster reconfiguration 
such as a node join, as well as VxVM operations on the shared disk group, 
takes approximately five minutes longer to complete than in the multithreaded 
environment.

DESCRIPTION:
Each node in the cluster exchanges VxVM objects during cluster reconfiguration 
and for certain VxVM operations. In the signal-handler function of the single-
threaded vxconfigd(1M) daemon, each node waits for the maximum poll-timeout 
value set within the code.

RESOLUTION:
The code is modified such that the signal-handler routines do not wait for the 
maximum timeout value; instead, they return immediately after the objects are 
received.

* 3012938 (Tracking ID: 3012929)

SYMPTOM:
If a disk name is changed while a backup operation is in progress, the 
vxconfigbackup(1M) command gives the following error:
VxVM vxdisk ERROR V-5-1-558 Disk <old diskname>: Disk not in the configuration 
VxVM vxconfigbackup WARNING V-5-2-3718 Unable to backup Binary diskgroup 
configuration for diskgroup <dgname>.

DESCRIPTION:
If disk names change during the backup, the vxconfigbackup(1M) command does 
not detect and refresh the changed names, and it tries to find the 
configuration database information from the old disk name. Consequently, the 
vxconfigbackup(1M) command displays an error message indicating that the old 
disk is not found in the configuration, and it fails to take a backup of the 
disk group configuration from the disk.

RESOLUTION:
The code is modified to ensure that the new disk names are updated and are used 
to find and backup the configuration copy from the disk.

* 3041018 (Tracking ID: 3041014)

SYMPTOM:
Sometimes a relayout command may fail with the following error messages, which 
do not provide much information:
1. VxVM vxassist ERROR V-5-1-15309 Cannot allocate 4294838912 blocks of disk 
space required by the relayout operation for column expansion: Not enough HDD 
devices that meet specification. VxVM vxassist ERROR V-5-1-4037 Relayout 
operation aborted. (7)
2. VxVM vxassist ERROR V-5-1-15312 Cannot allocate 644225664 blocks of disk 
space required for the relayout operation for temp space: Not enough HDD 
devices that meet specification. VxVM vxassist ERROR V-5-1-4037 Relayout 
operation aborted. (7)

DESCRIPTION:
In some executions of the vxrelayout(1M) command, the error messages do not 
provide sufficient information. For example, when enough space is not 
available, the vxrelayout(1M) command displays an error that reports less disk 
space than is actually required. Hence, the relayout operation can still fail 
even after the disk space is increased.

RESOLUTION:
The code is modified to display the correct space required for the relayout 
operation to complete successfully.

* 3043203 (Tracking ID: 3038684)

SYMPTOM:
The restore daemon attempts to re-enable disabled paths of the Business 
Continuance Volume - Not Ready (BCV-NR) devices, logging many DMP messages as 
follows:  
VxVM vxdmp V-5-0-148 enabled path 255/0x140 belonging to the dmpnode 3/0x80
VxVM vxdmp V-5-0-112 disabled path 255/0x140 belonging to the dmpnode 3/0x80

DESCRIPTION:
The restore daemon tries to re-enable a disabled path of a BCV-NR device 
because the probe passes. But the open() operation fails on such devices, as 
no I/O operations are permitted, and the path is disabled again. There is a 
check to prevent enabling the path of the device if the open() operation 
fails. Because of a bug in the open check, it incorrectly tries to re-enable 
the path of the BCV-NR device.

RESOLUTION:
The code is modified to do an open check on the BCV-NR block device.

* 3047474 (Tracking ID: 3047470)

SYMPTOM:
The disk group cannot be deported, as the device /dev/vx/esd is not recreated 
on reboot with the latest major number. Consider the problem scenario as 
follows: the old device /dev/vx/esd has a major number which is now 
re-assigned to the vol driver. As the device /dev/vx/esd has the same major 
number as that of the vol driver, it may disallow a disk group deport with the 
following message, until the vxesd(1M) daemon is stopped:
VxVM vxdg ERROR V-5-1-584 Disk group XXX: Some volumes in the disk group are 
in use

DESCRIPTION:
If the device /dev/vx/esd is present in the system with an old major number, 
then the mknod(1M) command in the startup script fails to recreate the device 
with the new major number. This leads to change in the functionality.

RESOLUTION:
The code is modified to delete the device /dev/vx/esd before the mknod(1M) 
command in the startup script runs, so that it gets recreated with the latest 
major number.

* 3047803 (Tracking ID: 2969844)

SYMPTOM:
The DMP database gets destroyed if the discovery fails for some 
reason. ddl.log shows numerous entries as follows:
DESTROY_DMPNODE:
0x3000010 dmpnode is to be destroyed/freed
DESTROY_DMPNODE:
0x3000d30 dmpnode is to be destroyed/freed

Numerous vxio errors are seen in the syslog as all VxVM I/Os fail afterwards.

DESCRIPTION:
VxVM deletes the old device database before it makes the new device database. 
If the discovery process fails for some reason, this results in a null DMP 
database.

RESOLUTION:
The code is modified to take a backup of the old device database before doing 
the new discovery. Therefore, if the discovery fails, the old database is 
restored and an appropriate message is displayed on the console.

* 3059145 (Tracking ID: 2979824)

SYMPTOM:
While excluding a controller using the vxdiskadm(1M) utility, unintended 
paths get excluded.

DESCRIPTION:
The issue occurs due to a logical error in the grep command used to retrieve 
the hardware path of the controller to be excluded. In some cases, the 
vxdiskadm(1M) utility picks up the wrong hardware path for the controller that 
is to be excluded, and hence excludes unintended paths. For example, suppose 
there are two controllers, c189 and c18, and c189 is listed above c18 in the 
command output. If the controller c18 is excluded, the hardware path of the 
controller c189 is passed to the function, and it ends up excluding the wrong 
controller.

RESOLUTION:
The script is modified so that the vxdiskadm(1M) utility now takes the hardware 
path of the intended controller only, and the unintended paths do not get 
excluded.
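
The underlying pitfall can be illustrated with a generic example; the 
controller names and hardware paths below are illustrative:
$ printf "c18 0/2/1/0\nc189 0/3/1/0\n" | grep c18
c18 0/2/1/0
c189 0/3/1/0
$ printf "c18 0/2/1/0\nc189 0/3/1/0\n" | grep -w c18
c18 0/2/1/0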

* 3069507 (Tracking ID: 3002770)

SYMPTOM:
When a SCSI-inquiry command is executed, a NULL-pointer dereference in 
Dynamic Multi-Pathing (DMP) causes the system to panic with the following 
stack trace:
dmp_aa_recv_inquiry()
dmp_process_scsireq()
dmp_daemons_loop()

DESCRIPTION:
The panic occurs when the SCSI response for the SCSI-inquiry command is 
handled. In order to determine if the path on which the SCSI-inquiry command 
was issued is read-only, DMP needs to check the error buffer. However, the 
error buffer is not always prepared, so DMP should examine whether the error 
buffer is valid before checking it further. Without such an examination, the 
system panics with a NULL pointer dereference.

RESOLUTION:
The code is modified to verify that the error buffer is valid.

* 3072890 (Tracking ID: 2352517)

SYMPTOM:
Excluding a controller from Veritas Volume Manager (VxVM) using the "vxdmpadm 
exclude ctlr=<ctlr-name>" command causes the system to panic with the 
following stack trace:
gen_common_adaptiveminq_select_path
dmp_select_path
gendmpstrategy
voldiskiostart
vol_subdisksio_start
volkcontext_process
volkiostart
vxiostrategy
vx_bread_bp
vx_getblk_cmn
vx_getblk
vx_getmap
vx_getemap
vx_do_extfree
vx_extfree
vx_te_trunc_data
vx_te_trunc
vx_trunc_typed
vx_trunc_tran2
vx_trunc_tran
vx_trunc
vx_inactive_remove
vx_inactive_tran
vx_local_inactive_list
vx_inactive_list
vx_workitem_process
vx_worklist_process
vx_worklist_thread
thread_start

DESCRIPTION:
While excluding a controller from the VxVM view, all the paths must also be 
excluded. The panic occurs because the controller is excluded before the paths 
belonging to that controller are excluded. While excluding the path, the 
controller of that path which is NULL is accessed.

RESOLUTION:
The code is modified to exclude all the paths belonging to a controller before 
excluding a controller.

* 3077756 (Tracking ID: 3077582)

SYMPTOM:
A Veritas Volume Manager (VxVM) volume may become inaccessible causing the read/write operations to fail with the following error:
# dd if=/dev/vx/dsk/<dg>/<volume> of=/dev/null count=10
dd read error: No such device
0+0 records in
0+0 records out

DESCRIPTION:
If I/Os to the disks timeout due to some hardware failures like weak Storage Area Network (SAN) cable link or Host Bus Adapter (HBA) failure, VxVM assumes that the disk is faulty or slow and it sets the failio flag on the disk. Due to this flag, all the subsequent I/Os fail with the No such device error.

RESOLUTION:
The code is modified such that vxdisk now provides a way to clear the failio flag. To check whether the failio flag is set on the disks, use the vxkprint(1M) utility (under /etc/vx/diag.d). To reset the failio flag, execute the vxdisk set <disk_name> failio=off command, or deport and import the disk group that holds these disks.
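
For example, to inspect and then clear the flag (the disk name emc0_0f36 is 
illustrative):
# /etc/vx/diag.d/vxkprint | grep -i failio
# vxdisk set emc0_0f36 failio=off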

* 3083188 (Tracking ID: 2622536)

SYMPTOM:
Under a heavy I/O load, write I/Os on the Veritas Volume Replicator (VVR) 
Primary logowner take a very long time to complete.

DESCRIPTION:
VVR cannot allow more than 2048 I/Os outstanding on the Storage Replicator Log 
(SRL) volume. Any I/Os beyond this threshold are throttled. The throttled I/Os 
are restarted after every SRL header flush operation. The restarted throttled 
I/Os contend with the new I/Os and can starve if the new I/Os get preference.

RESOLUTION:
The SRL allocation algorithm is modified to give priority to the throttled 
I/Os. The new I/Os go behind the throttled I/Os.

* 3083189 (Tracking ID: 3025713)

SYMPTOM:
In a VVR environment, the VxVM "vxdg adddisk" and "vxdg rmdisk" commands take 
a long time (approximately 90 seconds) to execute.

DESCRIPTION:
The VxVM commands on a VVR disk group do not complete until all the 
outstanding I/Os in that disk group are drained completely. If replication is 
active, the outstanding I/Os include network I/Os (the I/Os to be sent from 
the primary node to the secondary node). VxVM commands take a long time to 
complete as they wait for these network I/Os to drain or for the rlink to be 
disconnected.

RESOLUTION:
The code is modified such that, if there are outstanding network I/Os lying in 
the read-back pool, the rlink is disconnected to allow the VxVM commands to 
complete.

* 3087113 (Tracking ID: 3087250)

SYMPTOM:
In a CVM environment, the node join operation takes a long time to complete 
when a host node joins the cluster.

DESCRIPTION:
During the node join operation, when non-A/A storage with shared disk groups 
is connected, the master node provides the information about which array 
controller is to be used. The joining node registers the Persistent Group 
Reservation (PGR) keys on the required array controller, which may take longer 
to execute if the array is slow in processing the PGR commands.

RESOLUTION:
The code is modified so that DMP offloads the processing to register the PGR 
keys on the non-A/A storage with the shared disk groups. As a result, the node 
join operation is faster.

* 3087777 (Tracking ID: 3076093)

SYMPTOM:
The patch upgrade script installrp can panic the system while doing a patch 
upgrade. The observed panic stack trace is as following:
devcclose
spec_close
vnop_close
vno_close
closef
closefd
fs_exit
kexitx
kexit

DESCRIPTION:
When an upgrade is performed, the VxVM device drivers are not loaded, but the 
patch-upgrade process tries to start or stop the eventsource (vxesd) daemon. 
This can result in a system panic.

RESOLUTION:
The code is modified so that the eventsource (vxesd) daemon does not start 
unless the VxVM device drivers are loaded.

* 3100378 (Tracking ID: 2921147)

SYMPTOM:
The udid_mismatch flag is absent on a clone disk when the source disk is 
unavailable. The 'vxdisk list' command does not show the udid_mismatch flag on 
such a disk. This happens even when the 'vxdisk -o udid list' or 'vxdisk -v 
list diskname | grep udid' commands show different Device Discovery Layer 
(DDL) generated and private region unique disk identifiers (UDIDs).

DESCRIPTION:
When the DDL-generated UDID and the private region UDID of a disk do not 
match, Veritas Volume Manager (VxVM) sets the udid_mismatch flag on the disk. 
This flag is used to detect a disk as a clone, which is then marked with the 
clone-disk flag. The vxdisk(1M) utility used to suppress the display of the 
udid_mismatch flag if the source Logical Unit Number (LUN) was unavailable on 
the same host.

RESOLUTION:
The vxdisk(1M) utility is modified to display the udid_mismatch flag if it is 
set on the disk. Display of this flag is no longer suppressed, even when the 
source LUN is unavailable on the same host.

* 3139302 (Tracking ID: 3139300)

SYMPTOM:
At device discovery, the vxconfigd(1M) daemon allocates memory but does not 
release it after use, causing a user memory leak. The Resident Memory Size 
(RSS) of the vxconfigd(1M) daemon thus keeps growing, and in the extreme case 
may reach maxdsiz(5), which causes the vxconfigd(1M) daemon to abort.

DESCRIPTION:
At some places in the device discovery code path, the buffer is not freed. This 
results in memory leaks.

RESOLUTION:
The code is modified to free the buffers.

* 3140407 (Tracking ID: 2959325)

SYMPTOM:
The vxconfigd(1M) daemon dumps core while performing the disk group move 
operation with the following stack trace:
 dg_trans_start ()
 dg_configure_size ()
 config_enable_copy ()
 da_enable_copy ()
 ncopy_set_disk ()
 ncopy_set_group ()
 ncopy_policy_some ()
 ncopy_set_copies ()
 dg_balance_copies_helper ()
 dg_transfer_copies ()
 vold_dm_dis_da ()
 dg_move_complete ()
 req_dg_move ()
 request_loop ()
 main ()

DESCRIPTION:
The core dump occurs when the disk group move operation tries to reduce the 
size of the configuration records in the disk group, when the size is large 
and the disk group move operation needs more space for the new config-record 
entries. Since both the reduction of the size of the configuration records 
(compaction) and the configuration change by the disk group move operation 
cannot co-exist, this results in the core dump.

RESOLUTION:
The code is modified to perform the compaction before the configuration change 
made by the disk group move operation.

* 3140735 (Tracking ID: 2861011)

SYMPTOM:
The "vxdisk -g <dgname> resize <diskname>" command fails with an error for a 
Cross-platform Data Sharing (CDS) formatted disk. The error message is 
displayed as following:
VxVM vxdisk ERROR V-5-1-8643 Device <DISKNAME>: resize failed: One or more 
subdisks do not fit in pub reg

DESCRIPTION:
During the resize operation, VxVM updates the VM disk's private region with 
the new public region size, which is evaluated based on the raw disk geometry. 
But for the CDS disks, the geometry information stored in the disk label is 
fabricated such that the cylinder size is aligned with 8 KB. The resize 
failure occurs when there is a mismatch between the public region size 
obtained from the disk label and that stored in the private region.

RESOLUTION:
The code is modified such that the new public region size is now evaluated 
based on the fabricated geometry, considering the 8 KB alignment for the CDS 
disks, so that it is consistent with the size obtained from the disk label.

* 3142325 (Tracking ID: 3130353)

SYMPTOM:
Disabled and enabled path messages are displayed continuously on the console 
for the EMC NR (Not Ready) devices:
I/O error occurred on Path hdisk139 belonging to Dmpnode emc1_1f2d
Disabled Path hdisk139 belonging to Dmpnode emc1_1f2d due to path failure
Enabled Path hdisk139 belonging to Dmpnode emc1_1f2d
I/O error occurred on Path hdisk139 belonging to Dmpnode emc1_1f2d
Disabled Path hdisk139 belonging to Dmpnode emc1_1f2d due to path failure

DESCRIPTION:
As part of the device discovery, DMP marks the paths belonging to the EMC NR 
devices as disabled, so that they are not used for I/O. However, the DMP-
restore logic, which issues an inquiry on the disabled path, brings the NR 
device paths back to the enabled state. This cycle is repetitive, and as a 
result the disabled and enabled path messages are seen continuously on the 
console.

RESOLUTION:
The DMP code is modified to specially handle the EMC NR devices, so that they 
are not disabled/enabled repeatedly. The messages are not merely suppressed; 
the devices themselves are handled in a different manner.

* 3144794 (Tracking ID: 3136272)

SYMPTOM:
In a CVM environment, the disk group import operation with the -o noreonline 
option takes additional import time.

DESCRIPTION:
On a slave node, when the clone disk group import is triggered by the master 
node, the da re-online takes place irrespective of the "-o noreonline" flag 
passed. This results in the additional import time.

RESOLUTION:
The code is modified to pass the hint to the slave node when the "-o 
noreonline" option is specified. Depending on the hint, the da re-online is 
either done or skipped. This avoids any additional import time.

* 3147666 (Tracking ID: 3139983)

SYMPTOM:
Failed I/Os from SCSI are retried only on very few paths to a LUN instead of
utilizing all the available paths. Sometimes this can cause multiple I/O
retries without success. This can cause DMP to send I/O failures to the
application, bounded by the recoveryoption tunable. 
 
 The following messages are displayed in the console log:
 [..]
 Mon Apr xx 04:18:01.885: I/O analysis done as DMP_PATH_OKAY on Path
 <path-name> belonging to Dmpnode <dmpnode-name>
 Mon Apr xx 04:18:01.885: I/O error occurred (errno=0x0) on Dmpnode
 <dmpnode-name>
 [..]

DESCRIPTION:
When an I/O failure is returned to DMP with a retry error from SCSI, DMP 
retries that I/O on another path. However, it fails to choose the path that 
has a higher probability of successfully handling the I/O.

RESOLUTION:
The code is modified to implement the intelligence of choosing appropriate 
paths that can successfully process the I/Os during retries.
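
For reference, the retry behavior bounded by the recoveryoption tunable can be 
inspected and tuned per enclosure; the enclosure name emc0 and the retry count 
below are illustrative:
# vxdmpadm getattr enclosure emc0 recoveryoption
# vxdmpadm setattr enclosure emc0 recoveryoption=fixedretry retrycount=5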

* 3158099 (Tracking ID: 3090667)

SYMPTOM:
The "vxdisk o thin, fssize list" command can cause system to hang or panic due 
to a kernel memory corruption. This command is also issued by Veritas 
Operations Manager (VOM) internally during Storage Foundation (SF) discovery. 
The following stack trace is observed:
 panic string:   kernel heap corruption detected
 vol_objioctl
vol_object_ioctl
voliod_ioctl - frame recycled
volsioctl_real

DESCRIPTION:
Veritas Volume Manager (VxVM) allocates data structures and invokes thin 
Logical Unit Number (LUN) specific function handlers to determine the disk 
space that is actively used by the file system. One of the function handlers 
wrongly accesses the system memory beyond the allocated data structure, which 
results in the kernel memory corruption.

RESOLUTION:
The code is modified so that the problematic function handler accesses only the 
allocated memory.

* 3158780 (Tracking ID: 2518067)

SYMPTOM:
The disabling of a switch port of the last-but-one active path to a Logical 
Unit Number (LUN) disables the DMP node, and results in I/O failures on the DMP 
node even when an active path is available for the I/O.

DESCRIPTION:
The execution of the "vxdmpadm disable" command and the simultaneous disabling 
of a port on the same path causes the DMP node to get disabled. The DMP I/O 
error handling code path fails to interlock these two simultaneous operations 
on the same path. This leads to a wrong assumption of a DMP node failure.

RESOLUTION:
The code is modified to detect the simultaneous execution of operations and 
prevent the DMP node from being disabled.

* 3158781 (Tracking ID: 2495338)

SYMPTOM:
Veritas Volume Manager (VxVM) disk initialization with the hpdisk format fails 
with the following error:
$ vxdisksetup -ivf eva4k6k0_8 format=hpdisk privoffset=256
VxVM vxdisksetup ERROR V-5-2-2186 privoffset is incompatible with the hpdisk 
format.

DESCRIPTION:
VxVM imposes a limitation that the private region offset is fixed at 128K (256 
sectors) on non-boot (data) disks with the hpdisk format.

RESOLUTION:
The code is modified to relax the limitation to initialize a data disk with the 
hpdisk format with a fixed private region offset. The disks can now be 
initialized with a private region offset of 128K or greater.
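
For example, with the fix in place, the command from the symptom above 
succeeds:
# vxdisksetup -ivf eva4k6k0_8 format=hpdisk privoffset=256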

* 3158790 (Tracking ID: 2779580)

SYMPTOM:
In a Veritas Volume Replicator (VVR) environment, the secondary node gives the 
configuration error 'no Primary RVG' when the primary master node (the default 
logowner) is rebooted and the slave node becomes the new master node.

DESCRIPTION:
After the primary master node is rebooted, the new master node sends 
a handshake request for the vradmind communication to the secondary node. As 
a part of the handshake request, the secondary node deletes the old 
configuration including the Primary RVG. During this phase, the secondary 
node receives the configuration update message from the primary node for the 
old configuration. The secondary node does not find the old Primary RVG 
configuration for processing this message. Hence, it cannot proceed with the 
pending handshake request. This results in a 'no Primary RVG' configuration 
error.

RESOLUTION:
The code is modified such that during the handshake request phase, the 
configuration messages of the old Primary RVG get discarded.

* 3158793 (Tracking ID: 2911040)

SYMPTOM:
The restore operation from a cascaded snapshot succeeds even when one of the 
sources is inaccessible. Subsequently, if the primary volume is made 
accessible for the restore operation, the I/O operation may fail on the 
volume, as the source of the volume is inaccessible. Any deletion of the 
snapshot also fails due to the dependency of the primary volume on the 
snapshots. When the user tries to remove the snapshot using the vxedit rm 
command, the following error message is displayed:
"VxVM vxedit ERROR V-5-1-XXXX Volume YYYYYY has dependent volumes"

DESCRIPTION:
When a snapshot is restored from any snapshot, the snapshot becomes the source 
of the data for the regions on the primary volume that differ between the two 
volumes. If the snapshot itself depends on some other volume and that volume is 
not accessible, effectively the primary volume becomes inaccessible after the 
restore operation. For such instances, the snapshot cannot be deleted as the 
primary volume depends on it.

RESOLUTION:
The code is modified so that if a snapshot or any later cascaded snapshot is 
inaccessible, the restore operation from that snapshot is prevented.

* 3158794 (Tracking ID: 1982965)

SYMPTOM:
The file system mount operation fails when the volume has been resized and the
volume has a linked volume. The following error messages are displayed:
# mount -V vxfs /dev/vx/dsk/<dgname>/<volname>  <mount_dir_name>
UX:vxfs fsck: ERROR: V-3-26248:  could not read from block offset devid/blknum 
<xxx/yyy>. Device containing meta data may be missing in vset or device too big 
to be read on a 32 bit system. 
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate file system check 
failure, aborting ... 
UX:vxfs mount: ERROR: V-3-26883: fsck log replay exits with 1 
UX:vxfs mount: ERROR: V-3-26881: Cannot be mounted until it has been cleaned by 
fsck. Please run "fsck -V vxfs -y /dev/vx/dsk/<dgname>/<volname>" before mounting

DESCRIPTION:
The vxconfigd(1M) daemon stores disk access records based on the Dynamic Multi 
Pathing (DMP) names. If the vxdg(1M) command passes a name other than DMP name 
for the device, vxconfigd(1M) daemon cannot map it to a disk access record. As 
the vxconfigd(1M) daemon cannot locate a disk access record corresponding to 
passed input name from the vxdg(1M) command, it fails the import operation.

RESOLUTION:
The code is modified so that the vxdg(1M) command converts the input name to 
DMP name before passing it to the vxconfigd(1M) daemon for further processing.

* 3158798 (Tracking ID: 2825102)

SYMPTOM:
In a CVM environment, some or all VxVM volumes become inaccessible on the 
master node. VxVM commands on the master node as well as the slave node(s) 
hang. On the master node, vxiod and vxconfigd sleep and the following stack 
traces are observed:
vxconfigd on master :
 sleep_one
  vol_ktrans_iod_wakeup
  vol_ktrans_commit
  volconfig_ioctl
  volsioctl_real
  volsioctl
  vols_ioctl
  spec_ioctl
  vno_ioctl
   ioctl
   syscall

"vxiod" on master :
  sleep
  vxvm_delay
  cvm_await_mlocks
  volmvcvm_cluster_reconfig_exit
  volcvm_master
  volcvm_vxreconfd_thread

DESCRIPTION:
VxVM maintains a list of all the volume devices in a volume device list. This
list can be corrupted by simultaneous access from the CVM reconfiguration code
path and the VxVM transaction code path. This results in the inaccessibility of
some or all of the volumes.

RESOLUTION:
The code is modified to avoid simultaneous access to the volume device list 
from the CVM reconfiguration code path and the VxVM transaction code path.

* 3158799 (Tracking ID: 2735364)

SYMPTOM:
The "clone_disk" disk flag attribute is not cleared when a cloned disk group is 
removed by the "vxdg destroy <dg-name>" command.

DESCRIPTION:
When a cloned disk group is removed by the "vxdg destroy <dg-name>" command, 
the Veritas Volume Manager (VxVM) "clone_disk" disk flag attribute is not 
cleared.  The "clone_disk" disk flag attribute should be automatically turned 
off when the VxVM disk group is destroyed.

RESOLUTION:
The code is modified to turn off the "clone_disk" disk flag attribute when a 
cloned disk group is removed by the "vxdg destroy <dg-name>" command.
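
A quick way to verify the fix is to check the disk flags after the disk group
is destroyed; the disk and disk group names below are illustrative:

# vxdg destroy clonedg
# vxdisk list disk01 | grep flags
(the flags line should no longer include clone_disk)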

* 3158800 (Tracking ID: 2886333)

SYMPTOM:
The vxdg(1M) join operation allows mixing of clone and non-clone disks in a 
disk group. The subsequent import of a new joined disk group fails. The 
following error message is displayed:
"VxVM vxdg ERROR V-5-1-17090 Source disk group tdg and destination disk group 
tdg2 are not homogeneous, trying to Mix of cloned diskgroup to standard disk 
group or vice versa is not allowed. Please follow the vxdg (1M) man page."

DESCRIPTION:
Mixing of the clone and non-clone disk group is not allowed. The part of the 
code where the join operation is performed, executes the operation without 
validating the mix of the clone and the non-clone disk groups. This results in 
the newly joined disk group having a mix of the clone and non-clone disks. 
Subsequent import of the newly joined disk group fails.

RESOLUTION:
The code is modified so that during the disk group join operation, both the 
disk groups are checked. If a mix of clone and non-clone disk group is found, 
the join operation is aborted.

* 3158802 (Tracking ID: 2091520)

SYMPTOM:
Customers cannot selectively disable VxVM configuration copies on the disks 
associated with a disk group.

DESCRIPTION:
An enhancement is required to enable customers to selectively disable VxVM 
configuration copies on disks associated with a disk group.

RESOLUTION:
The code is modified to provide a "keepmeta=skip" option to the vxdiskset(1M) 
command to allow a customer to selectively disable VxVM configuration copies on 
disks that are a part of the disk group.
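
A minimal sketch of how the new option might be used, as this entry describes
it; the exact invocation is an assumption based on the command name given
above, and the disk group and disk names are illustrative:

# vxdiskset -g datadg keepmeta=skip disk01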

* 3158804 (Tracking ID: 2236443)

SYMPTOM:
In a VCS environment, the "vxdg import" command does not display an informative 
error message, when a disk group cannot be imported because the fencing keys 
are registered to another host. The following error messages are displayed:
 # vxdg import sharedg
   VxVM vxdg ERROR V-5-1-10978 Disk group sharedg: import failed:
   No valid disk found containing disk group
The system log contained the following NOTICE messages:
Dec 18 09:32:37 htdb1 vxdmp: NOTICE: VxVM vxdmp V-5-0-0 i/o error occured
(errno=0x5) on dmpnode 316/0x19b
 Dec 18 09:32:37 htdb1 vxdmp: [ID 443116 kern.notice] NOTICE: VxVM vxdmp V-5-0-0
i/o error occured (errno=0x5) on dmpnode 316/0x19b

DESCRIPTION:
The error messages that are displayed when a disk group cannot be imported
because the fencing keys are registered to another host need to be more
informative.

RESOLUTION:
The code has been added to the VxVM disk group import command to detect when a 
disk is reserved by another host, and issue a SCSI3 PR reservation conflict 
error message.

* 3158809 (Tracking ID: 2969335)

SYMPTOM:
A node that leaves the cluster while an instant snapshot operation is in
progress hangs in the kernel and cannot rejoin the cluster unless it is
rebooted. The following stack trace is displayed in the kernel, on the node
that leaves the cluster:
  voldrl_clear_30()
  vol_mv_unlink()
  vol_objlist_free_objects()
  voldg_delete_finish()
  volcvmdg_abort_complete()
  volcvm_abort_sio_start()
  voliod_iohandle()
  voliod_loop()

DESCRIPTION:
In a clustered environment, during any instant snapshot operation such as the 
snapshot refresh/restore/reattach operation that requires metadata 
modification, the I/O activity on volumes involved in the operation is 
temporarily blocked, and once the metadata modification is complete the I/Os 
are resumed. During this phase, if a node leaves the cluster, it does not find
itself in the I/O hold-off state, cannot properly complete the leave operation,
and hangs. An after-effect is that the node cannot rejoin the cluster.

RESOLUTION:
The code is modified to properly unblock I/Os on the node that leaves. This 
avoids the hang.

* 3158813 (Tracking ID: 2845383)

SYMPTOM:
The site gets detached if the plex detach operation is performed with the site 
consistency set to off.

DESCRIPTION:
If the plex detach operation is performed on the last complete plex of a site, 
the site is detached to maintain the site consistency. The site should be 
detached only if the site consistency is set. Initially, the decision to detach 
the site is made based on the value of the allsites flag. So, the site gets 
detached when the last complete plex is detached, even if the site consistency 
is off.

RESOLUTION:
The code is modified to ensure that when the last complete plex is detached,
the site is detached only if the site consistency is set. If the site
consistency is off and the allsites flag is on, detaching the last complete
plex detaches only the plex, not the site.

* 3158818 (Tracking ID: 2909668)

SYMPTOM:
In case of multiple sets of cloned disks of the same source disk group, the
import operation on the second set of clone disks fails if the first set of
clone disks was imported with updateid. The import fails with the following
error message:
VxVM vxdg ERROR V-5-1-10978 Disk group firstdg: import failed: No tagname disks 
for import

DESCRIPTION:
When multiple sets of clone disks exist for the same source disk group, each
set needs to be identified with a separate tag. If one set of cloned disks is
imported using the updateid option, the disk group ID on the imported disks is
replaced with a new disk group ID. The other sets of cloned disks with
different tags still contain the old disk group ID. Because the disk group name
maps to the most recently imported disk group ID, the tagged import fails for
every set except the first.

RESOLUTION:
The code is modified for the tagged disk group import of disk groups that have
multiple sets of clone disks. During the disk group name to disk group ID
mapping, the tag name is given higher priority than the latest update time of
the disk group, as illustrated below.
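
With this fix, each clone set can be imported by its own tag even after another
set has been imported with updateid. A sketch, reusing the clone-import option
syntax shown elsewhere in this document; the disk group and tag names are
illustrative:

# vxdg -o useclonedev=on -o tag=tag1 -o updateid import firstdg
# vxdg deport firstdg
# vxdg -o useclonedev=on -o tag=tag2 import firstdg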

* 3158819 (Tracking ID: 2685230)

SYMPTOM:
In a Cluster Volume Replicator (CVR) environment, if the Storage Replicator
Log (SRL) is resized with the logowner set on the CVM slave node, and this is
followed by a CVM master node switch operation, an SRL corruption can occur
that leads to the Rlink detach.

DESCRIPTION:
In a CVR environment, if SRL is resized when the logowner is set on the CVM 
slave node and if this is followed by the master node switch operation, then 
the new master node does not have the correct mapping of the SRL volume. As a
result, I/Os issued on the new master node corrupt the SRL volume contents, and
the Rlink is detached.

RESOLUTION:
The code is modified to correctly update the SRL mapping so that the SRL 
corruption does not occur.

* 3158821 (Tracking ID: 2910367)

SYMPTOM:
In a VVR environment, when Storage Replicator Log (SRL) is inaccessible or 
after the paths to the SRL volume of the secondary node are disabled, the 
secondary node panics with the following stack trace:
  __bad_area_nosemaphore
  vsnprintf
  page_fault
  vol_rv_service_message_start
  thread_return
  sigprocmask
  voliod_iohandle
  voliod_loop
  kernel_thread

DESCRIPTION:
The SRL failure is handled differently on the primary node and the secondary
node. On the secondary node, if there is no SRL, replication is not allowed and
the Rlink is detached. The code region is common to both, and at one place the
flags are not properly set during the transaction phase. The code therefore
wrongly assumes that the SRL is still connected and tries to access its
structure. This leads to the panic.

RESOLUTION:
The code is modified to mark the necessary flag properly in the transaction 
phase.

* 3164583 (Tracking ID: 3065072)

SYMPTOM:
Data loss occurs during the import of a clone disk group, when some of the 
disks are missing and the useclonedev and updateid import options are
specified. The following error message is displayed:
VxVM vxdg ERROR V-5-1-10978 Disk group pdg: import failed:
Disk for disk group not found

DESCRIPTION:
During the clone disk group import, if the updateid and useclonedev options
are specified and some disks are unavailable, permanent data loss can occur.
The disk group ID is updated on the available disks during the import
operation. The missing disks still contain the old disk group ID, and hence are
not included in later attempts to import the disk group with the new disk group
ID.

RESOLUTION:
The code is modified such that any partial import of a clone disk group with
the updateid option is no longer allowed without the -f (force) option. If the
user forces the partial import of the clone disk group using the -f option, the
missing disks are not included in later attempts to import the clone disk group
with the new disk group ID, as sketched below.
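
A sketch of the changed behavior, using the disk group name from the symptom;
the option placement follows the clone-import examples shown elsewhere in this
document:

# vxdg -o useclonedev=on -o updateid import pdg
(fails if some clone disks are missing)
# vxdg -f -o useclonedev=on -o updateid import pdg
(forces the partial import; missing disks are excluded from later imports)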

* 3164596 (Tracking ID: 2882312)

SYMPTOM:
The Storage Replicator Log (SRL) faults in the middle of the I/O load. An 
immediate read on data that is written during the SRL fault may return old data.

DESCRIPTION:
In case of an SRL fault, the Replicated Volume Group (RVG) goes into the 
passthrough mode. The read/write operations are directly issued on the data 
volume. If the SRL is faulted while writing, and a read command is issued 
immediately on the same region, the read may return the old data. If a write 
command fails on the SRL, then the VVR acknowledges the write completion and 
places the RVG in the passthrough mode. The data-volume write is done 
asynchronously after acknowledging the write completion. If the read comes 
before the data-volume write is finished, it can return old data, causing data
corruption. This is a race condition between the write and the read during the
SRL failure.

RESOLUTION:
The code is modified to restart the write in case of an SRL failure, without
acknowledging the write completion first. When the write is restarted, the RVG
is in the passthrough mode and the write is issued directly on the data volume.
Since the acknowledgement is sent only after the write completes, any
subsequent read gets the latest data.

* 3164601 (Tracking ID: 2957555)

SYMPTOM:
The vxconfigd(1M) daemon on the CVM master node hangs in the userland during 
the vxsnap(1M) restore operation. The following stack trace is displayed:

rec_find_rid()
position_in_restore_chain()
kernel_set_object_vol()
kernel_set_object()
kernel_dg_commit_kernel_objects_20()
kernel_dg_commit()
commit()
dg_trans_commit()
slave_trans_commit()
slave_response()
fillnextreq()

DESCRIPTION:
During the snapshot restore operation, when the volume V1 gets restored from 
the source volume V2, and at the same time volume V2 gets restored from V1 or
a child of V1, the vxconfigd(1M) daemon tries to find the position of the
volume being restored in the snapshot chain. For such instances, finding the
position in the restore chain causes the vxconfigd(1M) daemon to enter an
infinite loop and hang.

RESOLUTION:
The code is modified to remove the infinite loop condition when the restore 
position is found.

* 3164610 (Tracking ID: 2966990)

SYMPTOM:
In a VVR environment, the I/O hangs at the primary side after multiple cluster 
reconfigurations are triggered in parallel. The stack trace is as follows:
delay
vol_rv_transaction_prepare
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
volsioctl
fop_ioctl
ioctl

DESCRIPTION:
With I/O on the master node and the slave node, rebooting the slave node 
triggers the cluster reconfiguration, which in turn triggers the RVG recovery. 
Before the reconfiguration is complete, the slave node joins back again, which 
interrupts the leave reconfiguration in the middle of the operation. The node 
join reconfiguration does not trigger any RVG recovery, and so the recovery is 
skipped. The regular I/Os wait for the recovery to be completed. This situation 
leads to a hang.

RESOLUTION:
The code is modified such that the join reconfiguration does the RVG recovery, 
if there are any pending RVG recoveries.

* 3164611 (Tracking ID: 2962010)

SYMPTOM:
Replication hangs when the Storage Replicator Log (SRL) is resized. For
example:
# vradmin -g vvrdg -l repstatus rvg
...
  Replication status:         replicating (connected)
  Current mode:               asynchronous
  Logging to:                 SRL (813061 Kbytes behind, 19% full)
  Timestamp Information:      behind by  0h 0m 8s

DESCRIPTION:
When an SRL is resized, its internal mapping changes and a new stream of data
is started. Generally, VVR reverts to the old mapping as soon as the conditions
requisite for the resize are satisfied. However, if the SRL gets wrapped
around, the conditions are not satisfied immediately. When all the requisite
conditions are eventually satisfied, the old mapping is referred to and the
data is sent with the old mapping, without starting the new stream. This causes
a replication hang, as the secondary node continues to expect the data
according to the new stream. Once the hang occurs, the replication status
remains unchanged even though the Rlink is connected.

RESOLUTION:
The code is modified to start the new stream of data whenever the old mapping 
is reverted.

* 3164612 (Tracking ID: 2746907)

SYMPTOM:
Under heavy I/O load, the vxconfigd(1M) daemon hangs on the master node during
the reconfiguration. The stack is observed as follows:
vxconfigd stack:
schedule
volsync_wait
vol_rwsleep_rdlock
vol_get_disks
volconfig_ioctl
volsioctl_real
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
cstar_dispatch

DESCRIPTION:
When there is a reconfiguration, the vxconfigd(1M) daemon tries to acquire the
volop_rwsleep write lock. The attempt fails because ongoing I/O holds the read
lock, so the I/O load starves out the vxconfigd(1M) daemon's attempts to get
the write lock. This results in the hang.

RESOLUTION:
The code is modified so that a new API is used to block out the read locks
when an attempt is made to get the write lock. When the API is used during the
reconfiguration, starvation on the write lock is avoided. Thereby, the hang
issue is resolved.

* 3164613 (Tracking ID: 2814891)

SYMPTOM:
The vxconfigrestore(1M) utility does not work properly if the SCSI page 83
inquiry returns more than one FPCH name identifier for a single LUN. The
restoration fails with the following error message:
/etc/vx/bin/vxconfigrestore[1606]: shift: 4: bad number
expr: syntax error
expr: syntax error
..
..
Installing volume manager disk header for 17 ...
VxVM vxdisk ERROR V-5-1-558 Disk 17: Disk not in the configuration
17 disk format has been changed from cdsdisk to .
VxVM vxdisk ERROR V-5-1-5433 Device 17: init failed:

Device path not valid

DESCRIPTION:
The vxconfigrestore(1M) utility is used to restore a disk group's configuration 
information if it is lost or is corrupted. This operation should pick up the 
same LUNs that were used by the vxconfigbackup(1M) utility to save the
configuration. If the SCSI-page-83 inquiry returns more than one FPCH-name
identifier for a single LUN, the vxconfigrestore(1M) utility picks up the wrong
LUN. So, the recovery may fail, or may succeed with the incorrect LUNs.

RESOLUTION:
The code is modified to pick up the correct LUN even when the SCSI-page-83 
inquiry returns more than one identifier.

* 3164615 (Tracking ID: 3102114)

SYMPTOM:
A system crash during the vxsnap restore operation can cause the vxconfigd(1M)
daemon to dump core with the following stack on system start-up:
rinfolist_iter()
process_log_entry()
scan_disk_logs()
...
startup()
main()

DESCRIPTION:
To recover from an incomplete restore operation, an entry is made in the 
internal logs. If the corresponding volume to that entry is not accessible, 
accessing a non-existent record causes the vxconfigd(1M) daemon to dump core 
with the SIGSEGV signal.

RESOLUTION:
The code is modified to ignore such an entry in the internal logs, if the 
corresponding volume does not exist.

* 3164616 (Tracking ID: 2992667)

SYMPTOM:
When the framework for SAN of VIS is changed from FC-switcher to the direct 
connection, the new DMP disk cannot be retrieved by running the "vxdisk 
scandisks" command.

DESCRIPTION:
Initially, the DMP node had multiple paths. Later, when the framework for SAN 
of VIS is changed from the FC switcher to the direct connection, the number of 
paths of each affected DMP node is reduced to 1. At the same time, some new 
disks are added to the SAN. Newly added disks reuse the device numbers of the
removed devices (paths). As a result, the vxdisk list command does not show
the newly added disks even after the vxdisk scandisks command is executed.

RESOLUTION:
The code is modified so that DMP can handle the device number reuse scenario in 
a proper manner.

* 3164617 (Tracking ID: 2938710)

SYMPTOM:
The vxassist(1M) command dumps core with the following stack during the 
relayout operation:
relayout_build_unused_volume ()
relayout_trans ()
vxvmutil_trans ()
trans ()
transaction ()
do_relayout ()
main ()

DESCRIPTION:
During the relayout operation, the vxassist(1M) command sends a request to the
vxconfigd(1M) daemon to get the object record of the volume. If the request
fails, the vxassist(1M) command tries to print the error message using the name
of the object from the retrieved record. This causes a NULL pointer dereference
and subsequently dumps core.

RESOLUTION:
The code is modified to print the error message using the name of the object 
from a known reference.

* 3164618 (Tracking ID: 3058746)

SYMPTOM:
When there are two disk groups that are located in two different RAID groups
on the disk array and the disks of one of the disk groups are disabled, an I/O
hang occurs on the other disk group.

DESCRIPTION:
When there are two disk groups that are located in two different RAID groups on 
the disk array and the disks of one disk group are disabled, a spinlock is acquired 
but is not released. This leads to a deadlock situation that causes the I/O 
hang on any other disk group.

RESOLUTION:
The code is modified to prevent the hang by releasing the spinlock to allow the 
I/Os to proceed.

* 3164619 (Tracking ID: 2866059)

SYMPTOM:
When disk-resize operation fails, the error messages displayed do not give the 
exact details of the failure. The following error messages are displayed:
1. "VxVM vxdisk ERROR V-5-1-8643 Device <device-name>: resize failed: One or 
more subdisks do not fit in pub reg" 
2. "VxVM vxdisk ERROR V-5-1-8643 Device <disk-name>: resize failed: Cannot 
remove last disk in disk group"

DESCRIPTION:
When a disk-resize operation fails, both the error messages need to be enhanced 
to display the exact details of the failure.

RESOLUTION:
The code is modified to improve the error messages.
Error message (1) is modified to:
VxVM vxdisk ERROR V-5-1-8643 Device emc_clariion0_338: resize failed: One or 
more subdisks do not fit in pub region.
vxconfigd log:
01/16 02:23:23:  VxVM vxconfigd DEBUG V-5-1-0 dasup_resize_check: SD 
emc_clariion0_338-01 not contained in disk sd: start=0 end=819200, public: 
offset=0 len=786560
 
Error message (2) is modified to:
VxVM vxdisk ERROR V-5-1-0 Device emc_clariion0_338: resize failed: Cannot 
remove last disk in disk group. Resizing this device can result in data loss. 
Use -f option to force resize.

* 3164620 (Tracking ID: 2993667)

SYMPTOM:
VxVM allows setting the Cross-platform Data Sharing (CDS) attribute for a disk 
group even when a disk is missing, because it experienced I/O errors. The 
following command succeeds even with an inaccessible disk:
vxdg -g <DG_NAME> set cds=on

DESCRIPTION:
When the CDS attribute is set for a disk group, VxVM does not fail the
operation even if some disk is not accessible. If a disk genuinely had an I/O
error and failed, VxVM should not allow setting the disk group as CDS, because
the state of the failed disk cannot be determined. If a disk with a non-CDS
format fails while all the other disks in the disk group have the CDS format,
the current behavior allows the disk group to be set as CDS. If the disk
returns and the disk group is re-imported, there could be a CDS disk group that
has a non-CDS disk. This violates the basic definition of the CDS disk group
and results in data corruption.

RESOLUTION:
The code is modified such that VxVM fails to set the CDS attribute for a disk
group if it detects a disk that is inaccessible because of an I/O error. Hence,
the operation below fails with an error as follows:
# vxdg -g <DG_NAME> set cds=on
Cannot enable CDS because device corresponding to <DM_NAME> is in-accessible.

* 3164624 (Tracking ID: 3101419)

SYMPTOM:
In a CVR environment, I/Os to the data volumes in an RVG may temporarily 
experience a hang during the SRL overflow with the heavy I/O load.

DESCRIPTION:
The SRL flush occurs at a slower rate than the incoming I/Os from the master 
node and the slave nodes. I/Os initiated on the master node get starved for a
long time; this appears to be an I/O hang. The I/O hang disappears once the SRL
flush is complete.

RESOLUTION:
The code is modified to provide a fair schedule for the I/Os to be initiated on 
the master node and the slave nodes.

* 3164626 (Tracking ID: 3067784)

SYMPTOM:
The grow and shrink operations by the vxresize(1M) utility may dump core in
the vfprintf() function. The following stack trace is observed:

 vfprintf ()
 volumivpfmt ()
 volvpfmt ()
 volpfmt ()
 main ()

DESCRIPTION:
The vfprintf() function dumps core because the format specified to print the
file system type is incorrect. The integer (hexadecimal) value is printed as a
string, using the %s format specifier.

RESOLUTION:
The code is modified to print the file system type as a hexadecimal value,
using the %x format specifier.

* 3164627 (Tracking ID: 2787908)

SYMPTOM:
The vxconfigd(1M) daemon dumps core when the slave node joins the CVM cluster.
The following stack trace is displayed:
client_delete
master_delete_client
master_leavers
role_assume
vold_set_new_role
kernel_get_cvminfo
cluster_check
vold_check_signal
request_loop
main
_start

DESCRIPTION:
When the slave node joins the CVM cluster, the vxconfigd(1M) daemon frees a 
memory structure that is already freed and dumps core.

RESOLUTION:
The code is modified to ensure that the client memory structure is not freed 
twice.

* 3164628 (Tracking ID: 2952553)

SYMPTOM:
The vxsnap(1M) command allows refreshing a snapshot from a volume other than
the source volume. An example is as follows:
# vxsnap refresh <snapvol> source=<srcvol>

DESCRIPTION:
The vxsnap(1M) command allows refreshing a snapshot from a volume other than
the source volume. This can result in an unintended loss of the snapshot.
snapshot.

RESOLUTION:
The code is modified to print the message requesting the user to use the -f 
option. This prevents any accidental loss of the snapshot.
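
A sketch of the changed behavior, with illustrative object names; the exact
placement of the -f option is an assumption:

# vxsnap -g mydg refresh snapvol source=othervol
(now fails and asks for the -f option when othervol is not the original source)
# vxsnap -g mydg -f refresh snapvol source=othervol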

* 3164629 (Tracking ID: 2855707)

SYMPTOM:
I/O hangs with the SUN6540 array during the path fault injection test. During 
the hang, the DMP daemon kernel thread stack trace is displayed as follows:
e_block_thread
e_sleep_thread
dmpEngenio_issue_failover
gen_issue_failover
gen_dmpnode_update_cur_pri
dmp_failover_to_other_path
dmp_failover
dmp_error_action
dmp_error_analysis_callback
dmp_process_scsireq
dmp_daemons_loop 
vxdmp_start_thread_enter
procentry

DESCRIPTION:
The I/O hang is caused by a cyclic dependency, where all the DMP daemon kernel
threads wait for the dmpEngIO Array Policy Module (APM). In turn, the dmpEngIO
APM thread requires a DMP daemon kernel thread to issue the failover. This
results in a deadlock.

RESOLUTION:
The code is modified to eliminate the cyclic dependency on the DMP daemon 
kernel threads.

* 3164631 (Tracking ID: 2933688)

SYMPTOM:
When the 'Data corruption protection' check is activated by DMP, the device-
discovery operation aborts, but the I/O to the affected devices continues; this
results in data corruption. The following message is displayed:
Data Corruption Protection Activated - User Corrective Action Needed:
To recover, first ensure that the OS device tree is up to date (requires OS
specific commands).
Then, execute 'vxdisk rm' on the following devices before reinitiating device
discovery using 'vxdisk scandisks'

DESCRIPTION:
When the 'Data corruption protection' check is activated by DMP, the device-
discovery operation aborts after displaying a message. However, it does not
stop I/Os from being issued on the DMP devices of the affected devices, that
is, all those devices whose discovery information changed unexpectedly and is
no longer valid.

RESOLUTION:
The code is modified so that DMP is changed to forcibly fail the I/Os on 
devices whose discovery information is changed unexpectedly. This prevents any 
further damage to the data.

* 3164633 (Tracking ID: 3022689)

SYMPTOM:
The vxbrk_rootmir(1M) utility succeeds with an error printed on the command
line, but disks that are a part of the break-off disk group and were under NMP
now come under DMP, as the /etc/vx/darecs file is not correctly updated.

DESCRIPTION:
The vxbrk_rootmir(1M) utility uses a hard-coded path "/dev/rdsk" and expects 
the DMP node to be available. This is done to get the new name corresponding to 
the input name. With the EBN name as the input, the vxbrk_rootmir(1M) utility 
does not find the OS new node and displays the error message.

RESOLUTION:
The code is modified such that the vxbrk_rootmir(1M) utility does not use the
hard-coded path to get the new node from the DMP input name.

* 3164637 (Tracking ID: 2054606)

SYMPTOM:
During the DMP driver unload operation the system panics with the following 
stack trace:
kmem_free
dmp_remove_mp_node
dmp_destroy_global_db
dmp_unload
vxdmp`_fini
moduninstall
modunrload
modctl
syscall_trap

DESCRIPTION:
The system panics during the DMP driver unload operation when its internal data 
structures are destroyed, because DMP attempts to free the memory associated 
with a DMP device that is marked for deletion from DMP.

RESOLUTION:
The code is modified to check the DMP device state before any attempt is made 
to free the memory associated with it.

* 3164639 (Tracking ID: 2898324)

SYMPTOM:
A set of memory-leak issues in the user-land daemon "vradmind" is reported by
the Purify tool.

DESCRIPTION:
The issues are reported because the allocated memory is initialized improperly
or not at all.

RESOLUTION:
The code is modified to ensure that proper initialization is done for the
allocated memory.

* 3164643 (Tracking ID: 2815441)

SYMPTOM:
The file system mount operation fails when the volume has been resized and the
volume has a linked volume. The following error messages are displayed:
# mount -V vxfs /dev/vx/dsk/<dgname>/<volname>  <mount_dir_name>
UX:vxfs fsck: ERROR: V-3-26248:  could not read from block offset devid/blknum 
<xxx/yyy>. Device containing meta data may be missing in vset or device too big 
to be read on a 32 bit system. 
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate file system check 
failure, aborting ... 
UX:vxfs mount: ERROR: V-3-26883: fsck log replay exits with 1 
UX:vxfs mount: ERROR: V-3-26881: Cannot be mounted until it has been cleaned by 
fsck. Please run "fsck -V vxfs -y /dev/vx/dsk/<dgname>/<volname>" before mounting

DESCRIPTION:
The resize operation on the linked volumes can be interrupted and restarted by
the vxrecover(1M) command. The recover operation is triggered when the disk
recovers from a failure, or during cluster reconfigurations. If the resize and
the recover operations run in parallel and there are mirrors in the linked-to
volume, the volume-recovery offset does not get updated properly for the linked
volume.

RESOLUTION:
The code is modified so that, when the vxrecover(1M) command is run and the
sizes of the linked-to and linked-from volumes are not the same, the
linked-from volume is grown and the volume-recovery offset for the linked-to
volume is updated properly.

* 3164645 (Tracking ID: 3091916)

SYMPTOM:
In a VCS cluster environment, the syslog overflows with the following Small
Computer System Interface (SCSI) I/O error messages:
reservation conflict
Unhandled error code
Result: hostbyte=DID_OK driverbyte=DRIVER_OK
CDB: Write(10): 2a 00 00 00 00 90 00 00 08 00
reservation conflict
VxVM vxdmp V-5-0-0 [Error] i/o error occurred (errno=0x5) on dmpnode 201/0x60
Buffer I/O error on device VxDMP7, logical block 18
lost page write due to I/O error on VxDMP7

DESCRIPTION:
In a VCS cluster environment, when the private disk group is flushed and
deported on one node, some I/Os on the disk are cached because the disk writes
are done asynchronously. Importing the disk group immediately after with PGR
keys causes I/O errors on the previous node, as the PGR keys are not reserved
on that node.

RESOLUTION:
The code is modified to write the I/Os synchronously on the disk.

* 3164646 (Tracking ID: 2893530)

SYMPTOM:
When a system is rebooted and there are no VVR configurations, the system
panics with the following stack trace:
nmcom_server_start()
vxvm_start_thread_enter()
...

DESCRIPTION:
The panic occurs because the memory segment is accessed after it is released. 
The access happens in the VVR module and can happen even if no VVR is 
configured on the system.

RESOLUTION:
The code is modified so that the memory segment is not accessed after it is 
released.

* 3164647 (Tracking ID: 3006245)

SYMPTOM:
While executing a snapshot operation on a volume which has snappoints 
configured, the system panics infrequently with the following stack trace:
...
voldco_copyout_pervolmap ()
voldco_map_get ()
volfmr_request_getmap ()
...

DESCRIPTION:
When the snappoints are configured for a volume by using the vxsmptadm(1M)
command, the relationship is maintained in the kernel using a field. This field
is also used for maintaining the snapshot relationships. Sometimes, the
snappoints field may wrongly be identified as the snapshot field. This causes
the system to panic.

RESOLUTION:
The code is modified to properly identify the fields that are used for the 
snapshot and the snappoints and handle the fields accordingly.

* 3164650 (Tracking ID: 2812161)

SYMPTOM:
In a VVR environment, after the Rlink is detached, the vxconfigd(1M) daemon on 
the secondary host may hang. The following stack trace is observed:
cv_wait
delay_common
delay
vol_rv_service_message_start
voliod_iohandle
voliod_loop
...

DESCRIPTION:
There is a race condition if there is a node crash on the primary site of VVR 
and if any subsequent Rlink is detached. The vxconfigd(1M) daemon on the 
secondary site may hang, because it is unable to clear the I/Os received from 
the primary site.

RESOLUTION:
The code is modified to resolve the race condition.

* 3164759 (Tracking ID: 2635640)

SYMPTOM:
The "vxdisksetup(1M) -ifB" command fails on the Enclosure Based Naming (EBN) 
devices when the legacy tree is removed. The following error message is 
displayed:

"idisk: Stat of disk failed(2)"

DESCRIPTION:
The implementation of the function that gets the unsliced path returns a
legacy-tree based path if the input parameter does not specify the full device
path. This leads to further failures when the function is used directly or
indirectly.

RESOLUTION:
The code is modified such that the function returns the DMP based path when the 
input parameter does not specify the full device path.

* 3164790 (Tracking ID: 2959333)

SYMPTOM:
For a Cross-platform Data Sharing (CDS) disk group, the vxdg list command does
not list the CDS flag when that disk group is disabled.

DESCRIPTION:
When the CDS disk group is disabled, the state of the record list may not be
stable. Hence, whether the disabled disk group was CDS is not considered. As a
result, Veritas Volume Manager (VxVM) does not mark any such flag.

RESOLUTION:
The code is modified to display the CDS flag for disabled CDS disk groups.

* 3164792 (Tracking ID: 1783763)

SYMPTOM:
In a VVR environment, the vxconfigd(1M) daemon may hang during a configuration 
change operation. The following stack trace is observed:
delay
vol_rv_transaction_prepare
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
volsioctl
vols_ioctl
...

DESCRIPTION:
Incorrect serialization primitives are used. This causes the vxconfigd(1M)
daemon to hang.

RESOLUTION:
The code is modified to use the correct serialization primitives.

* 3164793 (Tracking ID: 3015181)

SYMPTOM:
I/O can hang on all the nodes of a cluster when the complete non-Active/Active
(A/A) class of storage is disconnected. The problem is CVM specific.

DESCRIPTION:
The issue occurs because the CVM-DMP protocol does not progress any further 
when the ioctls on the corresponding DMP metanodes fail. As a result, all 
hosts hold the I/Os forever.

RESOLUTION:
The code is modified to complete the CVM-DMP protocol when any of the ioctls 
on the DMP metanodes fail.

* 3164874 (Tracking ID: 2986596)

SYMPTOM:
Disk groups imported with a mix of standard and clone Logical Unit Numbers
(LUNs) may lead to data corruption.

DESCRIPTION:
The vxdg(1M) import operation should not allow mixing of clone and non-clone
LUNs, since it may result in data corruption if the clone copy is not
up-to-date. The vxdg(1M) import code was going ahead with clone LUNs when the
corresponding standard LUNs were unavailable on the same host.

RESOLUTION:
The code is modified so that the vxdg(1M) import operation does not pick up
the clone disks in the above case, which prevents a mixed disk group import.
The import fails if a partial import is not allowed based on the other options
specified during the import.

* 3164880 (Tracking ID: 3031796)

SYMPTOM:
When a snapshot is reattached using the vxsnap reattach command-line interface
(CLI), the operation fails with the following error message:
"VxVM vxplex ERROR V-5-1-6616 Internal error in get object. Rec <test_snp>"

DESCRIPTION:
When a snapshot is reattached to the volume, the volume manager checks the 
consistency by locking all the related snapshots. If any related snapshots are 
not available the operation fails.

RESOLUTION:
The code is modified to ignore any inaccessible snapshot. This prevents any 
inconsistency during the operation.

* 3164881 (Tracking ID: 2919720)

SYMPTOM:
The vxconfigd(1M) daemon dumps core in the rec_lock1_5() function. The 
following stack trace is observed:
rec_lock1_5()
rec_lock1()
rec_lock()
client_trans_start()
req_vol_trans()
request_loop()
main()

DESCRIPTION:
During any configuration changes in VxVM, the vxconfigd(1M) daemon locks all
the objects involved in the operations to avoid any unexpected modification.
Some objects that do not belong to the current transaction are not handled
properly. This results in a core dump. This case is particularly observed
during snapshot operations on cross disk group linked-volume snapshots.

RESOLUTION:
The code is modified to avoid the locking of the records which are not yet a 
part of the committed VxVM configuration.

* 3164883 (Tracking ID: 2933476)

SYMPTOM:
The vxdisk(1M) command resize operation fails with the following generic error 
messages that do not state the exact reason for the failure:
VxVM vxdisk ERROR V-5-1-8643 Device 3pardata0_3649: resize failed: Operation is 
not supported.

DESCRIPTION:
The disk-resize operation fails in the following cases:
1. When shared disk has simple or nopriv format.
2. When GPT (GUID Partition Table) labeled disk has simple or sliced format.
3. When the Cross-platform Data Sharing (CDS) disk is part of a disk group
that has a version less than 160, and the disk is resized to greater than 1 TB.

RESOLUTION:
The code is modified to enhance disk resize failure messages.

* 3164884 (Tracking ID: 2935771)

SYMPTOM:
In a Veritas Volume Replicator (VVR) environment, the rlinks disconnect after 
switching the master node.

DESCRIPTION:
Sometimes switching a master node on the primary node can cause the rlinks to 
disconnect. The vradmin repstatus command displays "paused due to network 
disconnection" as the replication status. VVR uses a connection to check if the 
secondary node is alive. The secondary node responds to these requests by 
replying back, indicating that it is alive. On a master node switch, the old
master node fails to close this connection with the secondary node. Thus, after
the master node switch, both the old master node and the new master node send
requests to the secondary node. This causes a mismatch of connection numbers on
the secondary node, and the secondary node does not reply to the requests of
the new master node. This causes the rlinks to disconnect.

RESOLUTION:
The code is modified to close the connection of the old master node with the 
secondary node, so that it does not send the connection requests to the 
secondary node.

* 3164911 (Tracking ID: 1901838)

SYMPTOM:
After the installation of a license key that enables multi-pathing, the state
of the controller is shown as DISABLED in the command-line interface (CLI)
output of the vxdmpadm(1M) command.

DESCRIPTION:
When the multi-pathing license key is installed, the state of the active paths 
of the Logical Unit Number (LUN) is changed to the ENABLED state. However, the 
state of the controller is not updated. As a result, the state of the 
controller is shown as DISABLED in the CLI output for the vxdmpadm(1M) command.

RESOLUTION:
The code is modified so that the states of the controller and the active LUN 
paths are updated when the multi-pathing license key is installed.

* 3164916 (Tracking ID: 1973983)

SYMPTOM:
Relocation fails with the following error when the Data Change Object (DCO) 
plex is in a disabled state: VxVM vxrelocd ERROR V-5-2-600 Failure recovering 
<DCO> in disk group <diskgroup>

DESCRIPTION:
When a mirror-plex is added to a volume using the "vxassist snapstart" command, 
the attached DCO plex can be in the DISABLED or DCOSNP state. If the enclosure
is disabled while recovering such DCO plexes, the plex can get into the
DETACHED/DCOSNP state. This can result in relocation failures.

RESOLUTION:
The code is modified to handle the DCO plexes in disabled state during 
relocation.

* 3178903 (Tracking ID: 2270686)

SYMPTOM:
The vxconfigd(1M) daemon on the master node hangs if there is a
reconfiguration (a node join followed by a leave operation) when a snapshot is
taken. The stack trace of vxsnap syncstart is as follows:

e_block_thread
pse_block_thread
pse_sleep_thread
volsiowait
volpvsiowait
vol_mv_do_resync
vol_object_ioctl
voliod_ioctl
volsioctl_real
volsioctl
vols_ioctl
rdevioctl
spec_ioctl
vnop_ioctl
vno_ioctl
common_ioctl
ovlya_addr_sc_flih_main
__ioctl
Ioctl
vol_syncstart_internal
do_syncstart_vols
vset_voliter_all
do_syncstart
main
__start

The join sio (during reconfiguration) hangs with the following stack trace:

e_block_thread()
pse_block_thread()
pse_sleep_thread()
vol_rwsleep_wrlock()
volopenter_exclusive()
volcvm_lockdg()
volcvm_joinsio_start()
voliod_iohandle()
voliod_loop()
vol_kernel_thread_init()
threadentry()

DESCRIPTION:
During the reconfiguration, the volop_rwsleep write lock enters the block mode. The 
snapshot operation takes the same lock in the read mode. The snapshot waits for 
a response from a node that has left the cluster. The leave processing is held 
up as the reconfiguration waits for the write lock in the block mode. Hence, 
the deadlock occurs.

RESOLUTION:
The code is modified so that reconfiguration takes the write lock in the non-
block mode. If it is not possible to get the write lock, then the 
reconfiguration is restarted. This results in the leave being detected and the 
appropriate action is taken, so that there is no expectation of any response 
from the nodes that have left the cluster. As a result, the deadlock is avoided.

* 3181315 (Tracking ID: 2898547)

SYMPTOM:
The vradmind process dumps core on the Veritas Volume Replicator (VVR) 
secondary site in a Clustered Volume Replicator (CVR) environment. The stack 
trace would look like:
__kernel_vsyscall
raise
abort
fmemopen
malloc_consolidate
delete
delete[]
IpmHandle::~IpmHandle
IpmHandle::events
main

DESCRIPTION:
When the log owner service group is moved across the nodes on the primary
site, it induces the deletion of the IpmHandle of the old log owner node, as
the IpmHandle of the new log owner node gets created. During the destruction of
the IpmHandle object, a pointer '_cur_rbufp' is not set to NULL, which can lead
to freeing memory that is already freed. This causes 'vradmind' to dump core.

RESOLUTION:
The code is modified for the destructor of IpmHandle to set the pointer to NULL 
after it is deleted.

* 3181318 (Tracking ID: 3146715)

SYMPTOM:
The rlinks do not connect with the Network Address Translation (NAT)
configurations on Little Endian Architecture (LEA).

DESCRIPTION:
On LEAs, the Internet Protocol (IP) address configured with the NAT mechanism 
is not converted from the host-byte order to the network-byte order. As a 
result, the address used for the rlink connection mechanism gets distorted and 
the rlinks fail to connect.

RESOLUTION:
The code is modified to convert the IP address to the network-byte order before 
it is used.

* 3182750 (Tracking ID: 2257733)

SYMPTOM:
During device discovery, the vxconfigd(1M) daemon allocates memory but does not
release it after use, causing a user memory leak. The Resident Set Size (RSS)
of the vxconfigd(1M) daemon thus keeps growing, and in the extreme case may
reach maxdsiz(5), causing the vxconfigd(1M) daemon to abort.

DESCRIPTION:
At some places in the device discovery code path, the buffer is not freed. This results in memory leaks.

RESOLUTION:
The code is modified to free the buffers.

* 3183145 (Tracking ID: 2477418)

SYMPTOM:
In a VVR environment, if the logowner node on the secondary site operates
under a low-memory situation, the system panics with the following stack trace:
vx_buffer_pack
vol_ru_data_pack
vol_ru_setup_verification
vol_rv_create_logsio
vol_rv_service_update
vol_rv_service_message_start
voliod_iohandle
voliod_loop
vol_kernel_thread_init
...

DESCRIPTION:
While receiving the updates on the secondary node, a retry of the receive 
operation is needed if VVR runs out of memory. In this situation, the retry
overlaps with updates for which the receive operation has already succeeded.
This leads to a panic.

RESOLUTION:
The code is modified to correctly handle the retry operations.

* 3189869 (Tracking ID: 2959733)

SYMPTOM:
When the device paths are moved across LUNs or enclosures, the vxconfigd(1M)
daemon can dump core, or data corruption can occur due to internal data
structure inconsistencies. The following stack trace is observed:
ddl_reconfig_partial ()
ddl_reconfigure_all ()
ddl_find_devices_in_system ()
find_devices_in_system ()
req_discover_disks ()
request_loop ()
main ()

DESCRIPTION:
When the device path configuration is changed after a planned or unplanned 
disconnection by moving only a subset of the device paths across LUNs or the 
other storage arrays (enclosures), DMP's internal data structures become
inconsistent. This causes the vxconfigd(1M) daemon to dump core. Also, in some
instances data corruption occurs due to incorrect LUN-to-path mappings.

RESOLUTION:
The vxconfigd(1M) code is modified to detect such situations gracefully and 
modify the internal data structures accordingly, to avoid a vxconfigd(1M) 
daemon core dump and the data corruption.

* 3224030 (Tracking ID: 2433785)

SYMPTOM:
In a CVM environment, the node join operation fails intermittently. The syslog
entries observed are as follows:
Jun 17 19:51:19 sfqapa31 vxvm:vxconfigd: V-5-1-8224 slave:disk 
1370256959.514.sfqapa40 not shared
Jun 17 19:51:19 sfqapa31 vxvm:vxconfigd: V-5-1-7830 cannot find disk 
1370256959.514.sfqapa40
Jun 17 19:51:19 sfqapa31 vxvm:vxconfigd: V-5-1-11092 cleanup_client: (Cannot 
find disk on slave node) 222

DESCRIPTION:
In a clustered environment, during the node join operation, the joining node
checks whether the disk is shared and tries to re-read the configuration data
from the disk. An improper check of a flag results in the node join failure.

RESOLUTION:
The code is modified to remove the incorrect check of the flag.

* 3227719 (Tracking ID: 2588771)

SYMPTOM:
The system panics when the multi-controller enclosure is disabled. The 
following stack trace is observed:

dmpCLARiiON_get_unit_path_report 
page_fault 
dmp_ioctl_by_bdev 
dmp_handle_delay_open 
gen_dmpnode_update_cur_pri 
dmp_start_failover 
gen_update_cur_pri 
dmp_update_cur_pri 
dmp_reconfig_update_cur_pri 
dmp_decipher_instructions 
dmp_process_instruction_buffer 
dmp_reconfigure_db 
dmp_compat_ioctl

DESCRIPTION:
When all the controllers associated with an enclosure are disabled one by one,
internal tasks are generated to update the current active path of the DMP
devices. A race condition between the disable operations and these update tasks
leads to the system panic.

RESOLUTION:
The code is modified to start the update task after all the required 
controllers are disabled.

* 3235365 (Tracking ID: 2438536)

SYMPTOM:
When a site is reattached after it is either manually detached or detached due
to storage inaccessibility, data corruption can result on the volume. The issue
is observed only on mirrored volumes (mirrored across different sites).

DESCRIPTION:
When a site is reattached, possibly after split-brain, it is possible that a 
site-consistent volume is updated on each site independently. For such 
instances, the tracking map needs to be recovered from each site to take care 
of the updates done from both the sites. These maps are stored in the Data
Change Object (DCO). Recovery of the DCO involves updating the map that tracks
the detached mirror (the detach map) from the active I/O tracking map. While
this is performed, the last block of regions in the detach map is updated from
the previous block of the active map. This corrupts the detach map.

RESOLUTION:
The code is modified to ensure that the pointer to the active map buffer is 
updated correctly even for the last block.

* 3238094 (Tracking ID: 3243355)

SYMPTOM:
The vxres_lvmroot(1M) utility, which restores the Logical Volume Manager (LVM)
root disk from the VxVM root disk, fails with the following error message:
VxVM vxres_lvmroot ERROR V-5-2-2493 Extending X MB LV <LVOL> for vg00

DESCRIPTION:
When the LVM root disk is restored from the VxVM root disk, the
vxres_lvmroot(1M) utility creates the Logical Volumes (LVs) based on Physical
Extents (PEs) of the size that corresponds to the VxVM volume. If the volume
size is less than the PE size, the vxres_lvmroot(1M) utility creates an LV of
size 0, which fails the entire restore command.

RESOLUTION:
The code is modified such that the vxres_lvmroot(1M) utility creates the LVs of 
at least the PE size, if the corresponding volume size is smaller than that of 
the PE size.

* 3240788 (Tracking ID: 3158323)

SYMPTOM:
In a VVR environment with multiple secondaries, if the SRL overflows for
rlinks at different times, the vxconfigd(1M) daemon may hang on the primary
node. The observed stack trace is as follows:
vol_commit_iowait_objects()
vol_commit_iolock_objects()
vol_ktrans_commit()
volconfig_ioctl()
volsioctl_real()
volsioctl()
vols_ioctl()

DESCRIPTION:
When the first rlink tries to go into the DCM log mode, the transaction times
out waiting for the Network I/O (NIO) drain. The transaction is retried but
times out again. The NIO drain does not happen because VVR dropped the
acknowledgements while in the transaction mode. This results in the
vxconfigd(1M) daemon hang.

RESOLUTION:
The code is modified to accept the acknowledgement even during the transaction 
phase.

* 3242839 (Tracking ID: 3194358)

SYMPTOM:
Continuous I/O error messages on the OS device and the DMP node can be seen in
the syslog for EMC Symmetrix not-ready (NR) logical units.

DESCRIPTION:
VxVM tries to online the EMC not-ready (NR) logical units. As part of the disk
online process, it tries to read the disk label from the logical unit. Because
the logical unit is NR the I/O fails. The failure messages are displayed in the
syslog file.

RESOLUTION:
The code is modified to skip the disk online for the EMC NR LUNs.

* 3245608 (Tracking ID: 3261485)

SYMPTOM:
The vxcdsconvert(1M) utility fails with the following error messages:

VxVM vxcdsconvert ERROR V-5-2-2777 <DA_NAME>: Unable to initialize the disk 
as a CDS disk
VxVM vxcdsconvert ERROR V-5-2-2780 <DA_NAME> : Unable to move volume 
<VOLUME_NAME> off of the disk
VxVM vxcdsconvert ERROR V-5-2-3120 Conversion process aborted

DESCRIPTION:
As part of the conversion process, the vxcdsconvert(1M) utility moves all the
volumes to some other disk before the disk is initialized with the CDS format.
On disks with VxVM formats other than CDS, the VxVM volume starts immediately
in the PUBLIC partition. If an LVM or file system (FS) signature was stamped on
the disk, it is not erased even after the data is migrated to some other disk
within the disk group. As part of the vxcdsconvert operation, when the disk is
destroyed, only the SLICED tags are erased but the partition table still
exists. The disk is then recognized to have a file system or LVM on the
partition where the PUBLIC region existed earlier. The vxcdsconvert(1M) utility
fails because the vxdisksetup(1M) command, which is invoked internally to
initialize the disk with the CDS format, prevents the disk initialization over
any foreign FS or LVM.

RESOLUTION:
The code is modified so that the vxcdsconvert(1M) utility forcefully invokes 
the vxdisksetup(1M) command to erase any foreign format.

* 3247983 (Tracking ID: 3248281)

SYMPTOM:
When the vxdisk scandisks or vxdctl enable commands are run consecutively,
the following error is displayed:
VxVM vxdisk ERROR V-5-1-0 Device discovery failed.

DESCRIPTION:
The device discovery failure occurs because in some cases the variable that is 
passed to the OS specific function is not set properly.

RESOLUTION:
The code is modified to set the correct variable before the variable is passed 
to the OS specific function.

* 3253306 (Tracking ID: 2876256)

SYMPTOM:
The vxdisk set mediatype command fails with the new naming scheme. The 
following error message is displayed:
VxVM vxdisk ERROR V-5-1-12952 Device <new disk name> not in configuration or 
associated with DG <dg name>
For example, the vxdisk set mediatype command fails when the naming scheme is
set to new and the command is run with the newly named disk.

DESCRIPTION:
When the vxdisk(1M) command to set the media type is run on a newly named
disk, it fails. The case of retrieving the corresponding disk access (da) name
for the new name is not handled.

RESOLUTION:
The code is modified so that if the new name is not found, the corresponding
da name is retrieved in the set-mediatype code path.
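
A sketch of the fixed behavior under the new naming scheme, reusing an
enclosure-based device name that appears elsewhere in this document; the
mediatype value is illustrative:

# vxdisk set emc_clariion0_338 mediatype=ssd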

* 3256806 (Tracking ID: 3259926)

SYMPTOM:
The vxdmpadm(1M) command fails to enable the paths when the '-f' option is 
provided. The following error message is displayed:
"VxVM vxdmpadm ERROR V-5-1-2395 Invalid arguments"

DESCRIPTION:
When the '-f' option is provided, the vxdmpadm enable command displays an
error message and fails to enable the paths that were disabled previously. This
occurs because an improper argument value is passed to the respective function.

RESOLUTION:
The code is modified so that the vxdmpadm enable command successfully enables 
the paths when the '-f' option is provided.
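
A sketch of the fixed behavior; the path operand and the path name used here
are assumptions:

# vxdmpadm -f disable path=c4t0d0
# vxdmpadm -f enable path=c4t0d0
(previously the enable command failed with V-5-1-2395)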

Patch ID: PHCO_43295, PHKL_43296

* 3057916 (Tracking ID: 3056311)

SYMPTOM:
The following problems can be seen on disks that are initialized with the
5.1SP1 listener and used with older releases such as 4.1, 5.0, and 5.0.1:
1. Creation of a volume fails on a disk, indicating that insufficient space is
available.
2. Data corruption is seen. A CDS backup label signature appears within the
PUBLIC region data.
3. Disks greater than 1TB in size appear "online invalid" on older releases.

DESCRIPTION:
The VxVM listener can be used to initialize boot disks and data disks for use
with older VxVM releases. For example, the 5.1SP1 listener can be used to
initialize disks for all previous VxVM releases such as 5.0.1, 5.0, and 4.1.
From 5.1SP1 onwards, VxVM always uses fabricated geometry when initializing a
disk with the CDS format. Older releases such as 4.1, 5.0, and 5.0.1 use raw
geometry and do not honor LABEL geometry. Hence, if a disk is initialized
through the 5.1SP1 listener, the disk is stamped with fabricated geometry. When
such a disk is used with older VxVM releases such as 5.0.1, 5.0, or 4.1, there
can be a mismatch between the stamped geometry (fabricated) and the in-memory
geometry (raw). If the on-disk cylinder size is smaller than the in-memory
cylinder size, data corruption issues can occur. To prevent any data corruption
issues, disks initialized through the listener for older releases need to use
the older CDS format with raw geometry.
Also, if the disk size is >= 1TB, 5.1SP1 VxVM initializes the disk with the CDS
EFI format, which older releases such as 4.1, 5.0, and 5.0.1 do not understand.

RESOLUTION:
From 5.1SP1 onwards, disks initialized through the HP-UX listener for use with
older releases such as 4.1, 5.0, and 5.0.1 are initialized with raw geometry.
Also, the initialization through the HP-UX listener of a disk whose size is
greater than 1TB fails.

* 3175698 (Tracking ID: 3175778)

SYMPTOM:
Provide support for VxVM 5.1SP1 Ignite-UX integration.

DESCRIPTION:
The new support is added to the "VxVM listener" binary to work with the raw
geometry changes along with the support for VxVM 5.1SP1 Ignite-UX integration.

RESOLUTION:
The new support is added to the "VxVM listener" binary. The what(1) string for
the new version of the "VxVM listener" is "1.33".

* 3184253 (Tracking ID: 3184250)

SYMPTOM:
Compatibility issues with VxVM patch PHCO_43295.

DESCRIPTION:
The VxVM command patch requires the VxVM kernel patch to be shipped along with it.

RESOLUTION:
A dummy kernel patch is built on top of the 5.1SP1RP2 patch.

Patch ID: PHCO_43065, PHKL_43064

* 2070079 (Tracking ID: 1903700)

SYMPTOM:
vxassist remove mirror does not work if nmirror and alloc are specified,
giving the error "Cannot remove enough mirrors".

DESCRIPTION:
During the remove mirror operation, VxVM does not correctly analyze the 
plexes, which causes the failure.

RESOLUTION:
Necessary code changes have been done so that vxassist works properly.

* 2205574 (Tracking ID: 1291519)

SYMPTOM:
After two VVR migrate operations, vrstat command does not output any 
statistics.

DESCRIPTION:
Migrate operation results in RDS (Replicated Data Set) information 
getting updated on both primary and secondary side vradmind. After multiple 
migrate operations, stale handle to older RDS is used by vrstat to retrieve 
statistics resulting in the failure.

RESOLUTION:
Necessary code changes have been made to ensure that the correct and updated
RDS handle is used by vrstat to retrieve statistics.

* 2227908 (Tracking ID: 2227678)

SYMPTOM:
In the case of multiple secondaries, if one secondary has overflowed and is in
resync mode and another secondary then overflows, the rlink corresponding to
the latter secondary gets DETACHED and is not able to connect again. Even a
complete resynchronization does not recover the detached rlink.

DESCRIPTION:
When the latter rlink overflows, it is detached. At the time of the detach, the
rlink goes into an incorrect and unrecoverable state, which prevents it from
ever connecting again.

RESOLUTION:
Changes have been made to ensure that when a resync is ongoing for one of the
rlinks and another rlink overflows, it gets detached and a valid state is
maintained for that rlink. Hence, full synchronization at a later time can
recover the rlink completely.

* 2427560 (Tracking ID: 2425259)

SYMPTOM:
The vxdg join operation fails with the error: join failed : Invalid 
attribute specification.

DESCRIPTION:
For a disk name containing a '/' character, e.g. cciss/c0d1, the join 
operation fails to parse the disk name and returns the error.

RESOLUTION:
Code changes are made to handle special characters in disk name.

* 2442751 (Tracking ID: 2104887)

SYMPTOM:
vxdg import fails with following ERROR message for cloned device import, when
original diskgroup is already imported with same DGID.
# vxdg -Cfn clonedg -o useclonedev=on -o tag=tag1 import testdg
VxVM vxdg ERROR V-5-1-10978 Disk group testdg: import failed: Disk group exists
and is imported

DESCRIPTION:
In the case of a clone device import, vxdg import without the "-o updateid"
option fails if the original DG is already imported. The error message may be
interpreted as a disk group with the same name being imported, while actually
it is the dgid that is duplicated, not the dgname.

RESOLUTION:
The vxdg utility is modified to return a better error message for a cloned DG
import. It directs you to the system log for details. Details of the
conflicting dgid and a suggestion to use "-o updateid" are added to the system log.
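
Based on the suggestion now added to the system log, a hedged example of the
recommended import for this situation (names taken from the symptom above):

# vxdg -Cfn clonedg -o useclonedev=on -o updateid -o tag=tag1 import testdg

The "-o updateid" option assigns a new dgid to the imported clone, avoiding
the conflict with the dgid of the already imported original.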

* 2442827 (Tracking ID: 2149922)

SYMPTOM:
Request to record the diskgroup import and deport events in the 
/var/adm/syslog/syslog.log file.
The following type of message can be logged in syslog:
vxvm:vxconfigd: V-5-1-16254 Disk group import of <dgname> succeeded.

DESCRIPTION:
With a diskgroup import or deport, an appropriate success message, or a 
failure message with the cause of the failure, should be logged.

RESOLUTION:
Code changes are made to log diskgroup import and deport events in 
syslog.

* 2485261 (Tracking ID: 2354046)

SYMPTOM:
dgcfgrestore man page examples are incorrect.

DESCRIPTION:
Example 1:
<snip>
Restore the VxVM configuration information for the VxVM disk disk01 on the
replacement disk device c0t7d0 that was part of the single-disk disk group
onediskdg using the default configuration file /etc/vxvmconf/dg00.conf:

      ## Restore configuration data
      # dgcfgrestore -n onediskdg c0t7d0	
</snip>

The last step of this example will not work. If we create a disk group with the
same name, the default configuration file /etc/vxvmconf/<dg_name>.conf is moved
to /etc/vxvmconf/<dg_name>.conf.old and the new configuration is written to the
default configuration file. The -n option will read the configuration from the
default configuration file, so dgcfgrestore will fail. 

Example 2:
<snip>
The final example restores the VxVM configuration information for the VxVM disk,
disk01 for failed disk device c0t2d0, which was part of a single-disk disk
group, foodg, to disk, c0t3d0. The default configuration file,
/etc/vxvmconf/foodg.conf is used. 

# Initialize c0t3d0 for use by VxVM 
/etc/vx/bin/vxdisksetup -i c0t3d0 
# Create disk group using this disk 
vxdg init onediskdg disk01=c0t3d0 
# Restore configuration data 
dgcfgrestore -n foodg -o c0t3d0 c0t2d0 
</snip>

In this example, to restore the configuration of foodg on the replacement disk,
we need to create a new disk group with the same name, i.e. foodg. The '-o'
option refers to the old disk media name. In the last step the -o option is
used with the new disk media name, so dgcfgrestore will fail.

RESOLUTION:
This document defect is addressed by a documentation change. For Example 1, the
-f option is used to read the disk group configuration from the
/etc/vxvmconf/<dg_name>.conf.old file. For Example 2, the new disk group name
is changed to foodg and the -o option is used with the old disk media name.
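
A hedged sketch of the corrected examples, following the option usage
described in the resolution (verify the exact operand order against the
updated dgcfgrestore man page):

      ## Example 1: read the configuration from the renamed backup file
      # dgcfgrestore -f /etc/vxvmconf/onediskdg.conf.old -n onediskdg c0t7d0

      ## Example 2: use the corrected group name; pass the old disk to -o
      # /etc/vx/bin/vxdisksetup -i c0t3d0
      # vxdg init foodg disk01=c0t3d0
      # dgcfgrestore -n foodg -o c0t2d0 c0t3d0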

* 2492568 (Tracking ID: 2441937)

SYMPTOM:
vxconfigrestore(1M) command fails with the following error...

"The source line number is 1.
awk: Input line 22 | <Enclosure Name> cannot be longer than 3,000 bytes."

DESCRIPTION:
In the function where the disk attributes are read from the backup, they are
collected in the variable "$disk_attr". The value of "$disk_attr" can be a line
longer than 3,000 bytes. This variable is then parsed with the awk(1) command,
which hits the awk(1) input-line limit of 3,000 bytes.

RESOLUTION:
The code is modified to replace the awk(1) command with the cut(1) command,
which does not have this limitation.
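
A minimal sketch of the substitution (the field position shown is
hypothetical):

# Old: fails once $disk_attr exceeds the 3,000-byte awk(1) input-line limit
echo "$disk_attr" | awk '{print $2}'
# New: cut(1) has no such line-length limitation
echo "$disk_attr" | cut -d' ' -f2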

* 2495590 (Tracking ID: 2492451)

SYMPTOM:
If the Veritas Volume Manager (VxVM) is not configured, 
the following messages are observed in syslog file:
  vxesd[1554]: Event Source daemon started 
  vxesd[1554]: HBA API Library Loaded 
  vxesd[1554]: Registration with DMP failed

DESCRIPTION:
The presence of the /etc/vx/reconfig.d/state.d/install-db file indicates that 
VxVM is not configured.
The startup script /sbin/init.d/vxvm-startup2 does not check for the presence 
of the install-db file and starts the vxesd(1M) daemon anyway.

RESOLUTION:
The code has been modified to check for the presence of the install-db file 
and start the vxesd(1M) daemon accordingly.
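
A minimal sketch of the added guard in /sbin/init.d/vxvm-startup2, assuming
the paths given in the description:

if [ -f /etc/vx/reconfig.d/state.d/install-db ]; then
        # VxVM is not configured; do not start the daemon
        exit 0
fi
# ... otherwise continue and start the vxesd(1M) daemon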

* 2513457 (Tracking ID: 2495351)

SYMPTOM:
The VxVMconvert utility was unable to migrate data across platforms from native 
LVM configurations.

DESCRIPTION:
VxVMconvert, used for migration of customer data, did not take into 
consideration the varied LVM configuration scenarios, which hindered its 
ability to perform a successful conversion. For example, an HP-UX Volume Group 
with a large number of Logical and Physical Volumes requires a significant 
metadata size to be stored on the disk for a successful conversion. However, 
VxVMconvert failed to calculate the metadata size based on the configuration 
and instead used static, smaller sizes, thereby leading to failure in the 
migration of data.

RESOLUTION:
The VxVMconvert utility is enhanced to handle different LVM configuration 
scenarios when migrating data across platforms.

* 2515137 (Tracking ID: 2513101)

SYMPTOM:
When VxVM is upgraded from 4.1MP4RP2 to 5.1SP1RP1, the data on CDS disk gets
corrupted.

DESCRIPTION:
When CDS disks are initialized with VxVM version 4.1MP4RP2, the number of 
cylinders is calculated based on the disk raw geometry. If the calculated 
number of cylinders exceeds the Solaris VTOC limit (65535), an unsigned 
integer overflow causes a truncated value of the number of cylinders to be 
written in the CDS label.
After the VxVM is upgraded to 5.1SP1RP1, the CDS label gets wrongly written in
the public region, leading to the data corruption.

RESOLUTION:
The code changes are made to suitably adjust the number of tracks and heads so 
that the calculated number of cylinders falls within the Solaris VTOC limit.
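
The truncation can be seen with simple arithmetic, assuming the cylinder count
is stored in a 16-bit field (which the 65535 limit suggests):

      70000 cylinders mod 65536 = 4464 cylinders

A disk whose raw geometry yields 70000 cylinders would thus be stamped with
only 4464 cylinders, which is consistent with the mislabeling described above.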

* 2531224 (Tracking ID: 2526623)

SYMPTOM:
A memory leak is detected in the CVM DMP messaging phase, with the following 
message:
NOTICE: VxVM vxio V-5-3-3938 vol_unload(): not all memory has been freed 
(volkmem=424)

DESCRIPTION:
During CVM-DMP messaging, memory was not being freed in a specific scenario.

RESOLUTION:
Necessary code changes have been made to take care of the memory deallocation.

* 2560539 (Tracking ID: 2252680)

SYMPTOM:
When a paused VxVM (Veritas Volume Manager task) is aborted using 'vxtask abort'
command, it does not get aborted appropriately. It continues to show up in the
output of 'vxtask list' command and the corresponding process does not get killed.

DESCRIPTION:
As appropriate signal is not sent to the paused VxVM task which is being
aborted, it fails to abort and continues to show up in the output of 'vxtask
list' command. Also, its corresponding process does not get killed.

RESOLUTION:
Code changes are done to send an appropriate signal to paused tasks to abort them.
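
For reference, the sequence that previously left a stale task behind (the
task id 167 is hypothetical):

# vxtask pause 167
# vxtask abort 167
# vxtask list

Before the fix, the paused task still appeared in the vxtask list output and
its process survived; with the fix, the abort signals the paused task.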

* 2567623 (Tracking ID: 2567618)

SYMPTOM:
The VRTSexplorer dumps core with a segmentation fault in 
checkhbaapi/print_target_map_entry. The stack trace is observed as follows:
print_target_map_entry()
check_hbaapi()
main()
_start()

DESCRIPTION:
The checkhbaapi utility uses the HBA_GetFcpTargetMapping() API which returns 
the current set of mappings between the OS and the Fibre Channel Protocol (FCP) 
devices for a given Host Bus Adapter (HBA) port. The maximum limit for mappings 
is set to 512 and only that much memory is allocated. When the number of 
mappings returned is greater than 512, the function that prints this 
information tries to access the entries beyond that limit, which results in 
core dumps.

RESOLUTION:
The code is modified to allocate enough memory for all the mappings returned by 
the HBA_GetFcpTargetMapping() API.

* 2570988 (Tracking ID: 2560835)

SYMPTOM:
On the master, I/Os and vxconfigd hang when a slave is rebooted under 
heavy I/O load.

DESCRIPTION:
When a slave leaves the cluster without sending the DATA ack message to the 
master, the slave's I/Os get stuck on the master because their logend 
processing cannot be completed. At the same time, cluster reconfiguration 
takes place as the slave left the cluster. In the CVM (Cluster Volume Manager) 
reconfiguration code path these I/Os are aborted in order to proceed with the 
reconfiguration and recovery. But if local I/Os on the master go to the logend 
queue after the logendq is aborted, these local I/Os get stuck forever in the 
logend queue, leading to a permanent I/O hang.

RESOLUTION:
During CVM reconfiguration, and RVG (Replicated Volume Group) 
recovery later, no I/Os will be put into the logendq.

* 2576605 (Tracking ID: 2576602)

SYMPTOM:
The listtag option of the vxdg command gives results even when executed with 
the wrong syntax.

DESCRIPTION:
The correct syntax, as per the vxdg help, is "vxdg listtag [diskgroup ...]".
However, when executed with the wrong syntax, "vxdg [-g diskgroup] listtag", it
still gives results.

RESOLUTION:
Use the correct syntax as per the help for the vxdg command.
The command has been modified from the 6.0 release onwards to display an error
and usage message when the wrong syntax is used.
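
For clarity, both invocations (mydg is a hypothetical diskgroup name):

# vxdg listtag mydg
(correct syntax; lists the tags of the named diskgroup)
# vxdg -g mydg listtag
(wrong syntax; from the 6.0 release onwards this displays an error and a
usage message)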

* 2613584 (Tracking ID: 2606695)

SYMPTOM:
Panic in CVR (Clustered Volume Replicator) environment while performing I/O 
Operations.

Panic stack traces might look like:
1)
vol_rv_add_wrswaitq 
vol_get_timespec_latest 
vol_kmsg_obj_request 
vol_kmsg_request_receive 
vol_kmsg_receiver 
kernel_thread

2)
vol_rv_mdship_callback
vol_kmsg_receiver
kernel_thread

DESCRIPTION:
In CVR, the logclient requests METADATA information from the logowner node to 
perform write operations. The logowner node looks for any duplicate messages 
before adding the requests to the queue for processing. When a duplicate 
request arrives, the logowner tries to copy the data from the original I/O 
request and responds to the logclient with the METADATA information. During 
this process, a panic can occur 

i) while copying the data, as the code handling the copy is not properly locked;
ii) if the logclient receives inappropriate METADATA information because of an 
improper copy.

RESOLUTION:
The code is modified to use appropriate conditions and locks while copying 
the data from the original I/O requests for the duplicates.

* 2613596 (Tracking ID: 2606709)

SYMPTOM:
SRL overflow and CVR reconfiguration lead to the reconfiguration hang.

DESCRIPTION:
In the reported problem there are 6 RVGs, each with 16 data volumes, although 
the problem can occur with more than one RVG configured. Both the master and 
the slave nodes are performing I/O. The slave node is rebooted, which triggers 
a reconfiguration. All 6 RVGs are doing I/O, which fully utilizes the RVIOMEM 
pool (the memory pool used for RVG I/Os). Due to the node leave, the I/Os on 
all the RVGs come to a halt, waiting for the recovery flag to be set by the 
reconfiguration code path. Some pending I/Os in all the RVGs are still kept in 
the queue, due to holes in the SRL caused by the node leave. The RVIOMEM pool 
is completely used by 3 of the RVGs (600+ I/Os) which are still doing I/O.
In the reconfiguration code, one RVG is picked to abort all the pending I/Os 
in its queue, and then the code waits for the active I/Os to complete. Some 
I/Os are still waiting for memory from the RVIOMEM pool, but the other active 
RVGs are not releasing any memory. Until all the pending I/Os are serviced, 
the code does not move forward to abort the I/Os, so the reconfiguration never 
completes.

RESOLUTION:
Instead of going RVG by RVG to abort I/Os and start the recovery, the logic is 
changed to abort the I/Os in all the RVGs first, and to send the recovery 
message for all the RVGs later, after the iocount drains to 0. This avoids a 
hang caused by some RVGs holding the memory.

* 2616006 (Tracking ID: 2575172)

SYMPTOM:
The reconfigd thread is hung waiting for the I/O to drain.

DESCRIPTION:
While doing a CVR (Cluster Volume Replicator) reconfiguration, RVG
(Replicated Volume Group) recovery is started. The recovery can get stuck in a 
DCM (Data Change Map) read while flushing the SRL (Storage Replicator Log). 
The flush operation creates a large number (1000+) of threads. When the system 
memory is very low and a memory allocation fails, failing to reduce the thread 
count leads to the hang.

RESOLUTION:
The number_of_children count is reset to 0 whenever the I/O creation fails 
due to a memory allocation failure.

* 2622029 (Tracking ID: 2620556)

SYMPTOM:
I/O hangs on the primary after an SRL overflow, during the SRL flush and an 
rlink connect/disconnect.

DESCRIPTION:
As part of an rlink connect or disconnect, the RVG is serialized to complete 
the connection or disconnection. I/O throttling is normal during the SRL flush, 
due to memory pool pressure or reaching the maximum throttle limit. During the 
serialization, the I/O is throttled to complete the DCM flush. The remote I/Os 
are kept in the throttleq while throttling is in effect.

Due to the I/O serialization, the throttled I/O never gets flushed, and 
because of that the I/O never completes.

RESOLUTION:
If the serialization is successful, the throttleq is flushed immediately. This 
makes sure the remote I/Os get retried again in the serialization code 
path.

* 2622032 (Tracking ID: 2620555)

SYMPTOM:
During a CVM reconfig, the RVG waits for the iocount to go to 0 to start the 
RVG recovery and complete the reconfiguration.

DESCRIPTION:
In CVR, a node leave triggers a reconfiguration. The reconfiguration code path 
initiates the RVG recovery of all the shared diskgroups. The recovery is 
needed to flush the SRL (shared by all the nodes) to the data volumes, to 
avoid any writes by the leaving node going missing from the data volumes. This 
recovery involves reading the data from the SRL and copying it to the data 
volumes. The flush may take considerable time depending on the disk response 
time and the size of the SRL region that must be flushed. During the recovery 
a flag is set on the RVG to prevent any new I/O.

In this particular case, the recovery takes 30 minutes. During this time, 
another node leave happens, which triggers a second reconfiguration. The 
second reconfiguration, before it triggers another recovery, waits for the I/O 
count to go to zero by setting the RECOVER flag on the RVG.

The first RVG recovery clears the RECOVER flag after 30 minutes, once the SRL 
flush completes. Since this is the same flag set by the second 
reconfiguration, and the I/O resumed once the RECOVER flag was unset, the 
second reconfiguration waits indefinitely for the I/O count to go to zero and 
is stuck forever.

RESOLUTION:
If the RECOVER flag is already set, do not keep waiting for the iocount to 
become zero in the reconfiguration code path. There is no need for another 
recovery if the second reconfiguration starts before the first recovery 
completes.

* 2626742 (Tracking ID: 2626741)

SYMPTOM:
vxassist, when used with "-o ordered" and "mediatype:hdd" during a striped
volume make operation, does not maintain the disk order.

DESCRIPTION:
vxassist, when invoked with the "-o ordered" and "mediatype:hdd" options
while creating a striped volume, does not maintain the disk order provided by
the user. The first stripe of the volume should correspond to the first disk
provided by the user.

RESOLUTION:
The code is rectified to use the disks in the user-specified disk order.

* 2626745 (Tracking ID: 2626199)

SYMPTOM:
"vxdmpadm list dmpnode" command shows the path-type value as "primary/secondary"
for a LUN in an Active-Active array as below when it is suppose to be NULL value.

<snippet starts>
dmpdev          = c6t0d3 
state           = enabled 
...
array-type      = A/A 
###path         = name state type transport ctlr hwpath aportID aportWWN attr 
path            = c23t0d3 enabled(a) secondary FC c30 2/0/0/2/0/0/0.0x50060e800 
5c0bb00 - - - 
</snippet ends>

DESCRIPTION:
For a LUN under Active-Active array the path-type value is supposed to be NULL.
In this specific case other commands like "vxdmpadm getsubpaths dmpnode=<>" were
showing correct (NULL) value for path-type.

RESOLUTION:
The "vxdmpadm list dmpnode" code path failed to initialize the path-type
variable and by default set path-type to "primary or secondary" even for 
Active-Active array LUN's. This is fixed by initializing the path-type variable 
to NULL.

* 2627000 (Tracking ID: 2578336)

SYMPTOM:
I/O error is encountered while accessing the cdsdisk.

DESCRIPTION:
This issue is seen only on defective cdsdisks, where the s2 partition size in 
the sector 0 label is less than the sum of the public region offset and the 
public region length.

RESOLUTION:
A solution has been implemented to rectify the defective cdsdisk at the time 
it is onlined.

* 2627004 (Tracking ID: 2413763)

SYMPTOM:
vxconfigd, the VxVM daemon dumps core with the following stack:

ddl_fill_dmp_info
ddl_init_dmp_tree
ddl_fetch_dmp_tree
ddl_find_devices_in_system
find_devices_in_system
mode_set
setup_mode
startup
main
__libc_start_main
_start

DESCRIPTION:
Dynamic Multi Pathing node buffer declared in the Device Discovery Layer was not 
initialized. Since  the node buffer is local to the function, an explicit 
initialization is required before copying another buffer into it.

RESOLUTION:
The node buffer is appropriately initialized using memset() to address the 
coredump.

* 2641932 (Tracking ID: 2348199)

SYMPTOM:
vxconfigd dumps core during Disk Group import with the following function call
stack 

strcmp+0x60 ()
da_find_diskid+0x300 ()
dm_get_da+0x250 ()
ssb_check_disks+0x8c0 ()
dg_import_start+0x4e50 ()
dg_reimport+0x6c0 ()
dg_recover_all+0x710 ()
mode_set+0x1770 ()
setup_mode+0x50 ()
startup+0xca0 ()
main+0x3ca0 ()

DESCRIPTION:
During Disk Group import, vxconfigd performs certain validations on the disks.
During one such validation, it iterates through the list of available disk
access records to find a match with a given disk media record. It does a string
comparison of the disk IDs in the two records to find a match. Under certain
conditions, the disk ID for a disk access record may have a NULL value.
vxconfigd dumps core when it passes this to strcmp() function.

RESOLUTION:
Code was modified to check for disk access records with NULL value and skip them
from disk ID comparison.

* 2646417 (Tracking ID: 2556781)

SYMPTOM:
In a cluster environment, importing a disk group which is already imported on
another node results in a wrong error message like the one given below:
VxVM vxdg ERROR V-5-1-10978 Disk group <diskgroup name>: import failed:
Disk is in use by another host

DESCRIPTION:
When VxVM is translating a given disk group name to disk group id during the
disk group import process, an error return indicating that the disk group is in
use by another host may be overwritten by a wrong error.

RESOLUTION:
The source code has been changed to handle the return value in a correct way.

* 2652161 (Tracking ID: 2647975)

SYMPTOM:
A Serial Split Brain (SSB) condition caused the Cluster Volume Manager (CVM) 
Master Takeover to fail. The following vxconfigd debug output was observed 
when the issue occurred:

VxVM vxconfigd NOTICE V-5-1-7899 CVM_VOLD_CHANGE command received
V-5-1-0 Preempting CM NID 1
VxVM vxconfigd NOTICE V-5-1-9576 Split Brain. da id is 0.5, while dm id is 0.4 
for
dm cvmdgA-01
VxVM vxconfigd WARNING V-5-1-8060 master: could not delete shared disk groups
VxVM vxconfigd ERROR V-5-1-7934 Disk group cvmdgA: Disabled by errors 
VxVM vxconfigd ERROR V-5-1-7934 Disk group cvmdgB: Disabled by errors 
...
VxVM vxconfigd ERROR V-5-1-11467 kernel_fail_join() :           Reconfiguration
interrupted: Reason is transition to role failed (12, 1)
VxVM vxconfigd NOTICE V-5-1-7901 CVM_VOLD_STOP command received

DESCRIPTION:
When a Serial Split Brain (SSB) condition is detected by the new CVM master, 
on Veritas Volume Manager (VxVM) versions 5.0 and 5.1 the default CVM 
behaviour causes the new CVM master to leave the cluster, which causes 
cluster-wide downtime.

RESOLUTION:
Necessary code changes have been made to ensure that when SSB is detected in a
diskgroup, CVM only disables that particular diskgroup and keeps the other
diskgroups imported during the CVM Master Takeover. With the fix applied, the
new CVM master does not leave the cluster.

* 2663673 (Tracking ID: 2656803)

SYMPTOM:
VVR (Veritas Volume Replicator) panics when vxnetd start/stop operations are 
invoked in parallel.
Panic stack trace might look like:

panicsys
vpanic_common
panic
mutex_enter()
vol_nm_heartbeat_free()
vol_sr_shutdown_netd()
volnet_ioctl()
volsioctl_real()
spec_ioctl()

DESCRIPTION:
vxnetd start and stop operations are not serialized. Hence a race condition 
and panic are hit if they run in parallel and access the shared resources 
without locks. The panic stack varies depending on where the resource 
contention is seen.

RESOLUTION:
Incorporated synchronization primitive to allow only either the vxnetd start or 
stop process to run at a time.

* 2677025 (Tracking ID: 2677016)

SYMPTOM:
The Veritas Event Manager (vxesd(1M)) daemon dumps core when the main thread
tries to close one of its threads (which holds a connection with the HP Event
Manager). The stack trace of the main thread is as follows:
_evmmiscFree+0 () from /usr/lib/hpux32/libevm.so
EvmConnDestroy+0x150 () from /usr/lib/hpux32/libevm.so
volhp_evm_stop+0x30 ()
register_evm_events+0x60 ()

DESCRIPTION:
When the main thread of vxesd encounters an error in processing client
requests, it starts closing the other threads. But when it starts closing the
EVM thread, the main thread destroys the connection used by the EVM thread,
and if there is a context switch to the EVM thread at this point, there is a
chance of vxesd dumping core.

RESOLUTION:
Code changes have been done to ensure that the EVM thread destroys its own
connection rather than the main thread doing it.

* 2690959 (Tracking ID: 2688308)

SYMPTOM:
When the re-import of a disk group fails during master takeover, all the
shared disk groups get disabled. It also results in the corresponding node
(the new master) leaving the cluster.

DESCRIPTION:
In Cluster Volume Manager, when the master goes down, the upcoming master 
tries to re-import the disk groups. If some error occurs while re-importing a 
disk group, it disables all the shared disk groups and the new master leaves 
the cluster. This may result in a cluster outage.

RESOLUTION:
Code changes are made to disable the disk group on which error occurred while
re-importing and continue import of the other shared disk groups.

* 2695226 (Tracking ID: 2648176)

SYMPTOM:
In a clustered volume manager environment, additional data synchronization is 
noticed during the reattach of a detached plex on a mirrored volume, even when 
there was no I/O on the volume after the mirror was detached. This behavior is 
seen only on mirrored volumes that have a version 20 DCO attached and are part 
of a shared diskgroup.

DESCRIPTION:
In a clustered volume manager environment, write I/Os issued on a mirrored 
volume from the CVM master node are tracked in a bitmap unnecessarily. The 
tracked bitmap is then used during detach to create the tracking map for 
detached plex. This results in additional delta between active plex and the 
detached plex. So, even when there are no I/Os after detach, the reattach will 
do additional synchronization between mirrors.

RESOLUTION:
The unnecessary bitmap tracking of write I/Os issued on a mirrored volume from 
the CVM master node is prevented, so the tracking map that gets created during 
detach always starts clean.

* 2695231 (Tracking ID: 2689845)

SYMPTOM:
Disks are seen in error state.
hitachi_usp-vm0_11 auto            -            -            error

DESCRIPTION:
When the data at the end of the first sector of the disk is the same as the 
MBR signature, Volume Manager misinterprets the data disk as an MBR disk. 
Accordingly, partitions are determined, but format determination fails for 
these fake partitions and the disk goes into the error state.

RESOLUTION:
Code changes are made to check the status field of the disk along 
with the MBR signature. Valid status fields for an MBR disk are 0x00 and 0x80.

* 2703035 (Tracking ID: 925653)

SYMPTOM:
Node join fails when CVMTimeout is set to value higher than 35 mins 
(approximately).

DESCRIPTION:
Node join fails due to an integer overflow for higher CVMTimeout values.

RESOLUTION:
Code changes are done to handle higher CVMTimeout values.
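
A hedged arithmetic check of the roughly 35-minute threshold, assuming the
timeout is converted to microseconds and held in a signed 32-bit integer:

      36 min = 36 x 60 x 1000000 us = 2160000000 us > 2147483647 (2^31 - 1)

Values above about 35.8 minutes would therefore wrap negative, causing the
join to fail.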

* 2705101 (Tracking ID: 2216951)

SYMPTOM:
The vxconfigd daemon core dumps in the chosen_rlist_delete() function and the 
following stack trace is displayed:

chosen_rlist_delete()
req_dg_import_disk_names()
request_loop()
main()

DESCRIPTION:
The vxconfigd daemon core dumps when it accesses a NULL pointer in the 
chosen_rlist_delete() function.

RESOLUTION:
The code is modified to handle the NULL pointer in the chosen_rlist_delete() 
function.

* 2706024 (Tracking ID: 2664825)

SYMPTOM:
The following two issues are seen when a cloned disk group having a mixture of
disks which are clones of disks initialized under VxVM version 4.x and 5.x is
imported.

(i) The following error will be seen without -o useclonedev=on -o updateid
options on 5.x environment with the import failure.

# vxdg -Cf import <disk_group_name>
VxVM vxdg ERROR <UMI-num> Disk group <disk_group_name>: import failed:
Disk group has no valid configuration copies

(ii) The following warning will be seen with -o useclonedev=on -o updateid
options on 5.x environment with the import success.

# vxdg -Cf -o useclonedev=on -o updateid import <disk-group-name>
VxVM vxdg WARNING <UMI-num> Disk <disk_name>: Not found, last known location: 
...

DESCRIPTION:
The vxconfigd, a VxVM daemon, imports a disk group having disks where all the
disks should be either cloned or standard(non-clone). If the disk group has a
mixture of cloned and standard devices, and user attempts to import the disk 
group -

(i) without "-o useclonedev=on" options, only standard disks are considered
for import. The import would fail if none of the standard disks have a valid
configuration copy. 

(ii) with "-o useclonedev=on" option, the import would succeed, but the standard
disks go missing because only clone disks are considered for import.

A disk initialized by a VxVM version earlier than 5.x has no concept of the
Unique Disk Identifier (UDID), which helps to identify a cloned disk. Such a
disk cannot be flagged as a cloned disk even if it is indeed a cloned disk.
This results in issues (i) and (ii).

RESOLUTION:
The source code is modified to set the appropriate flags so that the disks 
initialized in VxVM 4.X will be recognized as "cloned", and both of the issues 
(i) and (ii) will be avoided.

* 2706027 (Tracking ID: 2657797)

SYMPTOM:
Starting a RAID5 volume fails, when one of the sub-disks in the RAID5 column
starts at an offset greater than 1TB.

Example:
# vxvol -f -g dg1 -o delayrecover start vol1
VxVM vxvol ERROR V-5-1-10128  Unexpected kernel error in configuration update

DESCRIPTION:
VxVM uses an integer variable to store the starting block offset of a sub-disk
in a RAID5 column. This overflows when a sub-disk is located at an offset
greater than 2147483647 blocks (1TB) and results in failure to start the volume.

Refer to "sdaj" in the following example.

E.g.
v  RaidVol    -            DETACHED NEEDSYNC 64459747584 RAID   -        raid5
pl RaidVol-01 RaidVol    ENABLED  ACTIVE   64459747584 RAID   4/128    RW
[..]
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
sd DiskGroup101-01 RaidVol-01 DiskGroup101 0 1953325744 0/0 sdaa     ENA
sd DiskGroup106-01 RaidVol-01 DiskGroup106 0 1953325744 0/1953325744 sdaf
ENA             
sd DiskGroup110-01 RaidVol-01 DiskGroup110 0 1953325744 0/3906651488 sdaj
ENA

RESOLUTION:
VxVM code is modified to handle integer overflow conditions for RAID5 volumes.
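
The overflow is visible in the listing above: the subdisk sdaj sits in
column 0 at block offset 3906651488 (2 x 1953325744), which exceeds the
largest signed 32-bit value:

      3906651488 > 2147483647 (2^31 - 1)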

* 2706038 (Tracking ID: 2516584)

SYMPTOM:
There are many random directories not cleaned up in /tmp/, like 
vx.$RANDOM.$RANDOM.$RANDOM.$$ on system startup.

DESCRIPTION:
In general, the startup scripts should call quit(), which does the cleanup 
when errors are detected. The scripts were calling exit() directly instead of 
quit(), leaving some randomly created directories uncleaned.

RESOLUTION:
These scripts are restored to call quit() instead of exit() directly.
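
A minimal sketch of the intended pattern (variable names are illustrative):

tmpdir=/tmp/vx.$RANDOM.$RANDOM.$RANDOM.$$   # naming pattern from the symptom
quit()
{
        rm -rf "$tmpdir"        # clean up the scratch directory first
        exit $1
}
mkdir "$tmpdir" || exit 1
# ... on any later error, call quit <status> rather than exit <status>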

* 2730149 (Tracking ID: 2515369)

SYMPTOM:
vxconfigd(1M) can hang in the presence of EMC BCV devices in established
(bcv-nr) state with a call stack similar to the following is observed:

  inline biowait_rp
  biowait
  dmp_indirect_io
  gendmpioctl
  dmpioctl
  spec_ioctl
  vno_ioctl
  ioctl
  syscall

Also, a message similar to the following can be seen in the syslog:
NOTICE: VxVM vxdmp V-5-3-0 gendmpstrategy: strategy call failed on bp
<address>, path devno 255/<device_no>

DESCRIPTION:
The issue can happen during device discovery. While reading the device
information, the device is expected to be opened in block mode, but the device
was incorrectly being opened in character mode causing the hang.

RESOLUTION:
The code was changed to open the block device from DMP indirect IO code path.

* 2737373 (Tracking ID: 2556467)

SYMPTOM:
When dmp_native_support is enabled, ASM (Automatic Storage Management) disks 
are disconnected from host and host is rebooted, user defined user-group 
ownership of respective DMP (Dynamic Multipathing) devices is lost and 
ownership is set to default values.

DESCRIPTION:
The user-group ownership records of DMP devices in /etc/vx/.vxdmprawdev file 
are refreshed at the time of boot and only the records of currently available 
devices are retained. As part of refresh, records of all the disconnected ASM 
disks are removed from /etc/vx/.vxdmpraw and hence set to default value.

RESOLUTION:
Made code changes so that the file /etc/vx/.vxdmprawdev will not be refreshed 
at boot time.

* 2737374 (Tracking ID: 2735951)

SYMPTOM:
Following messages can be seen in syslog:

SCSI error: return code = 0x00070000  
I/O error, dev <devname>, sector <no_of_sector>
VxVM vxdmp V-5-0-0 i/o error occurred (errno=0x0) on dmpnode <major>/<minor>

DESCRIPTION:
When SCSI resets happen, the I/O fails with a PATH_OK or PATH_RETRY error. As 
time-bound recovery is the default recovery option, VxVM retries the I/O until 
the timeout. Because of a miscalculation of the time taken by each I/O retry, 
the total timeout value is reduced drastically. All retries fail with the same 
error within this small timeout value and an uncorrectable error occurs.

RESOLUTION:
Code changes are made to calculate the timeout value properly.

* 2747340 (Tracking ID: 2739709)

SYMPTOM:
While rebuilding a disk group, the vxmake description file generated from the 
"vxprint -dmvpshrcCx -D -" command does not have the links between volumes and 
vsets. Hence, the rebuild of the disk group fails.

DESCRIPTION:
The file generated by the "vxprint -dmvpshrcCx -D -" command does not have 
the links between volumes and vsets (volume sets), due to which the diskgroup 
rebuilding fails.

RESOLUTION:
Code changes are done to maintain the link between volumes and 
vsets.

* 2750455 (Tracking ID: 2560843)

SYMPTOM:
In 3 or more node cluster, when one of the slaves is rebooted under 
heavy I/O load, the I/Os hang on the other slave.

Example :
Node A (master and logowner)
Node B (slave 1)
Node C (slave 2)

If Node C is doing heavy I/O and Node B is rebooted, the I/Os on Node C get
hung.

DESCRIPTION:
When Node B leaves the cluster, its throttled I/Os are aborted and all the 
resources taken by these I/Os are freed. Along with these I/Os, Node C's 
throttled I/Os are also responded to with "resources not available" so that 
Node C resends those I/Os. But during this process, the region locks held by 
these I/Os on the master are not freed.

RESOLUTION:
All the resources taken by the remote I/Os on the master are now freed 
properly.

* 2756069 (Tracking ID: 2756059)

SYMPTOM:
During the boot process, when VxVM starts a large cross-dg mirrored volume 
(>1.5TB), the system may panic with the following stack:

vxio:voldco_or_drl_to_pvm
vxio:voldco_write_pervol_maps_20
vxio:volfmr_write_pervol_maps
vxio:volfmr_copymaps_instant
vxio:volfmr_copymaps
vxio:vol_mv_precommit
vxio:vol_commit_iolock_objects
vxio:vol_ktrans_commit
vxio:volconfig_ioctl
vxio:volsioctl_real

DESCRIPTION:
During the resync of a cross-dg mirrored volume, the DRL (dirty region 
logging) log is changed to a track map on the volume. While changing the map, 
the pointer calculation is not done properly. Due to the wrong forward step of 
the pointer, an array out-of-bounds access occurs for very large volumes, 
leading to the panic.

RESOLUTION:
The code changes are done to fix the wrong pointer increment.

* 2759895 (Tracking ID: 2753954)

SYMPTOM:
When a cable is disconnected from one port of a dual-port FC HBA, only the 
paths going through that port should be marked as SUSPECT. But the paths going 
through the other port are also getting marked as SUSPECT.

DESCRIPTION:
Disconnection of a cable from an HBA port generates an FC event. When the 
event is generated, the paths of all ports of the corresponding HBA are marked 
as SUSPECT.

RESOLUTION:
The code changes are done to mark only the paths going through the port on 
which the FC event is generated.

* 2763211 (Tracking ID: 2763206)

SYMPTOM:
vxdisk rm dumps core with the following stack trace:
vfprintf
volumivpfmt
volpfmt
do_rm

DESCRIPTION:
While copying a disk name of very large length, array bound checking is not 
done, which causes a buffer overflow. A segmentation fault occurs while 
accessing the corrupted memory, terminating the 'vxdisk rm' process.

RESOLUTION:
Code changes are done to perform array bound checking to avoid such buffer 
overflow issues.

* 2768492 (Tracking ID: 2277558)

SYMPTOM:
vxassist outputs an error message while doing snapshot-related
operations. The message looks like: "VxVM VVR vxassist ERROR V-5-1-10127 
getting associations of rvg <rvg>: Property not found in the list"

DESCRIPTION:
The error message is displayed incorrectly. The real error condition is masked
by a previously occurred error which vxassist chose to ignore when it went
ahead with the operation.

RESOLUTION:
A fix has been added to reset the previously occurred error that was ignored,
so that the real error is displayed by vxassist.

* 2800774 (Tracking ID: 2566174)

SYMPTOM:
In a Clustered Volume Manager environment, the node that is taking over as 
MASTER dumped core because of a NULL pointer dereference while releasing the 
ilocks. The stack is given below:

vxio:volcvm_msg_rel_gslock
vxio:volkmsg_obj_sio_start
vxio:voliod_iohandle
vxio:voliod_loop

DESCRIPTION:
The issue is seen due to offloading glock messages to the I/O daemon threads. 
When the VxVM I/O daemon threads process a glock release message, the 
interlock release and free happen after invoking the kernel message complete 
routine. This has the side effect that the reference count on the control 
block becomes zero, and if garbage collection is running at this stage, it 
ends up freeing the message from the garbage queue. So, if the same message is 
resent, two contexts end up processing the same interlock free request. When 
the receiver thread finds the interlock NULL because it was freed from the 
other context, a panic occurs.

RESOLUTION:
Code changes are done to offload glock messages to VxVM io 
daemon threads after processing the control block. Also the kernel message 
response routine is invoked after checking whether interlock release is 
required and releasing it.

* 2802370 (Tracking ID: 2585239)

SYMPTOM:
On a setup with tape devices, the VxVM commands like "vxdisk -o 
alldgs list" run very slowly.

DESCRIPTION:
On a setup with tape devices, vxesd invokes the HBA API even for 
tape devices in the /dev/rtape directory, which slows down the system.

RESOLUTION:
The HBA API is now called only for /dev/[r]disk or /dev/[r]dsk devices.

* 2804911 (Tracking ID: 2637217)

SYMPTOM:
The storage allocation attributes pridiskname and secdiskname are not 
documented in the vradmin man page for resizevol/resizesrl.

DESCRIPTION:
The pridiskname and secdiskname attributes are optional arguments to the vradmin 
resizevol and vradmin resizesrl commands, which enable users to specify a comma-
separated list of disk names for the resize operation on a VVR data volume and 
SRL, respectively. These arguments were introduced in 5.1SP1, but were not 
documented in the vradmin man page.

RESOLUTION:
The vradmin man page has been updated to document the storage allocation 
attributes pridiskname and secdiskname for the vradmin resizevol and 
vradmin resizesrl commands.
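
A hedged usage sketch (the diskgroup, RVG, volume, and disk names are
hypothetical):

# vradmin -g hrdg resizevol hr_rvg hr_dv01 20G \
        pridiskname=disk01,disk02 secdiskname=disk05

The resized space is then allocated only from the named disks on the Primary
and the Secondary, respectively.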

* 2817926 (Tracking ID: 2423608)

SYMPTOM:
The system panicked after losing some paths to a disk following FC issues, 
with the following stack:

vol_dev_strategy+0x81 
voldiosio_start+0xe70 
volkcontext_process+0x8c0 
volsiowait+0x130 
voldio+0xad0 
vol_voldio_read+0x30 
volconfig_ioctl+0x940 
volsioctl_real+0x7d0 
volsioctl+0x60 
vols_ioctl+0x80 
spec_ioctl+0x100 
vno_ioctl+0x390 
ioctl+0x3e0 
syscall+0x5a0

DESCRIPTION:
The panic occurs in voldiosio_start()->vol_dev_strategy(). 
The related procedure in voldiosio_start() is: 

1) Find the disk by invoking the function find_disk_from_dio()
2) Set the disk policy 
3) Call vol_dev_strategy() to fire the I/O

There is a race condition in traversing the dglist: when there are FC issues,
a wrong disk which was already freed is returned. As the disk has been freed,
the disk's iopolicy points to freed memory, which causes the panic when it is
accessed.

RESOLUTION:
There is no need to traverse the dglist to get the disk information, as the
same disk is set in the I/O structure. The I/O is issued to the correct disk
and no panic occurs.

* 2821137 (Tracking ID: 2774406)

SYMPTOM:
The system may panic while referencing the DCM (Data Change Map) 
object attached to the volume, with the following stack:
vol_flush_srl_to_dv_start
voliod_iohandle
voliod_loop

DESCRIPTION:
When the volume tries to flush the DCM to track the I/O map, if the disk 
attached to the DCM is not available, the DCM state is set to aborting before 
it is marked inactive. Since the current state of the volume is still ACTIVE, 
trying to access the DCM object causes the panic.

RESOLUTION:
Code changes are done to check that the DCM is not in the aborting state 
before proceeding with the DCM flush.

* 2821143 (Tracking ID: 1431223)

SYMPTOM:
"vradmin syncvol" and "vradmin syncrvg" commands do not work if the remote
diskgroup and vset names are specified when synchronizing vsets.

DESCRIPTION:
When command "vradmin syncvol" or "vradmin syncrvg" for vset is executed, vset
is expanded to its component volumes and path is generated for each component
volume. But when remote vset names are specified on command line, it fails to
expand remote component volumes correctly. Synchronization will fail because of
incorrect path for volumes.

RESOLUTION:
Code changes have been made to ensure remote vset is expanded correctly when
specified on command line.

* 2821452 (Tracking ID: 2495332)

SYMPTOM:
vxcdsconvert(1M) fails with the following error if the private region length 
is less than 1MB and there is a single sub-disk spanning the entire disk:

# vxcdsconvert  -g <DGNAME> alldisks evac_subdisks_ok=yes
VxVM vxprint ERROR V-5-1-924 Record <SUBDISK> not found
VxVM vxprint ERROR V-5-1-924 Record <SUBDISK> not found
VxVM vxcdsconvert ERROR V-5-2-3174 Internal Error
VxVM vxcdsconvert ERROR V-5-2-3120 Conversion process aborted

DESCRIPTION:
If a non-CDS disk's private region length is less than 1MB, vxcdsconvert 
internally tries to relocate subdisks at the start of the disk to create room 
for a private region of 1MB. To make room for back-up labels, vxcdsconvert 
also tries to relocate subdisks at the end of the disk. Two entries, one for 
relocation at the start and one at the end, are created during the analysis 
phase. Once the first sub-disk is relocated, the next vxprint operation fails, 
as the sub-disk has already been evacuated to another DM (disk media record).

RESOLUTION:
This problem is fixed by allowing the generation of multiple relocation 
entries for the same subdisk. Later, if the sub-disk is found to have already 
been evacuated to another DM, the relocation is skipped for the subdisk with 
the same name.

* 2821515 (Tracking ID: 2617277)

SYMPTOM:
Man pages missing for the vxautoanalysis and vxautoconvert commands.

DESCRIPTION:
The man pages for the vxautoanalysis and vxautoconvert commands are missing from
the base package.

RESOLUTION:
Added the man pages for vxautoanalysis(1M) and vxautoconvert(1M) commands.

* 2821537 (Tracking ID: 2792748)

SYMPTOM:
In an HP-UX cluster environment, the slave join fails with the following error 
message in syslog:

VxVM vxconfigd ERROR V-5-1-5784 cluster_establish:kernel interrupted vold on 
overlapping reconfig.

DESCRIPTION:
During the join operation, the slave node performs a disk group import. As part
of the import, the file descriptor pertaining to "Port u" is closed due to a
wrong assignment of the return value of open(). Hence, the subsequent write to
the same port returns EBADF.

RESOLUTION:
The code is modified to avoid closing the wrong file descriptor.

* 2821678 (Tracking ID: 2389554)

SYMPTOM:
The vxdg command of VxVM (Veritas Volume Manager) located at /usr/sbin
directory shows incorrect message for ssb (Serial Split Brain) information of a
disk group. The ssb information uses "DISK PRIVATE PATH" as an item, but the 
content is public path of some disk. The ssb information also prints unknown 
characters to represent the config copy id of a disk if the disk's config 
copies are all disabled. Moreover, there is some redundant information in the 
output messages.

"DISK PRIVATE PATH" error is like this:
$ vxdisk list <diskname>
...
pubpaths:  block=/dev/vx/dmp/<diskname>s4 char=/dev/vx/rdmp/<diskname>s4
privpaths: block=/dev/vx/dmp/<diskname>s3 char=/dev/vx/rdmp/<diskname>s3
...
$ vxsplitlines -v -g mydg
...
                                Pool 0
DEVICE          DISK         DISK ID        DISK PRIVATE PATH        
<device name> <diskname> <disk id>       /dev/vx/rdmp/<diskname>s4

Unknown character error message is like this:
$ vxdg import <disk group>
...
To import the diskgroup with config copy from the second pool issue the command
/usr/sbin/vxdg [-s] -o selectcp=<config copy ID> import <disk group>
...

DESCRIPTION:
VxVM uses a SSB data structure to maintain ssb information displayed to the 
user. The SSB data structure contains some members, such as Pool ID, config 
copy id, etc. After memory allocation for a SSB data structure, this new 
allocated memory area is not initialized. If all the config copies of some  
disk are disabled, the config copy id member holds unknown data. The vxdg 
command tries to print such data, so unknown characters are displayed on 
stdout. The SSB data structure has a "disk public path" member, but no "disk 
private path" member, so the output message can only display the public path 
of a disk.

RESOLUTION:
The ssb structure has been changed to use "disk private path" instead of "disk
public path". Moreover, after memory allocation for a ssb structure, newly 
allocated memory is properly initialized.

* 2821695 (Tracking ID: 2599526)

SYMPTOM:
The SRL-to-DCM flush does not happen, resulting in an I/O hang.

DESCRIPTION:
After an SRL overflow, before the RU state machine phase can be changed to
VOLRP_PHASE_SRL_FLUSH, the Rlink connection thread sneaks in and changes the
phase to VOLRP_PHASE_START_UPDATE. Once the phase is changed to
VOLRP_PHASE_START_UPDATE, the state machine misses flushing the SRL into the
DCM, goes into VOLRP_PHASE_DCM_WAIT, and gets stuck there.

RESOLUTION:
RU state machine phases are handled correctly after SRL overflows.

* 2826129 (Tracking ID: 2826125)

SYMPTOM:
VxVM script daemons are not up after they are invoked with the vxvm-recover 
script.

DESCRIPTION:
When the VxVM script daemon is starting, it terminates any stale instance that
exists. When the script daemon is invoked with exactly the same process id as 
its previous invocation, the daemon abnormally terminates itself by killing 
its own self through a false-positive detection.

RESOLUTION:
Code changes are made to handle the same process id situation correctly.

* 2826607 (Tracking ID: 1675482)

SYMPTOM:
vxdg list <dgname> command shows configuration copy in new failed state.

# vxdg list dgname
config disk 3PARDATA0_75 copy 1 len=48144 state=new failed
       config-tid=0.1550 pending-tid=0.1551
       Error: error=Volume error 0

DESCRIPTION:
When a configuration copy is initialized on a new disk in a diskgroup, an I/O 
error on the disk can prevent the on-disk update and leave the configuration 
copy inconsistent.

RESOLUTION:
In case of a failed initialization, the configuration copy is disabled. If
required in the future, this disabled copy will be reused for setting up a new
configuration copy. If the current state of the configuration copy is "new
failed", the next import of the diskgroup will disable it.

* 2827791 (Tracking ID: 2760181)

SYMPTOM:
The secondary slave node hits a panic in vol_rv_change_sio_start() for an 
already active logowner operation.

DESCRIPTION:
The slave node panics during the logowner change. The logowner change and the 
reconfiguration recovery process happen at the same time, leading to a race in 
setting the ACTIVE flag. The reconfiguration recovery unsets the flag which is 
set by the logowner change operation. In the middle of the logowner change 
operation the ACTIVE flag is missing, which leads to the system panic.

RESOLUTION:
The appropriate lock is taken in the logowner change code, and more debug log 
entries are added for better tracking of logowner issues.

* 2827794 (Tracking ID: 2775960)

SYMPTOM:
On a secondary in CVR, disabling the SRL on one DG triggered an I/O hang on 
another DG.

DESCRIPTION:
The failure of the SRL LUNs causes the failure in both DGs. The I/O failures 
in the messages confirmed the LUN failure on DG4 as well. For every 1024 I/Os 
to the SRL, the header of the SRL is flushed. In the SRL flush code, in the 
error scenario, the flush I/O is queued but never started. If the flush I/O 
does not complete, the application I/O hangs forever.

RESOLUTION:
The fix is to start the flush I/O which is queued in the error scenario.

* 2827939 (Tracking ID: 2088426)

SYMPTOM:
All disks are re-onlined on the master and the slaves, irrespective of whether
they belong to the shared dg being destroyed/deported.

DESCRIPTION:
When a shared dg is destroyed/deported, all disks on the nodes in the cluster 
are re-onlined asynchronously when another command is fired. This results in 
wasteful usage of resources and substantially delays the next command, 
depending on how many disks/luns there are on each node.

RESOLUTION:
Re-onlining of luns/disks is restricted to those that belong to the dg that
is being destroyed/deported. By doing this, the resource usage and the delay
in the next command are restricted to the number of luns/disks that belong to 
the dg in question.

* 2836910 (Tracking ID: 2818840)

SYMPTOM:
1. The file permissions set on ASM devices are not persistent across reboots.
2. The user is not able to set the desired permissions on ASM devices.
3. Files created with user id root and a group other than system are 
not persistent.

DESCRIPTION:
The vxdmpasm utility sets the permissions of the devices to "660", which is 
not persistent across reboots, as these devices are not kept in the 
/etc/vx/.vxdmpasmdev file. Currently there is no option which enables the user 
to set the desired permissions. Files created with user id root and a group 
other than "system" are changed back to "root:system" upon reboot.

RESOLUTION:
The code is modified to keep the device entries in the /etc/vx/.vxdmpasmdev 
file to make the permissions persistent across reboots. The code is enhanced 
to provide an option to set the desired permissions and the desired user 
id/group.

* 2845984 (Tracking ID: 2739601)

SYMPTOM:
vradmin repstatus output occasionally reports abnormal timestamp 
information.

DESCRIPTION:
Sometimes vradmin repstatus shows an abnormal timestamp. This timestamp is
reported in the "Timestamp Information" section of the vradmin repstatus
output. In this case the timestamp reported is a very high value, something 
like 100 hours. This condition occurs when no data has been replicated to the 
secondary for a long time. It does not necessarily mean that the Rlinks were 
disconnected for a long time: even if the Rlinks are connected, it is possible 
that no new data was written to the primary during that period, so no data was 
replicated to the secondary. If at this point the Rlink is paused and some 
writes are done, vradmin repstatus shows the abnormal timestamp.

RESOLUTION:
To solve this issue, whenever new data is written to the data volume and the
Rlink is up-to-date, the timestamp is marked. This makes sure that an abnormal
timestamp is not reported.

* 2852270 (Tracking ID: 2715129)

SYMPTOM:
Vxconfigd hangs during Master takeover in a CVM (Clustered Volume Manager) 
environment. This results in VxVM command hang.

DESCRIPTION:
During Master takeover, the VxVM (Veritas Volume Manager) kernel signals 
vxconfigd with the information of the new Master. Vxconfigd then proceeds with 
a vxconfigd-level handshake with the nodes across the cluster. Before the 
kernel could signal vxconfigd, the vxconfigd handshake mechanism got started, 
resulting in the hang.

RESOLUTION:
Code changes are done to ensure that vxconfigd handshake gets started only upon 
receipt of signal from the kernel.

* 2858859 (Tracking ID: 2858853)

SYMPTOM:
In CVM(Cluster Volume Manager) environment, after master switch, vxconfigd 
dumps core on the slave node (old master) when a disk is removed from the disk 
group.

dbf_fmt_tbl()
voldbf_fmt_tbl()
voldbsup_format_record()
voldb_format_record()
format_write()
ddb_update()
dg_set_copy_state()
dg_offline_copy()
dasup_dg_unjoin()
dapriv_apply()
auto_apply()
da_client_commit()
client_apply()
commit()
dg_trans_commit()
slave_trans_commit()
slave_response()
fillnextreq()
vold_getrequest()
request_loop()
main()

DESCRIPTION:
During master switch, disk group configuration copy related flags are not 
cleared on the old master, hence when a disk is removed from a disk group, 
vxconfigd dumps core.

RESOLUTION:
Necessary code changes have been made to clear configuration copy related flags 
during master switch.

* 2859390 (Tracking ID: 2000585)

SYMPTOM:
If 'vxrecover -sn' is run and at the same time one volume is removed, vxrecover 
exits with the error 'Cannot refetch volume'. The exit status code is zero, 
but no volumes are started.

DESCRIPTION:
vxrecover assumes that the volume is missing because the diskgroup must have
been deported while vxrecover was in progress. Hence, it exits without
starting the remaining volumes. vxrecover should be able to start the other 
volumes if the DG is not deported.

RESOLUTION:
Modified the source to skip the missing volume and proceed with the remaining volumes.

* 2860281 (Tracking ID: 2838059)

SYMPTOM:
The VVR secondary machine crashes with following panic stack:

crash_kexec 
__die 
do_page_fault
error_exit
[exception RIP: vol_rv_update_expected_pos+337]
vol_rv_service_update
vol_rv_service_message_start 
voliod_iohandle
voliod_loop
kernel_thread at ffffffff8005dfb1

DESCRIPTION:
If the VVR primary machine crashes without completing a few of the write I/Os
to the data volumes, it fills the incomplete write I/Os with "DUMMY" I/Os. 
It has to do so to maintain the write order fidelity at the secondary. While 
processing such dummy updates on the secondary, because of a logical error, 
the secondary VVR code tries to dereference a NULL pointer, leading to the panic.

RESOLUTION:
The code changes are made in the VVR secondary's "DUMMY" update processing 
code path to correct the logic.

* 2860445 (Tracking ID: 2627126)

SYMPTOM:
An I/O hang is observed on the system as lots of I/Os are stuck in the DMP global queue.

DESCRIPTION:
Lots of I/Os and paths are stuck in dmp_delayq and dmp_path_delayq respectively,
and the DMP daemon cannot process them, because of a race condition between 
"processing the dmp_delayq" and "waking up the DMP daemon". A lock is held 
while processing the dmp_delayq and is released only for a very short duration. 
If any path is busy in this duration, it gives an I/O error, leading to the 
I/O hang.

RESOLUTION:
The global delay queue pointers are copied to local variables and the lock is
held only for this period; the I/Os in the queue are then processed using the
local queue variables.

* 2860449 (Tracking ID: 2836798)

SYMPTOM:
'vxdisk resize' fails with the following error on a simple format EFI 
(Extensible Firmware Interface) disk expanded from the array side, and the 
system may panic/hang after a few minutes.
 
# vxdisk resize disk_10
VxVM vxdisk ERROR V-5-1-8643 Device disk_10: resize failed:
Configuration daemon error -1

DESCRIPTION:
As VxVM doesn't support Dynamic LUN Expansion (DLE) on simple/sliced EFI 
disks, the last usable LBA (Logical Block Address) in the EFI header is not 
updated while expanding the LUN. Since the header is not updated, the 
partition end entry is regarded as illegal and cleared as part of the 
partition range check. This inconsistent partition information between the 
kernel and the disk causes the system panic/hang.

RESOLUTION:
Added checks in VxVM code to prevent DLE on simple/sliced EFI disk.

* 2860451 (Tracking ID: 2815517)

SYMPTOM:
vxdg adddisk succeeds in adding a clone disk to a non-clone diskgroup and a 
non-clone disk to a clone diskgroup, resulting in a mixed diskgroup.

DESCRIPTION:
vxdg import fails for a diskgroup which has a mix of clone and non-clone 
disks, so vxdg adddisk should not allow the creation of a mixed diskgroup.

RESOLUTION:
The vxdg adddisk code is modified to return an error for an attempt to add a 
clone disk to a non-clone diskgroup or a non-clone disk to a clone diskgroup. 
This prevents the addition of a disk that would lead to a mixed diskgroup.

* 2860812 (Tracking ID: 2801962)

SYMPTOM:
Operations that grow a volume, including vxresize and vxassist growby/growto, 
take a significantly longer time if the volume has a version 20 DCO (Data 
Change Object) attached to it, in comparison to a volume which doesn't have a 
DCO attached.

DESCRIPTION:
When a volume with a DCO is grown, the existing map in the DCO needs to be 
copied and updated to track the grown regions. The algorithm was such that, for 
each region in the map, it would search for the page that contains that region 
in order to update the map. The number of regions and the number of pages 
containing them are proportional to the volume size, so the search cost is 
amplified, and is observed primarily when the volume size is of the order of 
terabytes. In the reported instance, it took more than 12 minutes to grow a 
2.7TB volume by 50G.

RESOLUTION:
Code has been enhanced to find the regions that are contained within a page and 
then avoid looking-up the page for all those regions.
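
As a rough illustration of the complexity fix (the names and map layout below
are hypothetical, not the actual DCO on-disk format), the loop walks the map
page by page and updates every region that falls inside the current page,
instead of searching for the containing page once per region:

#include <stddef.h>

#define REGIONS_PER_PAGE 1024          /* hypothetical packing */

struct dco_page {
    unsigned char bits[REGIONS_PER_PAGE / 8];
};

static void mark_region(struct dco_page *pg, size_t idx)
{
    pg->bits[idx / 8] |= (unsigned char)(1u << (idx % 8));
}

/* O(nregions): the page is looked up once and reused for all the regions
 * it contains, rather than an O(nregions * npages) per-region search. */
static void track_grown_regions(struct dco_page *pages, size_t first,
                                size_t last)
{
    size_t r = first;

    while (r <= last) {
        struct dco_page *pg = &pages[r / REGIONS_PER_PAGE];
        size_t end = (r / REGIONS_PER_PAGE + 1) * REGIONS_PER_PAGE - 1;

        if (end > last)
            end = last;
        for (; r <= end; r++)
            mark_region(pg, r % REGIONS_PER_PAGE);
    }
}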

* 2862024 (Tracking ID: 2680343)

SYMPTOM:
While manually disabling and enabling paths to an enclosure, the machine may
panic with the following stack:

    apauto_get_failover_path+0000CC()
    gen_dmpnode_update_cur_pri+000828()
    dmp_start_failover+000124()
    gen_update_cur_pri+00012C()
    dmp_update_cur_pri+000030()
    dmp_reconfig_update_cur_pri+000010()
    dmp_decipher_instructions+0006E8()
    dmp_process_instruction_buffer+000308()
    dmp_reconfigure_db+0000C4()
    gendmpioctl+000ECC()
    dmpioctl+00012C()

DESCRIPTION:
The Dynamic Multi-Pathing(DMP) driver keeps track of the number of active paths
and failed paths internally. The computation may go wrong while exercising
manual disable/enable of paths which can lead to machine panic.

RESOLUTION:
Code changes have been made to properly update the active path and failed path
count.

* 2876116 (Tracking ID: 2729911)

SYMPTOM:
During a controller or port failure, UDEV removes the associated path
information from DMP. While the paths are being removed, I/O to the disk can 
still get redirected to a path after it has been deleted, leading to an I/O 
failure.

DESCRIPTION:
When a path is being deleted from a DMP node, the appropriate data structures 
for this path need to be updated so that the path is not available for I/O 
after deletion; this was not happening.

RESOLUTION:
The DMP code is modified to not select the deleted path for future IOs.

* 2882488 (Tracking ID: 2754819)

SYMPTOM:
Diskgroup rebuild through 'vxmake -d' gets stuck with the following stack 
trace:
buildlocks()
MakeEverybody()
main()

DESCRIPTION:
During diskgroup rebuild for configurations having multiple 
objects on a single cache object, an internal list of cache objects gets 
incorrectly modified to a circular list, which causes infinite looping during 
its access.

RESOLUTION:
Code changes are done to correctly populate the cache object list.

* 2886083 (Tracking ID: 2257850)

SYMPTOM:
Memory leak is observed when information about enclosure is accessed by vxdiskadm.

DESCRIPTION:
The memory allocated locally for a data structure keeping information about the
array specific attributes is not freed.

RESOLUTION:
Code changes are made to avoid such memory leaks.

* 2911009 (Tracking ID: 2535716)

SYMPTOM:
LVM Volume Group (VG) to VxVM Disk Group conversion fails, requesting the user
to reduce the number of configuration records. 
The following is an example of the error messages seen during the conversion:

Analysis of <vgname> found insufficient Private Space for conversion
SMALLEST VGRA space = 176
RESERVED space sectors = 78
PRIVATE SPACE/FREE sectors = 98
AVAILABLE sector space = 49
AVAILABLE sector bytes = 50176
RECORDS needed to convert = 399
MAXIMUM records allowable = 392
The smallest disk in the Volume Group (<vgname>) does not have sufficient 
private space for the conversion to succeed. There is only enough private space 
for 392 VM Database records and the conversion of Volume Group (<vgname>) would 
require enough space to allow 399 VxVM Database records. This would roughly 
translate to needing an additional 896 bytes available in the private space.
This can be accomplished by reducing the number of volumes in the (<vgname>) 
Volume Group, and allowing that for every volume removed, the number of 
Database records required would be reduced by three. This is only a rough 
approximation, however.

DESCRIPTION:
The conversion process works in-place, such that VxVM's public region is the
same as the LVM public region, while VxVM creates the private region between
the start of the disk and the start of the public region. If the LVM public
region starts early on the disk and the source LVM configuration contains a
very large number of records, so that the space between the start of the disk
and the start of the public region is not sufficient to hold the private
region, the conversion process fails.

RESOLUTION:
The conversion process now creates the private region after the public region,
using the available free PEs, when there is not enough space between the start
of the disk and the start of the public region. With this change, vxvmconvert
succeeds as long as at least one disk in the configuration can hold the private
region either before or after the public region. The conversion now fails only 
if none of the disks in the source configuration is capable of holding the 
private region anywhere on the disk.

* 2911010 (Tracking ID: 2627056)

SYMPTOM:
The vxmake(1M) command, when run with a very large configuration, fails due to a memory allocation failure.

DESCRIPTION:
Due to a memory leak in the vxmake(1M) command, the data section limit for the 
process was reached. As a result, further memory allocations failed and the 
vxmake command failed with the above error.

RESOLUTION:
Fixed the memory leak by freeing the memory after it has been used.

Patch ID: PHCO_42992, PHKL_42993

* 2280285 (Tracking ID: 2365486)

SYMPTOM:
In a two-node SFRAC configuration, after enabling ports, when 'vxdisk
scandisks' is run, the system panics with the following stack:

PANIC STACK:

.unlock_enable_mem()
.unlock_enable_mem()
dmp_update_path()
dmp_decode_update_dmpnode()
dmp_decipher_instructions()
dmp_process_instruction_buffer()
dmp_reconfigure_db()
gendmpioctl()
vxdmpioctl()
rdevioctl()
spec_ioctl()
vnop_ioctl()
vno_ioctl()
common_ioctl()
ovlya_addr_sc_flih_main()

DESCRIPTION:
Improper order of acquire and release of locks during reconfiguration of DMP
when I/O activity was running parallelly, lead to above panic.

RESOLUTION:
Release the locks in the same order in which they are acquired.

* 2532440 (Tracking ID: 2495186)

SYMPTOM:
With the TCP protocol used for replication, I/O throttling happens due to
memory flow control.

DESCRIPTION:
In some slow network configurations, the I/O throughput is throttled
back due to the replication I/O.

RESOLUTION:
The replication I/O is kept outside the normal I/O code path to improve
I/O throughput performance.

* 2563291 (Tracking ID: 2527289)

SYMPTOM:
In a Campus Cluster setup, a storage fault may lead to a DETACH of all the
configured sites. This also results in I/O failure on all the nodes in the
Campus Cluster.

DESCRIPTION:
Site detaches are done on site-consistent diskgroups when any volume in the
diskgroup loses all the mirrors of a site. While processing the DETACH of the
last mirror in a site, we identify that it is the last mirror and DETACH the
site, which in turn detaches all the objects of that site.

In a Campus Cluster setup, a DCO volume is attached to any data volume created
on a site-consistent diskgroup. The general configuration is to have one DCO
mirror on each site. Loss of a single mirror of the DCO volume on any node will
result in the detach of that site. 

In a two-site configuration, this particular scenario results in both DCO
mirrors being lost simultaneously. While the site detach for the first mirror
is being processed, the DETACH of the second mirror is also signalled, which
ends up DETACHING the second site too. 

This is not hit in other tests because there is already a check to make sure
that we do not DETACH the last mirror of a volume. That check is subverted in
this particular case due to the type of storage failure.

RESOLUTION:
Before triggering the site detach, an explicit check is made to see whether the
last ACTIVE site is about to be DETACHed.

* 2621549 (Tracking ID: 2621465)

SYMPTOM:
When a failed disk that belongs to a site becomes accessible again, it cannot 
be reattached to the disk group.

DESCRIPTION:
As the disk has a site tag name set, the 'vxdg adddisk' command invoked 
by the 'vxreattach' command needs the '-f' option to add the disk back to the 
disk group.

RESOLUTION:
The '-f' option is added to the 'vxdg adddisk' command when it is invoked 
from the 'vxreattach' command.

* 2626900 (Tracking ID: 2608849)

SYMPTOM:
1. Under a heavy I/O load on the logclient node, write I/Os on the VVR Primary
logowner take a very long time to complete.

2. I/Os on "master" and "slave" nodes hang when "master" role is switched
multiple times using "vxclustadm setmaster" command.

DESCRIPTION:
1. VVR does not allow more than 2048 I/Os outstanding on the SRL volume. Any
I/Os beyond this threshold are throttled. The throttled I/Os are restarted
after every SRL header flush operation. While restarting the throttled I/Os,
I/Os coming from the logclient are given higher priority, causing the logowner
I/Os to starve.

2. In the CVM reconfiguration code path, the RLINK ports are not cleanly
deleted on the old logowner. This causes the RLINKs not to connect, leading to
both replication and I/O hang.

RESOLUTION:
The algorithm which restarts the throttled I/Os is modified to give both local
and remote I/Os a fair chance to proceed.
Additionally, code changes are made in the CVM reconfiguration code path to
delete the RLINK ports cleanly before switching the master role.

* 2626915 (Tracking ID: 2417546)

SYMPTOM:
Raw devices are lost after an OS reboot, and a permissions issue is caused by a
change in dmpnode permissions from 660 to 600.

DESCRIPTION:
On reboot, while creating raw devices, the next available device number is
generated. Due to a counting bug, VxVM ended up creating one device too few.
The change in dmpnode permissions also created the permissions issue.

RESOLUTION:
This issue is addressed by a source change wherein correct counters are kept 
and device permissions are set appropriately.

* 2626920 (Tracking ID: 2061082)

SYMPTOM:
"vxddladm -c assign names" command does not work if dmp_native_support 
tunable is enabled.

DESCRIPTION:
If the dmp_native_support tunable is set to "on", VxVM does not allow a change
in the name of dmpnodes. This holds true even for devices without native
support enabled, like VxVM-labeled or Third Party Devices. So there is no way
of selectively changing the names of devices for which native support is not
enabled.

RESOLUTION:
This enhancement is addressed by a code change to selectively change the names
of devices for which native support is not enabled.

* 2636094 (Tracking ID: 2635476)

SYMPTOM:
DMP (Dynamic Multi Pathing) driver does not automatically enable the failed 
paths of Logical Units (LUNs) that are restored.

DESCRIPTION:
DMP's restore daemon probes each failed path at a default interval of 5 minutes 
(tunable) to detect if that path can be enabled. As part of enabling the path, 
DMP issues an open() on the path's device number. Owing to a bug in the DMP
code, the open() was issued on a wrong device partition which resulted in
failure for every probe. Thus, the path remained in failed status at DMP layer
though it was enabled at the array side.

RESOLUTION:
Modified the DMP restore daemon code path to issue the open() on the appropriate
device partitions.

* 2643651 (Tracking ID: 2643634)

SYMPTOM:
If standard(non-clone) disks and cloned disks of the same disk group are seen in
a host, dg import will fail with the following error message when the
standard(non-clone) disks have no enabled configuration copy of the disk group.

# vxdg import <dgname>
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed:
Disk group has no valid configuration copies

DESCRIPTION:
When VxVM is importing such a mixed configuration of standard(non-clone) disks
and cloned disks, standard(non-clone) disks will be selected as the member of
the disk group in 5.0MP3RP5HF1 and 5.1SP1RP2. It will be done while
administrators are not aware of the fact that there is a mixed configuration and
the standard(non-clone) disks are to be selected for the import. It is hard to
figure out from the error message and need time to investigate what is the issue.

RESOLUTION:
Syslog message enhancements are made in the code so that administrators can
figure out whether such a mixed configuration is seen in a host, and also which
disks are selected for the import.

* 2666175 (Tracking ID: 2666163)

SYMPTOM:
A small memory leak may be seen in vxconfigd, the VxVM configuration daemon when
Serial Split Brain(SSB) error is detected in the import process.

DESCRIPTION:
The leak may occur when the Serial Split Brain (SSB) error is detected in the
import process, because when the SSB error is returned from a function, a
dynamically allocated memory area in the same function is not freed. The
SSB detection is a VxVM feature where VxVM detects if the configuration copy in
the disk private region becomes stale unexpectedly. A typical use case of the
SSB error is that a disk group is imported to different systems at the same time
and configuration copy update in both systems results in an inconsistency in the
copies. VxVM cannot identify which configuration copy is most up-to-date in this
situation. As a result, VxVM may detect SSB error on the next import and show
the details through a CLI message.

RESOLUTION:
Code changes are made to avoid the memory leak and also a small message fix has
been done.

* 2695225 (Tracking ID: 2675538)

SYMPTOM:
Data corruption can be observed on a CDS (Cross-platform Data Sharing) disk, 
as part of LUN resize operations. The following pattern would be found in the 
data region of the disk.

<DISK-IDENTIFICATION> cyl <number-of-cylinders> alt 2 hd <number-of-tracks> sec 
<number-of-sectors-per-track>

DESCRIPTION:
The CDS disk maintains a SUN VTOC in the zeroth block and a backup label at the 
end of the disk. The VTOC maintains the disk geometry information like number of 
cylinders, tracks and sectors per track. The backup label is the duplicate of 
VTOC and the backup label location is determined from VTOC contents. As part of 
resize, VTOC is not updated to the new size, which results in the wrong 
calculation of the backup label location. If the wrongly calculated backup label 
location falls in the public data region rather than the end of the disk as 
designed, data corruption occurs.

RESOLUTION:
Update the VTOC contents appropriately for LUN resize operations to prevent the 
data corruption.

* 2695227 (Tracking ID: 2674465)

SYMPTOM:
Data corruption is observed when DMP node names are changed by the following
commands for DMP devices that are controlled by a third-party multi-pathing
driver (e.g. MPXIO and PowerPath):

# vxddladm [-c] assign names
# vxddladm assign names file=<path-name>
# vxddladm set namingscheme=<scheme-name>

DESCRIPTION:
The above commands, when executed, re-assign a name to each device.
Accordingly, the in-core DMP database should be updated for each device to map
the new device name to the appropriate device number. Due to a bug in the code,
the mapping of names to device numbers wasn't done appropriately, which
resulted in subsequent I/Os going to the wrong device, thus leading to data 
corruption.

RESOLUTION:
DMP routines responsible for mapping the names with right device number is
modified to fix this corruption problem.

* 2695228 (Tracking ID: 2688747)

SYMPTOM:
Under a heavy I/O load on the logclient node, writes on the VVR Primary
logowner take a very long time to complete. The writes appear to be hung.

DESCRIPTION:
VVR does not allow more than a specific number of I/Os (4096) outstanding on
the SRL volume. Any I/Os beyond this threshold are throttled. The throttled
I/Os are restarted periodically. While restarting, I/Os belonging to the
logclient get higher preference than the logowner I/Os, which can eventually
lead to starvation or an I/O hang on the logowner.

RESOLUTION:
Changes are made in the I/O scheduling algorithm for restarted I/Os to make
sure that throttled local I/Os get a chance to proceed under all conditions.
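
Schematically, the fairness change amounts to alternating between the two
queues of throttled I/Os instead of draining one side first. The sketch below
is purely illustrative (VVR's real scheduler is kernel code and more
involved); all names are hypothetical:

#include <stddef.h>

struct vvr_io {
    struct vvr_io *next;
};

static struct vvr_io *local_q;         /* throttled logowner I/Os */
static struct vvr_io *remote_q;        /* throttled logclient I/Os */

static struct vvr_io *pop(struct vvr_io **q)
{
    struct vvr_io *io = *q;

    if (io != NULL)
        *q = io->next;
    return io;
}

static void restart(struct vvr_io *io)
{
    (void)io;                          /* re-issue the write here */
}

/* Alternate between the local and remote queues so that neither side
 * can starve the other. */
static void restart_throttled(void)
{
    int turn = 0;

    while (local_q != NULL || remote_q != NULL) {
        struct vvr_io *io = pop(turn ? &remote_q : &local_q);

        if (io == NULL)
            io = pop(turn ? &local_q : &remote_q);
        restart(io);
        turn = !turn;
    }
}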

* 2701152 (Tracking ID: 2700486)

SYMPTOM:
If the VVR Primary and Secondary nodes have the same host-name, and there is a
loss of heartbeats between them, vradmind daemon can core-dump if an active
stats session already exists on the Primary node.

Following stack-trace is observed:

pthread_kill()
_p_raise() 
raise.raise()
abort() 
__assert_c99
StatsSession::sessionInitReq()
StatsSession::processOpReq()
StatsSession::processOpMsgs()
RDS::processStatsOpMsg()
DBMgr::processStatsOpMsg()
process_message()
main()

DESCRIPTION:
On loss of heartbeats between the Primary and Secondary nodes, and a subsequent
reconnect, RVG information is sent to the Primary by Secondary node. In this 
case, if a Stats session already exists on the Primary, a STATS_SESSION_INIT 
request is sent back to the Secondary. However, the code was using "hostname" 
(as returned by `uname -a`) to identify the secondary node. Since both the 
nodes had the same hostname, the resulting STATS_SESSION_INIT request was 
received at the Primary itself, causing vradmind to core dump.

RESOLUTION:
Code was modified to use 'virtual host-name' information contained in the 
RLinks, rather than hostname(1m), to identify the secondary node. In a scenario 
where both Primary and Secondary have the same host-name, virtual host-names 
are used to configure VVR.

* 2702110 (Tracking ID: 2700792)

SYMPTOM:
vxconfigd, the VxVM volume configuration daemon may dump a core with the
following stack during the Cluster Volume Manager(CVM) startup with "hares
-online cvm_clus -sys [node]".

  dg_import_finish()
  dg_auto_import_all()
  master_init()
  role_assume()
  vold_set_new_role()
  kernel_get_cvminfo()
  cluster_check()
  vold_check_signal()
  request_loop()
  main()

DESCRIPTION:
During CVM startup, vxconfigd accesses the disk group record's pointer of a
pending record while the transaction on the disk group is in progress. At times,
vxconfigd incorrectly accesses the stale pointer while processing the current
transaction, thus resulting in a core dump.

RESOLUTION:
Code changes are made to access the appropriate pointer of the disk group record
which is active in the current transaction. Also, the disk group record is
appropriately initialized to NULL value.

* 2703370 (Tracking ID: 2700086)

SYMPTOM:
In the presence of "Not-Ready" EMC devices on the system, multiple DMP (path
disabled/enabled) event messages are seen in the syslog.

DESCRIPTION:
The issue is that vxconfigd enables the BCV devices which are in Not-Ready state
for IO as the SCSI inquiry succeeds, but soon finds that they cannot be used for
I/O and disables those paths. This activity takes place whenever "vxdctl enable"
or "vxdisk scandisks" command is executed.

RESOLUTION:
Avoid changing the state of the BCV device which is in "Not-Ready" to prevent IO
and dmp event messages.

* 2703373 (Tracking ID: 2698860)

SYMPTOM:
Mirroring a large VxVM volume comprising THIN LUNs underneath, with a mounted
VxFS filesystem atop, fails with the following error:

Command error
# vxassist -b -g $disk_group_name mirror $volume_name
VxVM vxplex ERROR V-5-1-14671 Volume <volume_name> is configured on THIN luns
and not mounted. Use 'force' option, to bypass smartmove. To take advantage of
smartmove for supporting thin luns, retry this operation after mounting the
volume.
VxVM vxplex ERROR V-5-1-407 Attempting to cleanup after failure ...

Truss output error:
statvfs("<mount_point>", 0xFFBFEB54)              Err#79 EOVERFLOW

DESCRIPTION:
The statvfs system call is invoked internally during the mirroring
operation to retrieve statistics of the VxFS file system hosted
on the volume. However, since the statvfs system call supports a
maximum of 4294967295 (2^32 - 1) blocks, an EOVERFLOW error occurs if the
total number of filesystem blocks is greater than that. This also results
in vxplex terminating with the errors.

RESOLUTION:
Use the 64-bit version of statvfs, i.e., the statvfs64 system call, to resolve
the EOVERFLOW and vxplex errors.
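
On platforms that provide it, statvfs64(3) returns 64-bit block counts, so
file systems with more than 2^32 - 1 blocks no longer overflow. A minimal
userspace illustration (not the vxplex source):

#define _LARGEFILE64_SOURCE            /* needed on some platforms */
#include <sys/statvfs.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    struct statvfs64 sv;

    if (argc < 2)
        return 1;

    /* The 32-bit statvfs() could fail here with EOVERFLOW on a very
     * large file system; the 64-bit variant does not. */
    if (statvfs64(argv[1], &sv) != 0) {
        perror("statvfs64");
        return 1;
    }
    printf("blocks=%llu free=%llu\n",
           (unsigned long long)sv.f_blocks,
           (unsigned long long)sv.f_bfree);
    return 0;
}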

* 2711758 (Tracking ID: 2710579)

SYMPTOM:
When data corruption is observed on a Cross-platform Data Sharing (CDS) disk, 
as a part of operations such as LUN resize, Disk FLUSH, Disk ONLINE and so 
on, the following pattern is found in the data region of the disk:
<DISK-IDENTIFICATION> cyl <number-of-cylinders> alt 2 hd
<number-of-tracks> sec
<number-of-sectors-per-track>

DESCRIPTION:
The CDS disk maintains a SUN Volume Table of Contents (VTOC) in block zero 
and a backup label at the end of the disk. The VTOC maintains disk geometry
information such as the number of cylinders, tracks and sectors per track. The
backup label is a duplicate of the VTOC, and the backup label location is
determined from the VTOC contents. As part of the resize, the VTOC is not
updated to the new size, which results in a wrong calculation of the backup
label location. If the wrongly calculated backup label location falls in the
public data region instead of at the end of the disk as designed, data 
corruption occurs.

RESOLUTION:
The code is modified such that the writing of backup label is suppressed to 
prevent the data corruption.

* 2713862 (Tracking ID: 2390998)

SYMPTOM:
When running the 'vxdctl enable' or 'vxdisk scandisks' command after
configuration changes in SAN ports, the system panicked with the following
stack trace:
.disable_lock()
dmp_close_path()
dmp_do_cleanup()
dmp_decipher_instructions()
dmp_process_instruction_buffer()
dmp_reconfigure_db()
gendmpioctl()
vxdmpioctl()

DESCRIPTION:
After the configuration changes in SAN ports, the configuration in VxVM also
needs to be updated. In the reconfiguration process, VxVM may temporarily have
the old dmp path nodes and the new dmp path nodes, both of which has the same
device number, to migrate the old ones to new ones. VxVM maintains two types
of open count to avoid platform dependency. However when openining/closing the
old dmp path nodes while the migration process is going on, VxVM wrongly 
calculates
the open counts in the dmp path nodes; calculates an open count in the new node
and then calculates the other open count in the old node. This results in the
inconsistent open counts of the node and cause panic while checking open counts.

RESOLUTION:
The code change maintains the open counts correctly on the same DMP path node
database while performing DMP device open/close.

* 2741105 (Tracking ID: 2722850)

SYMPTOM:
Disabling/enabling controllers while I/O is in progress results in a DMP
(Dynamic Multi-Pathing) thread hang with the following stack:

dmp_handle_delay_open
gen_dmpnode_update_cur_pri
dmp_start_failover
gen_update_cur_pri
dmp_update_cur_pri
dmp_process_curpri
dmp_daemons_loop

DESCRIPTION:
DMP takes an exclusive lock to quiesce a node to be failed over, and releases
the lock to do update operations. These update operations presume that the node
will be in quiesced status. A small timing window exists between lock release
and update operations, wherein other threads can break-in into this window and
unquiesce the node, which will lead to the hang while performing update operations.

RESOLUTION:
Corrected the quiesce counter of a node to avoid other threads unquiesce it when
a thread is performing update operations.

* 2744219 (Tracking ID: 2729501)

SYMPTOM:
In a Dynamic Multi-Pathing environment, excluding a path also excludes other 
paths with matching substrings.

DESCRIPTION:
Excluding a path using 'vxdmpadm exclude vxvm path=<path>' excludes all the
paths with a matching substring. This is due to strncmp() being used for the
comparison. Also, the size of the h/w path defined in the structure is larger 
than what is actually fetched.

RESOLUTION:
Correct the size of the h/w path in the structure and use strcmp() for the
comparison in place of strncmp().
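
The substring-matching bug can be reproduced in a few lines of C (the paths
below are illustrative, not actual DMP structures): strncmp() bounded by the
length of the pattern treats every path that merely starts with the pattern
as a match, while strcmp() requires an exact match.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *exclude = "0/0/1/0.1.0";   /* path the user excluded */
    const char *path    = "0/0/1/0.1.0.1"; /* different path, same prefix */

    /* Buggy comparison: only strlen(exclude) characters are compared,
     * so this unrelated path would also be excluded. */
    printf("strncmp: %s\n",
           strncmp(path, exclude, strlen(exclude)) == 0 ? "match"
                                                        : "no match");

    /* Fixed comparison: exact string equality. */
    printf("strcmp : %s\n",
           strcmp(path, exclude) == 0 ? "match" : "no match");
    return 0;
}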

* 2750454 (Tracking ID: 2423701)

SYMPTOM:
An upgrade of VxVM caused a change in the permissions of /etc/vx/vxesd during
live upgrade, from drwx------ to d---r-x---.

DESCRIPTION:
'/etc/vx/vxesd' directory gets shipped in VxVM with "drwx------" permissions.
However, while starting the vxesd daemon, if this directory is not present, it
gets created with "d---r-x---".

RESOLUTION:
Changes are made so that while starting vxesd daemon '/etc/vx/vxesd' gets
created with 'drwx------' permissions.

* 2752178 (Tracking ID: 2741240)

SYMPTOM:
In a VxVM environment, "vxdg join", when executed during heavy I/O load, fails
with the message below.

VxVM vxdg ERROR V-5-1-4597 vxdg join [source_dg] [target_dg] failed
join failed : Commit aborted, restart transaction
join failed : Commit aborted, restart transaction

Half of the disks that were part of source_dg become part of target_dg, 
whereas the other half have no DG details.

DESCRIPTION:
In a vxdg join transaction, VxVM has implemented it as a two phase transaction. 
If the transaction fails after the first phase and during the second phase, 
half 
of the disks belonging to source_dg will become part of target_dg and the other 
half of the disks will be in a complex irrecoverable state. Also, in heavy IO 
situation, any retry limit (i.e.) a limit to retry transactions can be easily 
exceeded.

RESOLUTION:
"vxdg join" is now designed as a one phase atomic transaction and the retry 
limit is eliminated.

* 2774907 (Tracking ID: 2771452)

SYMPTOM:
In a lossy and high-latency network, I/O hangs on the VVR primary. Just before
the I/O hang, the RLINK frequently connects and disconnects.

DESCRIPTION:
In a lossy and high-latency network, the RLINK gets disconnected because of
heartbeat timeouts. As part of the RLINK disconnect, the communication port is
deleted. During this process, the RVG is serialized and the I/Os are kept in a
special queue, rv_restartq. The I/Os in rv_restartq are supposed to be
restarted once the port deletion is successful.
The port deletion involves the termination of all the communication server
processes. Because of a bug in the port deletion logic, the global variable
which keeps track of the number of communication server processes got
decremented twice. This caused the port deletion process to hang, so the I/Os
in rv_restartq were never restarted.

RESOLUTION:
In the port deletion logic, it is made sure that the global variable which
keeps track of the number of communication server processes gets decremented
correctly.

Patch ID: PHCO_42807, PHKL_42808

* 2440015 (Tracking ID: 2428170)

SYMPTOM:
I/O hangs when reading or writing to a volume after a total storage 
failure in CVM environments with Active-Passive arrays.

DESCRIPTION:
In the event of a storage failure, in active-passive environments, 
the CVM-DMP fail over protocol is initiated. This protocol is responsible for 
coordinating the fail-over of primary paths to secondary paths on all nodes in 
the 
cluster.
In the event of a total storage failure, where both the primary paths and 
secondary paths fail, in some situations the protocol fails to cleanup some 
internal structures, leaving the devices quiesced.

RESOLUTION:
After a total storage failure, all devices should be un-quiesced, 
allowing the I/Os to fail. The CVM-DMP protocol has been changed to clean up 
devices even if all paths to a device have been removed.

* 2477272 (Tracking ID: 2169726)

SYMPTOM:
After the import operation, the imported diskgroup contains a combination of 
cloned and original disks. For example, after importing a diskgroup which has 
four disks, two of the disks in the imported diskgroup are cloned disks and 
the other two are original disks.

DESCRIPTION:
For a particular diskgroup, if some of the original disks are not available at 
the time of diskgroup import operation and the corresponding cloned disks are 
present, then the diskgroup imported through vxdg import operation contains 
combination of cloned and original disks.
Example - 
Diskgroup named dg1 with the disks disk1 and disk2 exists on some machine. 
Clones of disks named disk1_clone disk2_clone are also available. If disk2 goes 
offline and the import for dg1 is performed, then the resulting diskgroup will 
contain disks disk1 and disk2_clone.

RESOLUTION:
The diskgroup import operation will consider cloned disks only if no original 
disk is available. If any of the original disks exists at the time of import 
operation, then the import operation will be attempted using original disks 
only.

* 2493635 (Tracking ID: 2419803)

SYMPTOM:
Secondary Site panics in VVR (Veritas Volume Replicator).
Stack trace might look like:

kmsg_sys_snd+0xa8()
nmcom_send_tcp+0x800()
nmcom_do_send+0x290()
nmcom_throttle_send+0x178()
nmcom_sender+0x350()
thread_start+4()

DESCRIPTION:
While the Secondary site is communicating with the Primary site, if it
encounters an "EAGAIN" (try again) error, it tries to send the data on the next
connection. If all the session connections have not been established by this
time, a panic results because the connection is not initialized.

RESOLUTION:
Code changes have been made to check for a valid connection before sending data.

* 2497637 (Tracking ID: 2489350)

SYMPTOM:
In a Storage Foundation environment running Symantec Oracle Disk Manager (ODM),
Veritas File System (VxFS), Cluster volume Manager (CVM) and Veritas Volume
Replicator (VVR), kernel memory is leaked under certain conditions.

DESCRIPTION:
In CVR (CVM + VVR), under certain conditions (for example, when I/O throttling
gets enabled or the kernel messaging subsystem is overloaded), the previously
allocated I/O resources are freed and the I/Os are restarted afresh. While
freeing the I/O resources, the VVR primary node doesn't free the kernel memory
allocated for the FS-VM private information data structure, causing a kernel
memory leak of 32 bytes for each restarted I/O.

RESOLUTION:
Code changes are made in VVR to free the kernel memory allocated for FS-VM
private information data structure before the I/O is restarted afresh.

* 2497796 (Tracking ID: 2235382)

SYMPTOM:
I/Os can hang in the DMP driver when I/Os are in progress while a path
failover is carried out.

DESCRIPTION:
While restoring a failed path to a non-A/A LUN, the DMP driver checks whether
there are any pending I/Os on the same dmpnode. If any are present, DMP marks
the corresponding LUN with a special flag so that path failover/failback can be
triggered by the pending I/Os. There is a window here: if, by chance, all the
pending I/Os return before the dmpnode is marked, then any future I/Os on the
dmpnode get stuck in the wait queues.

RESOLUTION:
Make sure that the flag is set on the LUN only while it has pending I/Os, so
that failover can be triggered by those pending I/Os.
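
Schematically (hypothetical names; pthreads stand in for the kernel locking),
the fix makes the pending-I/O check and the flag update atomic with respect
to I/O completion, so the flag can no longer be set after the last pending
I/O has already returned:

#include <pthread.h>

struct dmpnode {
    pthread_mutex_t lock;
    int             pending;           /* I/Os outstanding on this node */
    int             fo_flag;           /* failover to be triggered by the
                                          pending I/Os */
};

static void mark_for_failover(struct dmpnode *dn)
{
    pthread_mutex_lock(&dn->lock);
    if (dn->pending > 0)               /* set the flag only while an I/O
                                          is still outstanding */
        dn->fo_flag = 1;
    pthread_mutex_unlock(&dn->lock);
}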

* 2507120 (Tracking ID: 2438426)

SYMPTOM:
The following messages are displayed after vxconfigd is started.

pp_claim_device: Could not get device number for /dev/rdsk/emcpower0 
pp_claim_device: Could not get device number for /dev/rdsk/emcpower1

DESCRIPTION:
Device Discovery Layer(DDL) has incorrectly marked a path under dmp device with 
EFI flag even though there is no corresponding Extensible Firmware Interface 
(EFI) device in /dev/[r]dsk/. As a result, Array Support Library (ASL) issues a 
stat command on non-existent EFI device and displays the above messages.

RESOLUTION:
Avoided marking EFI flag on Dynamic MultiPathing (DMP) paths which correspond to 
non-efi devices.

* 2507124 (Tracking ID: 2484334)

SYMPTOM:
The system panic occurs with the following stack while collecting the DMP 
stats.

dmp_stats_is_matching_group()
dmp_group_stats()
dmp_get_stats()
gendmpioctl()
dmpioctl()

DESCRIPTION:
Whenever new devices are added to the system, the stats table is adjusted to
accommodate the new devices in DMP. A race exists between the stats collection
thread and the thread which adjusts the stats table to accommodate the new
devices. The race can result in the stats collection thread accessing memory
beyond the known size of the table, causing the system panic.

RESOLUTION:
The stats collection code in the DMP is rectified to restrict the access to the 
known size of the stats table.

* 2508294 (Tracking ID: 2419486)

SYMPTOM:
Data corruption is observed with a single path when the naming scheme is 
changed from enclosure based (EBN) to OS Native (OSN).

DESCRIPTION:
The Data corruption can occur in the following configuration, 
when the naming scheme is changed while applications are on-line.

1. The DMP device is configured with single path or the devices are controlled
   by Third party Multipathing Driver (Ex: MPXIO, MPIO etc.,)

2. The DMP device naming scheme is EBN (enclosure based naming) and 
persistence=yes

3. The naming scheme is changed to OSN using the following command
   # vxddladm set namingscheme=osn


There is a possibility of a change in the name of a VxVM device (DA record)
while the naming scheme is changing. As a result, the device attribute list 
is updated with the new DMP device names. Due to a bug in the code which 
updates the attribute list, the VxVM device records are mapped to the wrong 
DMP devices.

Example:

Following are the device names with EBN naming scheme.

MAS-usp0_0   auto:cdsdisk    hitachi_usp0_0  prod_SC32    online
MAS-usp0_1   auto:cdsdisk    hitachi_usp0_4  prod_SC32    online
MAS-usp0_2   auto:cdsdisk    hitachi_usp0_5  prod_SC32    online
MAS-usp0_3   auto:cdsdisk    hitachi_usp0_6  prod_SC32    online
MAS-usp0_4   auto:cdsdisk    hitachi_usp0_7  prod_SC32    online
MAS-usp0_5   auto:none       -            -            online invalid
MAS-usp0_6   auto:cdsdisk    hitachi_usp0_1  prod_SC32    online
MAS-usp0_7   auto:cdsdisk    hitachi_usp0_2  prod_SC32    online
MAS-usp0_8   auto:cdsdisk    hitachi_usp0_3  prod_SC32    online
MAS-usp0_9   auto:none       -            -            online invalid
disk_0       auto:cdsdisk    -            -            online
disk_1       auto:none       -            -            online invalid

bash-3.00# vxddladm set namingscheme=osn

The following is the output after executing the above command.
MAS-usp0_9 is changed to MAS-usp0_6, and the following devices
are changed accordingly.

bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
MAS-usp0_0   auto:cdsdisk    hitachi_usp0_0  prod_SC32    online
MAS-usp0_1   auto:cdsdisk    hitachi_usp0_4  prod_SC32    online
MAS-usp0_2   auto:cdsdisk    hitachi_usp0_5  prod_SC32    online
MAS-usp0_3   auto:cdsdisk    hitachi_usp0_6  prod_SC32    online
MAS-usp0_4   auto:cdsdisk    hitachi_usp0_7  prod_SC32    online
MAS-usp0_5   auto:none       -            -            online invalid
MAS-usp0_6   auto:none       -            -            online invalid
MAS-usp0_7   auto:cdsdisk    hitachi_usp0_1  prod_SC32    online
MAS-usp0_8   auto:cdsdisk    hitachi_usp0_2  prod_SC32    online
MAS-usp0_9   auto:cdsdisk    hitachi_usp0_3  prod_SC32    online
c4t20000014C3D27C09d0s2 auto:none       -            -            online invalid
c4t20000014C3D26475d0s2 auto:cdsdisk    -            -            online

RESOLUTION:
Code changes are made to update device attribute list correctly even if name of
the VxVM device is changed while the naming scheme is changing.

* 2508418 (Tracking ID: 2390431)

SYMPTOM:
In a Disaster Recovery environment, when the DCM (Data Change Map) is active,
the system panics during an SRL (Storage Replicator Log)/DCM flush due to a
missing parent on one of the DCMs in an RVG (Replicated Volume Group).

DESCRIPTION:
The DCM flush happens during every log update, and its frequency depends on the 
I/O load. If the I/O load is high, the DCM flush happens very often, and if 
there are more volumes in the RVG, the frequency is higher still. Every DCM 
flush triggers the DCM flush on all the volumes in the RVG. If there are 50 
volumes in an RVG, each DCM flush creates 50 children controlled by one parent 
SIO. Once all the 50 children are done, the parent SIO releases itself for the 
next flush. When the DCM flush of each child completes, it detaches itself from 
the parent by setting the parent field to NULL. It can happen that the 49th 
child is done but, before it detaches itself from the parent, the 50th child 
completes and releases the parent SIO for the next DCM flush. Before the 49th 
child detaches, the new DCM flush is started on the same 50th child. After the 
next flush has started, the 49th child of the previous flush detaches itself 
from the parent and, since it is a static SIO, indirectly resets the new 
flush's parent field. Also, the lock is not obtained before modifying the SIO 
state field in a few scenarios.

RESOLUTION:
Before reducing the children count, detach the parent first. This will make 
sure the new flush will not race with the previous flush. Protect the field 
with the required lock in all the scenarios.
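
The ordering fix can be expressed in a few lines (hypothetical names; the
real staged-I/O code is kernel-side): a child detaches itself from the
parent before dropping the child count, so a racing final child cannot
release the parent for the next flush while this child still points to it.

#include <pthread.h>

struct parent_sio {
    pthread_mutex_t lock;
    int             nchildren;
};

struct child_sio {
    struct parent_sio *parent;
};

static void start_next_flush(struct parent_sio *p)
{
    (void)p;                           /* kick off the next DCM flush */
}

static void child_done(struct child_sio *c)
{
    struct parent_sio *p = c->parent;
    int last;

    pthread_mutex_lock(&p->lock);
    c->parent = NULL;                  /* detach FIRST, under the lock */
    last = (--p->nchildren == 0);
    pthread_mutex_unlock(&p->lock);

    if (last)
        start_next_flush(p);           /* no child still points at p */
}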

* 2511928 (Tracking ID: 2420386)

SYMPTOM:
Corrupted data is seen near the end of a sub-disk, on thin-reclaimable 
disks with either CDS EFI or sliced disk formats.

DESCRIPTION:
In environments with thin-reclaim disks running with either CDS-EFI 
disks or sliced disks, misaligned reclaims can be initiated. In some situations, 
when reclaiming a sub-disk, the reclaim does not take into account the correct 
public region start offset, which in rare instances can potentially result in 
reclaiming data before the sub-disk which is being reclaimed.

RESOLUTION:
The public offset is taken into account when initiating all reclaim
operations.

* 2515137 (Tracking ID: 2513101)

SYMPTOM:
When VxVM is upgraded from 4.1MP4RP2 to 5.1SP1RP1, the data on CDS disk gets
corrupted.

DESCRIPTION:
When CDS disks are initialized with VxVM version 4.1MP4RP2, the number of 
cylinders is calculated based on the disk's raw geometry. If the calculated 
number of cylinders exceeds the Solaris VTOC limit (65535), a truncated value 
of the number of cylinders gets written in the CDS label because of unsigned 
integer overflow.
    After VxVM is upgraded to 5.1SP1RP1, the CDS label gets wrongly written in
the public region, leading to the data corruption.

RESOLUTION:
Code changes are made to suitably adjust the number of tracks and heads so 
that the calculated number of cylinders is within the Solaris VTOC limit.

* 2517819 (Tracking ID: 2530279)

SYMPTOM:
vxesd consumes 100% CPU and hangs, with the following stack:

vold_open ()
es_update_ddlconfig ()
do_get_ddl_config ()
start_ipc ()
main ()

DESCRIPTION:
Due to inappropriate handling of thread control, a race condition occurs inside
the ESD code path. This race condition consumes 100% of the CPU cycles and
leads to a hang state.

RESOLUTION:
Thread control handling is corrected to avoid the race condition in the code
path.

* 2525333 (Tracking ID: 2148851)

SYMPTOM:
"vxdisk resize" operation fails on a disk with VxVM cdsdisk/simple/sliced layout
on Solaris/Linux platform with the following message:

      VxVM vxdisk ERROR V-5-1-8643 Device emc_clariion0_30: resize failed: New
      geometry makes partition unaligned

DESCRIPTION:
The new cylinder size selected during "vxdisk resize" operation is unaligned with
the partitions that existed prior to the "vxdisk resize" operation.

RESOLUTION:
The algorithm to select the new geometry has been redesigned such that the new
cylinder size is always aligned with the existing as well as new partitions.

* 2528144 (Tracking ID: 2528133)

SYMPTOM:
The 'vxprint -l' command gives the following error (along with the output) when
multiple DGs have the same DM_NAME:
VxVM vxdisk ERROR V-5-1-0  disk100 - Record in multiple disk groups

DESCRIPTION:
'vxprint -l' internally takes one record (in this case a DM_NAME) at a
time and searches for that record in all the DGs. If it finds the record in
more than one DG, it sets the DGL_MORE flag. This DGL_MORE flag check was
causing the error in the case of 'vxprint -l'.

RESOLUTION:
Since it is a valid operation to create multiple DGs with the same
DM_NAME, the error check is not required at all. The code change removes the
flag-checking logic from the function.

* 2531983 (Tracking ID: 2483053)

SYMPTOM:
The VVR Primary system consumes very high kernel heap memory and appears to 
be hung.

DESCRIPTION:
There is a race between the REGION LOCK deletion thread, which runs as 
part of SLAVE leave reconfiguration, and the thread which processes the 
DATA_DONE message coming from the log client to the logowner. Because of this 
race, the flags which store the status information about the I/Os were not 
correctly updated. This caused a lot of SIOs to be stuck in a queue, consuming 
a large amount of kernel heap.

RESOLUTION:
The code changes are made to take the proper locks while updating 
the SIOs' fields.

* 2531987 (Tracking ID: 2510523)

SYMPTOM:
In CVM-VVR configuration, I/Os on "master" and "slave" nodes hang when "master"
role is switched to the other node using "vxclustadm setmaster" command.

DESCRIPTION:
Under heavy I/O load, the I/Os are sometimes throttled in VVR if the number of
outstanding I/Os on the SRL reaches a certain limit (2048 I/Os).
When the "master" role is switched to the other node by using the "vxclustadm
setmaster" command, the throttled I/Os on the original master are never
restarted. This causes the I/O hang.

RESOLUTION:
Code changes are made in VVR to make sure the throttled I/Os are restarted
before "master" switching is started.

* 2531993 (Tracking ID: 2524936)

SYMPTOM:
Disk group is disabled after rescanning disks with "vxdctl enable"
command. The error messages given below are seen in vxconfigd debug log output:
              
<timestamp>  VxVM vxconfigd ERROR V-5-1-12223 Error in claiming /dev/<disk>: 
The process file table is full. 
<timestamp>  VxVM vxconfigd ERROR V-5-1-12223 Error in claiming /dev/<disk>: 
The process file table is full. 
...
<timestamp> VxVM vxconfigd ERROR V-5-1-12223 Error in claiming /dev/<disk>: The 
process file table is full.

DESCRIPTION:
When the attachment of a shared memory segment to the process address 
space fails, proper error handling of this case was missing in the vxconfigd 
code. This results in errors in claiming disks and in the offlining of 
configuration copies, which in turn results in the disabling of the disk group.

RESOLUTION:
Code changes are made to handle the failure case while creating 
shared memory segment.

* 2552402 (Tracking ID: 2432006)

SYMPTOM:
The system intermittently hangs during boot if the disk is encapsulated.
When this problem occurs, the OS boot process stops after printing this:
"VxVM sysboot INFO V-5-2-3409 starting in boot mode..."

DESCRIPTION:
The boot process hung due to a deadlock between two threads: a VxVM
transaction thread and a thread attempting a read on the root volume 
issued by dhcpagent. The read I/O is deferred until the transaction is
finished, but the read count incremented earlier is not properly adjusted.

RESOLUTION:
Proper care is taken to decrement the pending read count if the read I/O is deferred.

* 2553391 (Tracking ID: 2536667)

SYMPTOM:
In a CVM (Clustered Volume Manager) environment, the slave node panics with the 
following stack:
e_block_thread()
pse_block_thread()
pse_sleep_thread()
volsiowait()
voldio()
vol_voldio_read()
volconfig_ioctl()
volsioctl_real()
volsioctl()
vols_ioctl()
rdevioctl()
spec_ioctl()
vnop_ioctl()
vno_ioctl()

DESCRIPTION:
Panic happened due to accessing a stale DG pointer as DG got deleted before the 
I/O returned. It may happen on cluster configuration where commands generating 
private region i/os and "vxdg deport/delete" commands are executing 
simultaneously on two nodes of the cluster.

RESOLUTION:
Code changes are made to drain private region I/Os before deleting the DG.

* 2568208 (Tracking ID: 2431448)

SYMPTOM:
Panic in vol_rv_add_wrswaitq() while processing a duplicate message. Stack
trace of the panic:

vxio:vol_rv_add_wrswaitq
vxio:vol_rv_msg_metadata_req
vxio:vol_get_timespec_latest 
vxio:vol_mv_kmsg_request
vxio:vol_kmsg_obj_request 
vxio:kmsg_gab_poll
vxio:vol_kmsg_request_receive
vxio:kmsg_gab_poll
vxio:vol_kmsg_receiver

DESCRIPTION:
On receiving a message from a slave node, VVR looks for a duplicate message
before adding it to the per-node queue. In the case of a duplicate message, VVR
tries to copy some data structures from the old message; if processing of the
old message is already complete, we might end up accessing a freed pointer,
which causes the panic.

RESOLUTION:
For a duplicate message, copying from the old message is not required, since
the duplicate message is discarded. Removing the code that copies the data
structures resolved this panic.

* 2574840 (Tracking ID: 2344186)

SYMPTOM:
In a master-slave configuration with FMR3/DCO volumes, a rebooted cluster node 
fails to join the cluster again, with the following error messages on the 
console:

[..]
Jul XX 18:44:09 vienna vxvm:vxconfigd: [ID 702911 daemon.error] V-5-1-11092 
cleanup_client: (Volume recovery in progress) 230
Jul XX 18:44:09 vienna vxvm:vxconfigd: [ID 702911 daemon.error] V-5-1-11467 
kernel_fail_join() :                Reconfiguration interrupted: Reason is 
retry to add a node failed (13, 0)
[..]

DESCRIPTION:
VxVM volumes with FMR3/DCO have an inbuilt DRL mechanism to track the 
disk blocks of in-flight I/Os in order to recover the data much quicker after 
a node crash. A joining node therefore waits for the variable responsible for 
recovery to be unset before joining the cluster. However, due to a bug in the 
FMR3/DCO code, this variable remained set forever, leading to the node join 
failure.

RESOLUTION:
Modified the FMR3/DCO code to appropriately set and unset this 
recovery variable.

* 2583307 (Tracking ID: 2185069)

SYMPTOM:
In a CVR setup, while application I/Os are in progress on all nodes of the
primary site, bringing down a slave node results in a panic on the master node,
and the following stack trace is displayed:

volsiodone
vol_subdisksio_done
volkcontext_process 
voldiskiodone 
voldiskiodone_intr 
voldmp_iodone 
bio_endio 
gendmpiodone 
dmpiodone 
bio_endio 
req_bio_endio 
blk_update_request 
blk_update_bidi_request 
blk_end_bidi_request 
blk_end_request 
scsi_io_completion 
scsi_finish_command 
scsi_softirq_done 
blk_done_softirq 
__do_softirq 
call_softirq 
do_softirq 
irq_exit 
smp_call_function_single_interrupt 
call_function_single_interrupt

DESCRIPTION:
An internal data structure access is not serialized properly, resulting in
corruption of that data structure. This triggers the panic.

RESOLUTION:
The code is modified to properly serialize access to the internal data structure
so that its contents are not corrupted under any conditions.

* 2603605 (Tracking ID: 2419948)

SYMPTOM:
A race between the SRL flush due to SRL overflow and the kernel logging code 
leads to a panic.

DESCRIPTION:
The RLINK is disconnected and its state is moved to HALT. The Primary RVG SRL 
overflows, since there is no replication, and this initiates DCM logging.

This changes the state of the RLINK to DCM (since the RLINK is already
disconnected, the final state remains HALT). During the SRL overflow, if the
RLINK connection is restored, the RLINK goes through many state changes before
completing the connection.

If the SRL overflow and kernel logging code finishes in between these state
transitions and does not find the RLINK in VOLRP_PHASE_HALT, the system panics.

RESOLUTION:
Consider the above state change as valid, and make sure the SRL overflow code 
does not always expect the HALT state: take action for the other states, or 
wait for the full state transition of the RLINK connection to complete.

* 2676703 (Tracking ID: 2553729)

SYMPTOM:
The following is observed during 'Upgrade' of VxVM (Veritas Volume Manager):

i) The 'clone_disk' flag is seen on non-clone disks in the STATUS field when 
'vxdisk -e list' is executed after an upgrade to 5.1SP1 from lower versions of 
VxVM.


Eg:

DEVICE       TYPE           DISK        GROUP        STATUS
emc0_0054    auto:cdsdisk   emc0_0054    50MP3dg     online clone_disk
emc0_0055    auto:cdsdisk   emc0_0055    50MP3dg     online clone_disk

ii) Disk groups (dg) whose versions are less than 140 do not get imported after 
upgrade to VxVM versions 5.0MP3RP5HF1 or 5.1SP1RP2.

Eg:

# vxdg -C import <dgname>
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed:
Disk group version doesn't support feature; see the vxdg upgrade command

DESCRIPTION:
While upgrading VxVM:

i) After upgrade to 5.1SP1 or higher versions:
If a dg which was created on a lower version is deported and imported back on 
5.1SP1 after the upgrade, then the "clone_disk" flag gets set on non-cloned 
disks because of a design change in the UDID (unique disk identifier) of the 
disks.

ii) After upgrade to 5.0MP3RP5HF1 or 5.1SP1RP2:
Import of dg with versions less than 140 fails.

RESOLUTION:
Code changes are made to ensure that:
i) clone_disk flag does not get set for non-clone disks after the upgrade.
ii) Disk groups with versions less than 140 get imported after the upgrade.

Patch ID: PHCO_42245, PHKL_42246

* 2163809 (Tracking ID: 2151894)

SYMPTOM:
Internal testing utility volassert prints a message:

Volume TCv1-548914: recover_offset=0, expected 1024

DESCRIPTION:
The behavior of recover_offset was changed in another incident, resetting it
back to zero after starting a volume. This works for normal cases but not for
RAID-5 volumes.

RESOLUTION:
The recover offset is now set at the end of the volume after grow/init operations.

* 2169348 (Tracking ID: 2094672)

SYMPTOM:
The master node hangs with a lot of I/Os and during a node reconfiguration due to a node leave.

DESCRIPTION:
The reconfiguration is stuck because the I/O is not drained completely. The 
master node is responsible for handling the I/O for both the primary and the 
slaves. When a slave node dies, the pending slave I/O on the master node is not 
cleaned up properly. This leaves some I/Os in the queue un-deleted.

RESOLUTION:
Clean up the I/O during the node failure and reconfiguration scenario.

* 2198041 (Tracking ID: 2196918)

SYMPTOM:
When creating a space-optimized snapshot by specifying the cache-object size
either as a percentage of the volume size or as an absolute size, the snapshot
creation can fail with an error similar to the following:
"VxVM vxassist ERROR V-5-1-10127 creating volume snap-dvol2-CV01:
        Volume or log length violates disk group alignment"

DESCRIPTION:
VxVM expects all virtual storage objects to have size aligned to a
value which is set diskgroup-wide. One can get this value with:
# vxdg list testdg|grep alignment
alignment: 8192 (bytes)

When the cache size is specified as a percentage, the value might not align
with the dg alignment. If it is not aligned, the creation of the cache volume
can fail with the specified error message.

RESOLUTION:
After computing the cache size from the specified percentage value, it
is aligned up to the diskgroup alignment value before trying to create the
cache volume.
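
The rounding itself is a one-line computation; a sketch (vxassist's actual
internals are not shown here):

#include <stdio.h>

/* Round a size up to the next multiple of the diskgroup alignment. */
static unsigned long long align_up(unsigned long long size,
                                   unsigned long long align)
{
    return ((size + align - 1) / align) * align;
}

int main(void)
{
    /* 20% of a 1000000-byte volume is 200000 bytes, which is not a
     * multiple of 8192; it is rounded up to 204800. */
    printf("%llu\n", align_up(200000ULL, 8192ULL));
    return 0;
}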

* 2204146 (Tracking ID: 2200670)

SYMPTOM:
Some disks are left detached and not recovered by vxattachd.

DESCRIPTION:
If the shared disk group is not imported, or the node is not part of the 
cluster, when storage connectivity to the failed node is restored, the 
vxattachd daemon does not get notified about the connectivity restore and does 
not trigger a reattach. Even if the disk group is later imported or the node 
joins the CVM cluster, the disks are not automatically reattached.

RESOLUTION:
i) Missing events for a deported diskgroup: the fix handles this by 
listening for the import event of the diskgroup and triggering the brute-force 
recovery for that specific diskgroup.
ii) Parallel recovery of volumes from the same disk: vxrecover automatically 
serializes the recovery of objects that are on the same disk to avoid 
back-and-forth head movements. An option is also provided in vxattachd and 
vxrecover to control the number of parallel recoveries that can happen for 
objects from the same disk.

* 2211971 (Tracking ID: 2190020)

SYMPTOM:
Under heavy I/O system load, the DMP daemon requests ~1 megabyte of contiguous 
memory, which in turn slows down the system due to continuous page swapping.

DESCRIPTION:
The DMP daemon keeps calculating statistical information (every 1 second by 
default). When the I/O load is high, the I/O statistics buffer allocation code 
path dynamically allocates ~1 megabyte of contiguous memory per CPU.

RESOLUTION:
To avoid repeated memory allocation/free calls in every DMP I/O stats daemon 
interval, a two-buffer strategy was implemented for storing DMP stats records. 
Two buffers of the same size are allocated at the beginning; one buffer is used 
for writing active records while the other is read by the I/O stats daemon. The 
two buffers are swapped every stats daemon interval.
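
A simplified userspace analogue of the double-buffering pattern (the real
daemon works per-CPU in the kernel and differs in detail; the locking below
is only for illustration): both buffers are allocated once, the I/O path
writes into the active one, and the stats daemon swaps them each interval
and reads the now-idle buffer, so no allocation happens per interval.

#include <pthread.h>

#define NREC 4096

struct stats_rec {
    unsigned long reads, writes;
};

static struct stats_rec bufs[2][NREC]; /* allocated once, up front */
static int active;                     /* buffer the I/O path writes to */
static pthread_mutex_t swap_lock = PTHREAD_MUTEX_INITIALIZER;

/* I/O completion path: record into the active buffer. */
static void stats_record(int dev, int is_write)
{
    pthread_mutex_lock(&swap_lock);
    if (is_write)
        bufs[active][dev].writes++;
    else
        bufs[active][dev].reads++;
    pthread_mutex_unlock(&swap_lock);
}

/* Stats daemon, once per interval: swap the buffers, then read the idle
 * one while the I/O path keeps writing to the other. */
static struct stats_rec *stats_collect(void)
{
    int idle;

    pthread_mutex_lock(&swap_lock);
    idle = active;
    active = !active;
    pthread_mutex_unlock(&swap_lock);

    return bufs[idle];                 /* caller reads, then zeroes it */
}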

* 2214184 (Tracking ID: 2202710)

SYMPTOM:
Transactions on the RLINK are not allowed during an SRL-to-DCM flush.

DESCRIPTION:
The present implementation doesn't allow an RLINK transaction to go through if
an SRL-to-DCM flush is in progress. As the SRL overflows, VVR starts reading
from the SRL and marks the dirty regions in the corresponding DCMs of the data
volumes; this is called the SRL-to-DCM flush. During the SRL-to-DCM flush,
transactions on the RLINK are not allowed. The time to complete the SRL flush
depends on the SRL size; it can range from minutes to many hours. If the user
initiates any transaction on the RLINK, it hangs until the SRL flush completes.

RESOLUTION:
The code behavior is changed to allow RLINK transactions during an SRL flush.
The fix stops the SRL flush to let the transaction go ahead, and restarts the
flush after the transaction completes.

* 2220064 (Tracking ID: 2228531)

SYMPTOM:
Vradmind hangs in vol_klog_lock() on VVR (Veritas Volume Replicator) Secondary 
site.
Stack trace might look like:

genunix:cv_wait+0x38()
vxio:vol_klog_lock+0x5c()
vxio:vol_mv_close+0xc0()
vxio:vol_close_object+0x30()
vxio:vol_object_ioctl+0x198()
vxio:voliod_ioctl()
vxio:volsioctl_real+0x2d4()
specfs:spec_ioctl()
genunix:fop_ioctl+0x20()
genunix:ioctl+0x184()
unix:syscall_trap32+0xcc()

DESCRIPTION:
In this scenario, a flag value should be set for vradmind to be signalled and 
woken up. As the flag value is not set here, it causes an enduring sleep. A race 
condition exists between setting and resetting of the flag values, resulting in 
the hang.

RESOLUTION:
Code changes are made to hold a lock to avoid the race condition between 
setting and resetting of the flag values.

* 2232829 (Tracking ID: 2232789)

SYMPTOM:
With NetApp metro cluster disk arrays, takeover operations (toggling of LUN
ownership within NetApp filer) can lead to IO failures on VxVM volumes.

Example of an IO error message at VxVM
VxVM vxio V-5-0-2 Subdisk disk_36-03 block 24928: Uncorrectable write error

DESCRIPTION:
During the takeover operation, the array fails the PGR and IO SCSI commands on
secondary paths with the following transient error codes - 0x02/0x04/0x0a
(NOT READY/LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION) or
0x02/0x04/0x01 (NOT READY/LOGICAL UNIT IS IN PROCESS OF BECOMING READY) -  
that are not handled properly within VxVM.

RESOLUTION:
The required code logic is included within the APM so that SCSI commands with
transient errors are retried for the duration of the NetApp filer reconfig time
(60 seconds) before the I/Os are failed on the VxVM volumes.
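
The APM-level handling amounts to a bounded retry loop keyed on the transient
sense codes. The sketch below is illustrative only (the stub simulates the
SCSI command; none of these names come from the actual APM source):

#include <stdio.h>
#include <time.h>

#define RECONFIG_SECS 60

/* Transient takeover sense data: 02/04/0a and 02/04/01. */
static int is_transient(int skey, int asc, int ascq)
{
    return skey == 0x02 && asc == 0x04 && (ascq == 0x0a || ascq == 0x01);
}

/* Stub that fails with a transient error twice, then succeeds. */
static int attempts;
static int issue_scsi_cmd(int *skey, int *asc, int *ascq)
{
    if (++attempts < 3) {
        *skey = 0x02; *asc = 0x04; *ascq = 0x0a;
        return -1;
    }
    return 0;
}

static int do_cmd_with_retry(void)
{
    time_t deadline = time(NULL) + RECONFIG_SECS;
    int skey = 0, asc = 0, ascq = 0, rc;

    for (;;) {
        rc = issue_scsi_cmd(&skey, &asc, &ascq);
        if (rc == 0 || !is_transient(skey, asc, ascq))
            return rc;                 /* success, or a real error */
        if (time(NULL) >= deadline)
            return rc;                 /* reconfig window elapsed */
        /* a brief delay before the retry would go here */
    }
}

int main(void)
{
    int rc = do_cmd_with_retry();

    printf("rc=%d after %d attempts\n", rc, attempts);
    return 0;
}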

* 2234292 (Tracking ID: 2152830)

SYMPTOM:
A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
For example:
# vxdg import mydg
VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import
failed:
No valid disk found containing disk group

DESCRIPTION:
If the original devices are offline or unavailable, the vxdg(1M) command picks 
up cloned disks for import. The DG import fails unless the clones are tagged 
and the tag is specified during the DG import. The import failure is expected, 
but the error message is non-descriptive and does not specify the corrective 
action to be taken by the user.

RESOLUTION:
The code is modified to give the correct error message when duplicate clones 
exist during import. Also, details of the duplicate clones are reported in the 
system log.

* 2241149 (Tracking ID: 2240056)

SYMPTOM:
'vxdg move/split/join' may fail during high I/O load.

DESCRIPTION:
During heavy I/O load, a 'dg move' transaction may fail because of an 
open/close assertion, and a retry is done. As the retry limit is set to 30, 
'dg move' fails if the retry count hits the limit.

RESOLUTION:
The default transaction retry is changed to unlimited, and a new option is 
introduced to 'vxdg move/split/join' to set the transaction retry limit, as 
follows:

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] move 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] split 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o transretry=retrylimit] join src_diskgroup 
dst_diskgroup
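
For example, a move that should give up after at most 100 transaction retries 
might be invoked as follows (the diskgroup and volume names are hypothetical):

# vxdg -o expand -o transretry=100 move srcdg dstdg vol01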

* 2247645 (Tracking ID: 2243044)

SYMPTOM:
Initialization of the VxVM cdsdisk layout fails on a disk of size greater than 
or equal to 1 TB on the HP-UX/ia-64 platform.

DESCRIPTION:
During the initialization of the VxVM cdsdisk layout on a disk of size greater 
than or equal to 1 TB on the HP-UX/ia-64 platform, a check is made for an 
existing HP-UX LVM layout on the disk. During this check, the I/O was performed 
incorrectly, using the DMP (Dynamic Multi-Pathing driver) paths associated with 
the existing GPT partitions on the disk.

RESOLUTION:
This issue has been resolved by performing i/o to the whole device path during 
the initialization of VxVM cdsdisk layout on a disk with size greater than or 
equal to 1 TB.

* 2248354 (Tracking ID: 2245121)

SYMPTOM:
Rlinks do not connect for NAT (Network Address Translations) configurations.

DESCRIPTION:
When VVR (Veritas Volume Replicator) is replicating over a Network Address 
Translation (NAT) based firewall, rlinks fail to connect resulting in 
replication failure.

Rlinks do not connect as there is a failure during exchange of VVR heartbeats.
For NAT based firewalls, conversion of mapped IPV6 (Internet Protocol Version 
6) address to IPV4 (Internet Protocol Version 4) address is not handled which 
caused VVR heartbeat exchange with incorrect IP address leading to VVR 
heartbeat failure.

RESOLUTION:
Code fixes have been made to appropriately handle the exchange of VVR 
heartbeats under NAT based firewall.

* 2253269 (Tracking ID: 2263317)

SYMPTOM:
vxdg(1M) man page does not clearly describe diskgroup import and destroy 
operations for the case in which original diskgroup is destroyed and cloned 
disks are present.

DESCRIPTION:
Diskgroup import with dgid is considered a recovery operation. Therefore, while 
importing with dgid, even though the original diskgroup is destroyed, both the 
original as well as cloned disks are considered as available disks. Hence, the 
original diskgroup is imported in such a scenario.
The existing vxdg(1M) man page does not clearly describe this scenario.

RESOLUTION:
Modified the vxdg(1M) man page to clearly describe the scenario.

* 2256728 (Tracking ID: 2248730)

SYMPTOM:
The command hangs if "vxdg import" is called from a script with STDERR
redirected.

DESCRIPTION:
If a script invokes "vxdg import" with STDERR redirected, the script does not
finish until the DG import and recovery are finished. The pipe between the
script and vxrecover is not closed properly, which keeps the calling script
waiting for vxrecover to complete.

RESOLUTION:
Closed STDERR in vxrecover and redirected the output to
/dev/console.

* 2316309 (Tracking ID: 2316297)

SYMPTOM:
The following error messages are printed on the console every time the system
boots:
 VxVM vxdisk ERROR V-5-1-534 Device [DEVICE NAME]: Device is in use

DESCRIPTION:
During system boot up, while the Volume Manager diskgroups are imported, the 
vxattachd daemon tries to online the disks. Since a disk may already be online, 
an attempt to re-online it gives the below error message:
 VxVM vxdisk ERROR V-5-1-534 Device [DEVICE NAME]: Device is in use

RESOLUTION:
The solution is to check whether the disk is already in the "online" state. If 
so, the re-online is avoided.

* 2323999 (Tracking ID: 2323925)

SYMPTOM:
If the rootdisk is under VxVM control and /etc/vx/reconfig.d/state.d/install-db
file exists, the following messages are observed on the console:

UX:vxfs fsck: ERROR: V-3-25742: /dev/vx/dsk/rootdg/homevol:sanity check 
failed: cannot open /dev/vx/dsk/rootdg/homevol: No such device or address
UX:vxfs fsck: ERROR: V-3-25742: /dev/vx/dsk/rootdg/optvol:sanity check failed: 
cannot open /dev/vx/dsk/rootdg/optvol: No such device or address

DESCRIPTION:
In the vxvm-startup script, there is a check for the
/etc/vx/reconfig.d/state.d/install-db file. If the install-db file exists on the
system, VxVM assumes that the volume manager is not configured and does not
start the volume configuration daemon "vxconfigd". If the "install-db" file
somehow exists on a VxVM rootable system, this causes the failure.

RESOLUTION:
If install-db file exists on the system and the system is VxVM rootable, the
following warning message is displayed on the console:
"This is a VxVM rootable system.
 Volume configuration daemon could not be started due to the presence of
 /etc/vx/reconfig.d/state.d/install-db file.
 Remove the install-db file to proceed"

* 2328219 (Tracking ID: 2253552)

SYMPTOM:
vxconfigd leaks memory while reading the default tunables related to
smartmove (a VxVM feature).

DESCRIPTION:
In Vxconfigd, memory allocated for default tunables related to
smartmove feature is not freed causing a memory leak.

RESOLUTION:
The memory is released after its scope is over.

* 2328268 (Tracking ID: 2285709)

SYMPTOM:
On a VxVM rooted setup with boot devices connected through a Magellan interface
card, the system hangs at early boot time due to transient i/o errors.

DESCRIPTION:
On a VxVM rooted setup with boot devices connected through a Magellan interface
card, transient i/o errors are seen due to a fault in the Magellan interface
card, and the system hangs because DMP does not do error handling this early in
boot.

RESOLUTION:
Changes are done in the DMP code to spawn the dmp_daemon threads, which perform
error handling, path restoration etc., at early DMP module initialization time.
With this change, if a transient error is seen, the dmp_daemon thread will probe
the path and try to bring it online, so that the system does not hang early in
the boot cycle.

* 2328286 (Tracking ID: 2244880)

SYMPTOM:
Initialization of VxVM cdsdisk layout on a disk with size greater than or equal 
to 1 TB fails.

DESCRIPTION:
During the initialization of VxVM cdsdisk layout on a disk with size greater 
than or equal to 1 TB, the alternate (backup) GPT label is written in the last 
33 sectors of the disk. SCSI pass-thru mode was used to write the alternate GPT 
label, and SCSI pass-thru mode could not handle device offsets equal to or 
greater than 1 TB.

RESOLUTION:
This issue has been resolved by using the posix system calls to write the 
alternative GPT label during initialization of VxVM cdsdisk layout.
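The idea of the fix can be sketched with standard POSIX calls; this is an
illustration, not the shipped code, and it assumes 64-bit off_t (large file
support):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>

    #define SECTOR_SIZE        512
    #define GPT_BACKUP_SECTORS 33    /* alternate header + entries */

    /* Write the alternate GPT label into the last 33 sectors through the
     * whole-device path. pwrite() takes a 64-bit off_t, so offsets at or
     * beyond 1 TB are not truncated the way the pass-thru path was. */
    int write_backup_gpt(const char *devpath, off_t disk_sectors,
                         const void *label)
    {
        int fd = open(devpath, O_WRONLY);
        if (fd < 0)
            return -1;
        off_t off = (disk_sectors - GPT_BACKUP_SECTORS) * (off_t)SECTOR_SIZE;
        ssize_t n = pwrite(fd, label, GPT_BACKUP_SECTORS * SECTOR_SIZE, off);
        close(fd);
        return n == GPT_BACKUP_SECTORS * SECTOR_SIZE ? 0 : -1;
    }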

* 2337091 (Tracking ID: 2255182)

SYMPTOM:
If EMC CLARiiON arrays are configured with a different failovermode for each 
host controller (e.g. one HBA has failovermode set to 1 while the other has 2), 
then VxVM's vxconfigd daemon dumps core.

DESCRIPTION:
DDL (VxVM's Device Discovery Layer) determines the array type depending on the 
failovermode setting. DDL expects the same array type to be returned across all 
the paths going to that array. This fundamental assumption of DDL is broken by 
different failovermode settings, thus leading to the vxconfigd core dump.

RESOLUTION:
Validation code is added in DDL to detect such configurations, emit appropriate 
warning messages so that the user can take corrective action, and skip the 
later set of paths that report a different array type.

* 2349653 (Tracking ID: 2349352)

SYMPTOM:
Data corruption is observed on DMP(Dynamic Multipathing) device with single path 
during Storage reconfiguration (LUN addition/removal).

DESCRIPTION:
Data corruption can occur in the following configuration, when new LUNs are 
provisioned or removed under VxVM, while applications are on-line.
 
1. The DMP device naming scheme is EBN (enclosure based naming) and 
persistence=no
2. The DMP device is configured with a single path, or the devices are 
controlled by a Third Party Multipathing Driver (e.g. MPXIO, MPIO etc.)
 
The names of the VxVM devices (DA records) may change when LUNs are removed or 
added and either of the following commands is then run, since persistent naming 
is turned off.
 
(a) vxdctl enable
(b) vxdisk scandisks
 
Execution of the above commands discovers all the devices and rebuilds the 
device attribute list with new DMP device names. The VxVM device records are 
then updated with these new attributes. Due to a bug in the code, the VxVM 
device records are mapped to wrong DMP devices. 
 
Example:
 
The following are the devices before adding new LUNs:
 
sun6130_0_18 auto:cdsdisk    disk_0       prod_SC32    online nohotuse
sun6130_0_19 auto:cdsdisk    disk_1       prod_SC32    online nohotuse
 
The following are the devices after adding new LUNs:
 
sun6130_0_18 auto            -            -            nolabel
sun6130_0_19 auto            -            -            nolabel
sun6130_0_20 auto:cdsdisk    disk_0       prod_SC32    online nohotuse
sun6130_0_21 auto:cdsdisk    disk_1       prod_SC32    online nohotuse
 
The name of the VxVM device sun6130_0_18 is changed to sun6130_0_20.

RESOLUTION:
The code that updates the VxVM device records is rectified.

* 2353325 (Tracking ID: 1791397)

SYMPTOM:
Replication does not start if an rlink detach and attach is done just after an
SRL overflow.

DESCRIPTION:
When the SRL overflows, VVR starts flushing writes from the SRL to the DCM (Data
Change Map). If the rlink is detached before the complete SRL is flushed to the
DCM, it leaves the rlink in the SRL flushing state. Due to the flushing state of
the rlink, attaching the rlink again does not start replication. The problem is
in the way the rlink flushing state is interpreted.

RESOLUTION:
To fix this issue, the logic was changed to correctly interpret the rlink flushing state.

* 2353327 (Tracking ID: 2179259)

SYMPTOM:
When using disks of size > 2TB, and the disk encounters a media error with offset >
2TB while the disk responds to SCSI inquiry, data corruption can occur in case of
a write operation.

DESCRIPTION:
The I/O retry logic in DMP assumes that the I/O offset is within the 2TB limit;
hence when using disks of size > 2TB, if the disk encounters a media error with
offset > 2TB while the disk responds to SCSI inquiry, the I/O would be issued at a
wrong offset within the 2TB range, causing data corruption in case of write I/Os.

RESOLUTION:
The fix for this issue is to change the I/O retry mechanism to work for >2TB
offsets as well, so that no offset truncation happens that could lead to data
corruption.
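
The bug pattern and the fix can be shown in a few lines of C; names are
illustrative:

    typedef unsigned long long u64;

    extern void issue_retry_io(u64 offset);   /* placeholder */

    void dmp_retry_io(u64 start_blk)
    {
        /* Bug pattern: a 32-bit offset wraps for blocks >= 2^32
         * (2TB with 512-byte sectors), so the retry lands at the
         * wrong block:   unsigned int off = (unsigned int)start_blk;
         * Fix: carry the offset as a full 64-bit quantity. */
        u64 off = start_blk;
        issue_retry_io(off);
    }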

* 2353328 (Tracking ID: 2194685)

SYMPTOM:
vxconfigd dumps core in scenario where array side ports are disabled/enabled in
loop for some iterations. 

gdb) where
#0  0x081ca70b in ddl_delete_node ()
#1  0x081cae67 in ddl_check_migration_of_devices ()
#2  0x081d0512 in ddl_reconfigure_all ()
#3  0x0819b6d5 in ddl_find_devices_in_system ()
#4  0x0813c570 in find_devices_in_system ()
#5  0x0813c7da in mode_set ()
#6  0x0807f0ca in setup_mode ()
#7  0x0807fa5d in startup ()
#8  0x08080da6 in main ()

DESCRIPTION:
Due to disabling the array side ports, the secondary paths get removed. But the
primary paths reuse the devnos of the removed secondary paths, which is not
correctly handled in the current migration code. Due to this, the DMP database
gets corrupted and subsequent discoveries lead to a configd core dump.

RESOLUTION:
The issue is due to incorrect setting of a DMP flag.
The flag setting has been fixed to prevent the DMP database from corruption in
the mentioned scenario.

* 2353403 (Tracking ID: 2337694)

SYMPTOM:
"vxdisk -o thin list" displays size as 0 for thin luns of capacity greater than 
2 TB.

DESCRIPTION:
The SCSI READ CAPACITY ioctl is invoked to get the disk capacity. SCSI READ 
CAPACITY returns data in the extended data format if the disk capacity is 2 TB 
or greater. This extended data was parsed incorrectly while calculating the 
disk capacity.

RESOLUTION:
This issue has been resolved by properly parsing the extended data returned by 
the SCSI READ CAPACITY ioctl for disks of size 2 TB or greater.
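
For reference, disks of 2 TB or greater report their capacity through the SCSI
READ CAPACITY(16) parameter data: an 8-byte big-endian last-LBA field followed
by a 4-byte block size. A minimal parsing sketch (illustrative, not the VxVM
source):

    #include <stdint.h>

    static uint64_t be64(const unsigned char *p)
    {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | p[i];
        return v;
    }

    uint64_t capacity_bytes(const unsigned char buf[32])
    {
        uint64_t last_lba = be64(buf);               /* bytes 0-7 */
        uint32_t blksz = ((uint32_t)buf[8] << 24) |  /* bytes 8-11 */
                         ((uint32_t)buf[9] << 16) |
                         ((uint32_t)buf[10] << 8) |
                          (uint32_t)buf[11];
        return (last_lba + 1) * (uint64_t)blksz;
    }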

* 2353404 (Tracking ID: 2334757)

SYMPTOM:
Vxconfigd consumes a lot of memory when the DMP tunable
dmp_probe_idle_lun is set on. The 'pmap' command on the vxconfigd process shows
a continuously growing heap.

DESCRIPTION:
The DMP path restoration daemon probes idle LUNs (idle LUNs are VxVM disks on
which no I/O requests are scheduled) and generates notify events to vxconfigd.
Vxconfigd in turn sends the notification of these events to its clients. If, for
any reason, vxconfigd cannot deliver these events (because the client is busy
processing an earlier sent event), it keeps these events to itself. Because of
this slow consumption of events by its clients, the memory consumption of
vxconfigd grows.

RESOLUTION:
dmp_probe_idle_lun is set to off by default.

* 2353410 (Tracking ID: 2286559)

SYMPTOM:
System panics in DMP (Dynamic Multi Pathing) kernel module due to kernel heap 
corruption while DMP path failover is in progress.

Panic stack may look like:

vpanic
kmem_error+0x4b4()
gen_get_enabled_ctlrs+0xf4()
dmp_get_enabled_ctlrs+0xf4()
dmp_info_ioctl+0xc8()
dmpioctl+0x20()
dmp_get_enabled_cntrls+0xac()
vx_dmp_config_ioctl+0xe8()
quiescesio_start+0x3e0()
voliod_iohandle+0x30()
voliod_loop+0x24c()
thread_start+4()

DESCRIPTION:
During path failover in DMP, the routine gen_get_enabled_ctlrs() allocates 
memory proportional to the number of enabled paths. However, while releasing 
the memory, the routine may end up freeing more memory because of the change in 
number of enabled paths.

RESOLUTION:
Code changes have been made in the routines to free allocated memory only.

* 2353421 (Tracking ID: 2334534)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, a node (SLAVE) join to the cluster
gets stuck, leading to an unending join hang, unless the join operation is
stopped on the joining node (SLAVE) using the command '/opt/VRTS/bin/vxclustadm
stopnode'. While the CVM join is hung in user-land (also called vxconfigd-level
join), on the CVM MASTER node, vxconfigd (Volume Manager Configuration daemon)
does not respond to any VxVM command which communicates with the vxconfigd
process.

When the vxconfigd-level CVM join is hung in user-land, 'vxdctl -c mode' on the
joining node (SLAVE) displays an output such as:
 
     bash-3.00#  vxdctl -c mode
     mode: enabled: cluster active - SLAVE
     master: mtvat1000-c1d
     state: joining
     reconfig: vxconfigd in join

DESCRIPTION:
As part of a CVM node join to the cluster, every node in the cluster first
updates the current CVM membership information (which can be viewed using the
command '/opt/VRTS/bin/vxclustadm nidmap') in the kernel and then sends a signal
to vxconfigd in user land to use that membership in exchanging configuration
records among each other. Since each node receives the signal (SIGIO) from the
kernel independently, the joining node's (SLAVE) vxconfigd can be ahead of the
MASTER in its execution. Thus any request coming from the joining node (SLAVE)
is denied by the MASTER with the error 'VE_CLUSTER_NOJOINERS', i.e. the join
operation is not currently allowed (error number: 234), since the MASTER's
vxconfigd has not yet got the updated membership from the kernel. While
responding to the joining node (SLAVE) with the error 'VE_CLUSTER_NOJOINERS', if
there is any change in the current membership (change in CVM node ID) as part of
the node join, the MASTER node wrongly updates the internal data structure of
vxconfigd which is used to send responses to joining (SLAVE) nodes. Due to the
wrong update of the internal data structure, when the joining node later retries
its request, the response from the master is sent to a wrong node which does not
exist in the cluster, and no response is sent to the joining node. The joining
node (SLAVE) never gets the response from the MASTER for its request, hence the
CVM node join is not completed, leading to the cluster hang.

RESOLUTION:
The vxconfigd code is modified to handle the above mentioned scenario
effectively. vxconfigd on the MASTER node will process a connection request
coming from the joining node (SLAVE) only when the MASTER node gets the updated
CVM membership information from the kernel.

* 2353425 (Tracking ID: 2320917)

SYMPTOM:
vxconfigd, the VxVM configuration daemon dumps core and loses disk group 
configuration while invoking the following VxVM reconfiguration steps:

1)	Volumes which were created on thin reclaimable disks are deleted.
2)	Before the space of the deleted volumes is reclaimed, the disks (whose 
volumes are deleted) are removed from the DG with the 'vxdg rmdisk' command 
using the '-k' option.
3)	The disks are removed using the 'vxedit rm' command.
4)	New disks are added to the disk group using the 'vxdg adddisk' command.

The stack trace of the core dump is :
[
 0006f40c rec_lock3 + 330
 0006ea64 rec_lock2 + c
 0006ec48 rec_lock2 + 1f0
 0006e27c rec_lock + 28c
 00068d78 client_trans_start + 6e8
 00134d00 req_vol_trans + 1f8
 00127018 request_loop + adc
 000f4a7c main  + fb0
 0003fd40 _start + 108
]

DESCRIPTION:
When a volume is deleted from a disk group that uses thin reclaim LUNs, the 
subdisks are not removed immediately; rather, they are marked with a special 
flag. The reclamation happens at a scheduled time every day. The 'vxdefault' 
command can be invoked to list and modify the settings.

After the disk is removed from the disk group using the 'vxdg -k rmdisk' and 
'vxedit rm' commands, the subdisk records are still in the core database and 
they point to the disk media record which has been freed. When the next command 
is run to add another new disk to the disk group, vxconfigd dumps core when 
locking the disk media record which has already been freed.

The subsequent disk group deport and import commands erase all disk group 
configuration as it detects an invalid association between the subdisks and the 
removed disk.

RESOLUTION:
1)	The following message will be printed when 'vxdg rmdisk' is used to 
remove a disk that has reclaim-pending subdisks:

VxVM vxdg ERROR V-5-1-0 Disk <diskname> is used by one or more subdisks which
are pending to be reclaimed.
        Use "vxdisk reclaim <diskname>" to reclaim space used by these subdisks,
        and retry "vxdg rmdisk" command.
        Note: reclamation is irreversible.

2)	A check is added when using 'vxedit rm' to remove a disk. If the disk is 
in the removed state and has reclaim-pending subdisks, the following error 
message will be printed:

VxVM vxedit ERROR V-5-1-10127 deleting <diskname>:
        Record is associated

* 2353427 (Tracking ID: 2337353)

SYMPTOM:
The 'vxdmpadm include' command includes all the excluded devices along with 
the device given in the command.

Example:

# vxdmpadm exclude vxvm dmpnodename=emcpower25s2
# vxdmpadm exclude vxvm dmpnodename=emcpower24s2

# more /etc/vx/vxvm.exclude
exclude_all 0
paths
emcpower24c /dev/rdsk/emcpower24c emcpower25s2
emcpower10c /dev/rdsk/emcpower10c emcpower24s2
#
controllers
#
product
#
pathgroups
#

# vxdmpadm include vxvm dmpnodename=emcpower24s2

# more /etc/vx/vxvm.exclude
exclude_all 0
paths
#
controllers
#
product
#
pathgroups
#

DESCRIPTION:
When a dmpnode is excluded, an entry is made in the /etc/vx/vxvm.exclude file. 
This entry has to be removed when the dmpnode is included later. Due to a bug in 
the comparison of dmpnode device names, all the excluded devices are included.

RESOLUTION:
The bug in the code which compares the dmpnode device names is rectified.

* 2353464 (Tracking ID: 2322752)

SYMPTOM:
Duplicate device names are observed for NR (Not Ready) devices when vxconfigd 
is restarted (vxconfigd -k).

# vxdisk list 

emc0_0052    auto            -            -            error
emc0_0052    auto:cdsdisk    -            -            error
emc0_0053    auto            -            -            error
emc0_0053    auto:cdsdisk    -            -            error

DESCRIPTION:
During vxconfigd restart, disk access records are rebuilt in the vxconfigd 
database. As part of this process, IOs are issued on all the devices to read the 
disk private regions. The failure of these IOs on NR devices resulted in 
creating duplicate disk access records.

RESOLUTION:
The vxconfigd code is modified not to create duplicate disk access records.

* 2353922 (Tracking ID: 2300195)

SYMPTOM:
Uninitialization of VxVM cdsdisk of size greater than 1 TB fails on HP-UX/ia-64 
platform.

DESCRIPTION:
During the uninitialization of VxVM cdsdisk of size greater than 1 TB on HP-
UX/ia-64 platform, DMP (dynamic-multi-pathing) paths corresponding to the GPT 
partitions on the VxVM cdsdisk were created incorrectly.  This resulted in i/o 
failure while destroying the VxVM cdsdisk.

RESOLUTION:
This issue has been solved by performing the i/o to the whole device while 
uninitializing the VxVM cdsdisk.

* 2357579 (Tracking ID: 2357507)

SYMPTOM:
The machine can panic while detecting unstable paths, with the following stack
trace:

#0  crash_nmi_callback 
#1  do_nmi 
#2  nmi 
#3  schedule 
#4  __down 
#5  __wake_up 
#6  .text.lock.kernel_lock 
#7  thread_return 
#8  printk 
#9  dmp_notify_event 
#10 dmp_restore_node

DESCRIPTION:
After detecting unstable paths, the restore daemon allocates memory to report
the event to userland daemons like vxconfigd. While requesting the memory
allocation, the restore daemon did not drop the spin lock, resulting in the
machine panic.

RESOLUTION:
The code is fixed so that spin locks are not held while requesting memory
allocation in the restore daemon.
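
The fix pattern, shown here as a compilable userland sketch with POSIX spin
locks (the kernel code differs; all names are illustrative):

    #include <pthread.h>
    #include <stdlib.h>

    struct event { int type; };                 /* illustrative payload */
    struct dmpnode {
        pthread_spinlock_t lock;
        struct event *pending;                  /* illustrative queue slot */
    };

    /* Never hold a spin lock across an allocation that may block:
     * detect under the lock, drop it, allocate, then re-take the
     * lock to publish the event. */
    void restore_report_event(struct dmpnode *node)
    {
        pthread_spin_lock(&node->lock);
        /* ... detect the unstable path under the lock ... */
        pthread_spin_unlock(&node->lock);       /* drop the lock first */

        struct event *ev = malloc(sizeof(*ev)); /* may block safely now */

        pthread_spin_lock(&node->lock);
        node->pending = ev;                     /* re-take lock to publish */
        pthread_spin_unlock(&node->lock);
    }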

* 2357820 (Tracking ID: 2357798)

SYMPTOM:
VVR leaks memory due to an unfreed vol_ru_update structure. The memory leak is
very small, but it can accumulate to a large value if VVR is running for many
days.

DESCRIPTION:
VVR allocates an update structure for each write. If replication is up-to-date,
the next incoming write also creates a multi-update and adds it to the VVR
replication queue. While creating the multi-update, VVR wrongly marked the
original update with a flag which indicates that the update is in the
replication queue, even though it was never added (nor required to be added) to
the replication queue. When the update free routine is called, it checks whether
the flag is marked; if so, it does not free the update, assuming that the update
is still in the replication queue and will be freed when it is removed from the
queue. Since the update was never in the queue, it is never freed, leaking the
memory. The leak happens only for the first write after each time the rlink
becomes up-to-date, which is why it takes many days to leak a large amount of
memory.

RESOLUTION:
The marking of the flag for some updates was causing this memory leak; the flag
marking is not required, as the update is not added to the replication queue.
The fix is to remove the marking and checking of the flag.

* 2360415 (Tracking ID: 2242268)

SYMPTOM:
An agenode which was already freed was accessed, which led to the panic.
The panic stack looks like:

[0674CE30]voldrl_unlog+0001F0 (F100000070D40D08, F10001100A14B000,
   F1000815B002B8D0, 0000000000000000)
[06778490]vol_mv_write_done+000AD0 (F100000070D40D08, F1000815B002B8D0)
[065AC364]volkcontext_process+0000E4 (F1000815B002B8D0)
[066BD358]voldiskiodone+0009D8 (F10000062026C808)
[06594A00]voldmp_iodone+000040 (F10000062026C808)

DESCRIPTION:
The panic happened because a memory location which was already freed was accessed.

RESOLUTION:
The data structure is skipped for further processing when its memory has 
already been freed.

* 2360419 (Tracking ID: 2237089)

SYMPTOM:
vxrecover failed to recover the data volumes with an associated cache volume.

DESCRIPTION:
vxrecover does not wait until the recovery of the cache volumes is complete 
before triggering the recovery of the data volumes that are created on top of a 
cache volume. Due to this, the recovery might fail for the data volumes.

RESOLUTION:
Code changes are done to serialize the recovery for different volume types.

* 2360719 (Tracking ID: 2359814)

SYMPTOM:
1. vxconfigbackup(1M) command fails with the following error:
ERROR V-5-2-3720 dgid mismatch

2. "-f" option for the vxconfigbackup(1M) is not documented in the man page.

DESCRIPTION:
1. In some cases, a *.dginfo file will have two lines starting with
"dgid:", which causes vxconfigbackup to fail.
The output from the previous awk command returns 2 lines instead of one for the
$bkdgid variable, and the comparison fails, resulting in the "dgid mismatch"
error even when the dgids are the same.
This happens if the temp dginfo file was not removed during the last run of
vxconfigbackup (for example, because the script was interrupted), since the 
temp dginfo file is updated in append mode:

vxconfigbackup.sh:

   echo "TIMESTAMP" >> $DGINFO_F_TEMP 2>/dev/null

Therefore, 2 or more dginfo entries may be added to the dginfo file, which 
causes the config backup to fail with a dgid mismatch.

2. "-f" option to force a backup is not documented in the man page of
vxconfigbackup(1M).

RESOLUTION:
1. The solution is to change the append mode to overwrite mode.

2. Updated the vxconfigbackup(1M) man page with the "-f" option.
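
With the fix, the temp file is overwritten rather than appended to, so the 
quoted line takes a form along these lines (illustrative):

   echo "TIMESTAMP" > $DGINFO_F_TEMP 2>/dev/null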

* 2364700 (Tracking ID: 2364253)

SYMPTOM:
In case of Space Optimized snapshots at secondary site, VVR leaks kernel memory.

DESCRIPTION:
In case of Space Optimized snapshots at secondary site, VVR proactively starts
the copy-on-write on the snapshot volume. The I/O buffer allocated for this
proactive copy-on-write was not freed even after I/Os are completed which lead
to the memory leak.

RESOLUTION:
After the proactive copy-on-write is complete, memory allocated for the I/O
buffers is released.

* 2377317 (Tracking ID: 2408771)

SYMPTOM:
VxVM does not show all the discovered devices. The number of devices shown
by VxVM is less than the number shown by the OS.

DESCRIPTION:
For every lunpath device discovered, VxVM creates a data structure and stores it
in a hash table. The hash value is computed based on the unique minor number of
the lunpath. If the minor number exceeds 831231, we encounter an integer
overflow and store the data structure for this path at a wrong location. When we
later traverse this hash list, we limit the accesses based on the total number
of discovered paths, and as the devices with minor numbers greater than 831231
are hashed wrongly, we do not create DA records for such devices.

RESOLUTION:
Integer overflow problem has been resolved by appropriately typecasting
the minor number and hence correct hash value is computed.
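
The overflow pattern and the typecast fix can be illustrated in C; the bucket
count and multiplier here are hypothetical, not the actual hash parameters:

    #define DMP_HASH_BUCKETS 2048
    #define HASH_MULT        2584   /* hypothetical multiplier */

    /* Bug pattern: minor_num * HASH_MULT in 32-bit signed arithmetic
     * wraps past INT_MAX for large minors (the incident cites minors
     * above 831231), yielding a wrong bucket. Widening before the
     * multiply keeps the hash correct. */
    unsigned int hash_minor(unsigned int minor_num)
    {
        unsigned long long v = (unsigned long long)minor_num * HASH_MULT;
        return (unsigned int)(v % DMP_HASH_BUCKETS);
    }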

* 2379034 (Tracking ID: 2379029)

SYMPTOM:
Changing the enclosure name did not work for all devices in the enclosure. The
affected devices were present in /etc/vx/darecs.

# cat /etc/vx/darecs
ibm_ds8x000_02eb        auto    online 
format=cdsdisk, privoffset=256, pubslice=2, privslice=2
ibm_ds8x000_02ec        auto    online 
format=cdsdisk, privoffset=256, pubslice=2, privslice=2
# vxdmpadm setattr enclosure ibm_ds8x000 name=new_ibm_ds8x000
# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
ibm_ds8x000_02eb auto:cdsdisk    ibm_ds8x000_02eb  mydg         online
ibm_ds8x000_02ec auto:cdsdisk    ibm_ds8x000_02ec  mydg         online
new_ibm_ds8x000_02eb auto            -            -            error
new_ibm_ds8x000_02ec auto            -            -            error

DESCRIPTION:
/etc/vx/darecs stores only foreign, nopriv or simple devices;
auto devices should NOT be written into this file.
A DA record is flushed to /etc/vx/darecs at the end of a transaction if the
R_NOSTORE flag is NOT set on the DA record. There was a bug in VM where, if we
initialize a disk that does not exist in the da_list (e.g. removed using vxdisk
rm), the R_NOSTORE flag is NOT set for the newly created DA record. Hence
duplicate entries for these devices were created, which resulted in these DAs
going into the error state.

RESOLUTION:
The source has been modified to add the R_NOSTORE flag for auto-type DA records
created by auto_init() or auto_define().

# vxdmpadm setattr enclosure ibm_ds8x000 name=new_ibm_ds8x000
# vxdisk -o alldgs list
new_ibm_ds8x000_02eb auto:cdsdisk    ibm_ds8x000_02eb  mydg         online
new_ibm_ds8x000_02ec auto:cdsdisk    ibm_ds8x000_02ec  mydg         online

* 2382705 (Tracking ID: 1675599)

SYMPTOM:
The vxconfigd daemon leaks memory while excluding and 
including a Third party Driver-controlled Logical Unit 
Number (LUN) in a loop. As a part of this, vxconfigd loses
its license information and the following error is seen 
in the system log:
"License has expired or is not available for operation"

DESCRIPTION:
In the vxconfigd code, the memory allocated for various 
data structures related to device discovery layer is not 
freed, resulting in the memory leak.

RESOLUTION:
The code is modified so that the memory is released after 
its scope is over.

* 2382710 (Tracking ID: 2139179)

SYMPTOM:
DG import can fail with an SSB (Serial Split Brain) error even though no SSB condition exists.

DESCRIPTION:
An association between DM and DA records is made while importing any DG if the 
SSB ids of the DM and DA records match. On a system with stale cloned disks, an 
attempt was made to associate the DM with the cloned DA, where the SSB id 
mismatch is observed, resulting in import failure with an SSB mismatch.

RESOLUTION:
The selection of DA to associate with DM is rectified to resolve the issue.

* 2382714 (Tracking ID: 2154287)

SYMPTOM:
In the presence of "Not-Ready" devices, where the SCSI inquiry on the device
succeeds but open or read/write operations fail, paths to such devices are
continuously marked as ENABLED and DISABLED on every DMP restore task cycle.

DESCRIPTION:
The issue is that the DMP restore task finds these paths connected and hence
enables them for I/O, but soon finds that they cannot be used for I/O and
disables them.

RESOLUTION:
The fix is to not enable the path unless it is found to be connected and available
to open and issue I/O.

* 2382717 (Tracking ID: 2197254)

SYMPTOM:
vxassist, the VxVM volume creation utility, does not function as expected when
creating a volume with 'logtype=none'.

DESCRIPTION:
While creating volumes on thinrclm disks, a Data Change Object (DCO) version 20
log is attached to every volume by default. If the user does not want this
default behavior, then the 'logtype=none' option can be specified as a parameter
to the vxassist command. But with VxVM on HP 11.31, this option did not work and
a DCO version 20 log was created by default. The reason for this inconsistency
is that when the 'logtype=none' option is specified, the utility sets a flag to
prevent creation of the log; however, VxVM was not checking whether the flag is
set before creating the DCO log, which led to this issue.

RESOLUTION:
This is a logical issue which is addressed by a code fix. The solution is to
check the corresponding flag for 'logtype=none' before creating the DCO version
20 log by default.

* 2383705 (Tracking ID: 2204752)

SYMPTOM:
The following message is observed after the diskgroup creation:
"VxVM ERROR V-5-3-12240: GPT entries checksum mismatch"

DESCRIPTION:
This message is observed with a disk which was initialized as cds_efi and later
re-initialized as hpdisk. The harmless "checksum mismatch" message is printed
even when the diskgroup initialization is successful.

RESOLUTION:
The harmless message "GPT entries checksum mismatch" is removed.

* 2384473 (Tracking ID: 2064490)

SYMPTOM:
The vxcdsconvert utility fails if the disk capacity is greater than or equal to 1 TB.

DESCRIPTION:
VxVM cdsdisk uses the GPT layout if the disk capacity is greater than or equal 
to 1 TB, and uses the VTOC layout if the disk capacity is less than 1 TB. The 
vxcdsconvert utility was not able to convert to the GPT layout if the disk 
capacity was greater than or equal to 1 TB.

RESOLUTION:
This issue has been resolved by converting to the proper cdsdisk layout 
depending on the disk capacity.
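
The corrected selection logic amounts to choosing the label format from the
capacity; a sketch under these assumptions (names are illustrative):

    /* 1 TB expressed in 512-byte sectors: 2^31 sectors */
    #define ONE_TB_SECTORS ((unsigned long long)1 << 31)

    enum cds_layout { CDS_VTOC, CDS_GPT };

    enum cds_layout choose_layout(unsigned long long capacity_sectors)
    {
        return capacity_sectors >= ONE_TB_SECTORS ? CDS_GPT : CDS_VTOC;
    }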

* 2384844 (Tracking ID: 2356744)

SYMPTOM:
When 'vxvm-recover' is executed manually, duplicate instances of the
Veritas Volume Manager (VxVM) daemons (vxattachd, vxcached, vxrelocd, vxvvrsecdgd
and vxconfigbackupd) are invoked.
When the user tries to kill any of the daemons manually, the other instances of
the daemons are left on the system.

DESCRIPTION:
The Veritas Volume Manager (VxVM) daemons (vxattachd, vxcached, vxrelocd,
vxvvrsecdgd and vxconfigbackupd) do not have:

  1. A check for duplicate instances,
  and
  2. A mechanism to clean up stale processes.

Because of this, when the user executes the startup script (vxvm-recover), all
daemons are invoked again, and if the user kills any of the daemons manually,
the other instances of the daemons are left on the system.

RESOLUTION:
The VxVM daemons are modified to do the "duplicate instance check" and "stale
process cleanup" appropriately.

* 2386763 (Tracking ID: 2346470)

SYMPTOM:
The Dynamic Multi Pathing Administration operations such as "vxdmpadm 
exclude vxvm dmpnodename=<daname>" and "vxdmpadm include vxvm dmpnodename=
<daname>" triggers memory leaks in the heap segment of VxVM Configuration Daemon 
(vxconfigd).

DESCRIPTION:
vxconfigd allocates chunks of memory to store VxVM specific information 
of the disk being included during "vxdmpadm include vxvm dmpnodename=<daname>" 
operation. The allocated memory is not freed while excluding the same disk from 
VxVM control. Also when excluding a disk from VxVM control, another chunk of 
memory is temporarily allocated by vxconfigd to store more details of the device 
being excluded. However this memory is not freed at the end of exclude 
operation.

RESOLUTION:
Memory allocated during include operation of a disk is freed during 
corresponding exclude operation of the disk. Also temporary memory allocated 
during exclude operation of a disk is freed at the end of exclude operation.

* 2389095 (Tracking ID: 2387993)

SYMPTOM:
In the presence of NR (Not-Ready) devices, vxconfigd (the VxVM configuration
daemon) goes into disabled mode once restarted.

	# vxconfigd -k -x syslog
	# vxdctl mode
	mode: disabled

	If vxconfigd is restarted in debug mode at level 9, the following message 
        could be seen:

	# vxconfigd -k -x 9 -x syslog

	VxVM vxconfigd DEBUG  V-5-1-8856 DA_RECOVER() failed, thread 87: Kernel 
        and on-disk configurations don't match

DESCRIPTION:
When vxconfid is restarted, all the VxVM devices are recovered. As part
        of recovery the capacity of the device is read, which can fail with EIO.
        This error is not handled properly. As a result of this the vxconfigd is  
        going to DISABLED state.

RESOLUTION:
The EIO error code from the read capacity ioctl is now handled specifically.

* 2390804 (Tracking ID: 2249113)

SYMPTOM:
VVR volume recovery hangs in a dead loop in the vol_ru_recover_primlog_done()
function.

DESCRIPTION:
During SRL recovery, the SRL is read to apply the updates to the data volumes.
There can be holes in the SRL where some writes did not complete properly. These
holes must be skipped; each such region is read as a dummy update and sent to
the secondary. If the dummy update size is larger than max_write (>256k), the
code logic goes into a dead loop, reading the same dummy update forever.

RESOLUTION:
Large holes which are greater than the VVR MAX_WRITE are now handled.
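
The fix idea is to consume an oversized hole in MAX_WRITE-sized pieces so the
read offset always advances; a sketch with illustrative names:

    #define VVR_MAX_WRITE (256 * 1024)

    extern void send_dummy_update(unsigned long long off,
                                  unsigned long long len);  /* placeholder */

    void recover_hole(unsigned long long hole_start,
                      unsigned long long hole_len)
    {
        while (hole_len > 0) {
            unsigned long long chunk =
                hole_len > VVR_MAX_WRITE ? VVR_MAX_WRITE : hole_len;
            send_dummy_update(hole_start, chunk);
            hole_start += chunk;          /* always advances: no dead loop */
            hole_len   -= chunk;
        }
    }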

* 2390815 (Tracking ID: 2383158)

SYMPTOM:
A panic occurs in vol_rv_mdship_srv_done() because the SIO is freed and has an
invalid node pointer.

DESCRIPTION:
vol_rv_mdship_srv_done() panics while referencing wrsio->wrsrv_node, as
wrsrv_node holds an invalid pointer. It is also observed that the wrsio is freed
or allocated for a different SIO. Looking closely, vol_rv_check_wrswaitq() is
called at every done of an SIO; it looks into the waitq and releases every SIO
which has the RV_WRSHIP_SRV_SIO_FLAG_LOGEND_DONE flag set on it. In
vol_rv_mdship_srv_done(), we set this flag and then do more operations on wrsrv.
During this time, another SIO which has completed with DONE calls
vol_rv_check_wrswaitq(), which deletes its own SIO and any other SIO which has
the RV_WRSHIP_SRV_SIO_FLAG_LOGEND_DONE flag set. This leads to deleting an SIO
which is still in flight, causing the panic.

RESOLUTION:
The flag must be set just before calling the function vol_rv_mdship_srv_done(),
and at the end of SIOdone(), to prevent other SIOs from racing and deleting the
currently running one.

* 2390822 (Tracking ID: 2369786)

SYMPTOM:
On a VVR Secondary cluster, if the SRL disk goes bad, then vxconfigd may hang in
the transaction code path.

DESCRIPTION:
If any error is seen in VVR shared disk group environments, error handling is
done cluster wide. On the VVR Secondary, if the SRL disk goes bad due to some
temporary or actual disk failure, it starts cluster-wide error handling. Error
handling requires serialization; in some cases serialization was not done, which
caused the error handling to go into a dead loop, hence the hang.

RESOLUTION:
Making sure we always serialize the I/O during error handling on VVR Secondary
resolved this issue.

* 2397663 (Tracking ID: 2165394)

SYMPTOM:
If a cloned copy of a diskgroup and a destroyed diskgroup exist on the system, 
an import operation imports the destroyed diskgroup instead of the cloned one.
For example, consider a system with diskgroup dg containing disk disk01. Disk 
disk01 is cloned to disk02. When diskgroup dg containing disk01 is destroyed and 
diskgroup dg is imported, VxVM should import dg with the cloned disk, i.e. 
disk02. However, it imports the diskgroup dg with disk01.

DESCRIPTION:
After destroying a diskgroup, if a cloned copy of the same diskgroup exists on 
the system, the following disk group import operation wrongly identifies the 
disks to be imported, and hence the destroyed diskgroup gets imported.

RESOLUTION:
The diskgroup import code is modified to identify the correct diskgroup when a 
cloned copy of the destroyed diskgroup exists.

* 2405446 (Tracking ID: 2253970)

SYMPTOM:
Enhancement to customize private region I/O size based on maximum transfer size 
of underlying disk.

DESCRIPTION:
There are different types of array controllers which support data transfer 
sizes starting from 256K and beyond. The VxVM tunable volmax_specialio controls 
vxconfigd's configuration I/O size as well as the Atomic Copy I/O size. When 
volmax_specialio is tuned to a value greater than 1MB to leverage the maximum 
transfer sizes of the underlying disks, the import operation fails for disks 
which cannot accept more than a 256K I/O size. If the tunable is set to 256K, 
then the large transfer sizes of the disks are not leveraged.

RESOLUTION:
This enhancement leverages large disk transfer sizes as well as supports Array 
controllers with 256K transfer sizes.

* 2408209 (Tracking ID: 2291226)

SYMPTOM:
Data corruption can be observed on a CDS (Cross-platform Data Sharing) disk, 
whose capacity is more than 1 TB. The following pattern would be found in the 
data region of the disk.

<DISK-IDENTIFICATION> cyl <number-of-cylinders> alt 2 hd <number-of-tracks> sec 
<number-of-sectors-per-track>

DESCRIPTION:
The CDS disk maintains a SUN vtoc in the zeroth block of the disk. This VTOC 
maintains the disk geometry information like number of cylinders, tracks and 
sectors per track. These values are limited by a maximum of 65535 by design of 
SUN's vtoc, which limits the disk capacity to 1TB. As per SUN's requirement, a 
few backup VTOC labels have to be maintained on the last track of the disk.

VxVM 5.0 MP3 RP3 allows a CDS disk to be set up on a disk with capacity of more 
than 1TB. The data region of the CDS disk would span more than 1TB, utilizing 
all the accessible cylinders of the disk. As mentioned above, the VTOC labels 
would be written at the zeroth block and on the last track, considering the disk 
capacity as 1TB. The backup labels would fall into the data region of the CDS 
disk, causing the data corruption.

RESOLUTION:
Writing of the backup labels is suppressed to prevent the data corruption.

* 2408864 (Tracking ID: 2346376)

SYMPTOM:
Some DMP IO statistics records were lost from the per-CPU IO stats queue. 
Hence, the DMP IO stat reporting CLI was displaying incorrect data.

DESCRIPTION:
The DMP IO statistics daemon has two buffers for maintaining IO statistics 
records. One of the buffers is active and is updated on every I/O completion, 
while the other, shadow, buffer is read by the IO statistics daemon. The central 
IO statistics table is updated every IO statistics interval from the records in 
the active buffer. The problem occurs because the swapping of the two buffers 
can happen from two contexts, IO throttling and IO statistics collection. IO 
throttling swaps the buffers but does not update the central IO statistics 
table. So all IO records in the active buffer are lost when the two buffers are 
swapped in the throttling context.

RESOLUTION:
As a fix, the records in the active buffer are now accounted to the central IO 
statistics table before the buffers are swapped from the throttling context, so 
that no IO statistics records are lost.

* 2409212 (Tracking ID: 2316550)

SYMPTOM:
While doing a cold/ignite install to 11.31 + VxVM 5.1, the following warning
messages are seen on a setup with an ALUA array:

"VxVM vxconfigd WARNING V-5-1-0 ddl_add_disk_instr: Turning off NMP Alua mode
failed for dmpnode 0xffffffff with ret = 13 "

DESCRIPTION:
The above warning messages are displayed by the vxconfigd started at early boot
if it fails to turn off the NMP ALUA mode for a given DMP device. These messages
are harmless, as the vxconfigd later started in enabled mode will turn off the
NMP ALUA mode for all the DMP devices.

RESOLUTION:
Changes are done in vxconfigd so that these warning messages are not printed in
vxconfigd boot mode.

* 2411052 (Tracking ID: 2268408)

SYMPTOM:
1) On suppressing the underlying path of a powerpath-controlled device, the disk 
goes into the error state. 2) The "vxdmpadm exclude vxvm dmpnodename=<emcpower#>" 
command does not suppress TPD devices.

DESCRIPTION:
During discovery, the H/W path corresponding to the basename is not generated 
for powerpath-controlled devices because the basename does not contain the slice 
portion. A device name with the s2 slice is expected while generating the H/W 
name.

RESOLUTION:
The whole disk name, i.e. the device name with the s2 slice, is used to generate the H/W path.

* 2411053 (Tracking ID: 2410845)

SYMPTOM:
If a DG (Disk Group) is imported with a reservation key, then during DG deport
lots of 'reservation conflict' messages will be seen.
                
    [DATE TIME] [HOSTNAME] multipathd: VxVM26000: add path (uevent)
    [DATE TIME] [HOSTNAME] multipathd: VxVM26000: failed to store path info
    [DATE TIME] [HOSTNAME] multipathd: uevent trigger error
    [DATE TIME] [HOSTNAME] multipathd: VxVM26001: add path (uevent)
    [DATE TIME] [HOSTNAME] multipathd: VxVM26001: failed to store path info
    [DATE TIME] [HOSTNAME] multipathd: uevent trigger error
    ..
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:2: reservation conflict
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:2: reservation conflict
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:1: reservation conflict
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:2: reservation conflict
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:1: reservation conflict
    [DATE TIME] [HOSTNAME] kernel: sd 2:0:0:2: reservation conflict

DESCRIPTION:
When removing a PGR (Persistent Group Reservation) key during DG deport, the
key needs to be preempted, but the preempt operation fails with a reservation
conflict error because the key passed for preemption is not correct.

RESOLUTION:
Code changes are made to set the correct key value for the preemption 
operation.

* 2413077 (Tracking ID: 2385680)

SYMPTOM:
The vol_rv_async_childdone() panic occurred because of a corrupted pripendingq.

DESCRIPTION:
The pripendingq is always corrupted in this panic. The head entry is freed from 
the queue but not removed. In the mdship_srv_done code, for the error condition, 
the update is removed from the pripendingq only if the next or prev pointers of 
the updateq are non-null. This leads to the head pointer not getting removed in 
the abort scenario, causing the free to happen without deleting it from the 
queue.

RESOLUTION:
The prev and next checks are removed in all the places. The abort case is also 
handled carefully for the following conditions:

1) Abort of the logendq due to a slave node panic, i.e. the update entry exists 
but the update is not removed from the pripendingq.

2) vol_kmsg_eagain type of failures, i.e. the update exists, but it is removed 
from the pripendingq.

3) Abort very early in mdship_sio_start(), i.e. the update is allocated but not 
in the pripendingq.

* 2413908 (Tracking ID: 2413904)

SYMPTOM:
Performing Dynamic LUN reconfiguration operations (adding and removing LUNs)
can cause corruption in the DMP database. This in turn may lead to a vxconfigd
core dump or a system panic.

DESCRIPTION:
When a LUN is removed from VM using 'vxdisk rm' and at the same time some new
LUN is added, and the newly added LUN reuses the devno of the removed LUN, this
may corrupt the DMP database, as this condition is not handled currently.

RESOLUTION:
Fixed the DMP code to handle the mentioned issue.

* 2415566 (Tracking ID: 2369177)

SYMPTOM:
When using > 2TB disks, if the device responds to SCSI inquiry but fails to
service I/O, data corruption can occur, as the write I/O would be directed at an
incorrect offset.

DESCRIPTION:
When the failed I/O is retried, DMP assumes the offset to be a 32-bit value;
hence I/O offsets > 2TB can get truncated, leading to the retry I/O being issued
at a wrong offset value.

RESOLUTION:
Change the offset value to a 64 bit quantity to avoid truncation during I/O
retries from DMP.

* 2415577 (Tracking ID: 2193429)

SYMPTOM:
Enclosure attributes like iopolicy, recoveryoption etc. do not persist across
reboots when, before vold startup, the DMP driver is already configured with an
array type (e.g. in case of root support) different from the one stored in
array.info.

DESCRIPTION:
When the DMP driver is already configured before vold comes up (as happens with
root support), the enclosure attributes do not take effect if the enclosure name
in the kernel has changed from the previous boot cycle. This is because when
vold comes up, da_attr_list is NULL, and vold then gets events from the DMP
kernel for the data structures already present in the kernel. On receiving this
information, it tries to write da_attr_list into array.info, but since
da_attr_list is NULL, array.info gets overwritten with no data. Hence vold later
cannot correlate the enclosure attributes present in dmppolicy.info with the
enclosures present in array.info, so the persistent attributes cannot be
applied.

RESOLUTION:
Do not overwrite array.info if da_attr_list is NULL.

* 2417184 (Tracking ID: 2407192)

SYMPTOM:
Application I/O hangs on RVG volumes when the RVG logowner is being set on the
node which takes over the master role (either as part of 'vxclustadm setmaster'
or as part of the original master leaving).

DESCRIPTION:
Whenever a node takes over the master role, RVGs are recovered on the new
master. Because of a race between the RVG recovery thread (initiated as part of
the master takeover) and the thread which is changing the RVG logowner (run as
part of 'vxrvg set logowner=on'), the RVG recovery does not complete, which
leads to the I/O hang.

RESOLUTION:
The race condition is handled with appropriate locks and conditional variable.

* 2417205 (Tracking ID: 2407699)

SYMPTOM:
The vxassist command dumps core if the file "/etc/default/vxassist" contains the
line "wantmirror=<ctlr|target|...>"

DESCRIPTION:
vxassist, the Veritas Volume Manager client utility can accept attributes from
the system defaults file (/etc/default/vxassist), the user specified alternate
defaults file and the command line. vxassist automatically merges all the
attributes by a pre-defined priority. However, a null pointer check is missed
while merging the "wantmirror" attribute, which leads to the core dump.

RESOLUTION:
Within vxassist, while merging attributes, add a check for NULL pointer.

* 2421100 (Tracking ID: 2419348)

SYMPTOM:
System Panic with stack 

dmp_get_path_state()
do_passthru_ioctl()
dmp_passthru_ioctl()
dmpioctl()
fop_ioctl()
ioctl()
syscall_trap32()

DESCRIPTION:
This panic was because of a race condition between the vxconfigd process and the
vxdclid process. vxconfigd was doing reconfiguration and freed the dmpnode, and
this dmpnode was then accessed by the vxdclid thread, leading to the system
panic.

RESOLUTION:
The dmpnode is obtained from the global hash table in the vxdclid thread. This
hash table will always contain the dmpnode during execution of the vxdclid
thread.

* 2421491 (Tracking ID: 2396293)

SYMPTOM:
On VxVM rooted systems, during machine bootup, vxconfigd dumps core with the
following assert, and the machine does not boot up:
Assertion failed: (0), file auto_sys.c, line 1024
05/30 01:51:25:  VxVM vxconfigd ERROR V-5-1-0 IOT trap - core dumped

DESCRIPTION:
DMP deletes and regenerates device numbers dynamically on every boot. When the
static vxconfigd is started in boot mode, new DSFs for the DMP nodes are not
created, since the root file system is read-only. But DMP configures devices in
userland and kernel.
So there is a mismatch between the device numbers of the DSFs and those in the
DMP kernel, as stale DSFs from the previous boot are present.
This leads vxconfigd to send I/Os to wrong device numbers, resulting in claiming
disks with the wrong format.

RESOLUTION:
The issue is fixed by getting the device numbers from vxconfigd instead of
doing stat on the DMP DSFs.

* 2423086 (Tracking ID: 2033909)

SYMPTOM:
Disabling a controller of an A/P-G type array could lead to an I/O hang even 
when there are available paths for I/O.

DESCRIPTION:
DMP was not clearing a flag, in an internal DMP data structure,  to enable I/O 
to all the LUNs during group failover operation.

RESOLUTION:
DMP code modified to clear the appropriate flag for all the LUNs of the LUN 
group so that the failover can occur when a controller is disabled.

* 2428179 (Tracking ID: 2425722)

SYMPTOM:
VxVM's subdisk operation - vxsd mv <source_subdisk> <destination_subdisk> - 
fails on subdisk sizes greater than or equal to 2TB. 

Eg: 

#vxsd -g nbuapp mv disk_1-03 disk_2-03 

VxVM vxsd ERROR V-5-1-740 New subdisks have different size than subdisk disk_1-
03, use -o force

DESCRIPTION:
VxVM code uses a 32-bit unsigned integer variable to store the size of subdisks, 
which can only accommodate values less than 2TB. Thus, for larger subdisk sizes, 
the integer overflows, resulting in the subdisk move operation failure.

RESOLUTION:
The code has been modified to accommodate larger subdisk sizes.
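
The overflow is easy to demonstrate: a 2TB subdisk is 2^32 sectors of 512
bytes, which wraps to 0 in a 32-bit variable, so the compared sizes appear
different. A self-contained illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long len_sectors = 1ULL << 32;      /* 2TB at 512b */
        unsigned int stored = (unsigned int)len_sectors;  /* wraps to 0 */
        printf("actual=%llu stored=%u\n", len_sectors, stored);
        return 0;
    }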

* 2435050 (Tracking ID: 2421067)

SYMPTOM:
With VVR configured, 'vxconfigd' hangs on the primary site when trying to recover 
the SRL log, after a system or storage failure.

DESCRIPTION:
At the start of each SRL log disk we keep a config header. Part of this header 
includes a flag which is used by VVR to serialize the flushing of the SRL 
configuration table, to ensure only a single thread flushes the table at any one 
time.
In this instance, the 'VOLRV_SRLHDR_CONFIG_FLUSHING' flag was set in the config 
header, and then the config header was written to disk. At this point the storage 
became inaccessible.
During recovery the config header was read from disk, and when trying to 
initiate a new flush of the SRL table, the system hung, as the flag was already 
set, indicating that a flush was in progress.

RESOLUTION:
When loading the SRL header from disk, the flag 'VOLRV_SRLHDR_CONFIG_FLUSHING' is 
now cleared.
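
The shape of the fix, as a sketch (the structure layout and bit value are
assumed for illustration; only the flag name comes from this note):

    struct srl_hdr { unsigned int flags; /* ... */ };
    #define VOLRV_SRLHDR_CONFIG_FLUSHING 0x1   /* assumed bit value */

    extern void read_hdr_from_disk(struct srl_hdr *hdr);  /* placeholder */

    void srl_hdr_load(struct srl_hdr *hdr)
    {
        read_hdr_from_disk(hdr);
        /* A flush cannot be in progress across a crash or failover,
         * so any persisted in-progress flag is stale: clear it. */
        hdr->flags &= ~VOLRV_SRLHDR_CONFIG_FLUSHING;
    }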

* 2436283 (Tracking ID: 2425551)

SYMPTOM:
The CVM reconfiguration takes 1 minute for each RVG configuration.

DESCRIPTION:
Every RVG is given 1 minute to drain its I/O; if it is not drained, the code
waits for 1 minute before aborting the I/Os waiting in the logendq. The logic
is such that, for every RVG, it waits 1 minute for the I/Os to drain.

RESOLUTION:
It is enough to give an overall 1 minute for all RVGs, and abort all the RVGs 
after that 1 minute, instead of waiting 1 minute for each RVG.
The alternate (long term) solution is:

Abort the RVG immediately when objiocount(rv) == queue_count(logendq). This
will reduce the 1 minute delay further, down to the actual required time. In 
this, the following things have to be taken care of:
1. rusio may be active, which needs to be deducted from the iocount.
2. Every I/O goes into the logendq before getting serviced, so it has to be 
made sure that they are not in the process of being serviced.

* 2436287 (Tracking ID: 2428875)

SYMPTOM:
On a CVR configuration with I/O issued from both the master and the slave, a
reboot of the slave leads to a reconfiguration hang.

DESCRIPTION:
The I/Os on both the master and the slave fill up the SRL and go into DCM mode.
In DCM mode, the header flush to flush the DCM and the SRL header happens every
512 updates. Since most of the I/Os are from the SLAVE node, the I/Os throttled
due to the hdr_flush are queued in the mdship_throttle_q. This queue is flushed
at the end of the header flush. If the slave node is rebooted while SIOs are in
the throttle_q, the reconfig code path does not flush the mdship_throttleq and
waits for them to drain. This leads to the reconfiguration hang due to a
positive I/O count.

RESOLUTION:
Abort all the SIOs queued in the mdship_throttleq when a node is aborted, and
restart the SIOs for the nodes that did not leave.

* 2436288 (Tracking ID: 2411698)

SYMPTOM:
I/Os hang in CVR (Clustered Volume Replicator) environment.

DESCRIPTION:
In a CVR environment, when a CVM (Clustered Volume Manager) slave node sends a 
write request to the CVM master node, the following tasks occur:

1) Master grabs the *REGION LOCK* for the write and permits slave to issue the 
write.
2) When new IOs occur on the same region (before the write that acquired the 
*REGION LOCK* is complete), they wait in a *REGION LOCK QUEUE*.
3) Once the IO that acquired the *REGION LOCK* is serviced by slave node, it 
responds to the Master about the same, and Master processes the IOs queued in 
the *REGION LOCK QUEUE*.

The problem occurs when the slave node dies before sending the response to the 
Master about completion of the IO that held the *REGION LOCK*.

RESOLUTION:
Code changes have been made to accommodate the condition mentioned in the 
section "DESCRIPTION".

* 2440031 (Tracking ID: 2426274)

SYMPTOM:
In a Storage Foundation environment running Veritas File System (VxFS) and 
Volume Manager (VxVM), a system panic may occur with the following stack trace 
when I/O hints are being used. One such scenario is the use of Symantec 
Oracle Disk Manager (ODM):

  [<ffffffffa11268fc>] _volsio_mem_free+0x4c/0x270 [vxio]
  [<ffffffffa112c8a9>] vol_subdisksio_done+0x59/0x220 [vxio]
  [<ffffffffa10d2076>] volkcontext_process+0x346/0x9a0 [vxio]
  [<ffffffffa109b014>] voldiskiodone+0x764/0x850 [vxio]
  [<ffffffffa109b30a>] voldiskiodone_intr+0xfa/0x180 [vxio]
  [<ffffffffa108e954>] volsp_iodone_common+0x234/0x3e0 [vxio]
  [<ffffffff811b578b>] blk_update_request+0xbb/0x3e0
  [<ffffffff811b5acf>] blk_update_bidi_request+0x1f/0x70
  [<ffffffff811b68d7>] blk_end_bidi_request+0x27/0x80
  [<ffffffffa00257aa>] scsi_end_request+0x3a/0xc0 [scsi_mod]
  [<ffffffffa0025b79>] scsi_io_completion+0x109/0x4e0 [scsi_mod]
  [<ffffffff811bb73d>] blk_done_softirq+0x6d/0x80
  [<ffffffff810526ff>] __do_softirq+0xbf/0x170
  [<ffffffff810040bc>] call_softirq+0x1c/0x30
  [<ffffffff81005cfd>] do_softirq+0x4d/0x80
  [<ffffffff81052475>] irq_exit+0x85/0x90
  [<ffffffff8100525e>] do_IRQ+0x6e/0xe0
  [<ffffffff81003913>] ret_from_intr+0x0/0xa
  [<ffffffff8100ae02>] default_idle+0x32/0x40
  [<ffffffff8100206a>] cpu_idle+0x5a/0xb0
  [<ffffffff81778e65>] start_kernel+0x2ca/0x395
  [<ffffffff81778378>] x86_64_start_kernel+0xe1/0xf2

DESCRIPTION:
A single Volume Manager I/O (staged I/O), while doing 'done' processing, was 
trying to access the FS-VM private information data structure, which had been 
freed. This free also resulted in an assert which indicated a mismatch in the 
size of the I/O that was freed, thereby hitting a panic.

RESOLUTION:
The solution is to preserve the FS-VM private information data structure 
pertaining to the I/O till its last access. After that, it is freed to release 
that memory.

* 2440351 (Tracking ID: 2440349)

SYMPTOM:
The grow operation on a DCO volume may grow it into any 'site', not honoring 
the allocation requirements strictly.

DESCRIPTION:
When a DCO volume is grown, it may not honor the allocation specification 
strictly to use only a particular site, even though it is specified explicitly.

RESOLUTION:
The Data Change Object code of Volume Manager is modified such that it honors 
the allocation specification strictly, if provided explicitly.

* 2442850 (Tracking ID: 2317703)

SYMPTOM:
When the vxesd daemon is invoked by device attach and removal operations in a
loop, it leaves open file descriptors with the vxconfigd daemon.

DESCRIPTION:
The issue is caused by multiple vxesd daemon threads trying to establish contact
with the vxconfigd daemon at the same time and ending up losing track of the
file descriptor through which the communication channel was established.

RESOLUTION:
The fix for this issue is to maintain a single file descriptor with a
thread-safe reference counter, so that multiple communication channels are not
established between vxesd and vxconfigd by the various threads of vxesd.

* 2477291 (Tracking ID: 2428631)

SYMPTOM:
Shared DG import or Node Join fails with Hitachi Tagmastore storage

DESCRIPTION:
CVM uses a different fence key for every DG. The key format is of type
'NPGRSSSS', where N is the node id (A, B, C, ...) and 'SSSS' is the sequence
number. Some arrays have a restriction on the total number of unique keys that
can be registered (e.g. Hitachi Tagmastore), which causes issues for
configurations involving a large number of DGs, or rather a large product of
#DGs and #nodes in the cluster.

RESOLUTION:
Having a unique key for each DG is not essential. Hence a tunable is added to
control this behavior. 

# vxdefault list
KEYWORD                        CURRENT-VALUE   DEFAULT-VALUE
...
same_key_for_alldgs            off             off
...

Default value of the tunable is 'off' to preserve the current behavior. If a
configuration hits the storage array limit on total number of unique keys, the
tunable value could be changed to 'on'. 

# vxdefault set same_key_for_alldgs on
# vxdefault list
KEYWORD                        CURRENT-VALUE   DEFAULT-VALUE
...
same_key_for_alldgs            on              off
...

This makes CVM generate the same key for all subsequent DG imports and
creates. Already-imported DGs must be deported and re-imported for the changed
tunable value to take effect.
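
For example, for a shared disk group (disk group name hypothetical; shared DG
operations are run from the CVM master node):

# vxdg deport myshareddg
# vxdg -s import myshareddg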

* 2479746 (Tracking ID: 2406292)

SYMPTOM:
During I/O on volumes with multiple subdisks (for example, striped volumes),
the system panics with the following stack:

unix:panicsys+0x48()
unix:vpanic_common+0x78()
unix:panic+0x1c()
genunix:kmem_error+0x4b4()
vxio:vol_subdisksio_delete() - frame recycled
vxio:vol_plexsio_childdone+0x80()
vxio:volsiodone() - frame recycled
vxio:vol_subdisksio_done+0xe0()
vxio:volkcontext_process+0x118()
vxio:voldiskiodone+0x360()
vxio:voldmp_iodone+0xc()
genunix:biodone() - frame recycled
vxdmp:gendmpiodone+0x1ec()
ssd:ssd_return_command+0x240()
ssd:ssdintr+0x294()
fcp:ssfcp_cmd_callback() - frame recycled
qlc:ql_fast_fcp_post+0x184()
qlc:ql_status_entry+0x310()
qlc:ql_response_pkt+0x2bc()
qlc:ql_isr_aif+0x76c()
pcisch:pci_intr_wrapper+0xb8()
unix:intr_thread+0x168()
unix:ktl0+0x48()

DESCRIPTION:
On a striped volume, the I/O is split into multiple parts, one per subdisk in
the stripe. Each part of the I/O is processed in parallel by a different
thread, so any two threads processing the I/O completion can enter into a race
condition. In such a race, one of the threads can access a stale address,
causing the system panic.

RESOLUTION:
The critical section of code is modified to hold the appropriate locks to
avoid the race condition.

* 2480006 (Tracking ID: 2400654)

SYMPTOM:
The "vxdmpadm listenclosure" command hangs because of duplicate enclosure
entries in the /etc/vx/array.info file.

Example: 

Enclosure "emc_clariion0" has two entries.

#cat /etc/vx/array.info
DD4VM1S
emc_clariion0
0
EMC_CLARiiON
DISKS
disk
0
Disk
DD3VM2S
emc_clariion0
0
EMC_CLARiiON

DESCRIPTION:
When "vxdmpadm listenclosure" command is run, vxconfigd reads its in-core
enclosure list which is populated from the /etc/vx/array.info file. Since the
enclosure "emc_clariion0" (as mentioned in the example) is also a last entry
within the file, the command expects vxconfigd to return the enclosure
information at the last index of the enclosure list. However because of
duplicate enclosure entries, vxconfigd returns a different enclosure information
thereby leading to the hang.
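
A quick way to spot such duplicates, given the flat record layout shown above
(a minimal sketch; repeated field values such as "0" may be legitimate, so
check the enclosure-name lines in the output):

# sort /etc/vx/array.info | uniq -d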

RESOLUTION:
The code changes are made in vxconfigd to detect duplicate entries in
/etc/vx/array.info file and return the appropriate enclosure information as
requested by the vxdmpadm command.

* 2483476 (Tracking ID: 2491091)

SYMPTOM:
The vxdisksetup(1M) command fails on disks that have stale EFI information,
with the following error:

VxVM vxdisksetup ERROR V-5-2-4686 Disk <disk name> is currently an EFI formatted
                                  disk. Use -f option to force EFI removal.

DESCRIPTION:
The vxdisksetup(1M) command checks for EFI headers on the disk; if any are
found, it exits with the suggestion to use the "-f" option. The check does not
correctly distinguish stale EFI headers from valid ones, so the command exits
with the error even when the EFI information is stale.
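
Until a fixed vxdisksetup(1M) is in place, the error message's own suggestion
can be used to force removal of the stale EFI information (disk name
hypothetical; this destroys any remaining EFI data on the disk):

# vxdisksetup -f -i c2t0d0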

RESOLUTION:
The code is modified to check only for valid EFI headers.

* 2484466 (Tracking ID: 2480600)

SYMPTOM:
I/O of large sizes like 512k and 1024k hang in CVR (Clustered Volume 
Replicator).

DESCRIPTION:
When large I/Os (for example, 1 MB) are performed on volumes under an RVG
(Replicated Volume Group), only a limited number of I/Os can be accommodated
within the RVIOMEM pool limit, so the pool remains full for the majority of
the time. If a CVM (Clustered Volume Manager) slave is rebooted or goes down
at this point, the pending I/Os are aborted and the corresponding memory is
freed. In one code path the memory is not freed, leading to the hang.

RESOLUTION:
Code changes have been made to free the memory under all scenarios.

* 2484695 (Tracking ID: 2484685)

SYMPTOM:
In a Storage Foundation environment running Symantec Oracle Disk Manager (ODM),
Veritas File System (VxFS) and Volume Manager (VxVM), a system panic may occur
with the following stack trace:

  000002a10247a7a1 vpanic()
  000002a10247a851 kmem_error+0x4b4()
  000002a10247a921 vol_subdisksio_done+0xe0()
  000002a10247a9d1 volkcontext_process+0x118()
  000002a10247aaa1 voldiskiodone+0x360()
  000002a10247abb1 voldmp_iodone+0xc()
  000002a10247ac61 gendmpiodone+0x1ec()
  000002a10247ad11 ssd_return_command+0x240()
  000002a10247add1 ssdintr+0x294()
  000002a10247ae81 ql_fast_fcp_post+0x184()
  000002a10247af31 ql_24xx_status_entry+0x2c8()
  000002a10247afe1 ql_response_pkt+0x29c()
  000002a10247b091 ql_isr_aif+0x76c()
  000002a10247b181 px_msiq_intr+0x200()
  000002a10247b291 intr_thread+0x168()
  000002a10240b131 cpu_halt+0x174()
  000002a10240b1e1 idle+0xd4()
  000002a10240b291 thread_start+4()

DESCRIPTION:
A race condition exists between two I/Os (specifically, Volume Manager
subdisk-level staged I/Os) during 'done' processing, which causes one thread
to free the FS-VM private information data structure before the other thread
accesses it.

The likelihood of the race increases with the number of CPUs.

RESOLUTION:
The race condition is avoided so that the slower thread does not access the
freed FS-VM private information data structure.

* 2485230 (Tracking ID: 2481938)

SYMPTOM:
The vxdisk command displays an incorrect pubpath for an EFI partitioned disk
on the HP-UX 11.31 platform.
For example,
# vxdisk list cXtXdXs2
Device:    diskX_p2
devicetag: cXtXdXs2
..
pubpaths:  block=/dev/vx/dmp/cXtXdX char=/dev/vx/rdmp/cXtXdX
...

DESCRIPTION:
The VxVM configuration daemon (vxconfigd) maintains the information for all
the disks. On the HP platform, an EFI disk has the EFI_SDI_FLAG flag set. When
vxconfigd sets up an EFI partitioned disk, it first checks EFI_SDI_FLAG and
changes the disk access name according to the flag and the disk's original
value. However, in some cases vxconfigd still uses the original disk access
name rather than the new disk access name, which leads to the incorrect
pubpath value.

RESOLUTION:
vxconfigd is modified to use the new disk access name.

* 2485278 (Tracking ID: 2386120)

SYMPTOM:
Error messages printed in the syslog in the event of master takeover 
failure in some situations are not be enough to find out the root cause of the
failure.

DESCRIPTION:
If the new master encounters errors during master takeover, the takeover
operation fails. The code contains messages that log the reasons for the
failure, but these log messages are not available on customer setups; they are
generally enabled only in internal development/testing scenarios.

RESOLUTION:
Some of the relevant messages have been modified so that they are now
available on customer setups as well, logging crucial information for root
cause analysis of the issue.

* 2485288 (Tracking ID: 2431470)

SYMPTOM:
1. "vxpfto" command sets PFTO(Powerfail Timeout) value on a wrong VxVM device 
when it passes DM(Disk Media) name to the "vxdisk set" command with -g option.
2. "vxdisk set" command does not work when DA name is specified either -g 
option specified or not. 

  Ex.)
  # vxdisk set [DA name] clone=off 
  VxVM vxdisk ERROR V-5-1-5455 Operation requires a disk group 
  # vxdisk -g [DG name] set [DA name] clone=off 
  VxVM vxdisk ERROR V-5-1-0 Device [Da name] not in configuration or associated 
with DG [DG name]

DESCRIPTION:
1. "vxpfto" command invokes avxdisk seta command to set the PFTO value. It 
shall accept both DM and DA names for device specification. However DM and DA 
names can have conflicts such that even within the same disk group, the same 
name can refer to different devices - one as a DA name and another as a DM 
name. "vxpfto" command uses a DM name with -g option when invoking the "vxdisk 
set" command but it will choose a matching DA name before a DM name. This 
causes incorrect device to be acted upon.
Both DM and DA name can be specified for the avxdisk seta command with -g 
option however the DM name are given preference with -g option from the design 
perspective.

2. "vxdisk set" command shall accept DA name for device specification.
Without -g option, the command shall work only when DA name is specified. 
However it doesn't work because the disk group name is not extracted from the 
DA record correctly. Hence the first error.
With -g option the DA name specified is treated as a matching DM name wrongly, 
hence the second error.

RESOLUTION:
Code changes are made to make the "vxdisk set" command working correctly on DA 
name without -g option and on both DM and DA names with ag option. The given 
preference is DM name when ag option is specified. It resolves the "vxpfto" 
command issue as well.
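
For example, after the fix (disk group and device names hypothetical):

# vxdisk set emc0_12 clone=off
  (DA name accepted without the -g option)
# vxdisk -g mydg set disk01 clone=off
  (with -g, the DM name is given preference)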

* 2488042 (Tracking ID: 2431423)

SYMPTOM:
Panic in vol_mv_commit_check() while accessing the Data Change Map (DCM)
object, with the following stack trace:
 
 vol_mv_commit_check at ffffffffa0bef79e
 vol_ktrans_commit at ffffffffa0be9b93
 volconfig_ioctl at ffffffffa0c4a957
 volsioctl_real at ffffffffa0c5395c
 vols_ioctl at ffffffffa1161122
 sys_ioctl at ffffffff801a2a0f
 compat_sys_ioctl at ffffffff801ba4fb
 sysenter_do_call at ffffffff80125039

DESCRIPTION:
In case of a DCM failure, the object pointer is set to NULL as part of the
transaction. If the DCM is active, the transaction code path accesses the DCM
object without checking it for NULL; since the pointer can be NULL when the
DCM has failed, dereferencing it without a NULL check causes the panic.

RESOLUTION:
The fix is to add a NULL check for the DCM object in the transaction code path.

* 2491856 (Tracking ID: 2424833)

SYMPTOM:
The VVR primary node crashes while replicating over a lossy, high-latency
network with multiple TCP connections. In a debug VxVM build, a TED assert is
hit with the following stack:

brkpoint+000004 ()
ted_call_demon+00003C (0000000007D98DB8)
ted_assert+0000F0 (0000000007D98DB8, 0000000007D98B28,
   0000000000000000)
.hkey_legacy_gate+00004C ()
nmcom_send_msg_tcp+000C20 (F100010A83C4E000, 0000000200000002,
   0000000000000000, 0000000000000000, 0000000000000000, 0000000000000000,
   000000DA000000DA, 0000000100000000)
.nmcom_connect_tcp+0007D0 ()
vol_rp_connect+0012D0 (F100010B0408C000)
vol_rp_connect_start+000130 (F1000006503F9308, 0FFFFFFFF420FC50)
voliod_iohandle+0000AC (F1000006503F9308, 0000000100000001,
   0FFFFFFFF420FC50)
voliod_loop+000CFC (0000000000000000)
vol_kernel_thread_init+00002C (0FFFFFFFF420FFF0)
threadentry+000054 (??, ??, ??, ??)

DESCRIPTION:
In lossy and high latency network, connection between VVR primary and seconadry
can get closed and re-established frequently because of heartbeat timeouts or
DATA acknowledgement timeouts. In TCP multi-connection scenario, VVR primary sends
its very first message (called NMCOM_HANDSHAKE) to secondary on zeroth socket
connection number and then it sends "NMCOM_SESSION" message for each of the next
connections. By some reasons, if the sending of the NMCOM_HANDSHAKE message fails,
VVR primary tries to send it through the another connection without checking
whether it's a valid connection or NOT.

RESOLUTION:
Code changes are made in VVR to use the other connections only after all the
connections are established.

* 2492016 (Tracking ID: 2232411)

SYMPTOM:
Subsequent resize operations on raid5 or layered volumes may fail with: "VxVM
vxassist ERROR V-5-1-16092 Volume TCv7-13263: There are other recovery activities.
Cannot grow volume"

DESCRIPTION:
If a user tries to grow or shrink a raid5 volume or a layered volume more than
once using the vxassist command, the command may fail with the above-mentioned
error message, as in the example below.
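
For example (disk group and volume names hypothetical):

# vxassist -g mydg growby vol01 100m
  (first resize succeeds)
# vxassist -g mydg growby vol01 100m
  (subsequent resize fails with V-5-1-16092)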

RESOLUTION:
1. Skip setting the recovery offset for RAID volumes.
2. For layered volumes:
   top-level volume: skip setting the recovery offset.
   subvolumes: handled separately later (that code already exists).



INSTALLING THE PATCH
--------------------
a) VxVM 5.1 SP1 (GA) version 5.1.100.000 or version 5.1.100.001 must be
installed before applying these patches.

b) VxVM VRTSaslapm version must be 5.1.103.100 or higher.

c) The following filesets on the system must be at revision B.11.31.1311 or
higher (a quick check is shown after this list):
        FC-COMMON.FC-SNIA
        SAS-COMMON.SAS-COMMON-RUN
        FC-FCD.FC-FCD-RUN
        FC-FCLP.FC-FCLP-RUN
        FC-FCOC.FC-FCOC-RUN
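
The installed revisions can be checked with swlist, for example:

# swlist -l fileset FC-COMMON.FC-SNIA FC-FCD.FC-FCD-RUN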

Please refer to the following link for detailed information:
http://www.symantec.com/business/support/index?page=content&id=TECH200183

NOTE: In a CVM cluster environment, the patch can be installed one node at a time.

If VCS is installed:
Shut down VCS on the node using the command:
# /opt/VRTSvcs/bin/hastop -local

If Serviceguard SMS (Storage Management Suite) is installed:
Stop the node using the command below:
# cmhaltnode [-f]

NOTE: For environments other than VCS and Serviceguard SMS, please refer to
Release Note or contact Symantec support.

d) In case the patch is not registered, the patch can be registered using the
following command:
# swreg -l depot patch_directory
where patch_directory is the absolute path where the patch resides.

e) To install the patch, enter the following command:
# swinstall -x autoreboot=true -s patch_directory PHKL_44775, PHCO_44774

where patch_directory is the absolute path where the patch resides.

NOTE: After this step the system will be rebooted.

f) After installing the patches, run swverify to make sure that they are
installed correctly:
# swverify PHKL_44775, PHCO_44774

If VCS is installed:
Start VCS using the command:
# /opt/VRTSvcs/bin/hastart

If Serviceguard SMS (Storage Management Suite) is installed:
Start the node using the command:
# cmrunnode

NOTE: For environments other than VCS and Serviceguard SMS, please refer to
Release Note or contact Symantec support.


REMOVING THE PATCH
------------------
NOTE: In a CVM cluster environment, the patch can be removed one node at a time.

If VCS is installed:
Shut down VCS on the node using the command:
# /opt/VRTSvcs/bin/hastop -local

If Serviceguard SMS (Storage Management Suite) is installed:
Stop the node using the command below:
# cmhaltnode [-f]

NOTE: For environments other than VCS and Serviceguard SMS, please refer to
Release Note or contact Symantec support.

To remove the patch, enter the following command:
# swremove -x autoreboot=true PHKL_44775, PHCO_44774

NOTE: After this step the system will be rebooted.

If VCS is installed:
Start VCS using the command:
# /opt/VRTSvcs/bin/hastart

If Serviceguard SMS (Storage Management Suite) is installed:
Start the node using the command:
# cmrunnode

NOTE: For environments other than VCS and Serviceguard SMS, please refer to
Release Note or contact Symantec support.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE