README VERSION : 1.1
README CREATION DATE : 2012-03-06
PATCH-ID : PVCO_03937
PATCH NAME : VRTSvxvm 6.0RP1
BASE PACKAGE NAME : VRTSvxvm
BASE PACKAGE VERSION : 6.0.0.0
OBSOLETE PATCHES : NONE
SUPERSEDED PATCHES : NONE
REQUIRED PATCHES : PVKL_03938
INCOMPATIBLE PATCHES : NONE
SUPPORTED PADV : hpux1131 (P-PLATFORM , A-ARCHITECTURE , D-DISTRIBUTION , V-VERSION)
PATCH CATEGORY : CORE , CORRUPTION , HANG , MEMORYLEAK
REBOOT REQUIRED : NO

PATCH INSTALLATION INSTRUCTIONS:
--------------------------------
Please refer to the release notes for installation instructions.

PATCH UNINSTALLATION INSTRUCTIONS:
----------------------------------
Please refer to the release notes for uninstallation instructions.

SPECIAL INSTRUCTIONS:
---------------------
NONE

SUMMARY OF FIXED ISSUES:
-----------------------------------------
2589962 Support utility vxfmrmap (deprecating vxfmrshowmap) to display DCO map contents and verify against possible state corruptions
2605706 write fails on volume on slave node after join which earlier had disks in "lfailed" state
2607793 DMP-ASM: disabling all paths and reboot of the host causes loss of /etc/vx/.vxdmprawdev records
2615288 Site consistency: both sites become detached after data/DCO plex failure at each site, leading to a cluster-wide I/O outage
2625709 vxvmconvert doesn't support migration of data for large LVM/VG configurations since it always creates the private region at a static offset - the 128th block
2625718 vxconfigbackup script error: vxcfgbk_corrupt calls keep_recentcopies with insufficient arguments
2625724 vxvmconvert fails to convert LVM volumes on DMP raw devices to VxVM volumes with native support set to on
2625743 while upgrading diskgroup version, if the rlink is not up to date then vxrvg shows an error but the diskgroup version gets updated
2625766 I/O hang on master node after storage is removed
2626746 Using the vxassist -o ordered and mediatype:hdd options together does not work as expected
2626894 When a detached disk is reattached after connectivity restoration, a 'Tagid conflict' error is given
2626994 'vxdg listtag' should give an error message and display correct usage when executed with wrong syntax
2628978 startup scripts use 'exit' instead of 'quit', causing empty directories in /tmp
2630111 CDS-EFI: vxcdsconvert fails while converting from hpdisk format to cdsdisk format
2633978 unable to perform SAN rootability on a thin LUN
2643137 read/seek I/O errors during init/define of nopriv slice
2643142 'vxmake -g -d ' fails with very large configuration due to memory leaks
2643151 disks with hpdisk format can't be initialized with a private region offset other than 128
2643154 vxtune doesn't accept tunables correctly in human-readable format
2644185 vxdmpadm dumps core in display_dmpnodes_of_redundancy
2647086 VxVMconvert: vxvmconvert utility is broken for converting LVM to VxVM:hpdisk for larger configurations
2660157 vxtune -r option prints wrong tunable value
2666174 A small possible memory leak in case of mixed (cloned and non-cloned) diskgroup import
2680605 vxconfigbackupd does not work correctly with NUM_BK in bk_config
2689104 Data corruption while adding/removing LUNs

KNOWN ISSUES :
--------------
Please refer to the release notes.

FIXED INCIDENTS:
----------------
PATCH ID:PVCO_03937

* INCIDENT NO:2589962 TRACKING ID:2574752

SYMPTOM:
The existing vxfmrshowmap diagnostic shows invalid output with SF6.0-based instant DCO and requires the user to find DCO volume attributes and specify them in the CLI.

DESCRIPTION:
With SF6.0, the instant DCO is configured with a layout different from previous releases. The new layout is not supported by vxfmrshowmap, and the CLI was also found to be complex to use.

RESOLUTION:
vxfmrshowmap is being deprecated and a new CLI, vxfmrmap, is being introduced which is much simpler to use. vxfmrmap has added functionality to check for inconsistencies in the map which could lead to data corruptions.
As with vxfmrshowmap, vxfmrmap can be used to display the DCO map contents for a volume, which is useful for Symantec Support in the analysis of snapshot-related issues.

* INCIDENT NO:2605706 TRACKING ID:2590183

SYMPTOM:
I/Os on newly enabled paths can fail with a reservation conflict error.

DESCRIPTION:
While enabling a path, PGR registration is not done, so I/Os on it can fail with a reservation conflict.

RESOLUTION:
PGR registration is now done on newly enabled paths.

* INCIDENT NO:2607793 TRACKING ID:2556467

SYMPTOM:
When dmp_native_support is enabled, ASM (Automatic Storage Management) disks are disconnected from the host, and the host is rebooted, the user-defined user-group ownership of the respective DMP (Dynamic Multipathing) devices is lost and ownership is reset to default values.

DESCRIPTION:
The user-group ownership records of DMP devices in the /etc/vx/.vxdmprawdev file are refreshed at boot time, and only the records of currently available devices are retained. As part of the refresh, the records of all disconnected ASM disks are removed from /etc/vx/.vxdmprawdev and hence reset to default values.

RESOLUTION:
Code changes have been made so that the /etc/vx/.vxdmprawdev file is not refreshed at boot time.

* INCIDENT NO:2615288 TRACKING ID:2527289

SYMPTOM:
In a Campus Cluster setup, a storage fault may lead to DETACH of all the configured sites. This also results in I/O failure on all the nodes in the Campus Cluster.

DESCRIPTION:
Site detaches are done on site-consistent diskgroups when any volume in the diskgroup loses all the mirrors of a site. During the processing of the DETACH of the last mirror in a site, we identify that it is the last mirror and DETACH the site, which in turn detaches all the objects of that site. In a Campus Cluster setup, we attach a DCO volume to any data volume created on a site-consistent diskgroup. The general configuration is to have one DCO mirror on each site. Loss of a single mirror of the DCO volume on any node will result in the detach of that site.
In a two-site configuration, this particular scenario results in both DCO mirrors being lost simultaneously. While the site detach for the first mirror is being processed, we also signal for DETACH of the second mirror, which ends up DETACHING the second site too. This is not hit in other cases because there is already a check to make sure that we do not DETACH the last mirror of a volume. That check is bypassed in this particular case due to the type of storage failure.

RESOLUTION:
Before triggering a site detach, an explicit check is now made to see whether we are trying to DETACH the last ACTIVE site.

* INCIDENT NO:2625709 TRACKING ID:2535716

SYMPTOM:
LVM Volume Group (VG) to VxVM Disk Group conversion fails, requesting the user to reduce the number of configuration records. Following is an example of the error messages seen during the conversion:

Analysis of found insufficient Private Space for conversion
SMALLEST VGRA space = 176
RESERVED space sectors = 78
PRIVATE SPACE/FREE sectors = 98
AVAILABLE sector space = 49
AVAILABLE sector bytes = 50176
RECORDS needed to convert = 399
MAXIMUM records allowable = 392

The smallest disk in the Volume Group () does not have sufficient private space for the conversion to succeed. There is only enough private space for 392 VM Database records, and the conversion of Volume Group () would require enough space to allow 399 VxVM Database records. This would roughly translate to needing an additional 896 bytes available in the private space. This can be accomplished by reducing the number of volumes in the () Volume Group, allowing that for every volume removed, the number of Database records required would be reduced by three. This is only a rough approximation, however.

DESCRIPTION:
The conversion process works in-place such that VxVM's public region is the same as the LVM public region, while VxVM creates the private region between the start of the disk and the start of the public region.
If the public region for LVM starts early on the disk and the source LVM configuration contains a very large number of records, such that the space between the start of the disk and the start of the public region is not sufficient to hold the private region, the conversion process fails.

RESOLUTION:
The conversion process now creates the private region after the public region, using the free PEs available, in case enough space is not available between the start of the disk and the start of the public region. With this change, the new behavior of vxvmconvert ensures that the conversion succeeds if at least one disk in the configuration can hold the private region either before or after the public region. The conversion now fails only if none of the disks in the source configuration is capable of holding the private region anywhere on the disk.

* INCIDENT NO:2625718 TRACKING ID:2562416

SYMPTOM:
vxconfigbackup throws the following error message:

c1062-ucs-bl18-vm8:/usr/lib/vxvm/bin #
mv: missing destination file operand after `/etc/vx/cbr/bk/xx-xx_6.1314222990.197.c1062-ucs-bl18-vm8/corrupt.vxprint-m'
Try `mv --help' for more information.

DESCRIPTION:
In the vxconfigbackup script, the required number of arguments is not passed to the function, which causes this error message.

RESOLUTION:
Code changes have been made to pass the required number of arguments in the script, and checks for NULL arguments have been added.

* INCIDENT NO:2625724 TRACKING ID:2571334

SYMPTOM:
The vxvmconvert utility fails if source PVs with enclosure-based naming (EBN) or user-defined names are longer than 13 characters and dmp_native_support is ON.

DESCRIPTION:
The LVM command lvdisplay has a limitation of displaying PV names of up to 23 characters, including the device path. This causes disk names exceeding 13 characters to be truncated, so they do not appear correctly in the output. vxvmconvert consumed that same output and hence failed for disk names exceeding 13 characters.
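The 13-character truncation above is an instance of the general fixed-width-column pitfall: a consumer that parses a report printed with a fixed field width can never match names longer than that width. A minimal, self-contained sketch of the effect (the device name and the 13-character width below are hypothetical, for illustration only; this is not the actual lvdisplay output format):

```shell
# A report printed with a fixed 13-character field silently truncates
# longer names, so a parser keyed on the printed name fails to match.
long_name="emcclariion0_238"   # hypothetical 16-character device name
printed=$(printf '%-13.13s' "$long_name")

echo "real name   : $long_name"
echo "printed name: $printed"
if [ "$printed" = "$long_name" ]; then
    echo "parser can match the device"
else
    echo "parser cannot match the device"
fi
```

The fix class is the same as the one described in the resolution: obtain the name from an output mode that does not impose a fixed column width.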
RESOLUTION:
vxvmconvert now uses an option with the LVM command lvdisplay that does not have this limitation. With this change, vxvmconvert obtains the correct PV name and the conversion passes.

* INCIDENT NO:2625743 TRACKING ID:2591321

SYMPTOM:
'vxdg upgrade' upgrades the disk group version successfully, but reports an error.

DESCRIPTION:
As part of the DG version upgrade, the RVG version is upgraded as well. If the RVG is active, the RVG upgrade will fail, but the disk group upgrade will succeed. No error message should be returned when the disk group upgrade itself is successful.

RESOLUTION:
The error is now reported as an informational message when the disk group upgrade is successful, stating that the RVG was not upgraded and requires a manual upgrade.

* INCIDENT NO:2625766 TRACKING ID:2610764

SYMPTOM:
I/O may hang when there are missing disks on the CVM master and vxconfigd is restarted (or 'vxdisk scandisks' is run).

DESCRIPTION:
When vxconfigd is restarted, removal of LFAILED disks is attempted if it is determined that the disks are not connected cluster-wide. While the vxconfigd-level removal succeeds, the kernel-level removal can fail because the disks may still be associated with sub-disks. Thus, on connectivity restore, a duplicate disk can get created in the kernel. This in turn can cause the connectivity-info lookup done in the context of I/O error handling to hang, causing I/O to hang.

RESOLUTION:
The fix is to not remove disks even if they are unavailable cluster-wide. Connectivity-restore handling then ensures that a single disk instance remains.

* INCIDENT NO:2626746 TRACKING ID:2626741

SYMPTOM:
vxassist, when used with the "-o ordered" and "mediatype:hdd" options during a striped volume make operation, does not maintain disk order.

DESCRIPTION:
vxassist, when invoked with the "-o ordered" and "mediatype:hdd" options while creating a striped volume, does not maintain the disk order provided by the user. The first stripe of the volume should correspond to the first disk provided by the user.
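The intent of ordered allocation described above can be sketched in plain shell: each stripe column must be taken from the user-supplied disk list in the order given, never from a re-sorted list (the disk names below are hypothetical):

```shell
# User's disk order as given on the command line; ordered allocation
# must map stripe column N to the Nth disk in this exact order.
set -- disk09 disk02 disk05    # hypothetical disks, deliberately unsorted

col=0
for disk in "$@"; do           # iterate in the given order, no sorting
    echo "stripe column $col -> $disk"
    col=$((col + 1))
done
```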
RESOLUTION:
The code has been rectified to use the disks in the user-specified disk order.

* INCIDENT NO:2626894 TRACKING ID:2621465

SYMPTOM:
When a failed disk belonging to a site becomes accessible again, it cannot be reattached to the disk group.

DESCRIPTION:
As the disk has a site tag name set, the 'vxdg adddisk' command invoked by the 'vxreattach' command needs the '-f' option to add the disk back to the disk group.

RESOLUTION:
The '-f' option is now added to the 'vxdg adddisk' command when it is invoked by the 'vxreattach' command.

* INCIDENT NO:2626994 TRACKING ID:2576602

SYMPTOM:
On all platforms, the listtag option of the vxdg command gives results even when executed with the wrong syntax.

DESCRIPTION:
The correct syntax, as per the vxdg help, is "vxdg listtag [diskgroup ...]". However, when executed with the wrong syntax, "vxdg [-g diskgroup] listtag", it still gives results.

RESOLUTION:
The command has been modified from the 6.0 release onwards to display an error and usage message when the wrong syntax is used. Please use the correct syntax as per the vxdg command help.

* INCIDENT NO:2628978 TRACKING ID:2516584

SYMPTOM:
Many random directories, like vx.$RANDOM.$RANDOM.$RANDOM.$$, are left uncleaned in /tmp/ on system startup.

DESCRIPTION:
The startup scripts should call quit(), which performs the cleanup when errors are detected. The scripts were calling exit() directly instead of quit(), leaving some randomly created directories uncleaned.

RESOLUTION:
The scripts have been restored to call quit() instead of exit() directly.

* INCIDENT NO:2630111 TRACKING ID:2621541

SYMPTOM:
A disk greater than 1 TB on the HP-UX Itanium architecture which was originally initialized with the CDSDISK format goes into the error state if it is reinitialized with the HPDISK format.
The following sequence of commands leads to this problem:

1. Initialize the disk with the CDS format:
# vxdisksetup -if format=cdsdisk

2. Re-initialize this disk with the HPDISK format:
# vxdisksetup -if format=hpdisk

DESCRIPTION:
During the re-initialization of the disk with the HPDISK format, the labels associated with the CDSDISK format are not erased properly. These stale CDSDISK labels cause the disk to go into the error state.

RESOLUTION:
During disk re-initialization with the HPDISK format, vxdisksetup now completely erases all stale CDSDISK labels.

* INCIDENT NO:2633978 TRACKING ID:2589657

SYMPTOM:
For a THIN LUN, creation of a VxVM rootdisk mirror using the vxrootmir(1M) command fails with the following error message:

VxVM vxassist ERROR V-5-1-16103 Cannot allocate space for 280 block for DCO log volume: Not enough devices for allocation

DESCRIPTION:
VxVM by default creates a Data Change Object (DCO) while mirroring THIN LUNs, and the default DCO size is 64MB. If either the original or the mirrored LUN does not have enough space for this DCO, mirroring fails.

RESOLUTION:
DCO creation is disabled while mirroring a VxVM rootdisk on a THIN LUN.

* INCIDENT NO:2643137 TRACKING ID:2565569

SYMPTOM:
VxVM displays read I/O error messages when a VxVM 'nopriv' disk is defined on a partition slice other than slice 2. For example:

VxVM vxdisk ERROR V-5-1-14581 read of block # 3840000 of /dev/vx/rdmp/c1t5d2s4 failed.
VxVM vxdisk ERROR V-5-1-15859 read ID block of /dev/vx/rdmp/c1t5d2s4 failed.

DESCRIPTION:
When a VxVM 'nopriv' disk (a disk type that has no private region metadata) is defined on a partition slice other than slice 2, read I/O error messages may be displayed on the terminal. The errors are displayed because VxVM used the wrong disk partition slice to check for ASM signatures. These error messages can be ignored, since they do not prevent the 'nopriv' disk from being created.
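The direction of the fix is to probe for the ASM signature on the whole-disk device rather than on an arbitrary partition slice. A hedged sketch using a temporary file as a stand-in for the disk ('ORCLDISK' is the well-known marker in an ASM disk header; real offsets and device paths differ):

```shell
# Create a stand-in "disk" carrying an ASM-style signature at offset 0.
disk=$(mktemp)
printf 'ORCLDISK' > "$disk"

# Probe the *full* device, not a partition slice: read the first 8 bytes
# and compare them against the ASM marker.
sig=$(dd if="$disk" bs=8 count=1 2>/dev/null)
if [ "$sig" = "ORCLDISK" ]; then
    echo "ASM signature found"
else
    echo "no ASM signature"
fi
rm -f "$disk"
```

Probing the wrong slice, as the pre-fix code did, would read some other offset of the disk and report spurious read errors instead of a clean signature check.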
RESOLUTION:
The code has been modified to use the "full" partition slice when checking for ASM signatures on the disk.

* INCIDENT NO:2643142 TRACKING ID:2627056

SYMPTOM:
The vxmake(1M) command fails when run with a very large configuration.

DESCRIPTION:
Due to a memory leak in the vxmake(1M) command, the data section limit of the process was reached. As a result, further memory allocations failed and the vxmake command failed.

RESOLUTION:
The memory leak has been fixed by freeing the memory after it has been used.

* INCIDENT NO:2643151 TRACKING ID:2495338

SYMPTOM:
VxVM disk initialization with the HPDISK format fails with the following error:

$ vxdisksetup -ivf eva4k6k0_8 format=hpdisk privoffset=256
VxVM vxdisksetup ERROR V-5-2-2186 privoffset is incompatible with the hpdisk format.

DESCRIPTION:
VxVM imposed a limitation of a fixed private region offset of 128K (256 sectors) on non-boot (data) disks with the HPDISK format.

RESOLUTION:
The limitation of initializing a data disk with the HPDISK format only at a fixed private region offset has been relaxed. Disks can now be initialized with a private region offset of 128K or greater.

* INCIDENT NO:2643154 TRACKING ID:2600863

SYMPTOM:
vxtune does not accept tunable values correctly in human-readable format:

# vxtune volpagemod_max_memsz 10k
# vxtune -uh volpagemod_max_memsz
Tunable               Current Value  Default Value  Reboot
--------------------- -------------- -------------- ------
volpagemod_max_memsz  10 MB          6 MB           N

DESCRIPTION:
When tunable values are provided in human-readable format, vxtune does not set the tunable to the correct value.

RESOLUTION:
vxtune behavior has been rectified to accept and set the correct tunable value when it is presented in human-readable format.

* INCIDENT NO:2644185 TRACKING ID:2649958

SYMPTOM:
vxdmpadm dumped core with the following stack:
#0 0x40bd750:1 in display_dmpnodes_of_redundancy+0x661 ()
#1 0x4092e10:0 in do_getdmpnode+0x400 ()
#2 0x40db040:0 in main+0x1980 ()

DESCRIPTION:
The core dump occurs due to a NULL pointer dereference, and only if the DMP database is in an inconsistent state.

RESOLUTION:
Changes have been added to the VxVM code to avoid the NULL pointer dereference in the concerned code path.

* INCIDENT NO:2647086 TRACKING ID:2495346

SYMPTOM:
LVM Volume Group (VG) to VxVM Disk Group conversion fails, requesting the user to reduce the number of configuration records. Following is an example of the error messages seen during the conversion:

Analysis of found insufficient Private Space for conversion
SMALLEST VGRA space = 176
RESERVED space sectors = 78
PRIVATE SPACE/FREE sectors = 98
AVAILABLE sector space = 49
AVAILABLE sector bytes = 50176
RECORDS needed to convert = 399
MAXIMUM records allowable = 392

The smallest disk in the Volume Group () does not have sufficient private space for the conversion to succeed. There is only enough private space for 392 VM Database records, and the conversion of Volume Group () would require enough space to allow 399 VxVM Database records. This would roughly translate to needing an additional 896 bytes available in the private space. This can be accomplished by reducing the number of volumes in the () Volume Group, allowing that for every volume removed, the number of Database records required would be reduced by three. This is only a rough approximation, however.

DESCRIPTION:
The conversion process works in-place such that VxVM's public region is the same as the LVM public region, while VxVM creates the private region between the start of the disk and the start of the public region. If the public region for LVM starts early on the disk and the source LVM configuration contains a very large number of records, such that the space between the start of the disk and the start of the public region is not sufficient to hold the private region, the conversion process fails.
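The figures in the error message above are internally consistent if each VM database record occupies 128 bytes (a value implied by the message, not stated in it): 50176 available bytes allow 392 records, and 399 records would need 896 bytes more. A quick arithmetic check:

```shell
# Re-derive the numbers quoted in the conversion error message,
# assuming 128 bytes per VM database record (implied, not documented).
avail_bytes=50176
rec_size=128
records_needed=399

max_records=$((avail_bytes / rec_size))
shortfall=$((records_needed * rec_size - avail_bytes))

echo "maximum records allowable : $max_records"    # 392, as reported
echo "additional bytes needed   : $shortfall"      # 896, as reported
```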
RESOLUTION:
The conversion process first tries to initialize the disk with the CDSDISK format, creating the private region after the public region using the free PEs available, in case enough space is not available between the start of the disk and the start of the public region. If this is not possible, vxvmconvert tries to initialize the disk with the HPDISK format, with the private region either before or after the public region. With this change, the new behavior of vxvmconvert ensures that the conversion succeeds if at least one disk in the configuration can hold the private region either before or after the public region, with either the HPDISK or the CDSDISK format.

* INCIDENT NO:2660157 TRACKING ID:2575581

SYMPTOM:
The vxtune -r option prints incorrect tunable values:

# vxtune vol_rvio_maxpool_sz | awk '{print $1"\t"$2}'
Tunable               Current Value
--------------------- ---------------
vol_rvio_maxpool_sz   1048704

# vxtune -r vol_rvio_maxpool_sz | awk '{print $1"\t"$2}'
Tunable               Current Value
--------------------- -------------
vol_rvio_maxpool_sz   1048576

DESCRIPTION:
The vxtune '-r' option, which is used to print tunable values in raw bytes, displays an incorrect value in bytes for some of the tunables.

RESOLUTION:
vxtune behavior has been rectified to print the correct value in bytes for all possible tunable values.

* INCIDENT NO:2666174 TRACKING ID:2666163

SYMPTOM:
A small memory leak may be seen in vxconfigd, the VxVM configuration daemon, when a Serial Split Brain (SSB) error is detected in the import process.

DESCRIPTION:
The leak may occur when an SSB error is detected in the import process, because when the SSB error is returned from a function, a dynamically allocated memory area in that function is not freed. SSB detection is a VxVM feature whereby VxVM detects whether the configuration copy in the disk private region has become stale unexpectedly.
A typical use case of the SSB error is that a disk group is imported on different systems at the same time, and configuration copy updates on both systems result in an inconsistency between the copies. VxVM cannot identify which configuration copy is the most up-to-date in this situation. As a result, VxVM may detect an SSB error on the next import and show the details through a CLI message.

RESOLUTION:
Code changes have been made to avoid the memory leak, along with a small message fix.

* INCIDENT NO:2680605 TRACKING ID:2680604

SYMPTOM:
vxconfigbackupd(1M) does not work correctly with NUM_BK set in the file "/etc/vx/cbr/bk_config". More disk group configuration backup copies are kept than the number set in NUM_BK.

DESCRIPTION:
The script gets the number of backup copies currently available. If this is greater than the required number of backup copies (as set by NUM_BK), the script removes the old backup copies. The script tries to get the old backup copies through the ls(1) command, but ls(1) is executed on the "/" directory rather than the directory where the backup copies are stored. So the list of directories to be removed is not populated, and old backup copies are not removed. Backup copies therefore keep accumulating.

RESOLUTION:
The script has been modified to get the old backup copies from the directory where the backup copies are stored and remove them as per the value of NUM_BK.

* INCIDENT NO:2689104 TRACKING ID:2674465

SYMPTOM:
Data corruption is observed when DMP node names are changed by the following commands for DMP devices that are controlled by a third-party multi-pathing driver (e.g. MPxIO, PowerPath):

# vxddladm [-c] assign names
# vxddladm assign names file=
# vxddladm set namingscheme=

DESCRIPTION:
The above commands, when executed, re-assign names to each device. Accordingly, the in-core DMP database should be updated for each device to map the new device name to the appropriate device number.
Due to a bug in the code, the mapping of names to device numbers was not done appropriately, which resulted in subsequent I/Os going to a wrong device, thus leading to data corruption.

RESOLUTION:
The DMP routines responsible for mapping the names to the right device numbers have been modified to fix this corruption problem.

INCIDENTS FROM OLD PATCHES:
---------------------------
NONE