README VERSION       : 1.1
README CREATION DATE : 2012-03-06
PATCH-ID             : 6.0.1.0
PATCH NAME           : VRTSvxvm 6.0RP1
BASE PACKAGE NAME    : VRTSvxvm
BASE PACKAGE VERSION : 6.0.0.0
OBSOLETE PATCHES     : NONE
SUPERSEDED PATCHES   : NONE
REQUIRED PATCHES     : NONE
INCOMPATIBLE PATCHES : NONE
SUPPORTED PADV       : sol10_sparc (P-PLATFORM, A-ARCHITECTURE, D-DISTRIBUTION, V-VERSION)
PATCH CATEGORY       : CORE, CORRUPTION, HANG, MEMORYLEAK, PANIC, PERFORMANCE
REBOOT REQUIRED      : NO

PATCH INSTALLATION INSTRUCTIONS:
--------------------------------
Please refer to the release notes for installation instructions.

PATCH UNINSTALLATION INSTRUCTIONS:
----------------------------------
Please refer to the release notes for uninstallation instructions.

SPECIAL INSTRUCTIONS:
---------------------
NONE

SUMMARY OF FIXED ISSUES:
------------------------
2589962 Support utility vxfmrmap (deprecating vxfmrshowmap) to display DCO map contents and verify against possible state corruptions
2598525 CVR: memory leaks reported
2605706 Write fails on a volume on a slave node after join which earlier had disks in "lfailed" state
2607793 DMP-ASM: disabling all paths and rebooting the host causes loss of /etc/vx/.vxdmprawdev records
2615288 Site consistency: both sites become detached after a data/DCO plex failure at each site, leading to a cluster-wide I/O outage
2624573 vxdisksetup on an EFI disk takes ~2-4 minutes
2624574 VVR logowner: local I/O starved under heavy I/O load from the log client
2625718 vxconfigbackup script error: vxcfgbk_corrupt calls keep_recentcopies with insufficient arguments
2625743 While upgrading the diskgroup version, if the rlink is not up to date vxrvg shows an error but the diskgroup version still gets updated
2625762 Secondary master panics at volkiofree
2625766 I/O hang on the master node after storage is removed
2626746 Using the vxassist -o ordered and mediatype:hdd options together does not work as expected
2626894 Reattaching a detached disk after connectivity restoration gives a 'Tagid conflict' error
2626994 'vxdg listtag' should give an error message and display correct usage when executed with wrong syntax
2628978 Startup scripts use 'exit' instead of 'quit', leaving empty directories in /tmp
2630074 Longevity:sfrac: after 'vxdg destroy' hangs (for a shared diskgroup), all VxVM commands hang on the master
2637183 Intermittent data corruption after a vxassist move
2643118 Solaris vxdiskadm utility does not update the default DM name when multiple disks are specified for encapsulation
2643134 Failure while validating the mirror name interface for a linked mirror volume
2643137 Read/seek I/O errors during init/define of a nopriv slice
2643138 I/O hang due to SRL overflow and CVM reconfiguration
2643139 I/O hung after SRL overflow
2643142 'vxmake -g <dgname> -d <desc-file>' fails with a very large configuration due to memory leaks
2643154 vxtune does not accept tunables correctly in human-readable format
2643155 VVR: primary master panicked in rv_ibc_freeze_timeout
2643156 CVM: diskgroup activation can hang due to a bug in VxVM kernel code
2643157 Encapsulation issue: vxdiskadm should not default the disk format to cdsdisk; it should be sliced
2644185 vxdmpadm dumps core in display_dmpnodes_of_redundancy
2660157 The vxtune -r option prints wrong tunable values
2666174 A small possible memory leak in case of a mixed (cloned and non-cloned) diskgroup import
2680605 vxconfigbackupd does not work correctly with NUM_BK in bk_config
2682534 Starting a 32TB RAID5 volume fails with V-5-1-10128 Unexpected kernel error in configuration update
2689104 Data corruption while adding/removing LUNs
KNOWN ISSUES:
-------------
Please refer to the release notes for known issues.

FIXED INCIDENTS:
----------------
PATCH ID:6.0.1.0

* INCIDENT NO:2589962    TRACKING ID:2574752

SYMPTOM: The existing vxfmrshowmap diagnostic shows invalid output with the SF6.0-based instant DCO, and requires the user to find DCO volume attributes and specify them on the CLI.

DESCRIPTION: With SF6.0, the instant DCO is configured with a layout different from previous releases. The new layout is not supported by vxfmrshowmap, and the CLI was found to be complex to use.

RESOLUTION: vxfmrshowmap is being deprecated and a new CLI, vxfmrmap, is being introduced which is much simpler to use. vxfmrmap has added functionality to check for inconsistencies in the map which could lead to data corruption. As with vxfmrshowmap, vxfmrmap can be used to display the DCO map contents for a volume, which is useful to Symantec Support in the analysis of snapshot-related issues.

* INCIDENT NO:2598525    TRACKING ID:2526498

SYMPTOM: Memory leak after running the automated VVR test case.

DESCRIPTION: After an IBC request is serviced, the update is queued on the free queue, but only when its reference count is not zero. In some cases IBC receive and IBC send race each other; during this window the reference count may not be equal to 0, and the update is queued on the free queue. The free queue is freed by the garbage collector, or cleaned up when the RVG is removed. But in some code paths the free queue is set to NULL without freeing the update.

RESOLUTION: The fix is to not reset the free queue, so that either the garbage collector or RVG deletion frees the update.

* INCIDENT NO:2605706    TRACKING ID:2590183

SYMPTOM: I/Os on newly enabled paths can fail with a reservation conflict error.

DESCRIPTION: While enabling a path, PGR registration is not done, so I/Os can fail with a reservation conflict.

RESOLUTION: Perform the PGR registration on newly enabled paths.

* INCIDENT NO:2607793    TRACKING ID:2556467

SYMPTOM: When dmp_native_support is enabled, ASM (Automatic Storage Management) disks are disconnected from the host and the host is rebooted, the user-defined user-group ownership of the respective DMP (Dynamic Multipathing) devices is lost and ownership is reset to default values.

DESCRIPTION: The user-group ownership records of DMP devices in the /etc/vx/.vxdmprawdev file are refreshed at boot time, and only the records of currently available devices are retained. As part of the refresh, the records of all disconnected ASM disks are removed from /etc/vx/.vxdmprawdev and hence reset to default values.

RESOLUTION: The code has been changed so that the file /etc/vx/.vxdmprawdev is not refreshed at boot time.

* INCIDENT NO:2615288    TRACKING ID:2527289

SYMPTOM: In a Campus Cluster setup, a storage fault may lead to the DETACH of all the configured sites. This also results in I/O failure on all the nodes in the Campus Cluster.

DESCRIPTION: On site-consistent diskgroups, a site is detached when any volume in the diskgroup loses all the mirrors of that site. While processing the DETACH of the last mirror in a site, we identify that it is the last mirror and DETACH the site, which in turn detaches all the objects of that site. In a Campus Cluster setup, a DCO volume is attached to any data volume created on a site-consistent diskgroup. The general configuration is to have one DCO mirror on each site. Loss of a single mirror of the DCO volume on any node results in the detach of that site. In a 2-site configuration, this particular scenario results in both DCO mirrors being lost simultaneously. While the site detach for the first mirror is being processed, we also signal the DETACH of the second mirror, which ends up DETACHING the second site too. This is not hit in other tests as there is already a check to make sure we do not DETACH the last mirror of a volume; that check is subverted in this particular case due to the type of storage failure.

RESOLUTION: Before triggering a site detach, an explicit check has been added to see whether we are trying to DETACH the last ACTIVE site.
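The guard added by this fix can be illustrated with a short sketch. This is a hypothetical simplification in C, not VxVM's kernel code; the names site_t, count_active_sites and site_detach are invented for illustration:

#include <stdio.h>

/* Hypothetical model of per-site state; not VxVM's actual structures. */
typedef enum { SITE_ACTIVE, SITE_DETACHED } site_state_t;

typedef struct {
    const char   *name;
    site_state_t  state;
} site_t;

static int count_active_sites(const site_t *sites, int nsites)
{
    int active = 0;
    for (int i = 0; i < nsites; i++)
        if (sites[i].state == SITE_ACTIVE)
            active++;
    return active;
}

/* Detach a site only if at least one other ACTIVE site would remain. */
static int site_detach(site_t *sites, int nsites, int idx)
{
    if (sites[idx].state != SITE_ACTIVE)
        return 0;                  /* already detached; nothing to do */
    if (count_active_sites(sites, nsites) <= 1) {
        fprintf(stderr, "refusing to detach %s: last ACTIVE site\n",
                sites[idx].name);
        return -1;                 /* would cause a cluster-wide I/O outage */
    }
    sites[idx].state = SITE_DETACHED;
    return 0;
}

int main(void)
{
    site_t sites[] = { { "siteA", SITE_ACTIVE }, { "siteB", SITE_ACTIVE } };

    site_detach(sites, 2, 0);      /* first mirror failure: siteA detaches */
    site_detach(sites, 2, 1);      /* second, near-simultaneous: refused   */
    return 0;
}

The essential point is that the second detach request is evaluated against the site states left by the first, so the last ACTIVE site is never detached.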
* INCIDENT NO:2624573    TRACKING ID:2589569

SYMPTOM: vxdisksetup takes a long time (approximately 2-4 minutes) to initialize a sliced disk on an A/P array.

DESCRIPTION: In VxVM (Veritas Volume Manager), the DKIOCGVTOC/DKIOCGGEOM IOCTLs are used to detect an EFI disk. If these IOCTLs return the error ENOTSUP, the disk is considered to have an EFI label. Upon the ENOTSUP error returning from the primary path, the DMP driver retries the IOCTLs on the secondary path, which consumes the additional time.

RESOLUTION: The IOCTL service routine is modified to restrict the DMP driver from retrying the IOCTLs on the secondary path.

* INCIDENT NO:2624574    TRACKING ID:2608849

SYMPTOM:
1. Under a heavy I/O load on the log client node, write I/Os on the VVR primary logowner take a very long time to complete.
2. I/Os on the "master" and "slave" nodes hang when the "master" role is switched multiple times using the "vxclustadm setmaster" command.

DESCRIPTION:
1. VVR does not allow more than 2048 I/Os outstanding on the SRL volume. Any I/Os beyond this threshold are throttled. The throttled I/Os are restarted after every SRL header flush operation. While restarting the throttled I/Os, I/Os coming from the log client are given higher priority, causing logowner I/Os to starve.
2. In the CVM reconfiguration code path, the RLINK ports are not cleanly deleted on the old logowner. This causes the RLINKs not to connect, leading to both replication and I/O hangs.

RESOLUTION: The algorithm which restarts the throttled I/Os is modified to give a fair chance to both local and remote I/Os to proceed. Additionally, code changes are made in the CVM reconfiguration code path to delete the RLINK ports cleanly before switching the master role.
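A minimal sketch of the fairness idea behind the fix, assuming two throttle queues (local and remote); the types and function names below are illustrative, not VVR's actual structures:

#include <stdio.h>

/* Minimal singly linked I/O queues; illustrative, not VVR's types. */
struct io  { struct io *next; int id; };
struct ioq { struct io *head; };

static struct io *ioq_pop(struct ioq *q)
{
    struct io *io = q->head;
    if (io)
        q->head = io->next;
    return io;
}

static void start_io(const struct io *io, const char *who)
{
    printf("restarting %s I/O %d\n", who, io->id);
}

/*
 * Restart throttled I/Os after an SRL header flush.  Instead of draining
 * the remote (log client) queue first and starving local writers, take
 * one I/O from each queue per round so both make progress.
 */
static void restart_throttled(struct ioq *local, struct ioq *remote)
{
    for (;;) {
        struct io *l = ioq_pop(local);
        struct io *r = ioq_pop(remote);

        if (!l && !r)
            break;
        if (l) start_io(l, "local");
        if (r) start_io(r, "remote");
    }
}

int main(void)
{
    struct io l1 = { NULL, 1 }, r1 = { NULL, 2 }, r2 = { &r1, 3 };
    struct ioq local = { &l1 }, remote = { &r2 };

    restart_throttled(&local, &remote);   /* interleaves local and remote */
    return 0;
}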
* INCIDENT NO:2625718    TRACKING ID:2562416

SYMPTOM: vxconfigbackup throws the following error message:

c1062-ucs-bl18-vm8:/usr/lib/vxvm/bin #
mv: missing destination file operand after `/etc/vx/cbr/bk/xx-xx_6.1314222990.197.c1062-ucs-bl18-vm8/corrupt.vxprint-m'
Try `mv --help' for more information.

DESCRIPTION: In the vxconfigbackup script, the required number of arguments is not passed to the function, which causes this error message.

RESOLUTION: Code changes are done to pass the required number of arguments in the script, and checks for NULL arguments are added.

* INCIDENT NO:2625743    TRACKING ID:2591321

SYMPTOM: vxdg upgrade upgrades the diskgroup version successfully, but reports an error.

DESCRIPTION: As part of the diskgroup version upgrade, the RVG version is upgraded as well. If the RVG is active, the RVG upgrade fails but the diskgroup upgrade succeeds. No error message should be returned when the diskgroup upgrade is successful.

RESOLUTION: The error message is changed to an informational message in the case of a successful diskgroup upgrade, stating that the RVG was not upgraded and requires a manual procedure.

* INCIDENT NO:2625762    TRACKING ID:2607519

SYMPTOM: During the initial sync from the VVR primary site to the VVR secondary site, if there is a cluster reconfiguration, the CVM master on the VVR secondary site may panic with the following stack trace:

volkiofree+000018
vol_dcm_read_cleanup+000150
vol_rvdcm_read_done+000834
volkcontext_process+0000E4
voldiskiodone+000B50
volsp_iodone_common+0001B8
volsp_iodone+00002C
internal_iodone_offl+000170
iodone_offl+000078
i_softmod+00027C
flih_util+000250

DESCRIPTION: The panic is caused by a condition with respect to the cluster reconfiguration on the secondary site which is not correctly handled: a NULL pointer variable is dereferenced.

RESOLUTION: The fix improves the handling of reconfiguration conditions during the initial sync.

* INCIDENT NO:2625766    TRACKING ID:2610764

SYMPTOM: I/O may hang when there are missing disks on the CVM master and vxconfigd is restarted (or vxdisk scandisks is run).

DESCRIPTION: When vxconfigd is restarted, removal of LFAILED disks is attempted if it is determined that the disks are not connected cluster-wide. While the vxconfigd-level removal succeeds, the kernel-level removal can fail as the disks may still be associated with subdisks. Thus, on connectivity restore, a duplicate disk can get created in the kernel. This in turn can cause the connectivity-info lookup in the context of I/O error handling to hang, causing I/O to hang.

RESOLUTION: The fix is to not remove disks even if they are unavailable cluster-wide. Connectivity-restore handling then ensures that a single disk instance remains.

* INCIDENT NO:2626746    TRACKING ID:2626741

SYMPTOM: vxassist, when used with "-o ordered" and "mediatype:hdd" during a striped volume make operation, does not maintain disk order.

DESCRIPTION: vxassist, when invoked with the "-o ordered" and "mediatype:hdd" options while creating a striped volume, does not maintain the disk order provided by the user. The first stripe of the volume should correspond to the first disk provided by the user.

RESOLUTION: The code is rectified to use the disks as per the user-specified disk order.

* INCIDENT NO:2626894    TRACKING ID:2621465

SYMPTOM: When a failed disk belonging to a site becomes accessible again, it cannot be reattached to the disk group.

DESCRIPTION: As the disk has a site tag name set, the 'vxdg adddisk' command invoked by the 'vxreattach' command needs the '-f' option to add the disk back to the disk group.

RESOLUTION: The '-f' option is added to the 'vxdg adddisk' command when it is invoked by the 'vxreattach' command.

* INCIDENT NO:2626994    TRACKING ID:2576602

SYMPTOM: On all platforms, the listtag option of the vxdg command gives results even when executed with the wrong syntax.

DESCRIPTION: The correct syntax, as per vxdg help, is "vxdg listtag [diskgroup ...]". However, when executed with the wrong syntax, "vxdg [-g diskgroup] listtag", it still gives results.

RESOLUTION: Please use the correct syntax as per the help for the vxdg command. The command has been modified from the 6.0 release onwards to display an error and usage message when the wrong syntax is used.

* INCIDENT NO:2628978    TRACKING ID:2516584

SYMPTOM: Many random directories, named like vx.$RANDOM.$RANDOM.$RANDOM.$$, are left uncleaned in /tmp/ on system startup.

DESCRIPTION: The startup scripts should call quit(), which performs the cleanup when errors are detected. The scripts were calling exit() directly instead of quit(), leaving some randomly created directories uncleaned.

RESOLUTION: The scripts are restored to call quit() instead of exit() directly.
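The scripts involved are shell scripts, but the cleanup convention generalizes. Below is a hedged C analogue of the quit()-instead-of-exit() pattern described above; the temp directory name only loosely mimics the vx.* naming, and none of this is the scripts' actual code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical analogue of a startup script's temporary directory. */
static char tmpdir[64];

/* quit(): remove the temp directory, then exit.  All error paths must
 * come through here; a bare exit() would leave the directory in /tmp. */
static void quit(int status)
{
    if (tmpdir[0] != '\0')
        rmdir(tmpdir);
    exit(status);
}

int main(void)
{
    snprintf(tmpdir, sizeof(tmpdir), "/tmp/vx.%d", (int)getpid());
    if (mkdir(tmpdir, 0700) != 0) {
        perror("mkdir");
        exit(1);               /* nothing created yet: plain exit is fine */
    }

    /* ... work happens here; on any detected error: */
    int error_detected = 1;
    if (error_detected)
        quit(1);               /* NOT exit(1): tmpdir is cleaned up */

    quit(0);
}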
* INCIDENT NO:2630074    TRACKING ID:2530698

SYMPTOM: The vxdg destroy command can hang. vxconfigd hangs with the following stack trace:

Stack trace for process "vxconfigd" at 0xe000000161919300 (pid 392)
Thread at 0xe000000161960000 (tid 799)
#0  slpq_swtch_core+0x530 ()
#1  real_sleep+0x360 ()
#2  sleep_one+0x90 ()
#3  vol_kmsg_send_wait+0x4c0 ()
#4  volcvmdg_delete_group+0x270 ()
#5  vol_delete_group+0x220 ()
#6  volconfig_ioctl+0x200 ()
#7  volsioctl_real+0x7d0 ()
#8  volsioctl+0x60 ()
#9  vols_ioctl+0x80 ()
#10 spec_ioctl+0xf0 ()
#11 vno_ioctl+0x350 ()
#12 ioctl+0x410 ()
#13 syscall+0x590 ()

The kmsg receiver thread hangs with the following stack trace on the master node (note that on HP the process name is "vxiod" even for the kmsg receiver thread):

Stack trace for process "vxiod" at 0xe0000001510a9300 (pid 42)
Thread at 0xe0000001af147380 (tid 19002)
#0  slpq_swtch_core+0x530 ()
#1  inline real_sleep+0x360 ()
#2  sleep+0x90 ()
#3  vxvm_delay+0xe0 ()
#4  voldg_delete_finish+0x160 ()
#5  volcvmdg_delete_msg_receive+0xcf0 ()
#6  vol_kmsg_dg_request+0x2f0 ()
#7  vol_kmsg_request_receive+0x8c0 ()
#8  vol_kmsg_ring_broadcast_receive+0xcf0 ()
#9  vol_kmsg_receiver+0x1130 ()

DESCRIPTION: When the vxdg destroy command is issued while internal I/Os are in progress (a plex attach, adding a mirror, etc.), it can lead to a hang on the master node. There is a bug in the CVM code where the master node keeps waiting for a glock to be granted by a slave node that has already destroyed the diskgroup. In this case, slaves respond with an error saying the diskgroup no longer exists. This also causes vxconfigd to hang in the kernel. Once this issue is hit, most VxVM commands hang on the master node, and the only way to recover is to reboot the system.

RESOLUTION: Code changes are made in CVM to handle error responses from slaves while requesting glocks during internal I/Os. When the master receives this error from a slave, the new code treats it as if the glock has been granted. The diskgroup destroy processing is also moved from the kmsg receiver thread to vxiod threads, to avoid any potential deadlocks between the destroy operation and any glock grant operation on the master node.

* INCIDENT NO:2637183    TRACKING ID:2647795

SYMPTOM: With the Smartmove feature enabled, data corruption is seen on the file system while moving a subdisk, as the subdisk contents are not copied properly.

DESCRIPTION: With the FS Smartmove feature enabled, the subdisk move operation queries VxFS for the status of a region before deciding to synchronize it. While getting the information about multiple such regions in one IOCTL to VxFS, if the start offset is not aligned to the region size, one I/O can span two regions; VxVM was not properly checking the status of such regions and skipped the synchronization of that region, causing data corruption.

RESOLUTION: Code changes are done to properly check the region state even if the region spans two bits in the FSMAP.
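The boundary condition described above can be sketched as follows. This is an illustrative model assuming a hypothetical one-bit-per-region map and invented names (region_in_use, extent_needs_sync); it is not the actual VxFS FSMAP interface:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REGION_SIZE  (16 * 1024)     /* illustrative region size in bytes */

/* Hypothetical map: one bit per region, 1 = region in use by the FS. */
static bool region_in_use(const uint8_t *map, uint64_t region)
{
    return (map[region / 8] >> (region % 8)) & 1;
}

/*
 * Decide whether an extent [offset, offset + len) must be synchronized.
 * If offset is not region-aligned the extent can span two regions, so
 * every region it touches must be checked -- not just the first one.
 */
static bool extent_needs_sync(const uint8_t *map,
                              uint64_t offset, uint64_t len)
{
    uint64_t first = offset / REGION_SIZE;
    uint64_t last  = (offset + len - 1) / REGION_SIZE;

    for (uint64_t r = first; r <= last; r++)
        if (region_in_use(map, r))
            return true;             /* any touched region in use: copy */
    return false;
}

int main(void)
{
    uint8_t map[2] = { 0x02, 0x00 }; /* only region 1 is in use */

    /* An 8 KB extent starting 4 KB before the region boundary spans
     * regions 0 and 1, so it must be synchronized. */
    printf("needs sync: %d\n",
           extent_needs_sync(map, REGION_SIZE - 4096, 8192));
    return 0;
}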
* INCIDENT NO:2643118    TRACKING ID:2632120

SYMPTOM: The vxdiskadm utility's encapsulation operation does not update the default DM name when multiple disks are selected for encapsulation. For example:

A new disk group rootdg will be created and the disk device c2t20210002AC00065Cd10 will be encapsulated and added to the disk group with the disk name rootdg01.

A new disk group rootdg will be created and the disk device c2t20210002AC00065Cd11 will be encapsulated and added to the disk group with the disk name rootdg01.

DESCRIPTION: The vxdiskadm utility's encapsulation operation does not update the default DM name when multiple disks are selected for encapsulation.

RESOLUTION: The vxdiskadm utility has been changed to update the default DM name when multiple disks are selected for encapsulation.

* INCIDENT NO:2643134    TRACKING ID:2348180

SYMPTOM: The mirror name gets truncated while retrieving the name of the mirror for a given volume and mirror number.

DESCRIPTION: VxVM supports volume names of up to 32 characters, but while retrieving the name of the mirror for a given volume and mirror number, the mirror name gets truncated because of a miscalculation.

RESOLUTION: The proper, complete mirror name is now returned.

* INCIDENT NO:2643137    TRACKING ID:2565569

SYMPTOM: VxVM displays read I/O error messages when a VxVM 'nopriv' disk is defined on a partition slice other than slice 2. For example:

VxVM vxdisk ERROR V-5-1-14581 read of block # 3840000 of /dev/vx/rdmp/c1t5d2s4 failed.
VxVM vxdisk ERROR V-5-1-15859 read ID block of dev/vx/rdmp/c1t5d2s4 failed.

DESCRIPTION: When a VxVM 'nopriv' disk (a disk type that has no private-region metadata) is defined on a partition slice other than slice 2, read I/O error messages may be displayed on the terminal. The errors are displayed because VxVM used the wrong disk partition slice to check for ASM signatures. These error messages can be ignored, since they do not prevent the 'nopriv' disk from being created.

RESOLUTION: The code has been modified to use the "full" partition slice when checking for ASM signatures on the disk.

* INCIDENT NO:2643138    TRACKING ID:2620555

SYMPTOM: During a CVM reconfiguration, the RVG waits for the iocount to go to 0 in order to start the RVG recovery and complete the reconfiguration.

DESCRIPTION: In CVR, a node leave triggers the reconfiguration. The reconfiguration code path initiates the RVG recovery of all the shared diskgroups. The recovery is needed to flush the SRL (shared by all the nodes) to the data volumes, to avoid missing any writes to the data volumes by the leaving node. This recovery involves reading the data from the SRL and copying it to the data volumes. The flush may take its own time, depending on the disk response time and the size of the SRL region to flush, and during the recovery a flag is set on the RVG to block any new I/O. In this particular case, the recovery was taking 30 minutes. During this time another node leave happened, which triggered a second reconfiguration. The second reconfiguration, before triggering another recovery, waits for the iocount to go to zero after setting the RECOVER flag on the RVG. The first RVG recovery cleared the RECOVER flag after 30 minutes, once the SRL flush completed. Since this is the same flag set by the second reconfiguration, and clearing it allowed I/O to resume, the iocount never dropped to zero and the second reconfiguration waited indefinitely.

RESOLUTION: If the RECOVER flag is already set, do not keep waiting for the iocount to become zero in the reconfiguration code path. There is no need for another recovery if the second reconfiguration starts before the first recovery completes.
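A condensed sketch of the fixed logic, assuming an invented rvg structure and RVG_RECOVER flag (not VVR's real definitions):

#include <stdio.h>

#define RVG_RECOVER 0x1          /* illustrative flag bit */

struct rvg {
    unsigned flags;
    int      iocount;            /* I/Os outstanding on the RVG */
};

/*
 * Called on each cluster reconfiguration.  With the fix, a second
 * reconfiguration that finds RECOVER already set does not wait for
 * iocount to drain: the recovery already in progress covers it.
 */
static void reconfig_recover(struct rvg *rvg)
{
    if (rvg->flags & RVG_RECOVER) {
        printf("recovery already in progress; not waiting\n");
        return;
    }
    rvg->flags |= RVG_RECOVER;   /* block new I/O during the SRL flush */

    while (rvg->iocount > 0)     /* stand-in for sleeping until iodone */
        rvg->iocount--;

    /* ... flush the SRL to the data volumes ... */

    rvg->flags &= ~RVG_RECOVER;  /* recovery complete; I/O resumes */
}

int main(void)
{
    /* Simulate a second node leave while the first recovery still runs. */
    struct rvg rvg = { RVG_RECOVER, 3 };

    reconfig_recover(&rvg);      /* returns at once instead of hanging */
    return 0;
}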
* INCIDENT NO:2643139    TRACKING ID:2620556

SYMPTOM: I/O hung on the primary after an SRL overflow, during the SRL flush and an rlink connect/disconnect.

DESCRIPTION: As part of an rlink connect or disconnect, the RVG is serialized to complete the connection or disconnection. Throttling I/O during the SRL flush is normal, due to memory pool pressure or reaching the maximum throttle limit. During the serialization, I/O is throttled to complete the DCM flush, and remote I/Os are kept in the throttleq while throttling is in effect. Due to the I/O serialization, the throttled I/O never gets flushed, and because of that the I/O never completes.

RESOLUTION: If the serialization is successful, flush the throttleq immediately. This makes sure the remote I/Os are retried again in the serialization code path.

* INCIDENT NO:2643142    TRACKING ID:2627056

SYMPTOM: The vxmake(1M) command fails when run with a very large configuration.

DESCRIPTION: Due to a memory leak in the vxmake(1M) command, the data section limit for the process was reached. As a result, further memory allocations failed and the vxmake command failed.

RESOLUTION: The memory leak is fixed by freeing the memory after it has been used.

* INCIDENT NO:2643154    TRACKING ID:2600863

SYMPTOM: vxtune does not accept tunable values correctly in human-readable format:

# vxtune volpagemod_max_memsz 10k
# vxtune -uh volpagemod_max_memsz
Tunable               Current Value   Default Value Reboot
--------------------- --------------- ------------- ------
volpagemod_max_memsz  10 MB           6 MB          N

DESCRIPTION: When tunable values are provided in human-readable format, vxtune does not set the tunable to the correct value.

RESOLUTION: vxtune behavior is rectified to accept and set the correct tunable value when it is presented in human-readable format.
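The kind of suffix conversion involved can be sketched as below. This is illustrative only: the unit semantics (k/m/g as binary multipliers applied to a byte count) are an assumption, and this is not vxtune's actual parsing code. A bug of the kind described above would, for example, misapply the multiplier so that "10k" is stored as 10 MB:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse a tunable value such as "10k", "10m" or "1g" into bytes.
 * Returns -1 on a malformed value.
 */
static long long parse_tunable(const char *s)
{
    char *end;
    long long v = strtoll(s, &end, 10);

    if (end == s || v < 0)
        return -1;
    switch (tolower((unsigned char)*end)) {
    case '\0': return v;                       /* plain bytes */
    case 'k':  return v * 1024;
    case 'm':  return v * 1024 * 1024;
    case 'g':  return v * 1024 * 1024 * 1024;
    default:   return -1;
    }
}

int main(void)
{
    printf("10k -> %lld bytes\n", parse_tunable("10k"));   /* 10240 */
    printf("6m  -> %lld bytes\n", parse_tunable("6m"));    /* 6291456 */
    return 0;
}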
* INCIDENT NO:2643155    TRACKING ID:2607293

SYMPTOM: The VVR primary panics while deleting an RVG, with the following stack trace:

panic_save_regs_switchstack+0x110
panic
bad_news
bubbleup+0x880
rv_ibc_freeze_timeout
invoke_callouts_for_self
soft_intr_handler
external_interrupt
bubbleup+0x880

DESCRIPTION: The VVR primary is frozen, to send an IBC, for a given timeout value. If the RVG is deleted before the unfreeze is done or the timeout expires, it can cause a panic. Due to a bug in the code, the freeze timer is not cleared during RVG deletion. When the freeze timer expires, the callback routine is called and accesses the RVG information; if the RVG has been deleted, accessing it causes the panic.

RESOLUTION: To fix this issue, check for the IBC freeze timer while deleting the RVG and unset it.

* INCIDENT NO:2643156    TRACKING ID:2610877

SYMPTOM: "vxdg -g <dg> set activation=<mode>" might hang due to a bug in the activation code path, when a memory allocation fails in the kernel.

DESCRIPTION: The vxdg activation command is used to set read-write permission at the diskgroup level on each node. While running this command, if there is a memory allocation failure in the VxVM kernel path, the command can hang due to a bug in this code path. If this command hangs, it also ends up blocking most VxVM commands.

RESOLUTION: Code changes are made in the VxVM kernel code path to handle memory allocation failure correctly, retrying the memory allocation until it succeeds.

* INCIDENT NO:2643157    TRACKING ID:2613425

SYMPTOM: When encapsulating a non-active boot disk using the vxdiskadm utility, the user is prompted to set the disk format. The prompt displays the default format as cdsdisk, when it should display sliced:

Enter the desired format [cdsdisk,sliced,q,?] (default: cdsdisk)

DESCRIPTION: The vxdiskadm utility's encapsulation operation prompted the user to enter the desired disk format for a non-active boot disk and displayed the wrong default format. This prompt is unnecessary: the utility should set the format to sliced, because the disk has a root file system partition.

RESOLUTION: The vxdiskadm utility's encapsulation operation has been updated to automatically set the disk format to sliced when the disk contains a root partition. If it is not the active boot disk, the utility prompts the user before proceeding with the operation:

VxVM vxencap INFO V-5-2-5850 Disk 3pardata0_1418 contains a root file system partition, but is not the active boot disk. This setup is not supported by rootability. If you continue with the encapsulation procedure, the root file system partitions will be removed and converted to volumes, but the disk will no longer be bootable. Do you wish to continue [y,n,q,?] (default: n)

* INCIDENT NO:2644185    TRACKING ID:2649958

SYMPTOM: vxdmpadm dumped core with the following stack:

#0 0x40bd750:1 in display_dmpnodes_of_redundancy+0x661 ()
#1 0x4092e10:0 in do_getdmpnode+0x400 ()
#2 0x40db040:0 in main+0x1980 ()

DESCRIPTION: The core dump occurs due to a NULL pointer dereference, and only occurs if the DMP database is in an inconsistent state.

RESOLUTION: Changes are added in the VxVM code to avoid the NULL pointer dereference in the concerned code path.

* INCIDENT NO:2660157    TRACKING ID:2575581

SYMPTOM: The vxtune -r option prints incorrect tunable values:

# vxtune vol_rvio_maxpool_sz | awk '{print $1"\t"$2}'
Tunable               Current Value
--------------------- ---------------
vol_rvio_maxpool_sz   1048704

# vxtune -r vol_rvio_maxpool_sz | awk '{print $1"\t"$2}'
Tunable               Current Value
--------------------- -------------
vol_rvio_maxpool_sz   1048576

DESCRIPTION: The vxtune '-r' option, which is used to print tunable values in raw bytes, displays an incorrect value in bytes for some of the tunables.

RESOLUTION: vxtune behavior is rectified to print the correct value in bytes for all possible tunable values.

* INCIDENT NO:2666174    TRACKING ID:2666163

SYMPTOM: A small memory leak may be seen in vxconfigd, the VxVM configuration daemon, when a Serial Split Brain (SSB) error is detected during the import process.

DESCRIPTION: The leak may occur when an SSB error is detected during the import process: when the SSB error is returned from a function, a dynamically allocated memory area in that function is not freed. SSB detection is a VxVM feature where VxVM detects whether the configuration copy in the disk private region has unexpectedly become stale. A typical SSB error case is a disk group imported on different systems at the same time, where configuration copy updates on both systems result in inconsistent copies. VxVM cannot identify which configuration copy is most up to date in this situation. As a result, VxVM may detect the SSB error on the next import and show the details through a CLI message.

RESOLUTION: Code changes are made to avoid the memory leak, and a small message fix has been done.

* INCIDENT NO:2680605    TRACKING ID:2680604

SYMPTOM: vxconfigbackupd(1M) does not work correctly with the NUM_BK value set in the file /etc/vx/cbr/bk_config: more diskgroup configuration backup copies are kept than the number set in NUM_BK.

DESCRIPTION: The script gets the number of backup copies currently available. If this is greater than the required number of backup copies (as set by NUM_BK), the script removes the old backup copies. The script tries to get the old backup copies through the ls(1) command, but the ls(1) command gets executed in the "/" directory rather than the directory where the backup copies are stored. So the list of directories to be removed is never populated, old backup copies are not removed, and backup copies keep accumulating.

RESOLUTION: The script has been modified to get the old backup copies from the directory where the backup copies are stored, and to remove them as per the value of NUM_BK.

* INCIDENT NO:2682534    TRACKING ID:2657797

SYMPTOM: Starting a RAID5 volume fails when one of the subdisks in a RAID5 column starts at an offset greater than 1TB. Example:

# vxvol -f -g dg1 -o delayrecover start vol1
VxVM vxvol ERROR V-5-1-10128 Unexpected kernel error in configuration update

DESCRIPTION: VxVM uses an integer variable to store the starting block offset of a subdisk in a RAID5 column. This overflows when a subdisk is located at an offset greater than 2147483647 blocks (1TB), and results in failure to start the volume. Refer to "sdaj" in the following example:

v  RaidVol      -          DETACHED NEEDSYNC 64459747584 RAID   -     raid5
pl RaidVol-01   RaidVol    ENABLED  ACTIVE   64459747584 RAID   4/128 RW
[..]
SD NAME            PLEX       DISK         DISKOFFS LENGTH     [COL/]OFF    DEVICE MODE
sd DiskGroup101-01 RaidVol-01 DiskGroup101 0        1953325744 0/0          sdaa   ENA
sd DiskGroup106-01 RaidVol-01 DiskGroup106 0        1953325744 0/1953325744 sdaf   ENA
sd DiskGroup110-01 RaidVol-01 DiskGroup110 0        1953325744 0/3906651488 sdaj   ENA

RESOLUTION: The VxVM code is modified to handle integer overflow conditions for RAID5 volumes.
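The overflow is easy to reproduce with the offsets from the example above. The 32-bit and 64-bit fields below are illustrative; VxVM's actual structure fields are not shown in this document:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Column offset of the third subdisk ("sdaj") in blocks:
     * 2 * 1953325744 = 3906651488, which exceeds INT32_MAX (2147483647). */
    uint64_t col_off = 2ULL * 1953325744ULL;

    int32_t narrow = (int32_t)col_off;  /* a 32-bit field, as in the old code */
    int64_t wide   = (int64_t)col_off;  /* a 64-bit field, as after the fix   */

    printf("actual offset : %" PRId64 " blocks\n", wide);
    printf("32-bit field  : %" PRId32 " (wrapped around)\n", narrow);
    return 0;
}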
* INCIDENT NO:2689104    TRACKING ID:2674465

SYMPTOM: Data corruption is observed when DMP node names are changed by the following commands for DMP devices that are controlled by a third-party multipathing driver (e.g. MPXIO and PowerPath):

# vxddladm [-c] assign names
# vxddladm assign names file=<file>
# vxddladm set namingscheme=<scheme>

DESCRIPTION: The above commands, when executed, re-assign names to all devices. Accordingly, the in-core DMP database should be updated for each device to map the new device name to the appropriate device number. Due to a bug in the code, the mapping of names to device numbers was not done appropriately, which resulted in subsequent I/Os going to a wrong device, thus leading to data corruption.

RESOLUTION: The DMP routines responsible for mapping the names to the right device numbers are modified to fix this corruption problem.

INCIDENTS FROM OLD PATCHES:
---------------------------
NONE