vm-sol10_sparc-Patch-6.2.1.500

 Basic information
Release type: Patch
Release date: 2017-10-20
OS update support: None
Technote: None
Documentation: None
Popularity: 4784 viewed
Download size: 57.15 MB
Checksum: 3263918654

 Applies to one or more of the following products:
Dynamic Multi-Pathing 6.2 On Solaris 10 SPARC
Storage Foundation 6.2 On Solaris 10 SPARC
Storage Foundation Cluster File System 6.2 On Solaris 10 SPARC
Storage Foundation for Oracle RAC 6.2 On Solaris 10 SPARC
Storage Foundation for Sybase ASE CE 6.2 On Solaris 10 SPARC
Storage Foundation HA 6.2 On Solaris 10 SPARC
Volume Manager 6.2 On Solaris 10 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches: Release date
vm-sol10_sparc-Patch-6.2.1.400 (obsolete) 2017-10-10
vm-sol10_sparc-Patch-6.2.1.300 (obsolete) 2017-10-10

This patch requires: Release date
sfha-sol10_sparc-MR-6.2.1 2015-04-24

 Fixes the following incidents:
3802857, 3803497, 3812192, 3847745, 3850890, 3851117, 3852148, 3854788, 3863971, 3871040, 3874737, 3875933, 3877637, 3879334, 3880573, 3881334, 3881335, 3889284, 3889850, 3890666, 3893134, 3893950, 3894783, 3897764, 3898129, 3898169, 3898296, 3902626, 3903647, 3904008, 3904017, 3904790, 3904796, 3904797, 3904800, 3904801, 3904802, 3904804, 3904805, 3904806, 3904807, 3904810, 3904811, 3904819, 3904822, 3904824, 3904825, 3904833, 3904834, 3904840, 3904851, 3904858, 3904859, 3904861, 3904863, 3904864, 3905471, 3906251, 3907017, 3907593, 3913126, 3915780, 3915786, 3917323, 3920545, 3922253, 3922254, 3922255, 3922256, 3922257, 3922258, 3927482, 3931027, 3931028, 3931040

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
               * * * Symantec Volume Manager 6.2.1 * * *
                         * * * Patch 500 * * *
                         Patch Date: 2017-10-10


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Volume Manager 6.2.1 Patch 500


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 10 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Dynamic Multi-Pathing 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2
   * Symantec Volume Manager 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 151232-05
* 3920545 (3795739) In a split brain scenario, cluster formation takes a very long time.
* 3922253 (3919642) IO hang after switching log owner because of IOs not completely quiesced 
during log owner change.
* 3922254 (3919640) IO hang along with vxconfigd hang on master node because of metadata request 
SIO(Staged IO) hogging CPU.
* 3922255 (3919644) IO hang when switching log owner because of stale flags in RV(Replication 
Volume) kernel structure.
* 3922256 (3919641) IO hang when pausing rlink because of deadlock situation.
* 3922257 (3919645) IO hang when switching log owner because of stale information in 
RV(Replication Volume) kernel structure.
* 3922258 (3919643) IO hang when switching log owner because of stale information in 
RV(Replication Volume) kernel structure.
* 3927482 (3895950) vxconfigd hang observed due to accessing a stale/uninitialized lock.
* 3931027 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED on the device instead of using
exclude/include commands
* 3931028 (3918356) zpools are imported automatically when DMP(Dynamic Multipathing) native support is set to on which may lead to zpool corruption.
* 3931040 (3893150) VxDMP vxdmpadm native ls command sometimes doesn't report imported disks' 
pool name
Patch ID: 151232-04
* 3913126 (3910432) When a mirror volume is created two log plexes are created by default.
* 3915780 (3912672) "vxddladm assign names" causes ASM disks' user-group ownership/permissions loss
affecting Oracle databases on system.
* 3915786 (3890602) The OS cfgadm command hangs after reboot when hundreds of devices are under 
DMP's (Dynamic Multi-Pathing) control.
* 3917323 (3917786) Storage of cold data on dedicated SAN storage spaces increases storage cost 
and maintenance. Move cold data from local storage to cloud storage.
Patch ID: 151232-03
* 3802857 (3726110) On systems with high number of CPUs, Dynamic Multi-Pathing (DMP) devices may perform considerably slower than OS device paths.
* 3803497 (3802750) VxVM (Veritas Volume Manager) volume I/O-shipping functionality is not disabled even after the user issues the correct command to disable it.
* 3812192 (3764326) VxDMP(Veritas Dynamic Multi-Pathing) repeatedly reports "failed to get devid".
* 3847745 (3899198) VxDMP (Veritas Dynamic MultiPathing)  causes system panic 
after a shutdown/reboot.
* 3850890 (3603792) The first boot after live upgrade to new version of Solaris 11 and VxVM 
(Veritas Volume Manager) takes long time.
* 3851117 (3662392) In the Cluster Volume Manager (CVM) environment, if I/Os are getting executed 
on slave node, corruption can happen when the vxdisk resize(1M) command is 
executing on the master node.
* 3852148 (3852146) Shared DiskGroup (DG) fails to import when the "-c" and "-o noreonline" 
options are specified together.
* 3854788 (3783356) After Dynamic Multi-Pathing (DMP) module fails to load, dmp_idle_vector is not NULL.
* 3863971 (3736502) Memory leakage is found when transaction aborts.
* 3871040 (3868444) Disk header timestamp is updated even if the disk group(DG) import fails.
* 3874737 (3874387) Disk header information is not logged to the syslog 
sometimes even if the disk is missing and dg import fails.
* 3875933 (3737585) "Uncorrectable write error" or panic with IOHINT in VVR (Veritas Volume 
Replicator) environment
* 3877637 (3878030) Enhance VxVM DR tool to clean up OS and VxDMP device trees without user 
interaction.
* 3879334 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3880573 (3886153) vradmind daemon core dump occurs in a VVR primary-primary configuration 
because of assert() failure.
* 3881334 (3864063) Application I/O hangs because of a race between the Master Pause SIO (Staging
I/O) and the Error Handler SIO.
* 3881335 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3889284 (3878153) VVR 'vradmind' daemon core dump.
* 3889850 (3878911) QLogic driver returns an error due to Incorrect aiusize in FC header
* 3890666 (3882326) vxconfigd core dump when slice of a device is exported from control domain to LDOM (Logical Domain)
* 3893134 (3864318) Memory consumption keeps increasing when reading/writing data against a VxVM 
volume with a big block size.
* 3893950 (3841242) Use of deprecated APIs provided by Oracle, may result in a 
system hang.
* 3894783 (3628743) New BE takes too much time to startup during live upgrade on Solaris 11.2
* 3897764 (3741003) After removing storage from one of multiple plexes in a mirrored DCO (Data 
Change Object) volume, the entire DCO volume is detached and the DCO object is marked with the 
BADLOG flag because of a missing flag reset.
* 3898129 (3790136) File system hang observed due to IOs hung in Dirty Region Logging (DRL).
* 3898169 (3740730) While creating volume using vxassist CLI, dco log volume length specified at
command line was not getting honored.
* 3898296 (3767531) In Layered volume layout with FSS configuration, when few 
of the FSS_Hosts are rebooted, Full resync is happening for non-affected disks 
on master.
* 3902626 (3795739) In a split brain scenario, cluster formation takes a very long time.
* 3903647 (3868934) System panic happens while deactivating the SIO (staging IO).
* 3904008 (3856146) On the latest Solaris SPARC 11.2 SRUs and Solaris SPARC 11.3, the system 
panics during reboot and fails to come up after turning off dmp_native_support.
* 3904017 (3853151) I/O error occurs when vxrootadm join is triggered.
* 3904790 (3795788) Performance degrades when many application sessions open the same data file on the VxVM volume.
* 3904796 (3853049) The display of stats is delayed beyond the set interval for vxstat, and 
multiple sessions of vxstat impact the IO performance.
* 3904797 (3857120) Commands like vxdg deport which try to close a VxVM volume might hang.
* 3904800 (3860503) Poor performance of vxassist mirroring is observed on some high end servers.
* 3904801 (3686698) vxconfigd was getting hung due to deadlock between two threads
* 3904802 (3721565) vxconfigd hang is seen.
* 3904804 (3486861) Primary node panics when storage is removed while replication is going on with heavy 
IOs.
* 3904805 (3788644) Reuse raw device number when checking for available raw devices.
* 3904806 (3807879) User data corrupts because of the writing of the backup EFT GPT disk label 
during the VxVM disk-group flush operation.
* 3904807 (3867145) When VVR SRL occupancy exceeds 90%, the occupancy is reported only in 10 
percent increments.
* 3904810 (3871750) Parallel VxVM vxstat commands report abnormal disk IO statistic data.
* 3904811 (3875563) While dumping the disk header information, human readable
timestamp was not converted correctly from corresponding epoch time.
* 3904819 (3811946) When invoking "vxsnap make" command with cachesize option to create space optimized snapshot, the command succeeds but a plex I/O error message is displayed in syslog.
* 3904822 (3755209) The Veritas Dynamic Multi-pathing(VxDMP) device configured in Solaris Logical 
DOMains(LDOM) guest is disabled when an active controller of an ALUA array is 
failed.
* 3904824 (3795622) With Dynamic Multipathing (DMP) Native Support enabled, Logical Volume Manager
(LVM) global_filter is not updated properly in lvm.conf file.
* 3904825 (3859009) global_filter of lvm.conf is not updated due to some paths of LVM dmpnode are 
reused during DDL(Device Discovery Layer) discovery cycle.
* 3904833 (3729078) VVR(Veritas Volume Replication) secondary site panic occurs during patch 
installation because of flag overlap issue.
* 3904834 (3819670) When smartmove with the 'vxevac' command is run in the background by hitting the 'ctrl-z' key and the 'bg' command, the execution of 'vxevac' is terminated abruptly.
* 3904840 (3769927) "vxdmpadm settune dmp_native_support=off" command fails on Solaris.
* 3904851 (3804214) VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 on the path being enabled.
* 3904858 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3904859 (3901633) vxrsync reports error during rvg sync because of incorrect volume end offset 
calculation.
* 3904861 (3904538) IO hang happens during slave node leave or master node switch because of racing 
between RV(Replicate Volume) recovery SIO(Staged IO) and new coming IOs.
* 3904863 (3851632) Some VxVM commands fail when you use the localized messages.
* 3904864 (3769303) System panics when the Cluster Volume Manager (CVM) group is brought online.
* 3905471 (3868533) IO hang happens because of a deadlock situation.
* 3906251 (3806909) Due to a modification in licensing, the DMP keyless license was not working 
for STANDALONE DMP.
* 3907017 (3877571) Disk header is updated even if the dg import operation fails
* 3907593 (3660869) Enhance the Dirty region logging (DRL) dirty-ahead logging for sequential write 
workloads


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 151232-05

* 3920545 (Tracking ID: 3795739)

SYMPTOM:
In a split brain scenario, cluster formation takes a very long time.

DESCRIPTION:
In a split brain scenario, the surviving nodes in the cluster try to preempt the keys of the nodes leaving the cluster. If the keys have already been preempted by one of the surviving nodes, the other surviving nodes receive a Unit Attention. DMP (Dynamic Multipathing) then retries the preempt command after a delay of 1 second if it receives a Unit Attention. Cluster formation cannot complete until the PGR keys of all the leaving nodes are removed from all the disks. If the number of disks is very large, preempting the keys takes a lot of time, leading to a very long cluster formation time.

RESOLUTION:
The code is modified to avoid adding delay for first couple of retries when reading PGR keys. This allows faster cluster formation with arrays that clear the Unit Attention condition sooner.

* 3922253 (Tracking ID: 3919642)

SYMPTOM:
IO hang may happen after log owner switch.

DESCRIPTION:
IOs are only partly quiesced during a log owner change. As a result, the log-end 
processing of some updates is lost, and subsequent IO hangs.

RESOLUTION:
Code changes have been made to quiesce IO completely during the log owner 
switch.

* 3922254 (Tracking ID: 3919640)

SYMPTOM:
When the log owner is on a slave node and SRL overflow is triggered, IO hang and 
vxconfigd hang may happen if there is heavy IO load from the master node.

DESCRIPTION:
After SRL (Serialized Replicate Log) overflow, the log owner on the slave node sends 
a DCM (Data Change Map) activate request to the master node. The master node processes 
the request when IO daemons are idle, but all IO daemons are busy sending and retrying 
metadata requests to the log owner node. Because the log owner is migrating to DCM 
mode, the requests cannot be processed, hence the hang.

RESOLUTION:
Code changes have been made to fix the issue.

* 3922255 (Tracking ID: 3919644)

SYMPTOM:
When the log owner is switched while the rlink is in DCM (Data Change Map) mode, IO 
may hang when the log owner is switched back.

DESCRIPTION:
If the log owner switch happens in DCM mode, the reset of some RV kernel flags is 
missed. When the log owner is switched back after DCM replay, the resulting data 
inconsistency activates DCM again and IO may hang.

RESOLUTION:
Code changes have been made to clear stale information during the log owner 
switch.

* 3922256 (Tracking ID: 3919641)

SYMPTOM:
When the log owner is configured on a slave node, pausing the rlink from the master 
node while IO is issued from the log owner node can cause IO to hang on the log owner node.

DESCRIPTION:
A deadlock may happen between the Master Pause SIO (Staged IO) and the Error Handler 
SIO. The Master Pause SIO needs to disconnect the rlink, which serializes the RV 
(Replicate Volume) by invoking the Error Handler SIO. The serialized RV prevents the 
Master Pause SIO from restarting, while the Error Handler SIO depends on the Master 
Pause SIO completing, so the RV cannot get out of the serialized state.

RESOLUTION:
Code changes have been made to fix the deadlock issue.

* 3922257 (Tracking ID: 3919645)

SYMPTOM:
When the log owner is switched while the rlink is in DCM (Data Change Map) mode, IO 
may hang when the log owner is switched back.

DESCRIPTION:
If the log owner switch happens in DCM mode, the reset of some RV kernel information 
is missed. When the log owner is switched back after DCM replay, the resulting data 
inconsistency activates DCM again and IO may hang.

RESOLUTION:
Code changes have been made to clear stale information during the log owner 
switch.

* 3922258 (Tracking ID: 3919643)

SYMPTOM:
When the log owner is switched while the rlink is in the SRL (Serialized Replicate Log) 
flush state, IO may hang when the log owner is switched back.

DESCRIPTION:
The SRL flush start/end positions are stored in the RV kernel structure, and resetting 
these positions is missed during the log owner switch. When the log owner is switched 
back, this node continues the SRL flush operation. Because the SRL flush has already 
been completed by another node and the SRL may contain new updates, the data read back 
from the SRL may contain invalid updates, so the SRL flush cannot continue, hence the 
hang.

RESOLUTION:
Code changes have been made to clear stale information during the log owner 
switch.

* 3927482 (Tracking ID: 3895950)

SYMPTOM:
vxconfigd hang may be observed suddenly. The following stack is seen as part of 
the threadlist:
slock()
.disable_lock()
volopen_isopen_cluster()
vol_get_dev()
volconfig_ioctl()
volsioctl_real()
volsioctl()
vols_ioctl()
rdevioctl()
spec_ioctl()
vnop_ioctl()
vno_ioctl()
common_ioctl(??, ??, ??, ??)

DESCRIPTION:
Some of the critical structures in the code are protected with a lock to avoid 
simultaneous modification. A particular lock structure gets copied to local stack 
memory. The structure might hold information about the state of the lock, and at 
the time of the copy the lock structure might be in an intermediate state. When a 
function tries to access such a copied lock structure, the result can be a panic 
or a hang, since the lock structure might be in an unknown state.

RESOLUTION:
The code is modified so that the lock is not acquired while accessing the local 
copy of the structure; since nothing else modifies the local copy, acquiring the 
lock is not required.

* 3931027 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a dmpnode.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the dmpnode node, a flag PGR_FLAG_NOTSUPPORTED is set on the dmpnode.
Further PGR operations check this flag and issue PGR commands only if this flag is
NOT set.
This flag remains set even if the hardware is changed so as to support PGR.

RESOLUTION:
A new command (namely enablepr) is provided in the vxdmppr utility to clear this 
flag on the specified dmpnode.

* 3931028 (Tracking ID: 3918356)

SYMPTOM:
zpools are imported automatically when DMP native support is set to on which may lead to zpool corruption.

DESCRIPTION:
When DMP native support is set to on all zpools are imported using DMP devices so that when the import happens for the same zpool again it is 
automatically imported using DMP device. In clustered environment if the import of the same zpool is triggered on two different nodes at the 
same time it can lead to zpool corruption. A way needs to be provided so that zpools are not imported.

RESOLUTION:
Changes are made to provide a way for the customer to prevent the automatic import of zpools if required: set the variable 
auto_import_exported_pools to off in the file /var/adm/vx/native_input as shown below:
bash:~# cat /var/adm/vx/native_input
auto_import_exported_pools=off

* 3931040 (Tracking ID: 3893150)

SYMPTOM:
The VxDMP (Veritas Dynamic Multi-Pathing) 'vxdmpadm native ls' command sometimes 
doesn't report an imported disk's pool name.

DESCRIPTION:
When a Solaris zpool is imported with extra options such as -d or -R, the paths shown 
in 'zpool status <pool name>' can be full disk paths. The 'vxdmpadm native ls' 
command did not handle this situation and hence failed to report the pool name.

RESOLUTION:
Code changes have been made to correctly handle full disk paths and retrieve the 
pool name.
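
Notes:
An illustrative sequence that exercises this case is shown below; the pool name
and the import directory are placeholders:
# zpool import -d /dev/vx/dmp <poolname>
# vxdmpadm native ls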

Patch ID: 151232-04

* 3913126 (Tracking ID: 3910432)

SYMPTOM:
When a mirror volume is created two log plexes are created by default.

DESCRIPTION:
For mirror/RAID-5 volumes a log plex is required. Due to a bug in the code, two log 
plex records were being created during volume creation.

RESOLUTION:
Code changes are done to create a single log plex by default.

* 3915780 (Tracking ID: 3912672)

SYMPTOM:
"vxddladm assign names" command results in the ASM disks losing their
ownership/permission settings which may affect the Oracle databases

DESCRIPTION:
The command "vxddladm assign names" calls a function which creates raw and block
device nodes and set the user-group ownership/permissions as per the mode stored
in in-memory record structure. The in-memory records are not getting created (it
is NULL) before going to that function. Hence no setting of permissions after
device nodes' creation.

RESOLUTION:
Code changes are done to make sure that the in-memory records of the user-group
ownership/permissions of each dmpnode from the vxdmprawdev file get created before
the function call which creates device nodes and sets permissions on them.

* 3915786 (Tracking ID: 3890602)

SYMPTOM:
The OS cfgadm command hangs after reboot when hundreds of devices are under 
DMP's control.

DESCRIPTION:
DMP generates the same entry for each of the 8 partitions. The large number of 
vxdmp properties that devfsadmd has to touch causes anything touching devlinks to 
temporarily hang behind it.

RESOLUTION:
Code changes have been done to reduce the properties count by a factor of 8.

* 3917323 (Tracking ID: 3917786)

SYMPTOM:
Storage of cold data on dedicated SAN storage spaces increases storage cost 
and maintenance.

DESCRIPTION:
The local storage capacity is consumed by cold or legacy files which are not 
consumed or processed frequently. These files occupy dedicated SAN storage 
space, which is expensive. Moving such files to public or private S3 cloud 
storage services is a better cost-effective solution. Additionally, cloud 
storage is elastic allowing varying service levels based on changing needs. 
Operational charges apply for managing objects in buckets for public cloud 
services using the Storage Transfer Service.

RESOLUTION:
You can now migrate or move legacy data from local SAN storage to a target 
private or public cloud.

Patch ID: 151232-03

* 3802857 (Tracking ID: 3726110)

SYMPTOM:
On systems with a high number of CPUs, DMP devices may perform considerably slower than OS device paths.

DESCRIPTION:
In high CPU configuration, I/O statistics related functionality in DMP takes more CPU time because DMP statistics are collected on per CPU basis. This stat collection happens in DMP I/O code path hence it reduces the I/O performance. Because of this, DMP devices perform slower than OS device paths.

RESOLUTION:
The code is modified to remove some of the stats collection functionality from the DMP I/O code path. Along with this, the following tunables need to be turned off: 
1. Turn off idle LUN probing: 
# vxdmpadm settune dmp_probe_idle_lun=off
2. Turn off the statistics gathering functionality: 
# vxdmpadm iostat stop

Notes: 
1. Apply this patch if the system configuration has a large number of CPUs and DMP performs considerably slower than OS device paths. For normal systems this issue is not applicable.

* 3803497 (Tracking ID: 3802750)

SYMPTOM:
Once VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned on, it is not getting disabled even after the user issues the correct command to disable it.

DESCRIPTION:
VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned off by default. The following two commands can be used to turn it on and off:
	vxdg -g <dgname> set ioship=on
	vxdg -g <dgname> set ioship=off

The command to turn off I/O-shipping is not working as intended because I/O-shipping flags are not reset properly.

RESOLUTION:
The code is modified to correctly reset I/O-shipping flags when the user issues the CLI command.

* 3812192 (Tracking ID: 3764326)

SYMPTOM:
VxDMP repeatedly reports warning messages in system log:
	WARNING: VxVM vxdmp V-5-0-2046 : Failed to get devid for device 
0x70259720
	WARNING: VxVM vxdmp V-5-3-2065 dmp_devno_to_devidstr ldi_get_devid 
failed for devno 0x13800000a60

DESCRIPTION:
Due to a VxDMP code issue, the device path name is inconsistent between creation and deletion, which leaves stale device files under /devices. Because some devices don't support Solaris devid operations, the devid-related functions fail against such devices. VxDMP doesn't skip such devices when creating or removing minor nodes.

RESOLUTION:
The code is modified to address the device path name inconsistency and skip devid manipulation for third party devices.

* 3847745 (Tracking ID: 3899198)

SYMPTOM:
VxDMP causes a system panic after a shutdown or reboot and displays the following 
stack trace:
vpanic
volinfo_ioctl()
volsioctl_real()
ldi_ioctl()
dmp_signal_vold()
dmp_throttle_paths()
dmp_process_stats()
dmp_daemons_loop()
thread_start()

DESCRIPTION:
In a special scenario of system shutdown/reboot, the DMP
(Dynamic MultiPathing) restore daemon tries to call the ioctl functions
in VXIO module which is being unloaded and this causes system panic.

RESOLUTION:
The code is modified to stop the DMP I/O restore daemon 
before system shutdown/reboot.

* 3850890 (Tracking ID: 3603792)

SYMPTOM:
The first boot after a live upgrade to a new version of Solaris 11 and VxVM takes a 
long time because the post-installation step stalls for a long time.

DESCRIPTION:
In Solaris 11, the OS command devlinks, which was used to add /dev entries, stalled 
for a long time in the post-installation of VxVM. The OS command devfsadm should be 
used in the post-install script instead.

RESOLUTION:
The code is modified to replace devlinks with devfsadm in the post installation 
process of VxVM.

* 3851117 (Tracking ID: 3662392)

SYMPTOM:
In the CVM environment, if I/Os are getting executed on slave node, corruption 
can happen when the vxdisk resize(1M) command is executing on the master 
node.

DESCRIPTION:
During the first stage of the resize transaction, the master node re-adjusts the 
disk offsets and the public/private partition device numbers.
On a slave node, the public/private partition device numbers are not adjusted 
properly. Because of this, the partition starting offset is added twice, which 
causes the corruption. The window during which the public/private partition device 
numbers are adjusted is small; corruption is observed only if I/O occurs during 
this window. After the resize operation completes its execution, no further 
corruption will happen.

RESOLUTION:
The code has been changed to add partition starting offset properly to an I/O 
on slave node during execution of a resize command.
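
Notes:
For reference, a representative resize invocation (run on the CVM master while
application IO continues on a slave node) looks like the following; the disk
group and disk names are placeholders:
# vxdisk -g <dgname> resize <diskname> length=<new-length>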

* 3852148 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.
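
Notes:
An illustrative shared import that uses both options together is shown below;
the option ordering is representative and the disk group name is a placeholder:
# vxdg -s -o noreonline -c import <dgname>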

* 3854788 (Tracking ID: 3783356)

SYMPTOM:
After DMP module fails to load, dmp_idle_vector is not NULL.

DESCRIPTION:
After a DMP module load failure, DMP resources are not cleared from system memory, so some of the 
resources retain non-NULL values. When the system retries the load, it frees this invalid data, leading to a system panic with the error 
message BAD FREE, because the data being freed is not valid at that point.

RESOLUTION:
The code is modified to clear up the DMP resources when module failure happens.

* 3863971 (Tracking ID: 3736502)

SYMPTOM:
When FMR is configured in VVR environment, 'vxsnap refresh' fails with below 
error message:
"VxVM VVR vxsnap ERROR V-5-1-10128 DCO experienced IO errors during the
operation. Re-run the operation after ensuring that DCO is accessible".
Also, multiple messages of connection/disconnection of replication 
link(rlink) are seen.

DESCRIPTION:
Inherently triggered rlink connection/disconnection causes transaction retries. 
During a transaction, memory is allocated for Data Change Object (DCO) maps and is 
not cleared when the transaction aborts.
This leads to a memory leak and eventually to exhaustion of maps.

RESOLUTION:
The fix has been added to clear the allocated DCO maps when transaction 
aborts.

* 3871040 (Tracking ID: 3868444)

SYMPTOM:
Disk header timestamp is updated even if the disk group import fails.

DESCRIPTION:
During a dg import operation, disk header timestamps are updated as part of the join operation. This makes it 
difficult for support to understand which disk has the latest config copy if the dg import fails and a decision 
has to be made on whether a forced dg import is safe.

RESOLUTION:
The code is modified to dump the old disk header timestamp and sequence number to the syslog, which can be 
referred to when deciding whether a forced dg import would be safe.

* 3874737 (Tracking ID: 3874387)

SYMPTOM:
Disk header information is not logged to the syslog sometimes
even if the disk is missing and dg import fails.

DESCRIPTION:
In scenarios where the disk has a config copy enabled and has an active disk
record, the disk header information was not getting logged even though the disk
is missing and the dg import subsequently fails.

RESOLUTION:
The code is modified to dump the disk header information even if the disk record
is active and attached to the disk group.

* 3875933 (Tracking ID: 3737585)

SYMPTOM:
Customer encounters an "Uncorrectable write error" or the below panic in a 
VVR environment:

[000D50D0]xmfree+000050 
[051221A0]vol_tbmemfree+0000B0
[05122294]vol_memfreesio_start+00001C
[05131324]voliod_iohandle+000050
[05131740]voliod_loop+0002D0
[05126438]vol_kernel_thread_init+000024

DESCRIPTION:
The IOHINT structure allocated by VxFS is also freed by VxFS after the IO is done 
from VxVM. IOs to VxVM with VVR need 2 phases, the SRL (Serial Replication Log) 
write and the data volume write; VxFS gets IO-done after the SRL write and doesn't 
wait for the data volume write to complete. So if the data volume write starts or 
completes after VxFS frees the IOHINT, it may cause a write IO error or a 
double-free panic due to freeing memory based on the corrupted IOHINT info.

RESOLUTION:
Code changes were done to clone the IOHINT structure before writing to data 
volume.

* 3877637 (Tracking ID: 3878030)

SYMPTOM:
Enhance VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool to 
clean up OS and VxDMP(Veritas Dynamic Multi-Pathing) device trees without 
user interaction.

DESCRIPTION:
When users add or remove LUNs, stale entries in the OS or VxDMP device trees can 
prevent VxVM from discovering the changed LUNs correctly. It can even cause the 
VxVM vxconfigd process to core dump under certain conditions, and users have to 
reboot the system to restart vxconfigd.
VxVM has a DR tool to help users add or remove LUNs properly, but it requires 
user input during operations.

RESOLUTION:
An enhancement has been made to the VxVM DR tool. It accepts the '-o refresh' 
option to clean up the OS and VxDMP device trees without user interaction.

* 3879334 (Tracking ID: 3879324)

SYMPTOM:
VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool fails to 
handle busy device problem while LUNs are removed from OS

DESCRIPTION:
OS devices may still be busy after removing them from the OS; this fails the 
'luxadm -e offline <disk>' operation and leaves stale entries in the 'vxdisk list' 
output like:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3880573 (Tracking ID: 3886153)

SYMPTOM:
In a VVR primary-primary configuration, if the 'vrstat' command is running, a 
vradmind core dump may occur with a stack like the one below:

__assert_c99 
StatsSession::sessionInitReq 
StatsSession::processOpReq 
StatsSession::processOpMsgs  
RDS::processStatsOpMsg 
DBMgr::processStatsOpMsg  
process_message

DESCRIPTION:
The vrstat command initiates a StatsSession which needs to send an initialization 
request to the secondary. On the secondary there is an assert() to ensure that it 
is the secondary that processes the request. In a primary-primary configuration 
this leads to the core dump.

RESOLUTION:
The code changes have been made to fix the issue by returning a failure to the 
StatsSession initiator.

* 3881334 (Tracking ID: 3864063)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in VVR
(Veritas Volume Replicator) kernel are not cleared because of a race between the
Master Pause SIO and the Error Handler SIO. This causes the RU (Replication
Update) SIO to fail to proceed, which leads to I/O hang.

RESOLUTION:
The code is modified to handle the race condition.

* 3881335 (Tracking ID: 3867236)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
The flag VOL_RIFLAG_REQUEST_PENDING in the VVR (Veritas Volume Replicator) kernel is 
not cleared because of a race between the Master Pause SIO and the RVWRITE1 SIO, 
which causes the RU (Replication Update) SIO to fail to proceed, thereby causing the IO hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3889284 (Tracking ID: 3878153)

SYMPTOM:
The VVR (Veritas Volume Replicator) 'vradmind' daemon core dumps with the following stack:

#0  __kernel_vsyscall ()
#1  raise () from /lib/libc.so.6
#2  abort () from /lib/libc.so.6
#3  __libc_message () from /lib/libc.so.6
#4  malloc_printerr () from /lib/libc.so.6
#5  _int_free () from /lib/libc.so.6
#6  free () from /lib/libc.so.6
#7  operator delete(void*) () from /usr/lib/libstdc++.so.6
#8  operator delete[](void*) () from /usr/lib/libstdc++.so.6
#9  inIpmHandle::~IpmHandle (this=0x838a1d8,
    __in_chrg=<optimized out>) at Ipm.C:2946
#10 IpmHandle::events (handlesp=0x838ee80, vlistsp=0x838e5b0,
    ms=100) at Ipm.C:644
#11 main (argc=1, argv=0xffffd3d4) at srvmd.C:703

DESCRIPTION:
Under certain circumstances the 'vradmind' daemon may core dump while freeing a 
variable allocated on the stack.

RESOLUTION:
Code change has been done to address the issue.

* 3889850 (Tracking ID: 3878911)

SYMPTOM:
The QLogic driver returns the following error due to an incorrect aiusize in the FC header:
FC_ELS_MALFORMED, cnt=c60h, size=314h

DESCRIPTION:
When creating the CT pass-through command to be sent, the ct_aiusize specified in 
the request header does not conform to the FT standard. Hence, during the sanity 
check of the FT header in the OS layer, an error is reported and get_topology() fails.

RESOLUTION:
Code changes have been done so that ct_aiusize is in compliance with FT standard.

* 3890666 (Tracking ID: 3882326)

SYMPTOM:
vxconfigd core dump when slice of a device is exported from control domain to LDOM (Logical Domain) with the following stack:
ddl_process_failed_node()
ddl_migration_devlist_removed()
ddl_reconfig_full()
ddl_reconfigure_all()
ddl_find_devices_in_system()
find_devices_in_system()
mode_set()
req_vold_enable()
request_loop()
main()
_start()

DESCRIPTION:
The core dump occurs because of a NULL pointer dereference during reconfiguration. The issue occurs because ldi_get_devid fails for slice devices and works only 
for full devices. Even if ldi_get_devid fails, vxconfigd should not core dump.

RESOLUTION:
Code changes have been done to prevent vxconfigd core dump when the failure happens.

* 3893134 (Tracking ID: 3864318)

SYMPTOM:
Memory consumption keeps increasing when reading/writing data against a VxVM volume
with a big block size.

DESCRIPTION:
If the incoming IO size is too big for a disk to handle, VxVM splits it into smaller 
IOs to move forward. VxVM allocates memory to back up those split IOs. Due to a 
code defect, the allocated space does not get freed when the split IOs are 
completed.

RESOLUTION:
The code is modified to free the VxVM-allocated memory after the split IOs complete.

* 3893950 (Tracking ID: 3841242)

SYMPTOM:
Threads will be hung and the stack will contain any of the following 
functions:
ddi_pathname_to_dev_t()
ddi_find_devinfo()
ddi_install_driver()
devinfo_tree_lock
e_ddi_get_dev_info()
Stack may look like below - 
void genunix:cv_wait
void genunix:ndi_devi_enter
int genunix:devi_config_one
int genunix:ndi_devi_config_one
int genunix:resolve_pathname_noalias
int genunix:resolve_pathname
dev_t genunix:ddi_pathname_to_dev_t
void vxdmp:dmp_setbootdev
int vxdmp:_init
int genunix:modinstall

DESCRIPTION:
Oracle has deprecated the following APIs:
ddi_pathname_to_dev_t()
ddi_find_devinfo()
ddi_install_driver()
devinfo_tree_lock
e_ddi_get_dev_info()
These were used by VxVM (Veritas Volume Manager) and are not thread safe.
If VxVM modules are loaded in parallel with other OS modules while making use of 
these APIs, it may result in a deadlock and a hang could be observed.

RESOLUTION:
The deprecated ddi_x() API calls have been replaced with ldi_x() calls, which are 
thread safe.

* 3894783 (Tracking ID: 3628743)

SYMPTOM:
On Solaris 11.2, the new boot environment takes a long time to start up during a 
live upgrade. A deadlock is seen in ndi_devi_enter() when loading the VxDMP 
driver; the deadlock is caused by VxVM drivers using the Solaris 
ddi_pathname_to_dev_t or ddi_hold_devi_by_path private interfaces.

DESCRIPTION:
The deadlocks are caused by VxVM drivers using the Solaris ddi_pathname_to_dev_t 
or e_ddi_hold_devi_by_path private interfaces; these routines are for Solaris 
internal use only and are not multi-thread safe. Normally this is not a problem 
as the various VxVM drivers don't unload or detach, however there are certain 
conditions under which our _init routines might be called, which can expose this 
deadlock condition.

RESOLUTION:
Code is modified to resolve deadlock.

* 3897764 (Tracking ID: 3741003)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after removing storage from one 
of multiple plexes in a mirrored DCO volume, the entire DCO volume is detached and 
the DCO object is marked with the BADLOG flag.

DESCRIPTION:
When one plex's storage of a mirrored volume is removed, only that plex should be 
detached instead of the entire volume. While a read IO is in progress on the 
failed DCO plex, the locally failed IO gets restarted and shipped to other nodes 
for retry, where it also fails since the storage is removed from the other nodes 
as well. Because a flag reset is missing, the failed IO returns an error, 
resulting in the entire volume being detached and marked with the BADLOG flag 
even though the IO succeeds from an alternate plex.

RESOLUTION:
Code changes are added to handle this case and improve the resiliency of VxVM in 
a partial-storage outage scenario.

* 3898129 (Tracking ID: 3790136)

SYMPTOM:
File system hang can be observed sometimes due to IO's hung in DRL.

DESCRIPTION:
IOs might hang in the DRL of a mirrored volume due to an incorrect calculation of 
the outstanding IOs on the volume and the number of active IOs currently in 
progress on the DRL. The value of the outstanding IOs on the volume can get 
modified incorrectly, which stops IOs on the DRL from progressing further and in 
turn results in a hang.

RESOLUTION:
Code changes have been done to avoid the incorrect modification of the outstanding 
IO count on the volume and prevent the hang.

* 3898169 (Tracking ID: 3740730)

SYMPTOM:
While creating a volume using the vxassist CLI, the dco-log length specified as a
command line parameter was not getting honored.

For example:
bash # vxassist -g <dgname> make <volume-name> <volume-size> logtype=dco
dcolen=<dcolog-length>
VxVM vxassist ERROR V-5-1-16707  Specified dcologlen(<specified dcolog length>)
is
less than minimum dcologlen(17152)

DESCRIPTION:
While creating volume, using dcologlength attribute of dco-volume in vxassist
CLI,
the size of dcolog specified is not correctly parsed in the code, because of
which
it internally compares the size with incorrectly calculated size & throws the
error indicating that size specified isn't sufficient.
So the values in comparison was incorrect. Hence changed the code to compare the
user-specified value passes the minimum-threshold value or not.

RESOLUTION:
The code is changed to fix the issue so that the dcolog volume length specified by
the user in the vxassist CLI is honored.

* 3898296 (Tracking ID: 3767531)

SYMPTOM:
In a layered volume layout with an FSS configuration, when a few of the FSS hosts 
are rebooted, a full resync happens for non-affected disks on the master.

DESCRIPTION:
In a configuration with multiple FSS hosts and a layered volume created on the 
hosts, when the slave nodes are rebooted, a few of the sub-volumes of non-affected 
disks get fully synced on the master.

RESOLUTION:
Code changes have been made to sync only the needed part of the sub-volume.

* 3902626 (Tracking ID: 3795739)

SYMPTOM:
In a split brain scenario, cluster formation takes a very long time.

DESCRIPTION:
In a split brain scenario, the surviving nodes in the cluster try to preempt the keys of the nodes leaving the cluster. If the keys have already been preempted by one of the surviving nodes, the other surviving nodes receive a Unit Attention. DMP (Dynamic Multipathing) then retries the preempt command after a delay of 1 second if it receives a Unit Attention. Cluster formation cannot complete until the PGR keys of all the leaving nodes are removed from all the disks. If the number of disks is very large, preempting the keys takes a lot of time, leading to a very long cluster formation time.

RESOLUTION:
The code is modified to avoid adding delay for first couple of retries when reading PGR keys. This allows faster cluster formation with arrays that clear the Unit Attention condition sooner.

* 3903647 (Tracking ID: 3868934)

SYMPTOM:
System panic with a stack like the one below while deactivating the VVR (VERITAS 
Volume Replicator) batch write SIO:
 
panic_trap+000000 
vol_cmn_err+000194 
vol_rv_inactive+000090 
vol_rv_batch_write_start+001378
voliod_iohandle+000050 
voliod_loop+0002D0
vol_kernel_thread_init+000024

DESCRIPTION:
When VVR does the batch write SIO, if it fails to reserve VVR IO memory, the SIO is 
put on a queue for restart and then deactivated. If the deactivation is blocked for 
some time because the lock cannot be obtained, and during this period the SIO is 
restarted because the IO memory reservation request is satisfied, the SIO gets 
corrupted by being deactivated twice. Hence the system panic.

RESOLUTION:
Code changes have been made to remove the unnecessary SIO deactivation after the 
VVR IO memory reservation fails.

* 3904008 (Tracking ID: 3856146)

SYMPTOM:
Two issues are hit on the latest Solaris SPARC 11.2.8-and-greater SRUs and Solaris 
SPARC 11.3 when dmp_native_support is on. These issues are mentioned below:
1. Turning dmp_native_support on and off requires a reboot. The system panics 
during the reboot done as part of setting dmp_native_support to off.
2. Sometimes the system comes up after the reboot when dmp_native_support is set 
to off. In such a case, a panic is observed when the system is rebooted after 
uninstallation of SF, and it fails to boot up.
The panic string is the same for both issues:
panic[cpu0]/thread=20012000: read_binding_file: /etc/name_to_major file not 
found

DESCRIPTION:
The issue happened because of the /etc/system and /etc/name_to_major files.

As per the discussion with Oracle through SR(3-11640878941), removal of these two 
files from the boot-archive causes this panic: the files /etc/name_to_major and 
/etc/system are included in the SPARC boot_archive of Solaris 11.2.8.4.0 (and 
greater versions) and should not be removed. The system will fail to come up if 
they are removed.

RESOLUTION:
The code has been modified to avoid panic while setting dmp_native_support to 
off.

* 3904017 (Tracking ID: 3853151)

SYMPTOM:
In the Root Disk Encapsulation (RDE) environment, vxrootadm join/split 
operations will cause DMP I/O errors  in syslog as follows:
NOTICE: VxVM vxdmp V-5-0-0 [Error] i/o error occurred (errno=0x5) on dmpnode

DESCRIPTION:
When the split root disk group is joined back using the vxrootadm join dgname 
command, a DMP I/O error message is recorded in the syslog. This occurs because 
ioctls are issued on the disk while partitions are being deleted.

RESOLUTION:
The code is modified to not record the DMP I/O error message in the syslog.

* 3904790 (Tracking ID: 3795788)

SYMPTOM:
Performance degradation is seen when many application sessions open the same data file on Veritas Volume Manager (VxVM) volume.

DESCRIPTION:
This issue occurs because of the lock contention. When many application sessions open the same data file on the VxVM volume,  the exclusive lock is occupied on all CPUs. If there are a lot of CPUs in the system, this process could be quite time- consuming, which leads to performance degradation at the initial start of applications.

RESOLUTION:
The code is modified to change the exclusive lock to the shared lock when  the data file on the volume is open.

* 3904796 (Tracking ID: 3853049)

SYMPTOM:
On a server with a large number of CPUs, the stats output of vxstat is delayed 
beyond the set interval. Also, multiple sessions of vxstat impact the IO 
performance.

DESCRIPTION:
vxstat acquires an exclusive lock on each CPU in order to gather the stats. This 
affects the consolidation and display of stats in an environment with a huge 
number of CPUs and disks; the output of stats for an interval of 1 second can get 
delayed beyond the set interval. Also, the lock acquisition happens in the IO 
path, which affects IO performance due to contention for these locks.

RESOLUTION:
The code is modified to remove the exclusive spin lock.

* 3904797 (Tracking ID: 3857120)

SYMPTOM:
If the volume is shared in a CVM configuration, the following stack traces will
be seen under a vxiod daemon suggesting an attempt to drain I/O. In this case,
CVM halt will be blocked and eventually time out.
The stack trace may appear as:
sleep+0x3f0  
vxvm_delay+0xe0  
volcvm_iodrain_dg+0x150  
volcvmdg_abort_complete+0x200  
volcvm_abort_sio_start+0x140  
voliod_iohandle+0x80

or 

cv_wait+0x3c() 
delay_common+0x6c 
vol_mv_close_check+0x68 
vol_close_device+0x1e4
vxioclose+0x24
spec_close+0x14c
fop_close+0x8c 
closef2+0x11c
closeall+0x3c 
proc_exit+0x46c
exit+8
post_syscall+0x42c
syscall_trap+0x188

Since the vxconfigd would be busy in transaction of trying to close the volume
or drain the IO then all the other threads which send a request to vxconfigd
will hang.

DESCRIPTION:
VxVM maintains an I/O count of the in-progress I/O on the volume. When two
threads from VxVM asynchronously manipulate the I/O count on the volume, the
race between these threads might lead to stale I/O count remaining on the volume
even though the volume has actually completed all I/Os. Since there is an
invalid pending I/O count on the volume due to the race condition, the volume
cannot be closed.

RESOLUTION:
This issue has been fixed in the VxVM code manipulating the I/O count to avoid
the race condition between the two threads.

* 3904800 (Tracking ID: 3860503)

SYMPTOM:
Poor performance of vxassist mirroring is observed compared to using the raw dd
utility to do the mirroring.

DESCRIPTION:
There is huge lock contention on high-end servers with a large number of CPUs,
because the copy of each region needs to obtain some unnecessary CPU locks.

RESOLUTION:
VxVM code has been changed to decrease the lock contention.

* 3904801 (Tracking ID: 3686698)

SYMPTOM:
vxconfigd was getting hung due to deadlock between two threads

DESCRIPTION:
Two threads were waiting for the same lock, causing a deadlock between them, which 
blocks all vx commands.
The untimeout function does not return until the pending callback (set through the 
timeout function) is cancelled or the pending callback has completed its execution 
(if it has already started). Therefore, locks acquired by the callback routine 
should not be held across a call to the untimeout routine, or a deadlock may 
result.

Thread 1: 
    untimeout_generic()   
    untimeout()
    voldio()
    volsioctl_real()
    fop_ioctl()
    ioctl()
    syscall_trap32()
 
Thread 2:
    mutex_vector_enter()
    voldsio_timeout()
    callout_list_expire()
    callout_expire()
    callout_execute()
    taskq_thread()
    thread_start()

RESOLUTION:
Code changes have been made to call untimeout outside the lock 
taken by callback handler.

* 3904802 (Tracking ID: 3721565)

SYMPTOM:
vxconfigd hang is seen with below stack.
genunix:cv_wait_sig_swap_core
genunix:cv_wait_sig_swap 
genunix:pause
unix:syscall_trap32

DESCRIPTION:
In an FMR environment, a write is done on a source volume that has a space-optimized 
(SO) snapshot. Memory is acquired first and then ILOCKs are acquired on individual SO 
volumes for the pushed writes. On the other hand, a user write on the SO snapshot first 
acquires the ILOCK and then acquires memory. This causes a deadlock.

RESOLUTION:
Code is modified to resolve deadlock.

* 3904804 (Tracking ID: 3486861)

SYMPTOM:
Primary node panics with below stack when storage is removed while replication is 
going on with heavy IOs.
Stack:
oops_end 
no_context 
page_fault 
vol_rv_async_done 
vol_rv_flush_loghdr_done 
voliod_iohandle 
voliod_loop

DESCRIPTION:
In a VVR environment, when a write to the data volume fails on the primary node, error 
handling is initiated. As a part of it, the SRL header is flushed. As the primary 
storage is removed, the flush fails. The panic is hit because invalid values are 
accessed while logging the error message.

RESOLUTION:
Code is modified to resolve the issue.

* 3904805 (Tracking ID: 3788644)

SYMPTOM:
When DMP (Dynamic Multi-Pathing) native support is enabled for an Oracle ASM 
environment, constantly adding and removing DMP devices can cause errors like:
/etc/vx/bin/vxdmpraw enable oracle dba 775 emc0_3f84
VxVM vxdmpraw INFO V-5-2-6157
Device enabled : emc0_3f84
Error setting raw device (Invalid argument)

DESCRIPTION:
There is a limitation (8192) on the maximum raw device number N (exclusive) of 
/dev/raw/rawN. This limitation is defined in the boot configuration file. When binding 
a raw device to a dmpnode, /dev/raw/rawN is used to bind the dmpnode. The rawN number 
is calculated by a one-way incremental process, so even if the device is unbound later, 
the "released" rawN number is not reused in the next binding. When the rawN number 
increases beyond the maximum limitation, the error is reported.

RESOLUTION:
Code has been changed to always use the smallest available rawN number instead of 
calculating it by a one-way incremental process.

* 3904806 (Tracking ID: 3807879)

SYMPTOM:
Writing the backup EFI GPT disk label during the disk-group flush 
operation may cause data corruption on volumes in the disk group. The backup 
label could incorrectly get flushed to the disk public region and overwrite the 
user data with the backup disk label.

DESCRIPTION:
For EFI disks initialized under VxVM (Veritas Volume Manager), it is observed that 
during a disk-group flush operation, vxconfigd (the Veritas configuration daemon) 
could write the EFI GPT backup label to the volume public region, thereby causing 
user data corruption. When this issue happens, the real user data is replaced with 
the backup EFI disk label.

RESOLUTION:
The code is modified to prevent the writing of the EFI GPT backup 
label during the VxVM disk-group flush operation.

* 3904807 (Tracking ID: 3867145)

SYMPTOM:
When the VVR SRL occupancy is greater than 90%, it is reported only in 10 percent 
increments.

DESCRIPTION:
This is an enhancement: previously, when the SRL occupancy is more than 90%, it is 
reported only with 10 percent granularity. The enhancement is to log the occupancy 
with 1 percent granularity.

RESOLUTION:
Changes are done to show the syslog messages with 1 percent granularity when the 
SRL is filled more than 90%.

* 3904810 (Tracking ID: 3871750)

SYMPTOM:
Parallel VxVM (Veritas Volume Manager) vxstat commands report abnormal disk IO 
statistic data, like below:
# /usr/sbin/vxstat -g <dg name> -u k -dv -i 1 -S
...... 
dm  emc0_2480                       4294967210 4294962421           -382676k  
4294967.38 4294972.17
......

DESCRIPTION:
After VxVM IO statistics were optimized for huge numbers of CPUs and disks, there's 
a race condition when multiple vxstat commands run to collect disk IO statistic 
data. It causes a disk's latest IO statistic value to become smaller than the 
previous one; VxVM treats the value as an overflow, so an abnormally large IO 
statistic value is printed.

RESOLUTION:
Code changes are done to eliminate such race condition.

* 3904811 (Tracking ID: 3875563)

SYMPTOM:
While dumping the disk header information, human readable timestamp was
not converted correctly from corresponding epoch time.

DESCRIPTION:
When a disk group import fails because one of the disks is missing while importing
the disk group, the disk header information is dumped to the syslog. But the human
readable timestamp was not getting converted correctly from the corresponding
epoch time.

RESOLUTION:
Code changes done to dump disk header information correctly.

* 3904819 (Tracking ID: 3811946)

SYMPTOM:
When invoking "vxsnap make" command with cachesize option to create space optimized snapshot, the command succeeds but the following error message is displayed in syslog:

kernel: VxVM vxio V-5-0-603 I/O failed.  Subcache object <subcache-name> does 
not have a valid sdid allocated by cache object <cache-name>.
kernel: VxVM vxio V-5-0-1276 error on Plex <plex-name> while writing volume 
<volume-name> offset 0 length 2048

DESCRIPTION:
When space optimized snapshot is created using "vxsnap make" command along with cachesize option, cache and subcache objects are created by the same command. During the creation of snapshot, I/Os from the volumes may be pushed onto a subcache even though the subcache ID has not yet been allocated. As a result, the I/O fails.

RESOLUTION:
The code is modified to make sure that I/Os on the subcache are 
pushed only after the subcache ID has been allocated.
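
Notes:
A representative space-optimized snapshot creation using the cachesize option is
shown below; the disk group, volume, snapshot, and size values are placeholders:
# vxsnap -g <dgname> make source=<volname>/newvol=<snapname>/cachesize=<size>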

* 3904822 (Tracking ID: 3755209)

SYMPTOM:
The Dynamic Multi-pathing(DMP) device configured in Solaris LDOM guest is 
disabled when an active controller of an ALUA array is failed.

DESCRIPTION:
DMP in guest environment monitors cached target port ID of virtual paths in 
LDOM. If a controller of an ALUA array fails for some reason, active/primary 
target port ID of an ALUA array will be changed in I/O domain resulting in 
stale entry in the guest. DMP in the guest wrongly interprets this target port 
change to mark the path as unavailable. This causes I/O on the path to be 
failed. As a result the DMP device is disabled in LDOM.

RESOLUTION:
The code is modified to not use the cached target port IDs for LDOM virtual 
disks.

* 3904824 (Tracking ID: 3795622)

SYMPTOM:
With Dynamic Multi-Pathing (DMP) Native Support enabled, LVM global_filter is
not updated properly in lvm.conf file to reject the newly added paths.

DESCRIPTION:
With DMP Native Support enabled, when new paths are added to existing LUNs, LVM
global_filter is not updated properly in lvm.conf file to reject the newly added
paths. This can lead to duplicate PV (physical volumes) found error reported by
LVM commands.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file
when new paths are added to existing disks.
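
Notes:
With DMP Native Support enabled, lvm.conf is expected to carry a global_filter
that accepts DMP paths and rejects the underlying OS paths. The entry below only
illustrates the general LVM filter syntax; the exact patterns written by DMP may
differ:
global_filter = [ "a|/dev/vx/dmp/.*|", "r|.*|" ]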

* 3904825 (Tracking ID: 3859009)

SYMPTOM:
The pvs command will show duplicate PV messages because the global_filter of 
lvm.conf is not updated after a fiber switch or storage controller gets rebooted.

DESCRIPTION:
When a fiber switch or storage controller reboots, some paths' device numbers may get 
reused during the DDL reconfig cycle; in this case VxDMP (Veritas Dynamic Multi-Pathing) 
won't treat them as newly added devices. For those devices belonging to an LVM dmpnode, 
VxDMP will not trigger an lvm.conf update. As a result, the global_filter of lvm.conf 
will not be updated. Hence the issue.

RESOLUTION:
The code has been changed to update lvm.conf correctly.

* 3904833 (Tracking ID: 3729078)

SYMPTOM:
In VVR environment, the panic may occur after SF(Storage Foundation) patch 
installation or uninstallation on the secondary site.

DESCRIPTION:
The VXIO kernel reset invoked by SF patch installation removes all Disk Group 
objects that have no preserved flag set. Because the preserve flag overlaps with 
the RVG (Replicated Volume Group) logging flag, the RVG object isn't removed, but 
its rlink object is removed, resulting in a system panic when starting VVR.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904834 (Tracking ID: 3819670)

SYMPTOM:
When smartmove with the 'vxevac' command is run in the background by hitting the 'ctrl-z' key and the 'bg' command, the execution of 'vxevac' is terminated abruptly.

DESCRIPTION:
As part of "vxevac" command for data movement, VxVM submits the data as a task in the kernel, and use select() primitive on the task file descriptor to wait for task finishing events to arrive. When "ctlr-z" and bg is used to run vxevac in background, the select() returns -1 with errno EINTR. VxVM wrongly interprets it as user termination action and hence vxevac is terminated.  
Instead of terminating vxevac, the select() should be retried untill task completes.

RESOLUTION:
The code is modified so that when select() returns with errno EINTR, it checks whether vxevac task is finished. If not, the select() is retried.
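
Notes:
A representative way to reproduce the scenario is shown below; the disk group
and disk media names are placeholders:
# vxevac -g <dgname> <medianame>      <- press ctrl-z while the command runs
# bg                                  <- resume the evacuation in the background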

* 3904840 (Tracking ID: 3769927)

SYMPTOM:
Turning off dmp_native_support tunable fails with the following errors:

VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more zpools 
VxVM vxdmpadm ERROR V-5-1-15686 The following zpool(s) could not be migrated 
as they are not healthy - <zpool_name>.

DESCRIPTION:
Turning off the dmp_native_support tunable fails even if the zpools are healthy.
The vxnative script does not allow turning off dmp_native_support if it detects that a zpool is unhealthy, which means the zpool state is ONLINE but some action is required on the zpool. The "upgrade zpool" action is treated as one of the actions indicating an unhealthy zpool state, which is not correct.

RESOLUTION:
The code is modified to treat the "upgrade zpool" action as expected. Turning off the dmp_native_support tunable is supported if the pending action is "upgrade zpool".
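
The operation in question is the standard tunable change (no zpool names are
required):

# vxdmpadm settune dmp_native_support=off

With the fix, this command no longer fails merely because a 'zpool upgrade' is
pending on an otherwise healthy ONLINE pool.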

* 3904851 (Tracking ID: 3804214)

SYMPTOM:
VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 (EIO) on the path being enabled.

Following error messages can be seen in /var/adm/messages:

<time-stamp hostname> vxdmp: [ID 808364 kern.notice] NOTICE: VxVM vxdmp V-5-3-0
dmp_open_path: Open failed with 5 for path 237/0x30
<time-stamp hostname> vxdmp: [ID 382146 kern.notice] NOTICE: VxVM vxdmp V-5-0-112
[Warn] disabled path 237/0x30 belonging to the dmpnode 307/0x38 due to open failure

DESCRIPTION:
While a disk is exported to the Solaris LDOM, Solaris OS in the control/IO domain
holds NORMAL mode open on the existing partitions of the DMP node. If the disk
partitions/label is changed from LDOM such that some of the older partitions are
removed, Solaris OS in the control/IO domain does not know about this change and
continues to hold NORMAL mode open on those deleted partitions. If a disabled DMP
path is enabled in this scenario, the NORMAL mode open the path fails and path
enable operation errors out. This can be worked around by detaching and
reattaching the disk to the LDOM. Due to a problem in DMP code, the stale NORMAL
mode open flag was not being reset even when the DMP disk was detached from the
LDOM. This was preventing the DMP path to be enabled even after the DMP disk was
detached from the LDOM.

RESOLUTION:
Code was fixed to reset NORMAL mode open when the DMP disk is detached from
the LDOM. With this fix, DMP disk will have to reattached to the LDOM only
once after the disk labels change. When the disk is reattached, it will get the
correct open mode (NORMAL/NDELAY) on the partitions that exist after label change.
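
For reference, the path state and the enable operation involved here can be exercised
with commands of the following form (the dmpnode and path names are placeholders):

# vxdmpadm getsubpaths dmpnodename=<dmpnode>
# vxdmpadm enable path=<path>

After the fix, a single detach and reattach of the disk to the LDOM following a label
change is sufficient for the enable operation to succeed.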

* 3904858 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop" as per design cannot stop the iostat gathering
persistently. To avoid Performance & Memory crunch related issues, it is
generally recommended to stop the iostat gathering.There is a requirement
to provide such ability to stop/start the iostat gathering persistently
in those cases.

DESCRIPTION:
Today the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
setting is not persistent. After a reboot it is lost, so the customer also has
to add the command to init scripts at an appropriate place for a
persistent effect.

RESOLUTION:
The code is modified to provide a tunable "dmp_compute_iostats" which can
start or stop the iostat gathering persistently.

Notes:
Use the following command to start or stop the iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on
# vxdmpadm settune dmp_compute_iostats=off
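
The current value of the tunable can be checked with the corresponding gettune query:

# vxdmpadm gettune dmp_compute_iostats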

* 3904859 (Tracking ID: 3901633)

SYMPTOM:
Lots of error messages like the following are reported while performing RVG 
sync.
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-1 ndigests 
received=0]]
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-2 ndigests 
received=0]]

DESCRIPTION:
While performing last volume region read and sync, volume end offset 
calculation is not correct, which may lead to over volume end read and sync, 
result in an internal variable became negative number and vxrsync reports 
error. It can happen if volume size is not multiple of 512KB, plus the last 
512KB volume region is partly in use by VxFS.

RESOLUTION:
Code changes have been done to fix the issue.

* 3904861 (Tracking ID: 3904538)

SYMPTOM:
An RV (Replicated Volume) IO hang happens during slave node leave or master node 
switch.

DESCRIPTION:
The RV IO hang happens because the SRL (Serial Replicate Log) header is updated by 
the RV recovery SIO. After a slave node leaves or the master node switches, RV 
recovery may be initiated. During RV recovery, all newly arriving IOs should be 
quiesced by setting the NEED RECOVERY flag on the RV to avoid racing. Due to a code 
defect, this flag is removed by the transaction commit, resulting in a conflict 
between new IOs and the RV recovery SIO.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904863 (Tracking ID: 3851632)

SYMPTOM:
When you use localized messages, some VxVM commands fail while 
mirroring the volume through vxdiskadm. The error message is similar to the 
following:
 ? [y, n, q,?] (: y) y
 /usr/lib/vxvm/voladm.d/bin/disk.repl: test: unknown operator 1

DESCRIPTION:
The issue occurs when the output of the vxdisk list command 
appears in the localized format. When the output is not translated into the English 
language, a message mismatch is observed and the command fails.

RESOLUTION:
The code is modified to convert the output of the necessary commands in the 
scripts into English language before comparing it with the expected output.
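
As an illustrative workaround only (not part of the fix), running the affected
command under an English locale avoids the localized output that triggers the
mismatch, for example:

# LC_ALL=C vxdisk list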

* 3904864 (Tracking ID: 3769303)

SYMPTOM:
The system panics when the CVM group is brought online, with the following stack:

voldco_acm_pagein
voldco_write_pervol_maps_instant
voldco_map_update
voldco_write_pervol_maps
volfmr_copymaps_instant
vol_mv_get_attmir
vol_subvolume_get_attmir
vol_plex_get_attmir
vol_mv_fmr_precommit
vol_mv_precommit
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
ns_capable
volsioctl_real
mntput
path_put
vfs_fstatat
from_kgid_munged
read_tsc
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
sysenter_dispatch
voldco_get_accumulator

DESCRIPTION:
In case of layered volumes, when the 'vxvol' command is triggered through the 
'vxrecover' command with the '-Z vols' (implicit) option, only the volumes passed 
through the CLI are started; the respective top-level volumes remain unstarted. As 
a result, the associated DCO volumes also remain unstarted. At this point, 
if any plex of a sub-volume needs to be attached back, vxrecover will 
trigger it. 
With DCO version 30, the vxplex command tries to perform some map manipulation 
as a part of the plex-attach transaction. If the DCO volume is not started before 
the plex attach, the in-core DCO contents are improperly loaded and this leads to 
a panic.

RESOLUTION:
The code is modified to handle the starting of appropriate associated volumes 
of a layered volume group.
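
As an illustrative manual recovery (the disk group name is a placeholder), starting
all volumes of the disk group, including the top-level and DCO volumes, before the
plex attach avoids the condition:

# vxvol -g mydg startall

With the fix, the appropriate associated volumes of the layered volume are started
automatically.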

* 3905471 (Tracking ID: 3868533)

SYMPTOM:
IO hang happens when starting replication. The VXIO daemon hangs with a stack like 
the following:

vx_cfs_getemap at ffffffffa035e159 [vxfs]
vx_get_freeexts_ioctl at ffffffffa0361972 [vxfs]
vxportalunlockedkioctl at ffffffffa06ed5ab [vxportal]
vxportalkioctl at ffffffffa06ed66d [vxportal]
vol_ru_start at ffffffffa0b72366 [vxio]
voliod_iohandle at ffffffffa09f0d8d [vxio]
voliod_loop at ffffffffa09f0fe9 [vxio]

DESCRIPTION:
While performing DCM replay in case Smart Move feature is enabled, VxIO 
kernel needs to issue IOCTL to VxFS kernel to get file system free region. 
VxFS kernel needs to clone map by issuing IO to VxIO kernel to complete this 
IOCTL. Just at the time RLINK disconnection happened, so RV is serialized to 
complete the disconnection. As RV is serialized, all IOs including the 
clone map IO form VxFS is queued to rv_restartq, hence the deadlock.

RESOLUTION:
Code changes have been made to handle the deadlock situation.
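
For context, the SmartMove behaviour involved here is controlled through the
usefssmartmove default; an illustrative way to inspect or change it (standard
vxdefault usage, shown here only for orientation) is:

# vxdefault list
# vxdefault set usefssmartmove none

The fix itself resolves the deadlock in the kernel; changing the SmartMove setting
is not required.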

* 3906251 (Tracking ID: 3806909)

SYMPTOM:
During installation of Volume Manager using CPI in keyless 
mode, the following logs were observed.
VxVM vxconfigd DEBUG  V-5-1-5736 No BASIC license
VxVM vxconfigd ERROR  V-5-1-1589 enable failed: License has expired or is 
not available for operation transactions are disabled.

DESCRIPTION:
While using CPI for a STANDALONE DMP installation in keyless mode, the Volume 
Manager daemon (vxconfigd) cannot be started due to a modification in the DMP 
NATIVE license string that is used for license verification; this 
verification was failing.

RESOLUTION:
Appropriate code changes are incorporated to resolve the DMP keyless license 
issue so that it works with STANDALONE DMP.
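
To check which keyless licenses are installed on an affected system, the standard
vxkeyless query can be used (shown for reference only):

# /opt/VRTSvlic/bin/vxkeyless display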

* 3907017 (Tracking ID: 3877571)

SYMPTOM:
The disk header is updated even if the disk group import operation fails.

DESCRIPTION:
When a disk group import fails because of a disk failure, importing the disk group
forcefully requires checking which disks have the latest configuration copy. However,
it is very difficult to decide which disk to choose without logs of disk header updates.

RESOLUTION:
Improved the logging to track the disk header changes.
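
For reference, the forced import whose decision-making this logging supports is the
standard operation (the disk group name is a placeholder):

# vxdg -f import mydg

The improved logging records disk header changes so that, after a failed import, it
is easier to identify which disk holds the latest configuration copy.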

* 3907593 (Tracking ID: 3660869)

SYMPTOM:
Enhance the DRL dirty-ahead logging for sequential write workloads.

DESCRIPTION:
With the current DRL implementation, when sequential hints are passed by the file 
system layer above, further regions in the DRL are dirtied to ensure that the write 
on the DRL is already saved when the new IO on the region arrives. But with the 
current design there is a flaw: the number of IOs on the DRL is similar to the 
number of IOs on the data volume. Because of the flaw, the same region is dirtied 
again and again as part of the DRL IO. This can also lead to a performance hit.

RESOLUTION:
To improve performance, the number of IOs on the DRL is reduced by 
enhancing the implementation of dirty-ahead logging with DRL.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch vm-sol10_sparc-Patch-6.2.1.500.tar.gz to /tmp
2. Untar vm-sol10_sparc-Patch-6.2.1.500.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-sol10_sparc-Patch-6.2.1.500.tar.gz
    # tar xf /tmp/vm-sol10_sparc-Patch-6.2.1.500.tar
3. Copy the latest available 621 CPI hotfix to /tmp/hf
4. Untar the latest available 621 CPI hotfix to /tmp/hf

5. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
   Installing patch 6.2.1.500 also requires a CPI hotfix; if the CPI hotfix is not used, the upgrade may fail.
    # cd /tmp/hf
    # ./installVRTSvxvm621P5 -require /tmp/hf/CPI_6.2.1_P12.pl [<host1> <host2>...]

6. To bring all the services and modules up, execute:
    # ./installVRTSvxvm621P5 -start -require /tmp/hf/CPI_6.2.1_P12.pl


7. To stop all the services, execute:
    # ./installVRTSvxvm621P5 -stop -require /tmp/hf/CPI_6.2.1_P12.pl

You can also install this patch together with 6.2.1 maintenance release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installmr -patch_path [<path to this patch>] -require /tmp/hf/CPI_6.2.1_P12.pl [<host1> <host2>...]

Install the patch manually:
--------------------------
o Before the upgrade:
  (a) Stop I/Os to all the VxVM volumes.
  (b) Unmount any filesystems with VxVM volumes.
  (c) Stop applications using any VxVM volumes.
For Solaris 10 release, refer to the man pages for instructions on using 'patchadd' and 'patchrm' scripts provided with Solaris.
Any other special or non-generic installation instructions should be described below as special instructions.  The following example installs a patch to a standalone machine:
        example# patchadd 151232-05
Please follow the special instructions mentioned below after installing the patch.
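
To verify that the patch has been applied, the standard Solaris patch listing can be used (an illustrative check):
        example# showrev -p | grep 151232-05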


REMOVING THE PATCH
------------------
Run the Uninstaller script to automatically remove the patch:
------------------------------------------------------------
To uninstall the patch perform the following step on at least one node in the cluster:
    # /opt/VRTS/install/uninstallVRTSvxvm621P5 -require /tmp/hf/CPI_6.2.1_P12.pl [<host1> <host2>...]

Remove the patch manually:
-------------------------
The following example removes a patch from a standalone system:
        example# patchrm 151232-05
Note: The patchrm command will report an error related to the vxvm-vxcloud script. The error occurs because this patch introduces a new service; it is harmless and can be ignored.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE