This page lists publicly released patches for Veritas Enterprise Products.
For the product GA build, see the Veritas Entitlement Management System (VEMS) by clicking the 'Licensing' option on Veritas Support.
For information on private patches, contact Veritas Technical Support.
For NetBackup Enterprise Server and NetBackup Server patches, see the NetBackup Downloads.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vm-sles11_x86_64-Patch-6.2.1.500

 Basic information
Release type: Patch
Release date: 2017-10-06
OS update support: None
Technote: None
Documentation: None
Popularity: 284 viewed    16 downloaded
Download size: 80.93 MB
Checksum: 1973989638
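As a sketch, the checksum above can be verified against the downloaded archive with standard Unix tools. This assumes the SORT "Checksum" value is the POSIX cksum(1) CRC; confirm the algorithm with Veritas Support if a real download mismatches. The file name below is hypothetical.

```shell
# Stand-in for the downloaded archive (hypothetical file name); in practice
# this is the file fetched from SORT, and 'expected' is the page's value.
printf 'patch payload' > vm-sles11_x86_64-Patch-6.2.1.500.tar.gz
expected=$(cksum vm-sles11_x86_64-Patch-6.2.1.500.tar.gz | awk '{print $1}')

# A downloader would recompute the CRC and compare it to the page's value.
actual=$(cksum vm-sles11_x86_64-Patch-6.2.1.500.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: got $actual, expected $expected"
fi
rm -f vm-sles11_x86_64-Patch-6.2.1.500.tar.gz
```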

 Applies to one or more of the following products:
Dynamic Multi-Pathing 6.2 On SLES11 x86-64
Storage Foundation 6.2 On SLES11 x86-64
Storage Foundation Cluster File System 6.2 On SLES11 x86-64
Storage Foundation for Oracle RAC 6.2 On SLES11 x86-64
Storage Foundation HA 6.2 On SLES11 x86-64
Volume Manager 6.2 On SLES11 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vm-sles11_x86_64-Patch-6.2.1.300 (obsolete), release date 2017-10-10

This patch requires:
sfha-sles11_x86_64-MR-6.2.1, release date 2015-04-24

 Fixes the following incidents:
3780334, 3802857, 3803497, 3816222, 3839293, 3850478, 3851117, 3852148, 3854788, 3863971, 3868653, 3871040, 3871124, 3873145, 3874737, 3875933, 3877637, 3879334, 3880573, 3881334, 3881335, 3889284, 3889850, 3891789, 3893134, 3893362, 3894783, 3897764, 3898129, 3898168, 3898169, 3898296, 3902626, 3903647, 3904790, 3904796, 3904797, 3904800, 3904801, 3904802, 3904804, 3904805, 3904806, 3904807, 3904810, 3904811, 3904819, 3904822, 3904824, 3904825, 3904830, 3904831, 3904833, 3904834, 3904851, 3904858, 3904859, 3904861, 3904863, 3904864, 3905471, 3906251, 3906566, 3907017, 3907593, 3907595, 3913126, 3915780, 3915784, 3920545, 3922253, 3922254, 3922255, 3922256, 3922257, 3922258, 3927482

 Patch ID:
VRTSvxvm-6.2.1.500-SLES11

 Readme file
                          * * * READ ME * * *
               * * * Symantec Volume Manager 6.2.1 * * *
                         * * * Patch 500 * * *
                         Patch Date: 2017-09-25


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Volume Manager 6.2.1 Patch 500


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES11 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Dynamic Multi-Pathing 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2
   * Symantec Volume Manager 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.2.1.500
* 3913126 (3910432) When a mirror volume is created two log plexes are created by default.
* 3915780 (3912672) "vxddladm assign names" causes loss of ASM disks' user-group ownership/permissions,
affecting Oracle databases on the system.
* 3915784 (3868154) When DMP Native Support is set to ON, a dmpnode with multiple VGs cannot be listed
properly by the 'vxdmpadm native ls' command.
* 3920545 (3795739) In a split brain scenario, cluster formation takes very long time.
* 3922253 (3919642) IO hang after switching log owner because of IOs not completely quiesced 
during log owner change.
* 3922254 (3919640) IO hang along with vxconfigd hang on master node because of metadata request 
SIO(Staged IO) hogging CPU.
* 3922255 (3919644) IO hang when switching log owner because of stale flags in RV(Replication 
Volume) kernel structure.
* 3922256 (3919641) IO hang when pausing rlink because of deadlock situation.
* 3922257 (3919645) IO hang when switching log owner because of stale information in 
RV(Replication Volume) kernel structure.
* 3922258 (3919643) IO hang when switching log owner because of stale information in 
RV(Replication Volume) kernel structure.
* 3927482 (3895950) vxconfigd hang observed due to accessing a stale/uninitialized lock.
Patch ID: 6.2.1.300
* 3780334 (3762580) In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when powerpath co-exists  
with DMP (Dynamic Multi-Pathing).
* 3802857 (3726110) On systems with high number of CPUs, Dynamic Multi-Pathing (DMP) devices may perform considerably slower than OS device paths.
* 3803497 (3802750) VxVM (Veritas Volume Manager) volume I/O-shipping functionality is not disabled even after the user issues the correct command to disable it.
* 3816222 (3816219) VxDMP event source daemon keeps reporting UDEV change event in syslog.
* 3839293 (3776520) Filters are not updated properly in lvm.conf file in VxDMP initrd (initial ramdisk) while Dynamic Multipathing (DMP) Native Support is being 
enabled.
* 3850478 (3850477) kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when reading or writing data against a VxVM volume with a big block size.
* 3851117 (3662392) In the Cluster Volume Manager (CVM) environment, if I/Os are getting executed 
on slave node, corruption can happen when the vxdisk resize(1M) command is 
executing on the master node.
* 3852148 (3852146) Shared DiskGroup(DG) fails to import when "-c" and "-o noreonline" options are
specified together.
* 3854788 (3783356) After Dynamic Multi-Pathing (DMP) module fails to load, dmp_idle_vector is not NULL.
* 3863971 (3736502) Memory leakage is found when transaction aborts.
* 3868653 (3866051) A driver name longer than 32 bytes may prevent vxconfigd from starting up.
* 3871040 (3868444) Disk header timestamp is updated even if the disk group(DG) import fails.
* 3871124 (3823283) While unencapsulating a boot disk in a SAN (Storage Area Network) environment,
the Linux operating system hangs in GRUB after reboot.
* 3873145 (3872197) vxconfigd panics when NVME devices are attached to the system
* 3874737 (3874387) Disk header information is not logged to the syslog 
sometimes even if the disk is missing and dg import fails.
* 3875933 (3737585) "Uncorrectable write error" or panic with IOHINT in VVR (Veritas Volume 
Replicator) environment
* 3877637 (3878030) Enhance VxVM DR tool to clean up OS and VxDMP device trees without user 
interaction.
* 3879334 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3880573 (3886153) vradmind daemon core dump occurs in a VVR primary-primary configuration 
because of assert() failure.
* 3881334 (3864063) Application I/O hangs because of a race between the Master Pause SIO (Staging
I/O) and the Error Handler SIO.
* 3881335 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3889284 (3878153) VVR 'vradmind' daemon core dumps.
* 3889850 (3878911) The QLogic driver returns an error due to incorrect aiusize in the FC header.
* 3891789 (3873625) System panicked when pulling out FC cables on SFHA6.2.1/RHEL7.2
* 3893134 (3864318) Memory consumption keeps increasing when reading/writing data against a VxVM volume
with a big block size.
* 3893362 (3881132) VxVM commands hang following a SAN change.
* 3894783 (3628743) New BE takes too much time to startup during live upgrade on Solaris 11.2
* 3897764 (3741003) After removing storage from one of multiple plexes in a mirrored DCO (Data
Change Object) volume, the entire DCO volume is detached and the DCO object is marked with the
BADLOG flag because a flag reset is missed.
* 3898129 (3790136) File system hang observed due to IOs in Dirty Region Logging (DRL).
* 3898168 (3739933) Allow VxVM package installation on EFI enabled Linux machines.
* 3898169 (3740730) While creating volume using vxassist CLI, dco log volume length specified at
command line was not getting honored.
* 3898296 (3767531) In Layered volume layout with FSS configuration, when few 
of the FSS_Hosts are rebooted, Full resync is happening for non-affected disks 
on master.
* 3902626 (3795739) In a split brain scenario, cluster formation takes very long time.
* 3903647 (3868934) System panic happens while deactivating the SIO (staged IO).
* 3904790 (3795788) Performance degrades when many application sessions open the same data file on a VxVM volume.
* 3904796 (3853049) The display of stats is delayed beyond the set interval for vxstat, and multiple
sessions of vxstat impact IO performance.
* 3904797 (3857120) Commands like vxdg deport which try to close a VxVM volume might hang.
* 3904800 (3860503) Poor performance of vxassist mirroring is observed on some high end servers.
* 3904801 (3686698) vxconfigd was getting hung due to deadlock between two threads
* 3904802 (3721565) vxconfigd hang is seen.
* 3904804 (3486861) Primary node panics when storage is removed while replication is going on with heavy 
IOs.
* 3904805 (3788644) Reuse raw device number when checking for available raw devices.
* 3904806 (3807879) User data corrupts because of the writing of the backup EFT GPT disk label 
during the VxVM disk-group flush operation.
* 3904807 (3867145) When VVR SRL occupation exceeds 90%, the SRL occupation is reported only in
increments of 10 percent.
* 3904810 (3871750) VxVM vxstat commands run in parallel report abnormal disk IO statistics.
* 3904811 (3875563) While dumping the disk header information, human readable
timestamp was not converted correctly from corresponding epoch time.
* 3904819 (3811946) When invoking "vxsnap make" command with cachesize option to create space optimized snapshot, the command succeeds but a plex I/O error message is displayed in syslog.
* 3904822 (3755209) The Veritas Dynamic Multi-pathing (VxDMP) device configured in a Solaris Logical
Domains (LDOM) guest is disabled when an active controller of an ALUA array fails.
* 3904824 (3795622) With Dynamic Multipathing (DMP) Native Support enabled, Logical Volume Manager
(LVM) global_filter is not updated properly in lvm.conf file.
* 3904825 (3859009) global_filter of lvm.conf is not updated due to some paths of LVM dmpnode are 
reused during DDL(Device Discovery Layer) discovery cycle.
* 3904830 (3840359) Some VxVM commands fail on using the localized messages.
* 3904831 (3802075) Foreign disks with a digit in their name, defined by udev rules, go into the error
state after 'vxdisk scandisks'.
* 3904833 (3729078) VVR(Veritas Volume Replication) secondary site panic occurs during patch 
installation because of flag overlap issue.
* 3904834 (3819670) When smartmove with 'vxevac' command is run in background by hitting 'ctlr-z' key and 'bg' command, the execution of 'vxevac' is terminated abruptly.
* 3904851 (3804214) VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 on the path being enabled.
* 3904858 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3904859 (3901633) vxrsync reports error during rvg sync because of incorrect volume end offset 
calculation.
* 3904861 (3904538) IO hang happens during slave node leave or master node switch because of racing 
between RV(Replicate Volume) recovery SIO(Staged IO) and new coming IOs.
* 3904863 (3851632) Some VxVM commands fail when you use the localized messages.
* 3904864 (3769303) System panics when a Cluster Volume Manager (CVM) group is brought online.
* 3905471 (3868533) IO hang happens because of a deadlock situation.
* 3906251 (3806909) Due to a modification in licensing, the DMP keyless license was not working
for STANDALONE DMP.
* 3906566 (3907654) Storage of cold data on dedicated SAN storage spaces 
increases storage cost and maintenance. 
Move cold data from local storage to cloud storage.
* 3907017 (3877571) Disk header is updated even if the dg import operation fails
* 3907593 (3660869) Enhance the Dirty region logging (DRL) dirty-ahead logging for sequential write 
workloads
* 3907595 (3907596) vxdmpadm setattr command gives error while setting the path attribute.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 6.2.1.500

* 3913126 (Tracking ID: 3910432)

SYMPTOM:
When a mirror volume is created two log plexes are created by default.

DESCRIPTION:
For mirror/RAID-5 volumes a log plex is required. Due to a bug in the code, two
log plex records were created during volume creation.

RESOLUTION:
Code changes are done to create a single log plex by default.

* 3915780 (Tracking ID: 3912672)

SYMPTOM:
"vxddladm assign names" command results in the ASM disks losing their
ownership/permission settings which may affect the Oracle databases

DESCRIPTION:
The command "vxddladm assign names" calls a function that creates raw and block
device nodes and sets the user-group ownership/permissions as per the mode stored
in the in-memory record structure. The in-memory records are not created (the
pointer is NULL) before that function is reached, so no permissions are set
after the device nodes are created.

RESOLUTION:
Code changes are done to make sure that in-memory records of user-group
ownership/permissions of each dmpnode from vxdmprawdev file gets created before
the function call which creates device nodes and sets permissions on them.

* 3915784 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk, as
Linux supports creating a VG on a whole disk as well as on a partition of
a disk. This possibility was not handled in the code, so the output of
'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3920545 (Tracking ID: 3795739)

SYMPTOM:
In a split brain scenario, cluster formation takes very long time.

DESCRIPTION:
In a split brain scenario, the surviving nodes in the cluster try to preempt the keys of the nodes leaving the cluster. If the keys have already been preempted by one of the surviving nodes, the other surviving nodes receive a Unit Attention. DMP (Dynamic Multipathing) then retries the preempt command after a delay of 1 second whenever it receives a Unit Attention. Cluster formation cannot complete until the PGR keys of all the leaving nodes are removed from all the disks. If the number of disks is very large, the preemption of keys takes a lot of time, leading to a very long cluster formation time.

RESOLUTION:
The code is modified to avoid adding delay for first couple of retries when reading PGR keys. This allows faster cluster formation with arrays that clear the Unit Attention condition sooner.

* 3922253 (Tracking ID: 3919642)

SYMPTOM:
IO hang may happen after log owner switch.

DESCRIPTION:
IOs are only partly quiesced during the log owner change. As a result, the
log-end processing of some updates is lost, and subsequent IOs hang.

RESOLUTION:
Code changes have been made to quiesce IO completely during the log owner
switch.

* 3922254 (Tracking ID: 3919640)

SYMPTOM:
When the log owner is on a slave node and SRL overflow is triggered, IO hang
and vxconfigd hang may happen if there is heavy IO load from the master node.

DESCRIPTION:
After SRL (Serialized Replicate Log) overflow, the log owner on the slave node
sends a DCM (Data Change Map) activate request to the master node. The master
node processes the request when IO daemons are idle, but all IO daemons are
busy sending and retrying metadata requests to the log owner node. As the log
owner is migrating to DCM mode, the request cannot be processed, hence the hang.

RESOLUTION:
Code changes have been made to fix the issue.

* 3922255 (Tracking ID: 3919644)

SYMPTOM:
When switching log owner in case rlink is DCM(Data Change Map) mode, IO hang 
may happen when the log owner switches back.

DESCRIPTION:
If the log owner switch happens in DCM mode, the reset of some RV kernel flags
is missed. When the log owner switches back after DCM replay, the data
inconsistency activates DCM again and IO may hang.

RESOLUTION:
Code changes have been made to clear stale information during the log owner
switch.

* 3922256 (Tracking ID: 3919641)

SYMPTOM:
When the log owner is configured on a slave node, pausing the rlink from the
master node while IO is issued from the log owner node may cause IO to hang on
the log owner node.

DESCRIPTION:
A deadlock may happen between the Master Pause SIO (Staged IO) and the Error
Handler SIO. The Master Pause SIO needs to disconnect the rlink, which
serializes the RV (Replicate Volume) by invoking the Error Handler SIO. The
serialized RV prevents the Master Pause SIO from restarting, while the Error
Handler SIO depends on the Master Pause SIO completing, so the RV cannot get
out of the serialized state.

RESOLUTION:
Code changes have been made to fix the deadlock issue.

* 3922257 (Tracking ID: 3919645)

SYMPTOM:
When switching log owner in case rlink is DCM(Data Change Map) mode, IO hang 
may happen when the log owner switches back.

DESCRIPTION:
In case log owner switch happens in DCM mode, as some RV kernel information 
reset is missed. When the log owner switched back after DCM replay, data 
inconsistent make DCM activated again and IO hang may happen.

RESOLUTION:
Code changes have been made to clear stale information during the log owner
switch.

* 3922258 (Tracking ID: 3919643)

SYMPTOM:
When switching log owner in case rlink is in SRL(Serialized Replicate Log) 
flush state, IO hang may happen when the log owner switches back.

DESCRIPTION:
The SRL flush start/end positions are stored in the RV kernel structure, and
the reset of these positions is missed during the log owner switch. When the
log owner switches back, the node continues the SRL flush operation. As the
SRL flush has already been completed by another node and the SRL may contain
new updates, the readback from the SRL may contain invalid updates and the SRL
flush cannot continue, hence the hang.

RESOLUTION:
Code changes have been made to clear stale information during the log owner
switch.

* 3927482 (Tracking ID: 3895950)

SYMPTOM:
vxconfigd hang may be observed all of a sudden. The following stack will be 
seen as part of threadlist:
slock()
.disable_lock()
volopen_isopen_cluster()
vol_get_dev()
volconfig_ioctl()
volsioctl_real()
volsioctl()
vols_ioctl()
rdevioctl()
spec_ioctl()
vnop_ioctl()
vno_ioctl()
common_ioctl(??, ??, ??, ??)

DESCRIPTION:
Some of the critical structures in the code are protected with lock to avoid 
simultaneous modification. A particular lock structure gets copied to the 
local stack memory. In this case
the structure might have information about the state of the lock and also at 
the time of copy that lock structure might be in an intermediate state. When 
function 
tries to access such type of lock structure, the result could lead to panic 
or hang since that lock structure might be in some unknown state.

RESOLUTION:
Since no one modifies the local copy of the structure, acquiring the lock is
not required while accessing this copy. The code is changed accordingly.

Patch ID: 6.2.1.300

* 3780334 (Tracking ID: 3762580)

SYMPTOM:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when powerpath co-exists 
with DMP (Dynamic Multi-Pathing). The following logs are printed while  setting up fencing for the cluster.

VXFEN: vxfen_reg_coord_pt: end ret = -1
vxfen_handle_local_config_done: Could not register with a majority of the
coordination points.

DESCRIPTION:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the interface used by DMP to send the SCSI commands to block devices does not transfer the 
data to or from the device. Therefore, the SCSI-3 PR keys do not get registered.

RESOLUTION:
The code is modified to use SCSI request_queue to send the SCSI commands to the 
underlying block device.
Additional patch is required from EMC to support processing SCSI commands via the request_queue mechanism on EMC PowerPath devices. Please contact EMC for patch details 
for a specific kernel version.

* 3802857 (Tracking ID: 3726110)

SYMPTOM:
On systems with high number of CPUs, DMP devices may perform considerably slower than OS device paths.

DESCRIPTION:
In high CPU configuration, I/O statistics related functionality in DMP takes more CPU time because DMP statistics are collected on per CPU basis. This stat collection happens in DMP I/O code path hence it reduces the I/O performance. Because of this, DMP devices perform slower than OS device paths.

RESOLUTION:
The code is modified to remove some of the stats collection functionality from the DMP I/O code path. Along with this, the following tunables need to be turned off: 
1. Turn off idle lun probing. 
#vxdmpadm settune dmp_probe_idle_lun=off
2. Turn off statistic gathering functionality.  
#vxdmpadm iostat stop

Notes: 
1. Apply this patch if the system configuration has a large number of CPUs and DMP performs considerably slower than OS device paths. For normal systems this issue is not applicable.

* 3803497 (Tracking ID: 3802750)

SYMPTOM:
Once VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned on, it is not getting disabled even after the user issues the correct command to disable it.

DESCRIPTION:
VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned off by default. The following two commands can be used to turn it on and off:
	vxdg -g <dgname> set ioship=on
	vxdg -g <dgname> set ioship=off

The command to turn off I/O-shipping is not working as intended because I/O-shipping flags are not reset properly.

RESOLUTION:
The code is modified to correctly reset I/O-shipping flags when the user issues the CLI command.

* 3816222 (Tracking ID: 3816219)

SYMPTOM:
VxDMP  (Veritas Dynamic Multi-Pathing) event source daemon (vxesd) keeps 
reporting a lot of messages in syslog as below:
"vxesd: Device sd*(*/*) is changed"

DESCRIPTION:
The vxesd daemon registers with the UDEV framework and keeps VxDMP up-to-date
with device status. When a device changes, vxesd reports the change event it
receives from udev. VxDMP only cares about "add" and "remove" UDEV events, so
logging of UDEV "change" events can be avoided.

RESOLUTION:
The code is modified to stop logging UDEV change-event related messages in 
syslog.

* 3839293 (Tracking ID: 3776520)

SYMPTOM:
Filters are not updated properly in lvm.conf file in VxDMP initrd while DMP Native Support is being enabled. As a result, root Logical Volume 
(LV) is mounted on OS device upon reboot.

DESCRIPTION:
From LVM version 105, global_filter was introduced in the lvm.conf file. VxDMP updates the initrd lvm.conf file with the filters required for 
DMP Native Support to function. While updating lvm.conf, VxDMP checks for the filter field to be updated, but on the latest LVM versions it 
should check the global_filter field instead. As a result, lvm.conf is not updated with the proper filters.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file in VxDMP initrd.
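For illustration, a correctly updated initrd lvm.conf carries the DMP paths in the global_filter field rather than filter. The patterns below are hypothetical; the actual filters are generated by VxDMP.

```
devices {
    # Accept DMP device paths and reject the underlying OS paths so the
    # root LV comes up on a DMP node after reboot (hypothetical patterns).
    global_filter = [ "a|^/dev/vx/dmp/.*|", "r|.*|" ]
}
```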

* 3850478 (Tracking ID: 3850477)

SYMPTOM:
kmalloc-1024 and kmalloc-2048 memory consuming keeps increasing when reading or writing data against VxVM volume with big block size

DESCRIPTION:
In case the incoming I/O size is too big for a disk to handle, VxVM splits it into smaller ones to move forward. VxVM then allocates memory to back up those split I/Os. Due to a code issue, the allocated space is not freed when I/O splitting completes.

RESOLUTION:
The code is modified to free VxVM allocated memory after I/O splitting completes.

* 3851117 (Tracking ID: 3662392)

SYMPTOM:
In the CVM environment, if I/Os are getting executed on slave node, corruption 
can happen when the vxdisk resize(1M) command is executing on the master 
node.

DESCRIPTION:
During the first stage of the resize transaction, the master node re-adjusts
the disk offsets and public/private partition device numbers. On a slave node,
the public/private partition device numbers are not adjusted properly. Because
of this, the partition starting offset is added twice, which causes the
corruption. The window during which the public/private partition device
numbers are adjusted is small; corruption is observed only if I/O occurs
during this window. After the resize operation completes, no further
corruption happens.

RESOLUTION:
The code has been changed to add partition starting offset properly to an I/O 
on slave node during execution of a resize command.

* 3852148 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.

* 3854788 (Tracking ID: 3783356)

SYMPTOM:
After DMP module fails to load, dmp_idle_vector is not NULL.

DESCRIPTION:
After a DMP module load failure, DMP resources are not cleared from system memory, so some of the 
resources retain non-NULL values. When the system retries the load, it frees this invalid data, leading to a system panic with the error 
message BAD FREE, because the data being freed is not valid at that point.

RESOLUTION:
The code is modified to clear up the DMP resources when module failure happens.

* 3863971 (Tracking ID: 3736502)

SYMPTOM:
When FMR is configured in VVR environment, 'vxsnap refresh' fails with below 
error message:
"VxVM VVR vxsnap ERROR V-5-1-10128 DCO experienced IO errors during the
operation. Re-run the operation after ensuring that DCO is accessible".
Also, multiple messages of connection/disconnection of replication 
link(rlink) are seen.

DESCRIPTION:
Inherently triggered rlink connection/disconnection causes transaction
retries. During a transaction, memory is allocated for Data Change Object (DCO)
maps and is not cleared when the transaction aborts. This leads to a memory
leak and eventually to exhaustion of maps.

RESOLUTION:
The fix has been added to clear the allocated DCO maps when transaction 
aborts.

* 3868653 (Tracking ID: 3866051)

SYMPTOM:
After a driver with a name longer than 32 bytes is loaded in the kernel,
vxconfigd cannot be restarted.

DESCRIPTION:
If the kernel has any driver with a name longer than 32 bytes, then when
vxconfigd is restarted, a defect in the code handling the accepted driver-name
size corrupts the process stack. Hence, vxconfigd is unable to start up.

RESOLUTION:
Code changes are made to fix the memory corruption issue.

* 3871040 (Tracking ID: 3868444)

SYMPTOM:
Disk header timestamp is updated even if the disk group import fails.

DESCRIPTION:
During a dg import operation, disk header timestamps are updated during the join operation. This makes it difficult for 
support to understand which disk has the latest config copy when a dg import fails and a decision must be made about 
whether a forced dg import is safe.

RESOLUTION:
The old disk header timestamp and sequence number are dumped to the syslog, which can be referred to 
when deciding whether a forced dg import would be safe.
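Timestamps dumped this way are epoch values. As an illustrative aside (GNU date(1) assumed; the epoch below is an arbitrary sample, not from a real dump), such a value can be converted to a human-readable time when reading the syslog:

```shell
# Illustrative only: convert an epoch value, as dumped from a disk header,
# to a human-readable timestamp. Assumes GNU date(1).
epoch=1507248000   # sample value; corresponds to 2017-10-06 00:00:00 UTC
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S UTC'
```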

* 3871124 (Tracking ID: 3823283)

SYMPTOM:
The Linux operating system hangs in GRUB after reboot. A manual kernel load is 
required to make the operating system functional.

DESCRIPTION:
During unencapsulation of a boot disk in a SAN environment, multiple entries 
corresponding to the root disk are found in the by-id device directory. As a 
result, a parse command fails, leading to the creation of an improper menu 
file in the grub directory. This menu file defines the device path used to 
load the kernel and other modules.

RESOLUTION:
The code is modified to handle multiple entries for SAN boot disk.

* 3873145 (Tracking ID: 3872197)

SYMPTOM:
vxconfigd panics when NVME devices are attached to the system with the following stack:
panic+0xa7/0x16f
oops_end+0xe4/0x100
no_context+0xfb/0x260
__bad_area_nosemaphore+0x125/0x1e0
bad_area+0x4e/0x60
__do_page_fault+0x473/0x500
dmp_rel_shared_lock+0x20/0x30 [vxdmp]
dmp_send_scsipkt+0xd8/0x120 [vxdmp]
do_page_fault+0x3e/0xa0
page_fault+0x25/0x30
elv_may_queue+0xd/0x20
get_request+0x49/0x3c0
get_request_wait+0x2a/0x1d0
swiotlb_map_sg_attrs+0x79/0x130
blk_get_request+0x46/0xa0
dmp_kernel_scsi_ioctl+0x11d/0x3a0 [vxdmp]
dmp_scsi_ioctl+0xae/0x2a0 [vxdmp]
__wake_up+0x53/0x70
dmp_send_scsireq+0x5f/0xc0 [vxdmp]
dmp_do_scsi_gen+0xab/0x1b0 [vxdmp]
dmp_pr_check_aptpl+0xcd/0x150 [vxdmp]
dmp_make_mp_node+0x239/0x280 [vxdmp]
dmp_decode_add_disk+0x816/0x1110 [vxdmp]
dmp_decipher_instructions+0x270/0x350 [vxdmp]
dmp_process_instruction_buffer+0x1be/0x1d0 [vxdmp]
dmp_reconfigure_db+0x6e/0xf0 [vxdmp]
gendmpioctl+0x2c2/0x610 [vxdmp]
dmpioctl+0x35/0x70 [vxdmp]
dmp_ioctl+0x2b/0x50 [vxdmp]
dmp_compat_ioctl+0x56/0x70 [vxdmp]

DESCRIPTION:
When vxconfigd starts, it tries to send an SGIO IOCTL carrying a SCSI command to NVMe devices using the request queue mechanism. 
NVMe devices do not have an elevator set, which makes the SGIO command fail and panics the system.

RESOLUTION:
Code changes have been done to bypass the SGIO command for NVME devices.

* 3874737 (Tracking ID: 3874387)

SYMPTOM:
Disk header information is sometimes not logged to the syslog
even though the disk is missing and the dg import fails.

DESCRIPTION:
In scenarios where the disk has a config copy enabled and an active disk
record is obtained, the disk header information was not logged even though the
disk is missing and the dg import subsequently fails.

RESOLUTION:
The disk header information is dumped even if the disk record is
active and attached to the disk group.

* 3875933 (Tracking ID: 3737585)

SYMPTOM:
Customers encounter an "Uncorrectable write error" or the following panic in a 
VVR environment:

[000D50D0]xmfree+000050 
[051221A0]vol_tbmemfree+0000B0
[05122294]vol_memfreesio_start+00001C
[05131324]voliod_iohandle+000050
[05131740]voliod_loop+0002D0
[05126438]vol_kernel_thread_init+000024

DESCRIPTION:
IOHINT structure allocated from VxFS is also freed by VxFS after IO done from 
VxVM. IOs to VxVM with VVR needs 2 phases, SRL(Serial Replication Log) write 
and Data volume write, VxFS gets IO done after SRL write and doesnt wait for 
Data Volume write completion, so if Data volume write gets started or done after

VxFS frees IOHINT, it may cause write IO error or double free panic due to

freeing memory wrongly as per corrupted IOHINT info.

RESOLUTION:
Code changes were done to clone the IOHINT structure before writing to data 
volume.

* 3877637 (Tracking ID: 3878030)

SYMPTOM:
Enhance VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool to 
clean up OS and VxDMP(Veritas Dynamic Multi-Pathing) device trees without 
user interaction.

DESCRIPTION:
When users add or remove LUNs, stale entries in the OS or VxDMP device trees
can prevent VxVM from discovering the changed LUNs correctly. Under certain
conditions it even causes the VxVM vxconfigd process to dump core, and users
have to reboot the system to restart vxconfigd.
VxVM has a DR tool to help users add or remove LUNs properly, but it requires
user input during operations.

RESOLUTION:
The VxVM DR tool has been enhanced to accept the '-o refresh' option, which
cleans up the OS and VxDMP device trees without user interaction.

* 3879334 (Tracking ID: 3879324)

SYMPTOM:
The VxVM (Veritas Volume Manager) DR (Dynamic Reconfiguration) tool fails to
handle the busy-device problem when LUNs are removed from the OS.

DESCRIPTION:
OS devices may still be busy after they are removed from the OS. This causes the
'luxadm -e offline <disk>' operation to fail and leaves stale entries in the
'vxdisk list' output, such as:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3880573 (Tracking ID: 3886153)

SYMPTOM:
In a VVR primary-primary configuration, if the 'vrstat' command is running, vradmind
may dump core with a stack like the one below:

__assert_c99 
StatsSession::sessionInitReq 
StatsSession::processOpReq 
StatsSession::processOpMsgs  
RDS::processStatsOpMsg 
DBMgr::processStatsOpMsg  
process_message

DESCRIPTION:
The vrstat command initiates a StatsSession, which needs to send an initialization
request to the secondary. On the secondary, an assert() ensures that it is the
secondary that is processing the request. In a primary-primary configuration this
assertion fails and leads to a core dump.

RESOLUTION:
The code changes have been made to fix the issue by returning failure to the
StatsSession initiator.

* 3881334 (Tracking ID: 3864063)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in VVR
(Veritas Volume Replicator) kernel are not cleared because of a race between the
Master Pause SIO and the Error Handler SIO. This causes the RU (Replication
Update) SIO to fail to proceed, which leads to I/O hang.

RESOLUTION:
The code is modified to handle the race condition.

* 3881335 (Tracking ID: 3867236)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
The flag VOL_RIFLAG_REQUEST_PENDING in the VVR (Veritas Volume Replicator) kernel is
not cleared because of a race between the Master Pause SIO and the RVWRITE1 SIO,
resulting in the RU (Replication Update) SIO failing to proceed, thereby causing an I/O hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3889284 (Tracking ID: 3878153)

SYMPTOM:
The VVR (Veritas Volume Replicator) 'vradmind' daemon dumps core with the following stack:

#0  __kernel_vsyscall ()
#1  raise () from /lib/libc.so.6
#2  abort () from /lib/libc.so.6
#3  __libc_message () from /lib/libc.so.6
#4  malloc_printerr () from /lib/libc.so.6
#5  _int_free () from /lib/libc.so.6
#6  free () from /lib/libc.so.6
#7  operator delete(void*) () from /usr/lib/libstdc++.so.6
#8  operator delete[](void*) () from /usr/lib/libstdc++.so.6
#9  in IpmHandle::~IpmHandle (this=0x838a1d8,
    __in_chrg=<optimized out>) at Ipm.C:2946
#10 IpmHandle::events (handlesp=0x838ee80, vlistsp=0x838e5b0,
    ms=100) at Ipm.C:644
#11 main (argc=1, argv=0xffffd3d4) at srvmd.C:703

DESCRIPTION:
Under certain circumstances, the 'vradmind' daemon may dump core while freeing a
variable allocated on the stack.

RESOLUTION:
Code change has been done to address the issue.

* 3889850 (Tracking ID: 3878911)

SYMPTOM:
The QLogic driver returns the following error due to an incorrect aiusize in the FC header:
FC_ELS_MALFORMED, cnt=c60h, size=314h

DESCRIPTION:
When the CT pass-through command is created, the ct_aiusize specified in the
request header does not conform to the FT standard. Hence, during the sanity check
of the FT header in the OS layer, an error is reported and get_topology() fails.

RESOLUTION:
Code changes have been done so that ct_aiusize is in compliance with FT standard.

* 3891789 (Tracking ID: 3873625)

SYMPTOM:
The system panicked when FC cables were pulled out on SFHA 6.2.1/RHEL 7.2. The
panic stack trace is similar to the following:

 #8 [ffff880fecb23a90] page_fault at ffffffff8163d408
    [exception RIP: blk_rq_map_kern+31]
    RIP: ffffffff812cfd8f  RSP: ffff880fecb23b48  RFLAGS: 00010296
    RAX: ffffffffffffffed  RBX: ffff880fcf847230  RCX: 0000000000001010
    RDX: 0000000000001010  RSI: ffff880fd10d2000  RDI: ffff880fcf847230
    RBP: ffff880fecb23b70   R8: 0000000000000010   R9: ffff880fcf7a5b40
    R10: ffff88080f803b00  R11: 0000000000000001  R12: 0000000000000000
    R13: ffffffffffffffed  R14: ffffffffffffffed  R15: ffff8807fcfd5b00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880fecb23b78] dmp_kernel_scsi_ioctl at ffffffffa0a4e499 [vxdmp]
#10 [ffff880fecb23bb8] dmp_scsi_ioctl at ffffffffa0a908a9 [vxdmp]
#11 [ffff880fecb23c50] dmp_send_scsireq at ffffffffa0a91580 [vxdmp]
#12 [ffff880fecb23c70] dmp_do_scsi_gen at ffffffffa0a868b4 [vxdmp]
#13 [ffff880fecb23c98] dmp_pr_send_cmd at ffffffffa0a8eec3 [vxdmp]
#14 [ffff880fecb23d30] dmp_pr_do_read at ffffffffa0a61d30 [vxdmp]
#15 [ffff880fecb23da8] dmp_def_get_reservation at ffffffffa0a63284 [vxdmp]
#16 [ffff880fecb23db8] dmp_pgr_read at ffffffffa0a8d3c4 [vxdmp]
#17 [ffff880fecb23df0] gendmpioctl at ffffffffa0a5abf3 [vxdmp]
#18 [ffff880fecb23e18] dmpioctl at ffffffffa0a5b181 [vxdmp]
#19 [ffff880fecb23e30] dmp_ioctl at ffffffffa0a811aa [vxdmp]
#20 [ffff880fecb23e50] blkdev_ioctl at ffffffff812d8da3
#21 [ffff880fecb23ea8] block_ioctl at ffffffff81219701
#22 [ffff880fecb23eb8] do_vfs_ioctl at ffffffff811f1ef5
#23 [ffff880fecb23f30] sys_ioctl at ffffffff811f2171
#24 [ffff880fecb23f80] system_call_fastpath at ffffffff81645909

DESCRIPTION:
The Linux kernel function __get_request() may fail under memory pressure and
return a negative value. DMP does not check for this and dereferences it as a
valid request pointer, hence the panic.

RESOLUTION:
The code is modified to check whether the request pointer is valid or an error code.

* 3893134 (Tracking ID: 3864318)

SYMPTOM:
Memory consumption keeps increasing when reading or writing data against a VxVM
volume with a big block size.

DESCRIPTION:
When an incoming I/O is too big for a disk to handle, VxVM splits it into smaller
I/Os to move forward. VxVM allocates memory to back up those split I/Os. Due to a
code defect, the allocated space was not freed when the split I/Os completed.

RESOLUTION:
The code is modified to free the VxVM-allocated memory after the split I/Os complete.

* 3893362 (Tracking ID: 3881132)

SYMPTOM:
VxVM commands hang following a SAN change. OS device handles and DMP devices are
not cleaned up, causing a kernel core dump.

DESCRIPTION:
Prior to discovering new LUNs, the PQ values of the OS device handles need to be
checked, and the handles deleted when the value is non-zero. The OS and VxVM
device trees should be in sync after adding or removing LUNs, but the devices
were not being cleaned up. The fix is to clean up the device tree to bring them
back in sync.

RESOLUTION:
Code changes were made in the DMPDR tool, which deletes stale OS device handles to
fix the issue.

* 3894783 (Tracking ID: 3628743)

SYMPTOM:
On Solaris 11.2, a new boot environment takes a long time to start up during
live upgrade. A deadlock is seen in ndi_devi_enter() when loading the VxDMP
driver, caused by VxVM drivers using the Solaris ddi_pathname_to_dev_t or
e_ddi_hold_devi_by_path private interfaces.

DESCRIPTION:
The deadlocks are caused by VxVM drivers using the Solaris
ddi_pathname_to_dev_t or e_ddi_hold_devi_by_path private interfaces. These
routines are for Solaris internal use only and are not multi-thread safe.
Normally this is not a problem, because the various VxVM drivers do not unload
or detach; however, there are certain conditions under which the _init routines
might be called, which can expose this deadlock condition.

RESOLUTION:
The code is modified to resolve the deadlock.

* 3897764 (Tracking ID: 3741003)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after storage is removed from one
of the plexes of a mirrored DCO volume, the entire DCO volume is detached and the
DCO object is marked with the BADLOG flag.

DESCRIPTION:
When the storage of one plex of a mirrored volume is removed, only that plex
should be detached, not the entire volume. While a read I/O is in progress on the
failed DCO plex, the local failed I/O is restarted and shipped to other nodes for
retry, where it also fails because the storage has been removed from those nodes
as well. Because a flag reset is missing, the failed I/O returns an error,
resulting in the entire volume being detached and marked with the BADLOG flag
even though the I/O succeeds from an alternate plex.

RESOLUTION:
Code changes are added to handle this case, improving the resiliency of VxVM in
partial-storage outage scenarios.

* 3898129 (Tracking ID: 3790136)

SYMPTOM:
A file system hang can sometimes be observed due to I/Os hung in the DRL.

DESCRIPTION:
I/Os can hang in the DRL of a mirrored volume due to an incorrect calculation of
the outstanding I/Os on the volume and the number of active I/Os currently in
progress on the DRL. The outstanding I/O count on the volume can get modified
incorrectly, preventing I/Os on the DRL from progressing further, which in turn
results in a hang.

RESOLUTION:
Code changes have been made to avoid incorrect modification of the outstanding
I/O count on the volume and prevent the hang.

* 3898168 (Tracking ID: 3739933)

SYMPTOM:
VxVM package installation fails when the Linux server has EFI support enabled.

DESCRIPTION:
On Linux, the VxVM install scripts assume a GRUB bootloader in BIOS mode and try
to locate the corresponding GRUB configuration file. If the system has a GRUB
bootloader in EFI mode, VxVM fails to locate the required GRUB configuration
file, so the installation is aborted.

RESOLUTION:
Code changes were added to allow VxVM installation on Linux machines where EFI
support is enabled.

* 3898169 (Tracking ID: 3740730)

SYMPTOM:
While creating a volume using the vxassist CLI, the DCO log length specified as a
command-line parameter was not honored. For example:

bash # vxassist -g <dgname> make <volume-name> <volume-size> logtype=dco dcolen=<dcolog-length>
VxVM vxassist ERROR V-5-1-16707  Specified dcologlen(<specified dcolog length>) is
less than minimum dcologlen(17152)

DESCRIPTION:
While creating volume, using dcologlength attribute of dco-volume in vxassist
CLI,
the size of dcolog specified is not correctly parsed in the code, because of
which
it internally compares the size with incorrectly calculated size & throws the
error indicating that size specified isn't sufficient.
So the values in comparison was incorrect. Hence changed the code to compare the
user-specified value passes the minimum-threshold value or not.

RESOLUTION:
The code is changed to correctly compare the user-specified value against the
minimum threshold, so that the DCO log length specified by the user in the
vxassist CLI is honored.

* 3898296 (Tracking ID: 3767531)

SYMPTOM:
In a layered volume layout with an FSS configuration, when some of the FSS hosts
are rebooted, a full resync occurs for non-affected disks on the master.

DESCRIPTION:
In configuration, where there are multiple FSS-Hosts, with 
layered volume created on the hosts. When the slave nodes are rebooted , few 
of the 
sub-volumes of non-affected disks are fully getting synced on master.

RESOLUTION:
Code changes have been made to sync only the needed part of the sub-volume.

* 3902626 (Tracking ID: 3795739)

SYMPTOM:
In a split brain scenario, cluster formation takes very long time.

DESCRIPTION:
In a split-brain scenario, the surviving nodes in the cluster try to preempt the keys of the nodes leaving the cluster. If the keys have already been preempted by one of the surviving nodes, the other surviving nodes receive a Unit Attention. DMP (Dynamic Multi-Pathing) then retries the preempt command after a delay of 1 second when it receives a Unit Attention. Cluster formation cannot complete until the PGR keys of all the leaving nodes are removed from all the disks. If the number of disks is very large, preempting the keys takes a long time, leading to very long cluster formation times.

RESOLUTION:
The code is modified to avoid adding delay for first couple of retries when reading PGR keys. This allows faster cluster formation with arrays that clear the Unit Attention condition sooner.
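
The retry policy can be sketched as follows (hypothetical names; the real logic lives in DMP's SCSI-3 PGR code): retry immediately for the first attempts, since many arrays clear the Unit Attention condition right away, and only back off afterwards.

```python
import time

def preempt_with_retry(preempt, max_retries=5, fast_retries=2, delay=1.0):
    # On Unit Attention, retry the preempt immediately for the first
    # `fast_retries` attempts, then fall back to a fixed delay per retry.
    for attempt in range(max_retries):
        if preempt():                   # True => preempt succeeded
            return True
        if attempt >= fast_retries:     # delay only after the fast retries
            time.sleep(delay)
    return False
```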

* 3903647 (Tracking ID: 3868934)

SYMPTOM:
The system panics with a stack like the one below while deactivating the VVR
(Veritas Volume Replicator) batch write SIO:
 
panic_trap+000000 
vol_cmn_err+000194 
vol_rv_inactive+000090 
vol_rv_batch_write_start+001378
voliod_iohandle+000050 
voliod_loop+0002D0
vol_kernel_thread_init+000024

DESCRIPTION:
When VVR performs a batch write SIO and fails to reserve VVR I/O memory, the SIO
is put on a queue for restart and then deactivated. If the deactivation is
blocked for some time because the lock cannot be acquired, and during this period
the SIO is restarted because the I/O memory reservation request has been
satisfied, the SIO is corrupted by being deactivated twice. This causes the
system panic.

RESOLUTION:
Code changes have been made to remove the unnecessary SIO deactivation after the
VVR I/O memory reservation fails.

* 3904790 (Tracking ID: 3795788)

SYMPTOM:
Performance degradation is seen when many application sessions open the same data file on Veritas Volume Manager (VxVM) volume.

DESCRIPTION:
This issue occurs because of lock contention. When many application sessions open the same data file on the VxVM volume, the exclusive lock is acquired on all CPUs. If there are many CPUs in the system, this process can be quite time-consuming, which leads to performance degradation at the initial start of applications.

RESOLUTION:
The code is modified to change the exclusive lock to a shared lock when the data file on the volume is opened.
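
The idea of the fix, replacing an exclusive lock with a shared (reader) lock on the open path so concurrent opens do not serialize, can be illustrated with a minimal reader-writer lock sketch (a Python stand-in, not the VxVM primitive):

```python
import threading

class SharedLock:
    # Minimal reader-writer lock: many concurrent shared holders,
    # one exclusive holder, no one else while a writer is active.
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1          # readers proceed in parallel

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

With the exclusive lock, every open serialized behind every other; with the shared lock, opens (readers) proceed in parallel and only true modifications take the exclusive path.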

* 3904796 (Tracking ID: 3853049)

SYMPTOM:
On a server with a large number of CPUs, the stats output of vxstat is delayed
beyond the set interval. In addition, multiple sessions of vxstat impact I/O
performance.

DESCRIPTION:
vxstat acquires an exclusive lock on each CPU in order to gather the stats. This
affects the consolidation and display of stats in an environment with a huge
number of CPUs and disks; the stats output for a 1-second interval can be delayed
beyond the set interval. The lock acquisition also happens in the I/O path, which
affects I/O performance due to contention on these locks.

RESOLUTION:
The code is modified to remove the exclusive spinlock.

* 3904797 (Tracking ID: 3857120)

SYMPTOM:
If the volume is shared in a CVM configuration, the following stack traces will
be seen under a vxiod daemon suggesting an attempt to drain I/O. In this case,
CVM halt will be blocked and eventually time out.
The stack trace may appear as:
sleep+0x3f0  
vxvm_delay+0xe0  
volcvm_iodrain_dg+0x150  
volcvmdg_abort_complete+0x200  
volcvm_abort_sio_start+0x140  
voliod_iohandle+0x80

or 

cv_wait+0x3c() 
delay_common+0x6c 
vol_mv_close_check+0x68 
vol_close_device+0x1e4
vxioclose+0x24
spec_close+0x14c
fop_close+0x8c 
closef2+0x11c
closeall+0x3c 
proc_exit+0x46c
exit+8
post_syscall+0x42c
syscall_trap+0x188

Because vxconfigd is busy in a transaction trying to close the volume or drain
the I/O, all other threads that send a request to vxconfigd will hang.

DESCRIPTION:
VxVM maintains an I/O count of the in-progress I/O on the volume. When two
threads from VxVM asynchronously manipulate the I/O count on the volume, the
race between these threads might lead to stale I/O count remaining on the volume
even though the volume has actually completed all I/Os. Since there is an
invalid pending I/O count on the volume due to the race condition, the volume
cannot be closed.

RESOLUTION:
This issue has been fixed in the VxVM code manipulating the I/O count to avoid
the race condition between the two threads.

* 3904800 (Tracking ID: 3860503)

SYMPTOM:
Poor performance of vxassist mirroring is observed compared to using the raw dd
utility for mirroring.

DESCRIPTION:
There is heavy lock contention on high-end servers with a large number of CPUs,
because the copy of each region obtains some unnecessary per-CPU locks.

RESOLUTION:
VxVM code has been changed to decrease the lock contention.

* 3904801 (Tracking ID: 3686698)

SYMPTOM:
vxconfigd hangs due to a deadlock between two threads.

DESCRIPTION:
Two threads were waiting for the same lock, causing a deadlock between them and
blocking all VxVM commands.
The untimeout function does not return until the pending callback (set through
the timeout function) is cancelled or has completed its execution (if it has
already started). Therefore, locks acquired by the callback routine should not be
held across a call to the untimeout routine, or a deadlock may result.

Thread 1: 
    untimeout_generic()   
    untimeout()
    voldio()
    volsioctl_real()
    fop_ioctl()
    ioctl()
    syscall_trap32()
 
Thread 2:
    mutex_vector_enter()
    voldsio_timeout()
    callout_list_expire()
    callout_expire()
    callout_execute()
    taskq_thread()
    thread_start()

RESOLUTION:
Code changes have been made to call untimeout outside the lock taken by the
callback handler.
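
The same rule, cancel a pending timer callback outside any lock the callback itself takes, can be sketched with Python's threading.Timer as a stand-in for timeout()/untimeout():

```python
import threading

state_lock = threading.Lock()        # also taken by the timer callback

def callback():
    with state_lock:                 # the callback needs state_lock
        pass

timer = threading.Timer(60.0, callback)
timer.start()

# Deadlock-prone pattern: holding state_lock while cancelling, because the
# cancel (like untimeout) may have to wait for a callback that wants the
# same lock. Safe pattern, mirroring the fix: cancel first, outside the
# lock, then take the lock for the state update.
timer.cancel()
timer.join()
with state_lock:
    cancelled = not timer.is_alive()
```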

* 3904802 (Tracking ID: 3721565)

SYMPTOM:
vxconfigd hang is seen with below stack.
genunix:cv_wait_sig_swap_core
genunix:cv_wait_sig_swap 
genunix:pause
unix:syscall_trap32

DESCRIPTION:
In an FMR environment, a write is done on a source volume that has a
space-optimized (SO) snapshot. Memory is acquired first and then ILOCKs are
acquired on individual SO volumes for pushed writes. On the other hand, a user
write on the SO snapshot first acquires the ILOCK and then acquires memory. This
lock-ordering inversion causes a deadlock.

RESOLUTION:
The code is modified to resolve the deadlock.

* 3904804 (Tracking ID: 3486861)

SYMPTOM:
The primary node panics with the below stack when storage is removed while
replication is in progress with heavy I/O.
Stack:
oops_end 
no_context 
page_fault 
vol_rv_async_done 
vol_rv_flush_loghdr_done 
voliod_iohandle 
voliod_loop

DESCRIPTION:
In a VVR environment, when a write to the data volume fails on the primary node,
error handling is initiated. As part of it, the SRL header is flushed. Because
the primary storage has been removed, the flush fails, and a panic occurs when
invalid values are accessed while logging the error message.

RESOLUTION:
The code is modified to resolve the issue.

* 3904805 (Tracking ID: 3788644)

SYMPTOM:
When DMP (Dynamic Multi-Pathing) native support is enabled in an Oracle ASM
environment, constantly adding and removing DMP devices can cause errors like:
/etc/vx/bin/vxdmpraw enable oracle dba 775 emc0_3f84
VxVM vxdmpraw INFO V-5-2-6157
Device enabled : emc0_3f84
Error setting raw device (Invalid argument)

DESCRIPTION:
There is a limitation (8192) of maximum raw device number N (exclusive) of 
/dev/raw/rawN. This limitation is defined in boot configuration file. When binding a 
raw 
device to a dmpnode, it uses /dev/raw/rawN to bind the dmpnode. The rawN is 
calculated by one-way incremental process. So even if we unbind the device later on, 
the "released" rawN number will not be reused in the next binding. When the rawN 
number is increased to exceed the maximum limitation, the error will be reported.

RESOLUTION:
The code has been changed to always use the smallest available rawN number
instead of calculating it by the one-way incremental process.
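
A minimal sketch of the fixed allocator (hypothetical function; the real binding is done by the vxdmpraw script against /dev/raw/rawN): scan for the smallest free minor so numbers released by unbinding are reused and the limit is never exhausted by churn.

```python
RAW_MINOR_LIMIT = 8192   # assumed per-boot-config limit on /dev/raw/rawN

def alloc_raw_minor(in_use, limit=RAW_MINOR_LIMIT):
    # Return the smallest free minor in [1, limit), reusing released
    # numbers, instead of a one-way incrementing counter.
    for n in range(1, limit):
        if n not in in_use:
            in_use.add(n)
            return n
    raise OSError("no free raw device minors below %d" % limit)
```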

* 3904806 (Tracking ID: 3807879)

SYMPTOM:
Writing the backup EFI GPT disk label during the disk-group flush 
operation may cause data corruption on volumes in the disk group. The backup 
label could incorrectly get flushed to the disk public region and overwrite the 
user data with the backup disk label.

DESCRIPTION:
For EFI disks initialized under VxVM (Veritas Volume Manager), it 
is observed that during a disk-group flush operation, vxconfigd (veritas 
configuration daemon) could stop writing the EFI GPT backup label to the volume 
public region, thereby causing user data corruption. When this issue happens, 
the real user data are replaced with the backup EFI disk label

RESOLUTION:
The code is modified to prevent the writing of the EFI GPT backup 
label during the VxVM disk-group flush operation.

* 3904807 (Tracking ID: 3867145)

SYMPTOM:
When VVR SRL occupancy exceeds 90%, the SRL occupancy is logged only at
10-percent granularity.

DESCRIPTION:
This is an enhancement. Previously, SRL occupancy above 90% was logged with a
10-percent gap. The enhancement is to log it with 1-percent granularity.

RESOLUTION:
Changes have been made to log the syslog messages with 1-percent granularity when
the SRL is more than 90% full.
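
The logging policy can be sketched as follows (an assumed reading of the fix, with hypothetical helper names): coarse 10% steps below 90% occupancy, 1% steps at or above it.

```python
def srl_log_step(pct):
    # 1% granularity once occupancy reaches 90%, 10% steps below that.
    return 1 if pct >= 90 else 10

def should_log(pct, last_logged_pct):
    # Emit a syslog message only when occupancy crosses a new step
    # boundary relative to the last logged value.
    step = srl_log_step(pct)
    return pct // step > last_logged_pct // step
```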

* 3904810 (Tracking ID: 3871750)

SYMPTOM:
Parallel VxVM (Veritas Volume Manager) vxstat commands report abnormal disk I/O
statistics, like below:
# /usr/sbin/vxstat -g <dg name> -u k -dv -i 1 -S
...... 
dm  emc0_2480                       4294967210 4294962421           -382676k  
4294967.38 4294972.17
......

DESCRIPTION:
After VxVM I/O statistics were optimized for large numbers of CPUs and disks, a
race condition exists when multiple vxstat commands run to collect disk I/O
statistics. It causes a disk's latest I/O statistic value to become smaller than
the previous one; VxVM then treats the value as having overflowed, so abnormally
large I/O statistic values are printed.

RESOLUTION:
Code changes have been made to eliminate the race condition.

* 3904811 (Tracking ID: 3875563)

SYMPTOM:
While dumping the disk header information, the human-readable timestamp was not
converted correctly from the corresponding epoch time.

DESCRIPTION:
When disk group import fails if one of the disk is missing while
importing the disk group, it will dump the disk header information the syslog.
But, human readable time stamp was not getting converted correctly from
corresponding epoch time.

RESOLUTION:
Code changes have been made to dump the disk header information correctly.
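
The conversion in question is the standard epoch-to-string one; a sketch of what the dump routine needs to do (the format string is an assumption, not the exact VxVM output):

```python
import time

def disk_header_timestamp(epoch):
    # Convert the on-disk epoch value to a human-readable UTC string
    # before logging it with the rest of the disk header.
    return time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(epoch))
```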

* 3904819 (Tracking ID: 3811946)

SYMPTOM:
When invoking the "vxsnap make" command with the cachesize option to create a space-optimized snapshot, the command succeeds but the following error messages are displayed in the syslog:

kernel: VxVM vxio V-5-0-603 I/O failed.  Subcache object <subcache-name> does 
not have a valid sdid allocated by cache object <cache-name>.
kernel: VxVM vxio V-5-0-1276 error on Plex <plex-name> while writing volume 
<volume-name> offset 0 length 2048

DESCRIPTION:
When space optimized snapshot is created using "vxsnap make" command along with cachesize option, cache and subcache objects are created by the same command. During the creation of snapshot, I/Os from the volumes may be pushed onto a subcache even though the subcache ID has not yet been allocated. As a result, the I/O fails.

RESOLUTION:
The code is modified to make sure that I/Os on the subcache are 
pushed only after the subcache ID has been allocated.

* 3904822 (Tracking ID: 3755209)

SYMPTOM:
The Dynamic Multi-Pathing (DMP) device configured in a Solaris LDOM guest is
disabled when an active controller of an ALUA array fails.

DESCRIPTION:
DMP in the guest environment monitors the cached target port IDs of virtual paths
in the LDOM. If a controller of an ALUA array fails for some reason, the
active/primary target port ID of the array is changed in the I/O domain,
resulting in a stale entry in the guest. DMP in the guest wrongly interprets this
target port change and marks the path as unavailable, causing I/O on the path to
fail. As a result, the DMP device is disabled in the LDOM.

RESOLUTION:
The code is modified to not use the cached target port IDs for LDOM virtual 
disks.

* 3904824 (Tracking ID: 3795622)

SYMPTOM:
With Dynamic Multi-Pathing (DMP) Native Support enabled, LVM global_filter is
not updated properly in lvm.conf file to reject the newly added paths.

DESCRIPTION:
With DMP Native Support enabled, when new paths are added to existing LUNs, LVM
global_filter is not updated properly in lvm.conf file to reject the newly added
paths. This can lead to duplicate PV (physical volumes) found error reported by
LVM commands.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file
when new paths are added to existing disks.

* 3904825 (Tracking ID: 3859009)

SYMPTOM:
The pvs command shows duplicate PV messages because the global_filter in
lvm.conf is not updated after a fibre switch or storage controller is rebooted.

DESCRIPTION:
When a fibre switch or storage controller reboots, some paths' device numbers may
get reused during the DDL reconfiguration cycle. In this case, VxDMP (Veritas
Dynamic Multi-Pathing) does not treat them as newly added devices. For devices
belonging to an LVM dmpnode, VxDMP does not trigger an lvm.conf update for them.
As a result, the global_filter in lvm.conf is not updated. Hence the issue.

RESOLUTION:
The code has been changed to update lvm.conf correctly.

* 3904830 (Tracking ID: 3840359)

SYMPTOM:
When localized messages are used, some VxVM commands fail while executing vxrootadm. The error message is as follows:
VxVM vxmkrootmir ERROR V-5-2-3943 The Master Boot Record (MBR) could not be copied to the root disk mirror. To manually install it, follow the procedures in the VxVM Boot Disk Recovery chapter of the VxVM Trouble Shooting Guide.

DESCRIPTION:
The issue occurs when the output of the sfdisk command appears in the localized format. When the output is not translated into English language, a mismatch of messages is observed and command fails.

RESOLUTION:
The code is modified to convert the output of necessary commands in the scripts into English language before comparing it with the expected output.
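
The general technique, forcing the C locale so command output is in English before it is pattern-matched, can be sketched as follows (a hypothetical helper; the actual fix is in the VxVM shell scripts):

```python
import os
import subprocess

def run_unlocalized(cmd):
    # Run a command with LC_ALL=C so its output is not localized,
    # making subsequent string comparisons locale-independent.
    env = dict(os.environ, LC_ALL="C", LANG="C")
    return subprocess.run(cmd, capture_output=True, text=True, env=env).stdout
```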

* 3904831 (Tracking ID: 3802075)

SYMPTOM:
Disks that have a digit in their name and are added as a foreign path using
"vxddladm addforeign" go into the ERROR state after running vxdisk scandisks.

DESCRIPTION:
When a disk is added as foreign using 'vxddladm addforeign' and device discovery
is then performed using vxdisk scandisks, the whole-disk name is used, which is
not the exact name of the disk. When digits are added to the name of the disk
using a udev rule, the actual disk name should be used instead of the whole-disk
name.

RESOLUTION:
The code is modified to use the exact disk-device name, which adds the foreign
disk successfully.

* 3904833 (Tracking ID: 3729078)

SYMPTOM:
In a VVR environment, a panic may occur after SF (Storage Foundation) patch
installation or uninstallation on the secondary site.

DESCRIPTION:
The VXIO kernel reset invoked by SF patch installation removes all disk group
objects that do not have the preserve flag set. Because the preserve flag
overlaps with the RVG (Replicated Volume Group) logging flag, the RVG object is
not removed, but its rlink object is, resulting in a system panic when VVR starts.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904834 (Tracking ID: 3819670)

SYMPTOM:
When smartmove with the 'vxevac' command is run in the background by pressing 'ctrl-z' and running the 'bg' command, the execution of 'vxevac' terminates abruptly.

DESCRIPTION:
As part of the "vxevac" command for data movement, VxVM submits the data movement as a task in the kernel and uses the select() primitive on the task file descriptor to wait for task-finish events. When "ctrl-z" and bg are used to run vxevac in the background, select() returns -1 with errno EINTR. VxVM wrongly interprets this as a user termination action, and hence vxevac is terminated.
Instead of terminating vxevac, the select() should be retried until the task completes.

RESOLUTION:
The code is modified so that when select() returns with errno EINTR, it checks whether the vxevac task has finished. If not, the select() is retried.
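
The fixed loop can be sketched as follows (modern Python retries EINTR internally per PEP 475, so the except branch rarely fires there; the C code must loop explicitly):

```python
import select

def wait_for_task(fd, task_done, timeout=1.0):
    # If select() is interrupted by a signal (EINTR, e.g. after ctrl-z/bg),
    # check whether the task has finished and retry instead of treating the
    # interruption as a user termination request.
    while True:
        try:
            readable, _, _ = select.select([fd], [], [], timeout)
        except InterruptedError:        # EINTR
            if task_done():
                return True
            continue
        return bool(readable)
```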

* 3904851 (Tracking ID: 3804214)

SYMPTOM:
VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 (EIO) on the path being enabled.

Following error messages can be seen in /var/adm/messages:

<time-stamp hostname> vxdmp: [ID 808364 kern.notice] NOTICE: VxVM vxdmp V-5-3-0
dmp_open_path: Open failed with 5 for path 237/0x30
<time-stamp hostname> vxdmp: [ID 382146 kern.notice] NOTICE: VxVM vxdmp V-5-0-112
[Warn] disabled path 237/0x30 belonging to the dmpnode 307/0x38 due to open failure

DESCRIPTION:
While a disk is exported to the Solaris LDOM, Solaris OS in the control/IO domain
holds NORMAL mode open on the existing partitions of the DMP node. If the disk
partitions/label is changed from LDOM such that some of the older partitions are
removed, Solaris OS in the control/IO domain does not know about this change and
continues to hold NORMAL mode open on those deleted partitions. If a disabled DMP
path is enabled in this scenario, the NORMAL mode open of the path fails and the path
enable operation errors out. This can be worked around by detaching and
reattaching the disk to the LDOM. Due to a problem in DMP code, the stale NORMAL
mode open flag was not being reset even when the DMP disk was detached from the
LDOM. This was preventing the DMP path to be enabled even after the DMP disk was
detached from the LDOM.

RESOLUTION:
The code was fixed to reset the NORMAL mode open when the DMP disk is detached
from the LDOM. With this fix, the DMP disk has to be reattached to the LDOM only
once after the disk label changes. When the disk is reattached, it gets the
correct open mode (NORMAL/NDELAY) on the partitions that exist after the label
change.

* 3904858 (Tracking ID: 3899568)

SYMPTOM:
By design, "vxdmpadm iostat stop" cannot stop the iostat gathering
persistently. To avoid performance and memory crunch issues, it is
generally recommended to stop the iostat gathering. There is a requirement
to provide the ability to stop/start the iostat gathering persistently
in those cases.

DESCRIPTION:
Currently the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
is not a persistent setting. After a reboot the setting is lost, so customers
also have to put the command in init scripts at an appropriate place for a
persistent effect.

RESOLUTION:
The code is modified to provide a tunable "dmp_compute_iostats" which can
start/stop the iostat gathering persistently.

Notes:
Use the following command to start/stop the iostat gathering persistently.
# vxdmpadm settune dmp_compute_iostats=on/off.

* 3904859 (Tracking ID: 3901633)

SYMPTOM:
Lots of error messages like the following are reported while performing RVG 
sync.
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-1 ndigests 
received=0]]
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-2 ndigests 
received=0]]

DESCRIPTION:
While performing the last volume region read and sync, the volume end offset
calculation is incorrect, which may lead to reading and syncing beyond the volume
end. This results in an internal variable becoming negative, and vxrsync reports
an error. It can happen if the volume size is not a multiple of 512 KB and the
last 512 KB volume region is partly in use by VxFS.

RESOLUTION:
Code changes have been done to fix the issue.

* 3904861 (Tracking ID: 3904538)

SYMPTOM:
RV (Replicated Volume) I/O hangs during slave node leave or master node switch.

DESCRIPTION:
The RV I/O hang happens because the SRL (Serial Replication Log) header is
updated by the RV recovery SIO. After a slave node leaves or the master node
switches, RV recovery can be initiated. During RV recovery, all newly arriving
I/Os should be quiesced by setting the NEED RECOVERY flag on the RV to avoid
racing. Due to a code defect, this flag is removed by the transaction commit,
resulting in a conflict between new I/Os and the RV recovery SIO.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904863 (Tracking ID: 3851632)

SYMPTOM:
When you use localized messages, some VxVM commands fail while
mirroring the volume through vxdiskadm. The error message is similar to the
following:
 ? [y, n, q,?] (: y) y
 /usr/lib/vxvm/voladm.d/bin/disk.repl: test: unknown operator 1

DESCRIPTION:
The issue occurs when the output of the vxdisk list command
appears in a localized format. Because the output is not in English, the
expected messages do not match and the command fails.

RESOLUTION:
The code is modified to convert the output of the necessary commands in the 
scripts into English language before comparing it with the expected output.
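The resolution relies on a standard shell technique: forcing the C (English) locale for any command whose output a script string-compares. A minimal sketch of the idea follows; the wrapper name is hypothetical:

```shell
# Run a command under the C (English) locale so that its output can be
# compared against fixed English strings regardless of the user's
# localization settings. The wrapper name is hypothetical.
run_unlocalized() {
    LC_ALL=C LANG=C "$@"
}

# Example: output is now stable for string comparison, e.g.
#   run_unlocalized vxdisk list | grep "online"
run_unlocalized echo "online invalid"
```

Setting LC_ALL takes precedence over all other locale variables, so the wrapped command's messages and formatted output are emitted in the default English locale.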

* 3904864 (Tracking ID: 3769303)

SYMPTOM:
The system panics when the CVM group is brought online, with the following stack:

voldco_acm_pagein
voldco_write_pervol_maps_instant
voldco_map_update
voldco_write_pervol_maps
volfmr_copymaps_instant
vol_mv_get_attmir
vol_subvolume_get_attmir
vol_plex_get_attmir
vol_mv_fmr_precommit
vol_mv_precommit
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
ns_capable
volsioctl_real
mntput
path_put
vfs_fstatat
from_kgid_munged
read_tsc
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
sysenter_dispatch
voldco_get_accumulator

DESCRIPTION:
In the case of layered volumes, when the 'vxvol' command is triggered through
the 'vxrecover' command with the implicit '-Z vols' option, only the volumes
passed through the CLI are started; the respective top-level volumes remain
unstarted. As a result, the associated DCO volumes also remain unstarted. At
this point, if any plex of a sub-volume needs to be attached back, vxrecover
triggers the attach.
With DCO version 30, the vxplex command performs some map manipulation as part
of the plex-attach transaction. If the DCO volume is not started before the
plex attach, the in-core DCO contents are loaded improperly, and this leads to
the panic.

RESOLUTION:
The code is modified to handle the starting of appropriate associated volumes 
of a layered volume group.

* 3905471 (Tracking ID: 3868533)

SYMPTOM:
IO hang happens when starting replication. The vxio daemon hangs with a stack
like the following:

vx_cfs_getemap at ffffffffa035e159 [vxfs]
vx_get_freeexts_ioctl at ffffffffa0361972 [vxfs]
vxportalunlockedkioctl at ffffffffa06ed5ab [vxportal]
vxportalkioctl at ffffffffa06ed66d [vxportal]
vol_ru_start at ffffffffa0b72366 [vxio]
voliod_iohandle at ffffffffa09f0d8d [vxio]
voliod_loop at ffffffffa09f0fe9 [vxio]

DESCRIPTION:
While performing DCM replay with the SmartMove feature enabled, the VxIO
kernel module needs to issue an IOCTL to the VxFS kernel to get the file
system's free regions. To complete this IOCTL, VxFS needs to clone a map by
issuing IO back to VxIO. If an RLINK disconnection happens at just that time,
the RV is serialized to complete the disconnection. Because the RV is
serialized, all IOs, including the clone-map IO from VxFS, are queued to
rv_restartq, hence the deadlock.

RESOLUTION:
Code changes have been made to handle the deadlock situation.

* 3906251 (Tracking ID: 3806909)

SYMPTOM:
During installation of Volume Manager using CPI in keyless mode, the
following logs were observed:
VxVM vxconfigd DEBUG  V-5-1-5736 No BASIC license
VxVM vxconfigd ERROR  V-5-1-1589 enable failed: License has expired or is 
not available for operation transactions are disabled.

DESCRIPTION:
While using CPI for a STANDALONE DMP installation in keyless mode, the Volume
Manager daemon (vxconfigd) could not be started because of a modification in
the DMP NATIVE license string that is used for license verification; the
verification was failing.

RESOLUTION:
Appropriate code changes are incorporated so that the DMP keyless license
works with STANDALONE DMP.

* 3906566 (Tracking ID: 3907654)

SYMPTOM:
Storing cold data on dedicated SAN storage increases storage cost and
maintenance.

DESCRIPTION:
Local storage capacity is consumed by cold or legacy files that are rarely
accessed or processed. These files occupy dedicated SAN storage space, which
is expensive. Moving such files to public or private S3 cloud storage services
is a more cost-effective solution. Additionally, cloud storage is elastic,
allowing varying service levels based on changing needs. For public cloud
services, operational charges apply for managing objects in buckets using the
Storage Transfer Service.

RESOLUTION:
You can now migrate or move legacy data from local SAN storage to a private
or public cloud target.

* 3907017 (Tracking ID: 3877571)

SYMPTOM:
The disk header is updated even if the disk group import operation fails.

DESCRIPTION:
When a disk group import fails because of a disk failure, forcefully importing
the disk group requires identifying the disks that hold the latest
configuration copy. Without logs of disk header updates, it is very difficult
to decide which disk to choose.

RESOLUTION:
Improved the logging to track the disk header changes.

* 3907593 (Tracking ID: 3660869)

SYMPTOM:
Enhance the DRL dirty-ahead logging for sequential write workloads.

DESCRIPTION:
With the current DRL implementation, when sequential hints are passed by the
file system layer above, further regions in the DRL are dirtied ahead of time
so that the DRL write is already in place when new IO arrives on a region.
However, the current design has a flaw: the number of IOs on the DRL is
similar to the number of IOs on the data volume, because the same region is
dirtied again and again as part of the DRL IO. This can also hurt performance.

RESOLUTION:
To improve performance, the number of IOs issued to the DRL is reduced by
enhancing the implementation of dirty-ahead logging with DRL.
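The flaw can be pictured with a toy model: if the log does not remember which regions are already dirty, every sequential write issues a DRL update even when it lands in a region dirtied by the previous write. The sketch below is illustrative only; the region size, names, and bookkeeping are assumptions, not the DRL implementation:

```shell
# Toy model of dirty-region tracking. With a 512 KB region size, eight
# consecutive 64 KB writes land in the same region, so only the first
# one should cost a DRL update. Names and sizes are illustrative only.
REGION_KB=512
dirty_regions=""
drl_updates=0

write_io() {
    offset_kb=$1
    region=$((offset_kb / REGION_KB))
    case " $dirty_regions " in
        *" $region "*) ;;                      # region already dirty: no DRL IO
        *) dirty_regions="$dirty_regions $region"
           drl_updates=$((drl_updates + 1)) ;; # first write to region: one DRL IO
    esac
}

off=0
while [ "$off" -lt 1024 ]; do    # sixteen 64 KB sequential writes over 1024 KB
    write_io "$off"
    off=$((off + 64))
done
echo "$drl_updates"              # 2 regions touched -> 2 DRL updates, not 16
```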

* 3907595 (Tracking ID: 3907596)

SYMPTOM:
vxdmpadm setattr command gives the below error while setting the path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux can change after the system is rebooted, so the persistent attributes of a device are stored using the persistent 
hardware path. The hardware paths are stored as symbolic links in the directory /dev/vx/.dmp and are obtained from 
/dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changed to path_id_compat. Because 
the command changed, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the persistent 
attributes were not being set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.
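A defensive script can probe for whichever udev helper exists instead of hard-coding one. The sketch below shows the idea; the function name and search directories are assumptions (helper locations vary by distribution), not the actual VxVM script:

```shell
# Pick the udev helper used to derive a persistent hardware path:
# SLES12 ships path_id_compat where older releases shipped path_id.
# Function name and default search directories are hypothetical.
choose_path_id_helper() {
    # Optional arguments override the directories to search.
    dirs=${*:-"/lib/udev /usr/lib/udev"}
    for helper in path_id_compat path_id; do
        for dir in $dirs; do
            if [ -x "$dir/$helper" ]; then
                echo "$dir/$helper"
                return 0
            fi
        done
    done
    return 1    # neither helper found
}

# Usage on a live system (not run here):
#   helper=$(choose_path_id_helper) && "$helper" /block/sda
```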



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch vm-sles11_x86_64-Patch-6.2.1.500.tar.gz to /tmp
2. Untar vm-sles11_x86_64-Patch-6.2.1.500.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-sles11_x86_64-Patch-6.2.1.500.tar.gz
    # tar xf /tmp/vm-sles11_x86_64-Patch-6.2.1.500.tar
3. Copy the latest available 621 CPI hotfix to /tmp/hf
4. Untar the latest available 621 CPI hotfix to /tmp/hf

5. Install the patch. (Note that the installation of this P-Patch causes downtime.)
   Installing patch 6.2.1.500 also requires a CPI hotfix; the upgrade may fail without it.
    # cd /tmp/hf
    # ./installVRTSvxvm621P5 -require /tmp/hf/CPI_6.2.1_P12.pl [<host1> <host2>...]

6. To bring all the services and modules up, execute:
    # ./installVRTSvxvm621P5 -start -require /tmp/hf/CPI_6.2.1_P12.pl


7. To stop all the services:
    # ./installVRTSvxvm621P5 -stop -require /tmp/hf/CPI_6.2.1_P12.pl

You can also install this patch together with the 6.2.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] -require /tmp/hf/CPI_6.2.1_P12.pl [<host1> <host2>...]

Install the patch manually:
--------------------------
1. Before the upgrade:
  (a) Stop I/Os to all the VxVM volumes.
  (b) Unmount any file systems with VxVM volumes.
  (c) Stop applications using any VxVM volumes.
2. Check whether root support or DMP native support is enabled:
        # vxdmpadm gettune dmp_native_support
   If the current value is "on", DMP native support is enabled on this machine.
   If it is disabled, go to step 4; if it is enabled, go to step 3.
3. If DMP native support is enabled:
   a. Disable DMP native support:
        # vxdmpadm settune dmp_native_support=off
   b. Reboot the system:
        # reboot
4. Select the appropriate RPMs for your system, and upgrade to the new patch:
        # rpm -Uhv VRTSvxvm-6.2.1.500-SLES11.x86_64.rpm
5. Run vxinstall to get VxVM configured:
        # vxinstall
6. If DMP native support was enabled before the patch upgrade, re-enable it:
   a. Enable DMP native support:
        # vxdmpadm settune dmp_native_support=on
   b. Reboot the system:
        # reboot
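Steps 2 and 3 above can be scripted by parsing the tunable's current value. The sketch below is hedged: the sample `vxdmpadm gettune` output format (tunable name followed by its current value) is an assumption and should be verified against your installed VxVM release before use.

```shell
# Parse the current value of dmp_native_support from `vxdmpadm gettune`
# output. The sample line format used below (name, then current value)
# is an assumption; verify it against your VxVM release.
native_support_state() {
    # Reads gettune output on stdin; prints the second field ("on"/"off").
    awk '/dmp_native_support/ { print $2 }'
}

# On a live system (not run here), automating steps 2 and 3:
#   state=$(vxdmpadm gettune dmp_native_support | native_support_state)
#   if [ "$state" = "on" ]; then
#       vxdmpadm settune dmp_native_support=off   # step 3a
#       reboot                                    # step 3b
#   fi

# Example with canned output in the assumed format:
printf 'dmp_native_support on off\n' | native_support_state   # prints "on"
```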


REMOVING THE PATCH
------------------
# rpm -e VRTSvxvm-6.2.1.500-SLES11.x86_64 --nodeps


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE


