This page lists publicly released patches for Veritas Enterprise Products.
For product GA builds, see the Veritas Entitlement Management System (VEMS), available via the 'Licensing' option on Veritas Support.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vm-hpux1131-6.0.3.300
Obsolete
The latest patch(es): sfha-hpux1131-6.0.5

 Basic information
Release type: Patch
Release date: 2013-12-18
OS update support: None
Technote: None
Documentation: None
Popularity: 1704 viewed    84 downloaded
Download size: 411.5 MB
Checksum: 466242257

 Applies to one or more of the following products:
Dynamic Multi-Pathing 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation Cluster File System 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation for Oracle RAC 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation HA 6.0.1 On HP-UX 11i v3 (11.31)

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
sfha-hpux1131-6.0.5 2014-04-15

This patch supersedes the following patches: Release date
vm-hpux1131-6.0.3.100 (obsolete) 2013-09-13
vm-hpux1131-6.0.1.200 (obsolete) 2012-10-10

This patch requires: Release date
sfha-hpux1131-6.0.3 (obsolete) 2013-02-01

 Fixes the following incidents:
3358313, 3358345, 3358346, 3358348, 3358351, 3358352, 3358354, 3358357, 3358367, 3358368, 3358369, 3358371, 3358372, 3358374, 3358377, 3358379, 3358380, 3358381, 3358382, 3358404, 3358414, 3358416, 3358417, 3358418, 3358420, 3358423, 3358429, 3358430, 3358433, 3366688, 3366703, 3367778, 3368234, 3368236, 3374166, 3376953, 3387405, 3387417

 Patch ID:
PVCO_04029
PVKL_04030

 Readme file  [Save As...]
                          * * * READ ME * * *
                * * * Veritas Volume Manager 6.0.3 * * *
                        * * * Hot Fix 300 * * *
                         Patch Date: 2013-12-18


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 6.0.3 Hot Fix 300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
HP-UX 11i v3 (11.31)


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation Cluster File System 6.0.1
   * Veritas Storage Foundation 6.0.1
   * Veritas Storage Foundation High Availability 6.0.1
   * Veritas Dynamic Multi-Pathing 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: PVCO_04029, PVKL_04030
* 3358313 (3194358) Continuous messages are displayed in the syslog file for EMC
not-ready (NR) LUNs.
* 3358345 (2091520) Added the ability to move the configdb placement from one disk
to another using the "vxdisk set <disk> keepmeta=[always|skip|default]" command.
* 3358346 (3353211) After the BCV device comes back to the RW state, the open mode of one of the paths changes from NDELAY to NORMAL, while the other path still retains NDELAY mode.
* 3358348 (2665425) Enhance the vxdisk -px <attribute> list CLI interface to report vxvm disk attribute information
* 3358351 (3158320) [VxVM] command "vxdisk -px REPLICATED list" shows the same as "vxdisk -px REPLICATED_TYPE list"
* 3358352 (3326964) VxVM hangs in CVM environments in presence of FMR operations.
* 3358354 (3332796) The message "VxVM vxisasm INFO V-5-1-0 seeking block #..." is displayed while initializing a disk that is not an ASM disk.
* 3358357 (3277258) BAD TRAP panic in vxio:vol_mv_pldet_callback
* 3358367 (3230148) Clustered Volume Manager (CVM) hangs during split brain testing.
* 3358368 (3249264) Disks get into 'ERROR' state after being destroyed with the command 'vxdg destroy <dg-name>'
* 3358369 (3250369) vxdisk scandisks causes endless messages in syslog.
* 3358371 (3125711) The secondary node panics if it is rebooted while a reclaim is in progress on the primary.
* 3358372 (3156295) The permission and owner of /dev/raw/raw# device is wrong after reboot.
* 3358374 (3237503) System hang may happen after creating space-optimized snapshot with large size cache volume.
* 3358377 (3199398) The output of the command "vxdmpadm pgrrereg" depends on the order of the DMP node list; the terminal output reflects only the last LUN (DMP node).
* 3358379 (1783763) In a Veritas Volume Replicator (VVR) environment, the vxconfigd(1M) daemon may 
hang during a configuration change operation.
* 3358380 (2152830) A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
* 3358381 (2859470) The Symmetrix Remote Data Facility R2 (SRDF-R2) with the Extensible Firmware 
Interface (EFI) label is not recognized by Veritas Volume Manager (VxVM) and 
goes in an error state.
* 3358382 (3086627) "vxdisk -o thin, fssize list" command fails with error: VxVM vxdisk ERROR V-5-1-16282 Cannot retrieve stats: Bad address
* 3358404 (3021970) The secondary master node panics while an I/O load is running.
* 3358414 (3139983) Failed I/Os from SCSI are retried only on very few paths to a LUN instead of utilizing all the available paths
* 3358416 (3312162) Verification of data on DR site reports differences even though replication is up-to-date.
* 3358417 (3325122) In a CVR environment, creation of a stripe-mirror volume with logtype=dcm fails.
* 3358418 (3283525) DCO corruption after volume resize leads to vxconfigd hang
* 3358420 (3236773) Multiple error messages of format  "vxdmp V-5-3-0 dmp_indirect_ioctl: Ioctl 
Failed" can be seen during set/get failover-mode for EMC ALUA disk array.
* 3358423 (3194305) The replication status goes into the paused state because "vxstart_vvr start" does not start the vxnetd daemon automatically on the secondary side.
* 3358429 (3300418) Volume operations (vxassist, vxsnap, ...) create I/O requests to all drives in the DG.
* 3358430 (3258276) DMP paths keep a huge layered open count, which causes the ssd driver's total open count to overflow (0x80000000).
* 3358433 (3301470) All CVR nodes panic repeatedly due to null pointer dereference in vxio
* 3366688 (2957645) A vold restart on Linux floods the terminal with CVM-related error messages.
* 3366703 (3056311) For release < 5.1 SP1, allow disk initialization with CDS format using raw geometry.
* 3367778 (3152274) A "dd" command to an SRDF-R2 (write-disabled) device hangs and causes VxVM commands to hang for a long time, while the OS devices show no issue.
* 3368234 (3236772) The resizesrl and resizevol operations fail intermittently with the error "vradmin ERROR Lost connection to host".
* 3368236 (3327842) "vradmin verifydata" failed with "Lost connection to <host>; terminating command execution."
* 3374166 (3325371) Panic occurs in the vol_multistepsio_read_source() function when snapshots are 
used.
* 3376953 (3372724) Failed install of VxVM (Veritas Volume Manager) panics the server.
* 3387405 (3019684) I/O hangs on the master while the SRL is about to overflow.
* 3387417 (3107741) vxrvg snapdestroy fails with "Transaction aborted waiting for io drain" error and vxconfigd hangs for around 45 minutes


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: PVCO_04029, PVKL_04030

* 3358313 (Tracking ID: 3194358)

SYMPTOM:
Continuous I/O error messages on the OS device and DMP node can be seen in the
syslog, associated with EMC Symmetrix not-ready (NR) logical units.

DESCRIPTION:
VxVM tries to online the EMC not-ready (NR) logical units. As part of the disk
online process, it tries to read the disk label from the logical unit. Because
the logical unit is NR, the I/O fails. The failure messages are displayed in the
syslog file.

RESOLUTION:
The code is modified to skip the disk online for the EMC NR LUNs.

* 3358345 (Tracking ID: 2091520)

SYMPTOM:
Customers cannot selectively disable VxVM configuration copies on the disks 
associated with a disk group.

DESCRIPTION:
An enhancement is required to enable customers to selectively disable VxVM 
configuration copies on disks associated with a disk group.

RESOLUTION:
The code is modified to provide a "keepmeta=skip" option to the vxdisk(1M) "set" 
command to allow a customer to selectively disable VxVM configuration copies on 
disks that are part of a disk group.

* 3358346 (Tracking ID: 3353211)

SYMPTOM:
A. After EMC Symmetrix BCV (Business Continuance Volume) device switches to read-
write mode, continuous vxdmp (Veritas Dynamic Multi Pathing) error messages 
flood syslog as shown below:

NOTE VxVM vxdmp V-5-3-1061 dmp_restore_node: The path 18/0x2 has not yet aged - 
299
NOTE VxVM vxdmp 0 dmp_tur_temp_pgr: open failed: error = 6 dev=0x24/0xD0
NOTE VxVM vxdmp V-5-3-1062 dmp_restore_node: Unstable path 18/0x230 will not be 
available for I/O until 300 seconds
NOTE VxVM vxdmp V-5-3-1061 dmp_restore_node: The path 18/0x2 has not yet aged - 
299
NOTE VxVM vxdmp V-5-0-0 [Error] i/o error occurred (errno=0x6) on dmpnode 
36/0xD0
..
..

B. DMP metanode/path under DMP metanode getting disabled unexpectedly

DESCRIPTION:
A. DMP caches the last discovery NDELAY open for the BCV dmpnode paths. 
Switching a BCV device to read-write mode is an array-side operation. Typically 
in such cases system administrators are required to run

1. vxdisk rm <accessname>

OR 

In case of parallel backup jobs,

1. vxdisk offline <accessname>
2. vxdisk online <accessname>

This causes DMP to close the cached open, and during the next discovery the 
device is opened in read-write mode. If the above steps are skipped, the DMP 
device enters a state where one of the paths is in read-write mode while the 
others remain in NDELAY mode.

If the upper layers request a NORMAL open, DMP has code to close the NDELAY 
cached open and reopen the device in NORMAL mode. During the online of the 
dmpnode, however, this happens for only one of the dmpnode's paths.

B. DMP performs error analysis for paths on which I/O failed. In some cases the 
SCSI probes sent failed with return values/sense codes that were not handled 
by DMP, causing the paths to get disabled.

RESOLUTION:
A. DMP EMC ASL (Array Support Library) is modified to handle case A for EMC
Symmetrix arrays.

B. DMP code is modified to handle SCSI conditions correctly for case B.

* 3358348 (Tracking ID: 2665425)

SYMPTOM:
The "vxdisk -px <attribute> list" CLI does not support some basic vxvm
attributes, nor does it allow a user to specify multiple attributes in a
specific sequence. The display layout was not presented in a readable or
parsable manner.

DESCRIPTION:
Some basic VxVM disk attributes, which are useful for customizing the command
output, are not supported by the "vxdisk -px <attribute> list" CLI. The
display output is also not aligned by column or suitable for parsing by a
utility. In addition, the CLI does not allow multiple attributes to be
specified in a usable manner.

RESOLUTION:
Support for the following VxVM disk attributes was added to the CLI:

 SETTINGS              ALERTS                INFO
 HOSTID                DISK_TYPE             FORMAT
 DA_INFO               PRIV_OFF              PRIV_LEN
 PUB_OFF               PUB_LEN               PRIV_UDID
 DG_NAME               DGID                  DG_STATE
 DISKID                DISK_TIMESTAMP        STATE

The CLI has been enhanced to support multiple attributes separated by a comma,
and to align the display output by a column, separable by a comma for 
parsability.

 For example:

 # vxdisk -px ENCLOSURE_NAME, DG_NAME, LUN_SIZE, SETTINGS, state list
 DEVICE ENCLOSURE_NAME DG_NAME LUN_SIZE   SETTINGS             STATE
 sda    disk           -       143374650  -                    online
 sdb    disk           -       143374650  -                    online
 sdc    storwizev70000 fencedg 10485760   thinrclm, coordinator online

* 3358351 (Tracking ID: 3158320)

SYMPTOM:
VxVM (Veritas Volume Manager) command "vxdisk -px REPLICATED list <disk>" shows 
wrong output.

DESCRIPTION:
"vxdisk -px REPLICATED list <disk>" shows the same output as "vxdisk -px 
REPLICATED_TYPE list <disk>" and doesn't work as designed to show the values 
as "yes", "no" or "-".
It is because the command line parameter specified is parsed wrongly so that 
the REPLICATED attribute is wrongly dealt as REPLICATED_TYPE.

RESOLUTION:
Code changes have been made to handle the "REPLICATED" attribute correctly.

* 3358352 (Tracking ID: 3326964)

SYMPTOM:
VxVM (Veritas Volume Manager) hangs in CVM (Clustered Volume Manager) 
environments in presence of FMR/Flashsnap (Fast Mirror Resync) operations.

DESCRIPTION:
During split-brain testing in the presence of FMR activities, when there are 
errors on the DCO (Data Change Object), the DCO error-handling code ties up the 
CPU because the same error gets set again in its handler. This causes the VxVM 
SIO (Staged IO) to loop over the same code, causing the hang.

RESOLUTION:
Code changes are made to appropriately handle the error prone scenario without 
causing an infinite loop.

* 3358354 (Tracking ID: 3332796)

SYMPTOM:
The following message is seen while initializing any EFI disk, even though the 
disk was not previously used as an ASM disk:
"VxVM vxisasm INFO v-5-1-0 seeking block #... "

DESCRIPTION:
As part of disk initialization, for every EFI disk VxVM checks whether the disk 
has an ASM label. The message "VxVM vxisasm INFO v-5-1-0 seeking block #..." is 
printed unconditionally, which is unnecessary.

RESOLUTION:
Code changes have been made to not display the message.

* 3358357 (Tracking ID: 3277258)

SYMPTOM:
When DRL (Dirty Region Log) was off and a detach of a plex was attempted on a
mirrored volume, the following panic occurred:

Unix:panicsys+0x48()
unix:vpanic_common+0x78()
unix:panic+0x1c()
unix:die+0x78()
unix:trap+0x9e0()
unix:ktl0+0x48()
-- trap data  type: 0x31 (data access MMU miss)  rp: 0x2a1033396d0  --
  addr: 0xf0
pc:  0x7c589f8c vxio:vol_mv_pldet_callback+0x94:   lduw [%o1 + 0xf0], %i2
npc: 0x7c589f90 vxio:vol_mv_pldet_callback+0x98:   andcc  %i2, %i5        ( 
btst  %i2 %i5 )

-----------------------------------------------------------------------------
<trap>vxio:vol_mv_pldet_callback+0x94()
vxio:vol_klog_start+0x98()
vxio:voliod_iohandle+0x30()
vxio:voliod_loop+0x3e0()
unix:thread_start+0x4()
-- end of kernel thread's stack --

DESCRIPTION:
In the plex detach code, there was no conditional check for the presence of the
DRL object before checking the DRL version. Passing a NULL value to the DRL
version check results in a system panic.

RESOLUTION:
The DRL version check is removed, since it is not necessary at this point;
later code itself handles the version check.

* 3358367 (Tracking ID: 3230148)

SYMPTOM:
Clustered Volume Manager (CVM) hangs during split brain testing.

DESCRIPTION:
During split-brain testing in the presence of FMR activities, a read-writeback 
operation/SIO (Staged IO) can be issued as part of a DCO (Data Change Object) 
chunk update. This SIO tries to read from plex1, and when this read fails, it 
reads from the other available plex(es) and performs a write on all other 
plexes. As the other plex has already failed, the write operation also fails 
and gets retried with IOSHIPPING, which also fails due to unavailability of the 
plex from the other nodes as well (because of the split-brain testing). As the 
remote plex is unavailable, the write fails again and serialization is called 
again on this SIO, during which the system hangs due to a mismatch in the 
active and serial counts.

RESOLUTION:
Code changes have been done to take care of active/serial counts when the SIOs 
are restarted with IOSHIPPING.

* 3358368 (Tracking ID: 3249264)

SYMPTOM:
Thin disks, and disk groups containing thin disks, go into the ERROR state or 
lose the configuration copy after a reclaim operation is performed on the disk.

DESCRIPTION:
Disks of disk groups with volumes created with the option init=zero on thin-
reclaim disks, formatted as sliced, get into the ERROR state after being 
destroyed with the vxdg destroy command. As the partition offset is not taken 
into consideration for these types of disks, the private region data is lost, 
resulting in the disks going into the ERROR state.

RESOLUTION:
The code is modified to consider disk_offset during operations on disks 
formatted as sliced.

* 3358369 (Tracking ID: 3250369)

SYMPTOM:
The following command triggers a large number of events:

# vxdisk scandisks

DESCRIPTION:
Execution of the command triggers a re-online of all the disks, which involves 
reading the private region from all the disks. Failure of these read I/Os 
generates error events, which are notified to all the clients waiting on 
"vxnotify". One such client is the "vxattachd" daemon. The daemon initiates a 
"vxdisk scandisks" when the number of events is more than 256. Thus 
"vxattachd" initiates another cycle of the above activity, resulting in endless 
events.

RESOLUTION:
The count value which triggers the vxattachd daemon is changed from 256 to 
1024. The DMP events are further subcategorized, as per the requirements of the 
vxattachd daemon.

* 3358371 (Tracking ID: 3125711)

SYMPTOM:
If the secondary node is rebooted while a reclaim is in progress on the primary
node, it panics with the following stack:
do_page_fault 
page_fault 
dmp_reclaim_device 
dmp_reclaim_storage
gendmpioctl 
dmpioctl 
vol_dmp_ktok_ioctl 
voldisk_reclaim_region 
vol_reclaim_disk 
vol_subdisksio_start 
voliod_iohandle 
voliod_loop 
...

DESCRIPTION:
In a VVR environment, there was a corner case in the reclaim operation on the
secondary where the reclaim length was calculated incorrectly, leading to a
memory allocation failure. This resulted in a panic.

RESOLUTION:
The condition is modified to calculate the reclaim length correctly.

* 3358372 (Tracking ID: 3156295)

SYMPTOM:
When DMP (Dynamic Multi-pathing) native support is enabled for Oracle ASM
(Automatic Storage Management) devices, the permission and ownership of 
'/dev/raw/raw#' devices goes wrong after reboot.

DESCRIPTION:
When VxVM binds the DMP (Dynamic Multi-Pathing) devices to raw devices during a
reboot, it invokes the 'raw' command to create raw devices and then tries to 
set their permission and ownership immediately after the 'raw' command is
invoked asynchronously. In some cases, however, the raw device is not yet 
created at the time VxVM tries to set the permission and ownership. The raw 
device eventually gets created, but the correct permission and ownership are 
never set.

RESOLUTION:
Code changes are done to set the permission and ownership of the raw devices
when DMP receives the OS event indicating that raw device creation has 
finished. This ensures that the correct permission and ownership of the raw 
devices are set.

* 3358374 (Tracking ID: 3237503)

SYMPTOM:
System hang may happen after creating space-optimized snapshot with large size
cache volume.

DESCRIPTION:
For all changes written to the cache volume after the snapshot volume is 
created, a translation map with a B+tree structure is used to speed up 
search/insert/delete operations. When inserting a node into the tree, type-
casting the page offset to 'unsigned int' truncates any offset beyond the 
maximum 32-bit integer. The truncation corrupts the B+tree, thereby resulting 
in an SIO (VxVM Staged IO) hang.

RESOLUTION:
Code changes were made to remove all type casting to 'unsigned int' in cache
volume code.

* 3358377 (Tracking ID: 3199398)

SYMPTOM:
Output of the command "vxdmpadm pgrrereg" depends on the order of DMP (Dynamic MultiPathing) node 
list where the terminal output depends on the last LUN (DMP node).

1. Terminal message when PGR (Persistent Group Reservation) re-registration succeeds on the last LUN

  # vxdmpadm pgrrereg
  VxVM vxdmpadm INFO V-5-1-0 DMP PGR re-registration done for ALL PGR enabled dmpnodes.

2. Terminal message when PGR re-registration fails on the last LUN

  # vxdmpadm pgrrereg
  vxdmpadm: Permission denied

DESCRIPTION:
"vxdmpadm pgrrereg" command has been introduced to support the facility to move a guest 
OS on one physical node to another node. In Solaris LDOM environment, the feature is called "Live 
Migration". When a customer is using I/O fencing feature and a guest OS is moved to another physical 
node, I/O will not be succeeded in the guest OS after the physical node migration because each DMP nodes 
of the guest OS doesn't have a valid SCSI-3 PGR key as the physical HBA is changed. This command will 
help on re-registering the valid PGR keys for new physical nodes, however its command output is 
depending on the last LUN (DMP node).

RESOLUTION:
Code changes are done to log the re-registration failures in the system log 
file. The terminal output now instructs the user to look in the system log when 
an error is seen on a LUN.

* 3358379 (Tracking ID: 1783763)

SYMPTOM:
In a VVR environment, the vxconfigd(1M) daemon may hang during a configuration 
change operation. The following stack trace is observed:
delay
vol_rv_transaction_prepare
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
volsioctl
vols_ioctl
...

DESCRIPTION:
Incorrect serialization primitives are used. This causes the vxconfigd(1M) 
daemon to hang.

RESOLUTION:
The code is modified to use the correct serialization primitives.

* 3358380 (Tracking ID: 2152830)

SYMPTOM:
A diskgroup (DG) import fails with a non-descriptive error message when 
multiple copies (clones) of the same device exist and the original devices are 
either offline or not available.
For example:
# vxdg import mydg
VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import
failed:
No valid disk found containing disk group

DESCRIPTION:
If the original devices are offline or unavailable, the vxdg(1M) command picks 
up cloned disks for import. The DG import fails unless the clones are tagged 
and the tag is specified during the DG import. The import failure is expected, 
but the error message is non-descriptive and does not specify the corrective 
action to be taken by the user.

RESOLUTION:
The code is modified to give the correct error message when duplicate clones 
exist during import. Also, details of the duplicate clones are reported in the 
system log.

* 3358381 (Tracking ID: 2859470)

SYMPTOM:
The EMC SRDF-R2 disk may go in error state when the Extensible Firmware 
Interface (EFI) label is created on the R1 disk. For example:
R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1

R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2

DESCRIPTION:
Since R2 disks are in write protected mode, the default open() call made for 
the read-write mode fails for the R2 disks, and the disk is marked as invalid.

RESOLUTION:
The code is modified to change Dynamic Multi-Pathing (DMP) to be able to read 
the EFI label even on a write-protected SRDF-R2 disk.

* 3358382 (Tracking ID: 3086627)

SYMPTOM:
"vxdisk -o thin, fssize list" command fails with error:
VxVM vxdisk ERROR V-5-1-16282 Cannot retrieve stats: Bad address

DESCRIPTION:
This issue happens when the system has more than 200 LUNs. VxVM reads file 
system statistical information for each LUN to generate file system size data, 
but after reading the information for the first 200 LUNs, the buffer was not 
reset correctly. Subsequent accesses to the buffer address therefore generate 
this error.

RESOLUTION:
Code changes are done to properly reset the buffer address.

* 3358404 (Tracking ID: 3021970)

SYMPTOM:
Secondary node panics due to NULL pointer dereference while freeing an 
interlock.

page_fault
volsio_ilock_free
vol_rv_inactivate_wsio
vol_rv_restart_wsio
vol_rv_serialise_sec_logging
vol_rv_serialize
vol_rv_errorhandler_start
voliod_iohandle
voliod_loop
...

DESCRIPTION:
The panic is seen if there is a node crash/node reconfiguration on the primary.
The secondary did not correctly handle the updates for the period of the crash,
which resulted in a panic.

RESOLUTION:
Necessary code changes have been done to properly handle the freeing of the 
interlock for node crashes/reconfigurations on the primary side.

* 3358414 (Tracking ID: 3139983)

SYMPTOM:
Failed I/Os from SCSI are retried on only a few paths to a LUN instead of 
utilizing all the available paths. At times this results in multiple I/O 
retries without success, so DMP sends an I/O failure to the application, 
bounded by the recoveryoption tunable.
 
 The following messages are displayed in the console log:
 [..]
 Mon Apr xx 04:18:01.885: I/O analysis done as DMP_PATH_OKAY on Path
 <path-name> belonging to Dmpnode <dmpnode-name> Mon Apr xx 04:18:01.885: I/O 
error occurred (errno=0x0) on Dmpnode <dmpnode-name> [..]

DESCRIPTION:
When an I/O failure is returned to DMP with a retry error from SCSI, DMP 
retries that I/O on another path. However, it failed to choose the path that 
has the higher probability of successfully handling the I/O.

RESOLUTION:
The code is modified to implement this intelligence of choosing appropriate 
paths that can successfully process the I/Os during retries.

* 3358416 (Tracking ID: 3312162)

SYMPTOM:
Data Corruption may occur on VVR DR (Secondary) Site. Following signs may 
indicate corruption:
1) 'vradmin verifydata' reports data differences even though replication is up-
to-date.
2) Secondary site may require a Full Fsck after 'Migrate'/'Takeover' Operations.
3) Error messages of following form may appear:
Example:
msgcnt 21 mesg 017: V-2-17: vx_dirlook - /dev/vx/dsk/<dgname>/<volname>  file 
system inode <inode number> marked bad incore
4) Silent corruption may occur without any visible errors.

DESCRIPTION:
With Secondary Logging enabled, replicated data on DR site gets written on to 
its SRL first, and later applied on the corresponding Data Volumes. While the 
writes from SRL are being flushed on to the data volumes, data corruption might 
occur, provided all the following conditions occur together:
 
1) Multiple writes for the same data block must occur in a short span of time, 
i.e while the given set of SRL writes are being flushed on to its data 
volumes.  (and)
2) Based on relative timing, locks to perform these writes (which occur on the 
same data block) get granted out of order, thus, leading to writes themselves 
being applied out of order.

RESOLUTION:
Code changes have been done to protect write-order fidelity by ensuring that 
locks are granted in strict order.

* 3358417 (Tracking ID: 3325122)

SYMPTOM:
In a CVR environment, creation of a stripe-mirror volume with logtype=dcm 
failed with the following error:
VxVM vxplex ERROR V-5-1-10128  Unexpected kernel error in configuration update

DESCRIPTION:
In layered volumes, DCM plexes are attached to the storage volumes and not to
the top-level volume. An error condition was not handled correctly in the CVR
configuration.

RESOLUTION:
The code is modified to handle the DCM plex placement in the layered-volume case.

* 3358418 (Tracking ID: 3283525)

SYMPTOM:
Stopping and starting the data volume (with an associated DCO volume) results 
in a vxconfigd hang with the stack below. The data volume had undergone a 
vxresize earlier.


#0 [ffff882fdf625708] schedule at ffffffff8143f640
#1 [ffff882fdf625850] volsync_wait at ffffffffa10117a5 [vxio]
#2 [ffff882fdf6258c0] volsiowait at ffffffffa10af89b [vxio]
#3 [ffff882fdf625940] volpvsiowait at ffffffffa10af968 [vxio]
#4 [ffff882fdf625a30] voldco_get_accumulator at ffffffffa1037741 [vxio]
#5 [ffff882fdf625a50] voldco_acm_pagein at ffffffffa1037864 [vxio]
#6 [ffff882fdf625b30] voldco_write_pervol_maps_instant at ffffffffa103acb0 
[vxio]
#7 [ffff882fdf625bb0] voldco_write_pervol_maps at ffffffffa101d34d [vxio]
#8 [ffff882fdf625c70] volfmr_copymaps_instant at ffffffffa1072c49 [vxio]
#9 [ffff882fdf625d00] vol_mv_precommit at ffffffffa10885db [vxio]
#10 [ffff882fdf625d40] vol_commit_iolock_objects at ffffffffa107fd9a [vxio]
#11 [ffff882fdf625d90] vol_ktrans_commit at ffffffffa1080b80 [vxio]
#12 [ffff882fdf625de0] volconfig_ioctl at ffffffffa10f41b9 [vxio]
#13 [ffff882fdf625e10] volsioctl_real at ffffffffa10fc513 [vxio]
#14 [ffff882fdf625ee0] vols_ioctl at ffffffffa05fe113 [vxspec]
#15 [ffff882fdf625f00] vols_compat_ioctl at ffffffffa05fe18c [vxspec]
#16 [ffff882fdf625f10] compat_sys_ioctl at ffffffff8119b413
#17 [ffff882fdf625f80] sysenter_dispatch at ffffffff8144aaf0

DESCRIPTION:
In the VxVM code, the Data Change Object (DCO) Table of Contents (TOC) entry 
was not marked with the appropriate flag, which prevents the new in-core map 
size from being flushed to disk. This leads to corruption. A subsequent stop 
and start of the volume reads the incorrect TOC from disk, detecting the 
corruption and resulting in a vxconfigd hang.

RESOLUTION:
The DCO TOC entry is marked with the appropriate flag, which ensures that the 
in-core data is flushed to disk, preventing the corruption and the subsequent 
vxconfigd hang.

During a volume grow, if the grow of the paging module fails, the DCO TOC may 
not be updated to the current size, which could lead to an inconsistent DCO. 
The fix makes sure the precommit fails if the paging module grow failed.

* 3358420 (Tracking ID: 3236773)

SYMPTOM:
"vxdmpadm getattr enclosure <enclr_name> failovermode" generates multiple "vxdmp 
V-5-3-0 dmp_indirect_ioctl: Ioctl Failed" error messages in syslog if the
enclosure is configured as EMC ALUA.

DESCRIPTION:
An EMC disk array in ALUA mode supports only the "implicit" type of failover 
mode. Moreover, such a disk array does not support setting or getting the 
failover mode, so any set/get attempts for the failover-mode attribute generate 
"Ioctl Failed" error messages.

RESOLUTION:
The code is modified for the set/get of failover-mode in the EMC ALUA hardware 
configuration so that the error messages are no longer generated.

* 3358423 (Tracking ID: 3194305)

SYMPTOM:
In a VVR environment, the replication status goes into the paused state because 
"vxstart_vvr start" does not start the vxnetd daemon automatically on the 
secondary side.

vradmin -g vvrdg repstatus vvrvg
Replicated Data Set: vvrvg
Primary:
  Host name:                  Host IP
  RVG name:                   vvrvg
  DG name:                    vvrdg
  RVG state:                  enabled for I/O
  Data volumes:               1
  VSets:                      0
  SRL name:                   srlvol
  SRL size:                   5.00 G
  Total secondaries:          1
Secondary:
  Host name:                  Host IP
  RVG name:                   vvrvg
  DG name:                    vvrdg
  Data status:                consistent, up-to-date
  Replication status:         paused due to network disconnection
  Current mode:               asynchronous
  Logging to:                 SRL
  Timestamp Information:      behind by  0h 0m 0s

DESCRIPTION:
The vxnetd daemon is found stopped on the secondary, as a result of which the 
replication status is shown as paused on the primary. vxnetd needs to start 
gracefully on the secondary for replication to be in the proper state.

RESOLUTION:
Necessary code changes have been done to implement an internal retryable 
mechanism for starting vxnetd.

* 3358429 (Tracking ID: 3300418)

SYMPTOM:
VxVM volume operations on shared volumes cause unnecessary read I/Os 
against disks that have both config copy and log copy disabled on slaves.

DESCRIPTION:
The unnecessary disk read I/Os are generated on slaves while the private
region information is refreshed into memory during a VxVM transaction. There
is no need to refresh the private region information when the disk already
has both the config copy and the log copy disabled.

RESOLUTION:
The code is modified to skip the refresh when both the config copy and the
log copy are disabled, on both the master and the slaves.
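The skip described above amounts to a guard placed before the expensive disk
reads. A hedged shell sketch with illustrative names ("read_private_region"
is a stand-in; the real check lives in VxVM kernel/transaction code):

```shell
#!/bin/sh
# Guard sketch: skip the private-region refresh when the disk has both
# its config copy and its log copy disabled. All names are illustrative.
maybe_refresh_private_region() {
    disk="$1"
    config_copy="$2"    # "enabled" or "disabled"
    log_copy="$3"       # "enabled" or "disabled"
    if [ "$config_copy" = "disabled" ] && [ "$log_copy" = "disabled" ]; then
        return 0        # nothing to refresh; avoid the read I/Os
    fi
    read_private_region "$disk"
}
```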

* 3358430 (Tracking ID: 3258276)

SYMPTOM:
When dmp_cache_open is enabled, DMP (Dynamic Multi-Pathing) paths accumulate
a huge number of layered opens, which causes the SSD driver's total open
count to overflow (0x80000000). The system panics with the following stack:
unix:panicsys+0x48()
unix:vpanic_common+0x78()
genunix:cmn_err+0x98()
genunix:mod_rele_dev_by_major+0x80()
genunix:ddi_rele_driver()
vxdmp:dmp_dev_close+0x18()
vxdmp:gendmpclose+0x29c()
genunix:dev_close() 
vxdmp:dmp_dev_close+0xec()
vxdmp:dmp_indirect_ioctl+0x1b4()
vxdmp:gendmpioctl() 
vxdmp:dmpioctl+0x20()
specfs:spec_ioctl() 
genunix:fop_ioctl+0x20()
genunix:ioctl+0x184()
unix:syscall_trap32+0xcc()

DESCRIPTION:
There is an open leak in DMP that causes the SSD driver's total open count to
overflow, leading to a system panic.

RESOLUTION:
Code changes have been made to avoid any open leaks.

* 3358433 (Tracking ID: 3301470)

SYMPTOM:
In a CVR environment, a recovery on the primary side causes all the nodes to
panic with the following stack:
trap
ktl0
search_vxvm_mem
voliomem_range_iter
vol_ru_alloc_buffer_start
voliod_iohandle
voliod_loop

DESCRIPTION:
Recovery attempts a zero-sized readback from the SRL, which results in a
panic.

RESOLUTION:
The code is modified to handle the corner case that resulted in a zero-sized
readback.

* 3366688 (Tracking ID: 2957645)

SYMPTOM:
The terminal is flooded with error messages like the following:

VxVM INFO V-5-2-16543 connresp: new client ID allocation failed for cvm nodeid 
* with error *.

DESCRIPTION:
When vxconfigd is restarted and fails to get a client ID, there is no need to
print the error message at the default level; the messages flood the terminal.

RESOLUTION:
The code is modified to print these messages only at the debug level.
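The change amounts to level-gated logging: chatty diagnostics are emitted
only when debug output is requested. A small shell sketch of the pattern;
"VXVM_DEBUG" is a hypothetical toggle for illustration, not a real vxconfigd
tunable:

```shell
#!/bin/sh
# Level-gated logging sketch. VXVM_DEBUG is an illustrative variable.
log_debug() {
    # Chatty diagnostics are printed only when debugging is enabled.
    if [ "${VXVM_DEBUG:-0}" -ge 1 ]; then
        echo "DEBUG: $*" >&2
    fi
    return 0
}

log_error() {
    # Genuine errors are still printed unconditionally.
    echo "ERROR: $*" >&2
}
```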

* 3366703 (Tracking ID: 3056311)

SYMPTOM:
The following problems can be seen on disks that are initialized with the
5.1SP1 listener and used with older releases such as 4.1, 5.0, and 5.0.1:
1. Volume creation fails on a disk, indicating insufficient space available.
2. Data corruption is seen; the CDS backup label signature appears within the
PUBLIC region data.
3. Disks greater than 1 TB in size appear "online invalid" on older releases.

DESCRIPTION:
The VxVM listener can be used to initialize boot disks and data disks for use
with older VxVM releases. For example, the 5.1SP1 listener can be used to
initialize disks for all previous VxVM releases such as 5.0.1, 5.0, and 4.1.
From 5.1SP1 onwards, VxVM always uses the Fabricated geometry when
initializing a disk with the CDS format.
Older releases such as 4.1, 5.0, and 5.0.1 use the Raw geometry and do not
honor the LABEL geometry. Hence, if a disk is initialized through the 5.1SP1
listener, it is stamped with the Fabricated geometry. When such a disk is
used with older VxVM releases such as 5.0.1, 5.0, or 4.1, there can be a
mismatch between the stamped geometry (Fabricated) and the in-memory geometry
(Raw). If the on-disk cylinder size is smaller than the in-memory cylinder
size, data corruption can occur.
To prevent data corruption, disks intended for older releases must be
initialized through the listener with the older CDS format using the Raw
geometry.
Also, if the disk size is 1 TB or larger, 5.1SP1 VxVM initializes the disk
with the CDS EFI format, which older releases such as 4.1, 5.0, and 5.0.1 do
not understand.

RESOLUTION:
From release 5.1SP1 onwards, a disk to be used with older releases such as
4.1, 5.0, and 5.0.1 is initialized with the Raw geometry through the HP-UX
listener. Also, initialization through the HP-UX listener of a disk larger
than 1 TB fails.

* 3367778 (Tracking ID: 3152274)

SYMPTOM:
An I/O hang is seen with Not-Ready (NR) or Write-Disabled (WD) LUNs. The
system syslog is flooded with I/O error messages like:

Apr 22 03:09:51 d2950rs3 kernel: [162164.751628] 
Apr 22 03:09:51 d2950rs3 kernel: [162164.751632] VxVM vxdmp V-5-0-0 [Error] i/o
error occurred (errno=0x6) on dmpnode 201/0xb0
Apr 22 03:09:51 d2950rs3 kernel: [162164.751634] 
Apr 22 03:09:51 d2950rs3 kernel: [162164.751637] VxVM vxdmp V-5-0-0 [Error] i/o
error occurred (errno=0x6) on dmpnode 201/0xb0
Apr 22 03:09:51 d2950rs3 kernel: [162164.751639] 
Apr 22 03:09:51 d2950rs3 kernel: [162164.751643] VxVM vxdmp V-5-0-0 [Error] i/o
error occurred (errno=0x6) on dmpnode 201/0xb0
Apr 22 03:09:51 d2950rs3 kernel: [162164.751644] 
Apr 22 03:09:51 d2950rs3 kernel: [162164.751648] VxVM vxdmp V-5-0-0 [Error] i/o
error occurred (errno=0x6) on dmpnode 201/0xb0
..
..

DESCRIPTION:
For performance reasons, DMP immediately routes a failed I/O through an
alternate available path while performing asynchronous error analysis on the
failed path. A Not-Ready (NR) device rejects all kinds of I/O requests, and a
Write-Disabled (WD) device rejects write I/O requests, but both respond
normally to SCSI probes such as Inquiry. Due to a code bug, I/O to such
devices kept being retried on different paths through DMP's asynchronous
error analysis instead of being terminated.

RESOLUTION:
The DMP code, including the asynchronous error analysis code, is modified to
better handle Not-Ready (NR) and Write-Disabled (WD) devices.

* 3368234 (Tracking ID: 3236772)

SYMPTOM:
With an I/O load on the primary and replication in progress, running "vradmin
resizevol" on the primary often terminates with the error message "vradmin
ERROR Lost connection to host".

DESCRIPTION:
There was a race condition on the Secondary between the transaction and the
messages delivered from the Primary. This resulted in repeated transaction
timeouts on the Secondary, which in turn caused session timeouts between the
Primary and Secondary vradmind daemons.

RESOLUTION:
The code is modified to resolve the race condition.

* 3368236 (Tracking ID: 3327842)

SYMPTOM:
In a CVR environment, with an I/O load on the primary and replication in
progress, running "vradmin resizevol" on the primary often terminates with
the error message "vradmin ERROR Lost connection to host".

DESCRIPTION:
There was a race condition on the Secondary between the transaction and the
messages delivered from the Primary. This resulted in repeated transaction
timeouts on the Secondary, which in turn caused session timeouts between the
Primary and Secondary vradmind daemons.

RESOLUTION:
The code is modified to resolve the race condition.

* 3374166 (Tracking ID: 3325371)

SYMPTOM:
A panic occurs in the vol_multistepsio_read_source() function when VxVM's 
FastResync feature is used. The observed stack trace is as follows:
vol_multistepsio_read_source()
vol_multistepsio_start()
volkcontext_process()
vol_rv_write2_start()
voliod_iohandle()
voliod_loop()
kernel_thread()

DESCRIPTION:
When a volume is resized, the Data Change Object (DCO) also needs to be
resized. However, the old accumulator contents are not copied into the new
accumulator, so the corresponding regions are marked as invalid. Subsequent
I/O on these regions triggers the panic.

RESOLUTION:
The code is modified to appropriately copy the accumulator contents during the 
resize operation.

* 3376953 (Tracking ID: 3372724)

SYMPTOM:
The system panics while installing VxVM (Veritas Volume Manager), with the 
following warnings:
vxdmp: WARNING: VxVM vxdmp V-5-0-216 mod_install returned 6 vxspec V-5-0-0 
vxspec: vxio not loaded. Aborting vxspec load

DESCRIPTION:
During installation of VxVM, if the DMP (Dynamic Multi-Pathing) module fails
to load, the cleanup procedure fails to reset the statistics timer that was
set during loading. As a result, the timer dereferences a function pointer
that has already been unloaded, causing the panic.

RESOLUTION:
Code changes have been made to perform a complete cleanup when DMP fails to 
load.

* 3387405 (Tracking ID: 3019684)

SYMPTOM:
An I/O hang is observed when the SRL is about to overflow after the logowner
is switched from slave to master. The stack trace looks like:

biowait
default_physio
volrdwr
fop_write
write
syscall_trap32

DESCRIPTION:
The steps to reproduce are: with the slave as logowner, overflow the SRL and
follow it with a DCM resync. Then switch the logowner back to the master and
try to overflow the SRL again; the I/O hang manifests on the master when the
SRL is about to overflow. This happens because the master has a stale flag
set with an incorrect value related to the last SRL overflow.

RESOLUTION:
The code is modified to reset the stale flag and to ensure that it is reset
whether the logowner is the master or a slave.

* 3387417 (Tracking ID: 3107741)

SYMPTOM:
The "vxrvg snapdestroy" command fails with the error message "Transaction
aborted waiting for io drain", and a vxconfigd hang is observed. The
vxconfigd stack trace is:

vol_commit_iowait_objects
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
volsioctl_real
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
...

DESCRIPTION:
The SmartMove query of VxFS depends on certain reads and writes. If a VxVM
transaction blocks those new reads and writes, the API hangs waiting for a
response. This creates a deadlock-like situation: the SmartMove API waits for
the transaction to complete, while the transaction waits for the SmartMove
API; hence the hang.

RESOLUTION:
The code is modified to disallow transactions while the SmartMove API is in
progress.



INSTALLATION PRE-REQUISITES
---------------------------
VRTSvxvm 6.0.300.300 requires VRTSaslapm version 06.00.0100.0202 or higher
as a prerequisite. Make sure to install VRTSaslapm version 06.00.0100.0202
or higher, available at https://sort.symantec.com/asl/details/645.


INSTALLING THE PATCH
--------------------
VxVM 6.0.100.000 (GA) must be installed before applying this patch.
To install the patch, enter the following command:
        # swinstall -x autoreboot=true -s <patch_directory> <patch id>
In case the patch is not registered, it can be registered using the
following command:
        # swreg -l depot <patch_directory>
where <patch_directory> is the absolute path where the patch resides.
After installing the patch, run swverify to make sure that it is installed
correctly:
        # swverify <patch id>


REMOVING THE PATCH
------------------
To remove the patch, enter the following command:
        # swremove -x autoreboot=true <patch id>


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE




