vm-sles11_x86_64-5.1SP1RP3P1
Obsolete
The latest patch(es): sfha-sles11_x86_64-5.1SP1RP4

 Basic information
Release type: P-patch
Release date: 2012-11-17
OS update support: None
Technote: None
Documentation: None
Popularity: 3803 viewed
Download size: 16.36 MB
Checksum: 374246801

 Applies to one or more of the following products:
Dynamic Multi-Pathing 5.1SP1 On SLES11 x86-64
Storage Foundation 5.1SP1 On SLES11 x86-64
Storage Foundation Cluster File System 5.1SP1 On SLES11 x86-64
Storage Foundation Cluster File System for Oracle RAC 5.1SP1 On SLES11 x86-64
Storage Foundation for Oracle RAC 5.1SP1 On SLES11 x86-64
Storage Foundation HA 5.1SP1 On SLES11 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: sfha-sles11_x86_64-5.1SP1RP4 (release date: 2013-08-21)

This patch requires: sfha-sles11_x86_64-5.1SP1RP3 (obsolete) (release date: 2012-10-02)

 Fixes the following incidents:
2485252, 2711758, 2847333, 2860208, 2881862, 2883606, 2906832, 2919718, 2928768, 2929003, 2940448, 2946948, 2949855, 2957608, 2962269, 2978414, 2979692

 Patch ID:
VRTSvxvm-5.1.133.100-SP1RP3P1_SLES11

Readme file
                          * * * READ ME * * *
             * * * Veritas Volume Manager 5.1 SP1 RP3 * * *
                         * * * P-patch 1 * * *
                         Patch Date: 2012-11-15


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 5.1 SP1 RP3 P-patch 1


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1
   * Veritas Storage Foundation 5.1 SP1
   * Veritas Storage Foundation High Availability 5.1 SP1
   * Veritas Storage Foundation Cluster File System for Oracle RAC 5.1 SP1
   * Veritas Dynamic Multi-Pathing 5.1 SP1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES11 x86-64


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 5.1.133.100

* 2485252 (Tracking ID: 2910043)

SYMPTOM:
Frequent swapin/swapout seen due to higher order memory requests

DESCRIPTION:
VxVM operations such as plex attach and snapshot resync/reattach issue 
ATOMIC_COPY ioctls. The default I/O size for these operations is 1MB, and VxVM 
allocates this memory from the operating system. Memory allocations of such a 
large size can result in swapin/swapout of pages and are not very efficient. In 
the presence of many such operations, the system may not work very efficiently.

RESOLUTION:
VxVM has its own I/O memory management module, which allocates pages from the 
operating system and manages them efficiently. The ATOMIC_COPY code is modified 
to use VxVM's internal I/O memory pool instead of allocating memory directly 
from the operating system.

* 2711758 (Tracking ID: 2710579)

SYMPTOM:
Data corruption can be observed on a CDS (Cross-platform Data Sharing) disk 
during operations such as LUN resize, disk flush, or disk online. The following 
pattern is found in the data region of the disk:

<DISK-IDENTIFICATION> cyl <number-of-cylinders> alt 2 hd <number-of-tracks> sec 
<number-of-sectors-per-track>

DESCRIPTION:
The CDS disk maintains a SUN VTOC in the zeroth block and a backup label at the 
end of the disk. The VTOC maintains the disk geometry information like number 
of cylinders, tracks and sectors per track. The backup label is the duplicate 
of the VTOC, and the backup label location is determined from the VTOC contents. 
If the contents of the SUN VTOC in the zeroth sector are incorrect, the backup 
label location may be calculated incorrectly. If the wrongly calculated backup 
label location falls in the public data region rather than at the end of the 
disk as designed, data corruption occurs.

RESOLUTION:
Writing of the backup label is suppressed to prevent the data corruption.

* 2847333 (Tracking ID: 2834046)

SYMPTOM:
VxVM dynamically reminors all the volumes during DG import if the DG base minor
numbers are not in the correct pool. This behaviour causes NFS clients to
re-mount all NFS file systems in an environment where CVM is used on the NFS
server side.

DESCRIPTION:
Starting from 5.1, the minor number space is divided into two pools, one for
private disk groups and another for shared disk groups. During DG import, the DG
base minor numbers are adjusted automatically if they are not in the correct
pool, as are the volumes in those disk groups. This behaviour reduces minor
conflicts during DG import. But in an NFS environment, it makes all file handles
on the client side stale, so customers had to unmount file systems and restart
applications.

RESOLUTION:
A new tunable, "autoreminor", is introduced. The default value is "on". Most
customers do not need to change auto-reminoring behaviour and can leave the
tunable as it is. For an environment where auto-reminoring is not desirable, it
can be turned off. Another major change is that during DG import, VxVM no longer 
changes minor numbers as long as there are no minor conflicts. This includes the 
cases where the minor numbers are in the wrong pool.
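
As an illustrative sketch only, the tunable could be inspected and turned off
before importing a disk group as shown below. This assumes the tunable is
exposed through the vxdefault utility as in later releases, which may not apply
at this patch level; "mydg" is a placeholder disk group name.

# vxdefault list
# vxdefault set autoreminor off
# vxdg import mydg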

* 2860208 (Tracking ID: 2859470)

SYMPTOM:
The EMC SRDF-R2 disk may go into the error state when you create an EFI label 
on the R1 disk. For example:

R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1

R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2

DESCRIPTION:
Since R2 disks are in write protected mode, the default open() call (made for 
read-write mode) fails for the R2 disks, and the disk is marked as invalid.

RESOLUTION:
As a fix, DMP was changed to be able to read the EFI label even on a write 
protected SRDF-R2 disk.
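
After applying the fix, the R2 device state can be re-checked with the same
listing used above; this is only a verification sketch, and the device names are
the placeholders from the example output.

# vxdisk scandisks
# vxdisk -eo alldgs list | grep -i srdf-r2

The R2 disk is expected to report an online state instead of error.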

* 2881862 (Tracking ID: 2878876)

SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core with the following stack:

vol_cbr_dolog()
vol_cbr_translog()
vold_preprocess_request()
request_loop()
main()

DESCRIPTION:
This core is a result of a race between two threads which are processing 
requests from the same client. One thread has completed processing a request 
and is releasing the memory it used, while the other thread is processing a 
"DISCONNECT" request from the same client. Due to the race condition, the 
second thread attempts to access the memory which is being released and dumps 
core.

RESOLUTION:
The issue is resolved by protecting the common data of the client by a mutex.

* 2883606 (Tracking ID: 2189812)

SYMPTOM:
Executing 'vxdisk updateudid' on a disk which is in the 'online invalid' state 
causes vxconfigd to dump core with the following stack:

priv_join()
req_disk_updateudid()
request_loop()
main()

DESCRIPTION:
While updating the UDID, a NULL check was not done for an internal data 
structure. This led vxconfigd to dump core.

RESOLUTION:
Code changes are done to add NULL checks for the internal data structure.
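
For reference, a minimal sequence that exercises this code path might look like
the sketch below; "disk_0" is a placeholder disk access name. With the fix, the
command is expected to fail gracefully on an 'online invalid' disk instead of
crashing vxconfigd.

# vxdisk list | grep invalid
# vxdisk updateudid disk_0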

* 2906832 (Tracking ID: 2398954)

SYMPTOM:
Machine panics while doing I/O on a VxFS mounted instant snapshot with ODM
smartsync enabled.
The panic has the following stack.
panic: post_hndlr(): Unresolved kernel interruption
cold_vm_hndlr
bubbledown
as_ubcopy
privlbcopy
volkio_to_kio_copy
vol_multistepsio_overlay_data
vol_multistepsio_start
voliod_iohandle
voliod_loop
kthread_daemon_startup

DESCRIPTION:
VxVM uses the fields av_back and av_forw of the io buf structure to store its 
private information. VxFS also uses these fields to chain I/O buffers before 
passing I/O to VxVM. When an I/O is received at the VxVM layer, these fields are 
always reset. But if ODM smartsync is enabled, VxFS uses a special strategy 
routine to pass hints to VxVM. Due to a bug in the special strategy routine, 
the fields av_back and av_forw are not reset and could still point to a valid 
buffer in the VxFS I/O buffer chain. VxVM then interprets these fields wrongly 
and modifies their contents, corrupting the next buffer in the chain and leading 
to this panic.

RESOLUTION:
The fields av_back and av_forw of io buf structure are reset in the special
strategy routine.

* 2919718 (Tracking ID: 2919714)

SYMPTOM:
On a thin LUN, vxevac returns 0 without migrating unmounted VxFS volumes. The
following error messages are displayed when an unmounted VxFS volume is processed:

 VxVM vxsd ERROR V-5-1-14671 Volume v2 is configured on THIN luns and not mounted.
Use 'force' option, to bypass smartmove. To take advantage of smartmove for
supporting thin luns, retry this operation after mounting the volume.
 VxVM vxsd ERROR V-5-1-407 Attempting to cleanup after failure ...

DESCRIPTION:
On a thin LUN, VxVM does not move or copy data from an unmounted VxFS volume
unless smartmove is bypassed. The vxevac command needs to be enhanced to detect
unmounted VxFS volumes on thin LUNs and to support a force option that allows
the user to bypass smartmove.

RESOLUTION:
The vxevac script has been modified to check for unmounted VxFS volumes on thin
LUNs prior to performing the migration. If an unmounted VxFS volume is detected,
the command fails with a non-zero return code and displays a message notifying
the user to mount the volumes or bypass smartmove by specifying the force option:

 VxVM vxevac ERROR V-5-2-0 The following VxFS volume(s) are configured
 on THIN luns and not mounted:

         v2

 To take advantage of smartmove support on thin luns, retry this operation
 after mounting the volume(s).  Otherwise, bypass smartmove by specifying
 the '-f' force option.
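
For example, an evacuation that hits this check and one that bypasses smartmove
might look like the sketch below; "mydg", "disk01" and "disk02" are placeholder
names, and the exact placement of the force option may vary.

# vxevac -g mydg disk01 disk02
# vxevac -g mydg -f disk01 disk02

The first form fails with the message above if an unmounted VxFS volume is found
on a thin LUN; the second form bypasses smartmove and performs the migration.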

* 2928768 (Tracking ID: 2928764)

SYMPTOM:
If the tunable dmp_fast_recovery is set to off, PGR (Persistent Group
Reservation) key registration fails except for the first path, i.e. the PGR key
gets registered only for the first path.

Consider registering keys as follows:

# vxdmpadm settune dmp_fast_recovery=off

# vxdmpadm settune dmp_log_level=9

# vxdmppr read -t REG /dev/vx/rdmp/hitachi_r7000_00d9

Node: /dev/vx/rdmp/hitachi_r7000_00d9
    ASCII-KEY           HEX-VALUE
    -----------------------------

# vxdmppr register -s BPGR0000 /dev/vx/rdmp/hitachi_r7000_00d9

# vxdmppr read -t REG /dev/vx/rdmp/hitachi_r7000_00d9
    Node: /dev/vx/rdmp/hitachi_r7000_00d9
    ASCII-KEY           HEX-VALUE
    -----------------------------
    BPGR0000            0x4250475230303030

Since this is a multipathed disk, the PGR key gets registered through the first 
path only.


You will see log messages similar to the following:

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-0-0 SCSI error opcode=0x5f
returned rq_status=0x12 cdb_status=0x1 key=0x6 asc=0x2a ascq=0x3 on path 8/0x90
Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x0 rq_status = 0x8
Sep  6 11:29:41 clabcctlx04 kernel: sd 4:0:0:4: reservation conflict

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x11 rq_status = 0x17

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-0-0 SCSI error opcode=0x5f
returned rq_status=0x17 cdb_status=0x0 key=0x0 asc=0x0 ascq=0x0 on path 8/0xb0

Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_pr_send_cmd failed
with transport error: uscsi_rqstatus = 23ret = -1 status = 0 on dev 8/0xb0


Sep  6 11:29:41 clabcctlx04 kernel: VxVM vxdmp V-5-3-0 dmp_scsi_ioctl: SCSI
ioctl completed host_byte = 0x0 rq_status = 0x8

DESCRIPTION:
After the key for the first path gets registered successfully, the second path 
gets a reservation conflict, which is expected. But in synchronous mode, i.e. 
when dmp_fast_recovery is off, the proper reservation flag is not set, due to 
which the registration command fails with a transport error and the PGR keys on 
the other paths do not get registered. In asynchronous mode the flag is set 
correctly, hence the issue is not seen there.

RESOLUTION:
The proper reservation flag is now set so that the key can be registered for the
other paths as well.
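
One way to confirm the fix is to repeat the registration from the example above
with the tunable still set to off; this is only a verification sketch using the
same placeholder device name.

# vxdmpadm gettune dmp_fast_recovery
# vxdmppr register -s BPGR0000 /dev/vx/rdmp/hitachi_r7000_00d9
# vxdmppr read -t REG /dev/vx/rdmp/hitachi_r7000_00d9

With the fix, no transport-error messages like the ones shown above should be
logged for the remaining paths.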

* 2929003 (Tracking ID: 2928987)

SYMPTOM:
A vxconfigd hang is observed when I/O is failed by the OS layer.

DESCRIPTION:
DMP is supposed to perform the number of I/O retries defined by the user. When
it receives an I/O failure from the OS layer, due to a bug it restarts the I/O
without checking the I/O retry count, so the I/O gets stuck in an infinite loop.

RESOLUTION:
Code changes are done in DMP to honor the I/O retry count defined by the user.
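
The retry count mentioned here is controlled by a DMP tunable; a sketch of
checking and setting it is shown below, assuming the tunable name
dmp_retry_count as documented for DMP, with 5 used only as an example value.

# vxdmpadm gettune dmp_retry_count
# vxdmpadm settune dmp_retry_count=5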

* 2940448 (Tracking ID: 2940446)

SYMPTOM:
I/O can hang on a volume with a space-optimized snapshot if the underlying cache 
object is very large. The issue can also lead to data corruption in the cache 
object.

DESCRIPTION:
The cache volume maintains a B+ tree for mapping an offset to its actual 
location in the cache object. Copy-on-write I/O generated on snapshot volumes 
needs to determine the offset of a particular I/O in the cache object. Due to 
incorrect type-casting, the value calculated for a large offset overflows and is 
truncated to a smaller value, leading to data corruption.

RESOLUTION:
Code changes are done to avoid overflow during offset calculation in cache 
object.

* 2946948 (Tracking ID: 2406096)

SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core with the following stack:

vol_cbr_oplistfree()
vol_clntaddop()
vol_cbr_translog()
vold_preprocess_request()
request_loop()
main()

DESCRIPTION:
The vxsnap utility forks a child process and the parent process exits. The
child process continues the remaining work as a background process. It does not
create a new connection with vxconfigd and continues to use the parent's
connection. Since the parent is dead, vxconfigd cleans up its client structure.
On further requests from the child process, vxconfigd tries to access the client
structure that was already freed and hence dumps core.

RESOLUTION:
The issue is solved by initiating a separate connection with vxconfigd from the
forked child.

* 2949855 (Tracking ID: 2943637)

SYMPTOM:
The system panics while the DMP I/O statistics queue size is being expanded. The 
following stack can be observed in syslog before the panic:

oom_kill_process
select_bad_process
out_of_memory
__alloc_pages_nodemask
alloc_pages_current
__vmalloc_area_node
dmp_alloc
__vmalloc_node
dmp_alloc
vmalloc_32
dmp_alloc
dmp_zalloc
dmp_iostatq_add
dmp_iostatq_op
dmp_process_stats
dmp_daemons_loop

DESCRIPTION:
In the process of expanding the DMP I/O statistics queue size, memory is 
allocated in a sleeping/blocking way. When the Linux kernel cannot satisfy the 
memory allocation request, for example when the system is under high load and 
the total per-CPU memory request is large because of a large number of CPUs, it 
invokes the OOM killer to kill other processes/threads to free more memory, 
which may cause a system panic.

RESOLUTION:
The code changes allocate memory in a non-sleeping way while expanding the DMP 
I/O statistics queue size. The allocation now fails quickly if the system cannot 
satisfy the request, instead of invoking the OOM killer.
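
The queue expansion described here happens when the I/O statistics buffer is
enlarged; a hedged example of doing so is below, assuming the memory= attribute
of the vxdmpadm iostat command, with the size value being only a placeholder.

# vxdmpadm iostat stop
# vxdmpadm iostat start memory=4096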

* 2957608 (Tracking ID: 2671241)

SYMPTOM:
When a DRL log plex is configured in a volume, vxnotify does not report the 
volume enabled message.

DESCRIPTION:
When a DRL log plex is configured in a volume, the volume is started in two 
phases: the first phase starts the plexes and sets the volume state to DETACHED, 
and the second phase sets the volume state to ENABLED after the log recovery. 
However, when notifying interested clients of the configuration change, only the 
state change from DISABLED to ENABLED was checked.

RESOLUTION:
With the fix, a notification is generated on a volume state change from any 
state to 'ENABLED' (and from any state to 'DISABLED').
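
A simple way to observe the new notifications is to run vxnotify in one session
while restarting a DRL-protected volume in another; this is only a sketch, and
"mydg" and "vol01" are placeholder names.

# vxnotify

In another session:

# vxvol -g mydg stop vol01
# vxvol -g mydg start vol01

With the fix, the volume enabled message is reported when the volume transitions
to the ENABLED state.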

* 2962269 (Tracking ID: 2193755)

SYMPTOM:
The system panics with a back trace similar to:
notifier_call_chain
notify_die
__die
no_context
__bad_area_nosemaphore
page_fault
dmp_send_scsipkt_req
dmp_send_scsipkt
dmp_send_scsireq
dmp_bypass_strategy
dmp_path_okay
dmp_error_action
dmp_daemons_loop
child_rip

DESCRIPTION:
In Linux, every I/O buffer for a device has to be within the maximum I/O size 
that the device can handle. In addition, the number of vectors pointing to the 
actual data segments within the I/O buffer also has to be less than the maximum 
value. 
When an I/O cannot start on a page boundary, one chunk of I/O gets split into 
two I/O vectors. When the number of vectors in the I/O buffer exceeds the 
maximum I/O vector limit of the subsystem, the panic happens.

RESOLUTION:
Code changes are made to ensure that the IO size and the number of vectors in 
the IO buffer are within the allowed limit.

* 2978414 (Tracking ID: 2416341)

SYMPTOM:
Even though paths of a DMP (Dynamic Multi-Pathing) device are not manually 
disabled using the "vxdmpadm disable" CLI, they can sometimes be in the 
"DISABLED(M)" state if these paths are disconnected and reconnected due to SAN 
reconfiguration. The DMP device itself can be in the "DISABLED" state if all 
paths of the DMP device are in the "DISABLED(M)" state. The "vxdmpadm 
getsubpaths" CLI shows the paths in the DISABLED(M) state as below:

# vxdmpadm getsubpaths dmpnodename=emc0_00ec
sdo          DISABLED(M)   -          emc0_00ec    emc0         c2       -
sdu          DISABLED(M)   -          emc0_00ec    emc0         c1       -
sdx          DISABLED(M)   -          emc0_00ec    emc0         c3       -

DESCRIPTION:
DMP listens to UDEV device ADD/REMOVE events. In response to UDEV "REMOVE" 
events, the corresponding paths under the DMP device are marked as "DISABLED(M)" 
to avoid sending further I/Os on those paths. When UDEV ADD events are raised 
on device reconnect, sometimes these events are not properly communicated to 
DMP, so the paths remain in the "DISABLED(M)" state.

RESOLUTION:
Code changes are done to properly communicate UDEV device "ADD" events to DMP 
so that the paths are enabled again.
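
On systems without the fix, a possible manual workaround after the paths are
reconnected is to trigger a device rescan so that DMP re-evaluates the path
states; the device name below is the placeholder from the example above.

# vxdisk scandisks
# vxdmpadm getsubpaths dmpnodename=emc0_00ec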

* 2979692 (Tracking ID: 2575051)

SYMPTOM:
In a CVM environment, a master switch or master takeover operation results in a
panic with the below stack.

volobject_iogen 
vol_cvol_volobject_iogen 
vol_cvol_recover3_start
voliod_iohandle 
voliod_loop 
kernel_thread

DESCRIPTION:
The panic happens while accessing fields of a stale cache object. The cache
recovery process is initiated by a master takeover or master switch operation.
In the recovery process, VxVM does not take an I/O count on cache objects.
Meanwhile, the same cache object can go through a transaction while the recovery
is still in progress. Therefore the cache object gets changed as part of the
transaction, and the recovery code path in VxVM tries to access the stale cache
object, resulting in the panic.

RESOLUTION:
This issue is addressed by code changes in the cache recovery code path.


INSTALLING THE PATCH
--------------------
(sles11_x86_64)
# rpm -Uhv VRTSvxvm-5.1.133.100-SP1RP3P1_SLES11.x86_64.rpm
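
The installed package level can be verified before and after applying the rpm;
the version string shown is the patch ID listed above.

# rpm -q VRTSvxvm

After installation, the command is expected to report
VRTSvxvm-5.1.133.100-SP1RP3P1_SLES11.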


REMOVING THE PATCH
------------------
# rpm -e <rpm-name>


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE