vm-sol_sparc-5.1SP1RP1P2
Obsolete
The latest patch(es): sfha-sol_sparc-5.1SP1RP4

 Basic information
Release type: P-patch
Release date: 2011-06-07
OS update support: None
Technote: None
Documentation: None
Popularity: 3569 viewed
Download size: 46.58 MB
Checksum: 864564776

 Applies to one or more of the following products:
VirtualStore 5.1SP1 On Solaris 10 SPARC
VirtualStore 5.1SP1 On Solaris 9 SPARC
Dynamic Multi-Pathing 5.1SP1 On Solaris 10 SPARC
Dynamic Multi-Pathing 5.1SP1 On Solaris 9 SPARC
Storage Foundation 5.1SP1 On Solaris 10 SPARC
Storage Foundation 5.1SP1 On Solaris 9 SPARC
Storage Foundation Cluster File System 5.1SP1 On Solaris 10 SPARC
Storage Foundation Cluster File System 5.1SP1 On Solaris 9 SPARC
Storage Foundation for Oracle RAC 5.1SP1 On Solaris 10 SPARC
Storage Foundation for Oracle RAC 5.1SP1 On Solaris 9 SPARC
Storage Foundation HA 5.1SP1 On Solaris 10 SPARC
Storage Foundation HA 5.1SP1 On Solaris 9 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patches (release date):
sfha-sol_sparc-5.1SP1RP4 2013-08-21
vm-sol_sparc-5.1SP1RP3P2 (obsolete) 2013-03-15
sfha-sol_sparc-5.1SP1RP3 (obsolete) 2012-10-02
vm-sol_sparc-5.1SP1RP2P3 (obsolete) 2012-06-13
vm-sol_sparc-5.1SP1RP2P2 (obsolete) 2011-11-03
vm-sol_sparc-5.1SP1RP2P1 (obsolete) 2011-10-19
sfha-sol_sparc-5.1SP1RP2 (obsolete) 2011-09-28

This patch supersedes the following patches (release date):
vm-sol_sparc-5.1SP1RP1P1 (obsolete) 2011-03-02
vm-sol_sparc-5.1SP1P2 (obsolete) 2010-12-07

This patch requires the following patch (release date):
sfha-sol_sparc-5.1SP1RP1 (obsolete) 2011-02-14

 Fixes the following incidents:
2256685, 2256686, 2256688, 2256689, 2256690, 2256691, 2256692, 2256722, 2257684, 2268733, 2276324, 2280640, 2291967, 2299977, 2318820, 2320613, 2322742, 2322757, 2333255, 2333257, 2337237, 2337354, 2339254, 2346469, 2349497, 2349553, 2353429, 2357935, 2364294, 2366071

 Patch ID:
142629-11

Readme file
                          * * * READ ME * * *
             * * * Veritas Volume Manager 5.1 SP1 RP1 * * *
                         * * * P-patch 2 * * *
                         Patch Date: 2011.05.20


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Volume Manager 5.1 SP1 RP1 P-patch 2


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Volume Manager 5.1 SP1 RP1
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1 RP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1 RP1
   * Veritas Storage Foundation 5.1 SP1 RP1
   * Veritas Storage Foundation High Availability 5.1 SP1 RP1
   * Veritas Dynamic Multi-Pathing 5.1 SP1 RP1
   * Symantec VirtualStore 5.1 SP1 RP1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 9 SPARC
Solaris 10 SPARC


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 142629-11

* 2280640 (Tracking ID: 2205108)

SYMPTOM:
On VxVM 5.1SP1 or later, device discovery operations such as vxdctl enable,
vxdisk scandisks and vxconfigd -k fail to claim new disks correctly. For
example, if a user provisions five new disks, VxVM creates only one Dynamic
Multi-Pathing (DMP) node instead of five, and includes the remaining disks as
its paths. The following message is also displayed on the console when this
problem occurs:

NOTICE: VxVM vxdmp V-5-0-34 added disk array , datype =

Note that the cabinet serial number following "disk array" and the value of
"datype" are missing in the above message.

DESCRIPTION:
VxVM's DDL (Device Discovery Layer) is responsible for claiming newly
provisioned disks. Due to a bug in one of the routines within this layer, the
disks are claimed but their LSN (LUN Serial Number, a unique identifier of a
disk) is ignored, so every disk is wrongly categorized under a single DMP
node.

RESOLUTION:
The problematic code within the DDL has been modified so that new disks are
claimed appropriately.

WORKAROUND:

If vxconfigd has not hung or dumped core, a reboot can recover the situation.
Alternatively, the DMP/DDL database can be torn down and rebuilt on the
devices with the following steps.

First, exclude all arrays, disable SCSI-3 support, and force a clean
rediscovery:

# vxddladm excludearray all
# mv /etc/vx/jbod.info /etc/vx/jbod.info.org
# vxddladm disablescsi3
# devfsadm -Cv
# vxconfigd -k

Then restore the configuration and rebuild the device database:

# vxddladm includearray all
# mv /etc/vx/jbod.info.org /etc/vx/jbod.info
# vxddladm enablescsi3
# rm /etc/vx/disk.info /etc/vx/array.info
# vxconfigd -k

* 2291967 (Tracking ID: 2286559)

SYMPTOM:
The system panics in the DMP (Dynamic Multi-Pathing) kernel module due to
kernel heap corruption while a DMP path failover is in progress.

The panic stack may look like:

vpanic
kmem_error+0x4b4()
gen_get_enabled_ctlrs+0xf4()
dmp_get_enabled_ctlrs+0xf4()
dmp_info_ioctl+0xc8()
dmpioctl+0x20()
dmp_get_enabled_cntrls+0xac()
vx_dmp_config_ioctl+0xe8()
quiescesio_start+0x3e0()
voliod_iohandle+0x30()
voliod_loop+0x24c()
thread_start+4()

DESCRIPTION:
During path failover in DMP, the routine gen_get_enabled_ctlrs() allocates
memory proportional to the number of enabled paths. However, if the number of
enabled paths changes in the meantime, the routine may free more memory than
was allocated.

RESOLUTION:
Code changes have been made so that the routines free only the memory that
was allocated.

* 2299977 (Tracking ID: 2299670)

SYMPTOM:
VxVM disk groups created on EFI (Extensible Firmware Interface) LUNs do not get
auto-imported during system boot in VxVM version 5.1SP1 and later.

DESCRIPTION:
While determining the disk format of EFI LUNs, stat() system call on the
corresponding DMP devices fail with ENOENT ("No such file or directory") error
because the DMP device nodes are not created in the root file system during
system boot. This leads to failure in auto-import of disk groups created on EFI
LUNs.

RESOLUTION:
VxVM code is modified to use OS raw device nodes if stat() fails on DMP device
nodes.
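
As a quick post-fix check (a hedged sketch; disk group names depend on the
configuration), the disk groups on EFI LUNs should appear as imported after a
reboot:

# vxdisk -o alldgs list
# vxdg list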

* 2318820 (Tracking ID: 2317540)

SYMPTOM:
The system panics due to kernel heap corruption during DMP device driver
unload.

The panic stack on Solaris (when kmem_flags is set to either 0x100 or 0xf) is
similar to the following:

vpanic()
kmem_error+0x4b4()
dmp_free_stats_table+0x118()
dmp_free_modules+0x24()
vxdmp`_fini+0x178()
moduninstall+0x148()
modunrload+0x6c()
modctl+0x54()
syscall_trap+0xac()

DESCRIPTION:
During unload, the DMP kernel device driver frees all of its allocated kernel
heap memory. For one of the buffers, DMP attempts to free more than the
allocated size, which leads to a system panic when kernel memory auditing is
enabled.

RESOLUTION:
The source code is modified to free the kernel buffer according to its
allocation size.

* 2320613 (Tracking ID: 2313021)

SYMPTOM:
In a Sun Cluster environment, nodes fail to join the CVM cluster after a
reboot, displaying the following messages on the console:

<> vxio: [ID 557667 kern.notice] NOTICE: VxVM vxio V-5-3-1251 joinsio_done:
Overlapping reconfiguration, failing the join for node 1. The join will be
retried.
<> vxio: [ID 976272 kern.notice] NOTICE: VxVM vxio V-5-3-672 abort_joinp:
aborting joinp for node 1 with err 11
<> vxvm:vxconfigd: [ID 702911 daemon.notice] V-5-1-12144 CVM_VOLD_JOINOVER
command received with error

DESCRIPTION:
A reboot of a node within CVM cluster involves a "node leave" followed 
by a "node join" reconfiguration. During CVM reconfiguration, each node 
exchanges reconfiguration messages with other nodes using the UDP protocol. At 
the end of a CVM reconfiguration, the messages exchanged should be deleted from 
all the nodes in the cluster. However, due to a bug in CVM, the messages were 
not deleted as part of the "node leave" reconfiguration processing on some 
nodes, which resulted in failure of subsequent "node join" reconfigurations.

RESOLUTION:
After every CVM reconfiguration, the processed reconfiguration messages on 
all the nodes in the CVM cluster are deleted properly.
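
To verify that a rebooted node has rejoined the cluster, the CVM node state
can be checked; a brief sketch (the exact output varies with the
configuration):

# vxclustadm -v nodestate
# vxclustadm nidmap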

* 2322742 (Tracking ID: 2108152)

SYMPTOM:
vxconfigd, the VxVM volume configuration daemon, fails to get into enabled
mode at startup, and the "vxdctl enable" command displays the error "VxVM
vxdctl ERROR V-5-1-1589 enable failed: Error in disk group configuration
copies".

DESCRIPTION:
vxconfigd issues an input/output control system call (ioctl) to read the disk
capacity from each disk. If the ioctl fails, the error number is not
propagated back to vxconfigd, so subsequent disk operations on the failed
devices cause vxconfigd to get into disabled mode.

RESOLUTION:
The fix is made to propagate the actual "error number" returned by the ioctl
failure back to vxconfigd.
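
Once the fix is in place, a minimal check that vxconfigd reaches enabled mode
is:

# vxdctl enable
# vxdctl mode
mode: enabled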

* 2322757 (Tracking ID: 2322752)

SYMPTOM:
Duplicate device names are observed for NR (Not Ready) devices, when vxconfigd 
is restarted (vxconfigd -k).

# vxdisk list 

emc0_0052    auto            -            -            error
emc0_0052    auto:cdsdisk    -            -            error
emc0_0053    auto            -            -            error
emc0_0053    auto:cdsdisk    -            -            error

DESCRIPTION:
During vxconfigd restart, disk access records are rebuilt in the vxconfigd
database. As part of this process, I/Os are issued on all the devices to read
the disk private regions. The failure of these I/Os on NR devices resulted in
the creation of duplicate disk access records.

RESOLUTION:
vxconfigd code is modified so that it does not create duplicate disk access
records.

* 2333255 (Tracking ID: 2253552)

SYMPTOM:
vxconfigd leaks memory while reading the default tunables related to
smartmove (a VxVM feature).

DESCRIPTION:
In vxconfigd, memory allocated for the default tunables related to the
smartmove feature is not freed, causing a memory leak.

RESOLUTION:
The memory is now released once it is no longer needed.

* 2333257 (Tracking ID: 1675599)

SYMPTOM:
vxconfigd leaks memory when a Third Party Driver (TPD) controlled LUN is
excluded and included in a loop. As part of this, vxconfigd loses its license
information and the following error is seen in the system log:
        "License has expired or is not available for operation"

DESCRIPTION:
In the vxconfigd code, memory allocated for various data structures related
to the device discovery layer is not freed, which leads to the memory leak.

RESOLUTION:
The memory is now released once it is no longer needed.

* 2337237 (Tracking ID: 2337233)

SYMPTOM:
Excluding a TPD device with the "vxdmpadm exclude" command does not work. The
excluded device is still shown in the "vxdisk list" output.

Example:
# vxdmpadm exclude vxvm dmpnodename=emcpower22s2


# cat /etc/vx/vxvm.exclude
exclude_all 0
paths
emcpower22c /pseudo/emcp@22 emcpower22s2
#
controllers
#
product
#
pathgroups
#


# vxdisk scandisks


# vxdisk list | grep emcpower22s2
emcpower22s2 auto:sliced     -            -            online

DESCRIPTION:
Because of a bug in the path name comparison logic, DMP ends up including
disks in device discovery even though they are part of the exclude list.

RESOLUTION:
The code in DMP is corrected to handle path name comparison appropriately.

* 2337354 (Tracking ID: 2337353)

SYMPTOM:
The "vxdmpadm include" command is including all the excluded devices along with 
the device given in the command.

Example:

# vxdmpadm exclude vxvm dmpnodename=emcpower25s2
# vxdmpadm exclude vxvm dmpnodename=emcpower24s2

# more /etc/vx/vxvm.exclude
exclude_all 0
paths
emcpower24c /dev/rdsk/emcpower24c emcpower25s2
emcpower10c /dev/rdsk/emcpower10c emcpower24s2
#
controllers
#
product
#
pathgroups
#

# vxdmpadm include vxvm dmpnodename=emcpower24s2

# more /etc/vx/vxvm.exclude
exclude_all 0
paths
#
controllers
#
product
#
pathgroups
#

DESCRIPTION:
When a dmpnode is excluded, an entry is made in the /etc/vx/vxvm.exclude
file. This entry has to be removed when the dmpnode is included again later.
Due to a bug in the comparison of dmpnode device names, all the excluded
devices were included.

RESOLUTION:
The bug in the code which compares the dmpnode device names is rectified.

* 2339254 (Tracking ID: 2339251)

SYMPTOM:
On Solaris 10, newfs/mkfs_ufs(1M) fails to create a UFS file system on a
VxVM volume larger than 2 terabytes (TB) with the following error:

    # newfs /dev/vx/rdsk/[disk group]/[volume]
    newfs: construct a new file system /dev/vx/rdsk/[disk group]/[volume]: 
(y/n)? y
    Can not determine partition size: Inappropriate ioctl for device

The truss output of newfs/mkfs_ufs(1M) shows that the ioctl() system calls
used to identify the size of the disk or volume device fail with an ENOTTY
error:

    ioctl(3, 0x042A, ...)                    Err#25 ENOTTY
    ...
    ioctl(3, 0x0412, ...)                    Err#25 ENOTTY

DESCRIPTION:
On Solaris 10, newfs/mkfs_ufs(1M) uses ioctl() system calls to identify the
size of the disk or volume device when creating a UFS file system on disk or
volume devices larger than 2TB. If the Operating System (OS) version is older
than Solaris 10 Update 8, the above ioctl calls are invoked on volumes larger
than 1TB as well.

VxVM (Veritas Volume Manager) exports the ioctl interfaces for VxVM volumes.
VxVM 5.1 SP1 RP1 P1 and VxVM 5.0 MP3 RP3 introduced Extensible Firmware
Interface (EFI) support for VxVM volumes on Solaris 9 and Solaris 10
respectively. However, the corresponding EFI-specific build-time definition
in the Veritas kernel I/O driver (VXIO) was not updated for Solaris 10 in
VxVM 5.1 SP1 RP1 P1 and onwards.

RESOLUTION:
The build-time definition for EFI has been added to VXIO so that
newfs/mkfs_ufs(1M) successfully creates a UFS file system on VxVM volume
devices larger than 2TB (larger than 1TB if the OS version is older than
Solaris 10 Update 8).

* 2346469 (Tracking ID: 2346470)

SYMPTOM:
Dynamic Multi-Pathing administration operations such as "vxdmpadm exclude
vxvm dmpnodename=<daname>" and "vxdmpadm include vxvm dmpnodename=<daname>"
trigger memory leaks in the heap segment of the VxVM configuration daemon
(vxconfigd).

DESCRIPTION:
During the "vxdmpadm include vxvm dmpnodename=<daname>" operation, vxconfigd
allocates chunks of memory to store VxVM-specific information about the disk
being included. The allocated memory is not freed when the same disk is later
excluded from VxVM control. Also, when excluding a disk from VxVM control,
vxconfigd temporarily allocates another chunk of memory to store more details
of the device being excluded; this memory is not freed at the end of the
exclude operation.

RESOLUTION:
Memory allocated during the include operation of a disk is now freed during
the corresponding exclude operation. Likewise, temporary memory allocated
during the exclude operation is freed at the end of that operation.

* 2349497 (Tracking ID: 2320917)

SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core and loses the disk group
configuration while invoking the following VxVM reconfiguration steps:

1) Volumes which were created on thin reclaimable disks are deleted.
2) Before the space of the deleted volumes is reclaimed, the disks (whose
volumes were deleted) are removed from the disk group with the 'vxdg rmdisk'
command using the '-k' option.
3) The disks are removed using the 'vxedit rm' command.
4) New disks are added to the disk group using the 'vxdg adddisk' command.

The stack trace of the core dump is:
[
 0006f40c rec_lock3 + 330
 0006ea64 rec_lock2 + c
 0006ec48 rec_lock2 + 1f0
 0006e27c rec_lock + 28c
 00068d78 client_trans_start + 6e8
 00134d00 req_vol_trans + 1f8
 00127018 request_loop + adc
 000f4a7c main  + fb0
 0003fd40 _start + 108
]

DESCRIPTION:
When a volume is deleted from a disk group that uses thin reclaim LUNs, its
subdisks are not removed immediately; instead they are marked with a special
flag. The reclamation happens at a scheduled time every day, and the
"vxdefault" command can be invoked to list and modify the settings.

After the disk is removed from the disk group using the 'vxdg -k rmdisk' and
'vxedit rm' commands, the subdisk records are still in the core database and
they point to a disk media record which has been freed. When the next command
is run to add another new disk to the disk group, vxconfigd dumps core while
locking the disk media record which has already been freed.

The subsequent disk group deport and import commands erase the entire disk
group configuration, as they detect an invalid association between the
subdisks and the removed disk.

RESOLUTION:
1) The following message is printed when 'vxdg rmdisk' is used to remove a
disk that has reclaim-pending subdisks:

VxVM vxdg ERROR V-5-1-0 Disk <diskname> is used by one or more subdisks which
are pending to be reclaimed.
        Use "vxdisk reclaim <diskname>" to reclaim space used by these subdisks,
        and retry "vxdg rmdisk" command.
        Note: reclamation is irreversible.

2) A check is added when 'vxedit rm' is used to remove a disk. If the disk is
in removed state and has reclaim-pending subdisks, the following error
message is printed:

VxVM vxedit ERROR V-5-1-10127 deleting <diskname>:
        Record is associated
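
With these checks in place, the safe removal sequence for a disk with
reclaim-pending subdisks is to reclaim first and then remove; a brief sketch
(the disk group name "mydg" and disk name "mydg01" are illustrative):

# vxdisk reclaim mydg01
# vxdg -g mydg rmdisk mydg01

The "vxdefault list" command can be used to review the reclaim scheduling
settings mentioned above.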

* 2349553 (Tracking ID: 2353493)

SYMPTOM:
On Solaris 10, "pkgchk" command on VxVM package fails with the following error:
#pkgchk -a VRTSvxvm 
 ERROR: /usr/lib/libvxscsi.so.SunOS_5.10
    pathname does not exist

DESCRIPTION:
During installation of the VxVM package, the VxVM library libvxscsi.so did
not get installed at the path /usr/lib/libvxscsi.so.SunOS_5.10, which is a
prerequisite for successful execution of the 'pkgchk' command.

RESOLUTION:
VxVM's installation scripts are modified to install the library at the
correct location.

* 2353429 (Tracking ID: 2334757)

SYMPTOM:
vxconfigd consumes a lot of memory when the DMP tunable dmp_probe_idle_lun
is set to on. The "pmap" command on the vxconfigd process shows a
continuously growing heap.

DESCRIPTION:
The DMP path restoration daemon probes idle LUNs (VxVM disks on which no I/O
requests are scheduled) and generates notify events to vxconfigd. vxconfigd
in turn sends notification of these events to its clients. If vxconfigd
cannot deliver an event (for example, because a client is busy processing an
earlier event), it queues the event internally. When clients consume events
slowly, the memory consumption of vxconfigd grows.

RESOLUTION:
dmp_probe_idle_lun is set to off by default.
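
The tunable can be inspected and changed at run time with vxdmpadm; a brief
sketch:

# vxdmpadm gettune dmp_probe_idle_lun
# vxdmpadm settune dmp_probe_idle_lun=off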

* 2357935 (Tracking ID: 2349352)

SYMPTOM:
Data corruption is observed on a DMP device with a single path during storage
reconfiguration (LUN addition/removal).

DESCRIPTION:
Data corruption can occur in the following configuration when new LUNs are
provisioned or removed under VxVM while applications are online:

1. The DMP device naming scheme is EBN (enclosure based naming) and
persistence=no
2. The DMP device is configured with a single path, or the devices are
controlled by a Third Party Multipathing Driver (for example, MPxIO or MPIO)

The names of the VxVM devices (DA records) can change when LUNs are removed
or added and either of the following commands is then run, since persistent
naming is turned off:

(a) vxdctl enable
(b) vxdisk scandisks

Executing the above commands discovers all the devices and rebuilds the
device attribute list with new DMP device names. The VxVM device records are
then updated with these new attributes. Due to a bug in the code, the VxVM
device records are mapped to the wrong DMP devices.
 
Example:
 
The following are the devices before adding the new LUNs:
 
sun6130_0_16 auto            -            -            nolabel
sun6130_0_17 auto            -            -            nolabel
sun6130_0_18 auto:cdsdisk    disk_0       prod_SC32    online nohotuse
sun6130_0_19 auto:cdsdisk    disk_1       prod_SC32    online nohotuse
 
The following are the devices after adding the new LUNs:
 
sun6130_0_16 auto            -            -            nolabel
sun6130_0_17 auto            -            -            nolabel
sun6130_0_18 auto            -            -            nolabel
sun6130_0_19 auto            -            -            nolabel
sun6130_0_20 auto:cdsdisk    disk_0       prod_SC32    online nohotuse
sun6130_0_21 auto:cdsdisk    disk_1       prod_SC32    online nohotuse
 
The name of the VxVM device sun6130_0_18 is changed to sun6130_0_20.

RESOLUTION:
The code that updates the VxVM device records is rectified.
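
Since the problem configuration involves non-persistent enclosure based
names, enabling name persistence avoids device-name changes across
rediscovery; a hedged sketch:

# vxddladm get namingscheme
# vxddladm set namingscheme=ebn persistence=yes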

* 2364294 (Tracking ID: 2364253)

SYMPTOM:
In the case of space-optimized snapshots at the secondary site, VVR leaks
kernel memory.

DESCRIPTION:
In the case of space-optimized snapshots at the secondary site, VVR
proactively starts the copy-on-write on the snapshot volume. The I/O buffer
allocated for this proactive copy-on-write was not freed even after the I/Os
completed, which led to the memory leak.

RESOLUTION:
After the proactive copy-on-write is complete, memory allocated for the I/O
buffers is released.

* 2366071 (Tracking ID: 2366066)

SYMPTOM:
The VxVM (Veritas Volume Manager) vxstat command displays absurd statistics
for READ and WRITE operations on VxVM objects. The absurd values are close to
the maximum value of a 32-bit unsigned integer.

For example:
# vxstat -g <disk group name> -i <interval>

                      OPERATIONS          BLOCKS           AVG TIME(ms)
TYP NAME              READ     WRITE      READ     WRITE   READ  WRITE

<Start Time>
vol <volume name>       10       303       112      2045   6.15  14.43

<Start Time> + 60 seconds
vol <volume name>        2        67        32       476   6.00  14.28

<Start Time> + 60*2 seconds
vol <volume name>  4294967288 4294966980 4294967199 4294965129  0.00   0.00

DESCRIPTION:
vxio, a VxVM driver, uses a 32-bit unsigned integer variable to keep track of
the number of READ and WRITE blocks on VxVM objects. Whenever this integer
overflows, vxstat displays absurd statistics as shown in the SYMPTOM section
above.

RESOLUTION:
Both the vxio driver and the vxstat command have been modified to accommodate
a larger number of READ and WRITE blocks on VxVM objects.

Patch ID: 142629-10

* 2256685 (Tracking ID: 2080730)

SYMPTOM:
On Linux, exclusion of devices using the "vxdmpadm exclude" CLI is not
persistent across reboots.

DESCRIPTION:
On Linux, names of OS devices (/dev/sd*) are not persistent. The
"vxdmpadm exclude" CLI uses the OS device names to keep track of
devices to be excluded by VxVM/DMP. As a result, on reboot, if the OS
device names change, then the devices which are intended to be excluded
will be included again.

RESOLUTION:
The resolution is to use persistent physical path names to keep track of the
devices that have been excluded.

* 2256686 (Tracking ID: 2152830)

SYMPTOM:
Storage administrators sometimes create multiple copies (clones) of the same
device. Disk group import fails with a non-descriptive error message when
multiple clones of the same device exist and the original device(s) are
either offline or unavailable:

# vxdg import mydg
VxVM vxdg ERROR V-5-1-10978 Disk group mydg: import failed: 
No valid disk found containing disk group

DESCRIPTION:
If the original devices are offline or unavailable, vxdg import picks up
cloned disks for import. The import fails by design unless the clones are
tagged and the tag is specified during the import. While the import failure
is expected, the error message is non-descriptive and does not suggest any
corrective action to the user.

RESOLUTION:
A fix has been added to give a correct error message when duplicate clones
exist during import. Details of the duplicate clones are also reported in
the syslog.

Example:

[At CLI level]
# vxdg import testdg             
VxVM vxdg ERROR V-5-1-10978 Disk group testdg: import failed:
DG import duplcate clone detected

[In syslog]
vxvm:vxconfigd: warning V-5-1-0 Disk Group import failed: Duplicate clone disks are
detected, please follow the vxdg (1M) man page to import disk group with
duplicate clone disks. Duplicate clone disks are: c2t20210002AC00065Bd0s2 :
c2t50060E800563D204d1s2  c2t50060E800563D204d0s2 : c2t50060E800563D204d1s2

* 2256688 (Tracking ID: 2202710)

SYMPTOM:
Transactions on an RLINK are not allowed during an SRL-to-DCM flush.

DESCRIPTION:
The present implementation does not allow an RLINK transaction to go through
while an SRL-to-DCM flush is in progress. When the SRL overflows, VVR starts
reading from the SRL and marks the dirty regions in the corresponding DCMs of
the data volumes; this is called the SRL-to-DCM flush. During this flush,
transactions on the RLINK are not allowed. The time to complete the flush
depends on the SRL size and can range from minutes to many hours. If the user
initiates any transaction on the RLINK, it hangs until the SRL flush
completes.

RESOLUTION:
The code behavior is changed to allow RLINK transactions during the SRL
flush. The fix pauses the SRL flush to let the transaction go through, then
restarts the flush after the transaction completes.

* 2256689 (Tracking ID: 2233889)

SYMPTOM:
The volume recovery happens in a serial fashion when any of the volumes has a
log volume attached to it.

DESCRIPTION:
When recovery is initiated on a disk group, vxrecover creates a list for each
type of volume (cache volumes, data volumes, log volumes, and so on). The log
volumes are recovered serially by design. Due to a bug, the data volumes were
added to the log volume list whenever a log volume existed, so the data
volumes were also recovered serially if any volume had a log volume attached.

RESOLUTION:
The code was fixed so that the data volume list, the cache volume list and
the log volume list are maintained separately and the data volumes are not
added to the log volume list. The recovery for the volumes in each list is
done in parallel.
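
Recovery is typically initiated per disk group with vxrecover; with this fix
the data volumes recover in parallel. A brief sketch (the disk group name
"mydg" is illustrative):

# vxrecover -b -g mydg -s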

* 2256690 (Tracking ID: 2226304)

SYMPTOM:
On the Solaris 9 platform, newfs(1M)/mkfs_ufs(1M) cannot create a UFS file
system on a VxVM volume larger than 1 terabyte (TB) and displays the
following error:

# newfs /dev/vx/rdsk/<diskgroup name>/<volume>
newfs: construct a new file system /dev/vx/rdsk/<diskgroup name>/<volume>: 
(y/n)? y
Can not determine partition size: Inappropriate ioctl for device

# prtvtoc /dev/vx/rdsk/<diskgroup name>/<volume>
prtvtoc: /dev/vx/rdsk/<diskgroup name>/<volume>: Unknown problem reading VTOC

DESCRIPTION:
newfs(1M)/mkfs_ufs(1M) invokes the DKIOCGETEFI ioctl. During the enhancement
of EFI support on Solaris 10 in 5.0MP3RP3 and later, the DKIOCGETEFI ioctl
functionality was not implemented on Solaris 9 because of the following
limitations:

1. The EFI feature was not present in Solaris 9 FCS; it was introduced in
Solaris 9 U3 (4/03), which includes 114127-03 (libefi) and 114129-02 (libuuid
and efi/uuid headers).

2. During the enhancement of EFI support on Solaris 10, the DKIOCGVTOC ioctl
was only supported on volumes <= 1TB on Solaris 9, since the VTOC
specification is defined only for LUNs/volumes <= 1TB. If the size of the
volume is > 1TB, the DKIOCGVTOC ioctl would return an inaccurate vtoc
structure due to value overflow.

RESOLUTION:
The resolution is to enhance the VxVM code to handle the DKIOCGETEFI ioctl
correctly on VxVM volumes on the Solaris 9 platform. When
newfs(1M)/mkfs_ufs(1M) invokes the DKIOCGETEFI ioctl on a VxVM volume device,
VxVM returns the relevant EFI label information so that the UFS utilities can
determine the volume size correctly.

* 2256691 (Tracking ID: 2197254)

SYMPTOM:
vxassist, the VxVM volume creation utility, does not function as expected
when creating a volume with 'logtype=none'.

DESCRIPTION:
When volumes are created on thin reclaimable (thinrclm) disks, a Data Change
Object (DCO) version 20 log is attached to every volume by default. If the
user does not want this default behavior, the 'logtype=none' option can be
specified as a parameter to the vxassist command. But with VxVM on HP 11.31,
this option did not work and a DCO version 20 log was created by default. The
reason for this inconsistency is that when the 'logtype=none' option is
specified, the utility sets a flag to prevent creation of the log; however,
VxVM was not checking whether the flag was set before creating the DCO log,
which led to this issue.

RESOLUTION:
This logic issue is addressed by a code fix: the flag corresponding to
'logtype=none' is now checked before a DCO version 20 log is created by
default.
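
With the fix, the documented option behaves as expected. A brief sketch (the
disk group name, volume name and size are illustrative):

# vxassist -g mydg make vol01 10g logtype=none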

* 2256692 (Tracking ID: 2240056)

SYMPTOM:
'vxdg move/split/join' may fail during high I/O load.

DESCRIPTION:
Under heavy I/O load, a 'dg move' transaction may fail because of an
open/close assertion, and a retry is done. As the retry limit is set to 30,
'dg move' fails if the retries hit the limit.

RESOLUTION:
The default transaction retry count is changed to unlimited, and a new option
is introduced for 'vxdg move/split/join' to set the transaction retry limit,
as follows:

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] move 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o expand] [-o transretry=retrylimit] split 
src_diskgroup dst_diskgroup objects ...

vxdg [-f] [-o verify|override] [-o transretry=retrylimit] join src_diskgroup 
dst_diskgroup
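
For example, to allow up to 100 transaction retries during a move (the disk
group and object names are illustrative):

# vxdg -o expand -o transretry=100 move srcdg dstdg vol01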

* 2256722 (Tracking ID: 2215256)

SYMPTOM:
Volume Manager is unable to recognize devices connected through the F5100
HBA.

DESCRIPTION:
During device discovery, Volume Manager does not scan the LUNs that are
connected through a SAS HBA (F5100 is a new SAS HBA), so commands like
'vxdisk list' do not even show the LUNs connected through the F5100 HBA.

RESOLUTION:
The device discovery code in Volume Manager is modified to include the
paths/LUNs that are connected through a SAS HBA.

* 2257684 (Tracking ID: 2245121)

SYMPTOM:
RLINKs do not connect in NAT (Network Address Translation) configurations.

DESCRIPTION:
When VVR (Veritas Volume Replicator) is replicating over a Network Address
Translation (NAT) based firewall, RLINKs fail to connect, resulting in
replication failure.

RLINKs do not connect because of a failure during the exchange of VVR
heartbeats. For NAT-based firewalls, conversion of a mapped IPv6 (Internet
Protocol Version 6) address to an IPv4 (Internet Protocol Version 4) address
was not handled, so VVR heartbeats were exchanged with an incorrect IP
address, leading to heartbeat failure.

RESOLUTION:
Code fixes have been made to appropriately handle the exchange of VVR
heartbeats under NAT-based firewalls.

* 2268733 (Tracking ID: 2248730)

SYMPTOM:
A command hangs if "vxdg import" is called from a script with STDERR
redirected.

DESCRIPTION:
If a script runs "vxdg import" with STDERR redirected, the script does not
finish until the disk group import and recovery are finished. The pipe
between the script and vxrecover is not closed properly, which keeps the
calling script waiting for vxrecover to complete.

RESOLUTION:
STDERR is closed in vxrecover and its output is redirected to /dev/console.
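
The hang was seen in invocations such as the following, where a script
redirects STDERR (the disk group name and log path are illustrative):

# vxdg import mydg 2>>/var/tmp/vxdg-import.log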

* 2276324 (Tracking ID: 2270880)

SYMPTOM:
On Solaris 10 (SPARC only), if the size of an EFI (Extensible Firmware
Interface) labeled disk is greater than 2TB, the disk capacity is truncated
to 2TB when the disk is initialized with CDS (Cross-platform Data Sharing)
under VxVM (Veritas Volume Manager).

For example, the sector count shown by prtvtoc(1M) and the public region size
shown by vxdisk(1M) are truncated to approximately 2TB.

# prtvtoc /dev/rdsk/c0t500601604BA07D17d13
<snip>
*                          First      Sector    Last
* Partition  Tag  Flags    Sector     Count     Sector     Mount Directory
       2     15    00         48    4294967215  4294967262

# vxdisk list c0t500601604BA07D17d13 | grep public
public:    slice=2 offset=65744 len=4294901456 disk_offset=48

DESCRIPTION:
From VxVM 5.1 SP1 onwards, the CDS format is enhanced to support disks
greater than 1TB, using the EFI layout to provide CDS functionality for such
disks. However, on Solaris 10 (SPARC only), the disk capacity is truncated to
2TB if the size of the EFI-labeled disk is greater than 2TB.

This is because the library /usr/lib/libvxscsi.so in the Solaris 10 (SPARC
only) package does not contain the enhancement required to support the CDS
format for disks greater than 2TB.

RESOLUTION:
The VxVM package for Solaris has been changed to contain the libvxscsi.so
binaries built for each supported Solaris version, for example
libvxscsi.so.SunOS_5.9 and libvxscsi.so.SunOS_5.10.

From this fix onwards, the appropriate platform-specific build of the binary
is installed as /usr/lib/libvxscsi.so during the installation of the VxVM
package.


INSTALLING THE PATCH
--------------------
For Solaris 9 and 10 releases, refer to the man pages for instructions on
using the 'patchadd' and 'patchrm' scripts provided with Solaris.
Any other special or non-generic installation instructions are described
below as special instructions. The following example installs this patch on
a standalone machine:

        example# patchadd 142629-11


REMOVING THE PATCH
------------------
The following example removes this patch from a standalone system:

        example# patchrm 142629-11

For additional examples please see the appropriate man pages.


SPECIAL INSTRUCTIONS
--------------------
You need to use the shutdown command to reboot the system after patch
installation or de-installation:

    shutdown -g0 -y -i6


A Solaris 10 issue prevents this patch from installing completely.
Before installing this VM patch, install Solaris patch
119254-70 (or a later revision), which fixes the packaging,
installation and patch utilities. [Sun Bug ID 6337009]

Download Solaris 10 patch 119254-70 (or later) from Sun at
http://sunsolve.sun.com