This page lists publicly released patches for Veritas Enterprise Products.
For product GA builds, see the Veritas Entitlement Management System (VEMS) by clicking the Veritas Support 'Licensing' option.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vm-sol11_sparc-Patch-6.2.1.200
Obsolete
The latest patch(es): vm-sol11_sparc-Patch-6.2.1.500

 Basic information
Release type: Patch
Release date: 2016-01-13
OS update support: Solaris 11 SPARC Update 3
Technote: None
Documentation: None
Popularity: 2626 viewed    313 downloaded
Download size: 55.73 MB
Checksum: 1106678564

 Applies to one or more of the following products:
Dynamic Multi-Pathing 6.2 On Solaris 11 SPARC
Storage Foundation 6.2 On Solaris 11 SPARC
Storage Foundation Cluster File System 6.2 On Solaris 11 SPARC
Storage Foundation for Oracle RAC 6.2 On Solaris 11 SPARC
Storage Foundation HA 6.2 On Solaris 11 SPARC
Volume Manager 6.2 On Solaris 11 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
vm-sol11_sparc-Patch-6.2.1.500 2017-10-23
vm-sol11_sparc-Patch-6.2.1.400 (obsolete) 2017-10-10
vm-sol11_sparc-Patch-6.2.1.300 (obsolete) 2017-10-10

This patch requires: Release date
sfha-sol11_sparc-MR-6.2.1 2015-04-24

 Fixes the following incidents:
3795710, 3802857, 3803497, 3804299, 3812192, 3835562, 3847745, 3850890, 3851117, 3852148, 3854788, 3859226, 3862240, 3862632

 Patch ID:
VRTSvxvm-6.2.1.200

 Readme file
                          * * * READ ME * * *
               * * * Symantec Volume Manager 6.2.1 * * *
                         * * * Patch 200 * * *
                         Patch Date: 2016-01-07

NOTE:
----
The patch installer was updated on Nov. 07, 2016 to fix the following issue.
Incident: 3902959, The unload of the DMP driver fails during the "stop" operation of the install scripts.
SYMPTOM:
The installer fails to stop the DMP driver process (vxdmp) during the "stop" phase.
DESCRIPTION:
The "stop" operation fails to unload the DMP driver because of a positive reference count on the DMP driver handle. Stale entries under the /dev/vx/[r]dmp directories for the physical disk cause vxconfigd to take two additional holds on the vxdmp driver.
RESOLUTION:
The installer script has been modified to remove stale entries from the /dev/vx/[r]dmp directories.
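The cleanup that the resolution describes can be sketched in shell. This is an illustration only, not the installer's actual code: the directory path and the list of valid device nodes below are placeholders, and the sketch deliberately operates on a demo directory rather than the real /dev/vx/[r]dmp tree.

```shell
#!/bin/sh
# Sketch: remove entries under a DMP device directory that no longer match a
# known device. DMP_DIR and VALID_DEVICES are demo placeholders; the real
# installer derives valid names from current device discovery.
DMP_DIR=/tmp/demo_dmp
mkdir -p "$DMP_DIR"
touch "$DMP_DIR/c0t0d0s2"        # pretend this device node is still valid
touch "$DMP_DIR/c9t9d9s2"        # pretend this one is stale
VALID_DEVICES="c0t0d0s2"

for node in "$DMP_DIR"/*; do
    name=$(basename "$node")
    case " $VALID_DEVICES " in
        *" $name "*) ;;                           # still valid; keep it
        *) echo "removing stale entry: $name"
           rm -f "$node" ;;
    esac
done
```

On a patched system the installer performs the equivalent cleanup against /dev/vx/dmp and /dev/vx/rdmp before unloading the driver.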



This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Volume Manager 6.2.1 Patch 200 (Adds Solaris 11.3 Support)


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC (Update 3)


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Dynamic Multi-Pathing 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2
   * Symantec Volume Manager 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.2.1.200
* 3795710 (3508122) After one node preempts SCSI-3 reservation for the other node, the I/O from the victim node does not fail.
* 3802857 (3726110) On systems with high number of CPUs, Dynamic Multi-Pathing (DMP) devices may perform considerably slower than OS device paths.
* 3803497 (3802750) VxVM (Veritas Volume Manager) volume I/O-shipping functionality is not disabled even after the user issues the correct command to disable it.
* 3804299 (3804298) The setting/unsetting of the 'lfailed/lmissing' flag is not recorded in the syslog.
* 3812192 (3764326) VxDMP(Veritas Dynamic Multi-Pathing) repeatedly reports "failed to get devid".
* 3835562 (3835560) Auto-import of the diskgroup fails if some of the disks in diskgroup are missing.
* 3847745 (3677359) VxDMP (Veritas Dynamic MultiPathing) causes system panic after a shutdown or reboot.
* 3850890 (3603792) The first boot after a live upgrade to a new version of Solaris 11 
and VxVM (Veritas Volume Manager) takes a long time.
* 3851117 (3662392) In the Cluster Volume Manager (CVM) environment, if I/Os are getting executed 
on slave node, corruption can happen when the vxdisk resize(1M) command is 
executing on the master node.
* 3852148 (3852146) Shared DiskGroup (DG) fails to import when the "-c" and 
"-o noreonline" options are specified together.
* 3854788 (3783356) After Dynamic Multi-Pathing (DMP) module fails to load, dmp_idle_vector is not NULL.
* 3859226 (3287880) In a clustered environment, if a node doesn't have storage
connectivity to clone disks, then the vxconfigd on the node may dump core during
the clone disk group import.
* 3862240 (3856146) On the latest Solaris SPARC 11.2 SRUs and on Solaris SPARC 11.3, 
the system panics during reboot and fails to come up after turning off dmp_native_support.
* 3862632 (3769927) "vxdmpadm settune dmp_native_support=off" command fails on Solaris.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 6.2.1.200

* 3795710 (Tracking ID: 3508122)

SYMPTOM:
When running vxfentsthdw during the preempt key operation, the I/O on the victim node is expected to fail, but sometimes it does not.

DESCRIPTION:
After node 1 preempts the SCSI-3 reservations for node 2, write I/Os from victim node 2 are expected to fail. Sometimes the storage does not preempt all the keys of the victim node fast enough, yet it fails the I/O with a reservation conflict. In such a case, the victim node cannot correctly identify that it has been preempted and can still re-register its keys to perform the I/O.

RESOLUTION:
The code is modified to correctly identify that the SCSI-3 keys of a node have been preempted.

* 3802857 (Tracking ID: 3726110)

SYMPTOM:
On systems with a high number of CPUs, DMP devices may perform considerably slower than OS device paths.

DESCRIPTION:
In high CPU configuration, I/O statistics related functionality in DMP takes more CPU time because DMP statistics are collected on per CPU basis. This stat collection happens in DMP I/O code path hence it reduces the I/O performance. Because of this, DMP devices perform slower than OS device paths.

RESOLUTION:
The code is modified to remove some of the statistics-collection functionality from the DMP I/O code path. In addition, the following tunables need to be turned off: 
1. Turn off idle LUN probing: 
# vxdmpadm settune dmp_probe_idle_lun=off
2. Turn off statistics gathering: 
# vxdmpadm iostat stop

Notes: 
1. Apply this patch if the system has a large number of CPUs and DMP performs considerably slower than OS device paths. This issue does not apply to normal systems.

* 3803497 (Tracking ID: 3802750)

SYMPTOM:
Once VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned on, it is not getting disabled even after the user issues the correct command to disable it.

DESCRIPTION:
VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned off by default. The following two commands can be used to turn it on and off:
	vxdg -g <dgname> set ioship=on
	vxdg -g <dgname> set ioship=off

The command to turn off I/O-shipping is not working as intended because I/O-shipping flags are not reset properly.

RESOLUTION:
The code is modified to correctly reset I/O-shipping flags when the user issues the CLI command.

* 3804299 (Tracking ID: 3804298)

SYMPTOM:
For CVM (Cluster Volume Manager) environment, the setting/unsetting of the 'lfailed/lmissing' flag is not recorded in the syslog.

DESCRIPTION:
In a CVM environment, when VxVM (Veritas Volume Manager) discovers that a disk is not accessible from a node of the CVM cluster, it marks the LFAILED (locally failed) flag on the disk. When VxVM discovers that a disk is not discovered by DMP (Dynamic Multi-Pathing) on a node of the CVM cluster, it marks the LMISSING (locally missing) flag on the disk. Messages about the setting and unsetting of the 'lmissing/lfailed' flags are not recorded in the syslog.

RESOLUTION:
The code is modified to record the setting and unsetting of the 'lfailed/lmissing' flag in syslog.
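Once the fix is in place, the flag transitions should be visible in the system log. A minimal sketch of checking for them, assuming the Solaris default log location /var/adm/messages and the flag names as search keys (the exact message wording may differ):

```shell
#!/bin/sh
# Sketch: scan a syslog file for LFAILED/LMISSING flag messages.
# The sample message format is an assumption; adjust the pattern to your log.
scan_flags() {
    grep -iE 'lfailed|lmissing' "$1" 2>/dev/null \
        || echo "no lfailed/lmissing messages in $1"
}

# Demo against a small sample log; on a real host run:
#   scan_flags /var/adm/messages
printf '%s\n' 'vxvm:vxconfigd: disk d1: LFAILED flag set' > /tmp/demo_messages
scan_flags /tmp/demo_messages
```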

* 3812192 (Tracking ID: 3764326)

SYMPTOM:
VxDMP repeatedly reports warning messages in system log:
	WARNING: VxVM vxdmp V-5-0-2046 : Failed to get devid for device 
0x70259720
	WARNING: VxVM vxdmp V-5-3-2065 dmp_devno_to_devidstr ldi_get_devid 
failed for devno 0x13800000a60

DESCRIPTION:
Due to VxDMP code issue, the device path name is inconsistent during creation and deletion. It leaves stale device file under /devices. Because some devices don't support Solaris devid operations, the devid related functions fail against such devices. VxDMP doesn't skip such devices when creating or removing minor nodes.

RESOLUTION:
The code is modified to address the device path name inconsistency and skip devid manipulation for third party devices.

* 3835562 (Tracking ID: 3835560)

SYMPTOM:
Auto-import of the diskgroup fails if some of the disks in diskgroup are  missing.

DESCRIPTION:
Auto-import of a disk group fails if some of the disks in the disk group are missing. Veritas Volume Manager (VxVM) does not auto-import a disk group when some of its disks are missing, because doing so can lead to data corruption. However, in some situations auto-import may be required even with missing disks; a switch to force the auto-import is helpful in such cases.

RESOLUTION:
The code is modified to add the "forceautoimport" tunable for cases where auto-import of the disk group is required even with missing disks. It can be set with "vxtune forceautoimport on"; its value is off by default.
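A hedged sketch of using the new tunable; the vxtune invocations follow the resolution text above, and the guard makes the snippet a no-op on hosts where VxVM is not installed:

```shell
#!/bin/sh
# Sketch: enable forceautoimport (off by default per the resolution above).
# The guard keeps this harmless on hosts without VxVM.
set_forceautoimport() {
    vxtune_bin="$1"                     # normally just "vxtune"
    if command -v "$vxtune_bin" >/dev/null 2>&1; then
        "$vxtune_bin" forceautoimport on
        "$vxtune_bin" forceautoimport   # display the current value
    else
        echo "$vxtune_bin not found; VxVM is not installed on this host"
    fi
}
set_forceautoimport vxtune
```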

* 3847745 (Tracking ID: 3677359)

SYMPTOM:
VxDMP causes system panic after a shutdown or reboot with the following stack trace:
mutex_enter() 
volinfo_ioct()
volsioctl_real()
cdev_ioctl()
dmp_signal_vold()
dmp_throttle_paths()
dmp_process_stats()
dmp_daemons_loop()
thread_start()
or
panicsys()
vpanic_common()
panic+0x1c()
mutex_enter()
cdev_ioctl()
dmp_signal_vold()
dmp_check_path_state()
dmp_restore_callback()
dmp_process_scsireq()
dmp_daemons()
thread_start()

DESCRIPTION:
In a special scenario of system shutdown or reboot, the DMP (Dynamic MultiPathing) I/O statistic daemon tries to call the ioctl functions in VxIO module which is being unloaded. As a result, the system panics.

RESOLUTION:
The code is modified to stop the DMP I/O statistic daemon and DMP restore daemon before system shutdown or reboot. Also, the code is modified to avoid other probes to VxIO devices during shutdown.

* 3850890 (Tracking ID: 3603792)

SYMPTOM:
The first boot after a live upgrade to a new version of Solaris 11 and VxVM takes 
a long time because the post-installation step stalls for a long time.

DESCRIPTION:
In Solaris 11, the OS command devlinks, which was used to add /dev entries, 
stalled for a long time during VxVM post-installation. The OS command devfsadm 
should be used in the post-install script instead.

RESOLUTION:
The code is modified to replace devlinks with devfsadm in the post installation 
process of VxVM.

* 3851117 (Tracking ID: 3662392)

SYMPTOM:
In the CVM environment, if I/Os are getting executed on slave node, corruption 
can happen when the vxdisk resize(1M) command is executing on the master 
node.

DESCRIPTION:
During the first stage of the resize transaction, the master node re-adjusts 
the disk offsets and the public/private partition device numbers. 
On a slave node, the public/private partition device numbers are not adjusted 
properly. Because of this, the partition starting offset is added twice, which 
causes the corruption. The window during which the public/private partition 
device numbers are adjusted is small; corruption is observed only if I/O 
occurs during this window. After the resize operation completes, no further 
corruption occurs.

RESOLUTION:
The code has been changed to add partition starting offset properly to an I/O 
on slave node during execution of a resize command.

* 3852148 (Tracking ID: 3852146)

SYMPTOM:
Shared DiskGroup fails to import when the "-c" and "-o noreonline" options are
specified together, with the following error:

VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed:
Disk for disk group not found

DESCRIPTION:
When the "-c" option is specified, the DISKID and DGID of the disks in the DG 
are updated. When the information about the disks in the DG is passed to the 
slave node, the slave node does not have the latest information, because the 
disks are not brought online again when "-o noreonline" is specified. Since 
the slave node does not have the latest information, it cannot identify the 
proper disks belonging to the DG, and the DG import fails with "Disk for disk 
group not found".

RESOLUTION:
Code changes have been made so that the "-c" and "-o noreonline" options work 
together.
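For reference, the import that previously failed has the following shape, per the symptom above. The disk group name is a placeholder, and the exact option order should be checked against vxdg(1M); the guard keeps this harmless where VxVM is absent:

```shell
#!/bin/sh
# Sketch: shared disk group import with "-c" and "-o noreonline" together.
# "mydg" is a placeholder; consult vxdg(1M) for the precise semantics.
import_shared_dg() {
    dg="$1"
    if command -v vxdg >/dev/null 2>&1; then
        vxdg -s -o noreonline -c import "$dg" \
            || echo "import of $dg failed (expected if the DG does not exist)"
    else
        echo "vxdg not found; VxVM is not installed on this host"
    fi
}
import_shared_dg mydg
```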

* 3854788 (Tracking ID: 3783356)

SYMPTOM:
After DMP module fails to load, dmp_idle_vector is not NULL.

DESCRIPTION:
After a DMP module load failure, DMP resources are not cleared from system 
memory, so some of the resources retain non-NULL values. When the system 
retries the load, it frees this invalid data, leading to a system panic with 
the error message BAD FREE, because the data being freed is no longer valid.

RESOLUTION:
The code is modified to clear up the DMP resources when module failure happens.

* 3859226 (Tracking ID: 3287880)

SYMPTOM:
In a clustered environment, if a node doesn't have storage connectivity to clone
disks, then the vxconfigd on the node may dump core during the clone disk group
import. The stack trace is as follows:
 
chosen_rlist_delete()
dg_import_complete_clone_tagname_update()
req_dg_import()
vold_process_request()

DESCRIPTION:
In a clustered environment, if a node doesn't have storage connectivity to clone
disks due to improper cleanup handling in clone database, then the vxconfigd on
the node may dump core during the clone disk group import.

RESOLUTION:
The code has been modified to properly cleanup clone database.

* 3862240 (Tracking ID: 3856146)

SYMPTOM:
Two issues are hit on the latest SRUs of Solaris SPARC 11.2 (11.2.8 and 
greater) and on Solaris SPARC 11.3 when dmp_native_support is on:
1. Turning dmp_native_support on or off requires a reboot. The system panics 
during the reboot performed as part of setting dmp_native_support to off.
2. Sometimes the system comes up after the reboot with dmp_native_support set 
to off. In that case, a panic is observed when the system is rebooted after 
uninstallation of SF, and it fails to boot.
The panic string is the same for both issues:
panic[cpu0]/thread=20012000: read_binding_file: /etc/name_to_major file not 
found

DESCRIPTION:
The issue is caused by handling of the /etc/system and /etc/name_to_major 
files. As per the discussion with Oracle through SR 3-11640878941, removal of 
these two files from the boot archive causes the panic: /etc/name_to_major and 
/etc/system are included in the SPARC boot archive of Solaris 11.2.8.4.0 (and 
greater versions) and must not be removed. The system fails to come up if they 
are removed.

RESOLUTION:
The code has been modified to avoid panic while setting dmp_native_support to 
off.

* 3862632 (Tracking ID: 3769927)

SYMPTOM:
Turning off dmp_native_support tunable fails with the following errors:

VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more zpools 
VxVM vxdmpadm ERROR V-5-1-15686 The following zpool(s) could not be migrated 
as they are not healthy - <zpool_name>.

DESCRIPTION:
Turning off the dmp_native_support tunable fails even if the zpools are healthy.
The vxnative script doesn't allow turning off the dmp_native_support if it detects that the zpool is unhealthy, which means the zpool state is ONLINE and some action is required to be taken on zpool. "upgrade zpool" is considered as one of the actions indicating unhealthy zpool state. This is not correct.

RESOLUTION:
The code is modified to treat the "upgrade zpool" action as expected. Turning off the dmp_native_support tunable is supported when the pending action is "upgrade zpool".
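A sketch of the workflow this fix enables: confirm the pools report healthy, then turn native support off. Both commands appear in the text above; the guards are illustrative so the snippet degrades gracefully where the tools are missing:

```shell
#!/bin/sh
# Sketch: check zpool health, then disable dmp_native_support.
disable_native_support() {
    if command -v zpool >/dev/null 2>&1; then
        zpool status -x                   # expect "all pools are healthy"
    fi
    if command -v vxdmpadm >/dev/null 2>&1; then
        vxdmpadm settune dmp_native_support=off \
            || echo "settune failed; check zpool state (V-5-1-15686)"
    else
        echo "vxdmpadm not found; VxVM is not installed on this host"
    fi
}
disable_native_support
```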



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that installing this P-Patch causes downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch vm-sol11_sparc-Patch-6.2.1.200.tar.gz to /tmp
2. Untar vm-sol11_sparc-Patch-6.2.1.200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-sol11_sparc-Patch-6.2.1.200.tar.gz
    # tar xf /tmp/vm-sol11_sparc-Patch-6.2.1.200.tar
3. Install the hotfix (installing this P-Patch causes downtime):
    # cd /tmp/hf
    # ./installVRTSvxvm621P2 [<host1> <host2>...]

You can also install this patch together with the 6.2.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
o Before the upgrade:
  (a) Stop applications using any VxVM volumes.
  (b) Stop I/Os to all the VxVM volumes.
  (c) Unmount any filesystems with VxVM volumes.
  (d) In case of multiple boot environments, boot using the BE you wish to install the patch on.

For the Solaris 11 release, refer to the man pages for instructions on using the install and uninstall options of the 'pkg'
command provided with Solaris.
Any other special or non-generic installation instructions are described below under SPECIAL INSTRUCTIONS.
The following example installs the patch on a standalone machine:

        example# pkg install --accept -g /patch_location/VRTSvxvm.p5p VRTSvxvm

After 'pkg install', follow the mandatory configuration steps mentioned under SPECIAL INSTRUCTIONS.


REMOVING THE PATCH
------------------
The following example removes a patch from a standalone system:

        example# pkg uninstall VRTSvxvm
Note: Uninstalling the patch removes the entire package. If you need an earlier version of the package, install it from the original source media.


SPECIAL INSTRUCTIONS
--------------------
1) Delete '.vxvm-configured'
    # rm  /etc/vx/reconfig.d/state.d/.vxvm-configured
2) Refresh vxvm-configure
    # svcadm refresh vxvm-configure
3) Delete  'install-db'
    # rm /etc/vx/reconfig.d/state.d/install-db
4) Reboot the system using shutdown command.

You need to use the shutdown command to reboot the system after patch installation or de-installation:

    shutdown -g0 -y -i6
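The four special-instruction steps above can be collected into one sketch. It removes only the files named above, skips the service refresh where svcadm is unavailable, and prints the reboot command instead of running it, so nothing reboots unattended:

```shell
#!/bin/sh
# Sketch of the special instructions; prints the reboot step rather than
# executing it.
apply_special_instructions() {
    state=/etc/vx/reconfig.d/state.d
    rm -f "$state/.vxvm-configured"                  # step 1
    if command -v svcadm >/dev/null 2>&1; then
        svcadm refresh vxvm-configure                # step 2
    else
        echo "svcadm not found; skipping service refresh"
    fi
    rm -f "$state/install-db"                        # step 3
    echo "Now reboot with: shutdown -g0 -y -i6"      # step 4 (print only)
}
apply_special_instructions
```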

VRTSvxvm 6.2.1.100 is a Linux-specific patch; this is the first patch for Solaris on top of VRTSvxvm 6.2.1.


OTHERS
------
NONE


