This page lists publicly released patches for Veritas Enterprise products.
For product GA builds, see the Veritas Entitlement Management System (VEMS), available through the 'Licensing' option on the Veritas Support site.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
sfha-sles12sp1_x86_64-Patch-6.2.1.200

 Basic information
Release type: Patch
Release date: 2016-07-04
OS update support: SLES12 x86-64 SP 1
Technote: http://www.veritas.com/docs/000107832
Documentation: None
Popularity: 785 viewed    73 downloaded
Download size: 109.66 MB
Checksum: 343168599
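The checksum above can be used to verify the download. This page does not state the algorithm, so the sketch below assumes a POSIX `cksum`-style CRC and uses a placeholder file in place of the real archive:

```shell
# Sketch: verify a downloaded patch archive against a published checksum.
# The file and expected value here are placeholders; substitute the real
# archive name and the checksum shown on this page (343168599).
file=patch-demo.tar.gz
printf 'placeholder payload' > "$file"      # stands in for the real download

actual=$(cksum "$file" | awk '{print $1}')  # first cksum field is the CRC
expected=$actual                            # normally: the published checksum

if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $actual"
else
    echo "checksum MISMATCH: got $actual, expected $expected" >&2
fi
```

With the real archive, `expected` would be set to the value published above; a mismatch indicates a corrupted or incomplete download.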

 Applies to one or more of the following products:
Application HA 6.2.1 On SLES12 x86-64
Cluster Server 6.2.1 On SLES12 x86-64
Dynamic Multi-Pathing 6.2.1 On SLES12 x86-64
File System 6.2.1 On SLES12 x86-64
Storage Foundation 6.2.1 On SLES12 x86-64
Storage Foundation Cluster File System 6.2.1 On SLES12 x86-64
Storage Foundation for Oracle RAC 6.2.1 On SLES12 x86-64
Storage Foundation HA 6.2.1 On SLES12 x86-64
Volume Manager 6.2.1 On SLES12 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vcs-sles12sp1_x86_64-Patch-6.2.1.100 (obsolete), released 2016-04-12
sfha-sles12sp1_x86_64-Patch-6.2.1.100 (obsolete), released 2016-03-21

 Fixes the following incidents:
3752475, 3753724, 3754492, 3756002, 3765324, 3765998, 3769992, 3780334, 3793241, 3795710, 3798437, 3802857, 3803497, 3804299, 3808134, 3808285, 3816222, 3817120, 3821688, 3821699, 3823288, 3848011, 3849458, 3864460, 3864963, 3865600, 3865603, 3865605, 3865608, 3865826, 3865885, 3865905, 3868570, 3869873, 3870949, 3871617

 Patch ID:
VRTSvxfs-6.2.1.200-SLES12
VRTSglm-6.2.1.200-SLES12
VRTSgms-6.2.1.100-SLES12
VRTSodm-6.2.1.100-SLES12
VRTSveki-6.2.1.100-SLES12
VRTSdbac-6.2.1.100-SLES12
VRTSgab-6.2.1.200-SLES12
VRTSllt-6.2.1.300-SLES12
VRTSvxfen-6.2.1.200-SLES12
VRTSamf-6.2.1.200-SLES12
VRTSvxvm-6.2.1.200-SLES12
VRTSaslapm-6.2.1.400-SLES12
VRTSvcs-6.2.1.100-SLES12
VRTSvcsea-6.2.1.100-SLES12

 Readme file
                          * * * READ ME * * *
            * * * Symantec Storage Foundation HA 6.2.1 * * *
                         * * * Patch 200 * * *
                         Patch Date: 2016-06-28


Note:
----
The CPI installer of this patch was updated on Dec. 6, 2016 to fix the following incident.
Incident: 3906229
SYMPTOM:
The installer fails to install the VRTSvxvm, VRTSaslapm, and VRTScavf packages
when installing a 6.2.1 product with the patch sfha-sles12sp1_x86_64-Patch-6.2.1.200.

DESCRIPTION:
The installer fails to install the VRTSvxvm, VRTSaslapm, and VRTScavf packages
when installing a 6.2.1 product with the patch sfha-sles12sp1_x86_64-Patch-6.2.1.200.

RESOLUTION:
The installer code is modified to fix this issue.


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Storage Foundation HA 6.2.1 Patch 200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSvcs
VRTSvcsea
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Application HA 6.2.1
   * Symantec Cluster Server 6.2.1
   * Symantec Dynamic Multi-Pathing 6.2.1
   * Symantec File System 6.2.1
   * Symantec Storage Foundation 6.2.1
   * Symantec Storage Foundation Cluster File System HA 6.2.1
   * Symantec Storage Foundation for Oracle RAC 6.2.1
   * Symantec Storage Foundation HA 6.2.1
   * Symantec Volume Manager 6.2.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-6.2.1.200-SLES12
* 3780334 (3762580) In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when PowerPath coexists with DMP (Dynamic Multi-Pathing).
* 3795710 (3508122) After one node preempts SCSI-3 reservation for the other node, the I/O from the victim node does not fail.
* 3802857 (3726110) On systems with high number of CPUs, Dynamic Multi-Pathing (DMP) devices may perform considerably slower than OS device paths.
* 3803497 (3802750) VxVM (Veritas Volume Manager) volume I/O-shipping functionality is not disabled even after the user issues the correct command to disable it.
* 3804299 (3804298) The setting/unsetting of the 'lfailed/lmissing' flag is not recorded in the syslog.
* 3808134 (3808135) While installing the hotfix (HF) on top of Veritas Volume Manager (VxVM) 6.2.1, symbolic link creation fails because stale links from the previous installation are not cleaned up by the installation scripts.
* 3816222 (3816219) VxDMP event source daemon keeps reporting UDEV change event in syslog.
* 3823288 (3823283) While unencapsulating a boot disk in a SAN (Storage Area Network) environment, the Linux operating system gets stuck in GRUB after a reboot.
* 3848011 (3806909) Due to a modification in licensing, the DMP keyless license did not work for standalone DMP.
* 3849458 (3776520) Filters are not updated properly in the lvm.conf file in the VxDMP initrd (initial ramdisk) while Dynamic Multi-Pathing (DMP) Native Support is being enabled.
* 3865905 (3865904) VRTSvxvm patch version 6.2.1 failed to install on the new SLES12 SP1 kernel update.
* 3870949 (3845712) On SLES12, after encapsulation is performed, the system is not able to boot through the vxvm_root GRUB entry.
Patch ID: VRTSaslapm-6.2.1.310-SLES12
* 3868570 (3868569) During VRTSaslapm package installation on SLES12 SP1 (the new SUSE 12 update), the APMs (Array Policy Modules) do not load properly.
Patch ID: VRTSvxfs-6.2.1.200-SLES12
* 3864963 (3853338) Files on VxFS are corrupted while running the sequential asynchronous write workload under high memory pressure.
* 3865600 (3865599) VxFS module failed to load on SLES12 SP1.
Patch ID: VRTSvxfs-6.2.1.100-SLES12
* 3753724 (3731844) umount -r option fails for vxfs 6.2.
* 3754492 (3761603) Internal assert failure because of invalid extop processing at mount time.
* 3756002 (3764824) Internal cluster file system (CFS) testing hit a debug assert.
* 3765324 (3736398) NULL pointer dereference panic in lazy unmount.
* 3765998 (3759886) In case of nested mount, force umount of parent leaves 
stale child entry in /etc/mtab even after subsequent umount of child.
* 3769992 (3729158) Deadlock due to incorrect locking order between write advise
and dalloc flusher thread.
* 3793241 (3793240) The vxrestore command dumps a core file because of invalid Japanese strings.
* 3798437 (3812914) On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.
* 3808285 (3808284) The Japanese text of 'fsdedupadm status' includes a garbled character.
* 3817120 (3804400) VRTS/bin/cp does not return any error when the quota hard limit is reached and a partial write is encountered.
* 3821688 (3821686) VxFS module failed to load on SLES11 SP4.
Patch ID: VRTSodm-6.2.1.100-SLES12
* 3865603 (3865602) ODM module failed to load on SLES12 SP1.
Patch ID: VRTSglm-6.2.1.200-SLES12
* 3865605 (3865604) GLM module failed to load on SLES12 SP1.
Patch ID: VRTSglm-6.2.1.100-SLES12
* 3752475 (3758102) In Cluster File System (CFS) on Linux, a stack overflow occurs while creating an ODM file.
* 3821699 (3821698) GLM module failed to load on SLES11 SP4.
Patch ID: VRTSgms-6.2.1.100-SLES12
* 3865608 (3865607) GMS module failed to load on SLES12 SP1.
Patch ID: VRTSamf-6.2.1.200-SLES12
* 3865826 (3865825) Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).
Patch ID: VRTSgab-6.2.1.200-SLES12
* 3865826 (3865825) Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).
Patch ID: VRTSllt-6.2.1.300-SLES12
* 3865826 (3865825) Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).
Patch ID: VRTSvxfen-6.2.1.200-SLES12
* 3865826 (3865825) Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).
Patch ID: VRTSveki-6.2.1.100-SLES12
* 3865885 (3865884) VEKI module failed to load on SLES12 SP1.
Patch ID: VRTSdbac-6.2.1.100-SLES12
* 3864460 (3866590) 6.2.1 vcsmm module does not load with SLES12SP1 (3.12.49-11-default)
Patch ID: VRTSvcs-6.2.1.100-SLES12
* 3869873 (3869872) HAD (High Availability Daemon) starts in 'user.slice' after the 'hastart' operation.
Patch ID: VRTSvcsea-6.2.1.100-SLES12
* 3871617 (3871614) With HAD in user.slice, Oracle (or applications) are not shut down gracefully during system reboot.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-6.2.1.200-SLES12

* 3780334 (Tracking ID: 3762580)

SYMPTOM:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when PowerPath coexists with DMP (Dynamic Multi-Pathing). The following logs are printed while setting up fencing for the cluster.

VXFEN: vxfen_reg_coord_pt: end ret = -1
vxfen_handle_local_config_done: Could not register with a majority of the
coordination points.

DESCRIPTION:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the interface used by DMP to send the SCSI commands to block devices does not transfer the 
data to or from the device. Therefore, the SCSI-3 PR keys do not get registered.

RESOLUTION:
The code is modified to use SCSI request_queue to send the SCSI commands to the 
underlying block device.
An additional patch is required from EMC to support processing SCSI commands via the request_queue mechanism on EMC PowerPath devices. Please contact EMC for patch details for a specific kernel version.

* 3795710 (Tracking ID: 3508122)

SYMPTOM:
When running vxfentsthdw during the preempt-key operation, the I/O on the victim node is expected to fail, but sometimes it does not.

DESCRIPTION:
After node 1 preempts the SCSI-3 reservations for node 2, write I/Os from victim node 2 are expected to fail. It is observed that sometimes the storage does not preempt all the keys of the victim node fast enough, but it fails the I/O with a reservation conflict. In such a case, the victim node cannot correctly identify that it has been preempted and can still re-register keys to perform the I/O.

RESOLUTION:
The code is modified to correctly identify that the SCSI-3 keys of a node have been preempted.

* 3802857 (Tracking ID: 3726110)

SYMPTOM:
On systems with a high number of CPUs, DMP devices may perform considerably slower than OS device paths.

DESCRIPTION:
In high-CPU configurations, the I/O statistics functionality in DMP takes more CPU time because DMP statistics are collected on a per-CPU basis. This collection happens in the DMP I/O code path, which reduces I/O performance. Because of this, DMP devices perform slower than OS device paths.

RESOLUTION:
The code is modified to remove some of the statistics-collection functionality from the DMP I/O code path. Along with this, the following tunables need to be turned off:
1. Turn off idle LUN probing:
# vxdmpadm settune dmp_probe_idle_lun=off
2. Turn off statistics gathering:
# vxdmpadm iostat stop

Note:
Apply this patch if the system has a large number of CPUs and DMP performs considerably slower than OS device paths. This issue does not apply to normal systems.

* 3803497 (Tracking ID: 3802750)

SYMPTOM:
Once the VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned on, it does not get disabled even after the user issues the correct command to disable it.

DESCRIPTION:
VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned off by default. The following two commands can be used to turn it on and off:
	vxdg -g <dgname> set ioship=on
	vxdg -g <dgname> set ioship=off

The command to turn off I/O-shipping is not working as intended because I/O-shipping flags are not reset properly.

RESOLUTION:
The code is modified to correctly reset I/O-shipping flags when the user issues the CLI command.

* 3804299 (Tracking ID: 3804298)

SYMPTOM:
For CVM (Cluster Volume Manager) environment, the setting/unsetting of the 'lfailed/lmissing' flag is not recorded in the syslog.

DESCRIPTION:
In a CVM environment, when VxVM (Veritas Volume Manager) discovers that a disk is not accessible from a node of the CVM cluster, it marks the LFAILED (locally failed) flag on the disk. And when VxVM discovers that a disk is not discovered by DMP (Dynamic Multi-Pathing) on a node of the CVM cluster, it marks the LMISSING (locally missing) flag on the disk. Messages for the setting and unsetting of the 'lmissing/lfailed' flag are not recorded in the syslog.

RESOLUTION:
The code is modified to record the setting and unsetting of the 'lfailed/lmissing' flag in syslog.

* 3808134 (Tracking ID: 3808135)

SYMPTOM:
While you install the HF on top of VxVM 6.2.1, the following errors are reported:
ln: failed to create symbolic link '/etc/rc.d/init.d/vras-vradmind': File exists
ln: failed to create symbolic link '/etc/rc.d/init.d/vxrsyncd': File exists

DESCRIPTION:
During the installation of the HF, symbolic links are recreated, but the stale links from the previously installed VxVM are not deleted; therefore the errors occur.

RESOLUTION:
The code is modified so that the HF installs smoothly on top of VxVM 6.2.1.

* 3816222 (Tracking ID: 3816219)

SYMPTOM:
VxDMP  (Veritas Dynamic Multi-Pathing) event source daemon (vxesd) keeps 
reporting a lot of messages in syslog as below:
"vxesd: Device sd*(*/*) is changed"

DESCRIPTION:
The vxesd daemon registers with the UDEV framework and keeps VxDMP up to date 
with device status. Whenever a device changes, vxesd reports the "change" event 
it receives from UDEV. VxDMP only cares about the "add" and "remove" UDEV 
events, so logging of UDEV "change" events can be avoided.

RESOLUTION:
The code is modified to stop logging UDEV change-event related messages in 
syslog.

* 3823288 (Tracking ID: 3823283)

SYMPTOM:
The Linux operating system gets stuck in GRUB after a reboot. A manual kernel 
load is required to make the operating system functional.

DESCRIPTION:
During unencapsulation of a boot disk in a SAN environment, multiple entries 
corresponding to the root disk are found in the by-id device directory. As a 
result, a parse command fails, leading to the creation of an improper menu 
file in the grub directory. This menu file defines the device path from which 
to load the kernel and other modules.

RESOLUTION:
The code is modified to handle multiple entries for SAN boot disk.

* 3848011 (Tracking ID: 3806909)

SYMPTOM:
During Volume Manager installation using CPI in keyless mode, the following 
logs were observed:
VxVM vxconfigd DEBUG  V-5-1-5736 No BASIC license
VxVM vxconfigd ERROR  V-5-1-1589 enable failed: License has expired or is 
not available for operation transactions are disabled.

DESCRIPTION:
While using CPI for a standalone DMP installation in keyless mode, the Volume 
Manager daemon (vxconfigd) could not be started because of a modification in 
the DMP NATIVE license string used for license verification, which caused the 
verification to fail.

RESOLUTION:
Appropriate code changes are incorporated so that the DMP keyless license 
works with standalone DMP.

* 3849458 (Tracking ID: 3776520)

SYMPTOM:
Filters are not updated properly in lvm.conf file in VxDMP initrd while DMP Native Support is being enabled. As a result, root Logical Volume 
(LV) is mounted on OS device upon reboot.

DESCRIPTION:
From LVM version 105, global_filter was introduced as part of lvm.conf file. VxDMP updates initird lvm.conf file with the filters required for 
DMP Native Support to function. While updating the lvm.conf, VxDMP checks for the filter field to be updated, but ideally we should check for 
global_filter field to be updated in the latest LVM version. This leads to lvm.conf file not updated with the proper filters.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file in VxDMP initrd.

* 3865905 (Tracking ID: 3865904)

SYMPTOM:
The installation fails while installing VxVM 6.2.1 on SLES12 SP1:

superm2028-49472:/sles12sp0 # rpm -ivh VRTSvxvm-6.2.1.000-
SLES12.x86_64.rpm
Preparing...                          ################################# 
[100%]
Updating / installing...
   1:VRTSvxvm-6.2.1.000-SLES12        ################################# 
[100%]
Installing file /etc/init.d/vxvm-boot
vxvm-boot                 0:off  1:off  2:on   3:on   4:on   5:on   
6:off
creating VxVM device nodes under /dev
ERROR: No appropriate modules found. Error in loading module "vxdmp". 
See 
documentation.
warning: %post(VRTSvxvm-6.2.1.000-SLES12.x86_64) scriptlet failed, exit 
status 1

DESCRIPTION:
Installation of the VRTSvxvm patch version 6.2.1 failed on SLES12 SP1 due to 
a change in the kernel version.

RESOLUTION:
The VxVM package has been re-compiled with SLES12SP1 build environment.

* 3870949 (Tracking ID: 3845712)

SYMPTOM:
After booting through the vxvm_root entry of GRUB, the following message is 
seen at the console:
"/dev/disk/by-uuid/**UUID** doesn't exist."

DESCRIPTION:
When the VxVM-initrd image is created as part of root device encapsulation, 
it contains the old UUID of the swap device. When VxVM later creates swap 
volumes, it uses a new UUID for the swap device, which is not present in the 
VxVM-initrd image. While booting, dracut checks the mapping from swap UUID to 
device ID; the UUID does not match the swap device UUID present in the 
VxVM-initrd image, hence this issue.

RESOLUTION:
Code changes are implemented so that the UUID checks for the swap device in 
dracut pass properly.

Patch ID: VRTSaslapm-6.2.1.310-SLES12

* 3868570 (Tracking ID: 3868569)

SYMPTOM:
lsmod does not show the required APM modules as loaded.

DESCRIPTION:
To support the SLES12 SP1 update, the dmp module is recompiled with the latest SLES12 SP1 kernel version. During post-install of the package, the APM modules fail to load due to a kernel-version mismatch between the dmp module and the additional APM modules.

RESOLUTION:
The ASLAPM package is recompiled with the SLES12 SP1 kernel.

Patch ID: VRTSvxfs-6.2.1.200-SLES12

* 3864963 (Tracking ID: 3853338)

SYMPTOM:
Files on VxFS are corrupted while running the sequential write workload under high memory pressure.

DESCRIPTION:
VxFS may miss out writes sometimes under excessive write workload. Corruption occurs because of the race between the writer thread which is doing sequential asynchronous writes and the flusher thread which flushes the in-core dirty pages. Due to an overlapping write, they are serialized 
over a page lock. Because of an optimization, this lock is released, leading to a small window where the waiting thread could race.

RESOLUTION:
The code is modified to fix the race by reloading the inode write size after taking the page lock.

* 3865600 (Tracking ID: 3865599)

SYMPTOM:
VxFS module failed to load on SLES12 SP1.

DESCRIPTION:
SLES12 SP1 is a new release; therefore, the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for SLES12 SP1.

Patch ID: VRTSvxfs-6.2.1.100-SLES12

* 3753724 (Tracking ID: 3731844)

SYMPTOM:
The umount -r option fails for VxFS 6.2 with the error "invalid options".

DESCRIPTION:
Until 6.2, VxFS did not have a umount helper on Linux. A helper was added in 
6.2; because of this, each call to Linux's umount is also forwarded to the 
umount helper binary. As a result, the -r option, which was previously handled 
only by the native Linux umount, is forwarded to the umount.vxfs helper, which 
exits while processing the option string because read-only remounts are not 
supported.

RESOLUTION:
The umount.vxfs code is changed not to exit on the -r option. Read-only 
remounts are still not supported, so if umount -r actually fails and the OS 
umount attempts a read-only remount, the mount.vxfs binary exits with an 
error. This allows Linux's default scripts to work with VxFS.

* 3754492 (Tracking ID: 3761603)

SYMPTOM:
The full fsck flag is set incorrectly at mount time.

DESCRIPTION:
Extop processing may be deferred during umount (for example, in case of a 
crash or disk failure) and kept on disk so that mount can process it. During 
mount, an inode can have multiple extops set. Previously, if an inode had the 
trim and reorg extops set during mount, the full fsck flag was set 
incorrectly. This patch avoids that situation.

RESOLUTION:
Code is modified to avoid such unnecessary setting of fullfsck.

* 3756002 (Tracking ID: 3764824)

SYMPTOM:
Internal cluster file system (CFS) testing hit a debug assert.

DESCRIPTION:
An internal debug assert is seen when GLM recovery occurs while one of the 
secondary nodes is doing a mount, specifically when GLM recovery happens 
between attaching a file system and mounting it.

RESOLUTION:
The code is modified to handle the GLM reconfiguration issue.

* 3765324 (Tracking ID: 3736398)

SYMPTOM:
Panic in the lazy unmount path during deinit of VxFS-VxVM API.

DESCRIPTION:
The panic is caused when an exiting thread drops the last reference
to a lazy-unmounted VxFS file-system where that fs is the last VxFS mount in the
system. The exiting thread does unmount, which then calls into VxVM to
de-initialize the private FS-VM API(as it is the last VxFS mounted fs). 
The function to be called in VxVM is looked up via the files under /proc; this
requires opening a file, but the exit processing has removed the structs
needed by the thread to open a file.

RESOLUTION:
The solution is to cache the de-init function (vx_fsvm_api_deinit)
when the VxFS-VxVM API is initialized, so no function look-up is needed during
an unmount. The cached function pointer can then be called during the last
unmount bypassing the need to open the file by the exiting thread.

* 3765998 (Tracking ID: 3759886)

SYMPTOM:
In case of nested mount, force umount of parent leaves stale child 
entry in /etc/mtab even after subsequent umount of child.

DESCRIPTION:
On RHEL6 and SLES11, in the case of a nested mount, if the parent mount 
(say /mnt1) is removed or unmounted forcefully, then child mounts (like 
/mnt1/dir) also get unmounted, but the /etc/mtab entry is not updated 
accordingly for the child mount. Previously it was possible to remove such 
child entries from /etc/mtab by using the OS's umount binary. But from the 
Shikra release onwards, a helper umount binary is provided as 
/sbin/umount.vxfs. The OS's umount binary now calls this helper, which in 
turn calls vxumount for the child unmount; this fails since the path is not 
present. Hence the mtab entry does not get updated and shows the child as 
mounted.

RESOLUTION:
The code is modified to update the mnttab when the ENOENT error is returned 
by the umount() system call.

* 3769992 (Tracking ID: 3729158)

SYMPTOM:
fuser and other commands hang on vxfs file systems.

DESCRIPTION:
The hang is seen while two threads contend for two locks, ILOCK and PLOCK. 
The writeadvise thread owns the ILOCK but is waiting for the PLOCK. 
The dalloc thread owns the PLOCK and is waiting for the ILOCK.

RESOLUTION:
The code is modified to take the locks in the correct order: PLOCK followed by ILOCK.

* 3793241 (Tracking ID: 3793240)

SYMPTOM:
The vxrestore command dumps a core file because of invalid Japanese strings.

DESCRIPTION:
The vxrestore command dumps a core file because invalid characters such as 
% and $ are present in the Japanese strings.

RESOLUTION:
The code is modified to remove the extra characters from the Japanese 
message strings.

* 3798437 (Tracking ID: 3812914)

SYMPTOM:
On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.

DESCRIPTION:
On RHEL 6.5 and RHEL 6.4 latest kernel patch, additional OS counters were added in
the super block to track inotify Watches. These new counters were not implemented
in VxFS for RHEL6.5/RHEL6.4 kernel. Hence, while doing umount, the operation hangs
until the counter in the superblock drops to zero, which would never happen since
they are not handled in VxFS.

RESOLUTION:
The code is modified to handle additional counters added in super block of
RHEL6.5/RHEL6.4 latest kernel.

* 3808285 (Tracking ID: 3808284)

SYMPTOM:
The Japanese text of 'fsdedupadm status' includes a garbled character.

DESCRIPTION:
The Japanese translation of the English string "FAILED" is incorrect: it 
reads as "failed I/O", which does not match the meaning of the original 
English string.

RESOLUTION:
The Japanese translation of the "FAILED" string is corrected.

* 3817120 (Tracking ID: 3804400)

SYMPTOM:
VRTS/bin/cp does not return any error when the quota hard limit is reached 
and a partial write is encountered.

DESCRIPTION:
When the quota hard limit is reached, VRTS/bin/cp may encounter a partial 
write, but it may not return any error to the upper-layer application in 
that situation.

RESOLUTION:
VRTS/bin/cp is adjusted to detect a partial write caused by the quota limit 
and return a proper error to the upper-layer application.

* 3821688 (Tracking ID: 3821686)

SYMPTOM:
VxFS module might not get loaded on SLES11 SP4.

DESCRIPTION:
SLES11 SP4 is a new release; therefore, the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for SLES11 SP4.

Patch ID: VRTSodm-6.2.1.100-SLES12

* 3865603 (Tracking ID: 3865602)

SYMPTOM:
ODM module may not get loaded on SLES12 SP1.

DESCRIPTION:
SLES12 SP1 is a new release; therefore, the ODM module was not getting loaded on it.

RESOLUTION:
Added ODM support for SLES12 SP1.

Patch ID: VRTSglm-6.2.1.200-SLES12

* 3865605 (Tracking ID: 3865604)

SYMPTOM:
GLM module will not get loaded on SLES12 SP1.

DESCRIPTION:
SLES12 SP1 is a new release; therefore, the GLM module failed to load on it.

RESOLUTION:
Added GLM support for SLES12 SP1.

Patch ID: VRTSglm-6.2.1.100-SLES12

* 3752475 (Tracking ID: 3758102)

SYMPTOM:
In Cluster File System (CFS) on Linux, a stack overflow occurs while 
creating an ODM file.

DESCRIPTION:
In the case of CFS, while creating an ODM file, the cluster inode needs 
initialization, which takes a GLM (Group Lock Manager) lock. While taking the 
GLM lock, processing within the GLM module may lead to a system panic due to 
a stack overflow in Linux while doing memory allocation.

RESOLUTION:
The handoff values are modified.

* 3821699 (Tracking ID: 3821698)

SYMPTOM:
GLM module will not get loaded on SLES11 SP4.

DESCRIPTION:
SLES11 SP4 is a new release; therefore, the GLM module failed to load on it.

RESOLUTION:
Added GLM support for SLES11 SP4.

Patch ID: VRTSgms-6.2.1.100-SLES12

* 3865608 (Tracking ID: 3865607)

SYMPTOM:
GMS module will not get loaded on SLES12 SP1.

DESCRIPTION:
SLES12 SP1 is a new release; therefore, the GMS module failed to load on it.

RESOLUTION:
Added GMS support for SLES12 SP1.

Patch ID: VRTSamf-6.2.1.200-SLES12

* 3865826 (Tracking ID: 3865825)

SYMPTOM:
Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).

DESCRIPTION:
VCS did not support SLES versions released after SLES 12.

RESOLUTION:
VCS support for SLES 12 SP1 is now introduced.

Patch ID: VRTSgab-6.2.1.200-SLES12

* 3865826 (Tracking ID: 3865825)

SYMPTOM:
Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).

DESCRIPTION:
VCS did not support SLES versions released after SLES 12.

RESOLUTION:
VCS support for SLES 12 SP1 is now introduced.

Patch ID: VRTSllt-6.2.1.300-SLES12

* 3865826 (Tracking ID: 3865825)

SYMPTOM:
Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).

DESCRIPTION:
VCS did not support SLES versions released after SLES 12.

RESOLUTION:
VCS support for SLES 12 SP1 is now introduced.

Patch ID: VRTSvxfen-6.2.1.200-SLES12

* 3865826 (Tracking ID: 3865825)

SYMPTOM:
Veritas Cluster Server (VCS) does not support SUSE Linux Enterprise Server 
12 Service Pack 1 (SLES 12 SP1).

DESCRIPTION:
VCS did not support SLES versions released after SLES 12.

RESOLUTION:
VCS support for SLES 12 SP1 is now introduced.

Patch ID: VRTSveki-6.2.1.100-SLES12

* 3865885 (Tracking ID: 3865884)

SYMPTOM:
VEKI module will not get loaded on SLES12 SP1.

DESCRIPTION:
SLES12 SP1 is a new release; therefore, the VEKI module failed to load on it.

RESOLUTION:
Added VEKI support for SLES12 SP1.

Patch ID: VRTSdbac-6.2.1.100-SLES12

* 3864460 (Tracking ID: 3866590)

SYMPTOM:
VRTSdbac patch version does not work with SLES12SP1 (3.12.49-11-default 
kernel) and is unable to load the vcsmm module on SLES12SP1.

DESCRIPTION:
Installation of VRTSdbac patch version 6.2.1 fails on SLES12SP1 as 
the VCSMM module is not available on SLES12SP1 kernel 3.12.49-11-default. 
The system log file logs the following messages:

Starting VCSMM:
ERROR: No appropriate modules found.
Error in loading module "vcsmm". See documentation.
Error : VCSMM driver could not be loaded.
Error : VCSMM could not be started.

RESOLUTION:
The VRTSdbac package is re-compiled with the SLES12SP1 kernel in the build 
environment to mitigate the failure.

Patch ID: VRTSvcs-6.2.1.100-SLES12

* 3869873 (Tracking ID: 3869872)

SYMPTOM:
After executing the hastart command, the HAD, hashadow and agent processes 
start in user.slice instead of system.slice.

DESCRIPTION:
After a reboot operation, HAD, hashadow, CmdServer and agent processes 
start in system.slice under vcs.service. But after executing the hastop operation 
followed by the hastart operation, CmdServer remains in 'system.slice', while HAD, 
hashadow and the agent processes wrongly start in 'user.slice'. If the system 
reboots or shuts down while HAD is in user.slice, the HAD gets killed before it can 
execute offline operations for the resources.

RESOLUTION:
The code is modified to start HAD, hashadow and agent processes under 
'system.slice'.
Note: This issue is specific to SLES 12 SP1

Patch ID: VRTSvcsea-6.2.1.100-SLES12

* 3871617 (Tracking ID: 3871614)

SYMPTOM:
If HAD is running in user.slice, then during a system reboot, Oracle (or its applications) running as a non-root user does not shut down gracefully.

DESCRIPTION:
On RHEL7 and SLES12, systemd is enabled. Therefore, all processes running under user.slice are killed on system reboot. Since Oracle (or its applications) runs under user.slice by default, a system reboot may cause Oracle to crash or undergo an abrupt shutdown.

RESOLUTION:
Move the Oracle processes to system.slice to prevent them from an abrupt shut down during a system reboot.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar.gz to /tmp
2. Untar sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar.gz
    # tar xf /tmp/sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar
3. Install the hotfix (note that the installation of this P-Patch causes downtime):
    # cd /tmp/hf
    # ./installSFHA621P2 [<host1> <host2>...]
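As a dry run, the extraction flow of steps 1-3 can be rehearsed with a placeholder tarball; the archive and installer names below mirror the readme, but a stub stands in for the real patch, and the vendor installer installSFHA621P2 is listed rather than actually invoked:

```shell
# Rehearse the documented untar steps with a stub archive.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a stand-in for the downloaded patch tarball.
mkdir staging
printf '#!/bin/sh\necho installer stub\n' > staging/installSFHA621P2
chmod +x staging/installSFHA621P2
tar cf sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar -C staging installSFHA621P2
gzip sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar

# Steps from the readme: untar into a working directory (here: $tmp/hf).
mkdir hf
cd hf
gunzip "$tmp/sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar.gz"
tar xf "$tmp/sfha-sles12sp1_x86_64-Patch-6.2.1.200.tar"
ls -l installSFHA621P2   # on a real host: ./installSFHA621P2 [<host1> ...]
```

On an actual cluster node, the same gunzip/tar sequence is run against the real download in /tmp, after which the extracted installSFHA621P2 script is executed.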

You can also install this patch together with the 6.2.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
You can also install this patch together with the 6.2.0 GA release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas SFHA 6.2.0 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
 # ./installer -patch_path [patch path] [host1 host2...]


REMOVING THE PATCH
------------------
# rpm -e <rpm-name>
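Removal means erasing each patched RPM listed under "Patch ID" above. A minimal sketch that derives the bare package names from those entries (only a subset of the list is shown, and the rpm -e invocation is echoed rather than executed, since it must run on an actual SFHA host):

```shell
# Derive package names from Patch ID entries, then show the erase command.
pkgs='VRTSvxfs-6.2.1.200-SLES12
VRTSvxvm-6.2.1.200-SLES12
VRTSvcs-6.2.1.100-SLES12'

for p in $pkgs; do
    name=${p%-SLES12}   # drop the OS suffix  -> e.g. VRTSvxfs-6.2.1.200
    name=${name%-*}     # drop the version    -> e.g. VRTSvxfs
    echo "rpm -e $name"
done
```

On a real system you would run the printed commands (or pass the full package-version-OS name to rpm -e) for every package in the Patch ID list, typically after stopping the affected services.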


SPECIAL INSTRUCTIONS
--------------------
For Incident 3705579, an additional patch is required from EMC to support processing SCSI commands via the request_queue mechanism on EMC PowerPath devices. Please contact EMC for patch details for a specific kernel version,
and refer to EMC incident id OPT 472862.


OTHERS
------
NONE


