sfha-rhel7.3_x86_64-Patch-6.2.1.300
Obsolete
The latest patch(es): sfha-rhel7.3_x86_64-Patch-6.2.1.400

 Basic information
Release type: Patch
Release date: 2016-12-30
OS update support: None
Technote: None
Documentation: None
Popularity: 4026 viewed
Download size: 89.12 MB
Checksum: 634862666

 Applies to one or more of the following products:
Application HA 6.2 On RHEL7 x86-64
Cluster Server 6.2 On RHEL7 x86-64
Dynamic Multi-Pathing 6.2 On RHEL7 x86-64
File System 6.2 On RHEL7 x86-64
Storage Foundation 6.2 On RHEL7 x86-64
Storage Foundation Cluster File System 6.2 On RHEL7 x86-64
Storage Foundation for Oracle RAC 6.2 On RHEL7 x86-64
Storage Foundation HA 6.2 On RHEL7 x86-64
Volume Manager 6.2 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patch:
sfha-rhel7.3_x86_64-Patch-6.2.1.400 (release date: 2017-11-15)

This patch supersedes the following patches:
vm-rhel7_x86_64-Patch-6.2.1.300 (obsolete; release date: 2017-10-10)
odm-rhel7_x86_64-Patch-6.2.1.300 (obsolete; release date: 2017-04-12)
llt-rhel7_x86_64-Patch-6.2.1.600 (obsolete; release date: 2016-12-01)
vcsea-rhel7_x86_64-Patch-6.2.1.200 (obsolete; release date: 2016-05-20)
gab-rhel7_x86_64-Patch-6.2.1.300 (obsolete; release date: 2016-05-04)
fs-rhel7_x86_64-Patch-6.2.1.100 (obsolete; release date: 2015-09-02)

This patch requires the following patch:
sfha-rhel7_x86_64-MR-6.2.1 (release date: 2015-04-24)

 Fixes the following incidents:
3752475, 3753724, 3754492, 3756002, 3759910, 3760226, 3765324, 3765998, 3769992, 3780334, 3793241, 3798437, 3802857, 3803497, 3808285, 3816222, 3817120, 3817229, 3821688, 3839293, 3850478, 3851117, 3852148, 3854788, 3863971, 3868653, 3871040, 3871124, 3871617, 3873145, 3874737, 3875807, 3875933, 3879170, 3880573, 3881334, 3881335, 3889284, 3889850, 3891789, 3893134, 3893362, 3894783, 3896150, 3896151, 3896154, 3896156, 3896160, 3896223, 3896231, 3896261, 3896267, 3896269, 3896270, 3896273, 3896277, 3896281, 3896285, 3896303, 3896304, 3896306, 3896308, 3896310, 3896311, 3896312, 3896313, 3896314, 3897764, 3898129, 3898168, 3898169, 3898296, 3901379, 3902626, 3903647, 3903657, 3904790, 3904796, 3904797, 3904800, 3904801, 3904802, 3904804, 3904805, 3904806, 3904807, 3904810, 3904811, 3904819, 3904822, 3904824, 3904825, 3904830, 3904831, 3904833, 3904834, 3904841, 3904851, 3904858, 3904859, 3904861, 3904863, 3904864, 3905056, 3905431, 3905471, 3906065, 3906148, 3906251, 3906409, 3906410, 3906411, 3906412, 3906566, 3906846, 3906961, 3907017, 3907179, 3907210, 3907350, 3907593, 3907595

 Patch ID:
VRTScavf-6.2.1.100-RHEL7
VRTSglm-6.2.1.100-RHEL7
VRTSvcsea-6.2.1.200-RHEL7
VRTSdbac-6.2.1.200-RHEL7
VRTSllt-6.2.1.700-RHEL7
VRTSgab-6.2.1.400-RHEL7
VRTSvxfen-6.2.1.300-RHEL7
VRTSamf-6.2.1.300-RHEL7
VRTSodm-6.2.1.300-RHEL7
VRTSaslapm-6.2.1.600-RHEL7
VRTSvxvm-6.2.1.300-RHEL7
VRTSvxfs-6.2.1.300-RHEL7

Readme file
                          * * * READ ME * * *
            * * * Symantec Storage Foundation HA 6.2.1 * * *
                         * * * Patch 300 * * *
                         Patch Date: 2016-12-16


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
Symantec Storage Foundation HA 6.2.1 Patch 300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTSdbac
VRTSgab
VRTSglm
VRTSllt
VRTSodm
VRTSvcsea
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Application HA 6.2
   * Symantec Cluster Server 6.2
   * Symantec Dynamic Multi-Pathing 6.2
   * Symantec File System 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2
   * Symantec Volume Manager 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-6.2.1.300-RHEL7
* 3780334 (3762580) In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when powerpath co-exists  
with DMP (Dynamic Multi-Pathing).
* 3802857 (3726110) On systems with high number of CPUs, Dynamic Multi-Pathing (DMP) devices may perform considerably slower than OS device paths.
* 3803497 (3802750) VxVM (Veritas Volume Manager) volume I/O-shipping functionality is not disabled even after the user issues the correct command to disable it.
* 3816222 (3816219) VxDMP event source daemon keeps reporting UDEV change event in syslog.
* 3839293 (3776520) Filters are not updated properly in the lvm.conf file in the VxDMP initrd (initial ramdisk) while Dynamic Multipathing (DMP) Native Support is being enabled.
* 3850478 (3850477) kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when reading or writing data against a VxVM volume with a large block size.
* 3851117 (3662392) In the Cluster Volume Manager (CVM) environment, if I/Os are getting executed 
on slave node, corruption can happen when the vxdisk resize(1M) command is 
executing on the master node.
* 3852148 (3852146) Shared DiskGroup(DG) fails to import when the "-c" and "-o noreonline" options are specified together.
* 3854788 (3783356) After Dynamic Multi-Pathing (DMP) module fails to load, dmp_idle_vector is not NULL.
* 3863971 (3736502) Memory leakage is found when transaction aborts.
* 3868653 (3866051) A driver name longer than 32 bytes may cause vxconfigd to be unable to start up.
* 3871040 (3868444) Disk header timestamp is updated even if the disk group(DG) import fails.
* 3871124 (3823283) While unencapsulating a boot disk in a SAN (Storage Area Network) environment, the Linux operating system gets stuck in GRUB after reboot.
* 3873145 (3872197) vxconfigd panics when NVME devices are attached to the system
* 3874737 (3874387) Disk header information is not logged to the syslog 
sometimes even if the disk is missing and dg import fails.
* 3875933 (3737585) "Uncorrectable write error" with IOHINT in VVR (Veritas Volume Replicator) 
environment
* 3880573 (3886153) vradmind daemon core dump occurs in a VVR primary-primary configuration 
because of assert() failure.
* 3881334 (3864063) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and Error Handler SIO.
* 3881335 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3889284 (3878153) VVR 'vradmind' daemon core dumps.
* 3889850 (3878911) QLogic driver returns an error due to an incorrect aiusize in the FC header.
* 3891789 (3873625) System panicked when pulling out FC cables on SFHA6.2.1/RHEL7.2.
* 3893134 (3864318) Memory consumption keeps increasing when reading/writing data against a VxVM volume with a large block size.
* 3893362 (3881132) VxVM commands hang following a SAN change.
* 3894783 (3628743) New BE takes too much time to start up during live upgrade on Solaris 11.2.
* 3897764 (3741003) After removing storage from one of multiple plexes in a mirrored DCO (Data Change Object) volume, the entire DCO volume is detached and the DCO object is marked with the BADLOG flag because a flag reset is missing.
* 3898129 (3790136) File system hang observed due to IOs hung in Dirty Region Logging (DRL).
* 3898168 (3739933) Allow VxVM package installation on EFI enabled Linux machines.
* 3898169 (3740730) While creating volume using vxassist CLI, dco log volume length specified at
command line was not getting honored.
* 3898296 (3767531) In a layered volume layout with an FSS configuration, when a few of the FSS hosts are rebooted, a full resync happens for non-affected disks on the master.
* 3902626 (3795739) In a split brain scenario, cluster formation takes very long time.
* 3903647 (3868934) System panic happens while deactivating the SIO (staging IO).
* 3904790 (3795788) Performance degrades when many application sessions open the same data file on the VxVM volume.
* 3904796 (3853049) The display of stats is delayed beyond the set interval for vxstat, and multiple sessions of vxstat impact the IO performance.
* 3904797 (3857120) Commands like vxdg deport which try to close a VxVM volume might hang.
* 3904800 (3860503) Poor performance of vxassist mirroring is observed on some high end servers.
* 3904801 (3686698) vxconfigd was getting hung due to a deadlock between two threads.
* 3904802 (3721565) vxconfigd hang is seen.
* 3904804 (3486861) Primary node panics when storage is removed while replication is going on with heavy 
IOs.
* 3904805 (3788644) Reuse raw device number when checking for available raw devices.
* 3904806 (3807879) User data corrupts because the backup EFI GPT disk label is written during the VxVM disk-group flush operation.
* 3904807 (3867145) When the VVR SRL occupation exceeds 90%, it is reported only in 10-percent increments.
* 3904810 (3871750) Parallel VxVM vxstat commands report abnormal disk IO statistic data.
* 3904811 (3875563) While dumping the disk header information, human readable
timestamp was not converted correctly from corresponding epoch time.
* 3904819 (3811946) When invoking "vxsnap make" command with cachesize option to create space optimized snapshot, the command succeeds but a plex I/O error message is displayed in syslog.
* 3904822 (3755209) The Veritas Dynamic Multi-pathing(VxDMP) device configured in Solaris Logical 
DOMains(LDOM) guest is disabled when an active controller of an ALUA array is 
failed.
* 3904824 (3795622) With Dynamic Multipathing (DMP) Native Support enabled, Logical Volume Manager
(LVM) global_filter is not updated properly in lvm.conf file.
* 3904825 (3859009) global_filter of lvm.conf is not updated due to some paths of LVM dmpnode are 
reused during DDL(Device Discovery Layer) discovery cycle.
* 3904830 (3840359) Some VxVM commands fail on using the localized messages.
* 3904831 (3802075) Foreign disks whose udev-defined names contain digits go into the error state after vxdisk scandisks.
* 3904833 (3729078) VVR(Veritas Volume Replication) secondary site panic occurs during patch 
installation because of flag overlap issue.
* 3904834 (3819670) When smartmove with the 'vxevac' command is run in the background by hitting the 'ctrl-z' key and the 'bg' command, the execution of 'vxevac' is terminated abruptly.
* 3904851 (3804214) VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 on the path being enabled.
* 3904858 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3904859 (3901633) vxrsync reports error during rvg sync because of incorrect volume end offset 
calculation.
* 3904861 (3904538) IO hang happens during slave node leave or master node switch because of a race between the RV (Replicate Volume) recovery SIO (Staged IO) and newly arriving IOs.
* 3904863 (3851632) Some VxVM commands fail when you use the localized messages.
* 3904864 (3769303) System panics when the Cluster Volume Manager (CVM) group is brought online.
* 3905471 (3868533) IO hang happens because of a deadlock situation.
* 3906251 (3806909) Due to a modification in licensing, the DMP keyless license was not working for standalone DMP.
* 3906566 (3907654) Storing cold data on dedicated SAN storage increases storage cost and maintenance overhead. Cold data can now be moved from local storage to cloud storage.
* 3907017 (3877571) Disk header is updated even if the dg import operation fails
* 3907593 (3660869) Enhance the Dirty region logging (DRL) dirty-ahead logging for sequential write 
workloads
* 3907595 (3907596) The vxdmpadm setattr command gives an error while setting the path attribute.
Patch ID: VRTSvxfs-6.2.1.300-RHEL7
* 3817229 (3762174) fsfreeze and vxdump commands may not work together.
* 3896150 (3833816) Read returns stale data on one node of the CFS.
* 3896151 (3827491) Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.
* 3896154 (1428611) 'vxcompress' can spew many GLM block lock messages over the 
LLT network.
* 3896156 (3633683) vxfs thread consumes high CPU while running an application that makes excessive sync() calls.
* 3896160 (3808033) When using 6.2.1 ODM on RHEL7, Oracle resource cannot be killed after forced umount via VCS.
* 3896223 (3735697) vxrepquota reports an error.
* 3896231 (3708836) fallocate causes data corruption
* 3896261 (3855726) Panic in vx_prot_unregister_all().
* 3896267 (3861271) Missing an inode clear operation when a Linux inode is being de-initialized on
SLES11.
* 3896269 (3879310) File System may get corrupted after a failed vxupgrade.
* 3896270 (3707662) Race between reorg processing and fsadm timer thread (alarm expiry) leads to panic in vx_reorg_emap.
* 3896273 (3558087) The ls -l and other commands which use the stat system call may take a long time to complete.
* 3896277 (3691633) Remove RCQ Full messages
* 3896281 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3896285 (3757609) CPU usage going high because of contention over ODM_IO_LOCK
* 3896303 (3762125) Directory size increases abnormally.
* 3896304 (3846521) "cp -p" fails if the modification time in nanoseconds has 10 digits.
* 3896306 (3790721) High cpu usage caused by vx_send_bcastgetemapmsg_remaus
* 3896308 (3695367) Unable to remove volume from multi-volume VxFS using "fsvoladm" command.
* 3896310 (3859032) System panics in vx_tflush_map() due to NULL pointer 
de-reference.
* 3896311 (3779916) vxfsconvert fails to upgrade the layout version for a vxfs file system with a large number of inodes.
* 3896312 (3811849) On cluster file system (CFS), while executing lookup() function in a directory
with Large Directory Hash (LDH), the system panics and displays an error.
* 3896313 (3817734) The message displayed when a file system mount fails directly instructed the user to run fsck with the -y|Y option.
* 3896314 (3856363) Filesystem inodes have incorrect blocks.
* 3901379 (3897793) Panic happens because of a race where the mntlock ID is cleared while the mntlock flag is still set.
* 3903657 (3857254) Assert failure because of missed flush before taking 
filesnap of the file.
* 3904841 (3901318) VxFS module failed to load on RHEL7.3.
* 3905056 (3879761) Performance issue observed due to contention on vxfs spin lock
vx_worklist_lk.
* 3906148 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3906846 (3872202) VxFS internal test hits an assert.
* 3906961 (3891801) Internal test hit debug assert.
* 3907350 (3817734) The message displayed when a file system mount fails directly instructed the user to run fsck with the -y|Y option.
Patch ID: VRTSvxfs-6.2.1.100-RHEL7
* 3753724 (3731844) umount -r option fails for vxfs 6.2.
* 3754492 (3761603) Internal assert failure because of invalid extop processing 
at the mount time.
* 3756002 (3764824) Internal cluster file system(CFS) testing hit debug assert
* 3765324 (3736398) NULL pointer dereference panic in lazy unmount.
* 3765998 (3759886) In case of nested mount, force umount of parent leaves 
stale child entry in /etc/mtab even after subsequent umount of child.
* 3769992 (3729158) Deadlock due to incorrect locking order between write advise
and dalloc flusher thread.
* 3793241 (3793240) Vxrestore command dumps a core file because of invalid Japanese strings.
* 3798437 (3812914) On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.
* 3808285 (3808284) fsdedupadm status Japanese text includes strange character.
* 3817120 (3804400) VRTS/bin/cp does not return any error when quota hard 
limit is reached and partial write is encountered.
* 3821688 (3821686) VxFS module failed to load on SLES11 SP4.
Patch ID: VRTSodm-6.2.1.300-RHEL7
* 3906065 (3757609) CPU usage going high because of contention over ODM_IO_LOCK
Patch ID: VRTSamf-6.2.1.200-RHEL7
* 3906412 (3896877) Veritas Cluster Server does not support Red Hat Enterprise Linux 7 Update 3 (RHEL7.3).
Patch ID: VRTSvxfen-6.2.1.300-RHEL7
* 3906411 (3896877) Veritas Cluster Server does not support Red Hat Enterprise Linux 7 Update 3 (RHEL7.3).
Patch ID: VRTSgab-6.2.1.400-RHEL7
* 3906410 (3896877) Veritas Cluster Server does not support Red Hat Enterprise Linux 7 Update 3 (RHEL7.3).
Patch ID: VRTSgab-6.2.1.300-RHEL7
* 3875807 (3875805) In some rare cases, if a few unicast messages are stuck in 
the Group Membership Atomic Broadcast (GAB) receive queue of a port, the port 
might receive a GAB I/O fence message.
Patch ID: VRTSllt-6.2.1.700-RHEL7
* 3906409 (3896877) Veritas Cluster Server does not support Red Hat Enterprise Linux 7 Update 3 (RHEL7.3).
* 3907179 (3907854) Node may panic during data transfer when LLT is configured over RDMA.
Patch ID: VRTSllt-6.2.1.600-RHEL7
* 3905431 (3905430) Application IO hangs in case of FSS with LLT over RDMA during heavy data transfer.
Patch ID: VRTSdbac-6.2.1.200-RHEL7
* 3907210 (3896877) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 Update 3 (RHEL7.3).
Patch ID: VRTSvcsea-6.2.1.200-RHEL7
* 3879170 (3879366) Extended support for all shells in the Oracle systemd startup script.
Patch ID: VRTSvcsea-6.2.1.100-RHEL7
* 3871617 (3871614) With HAD in user.slice, Oracle (or applications) are not shut down gracefully during system reboot.
Patch ID: VRTSglm-6.2.1.100-RHEL7
* 3752475 (3758102) In Cluster File System(CFS) on Linux, stack overflow while
creating ODM file.
Patch ID: VRTScavf-6.2.1.100-RHEL7
* 3759910 (3765928) "cfsshare config -p all" does not populate smb.conf (the Samba configuration file) properly for some global variables.
* 3760226 (3765921) On RHEL7 onwards, Pluggable Authentication Modules (PAM)-related error messages for the Samba daemon may be observed in system logs after adding a CIFS (Common Internet File System) share.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-6.2.1.300-RHEL7

* 3780334 (Tracking ID: 3762580)

SYMPTOM:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the vxfen module fails to register the SCSI-3 PR keys to EMC devices when powerpath co-exists 
with DMP (Dynamic Multi-Pathing). The following logs are printed while  setting up fencing for the cluster.

VXFEN: vxfen_reg_coord_pt: end ret = -1
vxfen_handle_local_config_done: Could not register with a majority of the
coordination points.

DESCRIPTION:
In Linux kernels greater than or equal to RHEL6.6 (e.g. RHEL7 and SLES11SP3), the interface used by DMP to send the SCSI commands to block devices does not transfer the 
data to or from the device. Therefore, the SCSI-3 PR keys do not get registered.

RESOLUTION:
The code is modified to use SCSI request_queue to send the SCSI commands to the 
underlying block device.
Additional patch is required from EMC to support processing SCSI commands via the request_queue mechanism on EMC PowerPath devices. Please contact EMC for patch details 
for a specific kernel version.
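
For illustration only (not part of the fix itself): once this patch and the
EMC-side patch are in place, key registration on the coordination disks can be
checked with the documented vxfenadm key-listing command, assuming the
standard generated device list file:
# vxfenadm -s all -f /etc/vxfentab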

* 3802857 (Tracking ID: 3726110)

SYMPTOM:
On systems with a high number of CPUs, DMP devices may perform considerably slower than OS device paths.

DESCRIPTION:
In high CPU configuration, I/O statistics related functionality in DMP takes more CPU time because DMP statistics are collected on per CPU basis. This stat collection happens in DMP I/O code path hence it reduces the I/O performance. Because of this, DMP devices perform slower than OS device paths.

RESOLUTION:
The code is modified to remove some of the stats collection functionality from the DMP I/O code path. Along with this, the following tunables need to be turned off:
1. Turn off idle LUN probing:
# vxdmpadm settune dmp_probe_idle_lun=off
2. Turn off the statistics gathering functionality:
# vxdmpadm iostat stop

Notes:
1. Please apply this patch if the system has a large number of CPUs and DMP performs considerably slower than OS device paths. This issue does not apply to normal systems.

* 3803497 (Tracking ID: 3802750)

SYMPTOM:
Once VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned on, it cannot be disabled even after the user issues the correct command to disable it.

DESCRIPTION:
VxVM (Veritas Volume Manager) volume I/O-shipping functionality is turned off by default. The following two commands can be used to turn it on and off:
	vxdg -g <dgname> set ioship=on
	vxdg -g <dgname> set ioship=off

The command to turn off I/O-shipping is not working as intended because I/O-shipping flags are not reset properly.

RESOLUTION:
The code is modified to correctly reset I/O-shipping flags when the user issues the CLI command.

* 3816222 (Tracking ID: 3816219)

SYMPTOM:
VxDMP  (Veritas Dynamic Multi-Pathing) event source daemon (vxesd) keeps 
reporting a lot of messages in syslog as below:
"vxesd: Device sd*(*/*) is changed"

DESCRIPTION:
The vxesd daemon registers with the UDEV framework to keep VxDMP up-to-date
with device status. Whenever a device changes, udev generates a change event,
which vxesd keeps reporting. VxDMP only cares about "add" and "remove" UDEV
events, so logging of UDEV "change" events can be avoided.

RESOLUTION:
The code is modified to stop logging UDEV change-event related messages in 
syslog.

* 3839293 (Tracking ID: 3776520)

SYMPTOM:
Filters are not updated properly in lvm.conf file in VxDMP initrd while DMP Native Support is being enabled. As a result, root Logical Volume 
(LV) is mounted on OS device upon reboot.

DESCRIPTION:
From LVM version 105, global_filter was introduced as part of the lvm.conf file. VxDMP updates the initrd lvm.conf file with the filters required for
DMP Native Support to function. While updating lvm.conf, VxDMP checks the filter field, but with the latest LVM versions it should check the
global_filter field. As a result, lvm.conf is not updated with the proper filters.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file in VxDMP initrd.

* 3850478 (Tracking ID: 3850477)

SYMPTOM:
kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when reading or writing data against a VxVM volume with a large block size.

DESCRIPTION:
If the incoming I/O size is too big for a disk to handle, VxVM splits it into smaller ones to move forward. VxVM then allocates memory to back up those split I/Os. Due to a code issue, the allocated space is not freed when the I/O splitting completes.

RESOLUTION:
The code is modified to free VxVM allocated memory after I/O splitting completes.

* 3851117 (Tracking ID: 3662392)

SYMPTOM:
In the CVM environment, if I/Os are getting executed on slave node, corruption 
can happen when the vxdisk resize(1M) command is executing on the master 
node.

DESCRIPTION:
During the first stage of the resize transaction, the master node re-adjusts
the disk offsets and the public/private partition device numbers. On a slave
node, the public/private partition device numbers are not adjusted properly.
Because of this, the partition starting offset is added twice, causing the
corruption. The window during which the public/private partition device
numbers are adjusted is small; corruption is observed only if I/O occurs
during this window. After the resize operation completes its execution,
no further corruption will happen.

RESOLUTION:
The code has been changed to add partition starting offset properly to an I/O 
on slave node during execution of a resize command.
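
For reference, a resize of the kind described above is typically issued on
the master node as follows (the disk group, disk name, and length are
placeholders):
# vxdisk -g <dgname> resize <diskname> length=<new-length>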

* 3852148 (Tracking ID: 3852146)

SYMPTOM:
Shared DiskGroup fails to import when "-c" and "-o noreonline" options are
specified together with the below error:

VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed:
Disk for disk group not found

DESCRIPTION:
When "-c" option is specified we update the DISKID and DGID of the disks in 
the DG. When the
information about the disks in the DG is passed to Slave node, slave node 
does not 
have the latest information since the online of the disks would not happen
because of "-o noreonline" being specified. Now since slave node does not 
have
the latest 
information, it would not be able to identify proper disks belonging to the 
DG
which leads to DG import failing with "Disk for disk group not found".

RESOLUTION:
Code changes have been done to handle the working of "-c" and "-o 
noreonline"
together.
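
A sketch of the previously failing combination, with the shared disk group
name as a placeholder:
# vxdg -s -o noreonline -c import <dgname>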

* 3854788 (Tracking ID: 3783356)

SYMPTOM:
After DMP module fails to load, dmp_idle_vector is not NULL.

DESCRIPTION:
After a DMP module load failure, DMP resources are not cleared from system
memory, so some of them retain non-NULL values. When the system retries the
load, it frees this invalid data, leading to a system panic with the error
message BAD FREE, because the data being freed is not valid at that point.

RESOLUTION:
The code is modified to clear up the DMP resources when module failure happens.

* 3863971 (Tracking ID: 3736502)

SYMPTOM:
When FMR is configured in VVR environment, 'vxsnap refresh' fails with below 
error message:
"VxVM VVR vxsnap ERROR V-5-1-10128 DCO experienced IO errors during the
operation. Re-run the operation after ensuring that DCO is accessible".
Also, multiple messages of connection/disconnection of replication 
link(rlink) are seen.

DESCRIPTION:
Internally triggered rlink (replication link) connects and disconnects cause
transaction retries. During a transaction, memory is allocated for the Data
Change Object (DCO) maps and is not cleared when the transaction aborts.
This leads to a memory leak and eventually to exhaustion of maps.

RESOLUTION:
The fix clears the allocated DCO maps when a transaction aborts.

* 3868653 (Tracking ID: 3866051)

SYMPTOM:
After a driver with a name longer than 32 bytes is loaded in the kernel,
vxconfigd cannot be restarted.

DESCRIPTION:
If the kernel has a driver with a name longer than 32 bytes and vxconfigd is
restarted, a defect in the accepted driver-name size corrupts the process
stack. Hence, vxconfigd becomes unable to start up.

RESOLUTION:
Code changes are made to fix the memory corruption issue.

* 3871040 (Tracking ID: 3868444)

SYMPTOM:
Disk header timestamp is updated even if the disk group import fails.

DESCRIPTION:
While doing a disk group import, disk header timestamps are updated during
the join operation. This makes it difficult for support to determine which
disk has the latest config copy when the import fails and a decision must be
made about whether a forced disk group import is safe.

RESOLUTION:
The old disk header timestamp and sequence number are now dumped to the
syslog, which can be consulted when deciding whether a forced disk group
import would be safe.
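
For context, a forced import (attempted only after the dumped header
information confirms it is safe) uses the standard -f flag; the disk group
name is a placeholder:
# vxdg -f import <dgname>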

* 3871124 (Tracking ID: 3823283)

SYMPTOM:
The Linux operating system gets stuck in GRUB after reboot. A manual kernel
load is required to make the operating system functional.

DESCRIPTION:
During unencapsulation of a boot disk in a SAN environment, multiple entries
corresponding to the root disk are found in the by-id device directory. As a
result, a parse command fails, leading to the creation of an improper menu
file in the grub directory. This menu file defines the device path for
loading the kernel and other modules.

RESOLUTION:
The code is modified to handle multiple entries for SAN boot disk.

* 3873145 (Tracking ID: 3872197)

SYMPTOM:
vxconfigd panics when NVME devices are attached to the system with the following stack:
panic+0xa7/0x16f
oops_end+0xe4/0x100
no_context+0xfb/0x260
__bad_area_nosemaphore+0x125/0x1e0
bad_area+0x4e/0x60
__do_page_fault+0x473/0x500
dmp_rel_shared_lock+0x20/0x30 [vxdmp]
dmp_send_scsipkt+0xd8/0x120 [vxdmp]
do_page_fault+0x3e/0xa0
page_fault+0x25/0x30
elv_may_queue+0xd/0x20
get_request+0x49/0x3c0
get_request_wait+0x2a/0x1d0
swiotlb_map_sg_attrs+0x79/0x130
blk_get_request+0x46/0xa0
dmp_kernel_scsi_ioctl+0x11d/0x3a0 [vxdmp]
dmp_scsi_ioctl+0xae/0x2a0 [vxdmp]
__wake_up+0x53/0x70
dmp_send_scsireq+0x5f/0xc0 [vxdmp]
dmp_do_scsi_gen+0xab/0x1b0 [vxdmp]
dmp_pr_check_aptpl+0xcd/0x150 [vxdmp]
dmp_make_mp_node+0x239/0x280 [vxdmp]
dmp_decode_add_disk+0x816/0x1110 [vxdmp]
dmp_decipher_instructions+0x270/0x350 [vxdmp]
dmp_process_instruction_buffer+0x1be/0x1d0 [vxdmp]
dmp_reconfigure_db+0x6e/0xf0 [vxdmp]
gendmpioctl+0x2c2/0x610 [vxdmp]
dmpioctl+0x35/0x70 [vxdmp]
dmp_ioctl+0x2b/0x50 [vxdmp]
dmp_compat_ioctl+0x56/0x70 [vxdmp]

DESCRIPTION:
When vxconfigd starts, it sends an SGIO IOCTL to deliver a SCSI command to NVMe devices using the request queue mechanism.
NVMe devices do not have an elevator set, which causes the SGIO command to fail and the system to panic.

RESOLUTION:
Code changes have been done to bypass the SGIO command for NVME devices.

* 3874737 (Tracking ID: 3874387)

SYMPTOM:
Disk header information is not logged to the syslog sometimes
even if the disk is missing and dg import fails.

DESCRIPTION:
In scenarios where the disk has a config copy enabled and an active disk
record, the disk header information was not logged even though the disk was
missing and the disk group import subsequently failed.

RESOLUTION:
The disk header information is now dumped even if the disk record is
active and attached to the disk group.

* 3875933 (Tracking ID: 3737585)

SYMPTOM:
Customer encounters following error in VVR environment:
"Uncorrectable write error"

DESCRIPTION:
The IOHINT structure allocated by VxFS is freed by VxFS once it is notified
of I/O completion from VxVM. I/Os to VxVM with VVR need two phases: the SRL
(Serial Replication Log) write and the data volume write. VxFS is notified of
completion after the SRL write and does not wait for the data volume write to
complete. If the data volume write starts after VxFS has freed the IOHINT, a
stale structure is accessed, which may cause a write IO error.

RESOLUTION:
Code changes were made to clone the IOHINT structure before writing to the data volume.

* 3880573 (Tracking ID: 3886153)

SYMPTOM:
In a VVR primary-primary configuration, if the 'vrstat' command is running, a
vradmind core dump may occur with a stack like the following:

__assert_c99 
StatsSession::sessionInitReq 
StatsSession::processOpReq 
StatsSession::processOpMsgs  
RDS::processStatsOpMsg 
DBMgr::processStatsOpMsg  
process_message

DESCRIPTION:
The vrstat command initiates a StatsSession, which needs to send an
initialization request to the secondary. On the secondary, an assert()
ensures that it is the secondary that processes the request. In a
primary-primary configuration this leads to a core dump.

RESOLUTION:
The code changes have been made to fix the issue by returning failure to the
StatsSession initiator.

* 3881334 (Tracking ID: 3864063)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in the
VVR (Veritas Volume Replicator) kernel are not cleared because of a race
between the Master Pause SIO and the Error Handler SIO, preventing the RU
(Replication Update) SIO from proceeding and thereby causing an IO hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3881335 (Tracking ID: 3867236)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
The VOL_RIFLAG_REQUEST_PENDING flag in the VVR (Veritas Volume Replicator)
kernel is not cleared because of a race between the Master Pause SIO and the
RVWRITE1 SIO, preventing the RU (Replication Update) SIO from proceeding and
thereby causing an IO hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3889284 (Tracking ID: 3878153)

SYMPTOM:
The VVR (Veritas Volume Replicator) 'vradmind' daemon core dumps.

DESCRIPTION:
Under certain circumstances, the 'vradmind' daemon may core dump while
freeing a variable allocated on the stack.

RESOLUTION:
Code change has been done to address the issue.

* 3889850 (Tracking ID: 3878911)

SYMPTOM:
The QLogic driver returns the following error due to an incorrect aiusize in the FC header:
FC_ELS_MALFORMED, cnt=c60h, size=314h

DESCRIPTION:
When creating the CT pass-through command to be sent, the ct_aiusize
specified in the request header does not conform to the FT standard. Hence,
during the sanity check of the FT header in the OS layer, an error is
reported and get_topology() fails.

RESOLUTION:
Code changes have been done so that ct_aiusize complies with the FT standard.

* 3891789 (Tracking ID: 3873625)

SYMPTOM:
The system panicked when pulling out FC cables on SFHA 6.2.1/RHEL7.2. The
stack trace of the panic is as follows:

 #8 [ffff880fecb23a90] page_fault at ffffffff8163d408
    [exception RIP: blk_rq_map_kern+31]
    RIP: ffffffff812cfd8f  RSP: ffff880fecb23b48  RFLAGS: 00010296
    RAX: ffffffffffffffed  RBX: ffff880fcf847230  RCX: 0000000000001010
    RDX: 0000000000001010  RSI: ffff880fd10d2000  RDI: ffff880fcf847230
    RBP: ffff880fecb23b70   R8: 0000000000000010   R9: ffff880fcf7a5b40
    R10: ffff88080f803b00  R11: 0000000000000001  R12: 0000000000000000
    R13: ffffffffffffffed  R14: ffffffffffffffed  R15: ffff8807fcfd5b00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880fecb23b78] dmp_kernel_scsi_ioctl at ffffffffa0a4e499 [vxdmp]
#10 [ffff880fecb23bb8] dmp_scsi_ioctl at ffffffffa0a908a9 [vxdmp]
#11 [ffff880fecb23c50] dmp_send_scsireq at ffffffffa0a91580 [vxdmp]
#12 [ffff880fecb23c70] dmp_do_scsi_gen at ffffffffa0a868b4 [vxdmp]
#13 [ffff880fecb23c98] dmp_pr_send_cmd at ffffffffa0a8eec3 [vxdmp]
#14 [ffff880fecb23d30] dmp_pr_do_read at ffffffffa0a61d30 [vxdmp]
#15 [ffff880fecb23da8] dmp_def_get_reservation at ffffffffa0a63284 [vxdmp]
#16 [ffff880fecb23db8] dmp_pgr_read at ffffffffa0a8d3c4 [vxdmp]
#17 [ffff880fecb23df0] gendmpioctl at ffffffffa0a5abf3 [vxdmp]
#18 [ffff880fecb23e18] dmpioctl at ffffffffa0a5b181 [vxdmp]
#19 [ffff880fecb23e30] dmp_ioctl at ffffffffa0a811aa [vxdmp]
#20 [ffff880fecb23e50] blkdev_ioctl at ffffffff812d8da3
#21 [ffff880fecb23ea8] block_ioctl at ffffffff81219701
#22 [ffff880fecb23eb8] do_vfs_ioctl at ffffffff811f1ef5
#23 [ffff880fecb23f30] sys_ioctl at ffffffff811f2171
#24 [ffff880fecb23f80] system_call_fastpath at ffffffff81645909

DESCRIPTION:
The Linux kernel function __get_request() may fail under memory pressure and
return a negative value. DMP does not check for this and dereferences the
value as a valid request pointer, hence the panic.

RESOLUTION:
The code is modified to check whether the value is a valid request pointer or
an error code.

* 3893134 (Tracking ID: 3864318)

SYMPTOM:
Memory consumption keeps increasing when reading/writing data against a VxVM
volume with a large block size.

DESCRIPTION:
If an incoming IO is too big for a disk to handle, VxVM splits it into
smaller ones to move forward. VxVM allocates memory to back up those split
IOs. Due to a code defect, the allocated space is not freed when the split
IOs complete.

RESOLUTION:
The code is modified to free the allocated memory after the split IOs complete.

* 3893362 (Tracking ID: 3881132)

SYMPTOM:
VxVM commands hang following a SAN change. OS device handles and DMP devices
are not cleaned up, causing a kernel core dump.

DESCRIPTION:
Before discovering new LUNs, the PQ values of the OS device handles must be
checked and the handles deleted when non-zero. The OS and VxVM device trees
should be in sync after adding/removing LUNs, but stale devices were not
being cleaned up.

RESOLUTION:
Code changes were made in the DMPDR tool to delete stale OS device handles,
fixing the issue.

* 3894783 (Tracking ID: 3628743)

SYMPTOM:
On Solaris 11.2, a new boot environment takes a long time to start up during
live upgrade. A deadlock is seen in ndi_devi_enter() when loading the VxDMP
driver; the deadlocks are caused by VxVM drivers using the Solaris
ddi_pathname_to_dev_t or ddi_hold_devi_by_path private interfaces.

DESCRIPTION:
The deadlocks are caused by VxVM drivers using the Solaris
ddi_pathname_to_dev_t or e_ddi_hold_devi_by_path private interfaces, which
are Solaris internal-use-only routines and are not multi-thread safe.
Normally this is not a problem because the various VxVM drivers do not unload
or detach; however, under certain conditions the _init routines might be
called in a way that exposes this deadlock condition.

RESOLUTION:
Code is modified to resolve deadlock.

* 3897764 (Tracking ID: 3741003)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after removing storage from
one of multiple plexes in a mirrored DCO volume, the entire DCO volume is
detached and the DCO object is marked with the BADLOG flag.

DESCRIPTION:
When the storage of one plex of a mirrored volume is removed, only that plex
should be detached, not the entire volume. While a read is in progress on a
failed DCO plex, the locally failed IO is restarted and shipped to other
nodes for retry, where it also fails, since the storage has been removed from
those nodes as well. Because a flag reset is missing, the failed IO returns
an error, and as a result the entire volume is detached and marked with the
BADLOG flag even though the IO succeeds from an alternate plex.

RESOLUTION:
Code changes have been added to handle this case and improve VxVM resiliency
in partial-storage outage scenarios.

* 3898129 (Tracking ID: 3790136)

SYMPTOM:
A file system hang can sometimes be observed due to IOs hung in the DRL.

DESCRIPTION:
IOs can hang in the DRL of a mirrored volume due to an incorrect calculation
of the outstanding IOs on the volume and the number of active IOs currently
in progress on the DRL. The outstanding-IO count on the volume can be
modified incorrectly, preventing IOs on the DRL from progressing, which in
turn results in a hang-like scenario.

RESOLUTION:
Code changes have been done to avoid incorrect modification of the
outstanding-IO count on the volume and prevent the hang.

* 3898168 (Tracking ID: 3739933)

SYMPTOM:
VxVM package installation fails when Linux-server is having EFI support enabled.

DESCRIPTION:
On Linux, the VxVM install scripts assume a GRUB bootloader in BIOS mode and
try to locate the corresponding GRUB config file. If the system has a GRUB
bootloader in EFI mode, VxVM fails to locate the required GRUB config file
and the installation is aborted.

RESOLUTION:
Code changes were added to allow VxVM installation on Linux machines with EFI
support enabled.

* 3898169 (Tracking ID: 3740730)

SYMPTOM:
While creating a volume using the vxassist CLI, the dco-log length specified
as a command-line parameter was not honored. For example:

bash # vxassist -g <dgname> make <volume-name> <volume-size> logtype=dco dcolen=<dcolog-length>
VxVM vxassist ERROR V-5-1-16707 Specified dcologlen(<specified dcolog length>) is less than minimum dcologlen(17152)

DESCRIPTION:
While creating a volume with the dcologlength attribute of the dco volume in
the vxassist CLI, the specified dcolog size is not parsed correctly. The code
internally compares the size with an incorrectly calculated size and throws
an error indicating that the specified size is insufficient. The code is
changed to check whether the user-specified value passes the
minimum-threshold value.

RESOLUTION:
The code is changed to fix the issue so that the dcolog volume length
specified by the user in the vxassist CLI is honored.
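
A minimal sketch of an invocation that the fix now honors; the minimum
dcologlen of 17152 comes from the error message above, and all names are
placeholders:
# vxassist -g <dgname> make <volume-name> <volume-size> logtype=dco dcolen=17152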

* 3898296 (Tracking ID: 3767531)

SYMPTOM:
In a layered volume layout with an FSS configuration, when a few of the FSS
hosts are rebooted, a full resync happens on the master for non-affected
disks.

DESCRIPTION:
In configurations with multiple FSS hosts and layered volumes created on
them, when the slave nodes are rebooted, a few of the sub-volumes on
non-affected disks are fully resynced on the master.

RESOLUTION:
Code changes have been made to sync only the needed part of the sub-volume.

* 3902626 (Tracking ID: 3795739)

SYMPTOM:
In a split brain scenario, cluster formation takes very long time.

DESCRIPTION:
In a split brain scenario, the surviving nodes in the cluster try to preempt the keys of the nodes leaving the cluster. If the keys have already been preempted by one of the surviving nodes, the other surviving nodes receive a Unit Attention. DMP (Dynamic Multipathing) then retries the preempt command after a delay of 1 second if it receives a Unit Attention. Cluster formation cannot complete until the PGR keys of all the leaving nodes are removed from all the disks. If the number of disks is very large, the preemption of keys takes a long time, leading to a very long cluster formation time.

RESOLUTION:
The code is modified to avoid adding delay for first couple of retries when reading PGR keys. This allows faster cluster formation with arrays that clear the Unit Attention condition sooner.

* 3903647 (Tracking ID: 3868934)

SYMPTOM:
The system panics with a stack like the one below while deactivating the
VVR (VERITAS Volume Replicator) batch write SIO:
 
panic_trap+000000 
vol_cmn_err+000194 
vol_rv_inactive+000090 
vol_rv_batch_write_start+001378
voliod_iohandle+000050 
voliod_loop+0002D0
vol_kernel_thread_init+000024

DESCRIPTION:
When VVR performs a batch write SIO and fails to reserve VVR IO memory, the
SIO is queued for restart and then deactivated. If the deactivation is
blocked for some time because the lock cannot be obtained, and during this
period the SIO is restarted because the IO memory reservation request is
satisfied, the SIO is deactivated twice and becomes corrupted. Hence the
system panic.

RESOLUTION:
Code changes have been made to remove the unnecessary SIO deactivation after
the VVR IO memory reservation fails.

* 3904790 (Tracking ID: 3795788)

SYMPTOM:
Performance degradation is seen when many application sessions open the same data file on Veritas Volume Manager (VxVM) volume.

DESCRIPTION:
This issue occurs because of lock contention. When many application sessions open the same data file on the VxVM volume, the exclusive lock is occupied on all CPUs. If there are many CPUs in the system, this process can be quite time-consuming, which leads to performance degradation at the initial start of applications.

RESOLUTION:
The code is modified to change the exclusive lock to a shared lock when the data file on the volume is opened.

* 3904796 (Tracking ID: 3853049)

SYMPTOM:
On a server with a large number of CPUs, the stats output of vxstat is
delayed beyond the set interval, and multiple sessions of vxstat impact the
IO performance.

DESCRIPTION:
vxstat acquires an exclusive lock on each CPU in order to gather the stats.
This affects the consolidation and display of stats in an environment with a
huge number of CPUs and disks: the stats output for a 1-second interval can
be delayed beyond the set interval. The lock acquisition also happens in the
IO path, which affects IO performance due to contention on these locks.

RESOLUTION:
The code is modified to remove the exclusive spin lock.

* 3904797 (Tracking ID: 3857120)

SYMPTOM:
Commands like vxdg deport which try to close a VxVM volume might hang.

DESCRIPTION:
VxVM (Veritas Volume Manager) maintains a count of the IOs currently in
progress on the volume. When two VxVM threads try to manipulate this IO count
asynchronously, a race between them can leave a misleading IO count on the
volume; that is, the volume can appear to have in-progress IO even when it
does not. Because of this invalid pending IO count, the volume cannot be
closed.

RESOLUTION:
VxVM code has been changed to avoid the race condition happening between the two
threads.

* 3904800 (Tracking ID: 3860503)

SYMPTOM:
Poor performance of vxassist mirroring is observed compared to mirroring with
the raw dd utility.

DESCRIPTION:
There is heavy lock contention on high-end servers with a large number of
CPUs, because the copy of each region acquires some unnecessary CPU locks.

RESOLUTION:
VxVM code has been changed to decrease the lock contention.

* 3904801 (Tracking ID: 3686698)

SYMPTOM:
vxconfigd was hanging due to a deadlock between two threads.

DESCRIPTION:
Two threads were waiting for the same lock, causing a deadlock between them,
which blocks all vx commands. The untimeout function does not return until a
pending callback is cancelled (set through the timeout function) or the
pending callback has completed its execution (if it has already started).
Therefore, locks acquired by the callback routine should not be held across a
call to the untimeout routine, or a deadlock may result.

Thread 1: 
    untimeout_generic()   
    untimeout()
    voldio()
    volsioctl_real()
    fop_ioctl()
    ioctl()
    syscall_trap32()
 
Thread 2:
    mutex_vector_enter()
    voldsio_timeout()
    callout_list_expire()
    callout_expire()
    callout_execute()
    taskq_thread()
    thread_start()

RESOLUTION:
Code changes have been made to call untimeout outside the lock taken by the
callback handler.

* 3904802 (Tracking ID: 3721565)

SYMPTOM:
vxconfigd hang is seen with below stack.
genunix:cv_wait_sig_swap_core
genunix:cv_wait_sig_swap 
genunix:pause
unix:syscall_trap32

DESCRIPTION:
In an FMR environment, a write is done on a source volume that has a
space-optimized (SO) snapshot. Memory is acquired first and then ILOCKs are
acquired on individual SO volumes for pushed writes. On the other hand, a
user write on the SO snapshot first acquires the ILOCK and then acquires
memory. This causes a deadlock.

RESOLUTION:
Code is modified to resolve deadlock.

* 3904804 (Tracking ID: 3486861)

SYMPTOM:
Primary node panics with below stack when storage is removed while replication is 
going on with heavy IOs.
Stack:
oops_end 
no_context 
page_fault 
vol_rv_async_done 
vol_rv_flush_loghdr_done 
voliod_iohandle 
voliod_loop

DESCRIPTION:
In a VVR environment, when a write to the data volume fails on the primary
node, error handling is initiated. As a part of it, the SRL header is
flushed. Because the primary storage has been removed, the flush fails, and
the panic is hit as invalid values are accessed while logging the error
message.

RESOLUTION:
Code is modified to resolve the issue.

* 3904805 (Tracking ID: 3788644)

SYMPTOM:
When DMP (Dynamic Multi-Pathing) native support is enabled for an Oracle ASM
environment, constantly adding and removing DMP devices causes errors like:
/etc/vx/bin/vxdmpraw enable oracle dba 775 emc0_3f84
VxVM vxdmpraw INFO V-5-2-6157
Device enabled : emc0_3f84
Error setting raw device (Invalid argument)

DESCRIPTION:
There is a limit (8192) on the maximum raw device number N (exclusive) of
/dev/raw/rawN, defined in the boot configuration file. When binding a raw
device to a dmpnode, /dev/raw/rawN is used to bind the dmpnode, and rawN is
calculated by a one-way incremental process. So even if the device is unbound
later, the "released" rawN number is not reused in the next binding. When the
rawN number grows past the maximum limit, the error is reported.

RESOLUTION:
The code has been changed to always use the smallest available rawN number
instead of calculating it by a one-way incremental process.

* 3904806 (Tracking ID: 3807879)

SYMPTOM:
Writing the backup EFI GPT disk label during the disk-group flush 
operation may cause data corruption on volumes in the disk group. The backup 
label could incorrectly get flushed to the disk public region and overwrite the 
user data with the backup disk label.

DESCRIPTION:
For EFI disks initialized under VxVM (Veritas Volume Manager), it is observed
that during a disk-group flush operation, vxconfigd (the Veritas
configuration daemon) could write the EFI GPT backup label into the volume
public region, thereby causing user data corruption. When this issue happens,
the real user data is replaced with the backup EFI disk label.

RESOLUTION:
The code is modified to prevent the writing of the EFI GPT backup 
label during the VxVM disk-group flush operation.

* 3904807 (Tracking ID: 3867145)

SYMPTOM:
When the VVR SRL occupation exceeds 90%, it is reported only in 10-percent
increments.

DESCRIPTION:
This is an enhancement. Previously, SRL occupation above 90% was reported
with 10-percent granularity; the enhancement is to report it with 1-percent
granularity.

RESOLUTION:
Changes are done to show the syslog messages with 1-percent granularity when
the SRL is more than 90% full.

* 3904810 (Tracking ID: 3871750)

SYMPTOM:
Parallel VxVM (Veritas Volume Manager) vxstat commands report abnormal disk
IO statistic data, like below:
# /usr/sbin/vxstat -g <dg name> -u k -dv -i 1 -S
...... 
dm  emc0_2480                       4294967210 4294962421           -382676k  
4294967.38 4294972.17
......

DESCRIPTION:
After the VxVM IO statistics code was optimized for huge numbers of CPUs and
disks, a race condition exists when multiple vxstat commands run to collect
disk IO statistics. It causes a disk's latest IO statistic value to become
smaller than the previous one; VxVM then treats the value as an overflow, so
abnormally large IO statistics are printed.

RESOLUTION:
Code changes are done to eliminate such race condition.

* 3904811 (Tracking ID: 3875563)

SYMPTOM:
While dumping the disk header information, human readable timestamp was
not converted correctly from corresponding epoch time.

DESCRIPTION:
When a disk group import fails because one of the disks is missing, the disk
header information is dumped to the syslog. However, the human-readable
timestamp was not converted correctly from the corresponding epoch time.

RESOLUTION:
Code changes have been done to dump the disk header information correctly.
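
As a side note, an epoch timestamp taken from a disk header dump can also be
converted manually with the standard GNU date utility, for example:
# date -d @1481875200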

* 3904819 (Tracking ID: 3811946)

SYMPTOM:
When invoking "vxsnap make" command with cachesize option to create space optimized snapshot, the command succeeds but the following error message is displayed in syslog:

kernel: VxVM vxio V-5-0-603 I/O failed.  Subcache object <subcache-name> does 
not have a valid sdid allocated by cache object <cache-name>.
kernel: VxVM vxio V-5-0-1276 error on Plex <plex-name> while writing volume 
<volume-name> offset 0 length 2048

DESCRIPTION:
When space optimized snapshot is created using "vxsnap make" command along with cachesize option, cache and subcache objects are created by the same command. During the creation of snapshot, I/Os from the volumes may be pushed onto a subcache even though the subcache ID has not yet been allocated. As a result, the I/O fails.

RESOLUTION:
The code is modified to make sure that I/Os on the subcache are 
pushed only after the subcache ID has been allocated.
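
For reference, a space-optimized snapshot of the kind described above is
typically created as follows (disk group, volume, snapshot name, and cache
size are placeholders):
# vxsnap -g <dgname> make source=<volume>/newvol=<snapvol>/cachesize=<size>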

* 3904822 (Tracking ID: 3755209)

SYMPTOM:
The Dynamic Multi-pathing(DMP) device configured in Solaris LDOM guest is 
disabled when an active controller of an ALUA array is failed.

DESCRIPTION:
DMP in a guest environment monitors the cached target port IDs of virtual
paths in the LDOM. If a controller of an ALUA array fails for some reason,
the active/primary target port ID of the array changes in the I/O domain,
leaving a stale entry in the guest. DMP in the guest wrongly interprets this
target port change and marks the path as unavailable, causing I/O on the path
to fail. As a result, the DMP device is disabled in the LDOM.

RESOLUTION:
The code is modified to not use the cached target port IDs for LDOM virtual 
disks.

* 3904824 (Tracking ID: 3795622)

SYMPTOM:
With Dynamic Multi-Pathing (DMP) Native Support enabled, LVM global_filter is
not updated properly in lvm.conf file to reject the newly added paths.

DESCRIPTION:
With DMP Native Support enabled, when new paths are added to existing LUNs, LVM
global_filter is not updated properly in lvm.conf file to reject the newly added
paths. This can lead to duplicate PV (physical volumes) found error reported by
LVM commands.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file
when new paths are added to existing disks.
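
As an illustrative sketch only (the exact entries depend on the
configuration), a correctly updated global_filter in lvm.conf accepts DMP
devices and rejects the underlying OS paths, along these lines:
global_filter = [ "a|/dev/vx/dmp/.*|", "r|.*|" ]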

* 3904825 (Tracking ID: 3859009)

SYMPTOM:
The pvs command shows duplicate PV messages because the global_filter in
lvm.conf is not updated after a fibre switch or storage controller is
rebooted.

DESCRIPTION:
When a fibre switch or storage controller reboots, some path device numbers
may get reused during the DDL (Device Discovery Layer) reconfiguration cycle;
in that case VxDMP (Veritas Dynamic Multi-Pathing) does not treat them as
newly added devices. For devices belonging to an LVM dmpnode, VxDMP does not
trigger an lvm.conf update. As a result, the global_filter in lvm.conf is not
updated. Hence the issue.

RESOLUTION:
The code has been changed to update lvm.conf correctly.

* 3904830 (Tracking ID: 3840359)

SYMPTOM:
On using localized messages, some VxVM commands fail while executing vxrootadm. The error message is as follows:  
VxVM vxmkrootmir ERROR V-5-2-3943 The Master Boot Record (MBR) could not be copied to the root disk mirror.To manually install it, follow the procedures in the VxVM Boot Disk Recovery chapter of the VxVM Trouble Shooting Guide.

DESCRIPTION:
The issue occurs when the output of the sfdisk command appears in the localized format. When the output is not translated into English language, a mismatch of messages is observed and command fails.

RESOLUTION:
The code is modified to convert the output of necessary commands in the scripts into English language before comparing it with the expected output.
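
As a general, hedged workaround sketch for locale-sensitive parsing issues of
this kind (not part of the official fix), the affected command can be run in
the POSIX locale so that its output is not localized:
# LC_ALL=C vxrootadm <subcommand>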

* 3904831 (Tracking ID: 3802075)

SYMPTOM:
Disks whose names contain digits and which are added as foreign paths using
"vxddladm addforeign" go into the ERROR state after running vxdisk scandisks.

DESCRIPTION:
When a disk is added as foreign using 'vxddladm addforeign' and device
discovery is performed using vxdisk scandisks, the whole-disk name is used,
which is not the exact name of the disk. When digits are added to the disk
name by a udev rule, the actual disk name must be used instead of the
whole-disk name.

RESOLUTION:
The code is modified to use the exact disk-device name which adds the 
foreign disk successfully.
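
For reference, a foreign path of this kind is added and rediscovered as
follows (device paths are placeholders):
# vxddladm addforeign blockpath=/dev/<blockdev> charpath=/dev/<chardev>
# vxdisk scandisks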

* 3904833 (Tracking ID: 3729078)

SYMPTOM:
In VVR environment, the panic may occur after SF(Storage Foundation) patch 
installation or uninstallation on the secondary site.

DESCRIPTION:
The VXIO kernel reset invoked by SF patch installation removes all disk group
objects that do not have the preserve flag set. Because the preserve flag
overlaps with the RVG (Replicated Volume Group) logging flag, the RVG object
is not removed, but its rlink object is removed, resulting in a system panic
when VVR starts.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904834 (Tracking ID: 3819670)

SYMPTOM:
When smartmove with the 'vxevac' command is run in the background by hitting the 'ctrl-z' key and the 'bg' command, the execution of 'vxevac' is terminated abruptly.

DESCRIPTION:
As part of "vxevac" command for data movement, VxVM submits the data as a task in the kernel, and use select() primitive on the task file descriptor to wait for task finishing events to arrive. When "ctlr-z" and bg is used to run vxevac in background, the select() returns -1 with errno EINTR. VxVM wrongly interprets it as user termination action and hence vxevac is terminated.  
Instead of terminating vxevac, the select() should be retried untill task completes.

RESOLUTION:
The code is modified so that when select() returns with errno EINTR, it checks whether vxevac task is finished. If not, the select() is retried.
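
A sketch of the scenario described above (disk group and disk names are
placeholders): the evacuation is suspended with ctrl-z and resumed in the
background, which previously terminated it abruptly.
# vxevac -g <dgname> <from-disk> <to-disk>
^Z
# bg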

* 3904851 (Tracking ID: 3804214)

SYMPTOM:
VxDMP (Dynamic Multi-Pathing) path enable operation fails after the disk label is
changed from guest LDOM. Open fails with error 5 (EIO) on the path being enabled.

Following error messages can be seen in /var/adm/messages:

<time-stamp hostname> vxdmp: [ID 808364 kern.notice] NOTICE: VxVM vxdmp V-5-3-0
dmp_open_path: Open failed with 5 for path 237/0x30
<time-stamp hostname> vxdmp: [ID 382146 kern.notice] NOTICE: VxVM vxdmp V-5-0-112
[Warn] disabled path 237/0x30 belonging to the dmpnode 307/0x38 due to open failure

DESCRIPTION:
While a disk is exported to the Solaris LDOM, Solaris OS in the control/IO domain
holds NORMAL mode open on the existing partitions of the DMP node. If the disk
partitions/label is changed from LDOM such that some of the older partitions are
removed, Solaris OS in the control/IO domain does not know about this change and
continues to hold NORMAL mode open on those deleted partitions. If a disabled DMP
path is enabled in this scenario, the NORMAL mode open of the path fails and
the path enable operation errors out. This can be worked around by detaching and
reattaching the disk to the LDOM. Due to a problem in DMP code, the stale NORMAL
mode open flag was not being reset even when the DMP disk was detached from the
LDOM. This was preventing the DMP path to be enabled even after the DMP disk was
detached from the LDOM.

RESOLUTION:
The code was fixed to reset the NORMAL mode open when the DMP disk is detached from
the LDOM. With this fix, the DMP disk has to be reattached to the LDOM only
once after the disk label changes. When the disk is reattached, it gets the
correct open mode (NORMAL/NDELAY) on the partitions that exist after the label change.

* 3904858 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop" as per design cannot stop the iostat gathering
persistently. To avoid Performance & Memory crunch related issues, it is
generally recommended to stop the iostat gathering.There is a requirement
to provide such ability to stop/start the iostat gathering persistently
in those cases.

DESCRIPTION:
Today the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
setting is not persistent. After a reboot it is lost, so the customer
also has to put the command in an init script at an appropriate place for a
persistent effect.

RESOLUTION:
Code is modified to provide a tunable "dmp_compute_iostats" which can
start/stop the iostat gathering persistently.

Notes:
Use the following command to start/stop the iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on/off
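
The current value can be verified with the companion gettune command (a usage sketch; the output format varies by release):

# vxdmpadm gettune dmp_compute_iostats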

* 3904859 (Tracking ID: 3901633)

SYMPTOM:
Lots of error messages like the following are reported while performing RVG 
sync.
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-1 ndigests 
received=0]]
VxVM VVR vxrsync ERROR V-5-52-2027 getdigest response err 
[192.168.10.101:/dev/vx/dsk/testdg/v1 <- 
192.168.10.105:/dev/vx/dsk/testdg/v1] [[ndigests sent=-2 ndigests 
received=0]]

DESCRIPTION:
While performing the read and sync of the last volume region, the volume end offset 
calculation is not correct. This may lead to reading and syncing beyond the volume 
end, which makes an internal variable become a negative number, so vxrsync reports 
an error. It can happen if the volume size is not a multiple of 512KB and the last 
512KB volume region is partly in use by VxFS.

RESOLUTION:
Code changes have been done to fix the issue.

* 3904861 (Tracking ID: 3904538)

SYMPTOM:
RV (Replicated Volume) IO hang happens during slave node leave or master node 
switch.

DESCRIPTION:
The RV IO hang happens because the SRL (Storage Replicator Log) header is updated by 
the RV recovery SIO. After a slave node leaves or the master node switches, RV 
recovery may be initiated. During RV recovery, all newly arriving IOs should be 
quiesced by setting the NEED RECOVERY flag on the RV to avoid races. Due to a code 
defect, this flag is removed by a transaction commit, resulting in a conflict 
between new IOs and the RV recovery SIO.

RESOLUTION:
Code changes have been made to fix this issue.

* 3904863 (Tracking ID: 3851632)

SYMPTOM:
When you use localized messages, some VxVM commands fail while 
mirroring the volume through vxdiskadm. The error message is similar to the 
following:
 ? [y, n, q,?] (: y) y
 /usr/lib/vxvm/voladm.d/bin/disk.repl: test: unknown operator 1

DESCRIPTION:
The issue occurs when the output of the vxdisk list command 
appears in a localized format. When the output is not translated into English, 
a message mismatch is observed and the command fails.

RESOLUTION:
The code is modified to convert the output of the necessary commands in the 
scripts into English language before comparing it with the expected output.
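
For illustration, the same technique can be applied manually by forcing a command into the C (English) locale; vxdisk list is only an example command:

# LC_ALL=C vxdisk list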

* 3904864 (Tracking ID: 3769303)

SYMPTOM:
System panics when the CVM group is brought online, with the stack below:

voldco_acm_pagein
voldco_write_pervol_maps_instant
voldco_map_update
voldco_write_pervol_maps
volfmr_copymaps_instant
vol_mv_get_attmir
vol_subvolume_get_attmir
vol_plex_get_attmir
vol_mv_fmr_precommit
vol_mv_precommit
vol_commit_iolock_objects
vol_ktrans_commit
volconfig_ioctl
ns_capable
volsioctl_real
mntput
path_put
vfs_fstatat
from_kgid_munged
read_tsc
vols_ioctl
vols_compat_ioctl
compat_sys_ioctl
sysenter_dispatch
voldco_get_accumulator

DESCRIPTION:
In the case of layered volumes, when the 'vxvol' command is triggered through the 
'vxrecover' command with the implicit '-Z vols' option, only the volumes passed 
through the CLI are started; the respective top-level volumes remain unstarted. As 
a result, the associated DCO volumes also remain unstarted. At this point, 
if any plex of a sub-volume needs to be attached back, vxrecover 
triggers it. 
With DCO version 30, the vxplex command performs some map manipulation 
as part of the plex-attach transaction. If the DCO volume is not started before 
the plex attach, the in-core DCO contents are improperly loaded, which leads to 
the panic.

RESOLUTION:
The code is modified to handle the starting of appropriate associated volumes 
of a layered volume group.

* 3905471 (Tracking ID: 3868533)

SYMPTOM:
IO hang happens when starting replication. The VxIO daemon hangs with a stack like 
the following:

vx_cfs_getemap at ffffffffa035e159 [vxfs]
vx_get_freeexts_ioctl at ffffffffa0361972 [vxfs]
vxportalunlockedkioctl at ffffffffa06ed5ab [vxportal]
vxportalkioctl at ffffffffa06ed66d [vxportal]
vol_ru_start at ffffffffa0b72366 [vxio]
voliod_iohandle at ffffffffa09f0d8d [vxio]
voliod_loop at ffffffffa09f0fe9 [vxio]

DESCRIPTION:
While performing DCM replay in case Smart Move feature is enabled, VxIO 
kernel needs to issue IOCTL to VxFS kernel to get file system free region. 
VxFS kernel needs to clone map by issuing IO to VxIO kernel to complete this 
IOCTL. Just at the time RLINK disconnection happened, so RV is serialized to 
complete the disconnection. As RV is serialized, all IOs including the 
clone map IO form VxFS is queued to rv_restartq, hence the deadlock.

RESOLUTION:
Code changes have been made to handle the deadlock situation.

* 3906251 (Tracking ID: 3806909)

SYMPTOM:
During installation of Volume Manager using CPI in keyless 
mode, the following logs were observed.
VxVM vxconfigd DEBUG  V-5-1-5736 No BASIC license
VxVM vxconfigd ERROR  V-5-1-1589 enable failed: License has expired or is 
not available for operation transactions are disabled.

DESCRIPTION:
While using CPI for a STANDALONE DMP installation in keyless mode, the Volume 
Manager daemon (vxconfigd) cannot be started due to a modification in the DMP 
NATIVE license string that is used for license verification, so this 
verification was failing.

RESOLUTION:
Appropriate code changes are incorporated to resolve the DMP keyless license 
issue so that it works with STANDALONE DMP.

* 3906566 (Tracking ID: 3907654)

SYMPTOM:
Storage of cold data on dedicated SAN storage space increases storage 
cost and maintenance.

DESCRIPTION:
The local storage capacity is consumed by cold or legacy files 
which are not consumed or processed frequently. 
These files occupy dedicated SAN storage space, which is expensive. 
Moving such files to public or private S3 cloud storage services is a better 
cost-effective solution. 
Additionally, cloud storage is elastic allowing varying service levels based 
on changing needs. 
Operational charges apply for managing objects in buckets for public cloud 
services 
using the Storage Transfer Service.

RESOLUTION:
You can now migrate or move legacy data from local SAN storage 
to a target private or public cloud.

* 3907017 (Tracking ID: 3877571)

SYMPTOM:
The disk header is updated even if the dg import operation fails.

DESCRIPTION:
When a dg import fails because of a disk failure, importing the dg
forcefully requires identifying the disks that have the latest configuration copy.
Without logs of disk header updates, it is very difficult to decide which disk to choose.

RESOLUTION:
Improved the logging to track the disk header changes.

* 3907593 (Tracking ID: 3660869)

SYMPTOM:
Enhance the DRL dirty-ahead logging for sequential write workloads.

DESCRIPTION:
With the current DRL implementation, when sequential hints are passed down by the FS 
layer above, further regions in the DRL are dirtied ahead of time so that the DRL 
write is already in place when new IO on the region arrives. With the current design, 
however, there is a flaw: the number of IOs on the DRL is similar to the number of 
IOs on the data volume, because the same region is dirtied again and again as 
part of the DRL IO. This can also cause a performance hit.

RESOLUTION:
In order to improve the performance, the number of IOs on the DRL is reduced by 
enhancing the implementation of dirty-ahead logging with DRL.

* 3907595 (Tracking ID: 3907596)

SYMPTOM:
The vxdmpadm setattr command gives the below error while setting a path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on Linux change after the system is rebooted, so the persistent attributes of a device are stored using the persistent 
hardware path. The hardware paths are stored as symbolic links in the directory /dev/vx/.dmp and are obtained from 
/dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changed to path_id_compat. Because 
the command changed, the script failed to generate the hardware paths in the /dev/vx/.dmp directory, so the persistent 
attributes were not being set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.

Patch ID: VRTSvxfs-6.2.1.300-RHEL7

* 3817229 (Tracking ID: 3762174)

SYMPTOM:
When fsfreeze is used together with vxdump, the fsfreeze command times out and the vxdump command fails.

DESCRIPTION:
The vxdump command may try to read the mount list file to get information about the corresponding mount points. This behavior requires taking a file system active level in order to synchronize with file system reinit. In the fsfreeze case, taking the active level can never succeed because the file system is already frozen, so this causes a deadlock and finally results in the fsfreeze timeout.

RESOLUTION:
Do not use the fsfreeze and vxdump commands together.

* 3896150 (Tracking ID: 3833816)

SYMPTOM:
In a CFS cluster, one node returns stale data.

DESCRIPTION:
In a 2-node CFS cluster, when node 1 opens the file and writes to
it, the locks are used with CFS_MASTERLESS flag set. But when node 2 tries to
open the file and write to it, the locks on node 1 are normalized as part of
HLOCK revoke. But after the Hlock revoke on node 1, when node 2 takes the PG
Lock grant to write, there is no PG lock revoke on node 1, so the dirty pages on
node 1 are not flushed and invalidated. The problem results in reads returning
stale data on node 1.

RESOLUTION:
The code is modified to cache the PG lock before normalizing it in
vx_hlock_putdata, so that after the normalization the cache grant is still with
node 1. When node 2 requests the PG lock, there is a revoke on node 1 which flushes
and invalidates the pages.

* 3896151 (Tracking ID: 3827491)

SYMPTOM:
Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.

DESCRIPTION:
The database table is not created correctly, which results in an error on the database query. This affects the data relocation policy, and the files are not relocated properly.

RESOLUTION:
The code is modified to fix the database table creation issue. The relocation policy based calculations are now done correctly.

* 3896154 (Tracking ID: 1428611)

SYMPTOM:
The 'vxcompress' command can cause many GLM block lock messages to be 
sent over the network. This can be observed in the 'glmstat -m' output under the 
"proxy recv" section, as shown in the example below:

bash-3.2# glmstat -m
         message     all      rw       g      pg       h     buf     oth    loop
master send:
           GRANT     194       0       0       0       2       0     192      98
          REVOKE     192       0       0       0       0       0     192      96
        subtotal     386       0       0       0       2       0     384     194

master recv:
            LOCK     193       0       0       0       2       0     191      98
         RELEASE     192       0       0       0       0       0     192      96
        subtotal     385       0       0       0       2       0     383     194

    master total     771       0       0       0       4       0     767     388

proxy send:
            LOCK      98       0       0       0       2       0      96      98
         RELEASE      96       0       0       0       0       0      96      96
      BLOCK_LOCK    2560       0       0       0       0    2560       0       0
   BLOCK_RELEASE    2560       0       0       0       0    2560       0       0
        subtotal    5314       0       0       0       2    5120     192     194

DESCRIPTION:
'vxcompress' creates placeholder inodes (called IFEMR inodes) to 
hold the compressed data of files. After the compression is finished, IFEMR 
inodes exchange their bmap with the original file and are later given to inactive 
processing. Inactive processing truncates the IFEMR extents (the original extents 
of the regular file, which is now compressed) by sending cluster-wide buffer 
invalidation requests. These invalidations need the GLM block lock. Regular file 
data need not be invalidated across the cluster, making these GLM block 
lock requests unnecessary.

RESOLUTION:
Pertinent code has been modified to skip the invalidation for the 
IFEMR inodes created during compression.
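
For illustration, the behavior can be observed on a CFS mount with a hypothetical file file1:

# vxcompress file1
# glmstat -m        <- with the fix, no large BLOCK_LOCK/BLOCK_RELEASE counts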

* 3896156 (Tracking ID: 3633683)

SYMPTOM:
"top" command output shows vxfs thread consuming high CPU while 
running an application that makes excessive sync() calls.

DESCRIPTION:
To process the sync() system call, vxfs scans through the inode cache, 
which is a costly operation. If a user application issues excessive 
sync() calls and vxfs file systems are mounted, this can make the vxfs 
sync processing thread consume high CPU.

RESOLUTION:
Combine all the sync() requests issued in the last 60 seconds into a 
single request.

* 3896160 (Tracking ID: 3808033)

SYMPTOM:
After a service group is set offline via VOM or VCS, the Oracle process is left in an unkillable state.

DESCRIPTION:
Whenever ODM issues an async request to FDD, FDD is required to do iodone processing on it, regardless of how far the request gets. The forced unmount causes FDD to take one of the early error branches, which misses the iodone routine for this particular async request. From ODM's perspective the request is submitted, but iodone will never be called. This has several bad consequences, one of which is that a user thread waiting for the request is blocked uninterruptibly forever.

RESOLUTION:
The code is modified to add iodone routine in the error handling code.

* 3896223 (Tracking ID: 3735697)

SYMPTOM:
vxrepquota reports errors like the following:
# vxrepquota -u /vx/fs1
UX:vxfs vxrepquota: ERROR: V-3-20002: Cannot access 
/dev/vx/dsk/sfsdg/fs1:ckpt1: 
No such file or directory
UX:vxfs vxrepquota: ERROR: V-3-24996: Unable to get disk layout version

DESCRIPTION:
vxrepquota checks each mount point entry in the mounted file system 
table. If any checkpoint mount point entry is present before the mount point 
specified in the vxrepquota command, vxrepquota reports errors, although the 
command can still succeed.

RESOLUTION:
Skip checkpoint mount point in the mounted file system table.

* 3896231 (Tracking ID: 3708836)

SYMPTOM:
When using fallocate together with delayed extending write, data corruption may happen.

DESCRIPTION:
When doing fallocate after EOF, vxfs grows the file by splitting the last extent of the file into two parts, then converts the part after EOF to a ZFOD extent. During this procedure, a stale file size is used to calculate the start offset of the newly zeroed extent. This may overwrite the blocks which contain the unflushed data generated by the extending write and cause data corruption.

RESOLUTION:
The code is modified to use up-to-date file size instead of the stale file size, to make sure the new ZFOD extent is created correctly.

* 3896261 (Tracking ID: 3855726)

SYMPTOM:
Panic happens in vx_prot_unregister_all(). The stack looks like this:

- vx_prot_unregister_all
- vxportalclose
- __fput
- fput
- filp_close
- sys_close
- system_call_fastpath

DESCRIPTION:
The panic is caused by a NULL fileset pointer, which is due to referencing the
fileset before it is loaded; in addition, there is a race on the fileset identity array.

RESOLUTION:
Skip the fileset if it's not loaded yet. Add the identity array lock to prevent
the possible race.

* 3896267 (Tracking ID: 3861271)

SYMPTOM:
Due to the missing inode clear action, a page can be left in a strange state.
Also, the inode is not fully quiescent, which leads to races in the inode code.
Sometimes this can cause a panic from iput_final().

DESCRIPTION:
We're missing an inode clear operation when a Linux inode is being
de-initialized on SLES11.

RESOLUTION:
Add the inode clear operation on SLES11.

* 3896269 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after a file system freeze during
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its
functional operation. It may happen that corruption is detected while the
freeze is in progress and the full fsck flag is set on the file system.
However, this does not stop vxupgrade from proceeding.
At a later stage of vxupgrade, after structures related to the new disk layout are
updated on disk,
vxfs frees up and zeroes out some of the old metadata inodes. If an error occurs
after this
point (because the full fsck flag is set), the file system needs to go back
completely to the previous version at the time of the full fsck.
Since the metadata corresponding to the previous version is already cleared, the
full fsck cannot proceed and gives the error.

RESOLUTION:
Check for the full fsck flag after freezing the file system during
vxupgrade. Also, disable the file system if an error occurs after writing the new 
metadata on disk. This forces the newly written metadata to be loaded in 
memory on the next mount.

* 3896270 (Tracking ID: 3707662)

SYMPTOM:
A race between reorg processing and the fsadm timer thread (alarm expiry) leads to a panic in vx_reorg_emap with the following stack:

vx_iunlock
vx_reorg_iunlock_rct_reorg
vx_reorg_emap
vx_extmap_reorg
vx_reorg
vx_aioctl_full
vx_aioctl_common
vx_aioctl
vx_ioctl
fop_ioctl
ioctl

DESCRIPTION:
When the timer expires (fsadm with the -t option), vx_do_close() calls vx_reorg_clear() on the local mount, which performs cleanup on the reorg rct inode. Another thread currently active in vx_reorg_emap() panics due to a NULL pointer dereference.

RESOLUTION:
When fop_close is called in the alarm handler context, the cleanup is deferred until the kernel thread performing the reorg completes its operation.

* 3896273 (Tracking ID: 3558087)

SYMPTOM:
When the stat system call is executed on a VxFS file system with the delayed
allocation feature enabled, it may take a long time or cause high CPU
consumption.

DESCRIPTION:
When the delayed allocation (dalloc) feature is turned on, the
flushing process takes much time. The process keeps the get page lock held and
needs writers to keep the inode reader-writer lock held. The stat system call may
keep waiting for the inode reader-writer lock.

RESOLUTION:
The delayed allocation code is redesigned to keep the get page lock
unlocked while flushing.

* 3896277 (Tracking ID: 3691633)

SYMPTOM:
Remove RCQ Full messages

DESCRIPTION:
Too many unnecessary RCQ Full messages were being logged in the system log.

RESOLUTION:
The RCQ Full messages are removed from the code.

* 3896281 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archiver processes are running on a clustered
file system.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation.
Fragmentation mainly happens when multiple archivers run on the
same node. The allocation pattern of the Oracle archiver processes is:

1. write header with O_SYNC
2. ftruncate-up the file to its final size ( a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all the allocations done in this manner go through
internal allocations, i.e. allocations below the file size instead of allocations
past the file size. Internal allocations are done at most 8 pages at a time. So if
multiple processes do this, they all get these 8 pages alternately
and the fs becomes very fragmented.

RESOLUTION:
Added a tunable which will allocate ZFOD extents when ftruncate
tries to increase the size of the file, instead of creating a hole. This
eliminates the allocations internal to the file size and thus the fragmentation.
Fixed the earlier implementation of the same fix, which ran into
locking issues. Also fixed the performance issue while writing from the secondary node.

* 3896285 (Tracking ID: 3757609)

SYMPTOM:
High CPU usage because of contention over ODM_IO_LOCK

DESCRIPTION:
While performing ODM IO, ODM_IO_LOCK is taken to update some of the ODM
counters, which leads to contention from multiple iodones trying to update
these counters at the same time. This results in high CPU usage.

RESOLUTION:
Code modified to remove the lock contention.

* 3896303 (Tracking ID: 3762125)

SYMPTOM:
The directory size sometimes keeps increasing even though the number of files inside it 
does not increase.

DESCRIPTION:
This only happens on CFS. A variable in the directory inode structure marks the start of 
the directory free space, but when the directory ownership changes, the variable may become 
stale, which can cause this issue.

RESOLUTION:
The code is modified to reset this free-space marking variable when there is an 
ownership change. The space search then starts from the beginning of the directory inode.

* 3896304 (Tracking ID: 3846521)

SYMPTOM:
cp -p fails with EINVAL for files with a 10-digit 
modification time. The EINVAL error is returned if the value in the tv_nsec field is 
greater than or outside the range of 0 to 999,999,999. VxFS supports the 
update in usec, but when copying in user space, the usec is converted to 
nsec. In this case, the usec crossed its upper boundary limit, i.e. 
999,999.

DESCRIPTION:
In a cluster, it is possible that the time differs across nodes.
When updating mtime, vxfs checks whether it is a cluster inode and whether the
node's mtime is newer than the current node's time; if so, it increments
tv_usec instead of changing mtime to an older time value. There is a chance
that the tv_usec counter overflows here, which results in a 10-digit
mtime.tv_nsec.

RESOLUTION:
The code is modified to reset the usec counter for mtime/atime/ctime when 
the upper boundary limit, i.e. 999999, is reached.

* 3896306 (Tracking ID: 3790721)

SYMPTOM:
High CPU usage in a vxfs thread process. The backtrace of such threads
usually looks like this:

schedule
schedule_timeout
__down
down
vx_send_bcastgetemapmsg_remaus
vx_send_bcastgetemapmsg
vx_recv_getemapmsg
vx_recvdele
vx_msg_recvreq
vx_msg_process_thread
vx_kthread_init
kernel_thread

DESCRIPTION:
The locking mechanism in vx_send_bcastgetemapmsg_process() is inefficient:
every time it is called, it performs a series of down-up
operations on a certain semaphore. This can result in a huge CPU cost when multiple
threads contend on this semaphore.

RESOLUTION:
Optimize the locking mechanism in vx_send_bcastgetemapmsg_process()
so that it performs the down-up operation on the semaphore only once.

* 3896308 (Tracking ID: 3695367)

SYMPTOM:
Unable to remove a volume from a multi-volume VxFS file system using the "fsvoladm" command; it fails with an "Invalid argument" error.

DESCRIPTION:
Volumes are not added to the in-core volume list structure correctly. Therefore, removing a volume from a multi-volume VxFS file system using "fsvoladm" fails.

RESOLUTION:
The code is modified to add volumes in the in-core volume list structure correctly.
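
For illustration, a removal sequence on a hypothetical multi-volume file system mounted at /mnt1 containing a volume vol2:

# fsvoladm list /mnt1
# fsvoladm remove /mnt1 vol2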

* 3896310 (Tracking ID: 3859032)

SYMPTOM:
System panics in vx_tflush_map() due to NULL pointer dereference.

DESCRIPTION:
When converting to VxFS using vxconvert, new blocks are allocated to 
the structural files like the smap, and these blocks can contain garbage. This is done with 
the expectation that fsck will rebuild a correct smap; but fsck missed 
distinguishing between EAUs that are fully EXPANDED and those merely ALLOCATED.
Because of this, if an allocation is done to a file whose last allocation came from such an
affected EAU, a sub-transaction is created on an EAU that is still in the
allocated state. The map buffers of such EAUs are not initialized properly in the VxFS
private buffer cache; as a result, these buffers are released back as stale
during the transaction commit. Later, if any file-system-wide sync tries to
flush the metadata, it can reference these buffer pointers and panic, because the
buffers were already released and reused.

RESOLUTION:
The code is modified in fsck to correctly set the state of the EAU on 
disk. The involved code paths are also modified to avoid doing
transactions on unexpanded EAUs.

* 3896311 (Tracking ID: 3779916)

SYMPTOM:
vxfsconvert fails to upgrade the layout version for a vxfs file system with a 
large number of inodes. The error message shows some inode discrepancy.

DESCRIPTION:
vxfsconvert walks through the ilist and converts inodes. It stores 
chunks of inodes in a buffer and processes them as a batch. The inode number 
parameter for this inode buffer is of type unsigned integer. The offset of a 
particular inode in the ilist is calculated by multiplying the inode number by the 
size of the inode structure. For large inode numbers this product of inode_number * 
inode_size can overflow the unsigned integer limit, giving a wrong offset 
within the ilist file. vxfsconvert therefore reads the wrong inode and eventually 
fails.

RESOLUTION:
The inode number parameter is defined as unsigned long to avoid 
overflow.
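
For reference, a typical invocation on the character device of an unmounted file system (the device path is a hypothetical example):

# vxfsconvert /dev/vx/rdsk/mydg/vol1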

* 3896312 (Tracking ID: 3811849)

SYMPTOM:
On cluster file system (CFS), due to a size mismatch in the cluster-wide buffers
containing hash bucket for large directory hashing (LDH), the system panics with
the following stack trace:
  
   vx_populate_bpdata()
   vx_getblk_clust()
   vx_getblk()
   vx_exh_getblk()
   vx_exh_get_bucket()
   vx_exh_lookup()
   vx_dexh_lookup()
   vx_dirscan()
   vx_dirlook()
   vx_pd_lookup()
   vx_lookup_pd()
   vx_lookup()
   
On some platforms, instead of panic, LDH corruption is reported. Full fsck
reports some meta-data inconsistencies as displayed in the following sample
messages:

fileset 999 primary-ilist inode 263 has invalid alternate directory index
        (fileset 999 attribute-ilist inode 8193), clear index? (ynq)y

DESCRIPTION:
On a highly fragmented file system with a file system block size of 1K, 2K or
4K, the bucket(s) of an LDH inode, which have a fixed size of 8K, can spread
across multiple small extents. Currently the in-core allocation for an LDH
inode's bucket happens in parallel to the on-disk allocation, which results in
small in-core buffer allocations. These small in-core allocations are
merged for the final in-memory representation of the LDH inode's bucket. On two Cluster
File System (CFS) nodes, this may result in the same LDH metadata/bucket being represented
as in-core buffers of different sizes. This may cause a system panic as LDH
inode buckets are passed around the cluster, or on-disk
corruption of the LDH inode's buckets if these buffers are flushed to disk.

RESOLUTION:
The code is modified to separate the on-disk allocation and in-core buffer
initialization in LDH code paths, so that in-core LDH bucket will always be
represented by a single 8K buffer.

* 3896313 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message containing the
direct command to clean the file system with full fsck is printed to the user.

DESCRIPTION:
When a file system with the full fsck flag set is mounted, the mount fails
and a message is printed telling the user to clean the file system with full fsck. This
message contains the direct command to run; if it is run without first collecting a file
system metasave, evidence is lost. Also, since fsck removes
the file system inconsistencies, it may lead to undesired data loss.

RESOLUTION:
A more generic message is given in the error message instead of the direct
command.
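
For reference, a full fsck is typically run as below after collecting a metasave; the device path is a hypothetical example:

# fsck -t vxfs -o full /dev/vx/rdsk/mydg/vol1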

* 3896314 (Tracking ID: 3856363)

SYMPTOM:
vxfs reports mapbad errors in the syslog, such as:
vxfs: msgcnt 15 mesg 003: V-2-3: vx_mapbad - vx_extfind - /dev/vx/dsk/vgems01/lvems01 file system free extent bitmap in au 0 marked bad.

And full fsck reports the following metadata inconsistencies:

fileset 999 primary-ilist inode 6 has invalid number of blocks (18446744073709551583)
fileset 999 primary-ilist inode 6 failed validation clear? (ynq)n
pass2 - checking directory linkage
fileset 999 directory 8192 block devid/blknum 0/393216 offset 68 references free inode
                                ino 6 remove entry? (ynq)n
fileset 999 primary-ilist inode 8192 contains invalid directory blocks
                                clear? (ynq)n
pass3 - checking reference counts
fileset 999 primary-ilist inode 5 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 5 clear? (ynq)n
fileset 999 primary-ilist inode 8194 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8194 clear? (ynq)n
fileset 999 primary-ilist inode 8195 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8195 clear? (ynq)n
pass4 - checking resource maps

DESCRIPTION:
While processing the VX_IEZEROEXT extop, VxFS frees the extent without 
setting the VX_TLOGDELFREE flag. Similarly, there are other cases where the 
VX_TLOGDELFREE flag is not set in the case of a delayed extent free; this can 
result in mapbad errors and invalid block counts.

RESOLUTION:
Since the VX_TLOGDELFREE flag needs to be set on every extent free, 
the code is modified to discard this flag and treat every extent free as a delayed 
extent free implicitly.

* 3901379 (Tracking ID: 3897793)

SYMPTOM:
A panic happens because of a race where the mntlock ID is cleared while the 
mntlock flag is still set.

DESCRIPTION:
The panic happened because of a race where the mntlock ID is NULL even though the 
mntlock flag is set. The race is between the fsadm thread and the proc mount 
show_option thread. The fsadm thread deinitializes the mntlock ID first and then 
removes the mntlock flag. If another thread races with this fsadm thread, it is 
possible to see the mntlock flag set while the mntlock ID is NULL. The fix is to 
remove the flag first and deinitialize the mntlock ID later.

RESOLUTION:
The code is modified to remove mntlock flag first.

* 3903657 (Tracking ID: 3857254)

SYMPTOM:
An assert fails because of a missed flush before taking a filesnap of the file.

DESCRIPTION:
If the delayed extending write on the file is not completed but a snap of the file is taken, the inode size is not updated correctly. This triggers an internal assert because of the incorrect inode size.

RESOLUTION:
The code is modified to flush the delayed extended write before taking filesnap.

* 3904841 (Tracking ID: 3901318)

SYMPTOM:
VxFS module failed to load on RHEL7.3.

DESCRIPTION:
RHEL7.3 is a new release and the VxFS module therefore failed to load
on it.

RESOLUTION:
Added VxFS support for RHEL7.3.
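
For illustration, the installed package and loaded module can be verified after applying the patch (a sketch; exact output varies):

# uname -r
# rpm -q VRTSvxfs
# lsmod | grep vxfs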

* 3905056 (Tracking ID: 3879761)

SYMPTOM:
Performance issue observed due to contention on vxfs spin lock vx_worklist_lk.

DESCRIPTION:
ODM IOs are performed asynchronously by queuing the ODM work items to
worker threads. More worker threads than required are woken up after
enqueuing the ODM work items, which leads to contention on the vx_worklist_lk spinlock.

RESOLUTION:
The code is modified to wake up only one worker thread if only
one work item is enqueued.

* 3906148 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every 
time the directory inode's 
ownership is switched between the nodes. When ownership of the directory inode 
comes back to the node 
which previously abdicated it, ACL permissions were not inherited 
correctly for the newly 
created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3906846 (Tracking ID: 3872202)

SYMPTOM:
VxFS internal test hits an assert.

DESCRIPTION:
In the page create case, VxFS was taking the ipglock twice in a thread,
due to which the internal VxFS test hit the assert.

RESOLUTION:
Removed the ipglock from vx_wb_dio_write().

* 3906961 (Tracking ID: 3891801)

SYMPTOM:
Internal test hit debug assert.

DESCRIPTION:
A debug assert was hit while creating a page in the shared page cache for a
ZFOD extent, which is the same as creating one for a HOLE, which VxFS does not do.

RESOLUTION:
Added a check in page creation so that shared pages are not created
for ZFOD extents.

* 3907350 (Tracking ID: 3817734)

SYMPTOM:
If file system with full fsck flag set is mounted, direct command message
is printed to the user to clean the file system with full fsck.

DESCRIPTION:
When mounting file system with full fsck flag set, mount will fail
and a message will be printed to clean the file system with full fsck. This
message contains direct command to run, which if run without collecting file
system metasave will result in evidences being lost. Also since fsck will remove
the file system inconsistencies it may lead to undesired data being lost.

RESOLUTION:
More generic message is given in error message instead of direct
command.

Patch ID: VRTSvxfs-6.2.1.100-RHEL7

* 3753724 (Tracking ID: 3731844)

SYMPTOM:
The umount -r option fails for vxfs 6.2 with the error "invalid options".

DESCRIPTION:
Until 6.2, vxfs did not have a umount helper on Linux. A helper was added in 6.2;
because of this, each call to Linux's umount is also passed to the umount
helper binary. Due to this, the -r option, which was previously handled only by the
Linux native umount, is forwarded to the umount.vxfs helper, which exits while
processing the option string because readonly remounts are not supported.

RESOLUTION:
To solve this, the umount.vxfs code is changed to not exit on the
"-r" option. Readonly remounts are still not supported, so if umount -r
actually fails and the OS umount attempts a readonly remount, the mount.vxfs
binary exits with an error. This makes Linux's default
scripts work for our fs.

* 3754492 (Tracking ID: 3761603)

SYMPTOM:
The full fsck flag is set incorrectly at mount time.

DESCRIPTION:
There is a possibility that extop processing is deferred 
during umount (for example, in case of a crash or disk failure) and kept on 
disk so that mount can process it. During mount, an inode can have multiple 
extops set. Previously, if an inode had the trim and reorg extops set during mount, 
fullfsck was incorrectly set. This patch avoids this situation.

RESOLUTION:
Code is modified to avoid such unnecessary setting of fullfsck.

* 3756002 (Tracking ID: 3764824)

SYMPTOM:
Internal cluster file system (CFS) testing hit a debug assert.

DESCRIPTION:
An internal debug assert is seen when a glm recovery occurs while one 
of the secondary nodes is doing a mount, specifically when the glm recovery happens 
between attaching the file system and mounting it.

RESOLUTION:
Code is modified to handle glm reconfiguration issue.

* 3765324 (Tracking ID: 3736398)

SYMPTOM:
Panic in the lazy unmount path during deinit of VxFS-VxVM API.

DESCRIPTION:
The panic is caused when an exiting thread drops the last reference
to a lazy-unmounted VxFS file system where that fs is the last VxFS mount in the
system. The exiting thread does the unmount, which then calls into VxVM to
de-initialize the private FS-VM API (as it is the last mounted VxFS fs).
The function to be called in VxVM is looked up via the files under /proc; this
requires opening a file, but exit processing has removed the structs the
thread needs to open a file.

RESOLUTION:
The solution is to cache the de-init function (vx_fsvm_api_deinit)
when the VxFS-VxVM API is initialized, so no function look-up is needed during
an unmount. The cached function pointer can then be called during the last
unmount bypassing the need to open the file by the exiting thread.

* 3765998 (Tracking ID: 3759886)

SYMPTOM:
In case of nested mounts, a forced umount of the parent leaves a stale child 
entry in /etc/mtab even after a subsequent umount of the child.

DESCRIPTION:
On rhel6 and sles11, in case of nested mounts, if the parent mount 
(say /mnt1) was removed/umounted forcefully, then child mounts (like /mnt1/dir) 
also get umounted, but the "/etc/mtab" entry was not updated accordingly 
for the child mount. Previously it was possible to remove such child entries from 
"/etc/mtab" by using the OS's umount binary. But from the Shikra release onwards, 
a helper umount binary was added in "/sbin/umount.vxfs". Now the OS's umount binary 
calls this helper binary, which in turn calls vxumount for the child umount; 
this fails since the path is not present. Hence the mtab entry does not get
updated and shows the child as mounted.

RESOLUTION:
Code is modified to update mnttab when ENOENT error is returned 
by umount() system call.

* 3769992 (Tracking ID: 3729158)

SYMPTOM:
fuser and other commands hang on vxfs file systems.

DESCRIPTION:
The hang is seen while two threads contend for two locks, ILOCK and
PLOCK. The writeadvise thread owns the ILOCK but is waiting for the PLOCK.
The dalloc thread owns the PLOCK and is waiting for the ILOCK.

RESOLUTION:
The correct order of locking is PLOCK followed by ILOCK.

* 3793241 (Tracking ID: 3793240)

SYMPTOM:
The vxrestore command dumps core because of invalid Japanese 
strings.

DESCRIPTION:
The vxrestore command dumps core because invalid characters 
such as % and $ are present in the Japanese strings.

RESOLUTION:
The code is modified to remove the extra characters from the 
Japanese message strings.

* 3798437 (Tracking ID: 3812914)

SYMPTOM:
On RHEL 6.5 and the latest RHEL 6.4 kernel patch, the umount(8) operation hangs if an
application watches for inode events using the inotify(7) APIs.

DESCRIPTION:
On RHEL 6.5 and the latest RHEL 6.4 kernel patch, additional OS counters were added in
the super block to track inotify watches. These new counters were not implemented
in VxFS for the RHEL6.5/RHEL6.4 kernel. Hence, while doing umount, the operation hangs
until the counter in the superblock drops to zero, which never happens since
the counters are not handled in VxFS.

RESOLUTION:
The code is modified to handle additional counters added in super block of
RHEL6.5/RHEL6.4 latest kernel.

* 3808285 (Tracking ID: 3808284)

SYMPTOM:
fsdedupadm status Japanese text includes strange character.

DESCRIPTION:
The Japanese translation of the English string "FAILED" is incorrect; it 
contains "I/O", standing for "the failed I/O". So the translation from 
English to Japanese is not correct.

RESOLUTION:
Corrected the translation for "FAILED" string in Japanese.

* 3817120 (Tracking ID: 3804400)

SYMPTOM:
VRTS/bin/cp does not return any error when the quota hard limit is 
reached and a partial write is encountered.

DESCRIPTION:
When the quota hard limit is reached, VRTS/bin/cp may encounter a 
partial write, but it may not return any error to the upper-layer application in 
such a situation.

RESOLUTION:
Adjust VRTS/bin/cp to detect the partial write caused by the quota 
limit and return a proper error to the upper-layer application.

* 3821688 (Tracking ID: 3821686)

SYMPTOM:
VxFS module might not get loaded on SLES11 SP4.

DESCRIPTION:
SLES11 SP4 is a new release and the VxFS module therefore failed to load
on it.

RESOLUTION:
Added VxFS support for SLES11 SP4.

Patch ID: VRTSodm-6.2.1.300-RHEL7

* 3906065 (Tracking ID: 3757609)

SYMPTOM:
High CPU usage because of contention over ODM_IO_LOCK

DESCRIPTION:
While performing ODM IO, ODM_IO_LOCK is taken to update some of the ODM
counters, which leads to contention from multiple iodones trying to update
these counters at the same time. This results in high CPU usage.

RESOLUTION:
Code modified to remove the lock contention.

Patch ID: VRTSamf-6.2.1.200-RHEL7

* 3906412 (Tracking ID: 3896877)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSvxfen-6.2.1.300-RHEL7

* 3906411 (Tracking ID: 3896877)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSgab-6.2.1.400-RHEL7

* 3906410 (Tracking ID: 3896877)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSgab-6.2.1.300-RHEL7

* 3875807 (Tracking ID: 3875805)

SYMPTOM:
A port on a node receives an I/O fence message when the membership for that 
port changes. This is caused by some unicast messages being stuck in the GAB 
receive queue of that port.

DESCRIPTION:
Under certain rare situation, a few unicast messages which belong to a future 
generation get stuck in the GAB receive queue of a port. This causes unintended 
consequences like preventing a RECONFIG message from being delivered to that 
port. In this case, the port receives an I/O fence message from the GAB to 
ensure the consistency in membership.

RESOLUTION:
The code is modified to ensure that the unicast messages belonging to future 
generation are never left pending in the GAB receive queue of a port.

Patch ID: VRTSllt-6.2.1.700-RHEL7

* 3906409 (Tracking ID: 3896877)

SYMPTOM:
Veritas Cluster Server does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Cluster Server does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Cluster Server support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

* 3907179 (Tracking ID: 3907854)

SYMPTOM:
A node may panic during data transfer when LLT is configured over RDMA.

DESCRIPTION:
When LLT is configured over RDMA, the cluster node may panic in case of heavy
data transfer.

RESOLUTION:
Stop the cluster. Reconfigure the cluster with LLT configured over non-RDMA links
(either Ethernet or UDP). Start the cluster. Using non-RDMA links may affect
application performance.

Patch ID: VRTSllt-6.2.1.600-RHEL7

* 3905431 (Tracking ID: 3905430)

SYMPTOM:
Application IO hangs in case of FSS with LLT over RDMA during heavy data transfer.

DESCRIPTION:
In case of FSS using LLT over RDMA, IO may sometimes hang because of race conditions
in the LLT code.

RESOLUTION:
LLT module is modified to fix the race conditions arising due to heavy load with multiple 
application threads.

Patch ID: VRTSdbac-6.2.1.200-RHEL7

* 3907210 (Tracking ID: 3896877)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSvcsea-6.2.1.200-RHEL7

* 3879170 (Tracking ID: 3879366)

SYMPTOM:
The systemd service for the Oracle resource fails to start when the user's shell 
is csh.

DESCRIPTION:
The Oracle resource fails to come online when the owner has csh set as the 
default shell. The systemd service which is started during the online entry 
point (E.P.) fails to start if the owner has csh set as the default login shell. 
After a few attempts, clean is called, which causes the Oracle resource to fault.

RESOLUTION:
The systemd startup script now supports all types of shell used by the 
Oracle user.

Patch ID: VRTSvcsea-6.2.1.100-RHEL7

* 3871617 (Tracking ID: 3871614)

SYMPTOM:
If HAD is running in the user.slice then during system reboot, Oracle (or its applications) running as a non-root user do not shut down gracefully.

DESCRIPTION:
On RHEL7 and SLES12, systemd is enabled. Therefore, all the processes running under user.slice are killed on system reboot. Since Oracle (or its applications) run under user.slice by default, a system reboot may cause Oracle to crash or undergo an abrupt shutdown.

RESOLUTION:
Move the Oracle processes to system.slice to prevent them from an abrupt shut down during a system reboot.
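
For illustration only, the underlying systemd mechanism of placing a process under system.slice (the unit name and script path are hypothetical; the agent performs the equivalent internally):

# systemd-run --slice=system.slice --unit=oracle-db /u01/app/oracle/start_db.sh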

Patch ID: VRTSglm-6.2.1.100-RHEL7

* 3752475 (Tracking ID: 3758102)

SYMPTOM:
In a Cluster File System (CFS) on Linux, a stack overflow occurs while creating
an ODM file.

DESCRIPTION:
In case of CFS, while creating an ODM file, the cluster inode needs
initialization, which takes a GLM (Group Lock Manager) lock. While taking the GLM
lock, processing within the GLM module may lead to a system panic due to stack
overflow in Linux during memory allocation.

RESOLUTION:
Modified handoff values.

Patch ID: VRTScavf-6.2.1.100-RHEL7

* 3759910 (Tracking ID: 3765928)

SYMPTOM:
"cfsshare config -p all" doesn't populating smb.conf(samba configuration
file) properly for some global variables.

DESCRIPTION:
"cfsshare config -p all" doesn't populating smb.conf properly for
some global variables. Attributes "encryptpasswords" and "smbpasswd file" should
have space in the name.

RESOLUTION:
Code is modified so that correct attribute names are present in samba
configuration file.
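
For reference, the correctly spelled Samba global parameters that the fix generates (values shown are illustrative):

[global]
    encrypt passwords = yes
    smbpasswd file = /etc/samba/smbpasswd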

* 3760226 (Tracking ID: 3765921)

SYMPTOM:
On RHEL7 onwards, Pluggable Authentication Modules (PAM) related error
messages for the Samba daemon may be observed in the system logs after adding a CIFS
share, and the CIFS share may not be accessible from Windows clients.

DESCRIPTION:
The file /etc/pam.d/samba is not available by default
on RHEL 7 onwards, and the "obey pam restrictions" attribute in the Samba
configuration file (smb.conf) is set to "yes", whereas the default is "no". This
parameter controls whether or not Samba should obey PAM's account and
session management directives. The default behavior is to use PAM for clear 
text authentication only and to ignore any account or session management. Samba
always ignores PAM for authentication in the case of "encrypt passwords = yes".
If the "obey pam restrictions" attribute is set to "no", there are no PAM
related error messages and the CIFS share is accessible from Windows 
clients.

RESOLUTION:
Set "obey pam restrictions = no" in
/opt/VRTSvcs/bin/ApplicationNone/smb.conf before doing cfsshare config and
adding the share.
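
For reference, the relevant setting in /opt/VRTSvcs/bin/ApplicationNone/smb.conf:

[global]
    obey pam restrictions = no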



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch sfha-rhel7_x86_64-Patch-6.2.1.300.tar.gz to /tmp
2. Untar sfha-rhel7_x86_64-Patch-6.2.1.300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/sfha-rhel7_x86_64-Patch-6.2.1.300.tar.gz
    # tar xf /tmp/sfha-rhel7_x86_64-Patch-6.2.1.300.tar
3. Install the hotfix (please note that the installation of this P-Patch will cause downtime):
    # pwd        <- verify that the current directory is /tmp/hf
    # ./installSFHA621P3 [<host1> <host2>...]

You can also install this patch together with 6.2.1 maintenance release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
o Before the upgrade:
 (a) Stop I/Os to all the VxVM volumes.
 (b) Unmount any file systems with VxVM volumes.
 (c) Stop applications using any VxVM volumes.
o Select the appropriate RPMs for your system, and upgrade to the new patch.
 # rpm -Uhv VRTSvxvm-6.2.1.300-GA_RHEL7.x86_64.rpm


REMOVING THE PATCH
------------------
# rpm -e <rpm-name>
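
For example, to remove the VxVM patch RPM installed by this patch (package name as listed in the installation section):
# rpm -e VRTSvxvm-6.2.1.300-GA_RHEL7.x86_64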


KNOWN ISSUES
------------
* Tracking ID: 3898173

SYMPTOM: In a CVR environment, if the primary IP on the log-owner node goes down, replication
continues (being configured on another interface) but vradmind 
fails to communicate. The vradmind commands stop working due to the following error:
Config Errors:
             Vradmind not reachable on master or logowner
 
This is because, with the current design, vradmind uses primary IP addresses to 
communicate within the cluster.

WORKAROUND: Assign multiple addresses to the host using the /etc/hosts file, then restart vradmind.
An enhancement incident has already been opened and will be considered in the
next major releases to make 'vradmind' communication highly available, so that the IP
address/interface used for its communication can fail over to another available IP
when its existing IP interface is brought down.
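
For illustration, the workaround on a typical installation (the address and hostname are hypothetical; the vras-vradmind.sh script path is the one VVR installations normally provide):

# echo "192.168.10.111 host1" >> /etc/hosts
# /etc/init.d/vras-vradmind.sh stop
# /etc/init.d/vras-vradmind.sh start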

* Tracking ID: 3904857

SYMPTOM: vxconfigd dumps core in dg_trans_start or ddl_scan_devices while running the vxdisk
scandisks command on a system having SSD devices.

WORKAROUND: If the vxconfigd core dump has already occurred, follow the steps below:

1. Remove the file /etc/vx/vxddl.exclude:
   # rm -rf /etc/vx/vxddl.exclude
2. Run vxdisk scandisks:
   # vxdisk scandisks

If there is still a need to exclude all the arrays, follow the procedure below
before excluding them from the system:

Tag all SSD devices with media type SSD using the command:
   # vxdisk -f -g <dg1> set <intel_ssd0_0> mediatype=ssd

* Tracking ID: 3907791

SYMPTOM: On RHEL 7.3, booting from mirrored root disk may not work as expected.

WORKAROUND: There is no known workaround. However, mirroring the root
disk by using a local disk instead of a SAN disk should not cause any issues.


* Tracking ID: 3852045

SYMPTOM: DB2 threads (db2sysc) hang; from the crash dump we can see:
Many of them get stuck in vx_dio_physio():
 - schedule
 - rwsem_down_failed_common
 - call_rwsem_down_read_failed
 - down_read
 - vx_dio_physio

And many of them get stuck in vx_rwlock():
 - schedule
 - rwsem_down_failed_common
 - call_rwsem_down_read_failed
 - down_read
 - vx_rwlock

WORKAROUND: No

* Tracking ID: 3896222

SYMPTOM: Slow 'ls -l' across cluster nodes.

WORKAROUND: No

* Tracking ID: 3896260

SYMPTOM: Oracle database start failure, with trace log like this:

ORA-63999: data file suffered media failure
ORA-01114: IO error writing block to file 304 (block # 722821)
ORA-01110: data file 304: <file_name>
ORA-17500: ODM err:ODM ERROR V-41-4-2-231-28 No space left on device

WORKAROUND: No

* Tracking ID: 3896276

SYMPTOM: IO service times increased with IO intensive workload on high end 
server.

WORKAROUND: No


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE