infoscale-sol11_sparc-Patch-7.3.1.200

 Basic information
Release type: Patch
Release date: 2019-06-24
OS update support: None
Technote: None
Documentation: None
Popularity: 4639 viewed
Download size: 85.74 MB
Checksum: 301765066

 Applies to one or more of the following products:
InfoScale Enterprise 7.3.1 On Solaris 11 SPARC
InfoScale Foundation 7.3.1 On Solaris 11 SPARC
InfoScale Storage 7.3.1 On Solaris 11 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vm-sol11_sparc-Patch-7.3.1.100 (obsolete), released 2018-06-06
fs-sol11_sparc-Patch-7.3.1.100 (obsolete), released 2018-04-11

 Fixes the following incidents:
3929952, 3932464, 3933810, 3933816, 3933819, 3933820, 3933823, 3933824, 3933828, 3933843, 3933874, 3933875, 3933877, 3933878, 3933880, 3933882, 3933883, 3933888, 3933890, 3933893, 3933894, 3933897, 3933899, 3933900, 3933901, 3933903, 3933904, 3933907, 3933910, 3933913, 3934841, 3935903, 3935967, 3936286, 3936428, 3937536, 3937541, 3937545, 3937549, 3937550, 3937808, 3937811, 3938392, 3942697, 3943715, 3944743, 3944902, 3947560, 3947561, 3947651, 3952304, 3952305, 3952309, 3953466, 3955886, 3957433, 3958475, 3958776, 3958860, 3959451, 3959452, 3959453, 3959455, 3959458, 3959460, 3959461, 3959463, 3959471, 3959473, 3959475, 3959476, 3959477, 3959478, 3959479, 3959480, 3960468, 3961353, 3961356, 3961358, 3961359, 3961468, 3961469, 3961480, 3964315, 3964360, 3967893, 3967894, 3967895, 3967898, 3967901, 3967903, 3968786, 3968821, 3968854, 3969218, 3969591, 3969997, 3970119, 3970370, 3971575, 3973086, 3974355, 3974652, 3978644

 Patch ID:
VRTSvxvm-7.3.1.2500
VRTSvxfs-7.3.1.2500

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.3.1 * * *
                         * * * Patch 200 * * *
                         Patch Date: 2019-05-27


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.3.1 Patch 200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Enterprise 7.3.1
   * InfoScale Foundation 7.3.1
   * InfoScale Storage 7.3.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-7.3.1.2500
* 3957433 (3941844) VVR secondary hang while deleting replication ports.
* 3969218 (3968334) Solaris 11.4: Appropriate library VRTS_vxvm_link.so is not copied on the system after OS upgrade.
* 3971575 (3968020) (Solaris 11.4) Appropriate modules are not loaded on the system after OS upgrade.
* 3973086 (3956134) System panic might occur when IO is in progress in VVR (veritas volume replicator) environment.
* 3974355 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3974652 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
Patch ID: VRTSvxvm-7.3.1.2300
* 3933888 (3868533) IO hang happens because of a deadlock situation.
* 3935967 (3935965) Updating the VxVM (Veritas Volume Manager) package on a SunOS alternate BE (Boot
Environment) is not well supported.
* 3958860 (3953681) Data corruption issue is seen when more than one plex of volume is detached.
* 3959451 (3913949) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3959452 (3931678) Memory allocation and locking optimizations during the CVM
(Cluster Volume Manager) IO shipping.
* 3959453 (3932241) VxVM (Veritas Volume Manager) creates some required files under the /tmp and
/var/tmp directories. These directories can be modified by non-root users, which may affect
Veritas Volume Manager functioning.
* 3959455 (3932496) In an FSS environment, volume creation might fail on the 
SSD devices if vxconfigd was earlier restarted.
* 3959458 (3936535) Poor performance due to frequent cache drops.
* 3959460 (3942890) IO hang as DRL flush gets into infinite loop.
* 3959461 (3946350) kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when the VVR IO
size is more than 256K.
* 3959463 (3954787) Data corruption may occur in GCO along with FSS environment on RHEL 7.5 Operating system.
* 3959471 (3932356) vxconfigd dumping core while importing DG
* 3959473 (3945115) VxVM (Veritas Volume Manager) vxassist relayout command fails for volumes with 
RAID layout.
* 3959475 (3950384) In a scenario where volume encryption at rest is enabled, data corruption may
occur if the file system size exceeds 1TB.
* 3959476 (3950675) vxdg import appears to hang forever
* 3959477 (3953845) IO hang can be experienced when there is memory pressure situation because of "vxencryptd".
* 3959478 (3956027) System panicked while removing disks from disk group because of race condition between IO stats and disk removal code.
* 3959479 (3956727) In SOLARIS DDL discovery when SCSI ioctl fails, direct disk IO on device can lead to high memory consumption and vxconfigd hangs.
* 3959480 (3957227) Disk group import succeeded, but with error message. This may cause confusion.
* 3961353 (3950199) System may panic during DMP (Dynamic Multipathing) path restoration.
* 3961356 (3953481) A stale entry of the old disk is left under /dev/[r]dsk even after replacing it.
* 3961358 (3955101) Panic observed in GCO environment (cluster to cluster replication) during replication.
* 3961359 (3955725) Utility to clear the "failio" flag on disks after storage connectivity is restored.
* 3961468 (3926067) vxassist relayout/convert commands may fail in a Campus Cluster environment.
* 3961469 (3948140) System panic can occur if size of RTPG (Report Target Port Groups) data returned
by underlying array is greater than 255.
* 3961480 (3957549) Server panic while tracing an event because of a missing NULL pointer check.
* 3964315 (3952042) vxdmp iostat memory allocation might cause memory fragmentation and pagecache drop.
* 3964360 (3964359) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3967893 (3966872) Deporting and renaming a clone DG changes the name of both the clone DG and the source DG.
* 3967895 (3877431) System panic after filesystem expansion.
* 3967898 (3930914) Master node panic occurs while sending a response message to the slave node.
* 3968854 (3964779) Changes to support Solaris 11.4 with Volume Manager
* 3969591 (3964337) Partition size getting set to default after running vxdisk scandisks.
* 3969997 (3964359) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3970119 (3943952) Rolling upgrade to Infoscale 7.4 and above is broken.
* 3970370 (3970368) Solaris 11.4: Error messages observed from the dmpdr -o refresh utility during DMP+DR operations.
Patch ID: VRTSvxvm-7.3.1.100
* 3932464 (3926976) Frequent loss of VxVM functionality due to vxconfigd unable to validate license.
* 3933874 (3852146) Shared DiskGroup(DG) fails to import when "-c" and "-o noreonline" options are
specified together.
* 3933875 (3872585) System panics with storage key exception.
* 3933877 (3914789) System may panic when reclaiming on secondary in VVR environment.
* 3933878 (3918408) Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.
* 3933880 (3864063) Application I/O hangs because of a race between the Master Pause SIO (Staging
I/O) and the Error Handler SIO.
* 3933882 (3865721) Vxconfigd may hang while pausing the replication in CVR(cluster Veritas Volume 
Replicator) environment.
* 3933883 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3933890 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3933893 (3890602) The cfgadm OS command hangs after reboot when hundreds of devices are under
DMP's (Dynamic Multi-Pathing) control.
* 3933894 (3893150) VxDMP vxdmpadm native ls command sometimes doesn't report imported disks' 
pool name
* 3933897 (3907618) vxdisk resize leads to data corruption on filesystem
* 3933899 (3910675) Disks directly attached to the system cannot be exported in FSS environment
* 3933900 (3915523) Local disk from another node belonging to a private DG (diskgroup) is exported to the
node when a private DG is imported on the current node.
* 3933901 (3915953) Enabling dmp_native_support takes a long time to complete.
* 3933903 (3918356) zpools are imported automatically when DMP(Dynamic Multipathing) native support is set to on which may lead to zpool corruption.
* 3933904 (3921668) vxrecover command with -m option fails when executed on the slave
nodes.
* 3933907 (3873123) If the disk with CDS EFI label is used as remote
disk on the cluster node, restarting the vxconfigd
daemon on that particular node causes vxconfigd
to go into disabled state
* 3933910 (3910228) Registration of GAB(Global Atomic Broadcast) port u fails on slave nodes after 
multiple new devices are added to the system.
* 3933913 (3905030) System hang when installing/uninstalling VxVM (Veritas Volume Manager).
* 3936428 (3932714) OS panicked while performing IO on dmpnode.
* 3937541 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED on the device instead of using
exclude/include commands
* 3937545 (3932246) vxrelayout operation fails to complete.
* 3937549 (3934910) DRL map leaks during snapshot creation/removal cycle with dg reimport.
* 3937550 (3935232) Replication and IO hang during master takeover because of racing between log 
owner change and master switch.
* 3937808 (3931936) VxVM (Veritas Volume Manager) command hang on master node after restarting
slave node.
* 3937811 (3935974) When client process shuts down abruptly or resets connection during 
communication with the vxrsyncd daemon, it may terminate
vxrsyncd daemon.
* 3938392 (3909630) OS Panic happens while registering DMP(Dynamic Multi Pathing) statistic
information.
* 3944743 (3945411) System wasn't able to boot after enabling DMP native support for ZFS boot 
devices
Patch ID: VRTSvxfs-7.3.1.2500
* 3978644 (3978615) VxFS filesystem is not getting mounted after OS upgrade and first reboot
Patch ID: VRTSvxfs-7.3.1.2300
* 3929952 (3929854) Enabling event notification support on CFS for Weblogic watchService on 
SOLARIS platform
* 3933816 (3902600) Contention observed on vx_worklist_lk lock in cluster 
mounted file system with ODM
* 3943715 (3944884) ZFOD extents shouldn't be pushed on clones in case of logged 
writes.
* 3944902 (3944901) File system unmount operation might hang waiting for client locks when restart and scope-leave APIs collide.
* 3947560 (3947421) DLV upgrade operation fails while upgrading Filesystem from DLV 9 to DLV 10.
* 3947561 (3947433) While adding a volume (part of vset) in already mounted filesystem, fsvoladm
displays error.
* 3947651 (3947648) Mistuning of vxfs_ninode and vx_bc_bufhwm to very small 
value.
* 3952304 (3925281) Hexdump the incore inode data and piggyback data when inode 
revalidation fails.
* 3952305 (3939996) In CFS environment, GLM performance enhancement for gp_gen_lock 
bottleneck/contention.
* 3952309 (3941942) Unable to handle kernel NULL pointer dereference while freeing fiostats.
* 3953466 (3953464) On Solaris, heavy lock contention on the pl_msgq_lock lock on multicore CPUs.
* 3955886 (3955766) CFS hang during extent allocation.
* 3958475 (3958461) Man page changes to support updated DLVs
* 3958776 (3958759) The fsadm "-i" option cannot be used with VxFS 7.3.1 in a Solaris environment.
* 3960468 (3957092) System panic in spin_lock_irqsave triggered through splunkd.
* 3967894 (3932163) Temporary files are being created in /tmp
* 3967901 (3932804) Temporary files are being created in /tmp
* 3967903 (3932845) Temporary files are being created in /tmp
* 3968786 (3968785) VxFS module failed to load on Solaris 11.4.
* 3968821 (3968885) The fsadm utility might hit userspace core
Patch ID: VRTSvxfs-7.3.1.100
* 3933810 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3933819 (3879310) The file system may get corrupted after a failed vxupgrade.
* 3933820 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3933823 (3904794) Extending a qio file fails with an EINVAL error if the reservation block is not set.
* 3933824 (3908785) System panic observed because of null page address in writeback 
structure in case of 
kswapd process.
* 3933828 (3921152) Performance drop caused by vx_dalloc_flush().
* 3933843 (3926972) A recovery event can result in a cluster wide hang.
* 3934841 (3930267) Deadlock between fsq flush threads and writer threads.
* 3935903 (3933763) Oracle Hang in VxFS.
* 3936286 (3936285) The fscdsconv command may fail the conversion for disk layout version (DLV) 12 and above.
* 3937536 (3940516) File resize thread loops infinitely for file resize operation crossing 32 bit
boundary.
* 3942697 (3940846) vxupgrade fails while upgrading the filesystem from disk
layout version(DLV) 9 to DLV 10


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-7.3.1.2500

* 3957433 (Tracking ID: 3941844)

SYMPTOM:
The VVR secondary node may hang while deleting VVR replication ports, with the following stack:

#4 [ffff883f1f297cd8] vol_rp_delete_port at ffffffffc170bf59 [vxio]
#5 [ffff883f1f297ce8] vol_rv_replica_reconfigure at ffffffffc1768ab3 [vxio]
#6 [ffff883f1f297d98] vol_rv_error_handle at ffffffffc1775394 [vxio]
#7 [ffff883f1f297dd0] vol_rv_errorhandler_callback at ffffffffc1775418 [vxio]
#8 [ffff883f1f297df0] vol_klog_start at ffffffffc16434cd [vxio]
#9 [ffff883f1f297e48] voliod_iohandle at ffffffffc15a1f0a [vxio]
#10 [ffff883f1f297e80] voliod_loop at ffffffffc15a2110 [vxio]

DESCRIPTION:
During VVR connection establishment on the secondary side, the pt_connecting flag is set until the VVR port connection is fully done. In some cases, the port connection server thread may enter the abort process without clearing the flag. The port deletion thread can then run into an endless loop because the flag remains set, which results in the system hang.

RESOLUTION:
The code is modified to clear the pt_connecting flag before calling the abort process.

* 3969218 (Tracking ID: 3968334)

SYMPTOM:
If the OS of the system is upgraded from 11.3 to 11.4, the newer library pertaining to Solaris 11.4 is not copied on reboot.

DESCRIPTION:
In Solaris 11.4, dual VRTS_vxvm_link.so library support for Solaris was introduced. On Solaris 11, the work of the postinstall script is done by the vxvm-configure script. The script runs only once and is then skipped because the .vxvm-configured file is present. If the OS of the system is upgraded from 11.3 to 11.4, the newer library pertaining to Solaris 11.4 should be copied on reboot. But since the check for the file is in place, copying of the library is skipped on boot.

RESOLUTION:
Changes have been made to verify the cksum of the library and then decide whether the new module needs to be replaced, even if the .vxvm-configured file is present.

* 3971575 (Tracking ID: 3968020)

SYMPTOM:
If the OS of the system is upgraded from 11.3 to 11.4, the newer modules fail to load.

DESCRIPTION:
If the OS of the system is upgraded from 11.3 to 11.4, the newer modules pertaining to Solaris 11.4 should be loaded on reboot. But since the check for the .vxvm-configured file is in place, loading of the modules is skipped on boot.

RESOLUTION:
Changes have been made to verify the cksum of the modules and then decide whether the new modules need to be replaced, even if the .vxvm-configured file is present.
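
Note: as an informal check (not part of the patch procedure itself), the standard Solaris
modinfo command can be used after the post-upgrade reboot to confirm that the VxVM kernel
modules are loaded; vxio, vxspec and vxdmp are the module names assumed in this example:
# modinfo | egrep 'vxio|vxspec|vxdmp'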

* 3973086 (Tracking ID: 3956134)

SYMPTOM:
System panic might occur when IO is in progress in VVR (veritas volume replicator) environment with below stack:

page_fault()
voliomem_grab_special()
volrv_seclog_wsio_start()
voliod_iohandle()
voliod_loop()
kthread()
ret_from_fork()

DESCRIPTION:
In a memory crunch scenario, the memory reservation for an SIO (staged IO) in a VVR configuration might fail in some cases. If this happens, the SIO is retried later when memory becomes available again, but while doing so, some fields of the SIO are passed NULL values, which leads to a panic in the VVR code.

RESOLUTION:
Code changes have been done to pass proper values to the IO when it is retried in a VVR environment.

* 3974355 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users,
which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions to all users, which is a
security hole. 
The files are created with default rw-rw-rw- (666) permission because the umask
is set to 0 while creating these files.

RESOLUTION:
Changed umask to 022 while creating these files and fixed an incorrect open
system call. Log files will now have rw-r--r--(644) permissions.
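
Note: as an informal check (not part of the fix itself), the resulting permissions can be
reviewed with the standard ls command:
# ls -l /etc/vx/log/vxloggerd.log /var/adm/vx/logger.txt /var/adm/vx/kmsg.log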

* 3974652 (Tracking ID: 3899568)

SYMPTOM:
"vxdmpadm iostat stop" as per design cannot stop the iostat gathering
persistently. To avoid Performance & Memory crunch related issues, it is
generally recommended to stop the iostat gathering.There is a requirement
to provide such ability to stop/start the iostat gathering persistently
in those cases.

DESCRIPTION:
Today the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this is not a
persistent setting. It is lost after a reboot, so the customer also has to put the command in
init scripts at an appropriate place for a persistent effect.

RESOLUTION:
Code is modified to provide a  tunable "dmp_compute_iostats" which can
start/stop the iostat gathering persistently.

Notes:
Use the following command to start or stop the iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=on (or off)
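
The current value of the tunable can be displayed with vxdmpadm gettune (an informal example):
# vxdmpadm gettune dmp_compute_iostats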

Patch ID: VRTSvxvm-7.3.1.2300

* 3933888 (Tracking ID: 3868533)

SYMPTOM:
IO hang happens when starting replication. The VXIO daemon hangs with a stack like the
following:

vx_cfs_getemap at ffffffffa035e159 [vxfs]
vx_get_freeexts_ioctl at ffffffffa0361972 [vxfs]
vxportalunlockedkioctl at ffffffffa06ed5ab [vxportal]
vxportalkioctl at ffffffffa06ed66d [vxportal]
vol_ru_start at ffffffffa0b72366 [vxio]
voliod_iohandle at ffffffffa09f0d8d [vxio]
voliod_loop at ffffffffa09f0fe9 [vxio]

DESCRIPTION:
While performing DCM replay with the SmartMove feature enabled, the VxIO kernel needs to
issue an IOCTL to the VxFS kernel to get the file system free region. The VxFS kernel needs
to clone the map by issuing IO to the VxIO kernel to complete this IOCTL. If an RLINK
disconnection happens just at that time, the RV is serialized to complete the disconnection.
As the RV is serialized, all IOs, including the clone map IO from VxFS, are queued to
rv_restartq, hence the deadlock.

RESOLUTION:
Code changes have been made to handle the deadlock situation.

* 3935967 (Tracking ID: 3935965)

SYMPTOM:
After updating VxVM package on alternate BE, some VxVM binaries aren't 
updated.

DESCRIPTION:
VxVM does not support updating the package on an alternate BE very well. In the post-install
script, SunOS version specific binaries should be copied relative to the installation root
directory defined by the PKG_INSTALL_ROOT environment variable, but this variable is defined
with the fixed value "/", hence these binaries are not copied.

RESOLUTION:
Code changes have been made to post-install script to handle situation of 
installation on alternate BE.

* 3958860 (Tracking ID: 3953681)

SYMPTOM:
Data corruption issue is seen when more than one plex of volume is detached.

DESCRIPTION:
When a plex of volume gets detached, DETACH map gets enabled in the DCO (Data Change Object). The incoming IO's are tracked in DRL (Dirty Region Log) and then asynchronously copied to DETACH map for tracking.
If one more plex gets detached then it might happen that some of the new incoming regions are missed in the DETACH map of the previously detached plex.
This leads to corruption when the disk comes back and plex resync happens using corrupted DETACH map.

RESOLUTION:
Code changes are done to correctly track the IO's in the DETACH map of previously detached plex and avoid corruption.

* 3959451 (Tracking ID: 3913949)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB IDs of the remaining DA and DM records
should be incremented. Unfortunately, for some reason only the SSB ID of the DA
record is incremented, while the SSB ID of the DM record is NOT updated.
One probable reason may be that the disks get detached before the DM records
are updated.

RESOLUTION:
A work-around option is provided to bypass the SSB checks while importing the DG: if a false
split brain happens, the user can import the DG with the 'vxdg -o overridessb import <dgname>'
command.
For using '-o overridessb', one should confirm that all DA records of the DG are available in
ENABLED state and are differing with the DM records against SSB by 1.
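
Note: before using '-o overridessb', the disk records can be reviewed informally as below
(the exact fields shown in the detailed listing may vary by release; <daname> and <dgname>
are placeholders):
# vxdisk -o alldgs list
# vxdisk list <daname>
# vxdg -o overridessb import <dgname>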

* 3959452 (Tracking ID: 3931678)

SYMPTOM:
There was a performance issue while shipping IO to the remote disks due to non-cached
memory allocation and the redundant locking.

DESCRIPTION:
There was redundant locking while checking if the flow control is
enabled by GAB during IO shipping. The redundant locking is optimized.
Additionally, the memory allocation during IO shipping is optimized.

RESOLUTION:
Changes are done in VxVM code to optimize the memory allocation and reduce the
redundant locking to improve the performance.

* 3959453 (Tracking ID: 3932241)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. 
The non-root users have access to these folders, and they may accidently modify,
move or delete those files.
Such actions may interfere with the normal functioning of the Veritas Volume
Manager.

RESOLUTION:
This hotfix addresses the issue by moving the required Veritas Volume Manager files to a
secure location.

* 3959455 (Tracking ID: 3932496)

SYMPTOM:
In an FSS environment, volume creation might fail on SSD devices if 
vxconfigd was earlier restarted on the master node.

DESCRIPTION:
In an FSS environment, if a shared disk group is created using 
SSD devices and vxconfigd is restarted, then the volume creation might fail. 
The problem is because the mediatype attribute of the disk was NOT propagated 
from kernel to vxconfigd while restarting the vxconfigd daemon.

RESOLUTION:
Changes are done in VxVM code to correctly propagate mediatype for remote 
devices during vxconfigd startup.

* 3959458 (Tracking ID: 3936535)

SYMPTOM:
The IO performance is poor due to frequent cache drops on snapshots 
configured system.

DESCRIPTION:
On a system with VxVM snapshots configured, DCO map updates happen along with the ongoing
IO and can allocate lots of chunks of page memory, which triggers kswapd to swap the cache
memory out, so cache drops are seen.

RESOLUTION:
Code changes are done to allocate big size memory for DCO map update without 
triggering memory swap out.

* 3959460 (Tracking ID: 3942890)

SYMPTOM:
When a Data Change Object (DCO) is configured, IO hang may happen under heavy IO load
combined with a slow Storage Replicator Log (SRL) flush.

DESCRIPTION:
Application IO needs to wait for the DRL flush to complete before it can proceed. Due to a
defect in the DRL code, the DRL flush could not proceed when there was a large amount of IO
that exceeded the available DRL chunks, hence the IO hang.

RESOLUTION:
Code changes have been made to fix the issue.

* 3959461 (Tracking ID: 3946350)

SYMPTOM:
kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when the Veritas
Volume Replicator (VVR) IO size is more than 256K.

DESCRIPTION:
In case of VVR, if the I/O size is more than 256K, the IO is broken into child IOs. Due to
a code defect, the allocated space does not get freed when the split IOs are completed.

RESOLUTION:
The code is modified to free the VxVM allocated memory after the split IOs are completed.

* 3959463 (Tracking ID: 3954787)

SYMPTOM:
On a RHEL 7.5 FSS environment with GCO configured having NVMe devices and Infiniband network, data corruption might occur when sending the IO from Master to slave node.

DESCRIPTION:
In the recent RHEL 7.5 release, Linux stopped allowing IO on an underlying NVMe device that has gaps between BIO vectors. In case of VVR, an SRL header of 3 blocks is added to the BIO. When the BIO is sent through LLT to the other node, the LLT limitation of 32 fragments can lead to misalignment of the BIO vectors. When this unaligned BIO is sent to the underlying NVMe device, the last 3 blocks of the BIO are skipped and not written to the disk on the slave node. This results in incomplete data written on the slave node, which leads to data corruption.

RESOLUTION:
Code changes have been done to handle this case and send the BIO aligned to the underlying NVMe device.

* 3959471 (Tracking ID: 3932356)

SYMPTOM:
In a two node cluster vxconfigd dumps core while importing the DG -

 dapriv_da_alloc ()
 in setup_remote_disks ()
 in volasym_remote_getrecs ()
 req_dg_import ()
 vold_process_request ()
 start_thread () from /lib64/libpthread.so.0
 from /lib64/libc.so.6

DESCRIPTION:
The vxconfigd is dumping core due to address alignment issue.

RESOLUTION:
The alignment issue is fixed.

* 3959473 (Tracking ID: 3945115)

SYMPTOM:
VxVM vxassist relayout command fails for volumes with RAID layout with the 
following message:
VxVM vxassist ERROR V-5-1-2344 Cannot update volume <vol-name>
VxVM vxassist ERROR V-5-1-4037 Relayout operation aborted. (7)

DESCRIPTION:
During relayout operation, the target volume inherits the attributes from 
original volume. One of those attributes is read policy. In case if the layout 
of original volume is RAID, it will set RAID read policy. RAID read policy 
expects the target volume to have appropriate log required for RAID policy. 
Since the target volume is of different layout, it does not have the log 
present and hence relayout operation fails.

RESOLUTION:
Code changes have been made to set the read policy to SELECT for target 
volumes rather than inheriting it from original volume in case original volume 
is of RAID layout.

* 3959475 (Tracking ID: 3950384)

SYMPTOM:
In a scenario where volume data encryption at rest is enabled, data corruption
may occur if the file system size exceeds 1TB and the data is located in a file
extent which has an extent size bigger than 256KB.

DESCRIPTION:
In a scenario where data encryption at rest is enabled, data corruption may
occur when both the following cases are satisfied:
- File system size is over 1TB
- The data is located in a file extent which has an extent size bigger than 256KB
This issue occurs due to a bug which causes an integer overflow for the offset.

RESOLUTION:
As a part of this fix, appropriate code changes have been made to improve data
encryption behavior such that the data corruption does not occur.

* 3959476 (Tracking ID: 3950675)

SYMPTOM:
The following command is not progressing and appears stuck.

# vxdg import <dg-name>

DESCRIPTION:
The DG import command is sometimes found to be non-progressing on a backup system.
Analysis of the situation has shown that devices belonging to the DG report "devid mismatch"
more than once because graceful DR steps were not followed. Erroneous processing of this
situation resulted in IOs not being allowed on such devices, leading to the DG import hang.

RESOLUTION:
The code processing the "devid mismatch" is rectified.

* 3959477 (Tracking ID: 3953845)

SYMPTOM:
IO hang can be experienced when there is memory pressure situation because of "vxencryptd".

DESCRIPTION:
For large size IO, VxVM(Veritas Volume Manager)  tries to acquire contiguous pages in memory for some of its internal data structures. In heavy memory pressure scenarios, there can be a possibility that contiguous pages are not available. In such case it waits till the required pages are available for allocation and does not process the request further. This causes IO hang kind of situation where IO can't progress further or progresses very slowly.

RESOLUTION:
Code changes are done to avoid the IO hang situation.

* 3959478 (Tracking ID: 3956027)

SYMPTOM:
System panicked while removing disks from a disk group, with a stack like the following:

[0000F4C4]___memmove64+0000C4 ()
[077ED5FC]vol_get_one_io_stat+00029C ()
[077ED8FC]vol_get_io_stats+00009C ()
[077F1658]volinfo_ioctl+0002B8 ()
[07809954]volsioctl_real+0004B4 ()
[079014CC]volsioctl+00004C ()
[07900C40]vols_ioctl+000120 ()
[00605730]rdevioctl+0000B0 ()
[008012F4]spec_ioctl+000074 ()
[0068FE7C]vnop_ioctl+00005C ()
[0069A5EC]vno_ioctl+00016C ()
[006E2090]common_ioctl+0000F0 ()
[00003938]mfspurr_sc_flih01+000174 ()

DESCRIPTION:
The IO stats function was trying to access a freed disk that was being removed from the disk group, resulting in a panic due to illegal memory access.

RESOLUTION:
Code changes have been made to resolve this race condition.

* 3959479 (Tracking ID: 3956727)

SYMPTOM:
In SOLARIS DDL discovery when SCSI ioctl fails, direct disk IO on device can lead to high memory consumption and vxconfigd hangs.

DESCRIPTION:
In Solaris DDL discovery, when the SCSI ioctls on a disk for private region IO fail, a direct disk read/write to the disk is attempted.
Due to a compiler issue, this direct read/write gets invalid arguments, which leads to high memory consumption and a vxconfigd hang.

RESOLUTION:
Changes are done in VxVM code to ensure correct arguments are passed to disk read/write.

* 3959480 (Tracking ID: 3957227)

SYMPTOM:
Disk group import succeeded, but with below error message:

vxvm:vxconfigd: [ID ** daemon.error] V-5-1-0 dg_import_name_to_dgid: Found dgid = **

DESCRIPTION:
When a disk group import is done, two configuration copies may be found. Volume Manager uses the latest configuration copy and then prints a message to indicate this scenario. Due to a wrong log level, this message got printed in the error category.

RESOLUTION:
Code changes have been made to suppress this harmless message.

* 3961353 (Tracking ID: 3950199)

SYMPTOM:
System may panic with the following stack during DMP (Dynamic Multipathing) path
restoration:

#0 [ffff880c65ea73e0] machine_kexec at ffffffff8103fd6b
 #1 [ffff880c65ea7440] crash_kexec at ffffffff810d1f02
 #2 [ffff880c65ea7510] oops_end at ffffffff8154f070
 #3 [ffff880c65ea7540] no_context at ffffffff8105186b
 #4 [ffff880c65ea7590] __bad_area_nosemaphore at ffffffff81051af5
 #5 [ffff880c65ea75e0] bad_area at ffffffff81051c1e
 #6 [ffff880c65ea7610] __do_page_fault at ffffffff81052443
 #7 [ffff880c65ea7730] do_page_fault at ffffffff81550ffe
 #8 [ffff880c65ea7760] page_fault at ffffffff8154e2f5
    [exception RIP: _spin_lock_irqsave+31]
    RIP: ffffffff8154dccf  RSP: ffff880c65ea7818  RFLAGS: 00210046
    RAX: 0000000000010000  RBX: 0000000000000000  RCX: 0000000000000000
    RDX: 0000000000200246  RSI: 0000000000000040  RDI: 00000000000000e8
    RBP: ffff880c65ea7818   R8: 0000000000000000   R9: ffff8824214ddd00
    R10: 0000000000000002  R11: 0000000000000000  R12: ffff88302d2ce400
    R13: 0000000000000000  R14: ffff880c65ea79b0  R15: ffff880c65ea79b7
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880c65ea7820] dmp_open_path at ffffffffa07be2c5 [vxdmp]
#10 [ffff880c65ea7980] dmp_restore_node at ffffffffa07f315e [vxdmp]
#11 [ffff880c65ea7b00] dmp_revive_paths at ffffffffa07ccee3 [vxdmp]
#12 [ffff880c65ea7b40] gendmpopen at ffffffffa07cbc85 [vxdmp]
#13 [ffff880c65ea7c10] dmpopen at ffffffffa07cc51d [vxdmp]
#14 [ffff880c65ea7c20] dmp_open at ffffffffa07f057b [vxdmp]
#15 [ffff880c65ea7c50] __blkdev_get at ffffffff811d7f7e
#16 [ffff880c65ea7cb0] blkdev_get at ffffffff811d82a0
#17 [ffff880c65ea7cc0] blkdev_open at ffffffff811d8321
#18 [ffff880c65ea7cf0] __dentry_open at ffffffff81196f22
#19 [ffff880c65ea7d50] nameidata_to_filp at ffffffff81197294
#20 [ffff880c65ea7d70] do_filp_open at ffffffff811ad180
#21 [ffff880c65ea7ee0] do_sys_open at ffffffff81196cc7
#22 [ffff880c65ea7f30] compat_sys_open at ffffffff811eee9a
#23 [ffff880c65ea7f40] symev_compat_open at ffffffffa0c9b08f

DESCRIPTION:
A system panic can be encountered due to a race condition. There is a possibility that a
path picked by the DMP restore daemon for processing is deleted before the restoration
process is complete. When the restore daemon then tries to access the path properties, the
system panics because the path properties have already been freed.

RESOLUTION:
Code changes are done to handle the race condition.

* 3961356 (Tracking ID: 3953481)

SYMPTOM:
A stale entry of a replaced disk was left behind under /dev/[r]dsk 
to represent the replaced disk.

DESCRIPTION:
Whenever a disk is removed from the DMP view, the driver property information of the disk
has to be removed from the kernel; if not, a stale entry is left under /dev/[r]dsk. When a
new disk replaces it with the same minor number, the stale information is left instead of
the property being refreshed.

RESOLUTION:
Code is modified to remove the stale device property when a disk is removed.

* 3961358 (Tracking ID: 3955101)

SYMPTOM:
Server might panic in a GCO environment with the following stack:

nmcom_server_main_tcp()
ttwu_do_wakeup()
ttwu_do_activate.constprop.90()
try_to_wake_up()
update_curr()
update_curr()
account_entity_dequeue()
 __schedule()
nmcom_server_proc_tcp()
kthread()
kthread_create_on_node()
ret_from_fork()
kthread_create_on_node()

DESCRIPTION:
There are recent changes done in the code to handle Dynamic port changes i.e deletion and addition of ports can now happen dynamically. It might happen that while accessing the port, it was deleted in the background by other thread. This would lead to a panic in the code since the port to be accessed has been already deleted.

RESOLUTION:
Code changes have been done to take care of this situation and check if the port is available before accessing it.

* 3961359 (Tracking ID: 3955725)

SYMPTOM:
Utility to clear "failio" flag on disk after storage connectivity is back.

DESCRIPTION:
If I/Os to the disks time out due to hardware failures such as a weak Storage Area Network (SAN) cable link or a Host Bus Adapter (HBA) failure, VxVM assumes
that the disk is bad or slow and sets the "failio" flag on the disk. Because of this flag, all subsequent I/Os fail with the "No such device" error. After the connectivity is back, the "failio" flag needs to be cleared using "vxdisk set <disk_name> failio=off". A utility "vxcheckfailio" has been introduced which
clears the "failio" flag for all the disks whose paths are all enabled.

RESOLUTION:
Code changes are done to add utility "vxcheckfailio" that will clear the "failio" flag on the disks.
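
Note: an informal usage sketch; the exact invocation of the new utility should be confirmed
against its documentation, and the per-disk command follows the description above:
# vxcheckfailio                        <- clears the "failio" flag on all disks whose paths are all enabled
# vxdisk set <disk_name> failio=off    <- clears the flag on an individual disk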

* 3961468 (Tracking ID: 3926067)

SYMPTOM:
In a Campus Cluster environment, vxassist relayout command may fail with 
following error:
VxVM vxassist ERROR V-5-1-13124  Site  offline or detached
VxVM vxassist ERROR V-5-1-4037 Relayout operation aborted. (20)

vxassist convert command also might fail with following error:
VxVM vxassist ERROR V-5-1-10128  No complete plex on the site.

DESCRIPTION:
For vxassist "relayout" and "convert" operations in a Campus Cluster environment, VxVM
(Veritas Volume Manager) needs to sort the plexes of the volume according to sites. When the
number of plexes of the volume is greater than 100, the sorting of plexes fails due to a bug
in the code. Because of this sorting failure, the vxassist relayout/convert operations fail.

RESOLUTION:
Code changes are done to properly sort the plexes according to site.

* 3961469 (Tracking ID: 3948140)

SYMPTOM:
System may panic if RTPG data returned by the array is greater than 255 with
below stack:

dmp_alua_get_owner_state()
dmp_alua_get_path_state()
dmp_get_path_state()
dmp_check_path_state()
dmp_restore_callback()
dmp_process_scsireq()
dmp_daemons_loop()

DESCRIPTION:
The size of the buffer given to the RTPG SCSI command is currently 255 bytes, but the size
of the data returned by the underlying array for RTPG can be greater than 255 bytes. As a
result, incomplete data is retrieved (only the first 255 bytes), and when the RTPG data is
read, invalid memory access occurs, resulting in an error while claiming the devices. This
invalid memory access may lead to a system panic.

RESOLUTION:
The RTPG buffer size has been increased to 1024 bytes for handling this.

* 3961480 (Tracking ID: 3957549)

SYMPTOM:
Server panicked when resyncing mirror volume with the following stack:

voliot_object_event+0x2e0
vol_oes_sio_start+0x80
voliod_iohandle+0x30
voliod_loop+0x248
thread_start+4

DESCRIPTION:
If an IO error happens during mirror resync, a trace event needs to be logged for the IO error. As the IO is from mirror resync, the KIO should be NULL. But the NULL pointer check for the KIO is missed while logging the trace event, hence the panic.

RESOLUTION:
Code changes have been made to fix the issue.

* 3964315 (Tracking ID: 3952042)

SYMPTOM:
dmpevents.log is flooding with below messages:
Tue Jul 11 09:28:36.620: Lost 12 DMP I/O statistics records
Tue Jul 11 10:05:44.257: Lost 13 DMP I/O statistics records
Tue Jul 11 10:10:05.088: Lost 6 DMP I/O statistics records
Tue Jul 11 11:28:24.714: Lost 6 DMP I/O statistics records
Tue Jul 11 11:46:35.568: Lost 1 DMP I/O statistics records
Tue Jul 11 12:04:10.267: Lost 13 DMP I/O statistics records
Tue Jul 11 12:04:16.298: Lost 5 DMP I/O statistics records
Tue Jul 11 12:44:05.656: Lost 31 DMP I/O statistics records
Tue Jul 11 12:44:38.855: Lost 2 DMP I/O statistics records

DESCRIPTION:
When DMP (Dynamic Multi-Pathing) expands the iostat table, it allocates a new, larger table, replaces the old table with the new one, and frees the old one. This
increases the possibility of memory fragmentation.

RESOLUTION:
The code is modified to increase the initial size of the iostat table.

* 3964360 (Tracking ID: 3964359)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB IDs of the remaining DA and DM records
should be incremented. Unfortunately, for some reason only the SSB ID of the DA
record is incremented, while the SSB ID of the DM record is NOT updated.
One probable reason may be that the disks get detached before the DM records
are updated.

RESOLUTION:
The code changes are done in the DG import process to identify a false split brain condition and correct the
disk SSB IDs during the import. With this fix, the import shall NOT fail due to a false split brain condition.

Additionally, one more improvement is done in the -o overridessb option to correct the disk
SSB IDs during import.

With this fix, the disk group import should ideally NOT fail due to false split brain conditions.
But if the disk group import still fails with a false split brain condition, the user can try the -o overridessb option.
For using '-o overridessb', one should confirm that all the DA records of the DG are available in ENABLED state
and are differing with the DM records against SSB by 1.

* 3967893 (Tracking ID: 3966872)

SYMPTOM:
Deporting and renaming a cloned DG renames both the source DG and the cloned DG.

DESCRIPTION:
On a DR site both the source and clone DGs can co-exist, where the source DG is deported
while the cloned DG is in the imported state. If the user then attempts to deport-rename the
cloned DG, the names of both the source and cloned DGs are changed.

RESOLUTION:
The deport code is fixed to take care of the situation.
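
Note: the deport-rename operation referred to above takes the following form (an informal
example; <clone_dg> and <new_name> are placeholders):
# vxdg -n <new_name> deport <clone_dg>
With this fix, only the cloned DG is expected to be renamed; the deported source DG keeps its
original name.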

* 3967895 (Tracking ID: 3877431)

SYMPTOM:
System panic after filesystem expansion with below stack:
#volkio_to_kio_copy at [vxio]
#volsio_nmstabilize at [vxio]
#vol_rvsio_preprocess at [vxio]
#vol_rv_write1_start at [vxio]
#voliod_iohandle at [vxio]
#voliod_loop at [vxio]

DESCRIPTION:
Veritas Volume Replicator (VVR) generates different IOs in different stages. In some cases, a parent IO does not wait until its child IO is freed. Due to a bug, a child IO accessed a freed parent IO's memory, which caused a system panic.

RESOLUTION:
Code changes have been made to avoid modifying freed memory.

* 3967898 (Tracking ID: 3930914)

SYMPTOM:
Master node panicked with following stack:
[exception RIP: vol_kmsg_respond_common+111]
#9 [ffff880898513d18] vol_kmsg_respond at ffffffffa08fd8af [vxio]
#10 [ffff880898513d30] vol_rv_wrship_srv_done at ffffffffa0b9a955 [vxio]
#11 [ffff880898513d98] volkcontext_process at ffffffffa09e7e5c [vxio]
#12 [ffff880898513de0] vol_rv_write2_start at ffffffffa0ba3489 [vxio]
#13 [ffff880898513e50] voliod_iohandle at ffffffffa09e743a [vxio]
#14 [ffff880898513e88] voliod_loop at ffffffffa09e7640 [vxio]
#15 [ffff880898513ec8] kthread at ffffffff810a5b8f
#16 [ffff880898513f50] ret_from_fork at ffffffff81646a98

DESCRIPTION:
When sending a response for a write shipping request to the slave node, a stale pointer to the message handler may be used, in which the message block is NULL. The panic occurs when the message block is dereferenced.

RESOLUTION:
Code changes have been made to fix the issue.

* 3968854 (Tracking ID: 3964779)

SYMPTOM:
Loading of the VxVM modules, i.e. vxio and vxspec, fails on Solaris 11.4.

DESCRIPTION:
The function page_numtopp_nolock has been replaced and renamed as pp_for_pfn_canfail. The _depends_on has been deprecated and cannot be used. VxVM was making use of the attribute to specify the dependency between the modules.

RESOLUTION:
The changes are mainly around the way we handle unmapped buf in vxio driver.
The Solaris API that we were using is no longer valid and is a private API.
Replaced hat_getpfnum() -> ppmapin/ppmapout calls with bp_copyin/bp_copyout in I/O code path.
In ioshipping, replaced it with miter approach and hat_kpm_paddr_mapin()/hat_kpm_paddr_mapout.

* 3969591 (Tracking ID: 3964337)

SYMPTOM:
After running vxdisk scandisks, the partition size gets set to default value of 512.

DESCRIPTION:
During device discovery, VxVM (Veritas Volume Manager) compares the original partition size with the new partition size that is reported. While reading the partition size from kernel memory, the buffer used in userland memory is not initialized and holds a garbage value. Because of this, a difference between the old and new partition sizes is detected, which leads to the partition size being set to a default value.

RESOLUTION:
Code changes have been done to properly initialize the buffer in userland which is used to read data from kernel.

* 3969997 (Tracking ID: 3964359)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB IDs of the remaining DA and DM records
should be incremented. Unfortunately, for some reason only the SSB ID of the DA
record is incremented, while the SSB ID of the DM record is NOT updated.
One probable reason may be that the disks get detached before the DM records
are updated.

RESOLUTION:
The code changes are done in the DG import process to identify a false split brain condition and correct the
disk SSB IDs during the import. With this fix, the import shall NOT fail due to a false split brain condition.

Additionally, one more improvement is done in the -o overridessb option to correct the disk
SSB IDs during import.

With this fix, the disk group import should ideally NOT fail due to false split brain conditions.
But if the disk group import still fails with a false split brain condition, the user can try the -o overridessb option.
For using '-o overridessb', one should confirm that all the DA records of the DG are available in ENABLED state
and are differing with the DM records against SSB by 1.

* 3970119 (Tracking ID: 3943952)

SYMPTOM:
Rolling upgrade from Infoscale 7.3.1.100 and above to Infoscale 7.4 and above 
in Flexible Storage Sharing (FSS) environment may lead to system panic.

DESCRIPTION:
As part of the code changes for the option (islocal=yes/no) that was added to the command "vxddladm addjbod"
in IS 7.3.1.100, the size of the UDID of the DMP nodes has been increased. In case of Flexible Storage Sharing,
when performing a rolling upgrade from the patches 7.3.1.100 and above to any InfoScale 7.4 and above release,
a mismatch of this UDID between the nodes may cause the systems to panic when IO is shipped from one node to the other.

RESOLUTION:
Code changes have been made to handle the mismatch of the UDID and rolling upgrade to IS 7.4 and above is now fixed.

* 3970370 (Tracking ID: 3970368)

SYMPTOM:
While performing the DMP + DR test case, error messages are observed when running the dmpdr -o refresh utility:
/usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh
WARN: Please Do not Run any Device Discovery Operations outside the Tool during Reconfiguration operations
INFO: The logs of current operation can be found at location /var/adm/vx/dmpdr_20181128_1638.log
INFO: Collecting OS Version Info
ERROR: Collecting LeadVille Version - Failed..Because, command [modinfo | grep "SunFC FCP"] failed with the Error:[]
INFO: Collecting SF Product version Info
INFO: Checking if MPXIO is enabled

DESCRIPTION:
In Solaris 11.4 the module "fcp (SunFC FCP)" has been renamed to "fcp (Fibre Channel SCSI ULP)". The module is used during DMPDR testing for refreshing/checking the FC devices. Because the name of the module has changed, failure is observed while running "dmpdr -o refresh" command.

RESOLUTION:
The code has been changed to take care of name change between Solaris 11.3 and 11.4.
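
Note: per the error output above, the utility's check is based on 'modinfo | grep "SunFC FCP"'.
On Solaris 11.4 the module reports itself as "Fibre Channel SCSI ULP", which can be confirmed
informally with:
# modinfo | grep -i fcp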

Patch ID: VRTSvxvm-7.3.1.100

* 3932464 (Tracking ID: 3926976)

SYMPTOM:
An excessive number of connections are found in the open state, causing an FD leak and
eventually license errors.

DESCRIPTION:
vxconfigd reports license errors because it fails to open the license files. The
failure to open is due to FD exhaustion, caused by excessive FIFO connections
left in the open state.

The FIFO connections are used by clients (vx commands) to communicate with
vxconfigd. Usually these should get closed once the client exits. One such
client, the "vxdclid" daemon, connects frequently and leaves the connection in
the open state, causing the FD leak.

This issue is applicable to the Solaris platform only.

RESOLUTION:
A library call (API) was leaving the connection in the open state on exit; this has been
fixed.

* 3933874 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.
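
Note: the import form described above is, informally:
# vxdg -s -c -o noreonline import <dgname>
where -s denotes a shared (CVM) import and <dgname> is a placeholder.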

* 3933875 (Tracking ID: 3872585)

SYMPTOM:
System running with VxFS and VxVM panics with storage key exception with the 
following stack:

simple_lock
dispatch
flih_util
touchrc
pin_seg_range
pin_com
pinx_plock
plock_pinvec
plock
mfspurr_sc_flih01

DESCRIPTION:
The xntpd process running from a VxFS file system could panic with a storage key exception.
The xntpd binary page faulted and did an IO, after which the storage key exception was
detected by the OS because it could not locate its keyset. Code review found that in a few
error cases in VxVM, the storage keys may not be restored after they are replaced.

RESOLUTION:
Storage keys are now restored even in the error cases in the vxio and DMP layers.

* 3933877 (Tracking ID: 3914789)

SYMPTOM:
System may panic when reclaiming on the secondary in a VVR (Veritas Volume Replicator)
environment. It is due to accessing an invalid address; the error message is similar to
"data access MMU miss".

DESCRIPTION:
VxVM maintains a linked list to keep memory segment information. When accessing its content
at a certain offset, the linked list is traversed. Due to a code defect, when the offset is
equal to the segment chunk size, the end of that segment is returned instead of the start of
the next segment. This can result in silent memory corruption because memory outside the
segment boundary is accessed. The system can panic when the out-of-boundary address is not
yet allocated.

RESOLUTION:
Code changes have been made to fix the out-of-boundary access.

* 3933878 (Tracking ID: 3918408)

SYMPTOM:
Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.

DESCRIPTION:
When the space in the volume is freed by deleting some data or subdisks, the corresponding subdisks are marked for 
reclamation. It might take some time for the periodic reclaim task to start if not issued manually. In the meantime, if 
same disks are used for growing another volume, it can happen that reclaim task will go ahead and overwrite the data 
written on the new volume. Because of this race condition between reclaim and volume grow operation, data corruption 
occurs.

RESOLUTION:
Code changes are done to handle the race condition between the reclaim and volume grow operations. Also, reclaim is skipped for
those disks which have already become part of the new volume.
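
Note: as an informal precaution on unpatched systems, the pending reclamation can be
triggered manually before the freed space is reused, for example:
# vxdisk reclaim <dgname>
(vxdisk reclaim also accepts individual disk or enclosure names; <dgname> is a placeholder.)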

* 3933880 (Tracking ID: 3864063)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in VVR
(Veritas Volume Replicator) kernel are not cleared because of a race between the
Master Pause SIO and the Error Handler SIO. This causes the RU (Replication
Update) SIO to fail to proceed, which leads to I/O hang.

RESOLUTION:
The code is modified to handle the race condition.

* 3933882 (Tracking ID: 3865721)

SYMPTOM:
Vxconfigd hangs in a transaction while pausing the replication in a Clustered VVR
environment.

DESCRIPTION:
In Clustered VVR (CVM VVR) environment, while pausing replication which is in 
DCM (Data Change Map) mode, the master pause SIO (staging IO) can not finish 
serialization since there are metadata shipping SIOs in the throttle queue 
with the activesio count added. Meanwhile, because master pause 
SIOs SERIALIZE flag is set, DCM flush SIO can not be started to flush the 
throttle queue. It leads to a dead loop hang state. Since the master pause 
routine needs to sync up with transaction routine, vxconfigd hangs in 
transaction.

RESOLUTION:
Code changes were made to flush the metadata shipping throttle queue if master 
pause SIO can not finish serialization.

* 3933883 (Tracking ID: 3867236)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
The flag VOL_RIFLAG_REQUEST_PENDING in VVR(Veritas Volume Replicator) kernel is 
not cleared because of a race between Master Pause SIO and RVWRITE1 SIO resulting 
in RU (Replication Update) SIO to fail to proceed thereby causing IO hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3933890 (Tracking ID: 3879324)

SYMPTOM:
VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool fails to 
handle busy device problem while LUNs are removed from OS

DESCRIPTION:
OS devices may still be busy after they are removed from the OS; this causes the 'luxadm -e offline <disk>' operation to fail and leaves stale entries in the 'vxdisk list' output,
such as:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3933893 (Tracking ID: 3890602)

SYMPTOM:
The cfgadm OS command hangs after reboot when hundreds of devices are under
DMP's control.

DESCRIPTION:
DMP generates the same entry for each of the partitions (8). A large number of 
vxdmp properties that devfsadmd has to touch causes anything that is touching 
devlinks to temporarily hang behind it.

RESOLUTION:
Code changes have been done to reduce the properties count by a factor of 8.

* 3933894 (Tracking ID: 3893150)

SYMPTOM:
VxDMP(Veritas Dynamic Multi-Pathing) vxdmpadm native ls command sometimes 
doesn't report imported disks' pool name

DESCRIPTION:
When a Solaris pool is imported with extra options like -d or -R, the paths in
'zpool status <pool name>' output can be full disk paths. The 'vxdmpadm native ls'
command does not handle this situation and hence fails to report the pool name.

RESOLUTION:
Code changes have been made to correctly handle disk full path to get its 
pool name.
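
Note: an informal illustration of the commands involved; <poolname> is a placeholder:
# zpool status <poolname>
# vxdmpadm native ls
With the fix, the pool name is reported even when 'zpool status' shows full disk paths.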

* 3933897 (Tracking ID: 3907618)

SYMPTOM:
vxdisk resize leads to data corruption on filesystem with MSDOS labelled disk having VxVM sliced format.

DESCRIPTION:
vxdisk resize changes the geometry on the device if required. When vxdisk resize is in progress, absolute offsets, i.e. offsets starting
from the start of the device, are used. For an MSDOS labelled disk, the full disk is represented by slice 4 rather than slice 0. Thus, when IO is
scheduled on the device, an extra 32 sectors get added to the IO, which is not required since the IO already starts from the start of
the device. This leads to data corruption since the IO on the device is shifted by 32 sectors.

RESOLUTION:
Code changes have been made to not add 32 sectors to the IO when vxdisk resize is in progress to avoid corruption.

* 3933899 (Tracking ID: 3910675)

SYMPTOM:
Disks directly attached to the system cannot be exported in FSS environment

DESCRIPTION:
In some cases, the UDID (Unique Disk Identifier) of a disk directly connected to a cluster
node might not be globally unique, i.e. another, different disk directly connected to a
different node in the cluster might have a similar UDID. This leads to issues while
exporting the device in an FSS (Flexible Storage Sharing) environment, since two different
disks have the same UDID, which is not expected.

RESOLUTION:
A new option "islocal=yes" has been added to the vxddladm addjbod command so
that hostguid will get appended to UDID to make it unique.
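
Note: an informal example of the new option; the vid and pid values are placeholders and
should match the actual array:
# vxddladm addjbod vid=<vendor_id> pid=<product_id> islocal=yes
# vxddladm listjbod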

* 3933900 (Tracking ID: 3915523)

SYMPTOM:
A local disk from another node belonging to a private DG is exported to the node when a
private DG is imported on the current node.

DESCRIPTION:
When a DG is imported, all the disks belonging to the DG are automatically exported to the
current node so as to make sure that the DG gets imported. This is done to have the same
behaviour with local disks as with SAN disks. Since all disks in the DG are exported, disks
which belong to a DG with the same name but a different private DG on another node get
exported to the current node as well. This leads to the wrong disk getting selected while
the DG is imported.

RESOLUTION:
Instead of DG name, DGID (diskgroup ID) is used to decide whether disk needs to
be exported or not.

* 3933901 (Tracking ID: 3915953)

SYMPTOM:
When dmp_native_support is enabled using 'vxdmpadm settune dmp_native_support=on', it takes
too long to complete.

DESCRIPTION:
When dmp_native_support is enabled, all the zpools are imported so that they come under
DMP, and so that when they are imported later with native support on, they are under DMP.
The command used for this was taking a long time; it has now been modified in the script to
reduce the time.

RESOLUTION:
Instead of searching the whole /dev/vx/dmp directory to import the zpools, they are
imported using their specific attributes.
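
Note: an informal reminder of the commands involved:
# vxdmpadm settune dmp_native_support=on
# vxdmpadm gettune dmp_native_support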

* 3933903 (Tracking ID: 3918356)

SYMPTOM:
zpools are imported automatically when DMP native support is set to on which may lead to zpool corruption.

DESCRIPTION:
When DMP native support is set to on, all zpools are imported using DMP devices so that when the import happens for the same zpool again it is
automatically imported using the DMP device. In a clustered environment, if the import of the same zpool is triggered on two different nodes at the
same time, it can lead to zpool corruption. A way needs to be provided so that zpools are not imported automatically.

RESOLUTION:
Changes have been made to provide a way for the customer to avoid importing the zpools if required, by setting the variable 
auto_import_exported_pools to off in the file /var/adm/vx/native_input, as shown below:
bash:~# cat /var/adm/vx/native_input
auto_import_exported_pools=off

* 3933904 (Tracking ID: 3921668)

SYMPTOM:
Running the vxrecover command with the -m option fails when run on a
slave node, with the message "The command can be executed only on the master."

DESCRIPTION:
The issue occurs because the 'vxrecover -g <dgname> -m' command on shared
disk groups is not shipped from the CVM (Cluster Volume Manager) slave node to
the master node using the command shipping framework.

RESOLUTION:
Implemented a code change to ship the vxrecover -m command to the master
node when it is triggered from a slave node.
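
With this fix, a command of the following form (the disk group name is a
placeholder) can be run from a slave node and is shipped to the master
transparently:

# vxrecover -g mydg -m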

* 3933907 (Tracking ID: 3873123)

SYMPTOM:
When a remote disk on a node is an EFI disk, vold enable fails.
The following message gets logged, and eventually vxconfigd goes into the 
disabled state:
Kernel and on-disk configurations don't match; transactions are disabled.

DESCRIPTION:
This is because one of the EFI remote disk cases is not properly handled
in the disk recovery code when vxconfigd is enabled.

RESOLUTION:
Code changes have been done to set the EFI flag on darec in the recovery code.

* 3933910 (Tracking ID: 3910228)

SYMPTOM:
Registration of GAB (Global Atomic Broadcast) port u fails on slave nodes after 
multiple new devices are added to the system.

DESCRIPTION:
Vxconfigd sends a command to GAB for port u registration and waits for a response 
from GAB. If, during this timeframe, vxconfigd is interrupted by any module other 
than GAB, it will not receive the signal from GAB indicating successful 
registration. Since the signal is not received, vxconfigd assumes the registration 
did not succeed and treats it as a failure.

RESOLUTION:
The signals which vxconfigd can receive are now masked before it waits for the 
signal from GAB for the registration of the GAB port u.

* 3933913 (Tracking ID: 3905030)

SYMPTOM:
The system hangs during VxVM installation or uninstallation, with one of the following stacks:

genunix:cv_wait+0x3c()
genunix:ndi_devi_enter+0x54()
genunix:devi_config_one+0x114()
genunix:ndi_devi_config_one+0xd0()
genunix:resolve_pathname_noalias+0x244()
genunix:resolve_pathname+0x10()
genunix:ldi_vp_from_name+0x100()
genunix:ldi_open_by_name+0x40()
vxio:vol_ldi_init+0x60()
vxio:vol_attach+0x5c()

Or
genunix:cv_wait+0x38
genunix:ndi_devi_enter
genunix:devi_config_one
genunix:ndi_devi_config_one
genunix:resolve_pathname_noalias
genunix:resolve_pathname
genunix:ldi_vp_from_name
genunix:ldi_open_by_name
vxdmp:dmp_setbootdev
vxdmp:dmp_attach

DESCRIPTION:
According to Oracle, ldi_open_by_name should not be called from a device's
attach, detach, 
or power entry point. This could result in a system crash or deadlock.

RESOLUTION:
Code changes have been done to avoid calling ldi_open_by_name during device
attach.

* 3936428 (Tracking ID: 3932714)

SYMPTOM:
With DMP_FAST_RECOVERY turned off, performing I/O directly on a dmpnode 
whose PGR key is reserved by another host causes an OS panic with the following stack:

void unix:panicsys+0x40()
unix:vpanic_common+0x78()
void unix:panic()
size_t unix:miter_advance+0x36c()
unix:miter_next_paddr()
int unix:as_pagelock+0x108()
genunix:physio()
int scsi:scsi_uscsi_handle_cmdf+0x254()
int scsi:scsi_uscsi_handle_cmd+0x1c()
int ssd:ssd_ssc_send+0x2a8()
int ssd:ssdioctl+0x13e4()
genunix:cdev_ioctl()
int vxdmp:dmp_scsi_ioctl+0x1c0()
int vxdmp:dmp_send_scsireq+0x74()
int vxdmp:dmp_bypass_strategy+0x98()
void vxdmp:dmp_path_okay+0xf0()
void vxdmp:dmp_error_action+0x68()
vxdmp:dmp_process_scsireq()
void vxdmp:dmp_daemons_loop+0x164()

DESCRIPTION:
DMP issues a USCSI_CMD IOCTL to the SSD driver to fire the I/O request when the 
I/O fails through the conventional path, the path status is okay, and 
DMP_FAST_RECOVERY is set. Because the I/O request is performed directly on the 
dmpnode instead of coming from the VxIO driver, the I/O data buffer is not copied 
into kernel space and still holds a user virtual address. The SSD driver cannot 
map the user virtual address to a kernel address without the user process 
information, hence the panic.

RESOLUTION:
Code changes have been made to skip the USCSI_CMD IOCTL and return an error when 
a user-space address is specified.

* 3937541 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a dmpnode.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the dmpnode, the flag PGR_FLAG_NOTSUPPORTED is set on the dmpnode.
Further PGR operations check this flag and issue PGR commands only if this flag
is NOT set.
The flag remains set even if the hardware is changed so as to support PGR.

RESOLUTION:
A new command (namely enablepr) is provided in the vxdmppr utility to clear this
flag on the specified dmpnode.
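
An illustrative invocation (the exact device argument form is an assumption;
check the utility's usage output on your system):

# vxdmppr enablepr /dev/vx/rdmp/<dmpnode_name>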

* 3937545 (Tracking ID: 3932246)

SYMPTOM:
vxrelayout operation fails to complete.

DESCRIPTION:
If connectivity to the underlying storage is lost while a volume relayout is in 
progress, some intermediate volumes used by the relayout can be left in a disabled 
or otherwise undesirable state because of I/O errors. Once storage connectivity is 
back, such intermediate volumes should be recovered by the vxrecover utility and 
the vxrelayout operation should resume automatically. But due to a bug in the 
vxrecover utility, the volumes remained in the disabled state, so the vxrelayout 
operation did not complete.

RESOLUTION:
Changes have been made in the vxrecover utility to enable the intermediate volumes.

* 3937549 (Tracking ID: 3934910)

SYMPTOM:
I/O errors occur on the data volume or file system after some cycles of snapshot 
creation/removal with DG reimport.

DESCRIPTION:
When a snapshot of the data volume is removed and the DG is reimported, the DRL 
map stays active instead of being inactivated. When a new snapshot is created, the 
DRL is re-enabled and a new DRL map is allocated with the first write to the data 
volume. The original active DRL map is never used again and is leaked. After 
several such cycles, the extents of the DCO volume are exhausted by these active 
but unused DRL maps; no more DRL maps can be allocated, and I/Os on the data 
volume fail or cannot be issued.

RESOLUTION:
Code changes have been made to inactivate the DRL map if DRL is disabled during 
the volume start, so that it can be safely reused later.

* 3937550 (Tracking ID: 3935232)

SYMPTOM:
Replication and I/O may hang on the new master node during a master 
takeover.

DESCRIPTION:
If a log owner change kicks in while a master switch is in progress, the flag 
VOLSIO_FLAG_RVC_ACTIVE is set by the log owner change SIO. 
RVG (Replicated Volume Group) recovery initiated by the master switch clears 
the flag VOLSIO_FLAG_RVC_ACTIVE after the RVG recovery is done. When the log 
owner change completes, because VOLSIO_FLAG_RVC_ACTIVE has already been 
cleared, resetting the flag VOLOBJ_TFLAG_VVR_QUIESCE is skipped. The presence 
of the VOLOBJ_TFLAG_VVR_QUIESCE flag keeps replication and application I/O on 
the RVG pending forever.

RESOLUTION:
Code changes have been made so that the log owner change waits until the master 
switch completes.

* 3937808 (Tracking ID: 3931936)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, after restarting a slave node, 
VxVM commands on the master node hang, with the result that failed disks on the 
slave node cannot rejoin the disk group.

DESCRIPTION:
When lost remote disks on the slave node come back, the operations to online 
these disks and add them to the disk group are performed on the master node. 
Disk online includes operations on both the master and the slave node. On the 
slave node these disks should be offlined and then re-onlined, but due to a code 
defect the re-online was missed, so the disks were left in the re-onlining state. 
The subsequent add-disk-to-disk-group operation needs to issue private region 
I/Os on the disk; these I/Os are shipped to the slave node to complete. As the 
disks are in the re-online state, a busy error is returned and the remote I/Os 
keep retrying, hence the VxVM command hangs on the master node.

RESOLUTION:
Code changes have been made to fix the issue.

* 3937811 (Tracking ID: 3935974)

SYMPTOM:
While communicating with a client process, the vxrsyncd daemon terminates; after 
some time it gets restarted, or it may require a reboot to start.

DESCRIPTION:
When the client process shuts down abruptly and the vxrsyncd daemon attempts to 
write to the client socket, a SIGPIPE signal is generated. The default action for 
this signal is to terminate the process, hence vxrsyncd gets terminated.

RESOLUTION:
The SIGPIPE signal is now handled in order to prevent the termination of 
vxrsyncd.

* 3938392 (Tracking ID: 3909630)

SYMPTOM:
An OS panic with the following stack occurs after some DMP devices are migrated 
to TPD (Third Party Driver) devices.

void vxdmp:dmp_register_stats+0x120()
int vxdmp:gendmpstrategy+0x244()
vxdmp:dmp_restart_io()
int vxdmp:dmp_process_deferbp+0xec()
void vxdmp:dmp_process_deferq+0x68()
void vxdmp:dmp_daemons_loop+0x160()

DESCRIPTION:
When updating the CPU index for a new path migrated to TPD, I/Os on this path are 
unquiesced before the last CPU's stats table is grown. As a result, while 
registering I/O statistics for a restarted I/O on this path, if the last CPU's 
stats table needs to be accessed, an invalid memory access and panic may happen.

RESOLUTION:
Code changes have been made to fix this issue.

* 3944743 (Tracking ID: 3945411)

SYMPTOM:
The system keeps rebooting cyclically after enabling DMP (Dynamic Multi-Pathing) 
native support for ZFS boot devices, with the error below:
NOTICE: VxVM vxdmp V-5-0-1990 driver version VxVM  Multipathing Driver 
installed
WARNING: VxVM vxdmp V-5-3-2103 dmp_claim_device: Boot device not found in OS 
tree
NOTICE: zfs_parse_bootfs: error 19
Cannot mount root on rpool/40 fstype zfs
panic[cpu0]/thread=20012000: vfs_mountroot: cannot mount root
Warning - stack not written to the dumpbuf
000000002000fa00 genunix:main+1dc ()

DESCRIPTION:
The boot device came under DMP control after enabling DMP native support. DMP 
therefore failed to get its device number by inquiring about the device under OS 
control, which caused the issue.

RESOLUTION:
code changes were made to get the correct device number of boot device.

Patch ID: VRTSvxfs-7.3.1.2500

* 3978644 (Tracking ID: 3978615)

SYMPTOM:
The VxFS file system does not get mounted after an OS upgrade and the first reboot.

DESCRIPTION:
The vxfs-modload service was not getting called before the local_fs service. The vxfs-modload service is used to replace the appropriate kernel modules after an OS upgrade and reboot. The VxFS device files were also not being configured properly after the system boots on Solaris 11.4. Due to this, the file system does not get mounted after the OS upgrade and first reboot.

RESOLUTION:
Code changes have been made in the service file.

Patch ID: VRTSvxfs-7.3.1.2300

* 3929952 (Tracking ID: 3929854)

SYMPTOM:
Event notification is not supported on CFS mount points, so the following errors 
appear in the log file:
-bash-4.1# /usr/jdk/jdk1.8.0_121/bin/java test1
myWatcher: sun.nio.fs.SolarisWatchService@70dea4e filesystem provider is : sun.nio.fs.SolarisFileSystemProvider@5c647e05
java.nio.file.FileSystemException: /mnt1: Operation not supported
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.asIOException(UnixException.java:111)
        at sun.nio.fs.SolarisWatchService$Poller.implRegister(SolarisWatchService.java:311)
        at sun.nio.fs.AbstractPoller.processRequests(AbstractPoller.java:260)
        at sun.nio.fs.SolarisWatchService$Poller.processEvent(SolarisWatchService.java:425)
        at sun.nio.fs.SolarisWatchService$Poller.run(SolarisWatchService.java:397)
        at java.lang.Thread.run(Thread.java:745)

DESCRIPTION:
The WebLogic watchservice was failing to register with a CFS mount point directory,
resulting in "/mnt1: Operation not supported" on the CFS mount point.

RESOLUTION:
Added a new module parameter "vx_cfsevent_notify" to enable event notification
support on CFS (an illustrative way to enable it is shown after the notes below).
By default, vx_cfsevent_notify is disabled.

This works only in an Active-Passive scenario:

- The primary (Active) node on which this tunable is set receives notifications 
for events that happen on the CFS mount point directory.

- Secondary (Passive) nodes do not receive any notifications.
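
A hypothetical way to turn the parameter on, assuming it is exposed like other
VxFS kernel tunables on Solaris (confirm the exact value and mechanism in the
product documentation): add the following line to /etc/system on the primary
node and reboot:

set vxfs:vx_cfsevent_notify=1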

* 3933816 (Tracking ID: 3902600)

SYMPTOM:
Contention is observed on the vx_worklist_lk lock in a cluster-mounted file 
system with ODM.

DESCRIPTION:
In a CFS environment, for ODM async I/O reads, iodones are completed 
immediately, calling into ODM itself from the interrupt handler. But all 
CFS writes are currently processed in a delayed fashion: the requests
are queued and processed later by a worker thread. This was adding delays
to ODM writes.

RESOLUTION:
Optimized the I/O processing of ODM work items on CFS so that they
are processed in the same context where possible.

* 3943715 (Tracking ID: 3944884)

SYMPTOM:
ZFOD extents are being pushed to the clones.

DESCRIPTION:
In case of logged writes on ZFOD extents on the primary, ZFOD extents are 
pushed to the clones, which is not expected and results in internal write test 
failures.

RESOLUTION:
The code has been modified not to push ZFOD extents to clones.

* 3944902 (Tracking ID: 3944901)

SYMPTOM:
"hastop -all" initiate un-mounting operation on all the mount point resource. This operation might hang in infoscale version of 7.3.1

DESCRIPTION:
Before starting the recovery, the GLM scope flag is cleared in order to force lock consumers to wait, since local recovery will soon follow. A recovery initiated by RESTART can leave in between if some other recovery is initiated. Due to a collision of the SCOPE_LEAVE and RESTART GLM APIs, there can be a case where nobody performs the recovery and all other operations hang waiting for the recovery to complete.

RESOLUTION:
Code changes have been made to ensure that during such races, one of them completes the local recovery.

* 3947560 (Tracking ID: 3947421)

SYMPTOM:
The DLV upgrade operation fails while upgrading the file system from DLV 9 to DLV 10, 
with the following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/metadg/metavol - Invalid argument

DESCRIPTION:
This happens if the file system was created with DLV 5 or earlier and was later
successfully upgraded step by step from 5 to 6, 6 to 7, 7 to 8, and 8 to 9. The
newly written code tries to find the "mkfs" logging in the history log. There was
no concept of logging the mkfs operation in the history log for DLV 5 or earlier,
so the upgrade operation fails while upgrading from DLV 9 to 10.

RESOLUTION:
Code changes have been done to complete the upgrade operation even when the mkfs
logging is not found.

* 3947561 (Tracking ID: 3947433)

SYMPTOM:
While adding a volume (part of a vset) to an already mounted file system, fsvoladm
displays the following error:
UX:vxfs fsvoladm: ERROR: V-3-28487: Could not find the volume <volume name> in vset

DESCRIPTION:
The code that finds the volume in the vset requires a file descriptor for the
character special device, but in the affected code path the file descriptor that
is passed is for the block device.

RESOLUTION:
Code changes have been done to pass the file descriptor of character special device.
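
For reference, an invocation of the kind affected by this issue (mount point,
volume name, and size are placeholders; see fsvoladm(1M) for the exact size
format):

# fsvoladm add /mnt1 vol2 2g
# fsvoladm list /mnt1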

* 3947651 (Tracking ID: 3947648)

SYMPTOM:
Due to wrong auto-tuning of vxfs_ninode (the inode cache), a hang can be 
observed under heavy memory pressure.

DESCRIPTION:
If the kernel heap memory is very large (particularly observed on Solaris T7 
servers), an overflow can occur due to a smaller-sized data type.

RESOLUTION:
Changed the code to handle overflow.
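
If explicit sizing is preferred over auto-tuning, the inode cache size can also be
set manually, for example via /etc/system on Solaris (the value below is purely
illustrative):

set vxfs:vxfs_ninode=1000000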

* 3952304 (Tracking ID: 3925281)

SYMPTOM:
Hexdump the incore inode data and piggyback data when inode revalidation fails.

DESCRIPTION:
While assuming inode ownership, if inode revalidation fails with piggyback data, 
the piggyback and in-core inode data were not hexdumped, so the current state of 
the inode was lost. An inode revalidation failure message has been added, and the 
in-core inode data and piggyback data are now hexdumped.

RESOLUTION:
Code is modified to print a hexdump of the in-core inode and piggyback data when 
revalidation of the inode fails.

* 3952305 (Tracking ID: 3939996)

SYMPTOM:
Contention is seen when multiple threads share a single file and at least one 
thread modifies it, on clusters with a large number of CPUs performing CFS 
operations on files shared by multiple threads. This is most likely on AIX 
systems with a large number of CPUs.

DESCRIPTION:
GLM has a global per-port lock called gp_gen_lock, which protects recovery-related 
counters. Although gp_gen_lock is held for a very short time, on the lock-request 
side it is taken frequently and from multiple threads, causing contention on 
high-end servers.

RESOLUTION:
Code changes have been made to avoid the bottleneck/contention.

* 3952309 (Tracking ID: 3941942)

SYMPTOM:
If a file system is created with fiostats enabled and ODM writes are in progress, 
forcefully unmounting the file system can panic the system with the following stack:

crash_kexec
oops_end
no_context
__bad_area_nosemaphore
bad_area_nosemaphore
__do_page_fault
do_page_fault
vx_fiostats_free
fdd_chain_inactive_common
fdd_chain_inactive
fdd_odm_close
odm_vx_close
odm_fcb_free
odm_fcb_rel
odm_ident_close
odm_exit
odm_tsk_daemon_deathlist
odm_tsk_daemon
odm_kthread_init
kernel_thread

DESCRIPTION:
When freeing the fiostats assigned to an inode during a forced unmount of the 
file system, the fs field must be validated. Otherwise we may end up 
dereferencing a NULL pointer in the checks in this code path, which panics the 
system.

RESOLUTION:
Code is modified to add checks to validate the fs field in such forced unmount 
scenarios.

* 3953466 (Tracking ID: 3953464)

SYMPTOM:
On Solaris, heavy lock contention is seen on the pl_msgq_lock lock on multi-core CPUs.

DESCRIPTION:
GLM and VxFS maintain a port to communicate in a CFS environment, which is essentially a channel 
for communication and maintains various states. In order to safeguard these states there is a 
lock (pl_msgq_lock) in this port. Since this port is common to all the locks in all of 
VxFS, this lock becomes a contention point.

RESOLUTION:
Code is modified to allow sleeping on multi-core CPUs when serving an interrupt, which avoids taking 
this spinlock.

* 3955886 (Tracking ID: 3955766)

SYMPTOM:
CFS hangs while allocating extents; a thread like the following loops forever doing extent allocation:

#0 [ffff883fe490fb30] schedule at ffffffff81552d9a
#1 [ffff883fe490fc18] schedule_timeout at ffffffff81553db2
#2 [ffff883fe490fcc8] vx_delay at ffffffffa054e4ee [vxfs]
#3 [ffff883fe490fcd8] vx_searchau at ffffffffa036efc6 [vxfs]
#4 [ffff883fe490fdf8] vx_extentalloc_device at ffffffffa036f945 [vxfs]
#5 [ffff883fe490fea8] vx_extentalloc_device_proxy at ffffffffa054c68f [vxfs]
#6 [ffff883fe490fec8] vx_worklist_process_high_pri_locked at ffffffffa054b0ef [vxfs]
#7 [ffff883fe490fee8] vx_worklist_dedithread at ffffffffa0551b9e [vxfs]
#8 [ffff883fe490ff28] vx_kthread_init at ffffffffa055105d [vxfs]
#9 [ffff883fe490ff48] kernel_thread at ffffffff8155f7d0

DESCRIPTION:
In the current code of emtran_process_commit(), it is possible for the EAU summary to be updated without delegation of the corresponding EAU, because the VX_AU_SMAPFREE flag is cleared before updating the EAU summary, which can lead to a hang. Also, improper error handling in the case of a bad map can cause some hang situations.

RESOLUTION:
To avoid the potential hang, the code is modified to clear the VX_AU_SMAPFREE flag after updating the EAU summary, and error handling in emtran_commit/undo has been improved.

* 3958475 (Tracking ID: 3958461)

SYMPTOM:
The mkfs_vxfs(1M) and vxupgrade_vxfs(1M) man pages contain outdated information about the supported DLVs.

DESCRIPTION:
The mkfs_vxfs(1M) and vxupgrade_vxfs(1M) man pages contain older DLV information.

RESOLUTION:
Code changes have been made to reflect the updated DLV support in the mkfs_vxfs(1M) and vxupgrade_vxfs(1M) man pages.

* 3958776 (Tracking ID: 3958759)

SYMPTOM:
fsadm "-i" option can't be used vxfs7.3.1 for Solaris env..

DESCRIPTION:
The code expects an argument value for the "-i" option on Solaris.

RESOLUTION:
Code changes have been made accordingly.

* 3960468 (Tracking ID: 3957092)

SYMPTOM:
System panic in spin_lock_irqsave via splunkd in the rddirahead path.

DESCRIPTION:
Based on the current analysis, the spinlock appears to be getting re-initialized in the rddirahead path, which is causing this deadlock.

RESOLUTION:
Code changes have been made to avoid this situation.

* 3967894 (Tracking ID: 3932163)

SYMPTOM:
Temporary files are being created in /tmp

DESCRIPTION:
The VxFS component is creating files in /tmp

RESOLUTION:
Added code to redirect temporary files to a common location.

* 3967901 (Tracking ID: 3932804)

SYMPTOM:
Temporary files are being created in /tmp

DESCRIPTION:
The odm component is creating files in /tmp

RESOLUTION:
Added code to redirect temporary files to a common location.

* 3967903 (Tracking ID: 3932845)

SYMPTOM:
Temporary files are being created in /tmp

DESCRIPTION:
The GLM component is creating files in /tmp

RESOLUTION:
Added code to redirect temporary files to a common location.

* 3968786 (Tracking ID: 3968785)

SYMPTOM:
VxFS module failed to load on Solaris 11.4.

DESCRIPTION:
The VxFS module failed to load on the Solaris 11.4 release due to kernel-level changes in the 11.4 kernel.

RESOLUTION:
Added VxFS support for Solaris 11.4 release.

* 3968821 (Tracking ID: 3968885)

SYMPTOM:
The fsadm utility might dump a user-space core.

DESCRIPTION:
The fsadm utility might dump a user-space core with the following stack trace:

pth_signal.pthread_kill
pth_signal._p_raise
raise.raise
abort()
print.assert
build_list
reorg_inode
fset_ilist_process
do_reorg
ext_reorg
do_fsadm
main

RESOLUTION:
Added code to fix this issue.

Patch ID: VRTSvxfs-7.3.1.100

* 3933810 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archive processes are running on a clustered
file system.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation, which
mainly happens when there are multiple archivers running on the
same node. The allocation pattern of the Oracle archiver processes is:

1. write the header with O_SYNC
2. ftruncate-up the file to its final size (a few GBs typically)
3. do lio_listio with 1 MB iocbs

The problem occurs because all allocations done in this manner go through
internal allocations, i.e. allocations below the file size instead of allocations
past the file size. Internal allocations are done at most 8 pages at a time. So if
there are multiple processes doing this, they all get these 8 pages alternately
and the file system becomes very fragmented.

RESOLUTION:
Added a tunable which allocates ZFOD extents when ftruncate
tries to increase the size of the file, instead of creating a hole. This
eliminates the allocations internal to the file size and thus the fragmentation.
Also fixed the earlier implementation of the same fix, which ran into
locking issues, and fixed a performance issue while writing from the secondary node.

* 3933819 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system is frozen during 
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its operation. It may 
happen that corruption is detected while the freeze is in progress and the full 
fsck flag is set on the file system; however, this doesn't stop vxupgrade from 
proceeding.
At a later stage of vxupgrade, after the structures related to the new disk layout 
are updated on the disk, VxFS frees up and zeroes out some of the old metadata 
inodes. If any error occurs after this point (because the full fsck flag is set), 
the file system needs to go back completely to the previous version at the time 
of the full fsck. Since the metadata corresponding to the previous version is 
already cleared, the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file 
system during vxupgrade, and to disable the file system if an error occurs after 
writing the new metadata on the disk. This forces the newly written metadata to 
be loaded into memory on the next mount.

* 3933820 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on a cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the directory 
inode's ownership is switched between nodes. When ownership of the directory 
inode comes back to the node which previously abdicated it, ACL permissions were 
not inherited correctly for newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3933823 (Tracking ID: 3904794)

SYMPTOM:
Extending a qio file fails with an EINVAL error if the reservation block is not set.

DESCRIPTION:
When extending the file size through qiomkfile, the extend size is calculated 
based on the reserve block. If the reserve block is reset, extending the file 
fails with an EINVAL error because qiomkfile issues a setext ioctl with a size 
smaller than the current file size.

RESOLUTION:
Code is modified to calculate the new extend size based on the maximum of the 
reserved blocks and the blocks currently allocated to the file.

* 3933824 (Tracking ID: 3908785)

SYMPTOM:
System panic observed because of a null page address in the writeback structure 
in the case of the kswapd process.

DESCRIPTION:
The secfs2/encryptfs layers use the write VOP as a hook when kswapd is triggered 
to free a page. Ideally, kswapd should call the writepage() routine, where the 
writeback structures are filled correctly. When the write VOP is called because 
of the hook in secfs2/encryptfs, the writeback structures are cleared, resulting 
in a null page address.

RESOLUTION:
Code changes have been made to call the VxFS kswapd routine only if a valid page 
address is present.

* 3933828 (Tracking ID: 3921152)

SYMPTOM:
Performance drop. Core dump shows threads doing vx_dalloc_flush().

DESCRIPTION:
An implicit typecast error in vx_dalloc_flush() can cause this performance issue.

RESOLUTION:
The code is modified to do an explicit typecast.

* 3933843 (Tracking ID: 3926972)

SYMPTOM:
Once a node reboots or goes out of the cluster, the whole cluster can hang.

DESCRIPTION:
This is a three-way deadlock, in which a glock grant could block the recovery 
while trying to cache the grant against an inode. When it tries for the ilock, if 
that lock is held by an hlock revoke that is waiting to get a GLM lock (in our 
case the cbuf lock), it cannot get it because a recovery is in progress. The 
recovery cannot proceed because the glock grant thread blocked it.

Hence the whole cluster hangs.

RESOLUTION:
The fix is to avoid taking ilock in GLM context, if it's not available.

* 3934841 (Tracking ID: 3930267)

SYMPTOM:
Deadlock between fsq flush threads and writer threads.

DESCRIPTION:
On Linux, under certain circumstances (i.e. to account for dirty pages), writer 
threads take a lock on the inode and start flushing dirty pages, which requires 
the page lock. If fsq flush threads then start flushing a transaction on the same 
inode, they need the inode lock held by the writer thread. The page lock is held 
by another writer thread that is waiting for transaction space, which can only be 
freed by the fsq flush thread. This leads to a deadlock between these three 
threads.

RESOLUTION:
Code is modified to add a new flag which will skip dirty page accounting.

* 3935903 (Tracking ID: 3933763)

SYMPTOM:
Oracle was hung; both the PL/SQL session and ssh were hanging.

DESCRIPTION:
There is a case in the reuse code path where we hit a dead loop while processing
the inactivation thread.

RESOLUTION:
To fix this loop, we now try only once to finish the inactivation of inodes
before allocating a new inode for structural/attribute inode processing.

* 3936286 (Tracking ID: 3936285)

SYMPTOM:
The fscdsconv command may fail the conversion for disk layout version 12 and
above. After exporting the file system for use on the specified target, it fails
to mount on that target with the error below:

# /opt/VRTS/bin/mount <vol> <mount-point>
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version

When importing the file system on the target for use on the same system, it asks
for a 'fullfsck' during mount. After the 'fullfsck', the file system mounts
successfully, but fsck gives the messages below:

# /opt/VRTS/bin/fsck -y -o full /dev/vx/rdsk/mydg/myvol
log replay in progress
intent log does not contain valid log entries
pass0 - checking structural files
fileset 1 primary-ilist inode 34 (SuperBlock)
                failed validation clear? (ynq)y
pass1 - checking inode sanity and blocks
rebuild structural files? (ynq)y
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
corrupted CUT entries, clear? (ynq)y
au 0 emap incorrect - fix? (ynq)y
OK to clear log? (ynq)y
flush fileset headers? (ynq)y
set state to CLEAN? (ynq)y

DESCRIPTION:
While checking the file system version in fscdsconv, the check for DLV 12 and
above was missing, which triggered this issue.

RESOLUTION:
Code changes have been done to handle filesystem version 12 and above for
fscdsconv command.

* 3937536 (Tracking ID: 3940516)

SYMPTOM:
The file resize thread loops infinitely when trying to resize a file to a size 
greater than 4 TB.

DESCRIPTION:
Because of a vx_u32_t typecast in the vx_odm_resize function, the resize thread
gets stuck in an infinite loop.

RESOLUTION:
Removed the vx_u32_t typecast in vx_odm_resize() to handle such scenarios.

* 3942697 (Tracking ID: 3940846)

SYMPTOM:
While upgrading the file system from DLV 9 to DLV 10, vxupgrade fails
with the following error:

# vxupgrade -n 10 <mount point>
UX:vxfs vxupgrade: ERROR: V-3-22567: cannot upgrade <volname> - Not owner

DESCRIPTION:
While upgrading from DLV 9 to DLV 10 or later, the upgrade code path searches for
the mkfs version in the histlog. This change was introduced to perform some
specific operations while upgrading the file system. The issue occurs only if
mkfs was done on the file system with version 6 and the upgrade was performed
afterwards, because the histlog conversion from DLV 6 to DLV 7 does not
propagate the mkfs version field. This issue occurs only on InfoScale 7.3.1
onwards.

RESOLUTION:
Code changes have been made to allow the DLV upgrade even in cases where the mkfs
version is not present in the histlog.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sol11_sparc-Patch-7.3.1.200.tar.gz to /tmp
2. Untar infoscale-sol11_sparc-Patch-7.3.1.200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sol11_sparc-Patch-7.3.1.200.tar.gz
    # tar xf /tmp/infoscale-sol11_sparc-Patch-7.3.1.200.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # pwd /tmp/hf
    # ./installVRTSinfoscale731P200 [<host1> <host2>...]

You can also install this patch together with the 7.3.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.3.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE