infoscale-sles12_x86_64-Patch-7.3.1.3200

 Basic information
Release type: Patch
Release date: 2020-05-09
OS update support: SLES12 x86-64 SP 5
Technote: None
Documentation: None
Download size: 189.61 MB
Checksum: 3711215587

 Applies to one or more of the following products:
InfoScale Availability 7.3.1 On SLES12 x86-64
InfoScale Enterprise 7.3.1 On SLES12 x86-64
InfoScale Foundation 7.3.1 On SLES12 x86-64
InfoScale Storage 7.3.1 On SLES12 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches (with their release dates):
infoscale-sles12.4_x86_64-Patch-7.3.1.1100 (obsolete) - 2019-07-30
fs-sles12_x86_64-Patch-7.3.1.100 (obsolete) - 2018-03-14
odm-sles12_x86_64-Patch-7.3.1.100 (obsolete) - 2018-03-14

 Fixes the following incidents:
3927713, 3932464, 3933242, 3933810, 3933816, 3933819, 3933820, 3933824, 3933828, 3933834, 3933843, 3933844, 3933874, 3933875, 3933876, 3933877, 3933878, 3933880, 3933882, 3933883, 3933884, 3933888, 3933889, 3933890, 3933897, 3933898, 3933900, 3933904, 3933907, 3933910, 3933911, 3933912, 3934775, 3934841, 3935528, 3936184, 3936286, 3937536, 3937540, 3937541, 3937542, 3937544, 3937549, 3937550, 3937808, 3937811, 3938258, 3939406, 3939411, 3939938, 3940039, 3940143, 3940266, 3940368, 3940652, 3940830, 3953920, 3957433, 3958860, 3959451, 3959452, 3959453, 3959455, 3959458, 3959460, 3959461, 3959462, 3959463, 3959465, 3959469, 3959471, 3959473, 3959475, 3959476, 3959477, 3959478, 3959479, 3959480, 3960383, 3961353, 3961355, 3961356, 3961358, 3961359, 3961468, 3961469, 3961480, 3964315, 3964360, 3966132, 3967893, 3967895, 3967898, 3969591, 3969997, 3970119, 3973086, 3974348, 3974355, 3974652, 3974669, 3978690, 3978691, 3978693, 3978694, 3978916, 3978917, 3978918, 3979133, 3979144, 3979145, 3979677, 3979678, 3979679, 3979680, 3979681, 3979682, 3980401, 3980787, 3981458, 3981512, 3982860, 3983303, 3990046, 3990153, 3990334, 3991434, 3992449, 3993348, 3993931, 3994732, 3995202, 3996402, 3997075, 3997077, 3997112, 3997861, 3998798, 3999406, 4000509, 4000510, 4000511, 4000512, 4000513, 4000615, 4000617, 4000618, 4000619, 4000620, 4000621, 4000624, 4000625, 4000629, 4000761, 4001051, 4001069, 4001880, 4001901, 4002231, 4003009, 4003010, 4003011, 4003012, 4003013, 4003022, 4003400

 Patch ID:
VRTSvxfs-7.3.1.2900-SLES12
VRTSodm-7.3.1.2700-SLES12
VRTSglm-7.3.1.1400-SLES12
VRTSgms-7.3.1.1400-SLES12
VRTSveki-7.3.1.1200-SLES12
VRTSllt-7.3.1.4300-SLES12
VRTSgab-7.3.1.2300-SLES12
VRTSvxfen-7.3.1.3300-SLES12
VRTSamf-7.3.1.3300-SLES12
VRTSdbac-7.3.1.2300-SLES12
VRTSaslapm-7.3.1.2900-SLES12
VRTSvxvm-7.3.1.2900-SLES12
VRTSpython-3.5.1.2-SLES12

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.3.1 * * *
                         * * * Patch 3200 * * *
                         Patch Date: 2020-05-07


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.3.1 Patch 3200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSpython
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.3.1
   * InfoScale Enterprise 7.3.1
   * InfoScale Foundation 7.3.1
   * InfoScale Storage 7.3.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.3.1.2900
* 3933816 (3902600) Contention observed on vx_worklist_lk lock in cluster 
mounted file system with ODM
* 3983303 (3983148) System crashed when installing RPM from VxFS filesystem because of unsupported attribute.
* 3990046 (3985839) Cluster hang is observed during allocation of extent to file because of lost delegation of AU.
* 3990334 (3969280) Buffer not getting invalidated or marked stale in transaction failure error code path for Large Directory Hash (LDH) feature
* 3991434 (3990830) File system detected inconsistency with link count table and FSCK flag gets set on the file system.
* 3993348 (3993442) File system mount hang on a node where NBUMasterWorker group is online
* 3993931 (3984750) Code changes to prevent memory leak in the "fsck" binary.
* 3995202 (3990257) VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface
* 3996402 (3984557) Fsck core dumped during sanity check of directory.
* 3997075 (3983191) Fullfsck failing on FS due to invalid attribute length.
* 3997077 (3995526) Fsck command of vxfs may coredump.
* 3997861 (3978149) FIFO file's timestamps are not updated in case of writes.
* 3998798 (3998162) If the file system is disabled during IAU allocation, log replay may fail for this intermediate state.
* 3999406 (3998168) vxresize operations result in a system freeze of 8-10 minutes, causing application hangs and VCS timeouts
* 4000510 (3990485) VxFS module failed to load on SLES12SP5.
* 4002231 (3982869) Kernel panic seen in internal testing when SSD cache enabled
Patch ID: VRTSvxfs-7.3.1.2600
* 3978690 (3973943) VxFS module failed to load on SLES12 SP4
* 3978916 (3975019) Under I/O load with NFS v4 using NFS leases, the server may panic
* 3978917 (3978305) The vx_upgrade command causes VxFS to panic.
* 3978918 (3975962) Mounting a VxFS file system with more than 64 PDTs may panic the server.
* 3980401 (3980043) A file system corruption occurred during a filesystem mount operation.
* 3980787 (3980754) In function vx_io_proxy_thread(), system may hit kernel panic due to general protection fault.
* 3981512 (3980808) Kernel panic - not syncing: Hard LOCKUP
Patch ID: VRTSvxfs-7.3.1.100
* 3933810 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3933819 (3879310) The file system may get corrupted after a failed vxupgrade.
* 3933820 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3933824 (3908785) System panic observed because of null page address in writeback 
structure in case of 
kswapd process.
* 3933828 (3921152) Performance drop caused by vx_dalloc_flush().
* 3933834 (3931761) Cluster wide hang may be observed in case of high workload.
* 3933843 (3926972) A recovery event can result in a cluster wide hang.
* 3933844 (3922259) Force umount hang in vx_idrop
* 3933912 (3922986) Dead lock issue with buffer cache iodone routine in CFS.
* 3934841 (3930267) Deadlock between fsq flush threads and writer threads.
* 3936286 (3936285) fscdsconv command may fail the conversion for disk layout version (DLV) 12 and above.
* 3937536 (3940516) File resize thread loops infinitely for file resize operation crossing 32 bit
boundary.
* 3938258 (3938256) When checking file size through seek_hole, it will return incorrect offset/size 
when delayed allocation is enabled on the file.
* 3939406 (3941034) VxFS worker thread may continuously spin on a CPU
* 3940266 (3940235) A hang might be observed if the file system gets disabled while ENOSPC handling is being performed by inactive processing
* 3940368 (3940268) File system might get disabled in case the size of the directory surpasses the
vx_dexh_sz value.
* 3940652 (3940651) The vxupgrade command might fail while upgrading Disk Layout Version (DLV) 10 to
any upper DLV version.
* 3940830 (3937042) Data corruption seen when issuing writev with mixture of named page and 
anonymous page buffers.
Patch ID: VRTSpython-3.5.1.2
* 4003400 (4003401) VxVM fails to support KMS type of encryption when VRTSpython 3.5.1.1 is used
Patch ID: VRTSvxvm-7.3.1.2900
* 3982860 (3981092) vxdg command with the "-c" option re-minors the DG, which leads to NFS stale file handles
* 3990153 (3990056) Fixing a memory leak due to difference in size during allocation and freeing of 
RTPG buffer.
* 3992449 (3991913) Fixes for regression due to earlier check-in for DG level encryption feature.
* 3994732 (3983759) While mirroring the volume on AIX, a system panic was observed because of an interrupt-disabled state.
* 3997112 (4000298) New feature development of DG level encryption and Rekey support.
* 4000615 (3942652) Layered volume creation with DCO (Data Change Object) log is consuming excessive time. Additionally, FMR (Fast Mirror Resync) tracking gets enabled for layered volumes even though plex is not detached.
* 4000617 (3991019) Configured with "use_all_paths=yes", Veritas Dynamic Multi-Pathing (DMP) failed to balance IO when paths from an ALUA array were disabled/enabled.
* 4000618 (3976392) Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.
* 4000619 (3950335) Support for throttling of Administrative IO for layered volumes
* 4000620 (3991580) Deadlock may happen if IO performed on both source and snapshot volumes.
* 4000621 (3992053) Data corruption may happen with layered volumes due to some data not re-synced while attaching a plex.
* 4000624 (3995474) VxVM sub-disks IO error occurs unexpectedly on SLES12SP3.
* 4000625 (3987937) VxVM command hang may happen when snapshot volume is configured.
* 4000629 (3975667) Softlock in vol_ioship_sender kernel thread
* 4000761 (3969487) Data corruption observed with layered volumes when mirror of the volume is detached and attached back.
* 4001051 (3982862) SLES12 SP5 support for VxVM
* 4001880 (3907596) vxdmpadm setattr command gives error while setting the path attribute.
* 4001901 (3930312) Short CPU spike while running vxpath_links
Patch ID: VRTSvxvm-7.3.1.2600
* 3957433 (3941844) VVR secondary hang while deleting replication ports.
* 3964360 (3964359) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3966132 (3960576) ODM cfgmgr rule for vxdisk scandisks is added twice in the database.
* 3967893 (3966872) Deport and rename clone DG changes the name of clone DG along with source DG
* 3967895 (3877431) System panic after filesystem expansion.
* 3967898 (3930914) Master node panic occurs while sending a response message to the slave node.
* 3969591 (3964337) Partition size getting set to default after running vxdisk scandisks.
* 3969997 (3964359) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3970119 (3943952) Rolling upgrade to Infoscale 7.4 and above is broken.
* 3973086 (3956134) System panic might occur when IO is in progress in VVR (veritas volume replicator) environment.
* 3974348 (3968279) Vxconfigd dumping core for NVME disk setup.
* 3974355 (3931048) VxVM (Veritas Volume Manager) creates particular log files with write permission
to all users.
* 3974652 (3899568) Adding tunable dmp_compute_iostats to start/stop the iostat gathering
persistently.
* 3974669 (3966378) SLES11 SP4 support
* 3979133 (3959618) VxVM support on SLES12 SP4 with kernel version 4.12
* 3979144 (3941784) System may panic while using vxfenadm or during the SCSI-3 PGR operations for the IO fencing.
* 3979145 (3976985) Node from secondary site which is part of Veritas Volume Replicator(VVR) cluster may panic.
Patch ID: VRTSvxvm-7.3.1.2100
* 3933888 (3868533) IO hang happens because of a deadlock situation.
* 3937544 (3929246) When tried with yum, VxVM installation fails in the chroot environment.
* 3953920 (3949954) Dumpstack messages are printed when the vxio module is loaded for the first time, when blk_register_queue is called.
* 3958860 (3953681) Data corruption issue is seen when more than one plex of volume is detached.
* 3959451 (3913949) The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.
* 3959452 (3931678) Memory allocation and locking optimizations during the CVM
(Cluster Volume Manager) IO shipping.
* 3959453 (3932241) VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. These directories could be modified by non-root users and
will affect the Veritas Volume Manager Functioning.
* 3959455 (3932496) In an FSS environment, volume creation might fail on the 
SSD devices if vxconfigd was earlier restarted.
* 3959458 (3936535) Poor performance due to frequent cache drops.
* 3959460 (3942890) IO hang as DRL flush gets into infinite loop.
* 3959461 (3946350) kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when the VVR IO size is more than 256K.
* 3959462 (3947265) Delay added in vxvm-startup script to wait for infiniband devices to get 
discovered leads to various issues.
* 3959463 (3954787) Data corruption may occur in GCO along with FSS environment on RHEL 7.5 Operating system.
* 3959465 (3956732) systemd-udevd message can be seen in journalctl logs.
* 3959469 (3922529) VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. These directories could be modified by non-root users and
will affect the Veritas Volume Manager Functioning.
* 3959471 (3932356) vxconfigd dumping core while importing DG
* 3959473 (3945115) VxVM (Veritas Volume Manager) vxassist relayout command fails for volumes with 
RAID layout.
* 3959475 (3950384) In a scenario where volume encryption at rest is enabled, data corruption may
occur if the file system size exceeds 1TB.
* 3959476 (3950675) vxdg import appears to hang forever
* 3959477 (3953845) IO hang can be experienced when there is memory pressure situation because of "vxencryptd".
* 3959478 (3956027) System panicked while removing disks from disk group because of race condition between IO stats and disk removal code.
* 3959479 (3956727) In SOLARIS DDL discovery when SCSI ioctl fails, direct disk IO on device can lead to high memory consumption and vxconfigd hangs.
* 3959480 (3957227) Disk group import succeeded, but with error message. This may cause confusion.
* 3960383 (3958062) When boot lun is migrated, enabling and disabling dmp_native_support fails.
* 3961353 (3950199) System may panic while DMP(Dynamic Multipathing) path restoration.
* 3961355 (3952529) vxdmpadm settune dmp_native_support command fails with "vg is in use" error if vg is in mounted state.
* 3961356 (3953481) A stale entry of the old disk is left under /dev/[r]dsk even after replacing it.
* 3961358 (3955101) Panic observed in GCO environment (cluster to cluster replication) during replication.
* 3961359 (3955725) Utility to clear the "failio" flag on a disk after storage connectivity is restored.
* 3961468 (3926067) vxassist relayout/vxassist commands may fail in a Campus Cluster environment.
* 3961469 (3948140) System panic can occur if size of RTPG (Report Target Port Groups) data returned
by underlying array is greater than 255.
* 3961480 (3957549) Server panicked when tracing event because of NULL pointer check missing.
* 3964315 (3952042) vxdmp iostat memory allocation might cause memory fragmentation and pagecache drop.
* 3966132 (3960576) ODM cfgmgr rule for vxdisk scandisks is added twice in the database.
Patch ID: VRTSvxvm-7.3.1.100
* 3932464 (3926976) Frequent loss of VxVM functionality due to vxconfigd unable to validate license.
* 3933874 (3852146) Shared DiskGroup (DG) fails to import when "-c" and "-o noreonline" options are specified together
* 3933875 (3872585) System panics with storage key exception.
* 3933876 (3894657) VxVM commands may hang when using space optimized snapshot.
* 3933877 (3914789) System may panic when reclaiming on secondary in VVR environment.
* 3933878 (3918408) Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.
* 3933880 (3864063) Application I/O hangs because of a race between the Master Pause SIO (Staging
I/O) and the Error Handler SIO.
* 3933882 (3865721) Vxconfigd may hang while pausing the replication in CVR(cluster Veritas Volume 
Replicator) environment.
* 3933883 (3867236) Application IO hang happens because of a race between Master Pause SIO(Staging IO) 
and RVWRITE1 SIO.
* 3933884 (3868154) When DMP Native Support is set to ON, dmpnode with multiple VGs cannot be listed
properly in the 'vxdmpadm native ls' command
* 3933889 (3879234) dd read on the Veritas Volume Manager (VxVM) character device fails with 
Input/Output error while accessing end of device.
* 3933890 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3933897 (3907618) vxdisk resize leads to data corruption on filesystem
* 3933898 (3908987) False vxrelocd messages being generated by joining CVM slave.
* 3933900 (3915523) Local disk from other node belonging to private DG(diskgroup) is exported to the
node when a private DG is imported on current 
node.
* 3933904 (3921668) vxrecover command with -m option fails when executed on the slave
nodes.
* 3933907 (3873123) If the disk with CDS EFI label is used as remote
disk on the cluster node, restarting the vxconfigd
daemon on that particular node causes vxconfigd
to go into disabled state
* 3933910 (3910228) Registration of GAB(Global Atomic Broadcast) port u fails on slave nodes after 
multiple new devices are added to the system.
* 3933911 (3925377) Not all disks could be discovered by DMP after first startup.
* 3934775 (3907800) VxVM package installation will fail on SLES12 SP2.
* 3937540 (3906534) After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be
mounted on DMP device.
* 3937541 (3911930) Provide a way to clear the PGR_FLAG_NOTSUPPORTED on the device instead of using
exclude/include commands
* 3937542 (3917636) Filesystems from /etc/fstab file are not mounted automatically on boot 
through systemd on RHEL7 and SLES12.
* 3937549 (3934910) DRL map leaks during snapshot creation/removal cycle with dg reimport.
* 3937550 (3935232) Replication and IO hang during master takeover because of racing between log 
owner change and master switch.
* 3937808 (3931936) VxVM (Veritas Volume Manager) command hang on master node after restarting slave node.
* 3937811 (3935974) When client process shuts down abruptly or resets connection during 
communication with the vxrsyncd daemon, it may terminate
vxrsyncd daemon.
* 3939938 (3939796) Installation of VxVM package fails on SLES12 SP3.
* 3940039 (3897047) Filesystems are not mounted automatically on boot through systemd on RHEL7 and
SLES12.
* 3940143 (3941037) VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. These directories could be modified by non-root users and
will affect the Veritas Volume Manager Functioning.
Patch ID: VRTSaslapm-7.3.1.2900
* 4003022 (3991649) VRTSaslapm package(rpm) support for SLES 12 SP5
Patch ID: VRTSdbac-7.3.1.2300
* 4003013 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSdbac-7.3.1.2100
* 3979681 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSamf-7.3.1.3300
* 4003012 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSamf-7.3.1.3100
* 3979680 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSvxfen-7.3.1.3300
* 4003011 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSvxfen-7.3.1.3100
* 3979679 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSvxfen-7.3.1.100
* 3935528 (3931654) VxFEN module fails to come up after reboot
Patch ID: VRTSgab-7.3.1.2300
* 4003010 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSgab-7.3.1.2100
* 3979678 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSllt-7.3.1.4300
* 4003009 (3991274) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).
Patch ID: VRTSllt-7.3.1.4100
* 3979677 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSllt-7.3.1.300
* 3933242 (3948201) Kernel panics in case of FSS with LLT over RDMA during 
heavy data transfer.
Patch ID: VRTSllt-7.3.1.100
* 3927713 (3927712) LLT on SLES12SP3 shows soft lockup with RDMA configuration
Patch ID: VRTSveki-7.3.1.1200
* 4000509 (3991689) VEKI module failed to load on SLES12 SP5.
Patch ID: VRTSveki-7.3.1.1100
* 3979682 (3969613) Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).
Patch ID: VRTSgms-7.3.1.1400
* 4000513 (3991226) GMS module failed to load on SLES12SP5.
Patch ID: VRTSgms-7.3.1.1200
* 3978691 (3973946) GMS module failed to load on SLES12 SP4
Patch ID: VRTSglm-7.3.1.1400
* 4000512 (3991223) GLM module failed to load on SLES12SP5.
* 4001069 (3927489) GLM service failed to start during system startup
Patch ID: VRTSglm-7.3.1.1300
* 3978693 (3973945) GLM module failed to load on SLES12 SP4
Patch ID: VRTSodm-7.3.1.2700
* 3936184 (3897161) Oracle Database on Veritas filesystem with Veritas ODM
library has high log file sync wait time.
* 4000511 (3990489) ODM module failed to load on SLES12SP5.
Patch ID: VRTSodm-7.3.1.2500
* 3978694 (3973944) ODM module failed to load on SLES12 SP4
* 3981458 (3980810) Kernel panic - not syncing: Hard LOCKUP
Patch ID: VRTSodm-7.3.1.100
* 3939411 (3941018) VRTSodm driver will not load with 7.3.1.100 VRTSvxfs patch.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.3.1.2900

* 3933816 (Tracking ID: 3902600)

SYMPTOM:
Contention observed on vx_worklist_lk lock in cluster mounted file 
system with ODM

DESCRIPTION:
In a CFS environment, for ODM async I/O reads, iodones are completed immediately, calling into ODM itself from the interrupt handler. However, all CFS writes are processed in a delayed fashion: the requests are queued and processed later by a worker thread. This was adding delays to ODM writes.

RESOLUTION:
Optimized the IO processing of ODM work items on CFS so that those
are processed in the same context if possible.

* 3983303 (Tracking ID: 3983148)

SYMPTOM:
System crashed when attempting to install RPM from VxFS filesystem with the following stack:

#0 machine_kexec at ffffffff9d05dd02
#1 __crash_kexec at ffffffff9d12107a
#2 crash_kexec at ffffffff9d122069
#3 oops_end at ffffffff9d02e091
#4 no_context at ffffffff9d06dc7b
#5 __do_page_fault at ffffffff9d06e15c
#6 do_page_fault at ffffffff9d06e5bb
#7 page_fault at ffffffff9d801665
[exception RIP: vx_get_eatype+20]
#8 vx_linux_removexattr at ffffffffc0e67526 [vxfs]
#9 __vfs_removexattr at ffffffff9d273145
#10 vfs_removexattr at ffffffff9d273967
#11 removexattr at ffffffff9d273a1d
#12 path_removexattr at ffffffff9d273ac1
#13 sys_removexattr at ffffffff9d2742bf
#14 do_syscall_64 at ffffffff9d003934
#15 entry_SYSCALL_64_after_hwframe at ffffffff9d80009a

DESCRIPTION:
For Linux kernels above 4.12, the ea_type value is decided based on flags from the xattr_handler. The extended attribute name (prefix+suffix) provided by the newer OS kernel should be used.

RESOLUTION:
Code changes have been made to fix the issue.

* 3990046 (Tracking ID: 3985839)

SYMPTOM:
Cluster hang is observed during allocation of an extent larger than 32K blocks to a file.

DESCRIPTION:
When a request to allocate more than 32K blocks to a file arrives from a secondary node, VxFS sends the request to the primary. To serve it, the primary node starts allocating an extent (or AU) based on the last successful allocation unit number. VxFS delegates AUs to all nodes, including the primary, and releases these delegations after some time (10 seconds). There is a 3-way race between the delegation release thread, the allocator thread, and the extent removal thread. If the delegation release thread picks an AU to release and, in the interim, the allocator thread picks the same AU, the allocator allocates an extent from that AU and changes the AU state. If another thread then removes this extent, it races with the delegation release thread. This causes the delegation of that AU to be lost without the allocator engine recognizing it. A subsequent write on that AU hangs, which later causes a system hang.

RESOLUTION:
Code is modified to serialize these operations which will avoid the race.
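The resolution serializes the competing state transitions on an AU. A minimal userspace model of that idea, assuming a simple per-AU mutex and illustrative field names (this is not VxFS source):

```c
#include <pthread.h>

/* Minimal model of the fix: all three actors (delegation release,
 * allocator, extent removal) take the same per-AU lock before
 * changing AU state, so the transitions can no longer interleave.
 * Field names are illustrative placeholders. */
struct au {
    pthread_mutex_t lock;
    int delegated;       /* does this node hold the delegation?  */
    int allocated;       /* extents handed out from this AU      */
};

void au_release_delegation(struct au *a)
{
    pthread_mutex_lock(&a->lock);
    if (a->allocated == 0)       /* never revoke under an allocator */
        a->delegated = 0;
    pthread_mutex_unlock(&a->lock);
}

int au_alloc(struct au *a)
{
    int ok = 0;
    pthread_mutex_lock(&a->lock);
    if (a->delegated) {          /* allocation only while delegated */
        a->allocated++;
        ok = 1;
    }
    pthread_mutex_unlock(&a->lock);
    return ok;
}

void au_free(struct au *a)
{
    pthread_mutex_lock(&a->lock);
    if (a->allocated > 0)
        a->allocated--;
    pthread_mutex_unlock(&a->lock);
}
```

Because every actor observes AU state only under the lock, a delegation can no longer be released while an allocation from the same AU is in flight.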

* 3990334 (Tracking ID: 3969280)

SYMPTOM:
Buffer not getting invalidated or marked stale in transaction failure error code path for Large Directory Hash (LDH) feature

DESCRIPTION:
Buffers allocated in Large Directory Hash (LDH) feature code path are not invalidated or marked stale, if the transaction commit fails.

RESOLUTION:
Code changed to ensure correct invalidation of buffers in transaction undo routine for Large Directory Hash (LDH) feature code path.

* 3991434 (Tracking ID: 3990830)

SYMPTOM:
File system detected inconsistency with the link count table and the FSCK flag gets set on the file system, with the following messages in the syslog:

kernel: vxfs: msgcnt 259 mesg 036: V-2-36: vx_lctbad - /dev/vx/dsk/<dg>/<vol> file system link count table 0 bad
kernel: vxfs: msgcnt 473 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/<dg>/<vol> file system fullfsck flag set - vx_lctbad

DESCRIPTION:
Full FSCK flag is getting set because of inconsistency with Link count table. Inconsistency is caused because of race condition when files are being removed and created in parallel. This leads to incorrect LCT updates.

RESOLUTION:
Fixed the race condition between the file removal thread and creation thread.

* 3993348 (Tracking ID: 3993442)

SYMPTOM:
After NTP services are enabled, while unmounting the filesystem, removal of watchers on the root inode might get stuck with the below stack trace:

__schedule()
schedule()
schedule_timeout()
vx_delay()
vx_fsnotify_alloc_sb()
vx_fsnotify_flush()
vx_unmount_cleanup_notify()
vx_kill_sb()
deactivate_locked_super()
deactivate_super()
cleanup_mnt()
__cleanup_mnt()
task_work_run()
do_exit()
do_group_exit()
sys_exit_group()
system_call_fastpath()

DESCRIPTION:
While unmounting the FS, the inode watch is removed from the root inode. If this is not done in the root context, the operation to remove the watch on the root inode gets stuck.

RESOLUTION:
Code changes have been done to resolve this issue.

* 3993931 (Tracking ID: 3984750)

SYMPTOM:
"fsck" can leak memory in some error scenarios.

DESCRIPTION:
There are some cases in the "fsck" binary where it is not cleaning up memory in some error scenarios. Because of this, some pending buffers can be leaked.

RESOLUTION:
Code changes have been made in "fsck" to free that memory properly and prevent any potential memory leak.
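A fix like this typically follows the standard C error-path cleanup pattern, where every failure exit funnels through one label that frees whatever was allocated so far. A minimal sketch of that pattern with hypothetical buffer names (not the actual fsck internals):

```c
#include <stdlib.h>

/* Sketch of the cleanup pattern: both error exits free all buffers
 * allocated up to that point instead of leaking the earlier one. */
int build_buffers(size_t n, char **out_a, char **out_b)
{
    char *a = NULL, *b = NULL;

    a = malloc(n);
    if (a == NULL)
        goto fail;
    b = malloc(n);
    if (b == NULL)
        goto fail;          /* must free a, not just return */

    *out_a = a;
    *out_b = b;
    return 0;
fail:
    free(a);                /* free(NULL) is a safe no-op */
    free(b);
    return -1;
}
```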

* 3995202 (Tracking ID: 3990257)

SYMPTOM:
VxFS may face buffer overflow in case of doing I/O on File Change Log (FCL) file through Userspace Input Output (UIO) interface

DESCRIPTION:
In case of the Userspace Input Output (UIO) interface, VxFS is not able to handle larger I/O requests properly, resulting in buffer overflow.

RESOLUTION:
VxFS code is modified to limit the length of I/O requests that come through the Userspace Input Output (UIO) interface.
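A minimal sketch of the length-limiting pattern the resolution describes, assuming a hypothetical 64 KiB cap (the actual VxFS limit is not stated here):

```c
#include <stddef.h>

/* Illustrative: clamp a user-supplied I/O length to the size of the
 * kernel-side staging buffer, so an oversized UIO request can no
 * longer overflow it. The 64 KiB cap is an assumed placeholder. */
#define FCL_UIO_MAX ((size_t)(64 * 1024))

size_t clamp_uio_len(size_t requested)
{
    return requested > FCL_UIO_MAX ? FCL_UIO_MAX : requested;
}
```

The caller would then loop, issuing the clamped length per pass until the full request is served.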

* 3996402 (Tracking ID: 3984557)

SYMPTOM:
Fsck core dumped during sanity check of directory.

DESCRIPTION:
Fsck core dumped during sanity check of directory in case dentry is corrupted/invalid.

RESOLUTION:
Modified the code to validate the inode number before referencing it during sanity check.

* 3997075 (Tracking ID: 3983191)

SYMPTOM:
Fullfsck failing on FS due to invalid attribute length

DESCRIPTION:
Due to an invalid attribute length, full fsck fails on the file system and reports corruption.

RESOLUTION:
VxFS fsck code is modified to handle invalid attribute length.

* 3997077 (Tracking ID: 3995526)

SYMPTOM:
Fsck command of vxfs may coredump with following stack.
#0  __memmove_sse2_unaligned_erms ()
#1  check_nxattr ()
#2  check_attrbuf ()
#3  attr_walk ()
#4  check_attrs ()
#5  pass1d ()
#6  iproc_do_work()
#7  start_thread ()
#8  clone ()

DESCRIPTION:
The length passed to the bcopy operation was invalid.

RESOLUTION:
Code has been modified to allow the bcopy operation only if the length is valid. Otherwise, an EINVAL error is returned, which is handled by the caller.
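A minimal userspace sketch of the validated-copy pattern described in the resolution; the function name, the EINVAL convention, and memcpy standing in for bcopy are all illustrative:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Refuse the copy when the recorded attribute length is zero or
 * exceeds the destination buffer; the caller handles the error. */
int copy_attr_checked(void *dst, const void *src,
                      size_t attr_len, size_t buf_len)
{
    if (attr_len == 0 || attr_len > buf_len)
        return EINVAL;             /* invalid length: do not copy */
    memcpy(dst, src, attr_len);    /* safe: length validated above */
    return 0;
}
```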

* 3997861 (Tracking ID: 3978149)

SYMPTOM:
When a FIFO file is created on a VxFS filesystem, its timestamps are not updated when writes are done to it.

DESCRIPTION:
In the write context, the Linux kernel calls the update_time inode op in order to update the timestamp fields. This op was not implemented in VxFS.

RESOLUTION:
Implemented the update_time inode op in VxFS.
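The fixed behavior can be checked from userspace: after a write, the FIFO's st_mtime should advance. A small self-contained check (the path is illustrative; per the symptom above, an unpatched VxFS mount would return 0):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if writing to a FIFO at `path` advances its st_mtime,
 * 0 if it does not, and -1 on setup failure. */
int fifo_mtime_updates(const char *path)
{
    struct stat before, after;
    int fd;

    unlink(path);                 /* ignore failure: may not exist */
    if (mkfifo(path, 0600) != 0)
        return -1;
    fd = open(path, O_RDWR);      /* O_RDWR: no blocking without a reader */
    if (fd < 0)
        return -1;
    if (stat(path, &before) != 0)
        return -1;
    sleep(1);                     /* make a one-second mtime delta visible */
    if (write(fd, "x", 1) != 1)
        return -1;
    if (stat(path, &after) != 0)
        return -1;
    close(fd);
    unlink(path);
    return after.st_mtime > before.st_mtime;
}
```

POSIX requires a successful write() to mark the file's data-modification timestamp for update, which is what the implemented update_time op delivers.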

* 3998798 (Tracking ID: 3998162)

SYMPTOM:
Log replay fails for fsck

DESCRIPTION:
For an IFIAU file, it is possible that the extent allocation was done but the write to the header block failed. In that case the IESHORTEN extop is processed and the allocated extents are freed. Log replay does not consider this case and fails because the header does not have a valid magic. So, during log replay, if the iauino file has IESHORTEN set and the AU number equals the number of AUs in the fileset, the IAU header must have magic, fset, and aun either all 0 or all valid values; any other combination should return an error.

RESOLUTION:
Fixed the log replay code.
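The validity rule above can be restated as a small predicate, with placeholder field names and magic values (not the on-disk VxFS format):

```c
/* Illustrative restatement of the replay rule: after a failed IAU
 * allocation was undone by IESHORTEN processing, the trailing IAU
 * header must be either entirely zeroed or entirely valid. */
struct iau_hdr {
    unsigned magic;
    unsigned fset;
    unsigned aun;
};

int iau_hdr_replay_ok(const struct iau_hdr *h,
                      unsigned good_magic, unsigned good_fset,
                      unsigned good_aun)
{
    int all_zero  = h->magic == 0 && h->fset == 0 && h->aun == 0;
    int all_valid = h->magic == good_magic &&
                    h->fset  == good_fset &&
                    h->aun   == good_aun;
    return all_zero || all_valid;   /* anything else: replay error */
}
```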

* 3999406 (Tracking ID: 3998168)

SYMPTOM:
For multi-TB filesystems, vxresize operations result in a system freeze of 8-10 minutes, causing application hangs and VCS timeouts.

DESCRIPTION:
During resize, the primary node gets the delegation of all the allocation units. For larger filesystems, the total time taken by the delegation operation is quite large. Also, flushing the summary maps takes a considerable amount of time. This results in a filesystem freeze of around 8-10 minutes.

RESOLUTION:
Code changes have been done to reduce the total time taken by vxresize.

* 4000510 (Tracking ID: 3990485)

SYMPTOM:
VxFS module failed to load on SLES12SP5.

DESCRIPTION:
SLES12 SP5 is a new release, and it has some kernel changes that caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on SLES12SP5.

* 4002231 (Tracking ID: 3982869)

SYMPTOM:
Kernel panic seen in internal testing when SSD cache enabled

DESCRIPTION:
On kernels 4.12 and above, we observed a kernel panic in our internal testing when the SSD cache was enabled. The panic stack trace was as below.

__writeback_inodes_sb_nr
sync_filesystem
generic_shutdown_super
kill_block_super
deactivate_locked_super
vxc_free_vfs
vxc_ioctl_unregister
vxc_ioctl
do_vfs_ioctl
sys_ioctl
do_syscall_64

On kernels 4.12 and above, the bdi reference count drops twice, which causes this panic while the VxFS cache is taken offline.

RESOLUTION:
Added code to fix this issue.

Patch ID: VRTSvxfs-7.3.1.2600

* 3978690 (Tracking ID: 3973943)

SYMPTOM:
VxFS module failed to load on SLES12 SP4

DESCRIPTION:
SLES12 SP4 is a new release, and it has a 4.12 Linux kernel; therefore, the VxFS module failed to load on it.

RESOLUTION:
Enabled VxFS support for SLES12 SP4

* 3978916 (Tracking ID: 3975019)

SYMPTOM:
Under I/O load with NFS v4 using NFS leases, the system may panic with the below message.
Kernel panic - not syncing: GAB: Port h halting system due to client process failure

DESCRIPTION:
NFS v4 uses a lease per file. This delegation can be taken in RD or RW mode and can be released conditionally. For CFS, we release such a delegation from a specific node while the inode is being normalized (i.e., losing ownership). This can race with another setlease operation on the same node and can end up in a deadlock on ->i_lock.

RESOLUTION:
Code changes are made to disable lease.

* 3978917 (Tracking ID: 3978305)

SYMPTOM:
The vx_upgrade command causes VxFS to panic.

DESCRIPTION:
When the vx_upgrade command is executed, VxFS incorrectly accesses the freed memory, and then it panics if the memory is paged-out.

RESOLUTION:
The code is modified to make sure that VxFS does not access the freed memory locations.

* 3978918 (Tracking ID: 3975962)

SYMPTOM:
Mounting a VxFS file system with more than 64 PDTs may panic the server.

DESCRIPTION:
For large memory systems, the number of auto-tuned VMM buffers is huge. To accumulate these buffers, VxFS needs more PDTs. Currently up to 128 PDTs are supported. However, for more than 64 PDTs, VxFS fails to initialize the strategy routine and calls a wrong function in the mount code path causing the system to panic.

RESOLUTION:
VxFS has been updated to initialize the strategy routine for more than 64 PDTs.

* 3980401 (Tracking ID: 3980043)

SYMPTOM:
During a filesystem mount operation, after the Intent log replay, a file system metadata corruption occurred.

DESCRIPTION:
As part of the log replay during mount, fsck replays the transactions, rebuilds the secondary maps, and updates the EAU and the superblock summaries. Fsck flushes the EAU secondary map and the EAU summaries to the disk in a delayed manner, but the EAU state is flushed to the disk synchronously. As a result, if the log replay fails once before succeeding during the filesystem mount, the state of the metadata on the disk may become inconsistent.

RESOLUTION:
The fsck log replay is updated to synchronously write the secondary map and EAU summaries to the disk.

* 3980787 (Tracking ID: 3980754)

SYMPTOM:
In function vx_io_proxy_thread(), system may hit kernel panic due to general protection fault.

DESCRIPTION:
In function vx_io_proxy_thread(), a value is saved into memory through an uninitialized pointer. This may result in memory corruption.

RESOLUTION:
Function vx_io_proxy_thread() is changed to use the pointer after initializing it.

* 3981512 (Tracking ID: 3980808)

SYMPTOM:
Hang in the ODM write processes may be observed, with a subsequent crash with panic string as "Kernel panic - not syncing: Hard LOCKUP".

DESCRIPTION:
It is an ABBA deadlock: thread A has taken a spinlock and is waiting for a sleep lock, but the sleep lock owner B is interrupted,
and the interrupt service routine requires the spinlock already taken by A. A therefore spins on the CPU continuously, which leads to a crash due to hard lockup.

RESOLUTION:
Code has been modified to resolve this deadlock.

Patch ID: VRTSvxfs-7.3.1.100

* 3933810 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archiver processes are running on a clustered
file system.

DESCRIPTION:
The poor read performance in this case was due to fragmentation, which mainly happens when
multiple archivers are running on the same node. The allocation pattern of the Oracle
archiver processes is:

1. write the header with O_SYNC
2. ftruncate-up the file to its final size (a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all the allocations in this pattern go through internal
allocations, i.e. allocations below the file size instead of allocations past the file
size. Internal allocations are done at most 8 pages at a time, so when multiple processes
do this, they all get these 8 pages alternately and the fs becomes very fragmented.

RESOLUTION:
Added a tunable which allocates ZFOD extents when ftruncate
tries to increase the size of the file, instead of creating a hole. This
eliminates the allocations internal to the file size and thus the fragmentation. The
earlier implementation of the same fix, which ran into locking issues, is corrected.
Also fixed the performance issue while writing from the secondary node.

* 3933819 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system freeze during 
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its operation. Corruption may be
detected while the freeze is in progress and the full fsck flag may be set on the file
system; however, this does not stop vxupgrade from proceeding.
At a later stage of vxupgrade, after structures related to the new disk layout are
updated on the disk, VxFS frees up and zeroes out some of the old metadata inodes. If
any error occurs after this point (because the full fsck flag is set), the file system
needs to go back completely to the previous version as of the time the full fsck flag
was set. Since the metadata corresponding to the previous version is already cleared,
the full fsck cannot proceed and gives the error.

RESOLUTION:
The code is modified to check for the full fsck flag after freezing the file
system during vxupgrade. Also, the file system is disabled if an error occurs after
writing the new metadata to the disk. This forces the newly written metadata to
be loaded into memory on the next mount.

* 3933820 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the directory inode's
ownership is switched between the nodes. When ownership of the directory inode comes
back to the node which previously abdicated it, ACL permissions were not inherited
correctly for newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3933824 (Tracking ID: 3908785)

SYMPTOM:
System panic observed because of a null page address in the writeback structure in the
case of the kswapd process.

DESCRIPTION:
The secfs2/encryptfs layers used the write VOP as a hook when kswapd is triggered to
free a page. Ideally kswapd should call the writepage() routine, where the writeback
structures are correctly filled. When the write VOP is called because of the hook in
secfs2/encryptfs, the writeback structures are cleared, resulting in a null page
address.

RESOLUTION:
Code changes have been done to call the VxFS kswapd routine only if a valid page
address is present.

* 3933828 (Tracking ID: 3921152)

SYMPTOM:
Performance drop. Core dump shows threads doing vx_dalloc_flush().

DESCRIPTION:
An implicit typecast error in vx_dalloc_flush() can cause this performance issue.

RESOLUTION:
The code is modified to do an explicit typecast.

* 3933834 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze gets initiated
while there are multiple pending lazy isize update workitems in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is ON and "ls -l" is executed frequently from a
non-writing node of the cluster, a huge number of workitems accumulate for the worker
threads to process. If a workitem holding active level 1 is enqueued after these
workitems and a cluster-wide freeze gets initiated, a deadlock results: the worker
threads get exhausted processing the lazy isize update workitems, and the enqueued
workitem never gets a chance to be processed.

RESOLUTION:
Code changes have been done to handle this race condition.

* 3933843 (Tracking ID: 3926972)

SYMPTOM:
Once a node reboots or goes out of the cluster, the whole cluster can hang.

DESCRIPTION:
This is a three-way deadlock, in which a glock grant can block recovery while trying to
cache the grant against an inode. When it tries for the ilock, if the lock is held by an
hlock revoke that is waiting to get a GLM lock (in this case the cbuf lock), it cannot get
it because a recovery is in progress. The recovery cannot proceed because the glock grant
thread blocked it.

Hence the whole cluster hangs.

RESOLUTION:
The fix is to avoid taking the ilock in GLM context if it is not available.

* 3933844 (Tracking ID: 3922259)

SYMPTOM:
A forced umount may hang with a stack like this:
- vx_delay
- vx_idrop
- vx_quotaoff_umount2
- vx_detach_fset
- vx_force_umount
- vx_aioctl_common
- vx_aioctl
- vx_admin_ioctl
- vxportalunlockedkioctl
- vxportalunlockedioctl
- do_vfs_ioctl
- SyS_ioctl
- system_call_fastpath

DESCRIPTION:
An opened external quota file was preventing the force umount from continuing.

RESOLUTION:
Code has been changed so that an opened external quota file will be processed
properly during the force umount.

* 3933912 (Tracking ID: 3922986)

SYMPTOM:
System panic because the Linux NMI watchdog detected a lockup in CFS.

DESCRIPTION:
The VxFS buffer cache iodone routine interrupted the inode flush thread, which was
trying to acquire the CFS buffer hash lock while releasing a CFS buffer. The iodone
routine was in turn blocked by other threads on acquiring the free list lock. To
complete the cycle, those other threads were contending for the CFS buffer hash lock
with the inode flush thread. On Linux, the spinlock is a FIFO ticket lock, so because
the inode flush thread took its ticket on the spinlock earlier, the other threads
cannot acquire the lock. This caused a deadlock.

RESOLUTION:
Code changes are made to ensure the CFS buffer hash lock is acquired with interrupts
disabled.

* 3934841 (Tracking ID: 3930267)

SYMPTOM:
Deadlock between fsq flush threads and writer threads.

DESCRIPTION:
In Linux, under certain circumstances (i.e. to account for dirty pages), a writer thread takes a lock on the inode
and starts flushing dirty pages, which requires the page lock. If the fsq flush thread then starts flushing a transaction
on the same inode, it needs the inode lock held by that writer thread. The page lock is held by
another writer thread which is waiting for transaction space that can only be freed by the fsq flush thread. This
leads to a deadlock among these three threads.

RESOLUTION:
Code is modified to add a new flag which will skip dirty page accounting.

* 3936286 (Tracking ID: 3936285)

SYMPTOM:
The fscdsconv command may fail the conversion for disk layout version 12 and
above. After exporting the file system for use on the specified target, it fails to
mount on that target with the below error:

# /opt/VRTS/bin/mount <vol> <mount-point>
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version

When importing the file system on the target for use on the same system, it asks for
'fullfsck' during mount. After 'fullfsck', the file system mounts successfully, but
fsck gives the below
messages:

# /opt/VRTS/bin/fsck -y -o full /dev/vx/rdsk/mydg/myvol
log replay in progress
intent log does not contain valid log entries
pass0 - checking structural files
fileset 1 primary-ilist inode 34 (SuperBlock)
                failed validation clear? (ynq)y
pass1 - checking inode sanity and blocks
rebuild structural files? (ynq)y
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
corrupted CUT entries, clear? (ynq)y
au 0 emap incorrect - fix? (ynq)y
OK to clear log? (ynq)y
flush fileset headers? (ynq)y
set state to CLEAN? (ynq)y

DESCRIPTION:
While checking the filesystem version in fscdsconv, the check for DLV 12 and above
was missing, which triggered this issue.

RESOLUTION:
Code changes have been done to handle filesystem version 12 and above for
fscdsconv command.

* 3937536 (Tracking ID: 3940516)

SYMPTOM:
The file resize thread loops infinitely if a file is resized to a size greater
than 4TB.

DESCRIPTION:
Because of a vx_u32_t typecast in the vx_odm_resize() function, resize threads get stuck
in an infinite loop.

RESOLUTION:
Removed the vx_u32_t typecast in vx_odm_resize() to handle such scenarios.

* 3938258 (Tracking ID: 3938256)

SYMPTOM:
When checking the file size through seek_hole, an incorrect offset/size is returned when
delayed allocation is enabled on the file.

DESCRIPTION:
In recent versions of RHEL7 onwards, the grep command uses the seek_hole feature to check
the current file size and then reads data depending on that size. In VxFS, when dalloc is enabled, the
extent is allocated to the file later, but the file size is incremented as soon as the write completes. When
checking the file size through seek_hole, VxFS did not fully consider the dalloc case and
returned a stale size based on the extents allocated to the file instead of the actual file size, which
resulted in reading less data than expected.

RESOLUTION:
Code is modified so that VxFS now returns the correct size when dalloc is
enabled on a file and seek_hole is called on that file.

* 3939406 (Tracking ID: 3941034)

SYMPTOM:
During a forced umount, a VxFS worker thread may continuously spin on a CPU.

DESCRIPTION:
During a forced unmount, a VxFS worker thread needs a semaphore to drop the super block
reference, but that semaphore is held by the vxumount thread, which is itself waiting
for an event. This situation causes a softlockup panic on the system because the VxFS
worker thread continuously spins on a CPU trying to grab the semaphore.

RESOLUTION:
Code changes are done to fix this issue.

* 3940266 (Tracking ID: 3940235)

SYMPTOM:
A hang might be observed if the filesystem gets disabled while ENOSPC
handling is being done by inactive processing.
The stacktrace might look like:

 cv_wait+0x3c() ]
 delay_common+0x70()
 vx_extfree1+0xc08()
 vx_extfree+0x228()
 vx_te_trunc_data+0x125c()
 vx_te_trunc+0x878()
 vx_trunc_typed+0x230()
 vx_trunc_tran2+0x104c()
 vx_trunc_tran+0x22c()
 vx_trunc+0xcf0()
 vx_inactive_remove+0x4ec()
 vx_inactive_tran+0x13a4()
 vx_local_inactive_list+0x14()
 vx_inactive_list+0x6e4()
 vx_workitem_process+0x24()
 vx_worklist_process+0x1ec()
 vx_worklist_thread+0x144()
 thread_start+4()

DESCRIPTION:
In the smapchange function, it is possible in a race that the SMAP records the old
state as VX_EAU_FREE or VX_EAU_ALLOCATED but the corresponding EMAP is not updated.
This happens if the concerned flag gets reset to 0 by some other thread in between.
This leads to an fm_dirtycnt leak, which causes a hang some time afterwards.

RESOLUTION:
Code changes have been done to fix the issue by using a local variable instead of the
global dflag variable directly, which can get reset to 0.

* 3940368 (Tracking ID: 3940268)

SYMPTOM:
A file system with disk layout version 13 might get disabled when the size of a
directory surpasses the vx_dexh_sz value.

DESCRIPTION:
When the LDH (Large Directory Hash) directory and its buckets fill up, the size of the
hash directory is extended. For this, a reorg inode is created and the extent map of the
LDH attribute inode is copied into it using the extent map reorg function. That function
checks whether the extent reorg structure was passed for the same inode; if not, the
extent copying does not proceed. The extent reorg structure is set up accordingly, but
while setting up the fileset index, the inode's i_fsetindex is used. From disk layout
version 13 onwards, the attribute inode is overlaid, so i_fsetindex is no longer set in
the attribute inode and remains 0. Hence the check in the extent map reorg function
fails, resulting in the FS being disabled.

RESOLUTION:
Code has been modified to pass correct fileset.

* 3940652 (Tracking ID: 3940651)

SYMPTOM:
During a Disk Layout Version (DLV) upgrade, the vxupgrade command might hang.

DESCRIPTION:
vxupgrade does a lookup on histino to identify the mkfs version. In the case of CFS,
a lookup requires the RWLOCK or GLOCK on the inode.

RESOLUTION:
Code changes have been done to take RWLOCK and GLOCK on inode.

* 3940830 (Tracking ID: 3937042)

SYMPTOM:
Data corruption seen when issuing writev with a mixture of named-page and anonymous-page
buffers.

DESCRIPTION:
During writes, VxFS prefaults all of the user buffers into the kernel and decides the write length
based on this prefault length. In the case of mixed page buffers, VxFS issues the prefault separately for each
page type, i.e. for named pages and anonymous pages. This reduces the length to be written and triggers the
page-create optimization. Since VxFS erroneously enabled the page-create optimization, data corruption was seen on disk.

RESOLUTION:
Code is modified so that VxFS will not enable the page-create optimization when a short
prefault is seen.

Patch ID: VRTSpython-3.5.1.2

* 4003400 (Tracking ID: 4003401)

SYMPTOM:
VxVM fails to support KMS type of encryption when VRTSpython 3.5.1.1 is used

DESCRIPTION:
VRTSvxvm uses Python from the VRTSpython package and needs the pyKMIP Python module included in it. As VRTSpython 3.5.1.1 does not include the pyKMIP Python module, VxVM fails to support KMS-type encryption.

RESOLUTION:
VRTSpython 3.5.1.2 for InfoScale 7.3.1 is updated to include the pyKMIP Python module and all its dependencies.

Patch ID: VRTSvxvm-7.3.1.2900

* 3982860 (Tracking ID: 3981092)

SYMPTOM:
The vxdg command with the "-c" option (the ClearClone VCS DG agent option) re-minors the disk group, and this leads to stale NFS file handles on NFS clients in a VCS NFS failover configuration.

DESCRIPTION:
The customer has IBM SVC replication. In the case of IBM SVC (hardware) replication, the Replicated attribute is not set from the vendor (array) side, so the replicated attribute cannot be obtained from IBM SVC. Because of that, the DGI2_IMPORT_REPLICATED flag is not set and the code goes into dg_updateids(), which changes the minor number even if autoreminor is off and even though there is no minor conflict.

RESOLUTION:
Code changes have been done in dg_updateids() so that the DG is not re-minored with the vxdg -c option. The man page is also updated to state that with the updateid option, the base minor of the DG will not change.

* 3990153 (Tracking ID: 3990056)

SYMPTOM:
System may panic due to kernel heap corruption.

DESCRIPTION:
A memory leak was introduced by fix 3948140. In the case of ALUA storage, the RTPG buffer
gets allocated with one size and is freed with a different size, causing the memory leak.
This leads to kernel heap corruption and may panic the system.

RESOLUTION:
Code changes made to free the RTPG buffer with the same size that it was allocated with.

* 3992449 (Tracking ID: 3991913)

SYMPTOM:
Potential regression in RU and DCLID due to earlier check-in.

DESCRIPTION:
For the DG-level encryption feature, some changes to the dg_config struct were made. That change would have broken RU and DCLID interactions.
Those changes are reverted here. The feature is also blocked for this release, as the partial changes are reverted.

RESOLUTION:
Not applicable.

* 3994732 (Tracking ID: 3983759)

SYMPTOM:
While mirroring a volume on AIX, a system panic was observed with the below stack:

[00360098]specfs_intsdisabled+000018 ([??])
[00360094]specfs_intsdisabled+000014 (??, ??, ??)
[00609938]rdevioctl+000138 (??, ??, ??, ??, ??, ??)
[008062F4]spec_ioctl+000074 (??, ??, ??, ??, ??, ??)
[00693F7C]vnop_ioctl+00005C (??, ??, ??, ??, ??, ??)
[0069E76C]vno_ioctl+00016C (??, ??, ??, ??, ??)
[006E6310]common_ioctl+0000F0 (??, ??, ??, ??)
[00003938]mfspurr_sc_flih01+000174 ()
[kdb_get_virtual_memory] no real storage @ 200C1928
[D011A7B4]D011A7B4 ()
[kdb_read_mem] no real storage @ FFFFFFFFFFF6100

DESCRIPTION:
In CVM (Cluster Volume Manager), an ILOCK (intent lock) is required to complete a volume mirroring operation. As part of the mirroring operation, the slave requests an ilock from the master for the operation to complete. If the SIO (staged IO) completes as part of this ilock acquire operation from the slave, the associated ilock is freed. When the original thread then continues with the operation, the ilock it holds is stale since it was already freed. This leads to an interrupts-disabled state in which the earlier acquired spinlock is not released before exiting the thread.

RESOLUTION:
Code changes have been done to make decisions based on other variables instead of the ilock, and to free the spinlock before the thread exits.

* 3997112 (Tracking ID: 4000298)

SYMPTOM:
Not an issue. New development for customer.

DESCRIPTION:
For encrypted volumes with encryption type KMS, REKEY operation support is added to the branch.
DG-level encryption is also implemented. These changes are for a specific customer ask.
Support for this feature will be claimed in the next patch release (not in 2900).

RESOLUTION:
Not applicable.

* 4000615 (Tracking ID: 3942652)

SYMPTOM:
Layered volume creation with a DCO log consumes excessive time. Additionally, FMR tracking gets enabled for layered volumes even though no plex is detached.

DESCRIPTION:
While creating a layered volume with DCO log, the sub-volumes are enabled in multiple transactions. A transaction which is enabling a volume may start detach map tracking because plexes in other sub-volumes are not yet enabled. This causes extra plex resync during volume creation and detach map stays enabled after volume is created even though plex is not detached, which may cause extra plex resynchronisation.

RESOLUTION:
The code is fixed to combine multiple transactions for layered volume creation and optimize the FMR map tracking as well.

* 4000617 (Tracking ID: 3991019)

SYMPTOM:
Configured with "use_all_paths=yes", Veritas Dynamic Multi-Pathing (DMP) failed to balance IO when paths from an ALUA array were disabled/enabled.

DESCRIPTION:
DMP can balance IO on cross-paths for an ALUA array when the system is configured with "use_all_paths=yes". After a path failure, DMP failed to re-enable the "use_all_paths=yes" behavior, hence the issue.

RESOLUTION:
Code changes have been done to enable the "use_all_paths=yes" behavior after a path failure.

* 4000618 (Tracking ID: 3976392)

SYMPTOM:
Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.

DESCRIPTION:
During processing of a plex detach request, the VxVM volume is operated on in a serial manner. During serialization, the current thread may queue the I/O and still be accessing it. In the meantime, the same I/O may be picked up by one of the VxVM threads for processing; that processing completes and the I/O is then deleted. The current thread is still accessing memory which was already deleted, which might lead to memory corruption.

RESOLUTION:
The fix is to not use the I/O in the current thread once it has been queued as part of serialization, and to do the processing before queuing the I/O.

* 4000619 (Tracking ID: 3950335)

SYMPTOM:
Throttling of Administrative IO for layered volumes was not working properly.

DESCRIPTION:
Veritas Volume Manager (VxVM) provides support for throttling of Administrative IO for operations which use ATOMIC_COPY. During heavy IO load, VxVM throttles the IO it creates to perform administrative operations in order to minimize the impact on the application IO performance. This feature of throttling IO was not working for layered volumes and was not supported and hence the application performance was getting impacted when plex attach operation was executed for layered volumes.

RESOLUTION:
Code changes have been made to support throttling of Administrative IO operations for layered volumes also.

* 4000620 (Tracking ID: 3991580)

SYMPTOM:
IO and VxVM command hang may happen if IO performed on both source and snapshot volumes.

DESCRIPTION:
It's a deadlock situation occurring with heavy IOs on both source volume and snapshot volume. 
SIO (a), USER_WRITE, on snap volume, held ILOCK (a), waiting for memory(full).
SIO (b),  PUSHED_WRITE, on snap volume, waiting for ILOCK (a).
SIO (c),  parent of SIO (b), USER_WRITE, on the source volume, held ILOCK (c) and memory, waiting for SIO (b) done.

RESOLUTION:
Use a separate pool for IO writes on the snapshot volume to resolve the issue.

* 4000621 (Tracking ID: 3992053)

SYMPTOM:
Data corruption may happen with layered volumes when some data is not re-synced while attaching a plex. This is due to
inconsistent data across the plexes after attaching a plex in layered volumes.

DESCRIPTION:
When a plex is detached in a layered volume, the regions which are dirty/modified are tracked in DCO (Data change object) map.
When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached.
There was a defect in the code due to which some particular regions were NOT re-synced when a plex was attached.
This issue happens only when the offset of the sub-volume is NOT aligned with the region size of the DCO (Data change object) volume.

RESOLUTION:
The code defect is fixed to correctly copy the data for dirty regions when the sub-volume offset is NOT aligned with the DCO region size.

* 4000624 (Tracking ID: 3995474)

SYMPTOM:
The following IO errors reported on VxVM sub-disks result in the DRL log being detached on SLES12 SP3, without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
Following the Linux kernel changes since 4.4.68 (SLES12 SP3), VxVM stores the bio flags, including B_ERROR, in the bi_flags_ext field instead of bi_flags, as bi_flags was reduced from long to short. As bi_flags_ext is a hack that modifies the bio struct, it may cause unknown problems; checking bi_error instead avoids relying on it.

RESOLUTION:
Code changes have been made in the VxIO Disk IO done routine to fix the issue.

* 4000625 (Tracking ID: 3987937)

SYMPTOM:
A VxVM command hang happens when a heavy IO load is performed on a VxVM volume with a snapshot; an IO memory pool full condition is also observed.

DESCRIPTION:
It's a deadlock situation occurring with heavy IOs on volume with snapshots. When a multistep SIO A acquired ilock and it's child MV write SIO is waiting for memory pool which is full, another multistep SIO B has acquired memory and waiting for the ilock held by multistep SIO A.

RESOLUTION:
Code changes have been made to fix the issue.

* 4000629 (Tracking ID: 3975667)

SYMPTOM:
NMI watchdog: BUG: soft lockup

DESCRIPTION:
When flow control on the ioshipping channel is set, there is a window in the code where the vol_ioship_sender thread can go into a tight loop.
This causes the soft lockup.

RESOLUTION:
The CPU is relinquished so that other processes can be scheduled. The vol_ioship_sender() thread restarts after a short delay.

* 4000761 (Tracking ID: 3969487)

SYMPTOM:
Data corruption observed with layered volumes after resynchronisation when mirror of the volume is detached and attached back.

DESCRIPTION:
In the case of a layered volume, if the IO fails at the underlying sub-volume layer, then before the mirror detach, the top volume of the layered volume has to be serialized (IOs run in serial fashion). When the volume is serialized, IOs on the volume are tracked directly in the detach map of the DCO (Data Change Object). During this time, if new IOs occur on the volume, they are not tracked in the detach map, since detach map tracking has not yet been enabled by the failed IOs. These untracked IOs are missed when plex resynchronisation happens later, which leads to corruption.

RESOLUTION:
The fix is to delay the unserialization of the volume until the failed IOs actually detach the plex and enable detach map tracking. This ensures that new IOs are tracked in the detach map of the DCO.

* 4001051 (Tracking ID: 3982862)

SYMPTOM:
The existing package failed to load on a SUSE 12 SP5 server.

DESCRIPTION:
SUSE 12 SP5 is a new release, hence the VxVM module is compiled against this new kernel along with a few other MQ changes.

RESOLUTION:
Changes have been done to keep the MQ code under a single flag.

* 4001880 (Tracking ID: 3907596)

SYMPTOM:
vxdmpadm setattr command gives the below error while setting the path attribute:
"VxVM vxdmpadm ERROR V-5-1-14526 Failed to save path information persistently"

DESCRIPTION:
Device names on linux change once the system is rebooted. Thus the persistent attributes of the device are stored using persistent 
hardware path. The hardware paths are stored as symbolic links in the directory /dev/vx/.dmp. The hardware paths are obtained from 
/dev/disk/by-path using the path_id command. In SLES12, the command to extract the hardware path changes to path_id_compat. Since 
the command changed, the script was failing to generate the hardware paths in /dev/vx/.dmp directory leading to the persistent 
attributes not being set.

RESOLUTION:
Code changes have been made to use the command path_id_compat to get the hardware path from /dev/disk/by-path directory.

* 4001901 (Tracking ID: 3930312)

SYMPTOM:
Short CPU spike while running vxpath_links

DESCRIPTION:
The UDEV event triggers the "vxpath_links" which uses "ls -l" to find a specific SCSI target. 
The CPU time consumed by "ls -l" would be high depending on the number of paths. Hence the issue.

RESOLUTION:
The code is modified to reduce the time consumed in "vxpath_links".

Patch ID: VRTSvxvm-7.3.1.2600

* 3957433 (Tracking ID: 3941844)

SYMPTOM:
The VVR secondary node may hang while deleting VVR replication ports, with the below stack:

#4 [ffff883f1f297cd8] vol_rp_delete_port at ffffffffc170bf59 [vxio]
#5 [ffff883f1f297ce8] vol_rv_replica_reconfigure at ffffffffc1768ab3 [vxio]
#6 [ffff883f1f297d98] vol_rv_error_handle at ffffffffc1775394 [vxio]
#7 [ffff883f1f297dd0] vol_rv_errorhandler_callback at ffffffffc1775418 [vxio]
#8 [ffff883f1f297df0] vol_klog_start at ffffffffc16434cd [vxio]
#9 [ffff883f1f297e48] voliod_iohandle at ffffffffc15a1f0a [vxio]
#10 [ffff883f1f297e80] voliod_loop at ffffffffc15a2110 [vxio]

DESCRIPTION:
During VVR connection, on the VVR secondary side, a flag pt_connecting is set until the VVR port connection is fully done. In some cases, the port connection server thread may go into the abort process without clearing the flag. The port deleting thread can then run into a dead loop, since the flag stays set, causing the system hang.

RESOLUTION:
The code is modified to clear the pt_connecting flag before calling the abort process.

* 3964360 (Tracking ID: 3964359)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB ID of the remaining DA and DM records
should be incremented. Unfortunately, for some reason, only the SSB ID of
the DA record is incremented; the SSB ID of the DM record is NOT updated.
One probable reason may be that the disks get detached before the DM
records are updated.

RESOLUTION:
The code changes are done in the DG import process to identify the false split brain
condition and correct the disk SSB IDs during the import. With this fix, the import
shall not fail due to a false split brain condition.

Additionally, the -o overridessb option is improved to correct the disk SSB IDs
during import.

Ideally, with this fix the disk group import shall not fail due to false split brain
conditions. But if the disk group import still fails with a false split brain condition,
the user can try the -o overridessb option. For using '-o overridessb', one should
confirm that all the DA records of the DG are available in the ENABLED state and differ
from the DM records in SSB ID by 1.

* 3966132 (Tracking ID: 3960576)

SYMPTOM:
With installation of the said VxVM patch 7.3.1.100, one more rule got added incorrectly.

DESCRIPTION:
The rules are added directly through the file vxdmp.PdDv. The install/upgrade scripts take care of adding these rules using the odmadd command. So ideally all the rules should be added twice like the current rule, but in the install/upgrade scripts the other rules are removed before they are added
using the odmadd command.

RESOLUTION:
A similar remove entry is added for the "vxdisk scandisks" rule in the install/upgrade script, so the error for the duplicate
entry goes away.

* 3967893 (Tracking ID: 3966872)

SYMPTOM:
Deporting and renaming a cloned DG renames both the source and the cloned DG.

DESCRIPTION:
On a DR site, the source and cloned DGs can co-exist, where the source DG is deported while
the cloned DG is in the imported state. When the user attempts to deport-rename the cloned DG,
the names of both the source and the cloned DG are changed.

RESOLUTION:
The deport code is fixed to take care of the situation.

* 3967895 (Tracking ID: 3877431)

SYMPTOM:
System panic after filesystem expansion with below stack:
#volkio_to_kio_copy at [vxio]
#volsio_nmstabilize at [vxio]
#vol_rvsio_preprocess at [vxio]
#vol_rv_write1_start at [vxio]
#voliod_iohandle at [vxio]
#voliod_loop at [vxio]

DESCRIPTION:
Veritas Volume Replicator (VVR) generates different IOs at different stages. In some cases, a parent IO does not wait until its child IO is freed. Due to a bug, a child IO accessed a freed parent IO's memory, which caused the system panic.

RESOLUTION:
Code changes have been made to avoid modifying freed memory.

* 3967898 (Tracking ID: 3930914)

SYMPTOM:
Master node panicked with following stack:
[exception RIP: vol_kmsg_respond_common+111]
#9 [ffff880898513d18] vol_kmsg_respond at ffffffffa08fd8af [vxio]
#10 [ffff880898513d30] vol_rv_wrship_srv_done at ffffffffa0b9a955 [vxio]
#11 [ffff880898513d98] volkcontext_process at ffffffffa09e7e5c [vxio]
#12 [ffff880898513de0] vol_rv_write2_start at ffffffffa0ba3489 [vxio]
#13 [ffff880898513e50] voliod_iohandle at ffffffffa09e743a [vxio]
#14 [ffff880898513e88] voliod_loop at ffffffffa09e7640 [vxio]
#15 [ffff880898513ec8] kthread at ffffffff810a5b8f
#16 [ffff880898513f50] ret_from_fork at ffffffff81646a98

DESCRIPTION:
While sending the response for a write-shipping request to the slave node, a stale pointer to the message handler may be used, in which the message block is NULL. Dereferencing the message block causes the panic.

RESOLUTION:
Code changes have been made to fix the issue.

* 3969591 (Tracking ID: 3964337)

SYMPTOM:
After running vxdisk scandisks, the partition size gets set to the default value of 512.

DESCRIPTION:
During device discovery, VxVM (Veritas Volume Manager) compares the original partition size with the newly reported partition size. While reading the partition size from kernel memory, the buffer in userland memory is not initialized and contains a garbage value. Because of this, a spurious difference between the old and new partition sizes is detected, which leads to the partition size being set to the default value.

RESOLUTION:
Code changes have been done to properly initialize the userland buffer that is used to read data from the kernel.
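
The effect of the uninitialized buffer can be illustrated with a small sketch (generic Python, not the actual VxVM code; all names here are hypothetical): when the kernel returns fewer bytes than the buffer holds, a reused, unzeroed buffer keeps stale garbage in its tail, so a byte-wise comparison reports a spurious change even though the reported size is identical.

```python
def snapshot(kernel_bytes, buf):
    # Copy only what the kernel returned; the tail of buf keeps its old contents.
    buf[:len(kernel_bytes)] = kernel_bytes
    return bytes(buf)

kernel = b"\x00\x02"                     # the size actually reported by the kernel
stale = bytearray(b"\x00\x02\xde\xad")   # bug: reused buffer with leftover garbage
fresh = bytearray(4)                     # fix: zero-initialized buffer

# Identical kernel data looks "different" because of the garbage tail.
assert snapshot(kernel, stale) != snapshot(kernel, fresh)
```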

* 3969997 (Tracking ID: 3964359)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB IDs of the remaining DA and DM records
should both be incremented. However, only the SSB ID of the DA
record is incremented; the SSB ID of the DM record is NOT updated.
One probable reason is that the disks get detached before the DM
records are updated.

RESOLUTION:
The DG import code is modified to identify a false split-brain condition and correct the
disk SSB IDs during the import. With this fix, the import does not fail due to a false split-brain condition.

Additionally, the '-o overridessb' option is improved to correct the disk SSB IDs during import.

With this fix, the disk group import should no longer fail due to false split-brain conditions.
If the import still fails with a false split-brain condition, the user can try the '-o overridessb' option.
Before using '-o overridessb', confirm that all the DA records of the DG are available in the ENABLED state
and that they differ from the DM records in SSB ID by 1.

* 3970119 (Tracking ID: 3943952)

SYMPTOM:
Rolling upgrade from Infoscale 7.3.1.100 and above to Infoscale 7.4 and above 
in Flexible Storage Sharing (FSS) environment may lead to system panic.

DESCRIPTION:
As part of the code changes for the option (islocal=yes/no) that was added to the command "vxddladm addjbod"
in IS 7.3.1.100, the size of the UDID of the DMP nodes was increased. In case of Flexible Storage Sharing,
when performing a rolling upgrade from the patches 7.3.1.100 and above to any InfoScale 7.4 and above release,
a mismatch of this UDID between the nodes may cause the systems to panic when IO is shipped from one node to the other.

RESOLUTION:
Code changes have been made to handle the mismatch of the UDID, and rolling upgrade to IS 7.4 and above now works correctly.

* 3973086 (Tracking ID: 3956134)

SYMPTOM:
System panic might occur when IO is in progress in VVR (veritas volume replicator) environment with below stack:

page_fault()
voliomem_grab_special()
volrv_seclog_wsio_start()
voliod_iohandle()
voliod_loop()
kthread()
ret_from_fork()

DESCRIPTION:
In a memory crunch scenario, the memory reservation for an SIO (staged IO) in a VVR configuration might fail. In that case the SIO is retried later, when memory becomes available again; however, while retrying, some of the fields of the SIO are passed NULL values, which leads to a panic in the VVR code.

RESOLUTION:
Code changes have been done to pass proper values to the IO when it is retried in the VVR environment.

* 3974348 (Tracking ID: 3968279)

SYMPTOM:
Vxconfigd dumps core with SEGFAULT/SIGABRT on boot for NVME setup.

DESCRIPTION:
For an NVMe setup, vxconfigd dumps core while doing device discovery because a data structure is accessed by multiple threads, which can hit a race condition. For sector sizes other than 512, a partition size mismatch is seen because the comparison is done with the partition size from devintf_getpart(), which is in units of the disk's sector size. This can lead to NVMe device discovery being called again.

RESOLUTION:
A mutex lock is added while accessing the data structure to prevent the core dump. Calculations are now made in terms of the disk's sector size to prevent the partition size mismatch.
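
The locking half of the fix follows the standard mutual-exclusion pattern; a minimal Python sketch (hypothetical names, not the actual vxconfigd code) of serializing access to a shared discovery structure:

```python
import threading

device_tree = {}                 # shared structure touched by discovery threads
tree_lock = threading.Lock()     # the fix: serialize all access with a mutex

def discover(name):
    # Without the lock, the read-modify-write below can race between threads.
    with tree_lock:
        device_tree[name] = device_tree.get(name, 0) + 1

threads = [threading.Thread(target=discover, args=("nvme0",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert device_tree["nvme0"] == 50
```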

* 3974355 (Tracking ID: 3931048)

SYMPTOM:
A few VxVM log files, listed below, are created with write permission for all users,
which might lead to security issues.

/etc/vx/log/vxloggerd.log
/var/adm/vx/logger.txt
/var/adm/vx/kmsg.log

DESCRIPTION:
The log files are created with write permissions for all users, which is a
security hole.
The files are created with the default rw-rw-rw- (666) permission because the umask
is set to 0 while creating these files.

RESOLUTION:
The umask is changed to 022 while creating these files, and an incorrect open
system call is fixed. Log files now have rw-r--r-- (644) permissions.
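
The interaction between the umask and the file creation mode can be reproduced with a short illustrative sketch (plain Python, not the VxVM logging code): with umask 0, a file opened with mode 0666 comes out world-writable, while umask 022 masks the group and other write bits down to 644.

```python
import os
import stat
import tempfile

def create_with_umask(mask):
    # Create a file with requested mode 0666 under the given umask and
    # return the permission bits the file actually received.
    old = os.umask(mask)
    try:
        path = os.path.join(tempfile.mkdtemp(), "logger.txt")
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)

assert create_with_umask(0o000) == 0o666   # rw-rw-rw-: the security hole
assert create_with_umask(0o022) == 0o644   # rw-r--r--: the fixed permissions
```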

* 3974652 (Tracking ID: 3899568)

SYMPTOM:
By design, "vxdmpadm iostat stop" cannot stop the iostat gathering
persistently. To avoid performance and memory crunch related issues, it is
generally recommended to stop the iostat gathering, so there is a requirement
for the ability to stop/start the iostat gathering persistently.

DESCRIPTION:
Today the DMP iostat daemon is stopped using "vxdmpadm iostat stop", but this
setting is not persistent. After a reboot the setting is lost, so the customer
also has to add the command at an appropriate place in the init scripts for a
persistent effect.

RESOLUTION:
Code is modified to provide a tunable "dmp_compute_iostats" which can
start/stop the iostat gathering persistently.

Notes:
Use the following command to start/stop the iostat gathering persistently:
# vxdmpadm settune dmp_compute_iostats=[on|off]

* 3974669 (Tracking ID: 3966378)

SYMPTOM:
Volume creation fails on SLES11 SP4.

DESCRIPTION:
The elevator definition is needed to support SLES11 SP4.

RESOLUTION:
Code is modified to support SLES11 SP4.

* 3979133 (Tracking ID: 3959618)

SYMPTOM:
Compilation/functionality was not working with the 4.12 kernel.

DESCRIPTION:
There are many changes related to the BIO and SCSI structures, because of which the compilation/functionality was not working with the 4.12 kernel.

RESOLUTION:
The patch contains all the changes required to make it functional with the 4.12 kernel.

* 3979144 (Tracking ID: 3941784)

SYMPTOM:
System may panic with below stack, while using the vxfenadm or during the SCSI-3 PGR operations for IO fencing.
page_fault()
dmp_handle_pgr()
resv_passthru_ioctl()
dmp_passthru_ioctl()
gendmpioctl()
dmpioctl()
dmp_ioctl()
blkdev_ioctl()
block_ioctl()
do_vfs_ioctl()

DESCRIPTION:
While handling the SCSI-3 PGR requests, the code was incorrectly accessing memory for some internal structures. This was causing a system panic when 'vxfenadm' was used.

RESOLUTION:
The code is corrected to properly access the internal structures.

* 3979145 (Tracking ID: 3976985)

SYMPTOM:
A node on the secondary site which is part of a Veritas Volume Replicator (VVR) cluster may panic.
The stack trace can be one of the below two:

Kernel panic - not syncing: VxVM vxio V-5-3-2112 readback_cleanup_verification: bad upd
Call Trace:
dump_stack+0x19/0x1b
panic+0xe8/0x21f
volrv_seclog_wsio_childdone+0x4b6/0x4c0 [vxio]
volsiodone+0x23c/0x7e0 [vxio]
vol_subdisksio_done+0x17c/0x320 [vxio]
volkcontext_process+0x92/0x240 [vxio]
voldiskiodone+0x22d/0x430 [vxio]
voldiskiodone_intr+0x15b/0x1b0 [vxio]
bio_endio+0x67/0xb0
volsp_iodone_common+0x76/0x160 [vxio]
volsp_iodone+0x8d/0x1d0 [vxio]
bio_endio+0x67/0xb0

Call Trace:
volrv_seclog_add_update+0x99/0xc0 [vxio]
volrv_seclog_wsio_done+0x8f/0x300 [vxio]
volsio_mem_free+0x15/0x20 [vxio]
voliod_iohandle+0x7a/0x250 [vxio]
voliod_loop+0xe0/0x360 [vxio]
voliod_iohandle+0x250/0x250 [vxio]
kthread+0xd1/0xe0
insert_kthread_work+0x40/0x40
ret_from_fork_nospec_begin+0x21/0x21
insert_kthread_work+0x40/0x40

DESCRIPTION:
In case of memory pressure on the secondary node, data is read back from a particular offset on the SRL volume and then written to the data volume. Since the acknowledgement for this data has already been sent to the primary node, the primary is free to send the next data destined for this particular offset. In this case the next incoming data overwrites the in-progress data, leading to corruption on the SRL, which can cause the secondary panic.

RESOLUTION:
The fix prevents the SRL data from getting overwritten by disconnecting the replica, thus allowing the readback process to complete on the secondary.

Patch ID: VRTSvxvm-7.3.1.2100

* 3933888 (Tracking ID: 3868533)

SYMPTOM:
IO hang happens when starting replication. The vxio daemon hangs with a stack like the
following:

vx_cfs_getemap at ffffffffa035e159 [vxfs]
vx_get_freeexts_ioctl at ffffffffa0361972 [vxfs]
vxportalunlockedkioctl at ffffffffa06ed5ab [vxportal]
vxportalkioctl at ffffffffa06ed66d [vxportal]
vol_ru_start at ffffffffa0b72366 [vxio]
voliod_iohandle at ffffffffa09f0d8d [vxio]
voliod_loop at ffffffffa09f0fe9 [vxio]

DESCRIPTION:
While performing DCM replay with the SmartMove feature enabled, the VxIO
kernel needs to issue an IOCTL to the VxFS kernel to get the file system free region.
The VxFS kernel needs to clone the map by issuing IO to the VxIO kernel to complete this
IOCTL. Just at that time an RLINK disconnection happened, so the RV is serialized to
complete the disconnection. As the RV is serialized, all IOs, including the
clone map IO from VxFS, are queued to rv_restartq, hence the deadlock.

RESOLUTION:
Code changes have been made to handle the deadlock situation.

* 3937544 (Tracking ID: 3929246)

SYMPTOM:
While installing VxVM package in the chroot environment, the rpm installation 
fails with following error:

WARNING: Veki driver is not loaded.
error: %pre(VRTSvxvm-7.3.0.000-RHEL7.x86_64) scriptlet failed, exit status 1
error: VRTSvxvm-7.3.0.000-RHEL7.x86_64: install failed

DESCRIPTION:
When VxVM is installed in a chroot environment, the veki module cannot be loaded.
However, since the other VxVM drivers depend on veki, the VxVM installation gets
aborted due to the failure in loading the veki module.

RESOLUTION:
The installation scripts are fixed to allow the installation of VxVM even if the
veki module cannot be loaded, but only if it is a chroot environment.

* 3953920 (Tracking ID: 3949954)

SYMPTOM:
Dumpstack messages are printed when the vxio module is loaded for the first time and blk_register_queue is called.

DESCRIPTION:
In RHEL 7.5, a new check was added in the kernel code of blk_register_queue: if QUEUE_FLAG_REGISTERED is already
set on the queue, a dumpstack warning message is printed. In VxVM the flag was already set because it was copied from the device queue,
which was earlier registered by the OS.

RESOLUTION:
Changes are done in the VxVM code to avoid copying QUEUE_FLAG_REGISTERED and to fix the dumpstack warnings.

* 3958860 (Tracking ID: 3953681)

SYMPTOM:
Data corruption issue is seen when more than one plex of volume is detached.

DESCRIPTION:
When a plex of a volume gets detached, the DETACH map gets enabled in the DCO (Data Change Object). Incoming IOs are tracked in the DRL (Dirty Region Log) and then asynchronously copied to the DETACH map for tracking.
If one more plex gets detached, some of the new incoming regions may be missed in the DETACH map of the previously detached plex.
This leads to corruption when the disk comes back and the plex resync happens using the corrupted DETACH map.

RESOLUTION:
Code changes are done to correctly track the IOs in the DETACH map of the previously detached plex and avoid corruption.
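
The bookkeeping can be sketched generically (illustrative Python, with sets standing in for region bitmaps; not the actual DCO/DRL structures): when the DRL is flushed, the tracked regions must be copied into the DETACH map of every detached plex, not only the most recently detached one.

```python
def record_write(drl, region):
    drl.add(region)          # the DRL tracks the dirtied region first

def flush_drl(drl, detach_maps):
    # The fix: copy tracked regions into the DETACH map of EVERY detached plex.
    for m in detach_maps:
        m.update(drl)
    drl.clear()

drl = set()
map_plex1 = set()            # plex 1 is already detached
record_write(drl, 7)
map_plex2 = set()            # plex 2 detaches now
record_write(drl, 9)
flush_drl(drl, [map_plex1, map_plex2])
assert map_plex1 == {7, 9}   # no region is missed for the earlier detached plex
assert map_plex2 == {7, 9}
```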

* 3959451 (Tracking ID: 3913949)

SYMPTOM:
The DG import is failing with Split Brain after the system is rebooted or when a storage 
disturbance is seen.

The DG import may fail due to split brain with following messages in syslog:
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF000043042DD4A5
V-5-1-9576 Split Brain. da id is 0.1, while dm id is 0.0 for dm
B000F8BF40FF00004003FE9356

DESCRIPTION:
When a disk is detached, the SSB IDs of the remaining DA and DM records
should both be incremented. However, only the SSB ID of the DA
record is incremented; the SSB ID of the DM record is NOT updated.
One probable reason is that the disks get detached before the DM
records are updated.

RESOLUTION:
A work-around option is provided to bypass the SSB checks while importing the DG; the user
can import the DG with the 'vxdg -o overridessb import <dgname>' command if a false split brain
happens.
Before using '-o overridessb', confirm that all DA records
of the DG are available in the ENABLED state and that they differ from the DM records
in SSB ID by 1.

* 3959452 (Tracking ID: 3931678)

SYMPTOM:
There was a performance issue while shipping IO to the remote disks due to
non-cached memory allocation and redundant locking.

DESCRIPTION:
There was redundant locking while checking if the flow control is
enabled by GAB during IO shipping. The redundant locking is optimized.
Additionally, the memory allocation during IO shipping is optimized.

RESOLUTION:
Changes are done in VxVM code to optimize the memory allocation and reduce the
redundant locking to improve the performance.

* 3959453 (Tracking ID: 3932241)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some required files under /tmp and
/var/tmp directories. 
Non-root users have access to these folders, and they may accidentally modify,
move or delete those files.
Such actions may interfere with the normal functioning of Veritas Volume
Manager.

RESOLUTION:
This hotfix addresses the issue by moving the required Veritas Volume Manager
files to a secure location.

* 3959455 (Tracking ID: 3932496)

SYMPTOM:
In an FSS environment, volume creation might fail on SSD devices if 
vxconfigd was earlier restarted on the master node.

DESCRIPTION:
In an FSS environment, if a shared disk group is created using
SSD devices and vxconfigd is restarted, then the volume creation might fail.
The problem occurs because the mediatype attribute of the disk was NOT propagated
from the kernel to vxconfigd while restarting the vxconfigd daemon.

RESOLUTION:
Changes are done in VxVM code to correctly propagate mediatype for remote 
devices during vxconfigd startup.

* 3959458 (Tracking ID: 3936535)

SYMPTOM:
The IO performance is poor due to frequent cache drops on a system with
snapshots configured.

DESCRIPTION:
On a system with VxVM snapshots configured, DCO map updates happen along with the
ongoing IO and can allocate lots of chunks of page memory, which
triggers kswapd to swap the cache memory out, so cache drops are seen.

RESOLUTION:
Code changes are done to allocate large chunks of memory for the DCO map update
without triggering memory swap-out.

* 3959460 (Tracking ID: 3942890)

SYMPTOM:
In case a Data Change Object (DCO) is configured, an IO hang may happen under
heavy IO load combined with a slow Storage Replicator Log (SRL) flush.

DESCRIPTION:
Application IO needs to wait for the DRL flush to complete before proceeding.
Due to a defect in the DRL code, the DRL flush could not proceed when there was
a large amount of IO that exceeded the available DRL chunks, hence the IO hang.

RESOLUTION:
Code changes have been made to fix the issue.

* 3959461 (Tracking ID: 3946350)

SYMPTOM:
kmalloc-1024 and kmalloc-2048 memory consumption keeps increasing when the Veritas
Volume Replicator (VVR) IO size is more than 256K.

DESCRIPTION:
In case of VVR, if the I/O size is more than 256K, then the IO is broken into
child IOs. Due to a code defect, the allocated space was not freed when the
split IOs completed.

RESOLUTION:
The code is modified to free the VxVM-allocated memory after the split IOs complete.

* 3959462 (Tracking ID: 3947265)

SYMPTOM:
vxfen tends to fail and creates split brain issues.

DESCRIPTION:
Currently, to check whether InfiniBand devices are present,
we check for some modules which come by default on RHEL 7.4.

RESOLUTION:
To check for InfiniBand devices, we now check for the /sys/class/infiniband
directory, in which the device information gets populated if InfiniBand
devices are present.

* 3959463 (Tracking ID: 3954787)

SYMPTOM:
On a RHEL 7.5 FSS environment with GCO configured having NVMe devices and Infiniband network, data corruption might occur when sending the IO from Master to slave node.

DESCRIPTION:
In the recent RHEL 7.5 release, Linux stopped allowing IO on an underlying NVMe device that has gaps between BIO vectors. In case of VVR, the SRL header of 3 blocks is added to the BIO. When the BIO is sent through LLT to the other node, the LLT limitation of 32 fragments can leave the BIO vectors unaligned. When this unaligned BIO is sent to the underlying NVMe device, the last 3 blocks of the BIO are skipped and not written to the disk on the slave node. This leads to incomplete data being written on the slave node, which results in data corruption.

RESOLUTION:
Code changes have been done to handle this case and send the BIO aligned to the underlying NVMe device.

* 3959465 (Tracking ID: 3956732)

SYMPTOM:
systemd-udevd messages like below can be seen in journalctl logs:

systemd-udevd[7506]: inotify_add_watch(7, /dev/VxDMP8, 10) failed: No such file or directory
systemd-udevd[7511]: inotify_add_watch(7, /dev/VxDMP9, 10) failed: No such file or directory

DESCRIPTION:
When changes are made to the underlying VxDMP device, these messages get displayed in the journalctl logs. The reason for the messages is that the change event of the VxDMP device was not handled in our UDEV rule.

RESOLUTION:
Code changes have been done to handle change event of VxDMP device in our UDEV rule.

* 3959469 (Tracking ID: 3922529)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
During creation of VxVM (Veritas Volume Manager) rpm package, some files are
created under /usr/lib/vxvm/voladm.d/lib/vxkmiplibs/ directory.

The non-root users have access to these folders, and they may accidentally modify,
move or delete those files.

RESOLUTION:
This hotfix addresses the issue by assigning proper permissions to the directory
during creation of the rpm.

* 3959471 (Tracking ID: 3932356)

SYMPTOM:
In a two-node cluster, vxconfigd dumps core while importing the DG:

 dapriv_da_alloc ()
 in setup_remote_disks ()
 in volasym_remote_getrecs ()
 req_dg_import ()
 vold_process_request ()
 start_thread () from /lib64/libpthread.so.0
 from /lib64/libc.so.6

DESCRIPTION:
vxconfigd dumps core due to an address alignment issue.

RESOLUTION:
The alignment issue is fixed.

* 3959473 (Tracking ID: 3945115)

SYMPTOM:
VxVM vxassist relayout command fails for volumes with RAID layout with the 
following message:
VxVM vxassist ERROR V-5-1-2344 Cannot update volume <vol-name>
VxVM vxassist ERROR V-5-1-4037 Relayout operation aborted. (7)

DESCRIPTION:
During the relayout operation, the target volume inherits the attributes from the
original volume. One of those attributes is the read policy. If the layout
of the original volume is RAID, the RAID read policy is set. The RAID read policy
expects the target volume to have the appropriate log required for the RAID policy.
Since the target volume has a different layout, it does not have the log
present, and hence the relayout operation fails.

RESOLUTION:
Code changes have been made to set the read policy to SELECT for target 
volumes rather than inheriting it from original volume in case original volume 
is of RAID layout.

* 3959475 (Tracking ID: 3950384)

SYMPTOM:
In a scenario where volume data encryption at rest is enabled, data corruption
may occur if the file system size exceeds 1TB and the data is located in a file
extent which has an extent size bigger than 256KB.

DESCRIPTION:
In a scenario where data encryption at rest is enabled, data corruption may
occur when both the following cases are satisfied:
- File system size is over 1TB
- The data is located in a file extent which has an extent size bigger than 256KB
This issue occurs due to a bug which causes an integer overflow for the offset.

RESOLUTION:
As a part of this fix, appropriate code changes have been made to improve data
encryption behavior such that the data corruption does not occur.
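
The overflow pattern can be sketched generically (illustrative Python; the real fix is in VxVM's internal offset arithmetic): when a byte offset is computed in 32-bit arithmetic, any offset past 4 GiB wraps around, so data lands at the wrong location.

```python
def offset_32bit(block, block_size):
    # Buggy pattern: the result is truncated to 32 bits and wraps past 4 GiB.
    return (block * block_size) & 0xFFFFFFFF

def offset_64bit(block, block_size):
    # Fixed pattern: full-width arithmetic keeps offsets beyond 4 GiB intact.
    return block * block_size

block_size = 4096
block = (1 << 32) // block_size + 10     # a block just beyond the 4 GiB mark

assert offset_64bit(block, block_size) > 0xFFFFFFFF
assert offset_32bit(block, block_size) != offset_64bit(block, block_size)
```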

* 3959476 (Tracking ID: 3950675)

SYMPTOM:
The following command is not progressing and appears stuck.

# vxdg import <dg-name>

DESCRIPTION:
The DG import command is sometimes found to be non-progressing on a backup system.
Analysis of the situation has shown that devices belonging to the DG
report "devid mismatch" more than once, due to graceful DR steps not being
followed. Erroneous processing of this situation resulted in IOs not being
allowed on such devices, leading to the DG import hang.

RESOLUTION:
The code processing the "devid mismatch" is rectified.

* 3959477 (Tracking ID: 3953845)

SYMPTOM:
IO hang can be experienced when there is memory pressure situation because of "vxencryptd".

DESCRIPTION:
For large IOs, VxVM (Veritas Volume Manager) tries to acquire contiguous pages in memory for some of its internal data structures. Under heavy memory pressure, contiguous pages may not be available. In that case, VxVM waits until the required pages are available for allocation and does not process the request further. This causes an IO hang type of situation where IO cannot progress, or progresses very slowly.

RESOLUTION:
Code changes are done to avoid the IO hang situation.

* 3959478 (Tracking ID: 3956027)

SYMPTOM:
The system panicked while removing disks from a disk group, with a stack like the following:

[0000F4C4]___memmove64+0000C4 ()
[077ED5FC]vol_get_one_io_stat+00029C ()
[077ED8FC]vol_get_io_stats+00009C ()
[077F1658]volinfo_ioctl+0002B8 ()
[07809954]volsioctl_real+0004B4 ()
[079014CC]volsioctl+00004C ()
[07900C40]vols_ioctl+000120 ()
[00605730]rdevioctl+0000B0 ()
[008012F4]spec_ioctl+000074 ()
[0068FE7C]vnop_ioctl+00005C ()
[0069A5EC]vno_ioctl+00016C ()
[006E2090]common_ioctl+0000F0 ()
[00003938]mfspurr_sc_flih01+000174 ()

DESCRIPTION:
The IO stats function was trying to access a freed disk that was being removed from the disk group, resulting in a panic due to illegal memory access.

RESOLUTION:
Code changes have been made to resolve this race condition.

* 3959479 (Tracking ID: 3956727)

SYMPTOM:
In Solaris DDL discovery, when the SCSI ioctl fails, direct disk IO on the device can lead to high memory consumption and a vxconfigd hang.

DESCRIPTION:
In Solaris DDL discovery, when the SCSI ioctls on the disk for private region IO fail, a direct disk read/write to the disk is attempted.
Due to a compiler issue, this direct read/write gets invalid arguments, which leads to high memory consumption and a vxconfigd hang.

RESOLUTION:
Changes are done in VxVM code to ensure correct arguments are passed to disk read/write.

* 3959480 (Tracking ID: 3957227)

SYMPTOM:
Disk group import succeeded, but with below error message:

vxvm:vxconfigd: [ID ** daemon.error] V-5-1-0 dg_import_name_to_dgid: Found dgid = **

DESCRIPTION:
While doing a disk group import, two configuration copies may be found. Volume Manager uses the latest configuration copy and prints a message to indicate this scenario. Due to a wrong log level, this message was printed in the error category.

RESOLUTION:
Code changes have been made to suppress this harmless message.

* 3960383 (Tracking ID: 3958062)

SYMPTOM:
After migrating boot lun, disabling dmp_native_support fails with following error.

VxVM vxdmpadm ERROR V-5-1-15883 check_bosboot open failed /dev/r errno 2
VxVM vxdmpadm ERROR V-5-1-15253 bosboot would not succeed, please run  
manually to find the cause of failure
VxVM vxdmpadm ERROR V-5-1-15251 bosboot check failed
VxVM vxdmpadm INFO V-5-1-18418 restoring protofile
+ final_ret=18
+ f_exit 18
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume 
groups

VxVM vxdmpadm ERROR V-5-1-15686 The following VG(s) could not be migrated as 
could not disable DMP support for LVM bootability -
        rootvg

DESCRIPTION:
After performing the boot lun migration, while enabling/disabling the DMP native support,
VxVM performed the 'bosboot' verification with the old boot disk name instead of the migrated disk.
The reason was that an AIX OS command was returning the old boot disk name.

RESOLUTION:
The code is changed to use correct OS command to get the boot disk name after migration.

* 3961353 (Tracking ID: 3950199)

SYMPTOM:
System may panic with following stack while DMP(Dynamic Mulitpathing) path 
restoration:

#0 [ffff880c65ea73e0] machine_kexec at ffffffff8103fd6b
 #1 [ffff880c65ea7440] crash_kexec at ffffffff810d1f02
 #2 [ffff880c65ea7510] oops_end at ffffffff8154f070
 #3 [ffff880c65ea7540] no_context at ffffffff8105186b
 #4 [ffff880c65ea7590] __bad_area_nosemaphore at ffffffff81051af5
 #5 [ffff880c65ea75e0] bad_area at ffffffff81051c1e
 #6 [ffff880c65ea7610] __do_page_fault at ffffffff81052443
 #7 [ffff880c65ea7730] do_page_fault at ffffffff81550ffe
 #8 [ffff880c65ea7760] page_fault at ffffffff8154e2f5
    [exception RIP: _spin_lock_irqsave+31]
    RIP: ffffffff8154dccf  RSP: ffff880c65ea7818  RFLAGS: 00210046
    RAX: 0000000000010000  RBX: 0000000000000000  RCX: 0000000000000000
    RDX: 0000000000200246  RSI: 0000000000000040  RDI: 00000000000000e8
    RBP: ffff880c65ea7818   R8: 0000000000000000   R9: ffff8824214ddd00
    R10: 0000000000000002  R11: 0000000000000000  R12: ffff88302d2ce400
    R13: 0000000000000000  R14: ffff880c65ea79b0  R15: ffff880c65ea79b7
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880c65ea7820] dmp_open_path at ffffffffa07be2c5 [vxdmp]
#10 [ffff880c65ea7980] dmp_restore_node at ffffffffa07f315e [vxdmp]
#11 [ffff880c65ea7b00] dmp_revive_paths at ffffffffa07ccee3 [vxdmp]
#12 [ffff880c65ea7b40] gendmpopen at ffffffffa07cbc85 [vxdmp]
#13 [ffff880c65ea7c10] dmpopen at ffffffffa07cc51d [vxdmp]
#14 [ffff880c65ea7c20] dmp_open at ffffffffa07f057b [vxdmp]
#15 [ffff880c65ea7c50] __blkdev_get at ffffffff811d7f7e
#16 [ffff880c65ea7cb0] blkdev_get at ffffffff811d82a0
#17 [ffff880c65ea7cc0] blkdev_open at ffffffff811d8321
#18 [ffff880c65ea7cf0] __dentry_open at ffffffff81196f22
#19 [ffff880c65ea7d50] nameidata_to_filp at ffffffff81197294
#20 [ffff880c65ea7d70] do_filp_open at ffffffff811ad180
#21 [ffff880c65ea7ee0] do_sys_open at ffffffff81196cc7
#22 [ffff880c65ea7f30] compat_sys_open at ffffffff811eee9a
#23 [ffff880c65ea7f40] symev_compat_open at ffffffffa0c9b08f

DESCRIPTION:
A system panic can be encountered due to a race condition. There is a possibility
that a path picked by the DMP restore daemon for processing
may be deleted before the restoration process is complete. When the
restore daemon then tries to access the path properties, a system panic occurs
because the path properties have already been freed.

RESOLUTION:
Code changes are done to handle the race condition.

* 3961355 (Tracking ID: 3952529)

SYMPTOM:
Enabling and disabling DMP (Dynamic Multipathing) Native Support using command "vxdmpadm settune dmp_native_support" fails with below 
error:

VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups

VxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as they
are in use -
               <vgname>

DESCRIPTION:
While enabling/disabling DMP Native Support, the VGs need to migrate from OS devices to dmpnodes and vice versa. To complete the migration, the vgexport/vgimport commands are used. If the VG is in the mounted state, the vgexport command fails,
indicating that the VG is in use. Because of this failure, the VG migration fails and the command "vxdmpadm settune dmp_native_support" fails
with the error "VG is in use".

RESOLUTION:
Code changes have been done to use vgchange instead of vgexport/vgimport to solve the problem.

* 3961356 (Tracking ID: 3953481)

SYMPTOM:
A stale entry of a replaced disk was left behind under /dev/[r]dsk to
represent the replaced disk.

DESCRIPTION:
Whenever a disk is removed from the DMP view, the driver property information of
the disk has to be removed from the kernel; if it is not, a stale entry is
left behind under /dev/[r]dsk. When a new disk arrives as a replacement with the
same minor number, the stale information is left instead of the property being refreshed.

RESOLUTION:
Code is modified to remove the stale device property when a disk is removed.

* 3961358 (Tracking ID: 3955101)

SYMPTOM:
Server might panic in a GCO environment with the following stack:

nmcom_server_main_tcp()
ttwu_do_wakeup()
ttwu_do_activate.constprop.90()
try_to_wake_up()
update_curr()
update_curr()
account_entity_dequeue()
 __schedule()
nmcom_server_proc_tcp()
kthread()
kthread_create_on_node()
ret_from_fork()
kthread_create_on_node()

DESCRIPTION:
There are recent changes done in the code to handle Dynamic port changes i.e deletion and addition of ports can now happen dynamically. It might happen that while accessing the port, it was deleted in the background by other thread. This would lead to a panic in the code since the port to be accessed has been already deleted.

RESOLUTION:
Code changes have been done to take care of this situation and check if the port is available before accessing it.

* 3961359 (Tracking ID: 3955725)

SYMPTOM:
A utility is needed to clear the "failio" flag on a disk after storage connectivity is back.

DESCRIPTION:
If I/Os to the disks time out due to hardware failures such as a weak Storage Area Network (SAN) cable link or a Host Bus Adapter (HBA) failure, VxVM assumes
that the disk is bad or slow and sets the "failio" flag on the disk. Because of this flag, all subsequent I/Os fail with the No such device error. After the connectivity is back, the "failio" flag needs to be cleared manually on each disk. A new utility, "vxcheckfailio", is provided which
clears the "failio" flag for all disks whose paths are all enabled.

RESOLUTION:
Code changes are done to add utility "vxcheckfailio" that will clear the "failio" flag on the disks.

* 3961468 (Tracking ID: 3926067)

SYMPTOM:
In a Campus Cluster environment, vxassist relayout command may fail with 
following error:
VxVM vxassist ERROR V-5-1-13124  Site  offline or detached
VxVM vxassist ERROR V-5-1-4037 Relayout operation aborted. (20)

vxassist convert command also might fail with following error:
VxVM vxassist ERROR V-5-1-10128  No complete plex on the site.

DESCRIPTION:
For vxassist "relayout" and "convert" operations in a Campus Cluster 
environment, VxVM (Veritas Volume Manager) needs to sort the plexes of the 
volume according to sites. When the number of plexes in a volume is greater 
than 100, the sorting of plexes fails due to a bug in the code. Because of 
this sorting failure, the vxassist relayout/convert operations fail.

RESOLUTION:
Code changes are done to properly sort the plexes according to site.

* 3961469 (Tracking ID: 3948140)

SYMPTOM:
System may panic if RTPG data returned by the array is greater than 255 with
below stack:

dmp_alua_get_owner_state()
dmp_alua_get_path_state()
dmp_get_path_state()
dmp_check_path_state()
dmp_restore_callback()
dmp_process_scsireq()
dmp_daemons_loop()

DESCRIPTION:
The buffer given to the RTPG SCSI command is currently 255 bytes, but the 
size of the data returned by the underlying array for RTPG can be greater 
than 255 bytes. As a result, incomplete data is retrieved (only the first 
255 bytes), and when the RTPG data is read, an invalid memory access occurs, 
resulting in an error while claiming the devices. This invalid memory access 
may lead to a system panic.

RESOLUTION:
The RTPG buffer size has been increased to 1024 bytes for handling this.
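
The truncation can be pictured with a toy model (illustrative Python, not the driver code; the 8-byte descriptor size and the group count are assumptions made for the sketch):

```python
RTPG_BUF_OLD = 255
RTPG_BUF_NEW = 1024

def rtpg_response(n_groups):
    # Toy RTPG reply: 4-byte "returned data length" header followed by
    # 8 bytes per target port group descriptor (real descriptors are
    # larger and variable-length; the sizes here are illustrative).
    payload = bytes(8) * n_groups
    return len(payload).to_bytes(4, "big") + payload

def fits_in_buffer(bufsize, n_groups):
    data = rtpg_response(n_groups)
    buf = data[:bufsize]                       # transfer truncates at bufsize
    reported = int.from_bytes(buf[:4], "big")  # length the array claims
    # If the claimed length exceeds what was actually received, a parser
    # walking the descriptors reads past the buffer (the invalid access).
    return reported + 4 <= len(buf)

assert not fits_in_buffer(RTPG_BUF_OLD, 64)  # 516-byte reply, truncated
assert fits_in_buffer(RTPG_BUF_NEW, 64)      # fits in the enlarged buffer
```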

* 3961480 (Tracking ID: 3957549)

SYMPTOM:
Server panicked when resyncing mirror volume with the following stack:

voliot_object_event+0x2e0
vol_oes_sio_start+0x80
voliod_iohandle+0x30
voliod_loop+0x248
thread_start+4

DESCRIPTION:
If an IO error occurs during a mirror resync, a trace event needs to be logged for it. As the IO comes from the mirror resync, the KIO should be NULL; however, the NULL pointer check for the KIO was missed while logging the trace event, hence the panic.

RESOLUTION:
Code changes have been made to fix the issue.

* 3964315 (Tracking ID: 3952042)

SYMPTOM:
dmpevents.log is flooding with below messages:
Tue Jul 11 09:28:36.620: Lost 12 DMP I/O statistics records
Tue Jul 11 10:05:44.257: Lost 13 DMP I/O statistics records
Tue Jul 11 10:10:05.088: Lost 6 DMP I/O statistics records
Tue Jul 11 11:28:24.714: Lost 6 DMP I/O statistics records
Tue Jul 11 11:46:35.568: Lost 1 DMP I/O statistics records
Tue Jul 11 12:04:10.267: Lost 13 DMP I/O statistics records
Tue Jul 11 12:04:16.298: Lost 5 DMP I/O statistics records
Tue Jul 11 12:44:05.656: Lost 31 DMP I/O statistics records
Tue Jul 11 12:44:38.855: Lost 2 DMP I/O statistics records

DESCRIPTION:
When DMP (Dynamic Multi-Pathing) expands the iostat table, it allocates a new, larger table, replaces the old table with the new one, and frees the old one. This 
increases the possibility of memory fragmentation.

RESOLUTION:
The code is modified to increase the initial size of the iostat table.

* 3966132 (Tracking ID: 3960576)

SYMPTOM:
With the installation of the VxVM patch 7.3.1.100, one more rule got added incorrectly.

DESCRIPTION:
The rules are added directly through the file vxdmp.PdDv, and the install/upgrade scripts add these rules using the odmadd command. Ideally, then, every rule should appear twice, like the current rule; however, in the install/upgrade scripts the other rules are removed before they are added 
using the odmadd command.

RESOLUTION:
A similar removal entry needs to be added for the "vxdisk scandisks" rule in the install/upgrade script. Once that is done, the error for the duplicate 
entry goes away.

Patch ID: VRTSvxvm-7.3.1.100

* 3932464 (Tracking ID: 3926976)

SYMPTOM:
An excessive number of connections are found in the open state, causing an FD
leak and eventually license errors.

DESCRIPTION:
The vxconfigd reports license errors as it fails to open the license files. The
failure to open is due to FD exhaustion, caused by excessive FIFO connections
left in open state.

FIFO connections are used by clients (vx commands) to communicate with
vxconfigd. Usually these get closed once the client exits. One such client,
the "vxdclid" daemon, connects frequently and leaves the connection in the
open state, causing the FD leak.

This issue is applicable to Solaris platform only.

RESOLUTION:
A library call in one of the APIs was leaving the connection in the open state
on exit; this has been fixed.

* 3933874 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when importing a shared diskgroup specifying both -c and -o
noreonline options, the following error may be returned: 
VxVM vxdg ERROR V-5-1-10978 Disk group <dgname>: import failed: Disk for disk
group not found.

DESCRIPTION:
The -c option will update the disk ID and disk group ID on the private region
of the disks in the disk group being imported. Such updated information is not
yet seen by the slave because the disks have not been re-onlined (given that
noreonline option is specified). As a result, the slave cannot identify the
disk(s) based on the updated information sent from the master, causing the
import to fail with the error Disk for disk group not found.

RESOLUTION:
The code is modified to handle the working of the "-c" and "-o noreonline"
options together.

* 3933875 (Tracking ID: 3872585)

SYMPTOM:
System running with VxFS and VxVM panics with storage key exception with the 
following stack:

simple_lock
dispatch
flih_util
touchrc
pin_seg_range
pin_com
pinx_plock
plock_pinvec
plock
mfspurr_sc_flih01

DESCRIPTION:
The xntpd process, whose binary resides on a VxFS file system, could panic with 
a storage key exception. The xntpd binary page-faulted and performed an IO, 
after which the OS detected a storage key exception because it could not locate 
the process's keyset. Code review found that in a few error cases in VxVM, the 
storage keys may not be restored after they are replaced.

RESOLUTION:
The storage keys are now restored even in the error cases in the vxio and DMP layers.

* 3933876 (Tracking ID: 3894657)

SYMPTOM:
VxVM commands may hang when using space optimized snapshot.

DESCRIPTION:
If a volume with DRL (Dirty Region Logging) enabled has a space-optimized 
snapshot whose mirrored cache object volume also has DRL enabled, VxVM 
commands may hang. If the IO load on the volume is high, this can lead to a 
memory crunch, because memory stabilization is performed when DRL is enabled. 
The IOs in the queue may wait for memory to become free. In the meantime, 
other VxVM commands that need to change the configuration of the volumes may 
hang because the IOs cannot proceed.

RESOLUTION:
Memory stabilization is not required for VxVM-generated internal IOs on the 
cache object volume. Code changes have been done to eliminate memory 
stabilization for cache object IOs.

* 3933877 (Tracking ID: 3914789)

SYMPTOM:
The system may panic when reclaiming on the secondary in a VVR (Veritas Volume
Replicator) environment. The panic is caused by an invalid address access; the
error message is similar to "data access MMU miss".

DESCRIPTION:
VxVM maintains a linked list to keep memory segment information. When its
contents are accessed at a given offset, the linked list is traversed. Due to a
code defect, when the offset is equal to the segment chunk size, the end of that
segment is returned instead of the start of the next segment. This can result in
silent memory corruption, because memory outside the segment boundary is
accessed. The system can panic when the out-of-boundary address is not yet
allocated.

RESOLUTION:
Code changes have been made to fix the out-of-boundary access.

* 3933878 (Tracking ID: 3918408)

SYMPTOM:
Data corruption when volume grow is attempted on thin reclaimable disks whose space is just freed.

DESCRIPTION:
When the space in the volume is freed by deleting some data or subdisks, the corresponding subdisks are marked for 
reclamation. It might take some time for the periodic reclaim task to start if not issued manually. In the meantime, if 
same disks are used for growing another volume, it can happen that reclaim task will go ahead and overwrite the data 
written on the new volume. Because of this race condition between reclaim and volume grow operation, data corruption 
occurs.

RESOLUTION:
Code changes are done to handle the race condition between the reclaim and volume grow operations. Reclamation is also 
skipped for those subdisks that have already become part of a new volume.
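
A minimal sketch of the race and the fix (illustrative Python model; the subdisk names and the wipe step are assumptions for the sketch, not VxVM code):

```python
def run_reclaim(pending, in_use, skip_reused):
    # pending: subdisks marked for reclamation when their space was freed
    # in_use:  subdisks that a volume-grow operation has reused meanwhile
    wiped = []
    for sd in sorted(pending):
        if skip_reused and sd in in_use:
            continue            # the fix: skip disks reused by a new volume
        wiped.append(sd)        # models zeroing/UNMAPping the subdisk
    return wiped

pending = {"sd1", "sd2"}
in_use = {"sd2"}                # sd2 reused by a grow before reclaim ran

assert "sd2" in run_reclaim(pending, in_use, skip_reused=False)    # corruption
assert "sd2" not in run_reclaim(pending, in_use, skip_reused=True) # fixed
```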

* 3933880 (Tracking ID: 3864063)

SYMPTOM:
Application I/O hangs after the Master Pause command is issued.

DESCRIPTION:
Some flags (VOL_RIFLAG_DISCONNECTING or VOL_RIFLAG_REQUEST_PENDING) in VVR
(Veritas Volume Replicator) kernel are not cleared because of a race between the
Master Pause SIO and the Error Handler SIO. This causes the RU (Replication
Update) SIO to fail to proceed, which leads to I/O hang.

RESOLUTION:
The code is modified to handle the race condition.

* 3933882 (Tracking ID: 3865721)

SYMPTOM:
vxconfigd hangs in a transaction while pausing replication in a Clustered 
VVR environment.

DESCRIPTION:
In a Clustered VVR (CVM VVR) environment, while pausing replication which is in 
DCM (Data Change Map) mode, the master pause SIO (staging IO) cannot finish 
serialization because there are metadata-shipping SIOs in the throttle queue 
with the activesio count added. Meanwhile, because the master pause SIO's 
SERIALIZE flag is set, the DCM flush SIO cannot be started to flush the 
throttle queue. This leads to a deadlocked hang state. Since the master pause 
routine needs to sync up with the transaction routine, vxconfigd hangs in the 
transaction.

RESOLUTION:
Code changes were made to flush the metadata shipping throttle queue if master 
pause SIO can not finish serialization.

* 3933883 (Tracking ID: 3867236)

SYMPTOM:
Application IO hang happens after issuing Master Pause command.

DESCRIPTION:
The VOL_RIFLAG_REQUEST_PENDING flag in the VVR (Veritas Volume Replicator) kernel 
is not cleared because of a race between the Master Pause SIO and the RVWRITE1 
SIO, causing the RU (Replication Update) SIO to fail to proceed and thereby 
resulting in the IO hang.

RESOLUTION:
Code changes have been made to handle the race condition.

* 3933884 (Tracking ID: 3868154)

SYMPTOM:
When DMP Native Support is set to ON, and if a dmpnode has multiple VGs,
'vxdmpadm native ls' shows incorrect VG entries for dmpnodes.

DESCRIPTION:
When DMP Native Support is set to ON, multiple VGs can be created on a disk, as
Linux supports creating a VG on a whole disk as well as on a partition of 
a disk. This possibility was not handled in the code, so the output of
'vxdmpadm native ls' was incorrect.

RESOLUTION:
The code now handles multiple VGs on a single disk.

* 3933889 (Tracking ID: 3879234)

SYMPTOM:
dd read on the Veritas Volume Manager (VxVM) character device fails with 
Input/Output error while accessing end of device like below:

[root@dn pmansukh_debug]# dd if=/dev/vx/rdsk/hfdg/vol1 of=/dev/null bs=65K
dd: reading `/dev/vx/rdsk/hfdg/vol1': Input/output error
15801+0 records in
15801+0 records out
1051714560 bytes (1.1 GB) copied, 3.96065 s, 266 MB/s

DESCRIPTION:
The issue occurs because of changes in the Linux API generic_file_aio_read. 
Because of many changes in this API, it no longer properly handles end-of-device 
reads/writes. The Linux code has changed to use blkdev_aio_read, which is a 
GPL symbol and hence cannot be used.

RESOLUTION:
Made changes in the code to handle end of device reads/writes properly.
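
The intended end-of-device semantics can be sketched as follows (a toy Python model of the expected behaviour; the device size below is a made-up example, not derived from the dd output above):

```python
def char_dev_read(dev_size, offset, length):
    # End-of-device semantics for the raw volume device: a read starting
    # at or past the end returns 0 bytes (EOF) rather than an I/O error,
    # and a read straddling the end returns the short tail.
    if offset >= dev_size:
        return 0
    return min(length, dev_size - offset)

BS = 65 * 1024                 # dd's 65K block size from the symptom
dev_size = 15801 * BS + 4096   # hypothetical size, not a multiple of BS

# Full blocks succeed; the final, straddling read must be short, not EIO.
assert char_dev_read(dev_size, 15800 * BS, BS) == BS
assert char_dev_read(dev_size, 15801 * BS, BS) == 4096
assert char_dev_read(dev_size, dev_size, BS) == 0
```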

* 3933890 (Tracking ID: 3879324)

SYMPTOM:
VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool fails to 
handle busy device problem while LUNs are removed from OS

DESCRIPTION:
OS devices may still be busy after they are removed from the OS. This causes 
the 'luxadm -e offline <disk>' operation to fail and leaves stale entries in 
the 'vxdisk list' output, 
like:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3933897 (Tracking ID: 3907618)

SYMPTOM:
vxdisk resize leads to data corruption on filesystem with MSDOS labelled disk having VxVM sliced format.

DESCRIPTION:
vxdisk resize changes the geometry on the device if required. When vxdisk resize is in progress, absolute offsets, i.e. offsets starting 
from the start of the device, are used. For an MSDOS-labelled disk, the full disk is represented by slice 4, not slice 0. Thus, when an IO is 
scheduled on the device, an extra 32 sectors are added to the IO offset, which is not required since the IO already starts from the start of 
the device. This leads to data corruption, since the IO on the device is shifted by 32 sectors.

RESOLUTION:
Code changes have been made to not add 32 sectors to the IO when vxdisk resize is in progress to avoid corruption.
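
The offset arithmetic behind the corruption can be sketched as follows (illustrative Python; the sector numbers are examples, not taken from the fix):

```python
MSDOS_SLICE_OFFSET = 32   # slice 4 of an MSDOS-labelled disk starts here

def mapped_sector(offset, offset_is_absolute):
    # Slice-relative offsets need the 32-sector shift to reach the right
    # place on the whole disk; absolute offsets (used while vxdisk resize
    # is in progress) already count from sector 0 and must not be shifted.
    if offset_is_absolute:
        return offset
    return offset + MSDOS_SLICE_OFFSET

# Normal IO path: slice-relative, the shift is applied once and is correct.
assert mapped_sector(100, offset_is_absolute=False) == 132
# The fix: absolute resize-time offsets pass through unshifted; shifting
# them a second time would land every IO 32 sectors off target.
assert mapped_sector(100, offset_is_absolute=True) == 100
```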

* 3933898 (Tracking ID: 3908987)

SYMPTOM:
The following unnecessary message is printed to inform the customer that hot 
relocation will be performed on the master node.

VxVM vxrelocd INFO V-5-2-6551
hot-relocation operation for shared disk group will be performed on master 
node.

DESCRIPTION:
The message is meant to be printed only when there are failed disks. Because 
the related code is not placed at the right position, it is printed even when 
there are no failed disks.

RESOLUTION:
Code changes have been made to fix the issue.

* 3933900 (Tracking ID: 3915523)

SYMPTOM:
A local disk from another node, belonging to a private DG, is exported to the
current node when a private DG is imported on the current node.

DESCRIPTION:
When we try to import a DG, all the disks belonging to the DG are automatically
exported to the current node to make sure that the DG gets imported. This is
done to get the same behaviour with local disks as with SAN disks. Since all
disks in the DG are exported, disks that belong to a different private DG with
the same DG name on another node also get exported to the current node. This
leads to the wrong disk getting selected while the DG gets imported.

RESOLUTION:
Instead of DG name, DGID (diskgroup ID) is used to decide whether disk needs to
be exported or not.

* 3933904 (Tracking ID: 3921668)

SYMPTOM:
Running the vxrecover command with the -m option fails on the
slave node with the message "The command can be executed only on the master."

DESCRIPTION:
The issue occurs as currently vxrecover -g <dgname> -m command on shared
disk groups is not shipped using the command shipping framework from CVM
(Cluster Volume Manager) slave node to the master node.

RESOLUTION:
Implemented a code change to ship the vxrecover -m command to the master
node when it is triggered from the slave node.

* 3933907 (Tracking ID: 3873123)

SYMPTOM:
When a remote disk on a node is an EFI disk, vold enable fails,
and the following message gets logged, eventually causing vxconfigd to go 
into the disabled state:
Kernel and on-disk configurations don't match; transactions are disabled.

DESCRIPTION:
This is because one case of an EFI remote disk is not properly handled
in the disk recovery path when vxconfigd is enabled.

RESOLUTION:
Code changes have been done to set the EFI flag on darec in the recovery code.

* 3933910 (Tracking ID: 3910228)

SYMPTOM:
Registration of the GAB (Global Atomic Broadcast) port u fails on slave nodes 
after multiple new devices are added to the system.

DESCRIPTION:
vxconfigd sends a command to GAB for port u registration and waits for a 
response from GAB. During this timeframe, if vxconfigd is interrupted by any 
module other than GAB, it will not receive the signal from GAB indicating 
successful registration. Since the signal is not received, vxconfigd 
believes the registration did not succeed and treats it as a failure.

RESOLUTION:
The signals that vxconfigd can receive are now masked before waiting for the 
signal from GAB for the registration of the GAB port u.

* 3933911 (Tracking ID: 3925377)

SYMPTOM:
Not all disks can be discovered by Dynamic Multi-Pathing (DMP) after the first 
startup.

DESCRIPTION:
DMP is started too early in the boot process if iSCSI and raw have not been 
installed. At that point the FC devices are not yet recognized by the OS, so 
DMP misses the FC devices.

RESOLUTION:
The code is modified to make sure DMP gets started after the OS disk discovery.

* 3934775 (Tracking ID: 3907800)

SYMPTOM:
VxVM package installation will fail on SLES12 SP2.

DESCRIPTION:
Since SLES12 SP2 has many kernel changes, package installation fails on it.

RESOLUTION:
Added code changes to provide SLES12 SP2 platform support.

* 3937540 (Tracking ID: 3906534)

SYMPTOM:
After enabling DMP (Dynamic Multipathing) Native support, /boot is expected to
be mounted on a DMP device.

DESCRIPTION:
Currently /boot is mounted on top of an OS (Operating System) device. When DMP
Native support is enabled, only VGs (Volume Groups) are migrated from OS 
devices to DMP devices; this is why /boot is not migrated to a DMP device.
As a result, if the OS device path is not available, the system becomes 
unbootable since /boot is not available. It thus becomes necessary to mount 
/boot on a DMP device to provide multipathing and resiliency.

RESOLUTION:
Code changes have been done to migrate /boot on top of DMP device when DMP
Native support is enabled.
Note - The code changes are currently implemented for RHEL-6 only. For other
Linux platforms, /boot will still not be mounted on the DMP device.

* 3937541 (Tracking ID: 3911930)

SYMPTOM:
Valid PGR operations sometimes fail on a dmpnode.

DESCRIPTION:
As part of the PGR operations, if the inquiry command finds that PGR is not
supported on the dmpnode, the PGR_FLAG_NOTSUPPORTED flag is set on the
dmpnode. Further PGR operations check this flag and issue PGR commands only
if it is NOT set. This flag remains set even if the hardware is changed so
as to support PGR.

RESOLUTION:
A new command (namely enablepr) is provided in the vxdmppr utility to clear this
flag on the specified dmpnode.

* 3937542 (Tracking ID: 3917636)

SYMPTOM:
Filesystems from /etc/fstab file are not mounted automatically on boot 
through systemd on RHEL7 and SLES12.

DESCRIPTION:
During boot, when systemd tries to mount the devices mentioned in the 
/etc/fstab file, the devices are not accessible, leading to the failure of 
the mount operation. As device discovery happens through the udev 
infrastructure, the udev rules for those devices need to be run when the 
volumes are created so that the devices get registered with systemd. In this 
case, the udev rules were executed even before the devices in the 
"/dev/vx/dsk" directory were created.
Since the devices were not created, they were not registered with systemd, 
leading to the failure of the mount operation.

RESOLUTION:
Run "udevadm trigger" to execute all the udev rules once all volumes are 
created so that devices are registered.

* 3937549 (Tracking ID: 3934910)

SYMPTOM:
IO errors on the data volume or file system happen after some cycles of 
snapshot creation/removal with dg reimport.

DESCRIPTION:
When a snapshot of the data volume is removed and the dg is reimported, the 
DRL map remains active rather than being inactivated. When a new snapshot is 
created, the DRL is re-enabled and a new DRL map is allocated with the first 
write to the data volume. The original active DRL map is never used and is 
leaked. After some such cycles, the extents of the DCO volume are exhausted 
by the active but unused DRL maps; no more DRL maps can be allocated, and 
the IOs on the data volume fail or cannot be issued.

RESOLUTION:
Code changes are done to inactivate the DRL map if the DRL is disabled during 
the volume start, so that it can be reused safely later.

* 3937550 (Tracking ID: 3935232)

SYMPTOM:
Replication and IO hangs may happen on the new master node during master 
takeover.

DESCRIPTION:
If a log owner change kicks in while a master switch is in progress, the 
VOLSIO_FLAG_RVC_ACTIVE flag is set by the log owner change SIO. 
The RVG (Replicated Volume Group) recovery initiated by the master switch 
clears VOLSIO_FLAG_RVC_ACTIVE once the RVG recovery is done. When the log 
owner change completes, because VOLSIO_FLAG_RVC_ACTIVE has already been 
cleared, resetting the VOLOBJ_TFLAG_VVR_QUIESCE flag is skipped. The 
presence of VOLOBJ_TFLAG_VVR_QUIESCE causes replication and application IO 
on the RVG to remain in the pending state.

RESOLUTION:
Code changes have been done to make the log owner change wait until the 
master switch completes.

* 3937808 (Tracking ID: 3931936)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, after a slave node restarts, 
a VxVM command on the master node hangs, and the failed disks on the slave 
node cannot rejoin the disk group.

DESCRIPTION:
When lost remote disks on the slave node come back, onlining these disks and 
adding them to the disk group are performed on the master node. Disk online 
involves operations on both the master and the slave node. On the slave node 
these disks should be offlined and then re-onlined, but due to a code defect 
the re-online was missed, leaving these disks in the re-onlining state. The 
subsequent add-disk-to-disk-group operation needs to issue private region IOs 
on the disk, and these IOs are shipped to the slave node to complete. As the 
disks are in the re-onlining state, a busy error is returned and the remote 
IOs keep retrying, hence the VxVM command hang on the master node.

RESOLUTION:
Code changes have been made to fix the issue.

* 3937811 (Tracking ID: 3935974)

SYMPTOM:
While communicating with a client process, the vxrsyncd daemon terminates; it 
is restarted after some time, or may require a reboot to start.

DESCRIPTION:
When the client process shuts down abruptly and the vxrsyncd daemon attempts to 
write to the client socket, a SIGPIPE signal is generated. The default action 
for this signal is to terminate the process; hence vxrsyncd gets terminated.

RESOLUTION:
The SIGPIPE signal is now handled in order to prevent the termination of 
vxrsyncd.

* 3939938 (Tracking ID: 3939796)

SYMPTOM:
Installation of VxVM package fails on SLES12 SP3.

DESCRIPTION:
Since SLES12 SP3 has many kernel changes, package installation fails on it.

RESOLUTION:
Added code changes to provide SLES12 SP3 platform support.

* 3940039 (Tracking ID: 3897047)

SYMPTOM:
Filesystems are not mounted automatically on boot through systemd on RHEL7 and
SLES12.

DESCRIPTION:
When the systemd service tries to start all the FS in /etc/fstab, the Veritas
Volume Manager (VxVM) volumes are not started since vxconfigd is still not up.
The VxVM volumes are started a little later in the boot process. Since the
volumes are not available, the FS are not mounted automatically at boot.

RESOLUTION:
Registered the VxVM volumes with UDEV daemon of Linux so that the FS would be 
mounted when the VxVM volumes are started and discovered by udev.

* 3940143 (Tracking ID: 3941037)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some .lock files under the /etc/vx 
directory. Non-root users have write access to these .lock files and may 
accidentally modify, move, or delete them.
Such actions may interfere with the normal functioning of Veritas Volume
Manager.

RESOLUTION:
This fix addresses the issue by masking the write permission on these .lock
files for non-root users.
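
The permission masking can be sketched like this (illustrative Python on a temporary file; the real fix applies to how the .lock files under /etc/vx are created):

```python
import os
import stat
import tempfile

def mask_nonroot_write(path):
    # Strip the group/other write bits so non-root users cannot modify
    # the file; owner (root) permissions are left untouched.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, mode & ~(stat.S_IWGRP | stat.S_IWOTH))

fd, path = tempfile.mkstemp(suffix=".lock")
os.close(fd)
os.chmod(path, 0o666)          # old behaviour: world-writable lock file
mask_nonroot_write(path)
assert os.stat(path).st_mode & 0o022 == 0   # no write for group/other
os.remove(path)
```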

Patch ID: VRTSaslapm-7.3.1.2900

* 4003022 (Tracking ID: 3991649)

SYMPTOM:
Existing package is not supported with SLES 12 SP5 kernel.

DESCRIPTION:
The VRTSaslapm code needs to be recompiled with SLES 12 SP5 kernel.

RESOLUTION:
VRTSaslapm  is recompiled with SLES 12 SP5 kernel.

Patch ID: VRTSdbac-7.3.1.2300

* 4003013 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSdbac-7.3.1.2100

* 3979681 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSamf-7.3.1.3300

* 4003012 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSamf-7.3.1.3100

* 3979680 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSvxfen-7.3.1.3300

* 4003011 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSvxfen-7.3.1.3100

* 3979679 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSvxfen-7.3.1.100

* 3935528 (Tracking ID: 3931654)

SYMPTOM:
In a disk based fencing configuration in a cluster, after the node reboots, the
VxFEN module does not come up and the cluster is stuck.

DESCRIPTION:
On some kernel versions (RHEL7.2+, CentOS7.2+), after a reboot the VxVM module
takes more than 2 minutes to start. By this time the VxFEN configuration has
failed, as the vxconfigd daemon is not yet up, and the cluster is stuck.

RESOLUTION:
The Restart attribute in the systemd unit script for VxFEN is changed to
on-failure, so that the VxFEN configuration is retried until VxVM comes up.
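
The change corresponds to a unit setting of this shape (an illustrative systemd fragment; the retry interval and the rest of the unit file are assumptions, not the shipped script):

```ini
# vxfen.service (illustrative excerpt)
[Service]
# Retry VxFEN configuration until vxconfigd/VxVM is up, instead of
# giving up after the first failed start:
Restart=on-failure
RestartSec=5
```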

Patch ID: VRTSgab-7.3.1.2300

* 4003010 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSgab-7.3.1.2100

* 3979678 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSllt-7.3.1.4300

* 4003009 (Tracking ID: 3991274)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 5 (SLES 12 SP5).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP4.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP5 is
now introduced.

Patch ID: VRTSllt-7.3.1.4100

* 3979677 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSllt-7.3.1.300

* 3933242 (Tracking ID: 3948201)

SYMPTOM:
Kernel panics in case of FSS with LLT over RDMA during heavy data transfer.

DESCRIPTION:
In case of FSS using LLT over RDMA, the kernel may sometimes panic because of an 
issue in the buffer advertisement logic for RDMA buffers. The case arises when the 
buffer advertisement for a particular RDMA buffer reaches the sender LLT node 
earlier than the hardware ACK reaches LLT.

RESOLUTION:
LLT module is modified to fix the panic by using a different temporary queue 
for such buffers.

Patch ID: VRTSllt-7.3.1.100

* 3927713 (Tracking ID: 3927712)

SYMPTOM:
LLT on SLES12SP3 shows soft lockup with RDMA configuration

DESCRIPTION:
LLT on SLES12 SP3 shows soft-lockup stack traces in syslog due to high memory 
consumption at the RDMA layer while setting up queue pairs with a high number 
of send requests.

RESOLUTION:
The issue is resolved by reducing the number of send requests passed during 
creation of the RDMA queue pair.

Patch ID: VRTSveki-7.3.1.1200

* 4000509 (Tracking ID: 3991689)

SYMPTOM:
VEKI module failed to load on SLES12 SP5.

DESCRIPTION:
Since SLES12 SP5 is new release therefore VEKI module failed to load
on it.

RESOLUTION:
Added VEKI support for SLES12 SP5.

Patch ID: VRTSveki-7.3.1.1100

* 3979682 (Tracking ID: 3969613)

SYMPTOM:
Veritas Infoscale Availability (VCS) does not support SUSE Linux Enterprise Server
12 Service Pack 4 (SLES 12 SP4).

DESCRIPTION:
Veritas Infoscale Availability did not support SUSE Linux Enterprise Server
versions released after SLES 12 SP3.

RESOLUTION:
Veritas Infoscale Availability support for SUSE Linux Enterprise Server 12 SP4 is
now introduced.

Patch ID: VRTSgms-7.3.1.1400

* 4000513 (Tracking ID: 3991226)

SYMPTOM:
GMS module failed to load on SLES12SP5.

DESCRIPTION:
The SLES12SP5 is new release and it has some changes in kernel which caused GMS module failed to load
on it.

RESOLUTION:
Added code to support GMS on SLES12SP5.

Patch ID: VRTSgms-7.3.1.1200

* 3978691 (Tracking ID: 3973946)

SYMPTOM:
GMS module failed to load on SLES12 SP4

DESCRIPTION:
The SLES12 SP4 is new release and it has 4.12 Linux kernel therefore GMS module failed to load
on it.

RESOLUTION:
Enabled GMS support for SLES12 SP4

Patch ID: VRTSglm-7.3.1.1400

* 4000512 (Tracking ID: 3991223)

SYMPTOM:
GLM module failed to load on SLES12SP5.

DESCRIPTION:
The SLES12SP5 is new release and it has some changes in kernel which caused GLM module failed to load
on it.

RESOLUTION:
Added code to support GLM on SLES12SP5.

* 4001069 (Tracking ID: 3927489)

SYMPTOM:
GLM service failed to start during system startup

DESCRIPTION:
The GLM service is depends on LLT and GAB services. Sometimes
during system boot the multi-user.target, GLM, LLT and GAB services creates
cyclic dependency. To break the cyclic dependency systemd doesn't start GLM service.

RESOLUTION:
Removed the dependency of the GLM service on multi-user.target and
graphical.target.

Patch ID: VRTSglm-7.3.1.1300

* 3978693 (Tracking ID: 3973945)

SYMPTOM:
GLM module failed to load on SLES12 SP4

DESCRIPTION:
The SLES12 SP4 is new release and it has 4.12 Linux kernel therefore GLM module failed to load
on it.

RESOLUTION:
Enabled GLM support for SLES12 SP4

Patch ID: VRTSodm-7.3.1.2700

* 3936184 (Tracking ID: 3897161)

SYMPTOM:
Oracle Database on Veritas filesystem with Veritas ODM library has high
log file sync wait time.

DESCRIPTION:
The ODM_IOP lock is never held for long. Therefore, instead of
attempting a trylock and deferring the I/O when the trylock fails, it is
better to take the blocking lock and finish the I/O in the interrupt
context. This is safe on Solaris because this "sleep" lock is actually an
adaptive mutex.

RESOLUTION:
Call ODM_IOP_LOCK() instead of ODM_IOP_TRYLOCK() in odm_iodone
and finish the I/O. With this fix, no I/O is deferred.

* 4000511 (Tracking ID: 3990489)

SYMPTOM:
ODM module failed to load on SLES12SP5.

DESCRIPTION:
SLES12 SP5 is a new release with kernel changes that caused the ODM module to fail to load
on it.

RESOLUTION:
Added code to support ODM on SLES12SP5.

Patch ID: VRTSodm-7.3.1.2500

* 3978694 (Tracking ID: 3973944)

SYMPTOM:
ODM module failed to load on SLES12 SP4

DESCRIPTION:
SLES12 SP4 is a new release based on the 4.12 Linux kernel; the ODM module therefore failed to load
on it.

RESOLUTION:
Enabled ODM support for SLES12 SP4

* 3981458 (Tracking ID: 3980810)

SYMPTOM:
ODM write processes may hang, followed by a crash with the panic string "Kernel panic - not syncing: Hard LOCKUP".

DESCRIPTION:
This is an ABBA deadlock: thread A holds a spinlock and waits for a sleep lock, while the sleep lock's owner, B, is interrupted, and the interrupt
service routine requires the spinlock held by A. Because A spins on the CPU continuously, the system crashes with a hard lockup.

RESOLUTION:
Code has been modified to resolve this deadlock.

Patch ID: VRTSodm-7.3.1.100

* 3939411 (Tracking ID: 3941018)

SYMPTOM:
VRTSodm driver will not load with 7.3.1.100 VRTSvxfs patch.

DESCRIPTION:
VRTSodm needs to be recompiled because of recent changes in the VRTSvxfs
header files, due to which some symbols are not resolved.

RESOLUTION:
Recompiled VRTSodm against the updated VRTSvxfs header files.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that installing this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles12_x86_64-Patch-7.3.1.3200.tar.gz to /tmp
2. Untar infoscale-sles12_x86_64-Patch-7.3.1.3200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles12_x86_64-Patch-7.3.1.3200.tar.gz
    # tar xf /tmp/infoscale-sles12_x86_64-Patch-7.3.1.3200.tar
3. Install the hotfix (note that installing this P-Patch causes downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale731P3200 [<host1> <host2>...]

You can also install this patch together with the 7.3.1 base release by using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.3.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE