sfha-rhel7.9_x86_64-Patch-6.2.1.7300

 Basic information
Release type: Patch
Release date: 2021-08-24
OS update support: None
Technote: None
Documentation: None
Download size: 315.08 MB
Checksum: 3002157000

 Applies to one or more of the following products:
Application HA 6.2 On RHEL7 x86-64
Cluster Server 6.2 On RHEL7 x86-64
Dynamic Multi-Pathing 6.2 On RHEL7 x86-64
File System 6.2 On RHEL7 x86-64
Storage Foundation 6.2 On RHEL7 x86-64
Storage Foundation Cluster File System 6.2 On RHEL7 x86-64
Storage Foundation for Oracle RAC 6.2 On RHEL7 x86-64
Storage Foundation HA 6.2 On RHEL7 x86-64
Volume Manager 6.2 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches (release date shown after each):
vcsag-rhel7_x86_64-Patch-6.2.1.100 (obsolete) 2017-06-15
vxfen-rhel7_x86_64-Patch-6.2.1.200 (obsolete) 2017-04-24
fs-rhel7_x86_64-Patch-6.2.1.300 (obsolete) 2017-04-12
odm-rhel7_x86_64-Patch-6.2.1.300 (obsolete) 2017-04-12
llt-rhel7_x86_64-Patch-6.2.1.800 (obsolete) 2017-02-09
llt-rhel7_x86_64-Patch-6.2.1.600 (obsolete) 2016-12-01
gab-rhel7_x86_64-Patch-6.2.1.300 (obsolete) 2016-05-04
fs-rhel7_x86_64-Patch-6.2.1.100 (obsolete) 2015-09-02

 Fixes the following incidents:
3093833, 3657150, 3657151, 3657152, 3657153, 3657156, 3657157, 3657158, 3657159, 3661452, 3662745, 3665980, 3665984, 3665990, 3666007, 3666008, 3666010, 3683470, 3687679, 3691204, 3697142, 3697966, 3698165, 3699148, 3703631, 3706864, 3710794, 3714463, 3715567, 3716627, 3717895, 3718542, 3721458, 3726403, 3727166, 3729704, 3733811, 3733812, 3737330, 3743913, 3744425, 3745651, 3749727, 3749776, 3753724, 3754492, 3755796, 3756002, 3765324, 3765998, 3769992, 3793241, 3794201, 3798437, 3807627, 3808285, 3817120, 3817229, 3821688, 3852346, 3852524, 3859708, 3868412, 3869158, 3875807, 3876001, 3877717, 3892587, 3896150, 3896151, 3896154, 3896156, 3896160, 3896223, 3896231, 3896261, 3896267, 3896269, 3896270, 3896273, 3896277, 3896281, 3896285, 3896303, 3896304, 3896306, 3896308, 3896310, 3896311, 3896312, 3896313, 3896314, 3901379, 3903657, 3904841, 3905056, 3905062, 3905431, 3906065, 3906148, 3906409, 3906410, 3906411, 3906412, 3906846, 3906961, 3907350, 3907359, 3908111, 3912033, 3916871, 3917204, 3926161, 3926166, 3926440, 3927027, 3927028, 3927029, 3927092, 3945129, 3945130, 3945676, 3945678, 3945680, 3945681, 3945682, 3947952, 3967008, 3967009, 3967012, 3967013, 3967015, 3967019, 3967020, 3967021, 3967022, 3967023, 3967024, 3967025, 3967026, 3967028, 3967029, 3967183, 3967184, 3967189, 3967190, 3967348, 3967349, 3967350, 3967351, 3967352, 3972960, 3980789, 3983567, 3983574, 3984272, 3984273, 3986744, 3986745, 3986746, 3986747, 3986748, 3986836, 3990432, 4000911, 4004399, 4009064, 4009065, 4009159, 4009162, 4009169, 4010339, 4010341, 4010346, 4010347, 4010348, 4018792, 4042461, 4042463, 4042464, 4042692, 4042767, 4042768, 4042769, 4043052, 4043056, 4043118, 4043275, 4043355, 4043384, 4043856, 4043865, 4043874, 4045974, 4045975, 4045976, 4045977, 4045978, 4046032

 Patch ID:
VRTSvxfs-6.2.1.8600-RHEL7
VRTSodm-6.2.1.8600-RHEL7
VRTSaslapm-6.2.1.8600-RHEL7
VRTSvxvm-6.2.1.8600-RHEL7
VRTSllt-6.2.1.9800-RHEL7
VRTSgab-6.2.1.9500-RHEL7
VRTSamf-6.2.1.9400-RHEL7
VRTSvxfen-6.2.1.7500-RHEL7
VRTSvcs-6.2.1.1200-RHEL7
VRTSdbac-6.2.1.8400-RHEL7
VRTSvcsag-6.2.1.1200-RHEL7

Readme file
                          * * * READ ME * * *
            * * * Symantec Storage Foundation HA 6.2.1 * * *
                         * * * Patch 7300 * * *
                         Patch Date: 2021-08-06


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Storage Foundation HA 6.2.1 Patch 7300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSllt
VRTSodm
VRTSvcs
VRTSvcsag
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Application HA 6.2
   * Symantec Cluster Server 6.2
   * Symantec Dynamic Multi-Pathing 6.2
   * Symantec File System 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2
   * Symantec Volume Manager 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-6.2.1.8600
* 4043355 (3919559) IO hangs after pulling out all cables when VVR is reconfigured.
* 4043856 (4012763) IO hang may happen in VVR (Veritas Volume Replicator) configuration when SRL overflows for one rlink while another rlink is in AUTOSYNC mode.
* 4043865 (4010458) In VVR (Veritas Volume Replicator), the rlink might inconsistently disconnect due to unexpected transactions.
Patch ID: VRTSvcsag-6.2.1.1200
* 4042461 (3994836) If an IPv4 address is not associated with a device IP, the resource goes into UNKNOWN state and interrupts failover.
* 4042767 (3986741) In a replication environment, an LVMVolumeGroup resource faults when multi-pathing is enabled for LVM disks within the group.
* 4042768 (3856669) On RHEL 7 systems, the Mount agent fails to unmount a checkpoint even though the CkptUmount attribute is set to 1.
Patch ID: VRTSvcsag-6.2.1.100
* 3807627 (3521067) If a VMwareDisks resource is online on a virtual machine, the virtual machine shutdown operation hangs.
* 3852346 (3852345) DiskGroup agent fails to log a message before powering off the cluster node if the PanicSystemOnDGLoss attribute is set and the DiskGroup is disabled.
* 3852524 (3852521) VCS cluster becomes unavailable after virtual machine shutdown is initiated through the vSphere UI.
* 3859708 (3870031) diff_sync is not invoked for additional secondaries.
* 3869158 (3869156) When a virtual machine (VM) loses network connection to the ESX host, the VMwareDisks agent is unable to detach the disks from the VM. The VMwareDisk resources go into the Administrative intervention state.
* 3876001 (3876000) The VMwareDisks agent always attaches disks with the persistent mode.
* 3877717 (3881394) RVGSharedPri agent does not support multiple-secondary configurations in a CVM environment.
* 3892587 (3880663) After switching the VVR Primary role from one geography to another, VVR replication might take some time to start after the VVR role migration.
* 3905062 (3905061) vxvm-recover is unable to find the disk group and displays an error.
* 3908111 (3908109) Mount agent support for NFS over IPv6.
* 3916871 (3916870) LVMVolumeGroup agent reports the ONLINE state even if a LUN is removed from the cluster node.
* 3917204 (3919377) Enable auto sync as an option after the Primary takes over, for the VVR RVGSharedPri and RVGPrimary agents.
Patch ID: VRTSvcsag-6.2.1.000
* 3662745 (3520211) The HostMonitor agent reports incorrect memory usage.
* 3699148 (3699146) The Application agent reports an application resource as
offline even when the resource is online.
Patch ID: VRTSdbac-6.2.1.8400
* 4045978 (4013953) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).
Patch ID: VRTSdbac-6.2.1.8300
* 4010348 (3998676) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).
Patch ID: VRTSdbac-6.2.1.8100
* 3986748 (3982213) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSdbac-6.2.1.800
* 3967352 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSdbac-6.2.1.500
* 3945682 (3944179) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSdbac-6.2.1.400
* 3927092 (3925832) vcsmm module does not load with RHEL7.4
Patch ID: VRTSvcs-6.2.1.1200
* 4042464 (4026819) When IPv6 is disabled, non-root guest users cannot run HAD CLI commands.
* 4042769 (3995685) Discrepancy between the engine log messages at the PR and DR sites in a GCO configuration.
Patch ID: VRTSvxfen-6.2.1.7500
* 4045977 (4013953) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).
Patch ID: VRTSvxfen-6.2.1.7400
* 4010347 (3998676) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).
Patch ID: VRTSvxfen-6.2.1.7200
* 3986746 (3982213) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSvxfen-6.2.1.900
* 3967350 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSvxfen-6.2.1.600
* 3945680 (3944179) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSvxfen-6.2.1.500
* 3927028 (3923100) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).
Patch ID: VRTSvxfen-6.2.1.300
* 3906411 (3896877) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).
Patch ID: VRTSvxfen-6.2.1.200
* 3912033 (2852872) Fencing sometimes shows "replaying" RFSM state for some nodes
in the cluster.
Patch ID: VRTSvxfen-6.2.1.100
* 3794201 (3794154) Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7
(RHEL6.7).
Patch ID: VRTSvxfen-6.2.1.000
* 3691204 (3691202) Sometimes fencing in the customized mode fails to start when the number of files present in the current working directory is large.
* 3714463 (3722178) The rpm --verify command on VXFEN changes the runlevel settings for the VXFEN service.
Patch ID: VRTSamf-6.2.1.9400
* 4018792 (4018791) A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.
* 4045976 (4013953) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).
Patch ID: VRTSamf-6.2.1.9300
* 4010346 (3998676) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).
Patch ID: VRTSamf-6.2.1.9100
* 3986747 (3982213) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSamf-6.2.1.900
* 3967351 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSamf-6.2.1.600
* 3945681 (3944179) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
* 3947952 (3947895) Select the appropriate module to load when the exact module version is not available.
Patch ID: VRTSamf-6.2.1.500
* 3927029 (3923100) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).
Patch ID: VRTSamf-6.2.1.300
* 3906412 (3896877) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).
Patch ID: VRTSamf-6.2.1.000
* 3661452 (3652819) VxFS module fails to unload because the AMF module fails to decrement the reference count.
Patch ID: VRTSgab-6.2.1.9500
* 4042463 (4011683) The GAB module failed to start and the system log messages indicate failures with the mknod command.
* 4045975 (4013953) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).
Patch ID: VRTSgab-6.2.1.9400
* 4010341 (3998676) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).
Patch ID: VRTSgab-6.2.1.9200
* 3986744 (3982213) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSgab-6.2.1.910
* 3967349 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSgab-6.2.1.700
* 3945678 (3944179) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSgab-6.2.1.600
* 3927027 (3923100) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).
Patch ID: VRTSgab-6.2.1.400
* 3906410 (3896877) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).
Patch ID: VRTSgab-6.2.1.300
* 3875807 (3875805) In some rare cases, if a few unicast messages are stuck in 
the Group Membership Atomic Broadcast (GAB) receive queue of a port, the port 
might receive a GAB I/O fence message.
Patch ID: VRTSllt-6.2.1.9800
* 4042692 (3992144) The system may panic while freeing LLT packets.
* 4045974 (4013953) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).
Patch ID: VRTSllt-6.2.1.9700
* 4010339 (3998676) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).
Patch ID: VRTSllt-6.2.1.9500
* 3986745 (3982213) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSllt-6.2.1.940
* 3967348 (3967265) Support for RHEL 7.6 and RHEL 7.x RETPOLINE kernels.
Patch ID: VRTSllt-6.2.1.910
* 3945676 (3944179) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).
Patch ID: VRTSllt-6.2.1.900
* 3926440 (3923100) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).
Patch ID: VRTSllt-6.2.1.800
* 3905431 (3905430) Application IO hangs in case of FSS with LLT over RDMA during heavy data transfer.
Patch ID: VRTSllt-6.2.1.700
* 3906409 (3896877) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).
Patch ID: VRTSllt-6.2.1.600
* 3905431 (3905430) Application IO hangs in case of FSS with LLT over RDMA during heavy data transfer.
Patch ID: VRTSodm-6.2.1.8600
* 4043118 (3877256) The Linux system panics at vx_uioctl(), which is called from odm_clust_delete().
Patch ID: VRTSodm-6.2.1.8500
* 4009065 (3995697) ODM module failed to load on RHEL7.8.
Patch ID: VRTSodm-6.2.1.8400
* 3984273 (3981630) ODM module failed to load on RHEL7.7
Patch ID: VRTSodm-6.2.1.8200
* 3868412 (3716577) Failed thread fork causing ODM ERROR V-41-4-1-354-22
* 3967184 (3958865) ODM module failed to load on RHEL7.6.
Patch ID: VRTSodm-6.2.1.700
* 3945130 (3938546) ODM module failed to load on RHEL7.5.
Patch ID: VRTSodm-6.2.1.500
* 3926166 (3923310) ODM module failed to load on RHEL7.4.
Patch ID: VRTSodm-6.2.1.300
* 3906065 (3757609) CPU usage going high because of contention over ODM_IO_LOCK
Patch ID: VRTSvxfs-6.2.1.8600
* 4043052 (3946098) Typo in the output of 'mount -v' on Linux.
* 4043056 (4003395) System panicked during a phased upgrade from IS 6.2.1 to IS 7.4.2.
* 4043275 (4000465) The fsck binary loops when it detects a break in the sequence of log IDs.
* 4043384 (3979297) A kernel panic occurs when installing VxFS on RHEL6.
* 4043874 (3915464) vxupgrade from DLV 9 to 10 with partition directories enabled caused system corruption (camode 0xff).
* 4046032 (4042684) ODM resize fails for size 8192.
Patch ID: VRTSvxfs-6.2.1.8500
* 3990432 (3911048) LDH corruption and file system hang.
* 4000911 (3853315) Fsck core dumped in check_dotdot() when processing regular inode with named attribute inode associated.
* 4004399 (3911314) CPU load overhead caused by mounting a VxFS file system.
* 4009064 (3995694) VxFS module failed to load on RHEL7.8.
* 4009159 (3940268) File system might get disabled in case the size of the directory surpasses the
vx_dexh_sz value.
* 4009162 (3947648) Mistuning of vxfs_ninode and vx_bc_bufhwm to very small values.
* 4009169 (3998168) A vxresize operation results in a system freeze for 8-10 minutes, causing application hangs and VCS timeouts.
Patch ID: VRTSvxfs-6.2.1.8400
* 3972960 (3905099) VxFS unmount panicked in deactive_super().
* 3980789 (3980754) In function vx_io_proxy_thread(), the system may hit a kernel panic due to a general protection fault.
* 3983567 (3922986) Deadlock issue with the buffer cache iodone routine in CFS.
* 3983574 (3978149) FIFO file's timestamps are not updated in case of writes.
* 3984272 (3981627) VxFS module failed to load on RHEL7.7.
* 3986836 (3987141) vxcompress operation generates core dumps.
Patch ID: VRTSvxfs-6.2.1.8200
* 3967008 (3908785) System panic observed because of a null page address in the writeback structure in the case of the kswapd process.
* 3967009 (3909553) Use the GAB backenable flow control interface.
* 3967012 (3890701) The write back recovery process got hung.
* 3967013 (3866962) Data corruption seen when dalloc writes are in progress on a file and an fsync is simultaneously started on the same file.
* 3967015 (3931026) Unmounting a CFS hangs on RHEL6.6.
* 3967019 (3908954) Some writes could be missed causing data loss.
* 3967020 (3940516) File resize thread loops infinitely for file resize operation crossing 32 bit
boundary.
* 3967021 (3894223) On servers with multiple terabytes of memory, the vxfs mount command takes a long time to finish.
* 3967022 (3907587) Application may get blocked during File System de-fragmentation.
* 3967023 (3930267) Deadlock between fsq flush threads and writer threads.
* 3967024 (3922259) Force umount hang in vx_idrop
* 3967025 (2780633) Support for multiple device driver (MD)
* 3967026 (3896670) Intermittent CFS hang-like situation with many CFS pglock grant messages pending on the LLT layer.
* 3967028 (3934175) 4-node FSS CFS experienced an I/O hang on all nodes.
* 3967029 (3938256) When checking file size through seek_hole, an incorrect offset/size is returned when delayed allocation is enabled on the file.
* 3967183 (3958853) VxFS module failed to load on RHEL7.6.
* 3967189 (3917538) System hang.
* 3967190 (3933763) Oracle hang in VxFS.
Patch ID: VRTSvxfs-6.2.1.700
* 3945129 (3938544) VxFS module failed to load on RHEL7.5.
Patch ID: VRTSvxfs-6.2.1.500
* 3926161 (3923307) VxFS module failed to load on RHEL7.4.
Patch ID: VRTSvxfs-6.2.1.300
* 3817229 (3762174) fsfreeze and vxdump commands may not work together.
* 3896150 (3833816) Read returns stale data on one node of the CFS.
* 3896151 (3827491) Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.
* 3896154 (1428611) 'vxcompress' can spew many GLM block lock messages over the 
LLT network.
* 3896156 (3633683) vxfs thread consumes high CPU while running an application that makes excessive sync() calls.
* 3896160 (3808033) When using 6.2.1 ODM on RHEL7, Oracle resource cannot be killed after forced umount via VCS.
* 3896223 (3735697) vxrepquota reports error
* 3896231 (3708836) fallocate causes data corruption
* 3896261 (3855726) Panic in vx_prot_unregister_all().
* 3896267 (3861271) Missing an inode clear operation when a Linux inode is being de-initialized on
SLES11.
* 3896269 (3879310) File System may get corrupted after a failed vxupgrade.
* 3896270 (3707662) Race between reorg processing and fsadm timer thread (alarm expiry) leads to panic in vx_reorg_emap.
* 3896273 (3558087) The 'ls -l' and other commands that use the stat system call may take a long time to complete.
* 3896277 (3691633) Remove RCQ Full messages
* 3896281 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3896285 (3757609) CPU usage going high because of contention over ODM_IO_LOCK
* 3896303 (3762125) Directory size increases abnormally.
* 3896304 (3846521) "cp -p" fails if the modification time in nanoseconds has 10 digits.
* 3896306 (3790721) High cpu usage caused by vx_send_bcastgetemapmsg_remaus
* 3896308 (3695367) Unable to remove volume from multi-volume VxFS using "fsvoladm" command.
* 3896310 (3859032) System panics in vx_tflush_map() due to NULL pointer 
de-reference.
* 3896311 (3779916) vxfsconvert fails to upgrade the layout version for a VxFS file system with a large number of inodes.
* 3896312 (3811849) On cluster file system (CFS), while executing lookup() function in a directory
with Large Directory Hash (LDH), the system panics and displays an error.
* 3896313 (3817734) A direct command to run fsck with the -y|Y option was mentioned in the message displayed to the user when a file system mount fails.
* 3896314 (3856363) Filesystem inodes have incorrect blocks.
* 3901379 (3897793) Panic happens because of race where the mntlock ID is 
cleared while mntlock flag still set.
* 3903657 (3857254) Assert failure because of missed flush before taking 
filesnap of the file.
* 3904841 (3901318) VxFS module failed to load on RHEL7.3.
* 3905056 (3879761) Performance issue observed due to contention on vxfs spin lock
vx_worklist_lk.
* 3906148 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3906846 (3872202) VxFS internal test hits an assert.
* 3906961 (3891801) Internal test hit debug assert.
* 3907350 (3817734) A direct command to run fsck with the -y|Y option was mentioned in the message displayed to the user when a file system mount fails.
* 3907359 (3907722) kernel BUG at fs/dcache.c:964.
Patch ID: VRTSvxfs-6.2.1.100
* 3753724 (3731844) umount -r option fails for vxfs 6.2.
* 3754492 (3761603) Internal assert failure because of invalid extop processing 
at the mount time.
* 3756002 (3764824) Internal cluster file system(CFS) testing hit debug assert
* 3765324 (3736398) NULL pointer dereference panic in lazy unmount.
* 3765998 (3759886) In case of nested mount, force umount of parent leaves 
stale child entry in /etc/mtab even after subsequent umount of child.
* 3769992 (3729158) Deadlock due to incorrect locking order between write advise
and dalloc flusher thread.
* 3793241 (3793240) vxrestore command dumps a core file because of invalid Japanese strings.
* 3798437 (3812914) On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.
* 3808285 (3808284) fsdedupadm status Japanese text includes strange character.
* 3817120 (3804400) VRTS/bin/cp does not return any error when quota hard 
limit is reached and partial write is encountered.
* 3821688 (3821686) VxFS module failed to load on SLES11 SP4.
Patch ID: VRTSvxfs-6.2.1.000
* 3093833 (3093821) The system panics due to referring freed super block after the vx_unmount() function errors.
* 3657150 (3604071) High CPU usage consumed by the vxfs thread process.
* 3657151 (3513507) Filesystem umount() may hang due to deadlock in kernel
* 3657152 (3602322) System panics while flushing the dirty pages of the inode.
* 3657153 (3622323) Cluster Filesystem mounted as read-only panics when it gets sharing and/or compression statistics with the fsadm_vxfs(1M) command.
* 3657156 (3604750) The kernel loops during the extent re-org.
* 3657157 (3617191) Checkpoint creation takes a lot of time.
* 3657158 (3601943) Truncating corrupted block map of a file may lead to an infinite loop.
* 3657159 (3633067) While converting from an ext3 file system to VxFS using vxfsconvert, it is observed that many inodes are missing.
* 3665980 (2059611) The system panics due to a NULL pointer dereference while
flushing bitmaps to the disk.
* 3665984 (2439261) When the vx_fiostats_tunable value is changed from zero to
non-zero, the system panics.
* 3665990 (3567027) During the file system resize operation, the "fullfsck" flag is set.
* 3666007 (3594386) On RHEL6u5, stack overflows lead to system panic.
* 3666008 (3616907) System is unresponsive, causing the NMI watchdog service to stall.
* 3666010 (3233276) With a large file system, primary-to-secondary migration takes a longer duration.
* 3683470 (3682138) The VxVM symbols are released before the VxFS modules unload time.
* 3687679 (3685391) Execute permissions for a file not honored correctly.
* 3697142 (3697141) Added support for SLES12.
* 3697966 (3697964) The vxupgrade(1M) command fails to retain the fs_flags after upgrading a file system.
* 3698165 (3690078) The system panics at vx_dev_strategy() routine due to stack overflow.
* 3706864 (3709947) The SSD cache fails to go offline due to additional slashes "//" in the dev path of the cache device.
* 3710794 (3710792) Unmount fails when the mount point contains a special character (colon).
* 3715567 (3715566) VxFS fails to report an error when the maxlink and nomaxlink options are set on file systems having disk layout version (DLV) lower than 10.
* 3716627 (3622326) During a checkpoint promote, a file system is marked with a fullfsck flag because the corresponding inode is marked as bad.
* 3717895 (2919310) During stress testing on cluster file system, an assertion failure was hit because of a missing linkage between the directory and the associated attribute inode.
* 3718542 (3269553) VxFS returns inappropriate message for read of hole via Oracle Disk Manager (ODM).
* 3721458 (3721466) After a file system is upgraded from version 6 to 7, the vxupgrade(1M) command fails to set the VX_SINGLEDEV flag on a superblock.
* 3726403 (3739618) sfcache command with the "-i" option may not show file system cache statistics periodically.
* 3727166 (3727165) Enhance RHEV support for SF devices for identification in Guest
* 3729704 (3719523) 'vxupgrade' retains the superblock replica of old layout versions.
* 3733811 (3729030) The fsdedupschd daemon failed to start on RHEL7.
* 3733812 (3729030) The fsdedupschd daemon failed to start on RHEL7.
* 3737330 (3737329) Added support for RHEL7.1
* 3743913 (3743912) Users could create sub-directories more than 64K for disk layouts having versions lower than 10.
* 3744425 (3744424) Rebundled the fix "Openssl: Common Vulnerability and Exposure (CVE) CVE-2014-3566 (POODLE)" as 6.2.1.
* 3745651 (3642314) Umount operation reports error code 255 in case of write-back cache.
* 3749727 (3750187) Internal noise testing hits debug assert.
* 3749776 (3637636) Cluster File System (CFS) node initialization and protocol upgrade may hang during the rolling upgrade.
* 3755796 (3756750) VxFS may leak memory when File Design Driver (FDD) module is unloaded before the cache file system is taken offline.
Patch ID: VRTSvxfs-6.2.0.100
* 3703631 (3615043) Data loss when writing to a file while dalloc is on.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-6.2.1.8600

* 4043355 (Tracking ID: 3919559)

SYMPTOM:
I/O hangs after pulling out all cables when VVR (Veritas Volume Replicator) is reconfigured.

DESCRIPTION:
When VVR is configured and the SRL (Storage Replicator Log) batch feature is enabled, after all cables are pulled out, if more than one I/O gets queued in VVR before a header error, then due to a bug in VVR at least one I/O is not handled, hence the issue.

RESOLUTION:
The code has been modified so that every I/O queued in VVR is handled properly.

* 4043856 (Tracking ID: 4012763)

SYMPTOM:
I/O hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.

DESCRIPTION:
In VVR, if the SRL overflow happens for an rlink (R1) while some other rlink (R2) is undergoing AUTOSYNC, then AUTOSYNC is aborted for R2, R2 gets detached, and DCM mode is activated on the R1 rlink.

However, due to a race condition in the code handling the AUTOSYNC abort and the DCM activation in parallel, the DCM could not be activated properly, and the I/O that caused the DCM activation gets queued incorrectly; this results in an I/O hang.

RESOLUTION:
The code has been modified to fix the race issue in handling the AUTOSYNC abort and the DCM activation at the same time.

* 4043865 (Tracking ID: 4010458)

SYMPTOM:
In VVR (Veritas Volume Replicator), the rlink might inconsistently disconnect due to unexpected transactions, with the following message:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

DESCRIPTION:
In VVR (Veritas Volume Replicator), a transaction is triggered when a change in the VxVM/VVR objects needs to be persisted on disk.

In some scenarios, a few unnecessary transactions were getting triggered in a loop. This was causing multiple rlink disconnects, with the following message logged frequently:
VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed

One such unexpected transaction was happening due to open/close operations on a volume as part of SmartIO caching. Additionally, the vradmind daemon was also issuing some open/close operations on volumes as part of I/O statistics collection, which caused unnecessary transactions.

Some unexpected transactions were also happening due to incorrect checks in code related to some temporary flags on a volume.

RESOLUTION:
The code is fixed to disable SmartIO caching on the volumes if SmartIO caching is not configured on the system. Additionally, the code is fixed to avoid the unexpected transactions caused by incorrect checking of the temporary flags on a volume.

Patch ID: VRTSvcsag-6.2.1.1200

* 4042461 (Tracking ID: 3994836)

SYMPTOM:
If an IPv4 address is not associated with a device IP, the resource goes into UNKNOWN state and interrupts failover.

DESCRIPTION:
An IP address is used to check the presence of a NIC device. If an IPv4 address is not plumbed on a device or the base IPv4 address gets detached from the device, the monitor function cannot detect the NIC device. In such a case, the monitor function returns the UNKNOWN state and interrupts the failover.

RESOLUTION:
The Monitor function is modified to check the current and the previous state of the resource and change it to OFFLINE if the previous state of the resource was ONLINE.
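
For reference, whether an IPv4 address is currently plumbed on a device can be checked by hand with the ip command; the device name eth0 below is only an example, and empty output means no IPv4 address is configured on that device:

    ip -4 addr show dev eth0    # empty output: no IPv4 address plumbed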

* 4042767 (Tracking ID: 3986741)

SYMPTOM:
In a replication environment, an LVMVolumeGroup resource faults when multi-pathing is enabled for LVM disks within the group.

DESCRIPTION:
After the replication agent resource fails over to the secondary site, the LVMVolumeGroup agent does not rescan all the devices in case of a multi-pathing configuration. The vgimport command fails because the correct device state for all the paths is not known.

RESOLUTION:
The LVMVolumeGroup agent is enhanced to rescan all the paths of a device or disk before a disk group is imported.
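
As an illustration of the rescan-before-import sequence described above, a manual equivalent on a typical Linux multipath setup might look like the sketch below; the volume group name examplevg and the use of the sysfs rescan attribute are assumptions, not the agent's exact implementation:

    # Rescan every SCSI device path known to the kernel.
    for dev in /sys/class/scsi_device/*/device/rescan; do
        echo 1 > "$dev"
    done
    multipath -r        # reload the multipath maps so path states are current
    vgimport examplevg  # import the volume group only after the paths are refreshed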

* 4042768 (Tracking ID: 3856669)

SYMPTOM:
On RHEL 7 systems, the Mount agent fails to unmount a checkpoint even though the CkptUmount attribute is set to 1.

DESCRIPTION:
This issue occurs due to the change in the file system type. Along with the value of CkptUmount, the Mount agent takes the file system type into consideration. The value of the "vxclonefs" checkpoint appears as "file system" on RHEL 7 systems, whereas it appears as "vxfs" on RHEL 6 systems. Due to this mismatch, the Mount agent fails to unmount the checkpoint.

RESOLUTION:
The Mount agent is updated to unmount the checkpoint correctly when CkptUmount is set to 1.

Patch ID: VRTSvcsag-6.2.1.100

* 3807627 (Tracking ID: 3521067)

SYMPTOM:
When a VMwareDisks resource is online on a virtual machine, if you initiate a shutdown for that virtual machine from the vSphere UI, the shutdown operation hangs.

DESCRIPTION:
During a virtual machine (VM) shutdown, Veritas Cluster Server (VCS) tries to offline the VMwareDisks resources. As part of the offline operation, the VMwareDisks agent attempts to detach the configured disks. VMware does not allow disk operations during a VM shutdown. As a result, the detach operation fails, and effectively the resource offline operation also fails. The VM continues to wait until the offline operation is complete (through manual intervention).

RESOLUTION:
The VMwareDisks agent is modified to fix this issue. The agent now allows shutdown of a virtual machine even if the detach of the configured disks fails. The disks are later detached when VCS brings the resource online on the failover VM.

* 3852346 (Tracking ID: 3852345)

SYMPTOM:
DiskGroup agent fails to log a message before powering off the cluster node if 
the PanicSystemOnDGLoss attribute is set and the DiskGroup is disabled.

DESCRIPTION:
If the DiskGroup goes into disabled state when the PanicSystemOnDGLoss 
attribute is set for the DiskGroup resource, the DiskGroup agent fails to log 
a message as it tries to power off the cluster node.

RESOLUTION:
The code is modified to log a message in the system log after the reboot of 
the cluster node.

* 3852524 (Tracking ID: 3852521)

SYMPTOM:
After vSphere UI initiates shutdown operation of the virtual 
machine in which the VMwareDisks resource was online, the VCS cluster 
becomes unavailable because cluster nodes may go down.

DESCRIPTION:
During the virtual machine (VM) shutdown where VMwareDisks 
resource was online, VCS tries to bring the VMwareDisks resource online on 
the failover cluster node. The VMware attach operation during online may 
take a long time if disks are still attached to the source node and 
potentially hang the failover system, thereby triggering Low Latency 
Transport (LLT) heartbeat loss. This loss can initiate fencing action that 
may panic the failover node, thus jeopardizing the cluster.

RESOLUTION:
The VMwareDisks agent is modified to call VMware attach 
operation only when disks are not attached to any other node. This new 
behavior eliminates the situation where attach operation may hang the system 
and cause heartbeat loss.

* 3859708 (Tracking ID: 3870031)

SYMPTOM:
Additional secondaries are in disconnected state.

DESCRIPTION:
In the Perl module RVGPrimaryAgent.pm, both STDOUT and STDERR were closed before a Perl script was executed from within the module:

    close STDOUT;
    close STDERR;

Because STDOUT and STDERR were closed, the Perl script (the startrep CLI in diff_sync) was not invoked, which left the additional secondaries in the disconnected state.

RESOLUTION:
RVGPrimaryAgent.pm is updated to redirect STDOUT to /dev/null instead of closing it; with this change the additional secondaries come up in the connected state.
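
The effect of invoking an external command with a closed STDOUT can be reproduced with a small shell experiment (purely illustrative; date stands in for the startrep CLI):

    ( exec 1>&- ; date ) || echo "child failed with stdout closed" >&2
    ( exec 1>/dev/null ; date ) && echo "child ran with stdout on /dev/null" >&2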

* 3869158 (Tracking ID: 3869156)

SYMPTOM:
When a virtual machine (VM) loses network connection to the ESX 
host, the VMwareDisks agent is unable to detach the disks from the VM. The VMwareDisk resources 
go into Administrative intervention state.

DESCRIPTION:
When a VM loses network connection to the ESX host, the VMwareDisks agent is unable to log in to the ESX host to check the state of the disks. As a result, the VMwareDisk resources go into the UNKNOWN state. The agent is also unable to take the resources offline because the connection with the ESX host is lost, which makes it impossible to receive instructions to detach the disks from the VM. The resources go into the Administrative intervention state.

RESOLUTION:
The code is modified so that it allows service group failover in such 
cases because other nodes in the cluster might be able to communicate with the ESX 
host.

* 3876001 (Tracking ID: 3876000)

SYMPTOM:
The VMwareDisks agent always attaches disks with the persistent mode.

DESCRIPTION:
The VMwareDisks agent does not support attaching disks with the independent persistent or independent nonpersistent mode. If the disk mode is not set to persistent outside of VCS, the VMwareDisks agent does not preserve the disk mode across the cluster.

RESOLUTION:
The code is modified so that the VMwareDisks agent can support 
different disk modes.

* 3877717 (Tracking ID: 3881394)

SYMPTOM:
The RVGSharedPri agent does not support multiple-secondary configurations in a CVM environment.

DESCRIPTION:
The RVGSharedPri agent supports only one secondary, so if there are multiple secondaries in the configuration, the agent fails to change the role from secondary to primary.

RESOLUTION:
The RVGSharedPri agent is updated to support multiple-secondary environments.

* 3892587 (Tracking ID: 3880663)

SYMPTOM:
After switching the VVR Primary role from one geography to another, the VVR role migration takes place quickly; however, VVR replication might take some time to start from the new Primary to the other Secondaries.

DESCRIPTION:
In a multiple-geography cluster environment, after a Global service group is switched to another site, the Global service groups and the NetBackup service come online and become available for use quickly, but it takes time to start the replication from the new VVR Primary site to the other Secondary sites. This is because the RVGPrimary agent performs a differential-based sync operation, which might take some time to finish depending on the size of the data volumes configured for replication.

RESOLUTION:
Code changes have been made to avoid using diff sync for the migrate operation in the RVGPrimary agent.

* 3905062 (Tracking ID: 3905061)

SYMPTOM:
The DiskGroup agent attempts to stop all the volumes and simultaneously
deport the disk group. If the disk group has been deported and if all the
volumes have not been stopped, the following error message is displayed: 
VxVM vxdg ERROR V-5-1-582 Disk group a01dg: No such disk group

DESCRIPTION:
The DiskGroup agent stopall command internally triggers events to vxvm-recover,
but since the disk group has already been deported, vxvm-recover is unable to
find the disk group and displays an error.

RESOLUTION:
The DiskGroup agent code is modified to check whether all volumes have been
stopped before deporting the disk group.
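
A manual equivalent of the corrected ordering looks roughly like the sketch below; the disk group name a01dg comes from the error message above, and the assumption that KSTATE is the fourth column of headerless vxprint -v output is illustrative rather than a statement about the agent's implementation:

    DG=a01dg
    vxvol -g "$DG" stopall      # stop all volumes in the disk group first
    # Deport only if no volume in the group is still started (ENABLED).
    if vxprint -qg "$DG" -v | awk '$4 == "ENABLED" { found=1 } END { exit found }'; then
        vxdg deport "$DG"
    else
        echo "volumes in $DG still started; not deporting" >&2
    fi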

* 3908111 (Tracking ID: 3908109)

SYMPTOM:
If the NFS server is configured over IPv6, the Mount agent does not work for NFS
share mount points.

DESCRIPTION:
If the NFS server is configured over IPv6, the Mount agent is unable to monitor mount resources because of a parsing error in the agent's MonitorProgram.

RESOLUTION:
Mount agent is modified to fix this issue.
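
For context, an NFS share exported over IPv6 is specified with a bracketed address, and the embedded colons are what a naive colon-based parser of mount output mishandles; the address and paths below are illustrative only:

    mount -t nfs '[fd00:1234::1]:/export' /mnt/share
    # A corresponding mount-table entry then looks like:
    #   [fd00:1234::1]:/export /mnt/share nfs rw 0 0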

* 3916871 (Tracking ID: 3916870)

SYMPTOM:
The LVMVolumeGroup agent reports the resource state as ONLINE even after a LUN is removed from the cluster node.

DESCRIPTION:
When a LUN is removed, the LVMVolumeGroup agent cannot identify the device-level metadata inconsistencies. This results in an incorrect state being displayed for the resource.

RESOLUTION:
The LVMVolumeGroup agent code is modified to identify metadata inconsistencies and to correct the issue.

* 3917204 (Tracking ID: 3919377)

SYMPTOM:
The RVGSharedPri and RVGPrimary agents currently support only diff sync. When the VVR (Veritas Volume Replicator) Primary role changes, diff sync has no convenient way to track its progress and takes much longer than auto sync to resync all the data volumes.

DESCRIPTION:
When the VVR (Veritas Volume Replicator) Primary role changes, a resync of the data volumes is required between the new Primary node and the other Secondary nodes. The VCS (Veritas Cluster Server) RVGSharedPri and RVGPrimary agents currently support only diff sync for this resync. In most situations, auto sync is smarter and performs better than diff sync, and VVR also has a command to track auto sync progress.

RESOLUTION:
The RVGSharedPri and RVGPrimary agent scripts have been enhanced to support auto sync as an option. A new attribute, ResyncType, is introduced in the RVGSharedPri and RVGPrimary agents so that the user can choose auto sync (1) or diff sync (0). By default, diff sync is used for the resync. To use auto sync:
        # haconf -makerw
        # hares -modify <RVGSharedPri_Resource_name> ResyncType 1
        # hares -modify <RVGPrimary_Resource_name> ResyncType 1
        # haconf -dump -makero
        # hares -value <RVGSharedPri_Resource_name> ResyncType
        # hares -value <RVGPrimary_Resource_name> ResyncType

To track auto sync progress:
        # vxrlink -g <dg name> -i <time interval> status <rlk name>
        # vradmin -g <dg name> repstatus <rvg name>


Patch ID: VRTSvcsag-6.2.1.000

* 3662745 (Tracking ID: 3520211)

SYMPTOM:
The HostMonitor agent reports incorrect memory usage.

DESCRIPTION:
The issue was observed because the HostMonitor agent for Linux did not consider the buffer and cached memory while calculating the available free memory for the system.

RESOLUTION:
The code is modified to calculate the available free memory considering the available buffer and cache memory.

* 3699148 (Tracking ID: 3699146)

SYMPTOM:
Occasionally, the Application agent reports an application resource as 
offline even when the resource is online.

DESCRIPTION:
The Application agent was incorrectly comparing running processes, 
due to which, occasionally, the application resource is wrongly reported as 
offline even when the resource is online.

RESOLUTION:
The code is modified to report the correct state of the application.

Patch ID: VRTSdbac-6.2.1.8400

* 4045978 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSdbac-6.2.1.8300

* 4010348 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSdbac-6.2.1.8100

* 3986748 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSdbac-6.2.1.800

* 3967352 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6, which has a RETPOLINE kernel, and has also released RETPOLINE kernels for older RHEL 7.x updates. Veritas Cluster Server kernel modules need to be recompiled with a RETPOLINE-aware GCC to support RETPOLINE kernels.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSdbac-6.2.1.500

* 3945682 (Tracking ID: 3944179)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
5(RHEL7.5) is now introduced.

Patch ID: VRTSdbac-6.2.1.400

* 3927092 (Tracking ID: 3925832)

SYMPTOM:
vcsmm module does not load with RHEL7.4

DESCRIPTION:
RHEL7.4 is a new release, and therefore the vcsmm module failed to load on it.

RESOLUTION:
The VRTSdbac package is re-compiled with RHEL7.4 kernel (3.10.0-693.el7.x86_64)
in the build environment to mitigate the failure.

Patch ID: VRTSvcs-6.2.1.1200

* 4042464 (Tracking ID: 4026819)

SYMPTOM:
Non-root users of GuestGroup in a secure cluster cannot execute VCS commands like "hagrp -state".

DESCRIPTION:
When a non-root guest user runs a HAD CLI command, the command fails to execute and the following error is logged: "VCS ERROR V-16-1-10600 Cannot connect to VCS engine". This issue occurs when IPv6 is disabled.

RESOLUTION:
This hotfix updates the VCS module to run HAD CLI commands successfully even when IPv6 is disabled.

* 4042769 (Tracking ID: 3995685)

SYMPTOM:
A discrepancy was observed between the VCS engine log messages at the primary site and those at the DR site in a GCO configuration.

DESCRIPTION:
If a resource that was online at the primary site is taken offline outside VCS control, the VCS engine logs the messages related to the unexpected change in the state of the resource, the successful execution of the clean entry point, and so on. The messages clearly indicate that the resource is faulted. However, the VCS engine does not log any debugging error messages regarding the fault on the primary site, but instead logs them at the DR site. Consequently, there is a discrepancy between the engine log messages at the primary site and those at the DR site.

RESOLUTION:
The VCS engine module is updated to log the appropriate debugging error messages at the primary site when a resource goes into the Faulted state.


Patch ID: VRTSvxfen-6.2.1.7500

* 4045977 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSvxfen-6.2.1.7400

* 4010347 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSvxfen-6.2.1.7200

* 3986746 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSvxfen-6.2.1.900

* 3967350 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6, which has a RETPOLINE kernel, and has also released RETPOLINE kernels for older RHEL 7.x updates. Veritas Cluster Server kernel modules need to be recompiled with a RETPOLINE-aware GCC to support RETPOLINE kernels.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSvxfen-6.2.1.600

* 3945680 (Tracking ID: 3944179)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
5(RHEL7.5) is now introduced.

Patch ID: VRTSvxfen-6.2.1.500

* 3927028 (Tracking ID: 3923100)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
4(RHEL7.4) is now introduced.

Patch ID: VRTSvxfen-6.2.1.300

* 3906411 (Tracking ID: 3896877)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSvxfen-6.2.1.200

* 3912033 (Tracking ID: 2852872)

SYMPTOM:
Veritas Fencing command "vxfenadm -d" sometimes shows "replaying" RFSM
state for some nodes in the cluster.

DESCRIPTION:
During cluster startup, sometimes fencing RFSM keeps showing
"replaying" state for a node, but in fact the node has entered "running" state.

RESOLUTION:
The code is modified so that now fencing does not show incorrect RFSM
state for a node.

Patch ID: VRTSvxfen-6.2.1.100

* 3794201 (Tracking ID: 3794154)

SYMPTOM:
Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7
(RHEL6.7).

DESCRIPTION:
VCS did not support RHEL versions released after RHEL6 Update 6.

RESOLUTION:
VCS support for Red Hat Enterprise Linux 6 Update 7 (RHEL6.7) is now introduced.

Patch ID: VRTSvxfen-6.2.1.000

* 3691204 (Tracking ID: 3691202)

SYMPTOM:
Sometimes fencing fails to start and reports an error that it was unable to register with a majority of the coordination points.

DESCRIPTION:
The failure was observed due to an issue in the fencing startup script where a wildcard expansion in an echo expression produced some undesired value. The undesired value caused the fencing startup failure.

RESOLUTION:
The fencing code is modified so that the wildcard in the echo expression does not get expanded.
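
This class of bug is easy to demonstrate in the shell (the variable value below is made up); unquoted, the asterisk may be expanded against files in the current directory, while quoting preserves the literal string:

    val='/dev/vx/rdmp/*'
    echo $val      # unquoted: may expand to matching file names - an undesired value
    echo "$val"    # quoted: always prints the literal /dev/vx/rdmp/*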

* 3714463 (Tracking ID: 3722178)

SYMPTOM:
The runlevel setting for VXFEN service changes upon running the rpm --verify command on VXFEN.

DESCRIPTION:
The unexpected behavior is observed due to an issue in the VXFEN spec file which was turning on the VXFEN service.

RESOLUTION:
The VXFEN spec file code is modified so that the runlevel settings of VXFEN service are not modified during the verification phase.
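
One way to confirm the corrected behavior by hand is to compare the service's runlevel listing before and after verification; this assumes the SysV service is registered as vxfen, and the two listings should be identical:

    chkconfig --list vxfen      # record the runlevel settings
    rpm --verify VRTSvxfen
    chkconfig --list vxfen      # the listing should be unchanged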

Patch ID: VRTSamf-6.2.1.9400

* 4018792 (Tracking ID: 4018791)

SYMPTOM:
A cluster node panics when the AMF module attempts to access an executable binary or a script using its absolute path.

DESCRIPTION:
A cluster node panics and generates a core dump, which indicates an issue with the AMF module. The AMF module function that locates an executable binary or a script using its absolute path fails to handle NULL values.

RESOLUTION:
The AMF module is updated to handle NULL values when locating an executable binary or a script using its absolute path.

* 4045976 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSamf-6.2.1.9300

* 4010346 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSamf-6.2.1.9100

* 3986747 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSamf-6.2.1.900

* 3967351 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6, which has a RETPOLINE kernel, and has also released RETPOLINE kernels for older RHEL 7.x updates. Veritas Cluster Server kernel modules need to be recompiled with a RETPOLINE-aware GCC to support RETPOLINE kernels.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSamf-6.2.1.600

* 3945681 (Tracking ID: 3944179)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
5(RHEL7.5) is now introduced.

* 3947952 (Tracking ID: 3947895)

SYMPTOM:
The appropriate module must be selected for loading when the exact module version is not available.

DESCRIPTION:
When the exact version of a module is not available and a module compiled for a higher kernel version than the one loaded is available, that module may be loaded first.

RESOLUTION:
The source code is modified to load the appropriate module version
when exact module version for the kernel is not found.

Patch ID: VRTSamf-6.2.1.500

* 3927029 (Tracking ID: 3923100)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
4(RHEL7.4) is now introduced.

Patch ID: VRTSamf-6.2.1.300

* 3906412 (Tracking ID: 3896877)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
3(RHEL7.3) is now introduced.

Patch ID: VRTSamf-6.2.1.000

* 3661452 (Tracking ID: 3652819)

SYMPTOM:
VxFS module fails to unload because the Asynchronous Monitoring Framework(AMF) module does not decrement the reference count of VxFS module.

DESCRIPTION:
Sometimes due to a race between AMF unregister (or get-notification) and unmount of the file systems, the AMF module fails to decrement the reference count of VxFS module.

RESOLUTION:
The AMF module code is modified to resolve the reference count issue.

Patch ID: VRTSgab-6.2.1.9500

* 4042463 (Tracking ID: 4011683)

SYMPTOM:
The GAB module failed to start and the system log messages indicate failures with the mknod command.

DESCRIPTION:
The mknod command fails to start the GAB module because its format is invalid. If the names of multiple drivers in an environment contain the value "gab" as a substring, all their major device numbers get passed on to the mknod command. Instead, the command must contain the major device number for the GAB driver only.

RESOLUTION:
This hotfix addresses the issue so that the GAB module starts successfully even when other driver names in the environment contain "gab" as a substring.
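
The substring pitfall can be seen directly against /proc/devices: a plain grep matches any driver whose name merely contains "gab", while an exact match on the name field yields the single major number that mknod needs. The device path and minor number below are illustrative only:

    grep gab /proc/devices      # also matches any driver name containing "gab"
    major=$(awk '$2 == "gab" { print $1 }' /proc/devices)
    mknod /dev/gab c "$major" 0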

* 4045975 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9(RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9(RHEL7.9) is now introduced.

Patch ID: VRTSgab-6.2.1.9400

* 4010341 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8(RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8(RHEL7.8) is now introduced.

Patch ID: VRTSgab-6.2.1.9200

* 3986744 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7(RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7(RHEL7.7) is now introduced.

Patch ID: VRTSgab-6.2.1.910

* 3967349 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported

DESCRIPTION:
Red Hat has released RHEL 7.6, which has a RETPOLINE kernel, and has also released RETPOLINE kernels for older RHEL 7.x updates. Veritas Cluster Server kernel modules need to be recompiled with a RETPOLINE-aware GCC to support RETPOLINE kernels.

RESOLUTION:
Support for RHEL 7.6 and RETPOLINE kernels on RHEL 7.x kernels is now introduced.

Patch ID: VRTSgab-6.2.1.700

* 3945678 (Tracking ID: 3944179)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5(RHEL7.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
5(RHEL7.5) is now introduced.

Patch ID: VRTSgab-6.2.1.600

* 3927027 (Tracking ID: 3923100)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4(RHEL7.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
4(RHEL7.4) is now introduced.

Patch ID: VRTSgab-6.2.1.400

* 3906410 (Tracking ID: 3896877)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3(RHEL7.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
3 (RHEL7.3) is now introduced.

Patch ID: VRTSgab-6.2.1.300

* 3875807 (Tracking ID: 3875805)

SYMPTOM:
A port on a node receives an I/O fence message when the membership for that 
port changes. This is caused by some unicast messages being stuck in the GAB 
receive queue of that port.

DESCRIPTION:
Under certain rare situations, a few unicast messages that belong to a
future generation get stuck in the GAB receive queue of a port. This has
unintended consequences, such as preventing a RECONFIG message from being
delivered to that port. In this case, the port receives an I/O fence
message from GAB to ensure consistency in membership.

RESOLUTION:
The code is modified to ensure that unicast messages belonging to a
future generation are never left pending in the GAB receive queue of a
port.

Patch ID: VRTSllt-6.2.1.9800

* 4042692 (Tracking ID: 3992144)

SYMPTOM:
The system may panic while freeing LLT packets.

DESCRIPTION:
The vxglm worker thread uses the GAB APIs to free LLT packets. Through
these APIs, GAB attempts to free the LLT packets held in sk_buff
structures, the kernel data structure used by the Linux networking code.
While GAB attempts to free an LLT sk_buff, if it detects an invalid or
NULL value for the security path (that is, the sk_buff->sp pointer), the
system panics.

RESOLUTION:
The LLT module is enhanced to free the sk_buff without accessing the
security path.
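
A minimal kernel-side sketch of the idea, assuming CONFIG_XFRM-era
sk_buff fields; the function name is illustrative and this is not the
shipped LLT code:

    #include <linux/skbuff.h>
    #include <net/xfrm.h>

    /* Hedged sketch: release an LLT packet without dereferencing the
     * security path. secpath_reset() safely drops any secpath
     * reference (it tolerates a NULL sk_buff->sp) before the normal
     * sk_buff free. */
    static void llt_free_pkt(struct sk_buff *skb)
    {
            if (skb == NULL)
                    return;
            secpath_reset(skb);     /* no direct skb->sp access needed */
            kfree_skb(skb);         /* normal sk_buff release path */
    }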

* 4045974 (Tracking ID: 4013953)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 9 (RHEL7.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 8.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
9 (RHEL7.9) is now introduced.

Patch ID: VRTSllt-6.2.1.9700

* 4010339 (Tracking ID: 3998676)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 8 (RHEL7.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
8 (RHEL7.8) is now introduced.

Patch ID: VRTSllt-6.2.1.9500

* 3986745 (Tracking ID: 3982213)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 7 (RHEL7.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
7 (RHEL7.7) is now introduced.

Patch ID: VRTSllt-6.2.1.940

* 3967348 (Tracking ID: 3967265)

SYMPTOM:
RHEL 7.x RETPOLINE kernels and RHEL 7.6 are not supported.

DESCRIPTION:
Red Hat has released RHEL 7.6, which has a RETPOLINE kernel, and has also
released RETPOLINE kernels for older RHEL 7.x updates. Veritas Cluster
Server kernel modules need to be recompiled with a RETPOLINE-aware GCC to
support RETPOLINE kernels.

RESOLUTION:
Support for RHEL 7.6 and for RETPOLINE kernels on RHEL 7.x is now
introduced.

Patch ID: VRTSllt-6.2.1.910

* 3945676 (Tracking ID: 3944179)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 5 (RHEL7.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
5 (RHEL7.5) is now introduced.

Patch ID: VRTSllt-6.2.1.900

* 3926440 (Tracking ID: 3923100)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 4 (RHEL7.4).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 3.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
4 (RHEL7.4) is now introduced.

Patch ID: VRTSllt-6.2.1.800

* 3905431 (Tracking ID: 3905430)

SYMPTOM:
Application IO hangs in the case of FSS with LLT over RDMA during heavy
data transfer.

DESCRIPTION:
In the case of FSS using LLT over RDMA, IO may sometimes hang because of
race conditions in the LLT code.

RESOLUTION:
The LLT module is modified to fix the race conditions that arise under
heavy load with multiple application threads.

Patch ID: VRTSllt-6.2.1.700

* 3906409 (Tracking ID: 3896877)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 7 
Update 3 (RHEL7.3).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL7 Update 2.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 7 Update 
3 (RHEL7.3) is now introduced.

Patch ID: VRTSllt-6.2.1.600

* 3905431 (Tracking ID: 3905430)

SYMPTOM:
Application IO hangs in the case of FSS with LLT over RDMA during heavy
data transfer.

DESCRIPTION:
In the case of FSS using LLT over RDMA, IO may sometimes hang because of
race conditions in the LLT code.

RESOLUTION:
The LLT module is modified to fix the race conditions that arise under
heavy load with multiple application threads.

Patch ID: VRTSodm-6.2.1.8600

* 4043118 (Tracking ID: 3877256)

SYMPTOM:
The panic stack is like this:
vx_uioctl
vx_vop_ioctl
odm_raw_clustered_file
odm_clustered_file
odm_node_get_hold
odm_clust_delete
odm_delete
odm_ioctl_ctl
odm_ioctl_ctl_unlocked
vfs_ioctl
do_vfs_ioctl
sys_ioctl
system_call_fastpath

DESCRIPTION:
Raw devices are not supported on Linux, but this check was missed in the
delete path.

RESOLUTION:
The code is modified to handle the deletion of raw devices properly on
Linux.

Patch ID: VRTSodm-6.2.1.8500

* 4009065 (Tracking ID: 3995697)

SYMPTOM:
ODM module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release, and it has some kernel changes that caused the
ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL7.8.

Patch ID: VRTSodm-6.2.1.8400

* 3984273 (Tracking ID: 3981630)

SYMPTOM:
ODM module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release, and it has some kernel changes that caused the
ODM module to fail to load on it.

RESOLUTION:
Added code to support ODM on RHEL7.7.

Patch ID: VRTSodm-6.2.1.8200

* 3868412 (Tracking ID: 3716577)

SYMPTOM:
After upgrading from SFRAC 6.0.1 to 6.1.1, creating an INDEX on a 100k+
BLOB failed with an error like this:
Errors in file <file name> 
ORA-00600: internal error code, arguments: [ksfd_odmio1], [0x7F5FBBF8FF70], 
[0x7F5FBBF8FF98], [1], [ODM ERROR V-41-4-1-354-22 Invalid argument], [], [], 
[], 
[], [], [], []
...

DESCRIPTION:
The process in question got a SIGALRM during a clone() system call, which
caused the clone() call to back out of what it was doing. This backing
out of the partially-done clone() call confused ODM's
process-exit-detection mechanism, causing it to think that the process
calling clone() is the one being torn down. ODM tears down its kernel
structures for that process, and whenever it next tries to issue an I/O
(more than 30 minutes later in this case), it gets the error.

RESOLUTION:
The code is fixed so that a failed clone() does not confuse ODM.

* 3967184 (Tracking ID: 3958865)

SYMPTOM:
ODM module failed to load on RHEL7.6.

DESCRIPTION:
RHEL7.6 is a new release, and the ODM module failed to load on it.

RESOLUTION:
Added ODM support for RHEL7.6.

Patch ID: VRTSodm-6.2.1.700

* 3945130 (Tracking ID: 3938546)

SYMPTOM:
ODM module failed to load on RHEL7.5.

DESCRIPTION:
RHEL7.5 is a new release, and the ODM module failed to load on it.

RESOLUTION:
Added ODM support for RHEL7.5.

Patch ID: VRTSodm-6.2.1.500

* 3926166 (Tracking ID: 3923310)

SYMPTOM:
ODM module failed to load on RHEL7.4.

DESCRIPTION:
RHEL7.4 is a new release, and the ODM module failed to load on it.

RESOLUTION:
Added ODM support for RHEL7.4.

Patch ID: VRTSodm-6.2.1.300

* 3906065 (Tracking ID: 3757609)

SYMPTOM:
High CPU usage because of contention over ODM_IO_LOCK

DESCRIPTION:
While performing ODM IO, ODM_IO_LOCK is taken to update some of the ODM
counters, which leads to contention when multiple iodones try to update
these counters at the same time. This results in high CPU usage.

RESOLUTION:
Code modified to remove the lock contention.

Patch ID: VRTSvxfs-6.2.1.8600

* 4043052 (Tracking ID: 3946098)

SYMPTOM:
In the output of 'mount -v', the option nommapcio is printed as
'nommpacio'.

DESCRIPTION:
The output of 'mount -v' prints the option nommapcio as 'nommpacio'
because of a typo.

RESOLUTION:
The code is modified to remove the typo.

* 4043056 (Tracking ID: 4003395)

SYMPTOM:
The system panicked while upgrading from IS 6.2.1 to IS 7.4.2 using a
phased upgrade.

DESCRIPTION:
During the upgrade operation, CPI first stops the processes, at which
point the older modules get unloaded. Then the VRTSvxfs package is
uninstalled. During uninstallation the vxfs-unconfigure script is run,
but it leaves the module driver file in place, because its checks are
conditional on whether the module is loaded.

RESOLUTION:
The module driver file is now removed unconditionally in vxfs-unconfigure,
so that no remnant file remains after the package uninstallation.

* 4043275 (Tracking ID: 4000465)

SYMPTOM:
The fsck binary loops when it detects a break in the sequence of log IDs.

DESCRIPTION:
When a file system is not cleanly unmounted, it ends up with an unflushed
intent log. This intent log is flushed either during the next mount or
when fsck is run on the file system. To build the list of transactions
that need to be replayed, VxFS used a binary search to find the head and
tail. If there is a break in the intent log, that code is susceptible to
looping. To avoid this loop, VxFS now uses a sequential search instead of
a binary search to find the range.

RESOLUTION:
The code is modified to use a sequential search instead of a binary
search to find the replayable transaction range.

* 4043384 (Tracking ID: 3979297)

SYMPTOM:
A kernel panic occurs when installing VxFS on RHEL6.

DESCRIPTION:
During VxFS installation, the fs_supers list is not initialized. While de-referencing the fs_supers pointer, the kernel gets a NULL value for the superblock address and panics.

RESOLUTION:
VxFS has been updated to initialize the fs_supers list during
installation.

* 4043874 (Tracking ID: 3915464)

SYMPTOM:
IOs fail on an inode that has a camode value of 0xff after a DLV upgrade.

DESCRIPTION:
The ic_pdnlink field in the inode was changed to ic_camode starting with
DLV 10. If the value is not reset during the upgrade, ic_camode has the
value 0xff, which indicates that the camode dirty flag is set and
prevents IOs on this inode.

RESOLUTION:
Reset ic_camode if the DLV is less than 10 and ic_camode has the value
0xff.

* 4046032 (Tracking ID: 4042684)

SYMPTOM:
The command fails to resize the file.

DESCRIPTION:
There is a window in which a parallel thread can clear the IDELXWRI flag
when it should not.

RESOLUTION:
The delayed extending write flag is now set again if a parallel thread
has cleared it.

Patch ID: VRTSvxfs-6.2.1.8500

* 3990432 (Tracking ID: 3911048)

SYMPTOM:
The LDH bucket validation failure message is logged and the system hangs.

DESCRIPTION:
When modifying a large directory, VxFS needs to find a new bucket in the
LDH for the directory, and once a bucket is full, it is split to obtain
more buckets. When a bucket has been split the maximum number of times,
overflow buckets are allocated. Under some conditions, the lookup for an
available bucket in an overflow bucket may return an incorrect result
and overwrite an existing bucket entry, thus corrupting the LDH file.
Another problem is that when bucket invalidation fails, the bucket
buffer is released without checking whether the buffer is already part
of a previous transaction; this may cause the transaction flush thread
to hang and finally stalls the whole file system.

RESOLUTION:
The LDH bucket entry change code is corrected to avoid the corruption,
and the bucket buffer is released without throwing it out of memory, to
avoid blocking the transaction flush.

* 4000911 (Tracking ID: 3853315)

SYMPTOM:
Fsck dumped core in check_dotdot() when processing a regular inode with
a named attribute inode associated. The backtrace is like the following:

get_dotdotdata()
check_dotdot()
process_device()
main()

DESCRIPTION:
"chech_dotdot" function finds the parent inode for a directory and traverse the loop from parent inode to root inode and this function only work for directories but in this case inode is regular file inode with named attribute and it is trying to traverse the loop till root inode for regular file. Hence fsck got exited with segmentation fault.

RESOLUTION:
Added named attribute handling in dir sanity.

* 4004399 (Tracking ID: 3911314)

SYMPTOM:
For an 8-CPU test system, there's approximately 20% CPU load overhead caused by
just mounting up a VxFS file system.

DESCRIPTION:
The CPU load was caused by a loop in a routine which was scheduled to run every
second to gather CPU statistic info.

RESOLUTION:
Code change is made to remove the loop.

* 4009064 (Tracking ID: 3995694)

SYMPTOM:
VxFS module failed to load on RHEL7.8.

DESCRIPTION:
RHEL7.8 is a new release, and it has some kernel changes that caused the
VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL7.8.

* 4009159 (Tracking ID: 3940268)

SYMPTOM:
A file system with disk layout version 13 might get disabled when the
size of a directory surpasses the vx_dexh_sz value.

DESCRIPTION:
When the LDH (large directory hash) directory fills up and its buckets
are full, the size of the hash directory is extended. For this, a reorg
inode is created and the extent map of the LDH attribute inode is copied
into the reorg inode. This is done using the extent map reorg function.
That function checks whether the extent reorg structure was passed for
the same inode; if not, the extent copy does not proceed. The extent
reorg structure is set up accordingly, but when setting up the fileset
index, the inode's i_fsetindex is used. From disk layout version 13
onwards, the attribute inode is overlaid, and because of these changes
i_fsetindex is no longer set in the attribute inode and remains 0. Hence
the check in the extent map reorg function fails, which results in
disabling the file system.

RESOLUTION:
The code has been modified to pass the correct fileset.

* 4009162 (Tracking ID: 3947648)

SYMPTOM:
Due to wrong auto-tuning of vxfs_ninode/the inode cache, a hang may be
observed under heavy memory pressure.

DESCRIPTION:
If kernel heap memory is very large (particularly observed on SOLARIS T7
servers), an overflow can occur due to a smaller-sized data type.

RESOLUTION:
Changed the code to handle the overflow.

* 4009169 (Tracking ID: 3998168)

SYMPTOM:
For multi-TB file systems, vxresize operations result in a system freeze
of 8-10 minutes, causing application hangs and VCS timeouts.

DESCRIPTION:
During a resize, the primary node gets the delegation of all the
allocation units. For a larger file system, the total time taken by the
delegation operation is quite large. Flushing the summary maps also
takes a considerable amount of time. This results in a file system
freeze of around 8-10 minutes.

RESOLUTION:
Code changes have been done to reduce the total time taken by vxresize.

Patch ID: VRTSvxfs-6.2.1.8400

* 3972960 (Tracking ID: 3905099)

SYMPTOM:
VxFS unmount panicked in deactivate_super(); the panic stack looks like
the following:

#9  vx_fsnotify_flush [vxfs]
#10 vx_softcnt_flush [vxfs]
#11 vx_idrop  [vxfs]
#12 vx_detach_fset [vxfs]
#13 vx_unmount  [vxfs]
#14 generic_shutdown_super 
#15 kill_block_super
#16 vx_kill_sb
#17 amf_kill_sb
#18 deactivate_super
#19 mntput_no_expire
#20 sys_umount
#21 system_call_fastpath

DESCRIPTION:
A race is suspected between unmount and a user-space notifier install
for the root inode.

RESOLUTION:
Diagnostic code and a defensive check for fsnotify_flush have been added
in vx_softcnt_flush.

* 3980789 (Tracking ID: 3980754)

SYMPTOM:
In function vx_io_proxy_thread(), the system may hit a kernel panic due
to a general protection fault.

DESCRIPTION:
In function vx_io_proxy_thread(), a value is saved into memory through
an uninitialized pointer. This may result in memory corruption.

RESOLUTION:
Function vx_io_proxy_thread() is changed to initialize the pointer
before using it.

* 3983567 (Tracking ID: 3922986)

SYMPTOM:
System panic because the Linux NMI watchdog detected a lockup in CFS.

DESCRIPTION:
The VxFS buffer cache iodone routine interrupted the inode flush thread,
which was trying to acquire the CFS buffer hash lock while releasing the
CFS buffer, and the iodone routine was itself blocked by other threads
on acquiring the free list lock. In the cycle, the other threads were
contending with the inode flush thread for the CFS buffer hash lock. On
Linux, the spinlock is a FIFO ticket lock, so if the inode flush thread
set its ticket on the spinlock earlier, the other threads cannot acquire
the lock. This caused a deadlock.

RESOLUTION:
Code changes are made to acquire the CFS buffer hash lock with IRQs
disabled.

* 3983574 (Tracking ID: 3978149)

SYMPTOM:
When a FIFO file is created on a VxFS file system, its timestamps are
not updated when writes are done to it.

DESCRIPTION:
In the write context, the Linux kernel calls the update_time inode
operation in order to update the timestamp fields. This operation was
not implemented in VxFS.

RESOLUTION:
Implemented the update_time inode operation in VxFS.
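
As a rough illustration of the operation being added, here is a minimal
update_time handler against the 3.10-era kernel signature; the handler
name and body are assumptions, not the VxFS source:

    #include <linux/fs.h>

    /* Hedged sketch: stamp whichever timestamp fields the VFS asks
     * for and mark the inode dirty so the new times reach disk. */
    static int vxfs_update_time(struct inode *ip, struct timespec *ts,
                                int flags)
    {
            if (flags & S_ATIME)
                    ip->i_atime = *ts;
            if (flags & S_CTIME)
                    ip->i_ctime = *ts;
            if (flags & S_MTIME)
                    ip->i_mtime = *ts;
            mark_inode_dirty_sync(ip);  /* schedule inode writeback */
            return 0;
    }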

* 3984272 (Tracking ID: 3981627)

SYMPTOM:
VxFS module failed to load on RHEL7.7.

DESCRIPTION:
RHEL7.7 is a new release, and it has some kernel changes that caused the
VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL7.7.

* 3986836 (Tracking ID: 3987141)

SYMPTOM:
Vxcompress operations generate core dumps.

DESCRIPTION:
The vxcompress command fails because thread 1 tries to unlock
__thread_lk but fails.
(gdb) bt
#0  0x00007fb1b4618640 in __lll_unlock_elision () from /lib64/libpthread.so.0
#1  0x0000000000406508 in __job_cleanupHandler (arg=0x0) at jobcontrol.c:414
#2  __job_thread (arg=<optimized out>) at jobcontrol.c:439
#3  0x00007fb1b460e74a in start_thread () from /lib64/libpthread.so.0
#4  0x00007fb1b3c33edd in clone () from /lib64/libc.so.6

RESOLUTION:
Fixed this issue by adding trylock calls before unlocking the mutexes.

Patch ID: VRTSvxfs-6.2.1.8200

* 3967008 (Tracking ID: 3908785)

SYMPTOM:
System panic observed because of a null page address in the writeback
structure in the case of the kswapd process.

DESCRIPTION:
The secfs2/encryptfs layers used the write VOP as a hook for when kswapd
is triggered to free a page. Ideally kswapd should call the writepage()
routine, where the writeback structures are correctly filled. When the
write VOP is called because of the hook in secfs2/encryptfs, the
writeback structures are cleared, resulting in a null page address.

RESOLUTION:
Code changes have been done to call the VxFS kswapd routine only if a
valid page address is present.

* 3967009 (Tracking ID: 3909553)

SYMPTOM:
Use the GAB backenable flow control interface.

DESCRIPTION:
VxFS/GLM used to sleep for a fixed time duration before retrying if the
sending of a message over GAB failed. GAB provides an interface named
backenable, which notifies when it is okay to send messages again over
GAB. Using it avoids unnecessary sleeping in VxFS/GLM.

RESOLUTION:
Code is modified to use the GAB backenable flow control interface.

* 3967012 (Tracking ID: 3890701)

SYMPTOM:
The write back recovery process got hung.

DESCRIPTION:
Sometimes the writeback recovery code gets triggered unexpectedly due to
an uninitialized error code, which later leads to the hang.

RESOLUTION:
Fixed the missing error code initialization for writeback recovery.

* 3967013 (Tracking ID: 3866962)

SYMPTOM:
Data corruption is seen when dalloc writes are going on a file and fsync
is started on the same file simultaneously.

DESCRIPTION:
If dalloc writes are going on a file and synchronous flushing is started
on the same file simultaneously, the synchronous flush tries to flush
all the dirty pages of the file without considering the underlying
allocation. In this case, flushing can happen on unallocated blocks, and
this can result in data loss.

RESOLUTION:
Code is modified to flush data only up to the actual allocation in the
case of dalloc writes.

* 3967015 (Tracking ID: 3931026)

SYMPTOM:
Unmounting a local mount hangs on RHEL6.6.

DESCRIPTION:
There is a race window between the Linux fsnotify infrastructure and the
VxFS fsnotify watcher deletion process at umount time. As a result, this
race may cause the umount process to hang.

RESOLUTION:
1> VxFS processing of fsnotify/inotify is now done before calling into
   the VxFS unmount (vx_umount) handler. This ensures that watches on
   VxFS inodes are cleaned before the OS fsnotify cleanup code runs,
   which expects all marks/watches on the superblock to be zero.

2> Minor tidy-up in the core fsnotify/inotify cleanup code of VxFS.

3> Removed the explicit hard-setting of "s_fsnotify_marks" to 0.

* 3967019 (Tracking ID: 3908954)

SYMPTOM:
Whilst performing vectored writes using writev(), where two iovec writes
target different offsets within the same 4K page-aligned range of a
file, it is possible to find null data at the beginning of the 4K range
when reading the data back.

DESCRIPTION:
Whilst multiple processes are performing vectored writes to a file using
writev(), the following situation can occur. There are two iovecs; the
first is 448 bytes and the second is 30000 bytes. The first iovec of 448
bytes completes, however the second iovec finds that the source page is
no longer in memory. As it cannot fault the page in during uiomove, it
has to undo both iovecs. It then faults the page back in and retries the
second iovec only. However, as the undo operation also undid the first
iovec, the first 448 bytes of the page are populated with nulls. When
reading the file back, it appears that no data was written for the first
iovec; hence we find nulls in the file.

RESOLUTION:
Code has been changed to handle the unwinding of multiple iovecs
correctly in the scenarios where a certain amount of data is written
from one iovec and some from another.
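
For reference, the triggering shape can be reproduced from user space
with a two-iovec writev(); this is a plain illustration of the
interface, not VxFS code. POSIX requires the result to look like one
contiguous write, which is exactly what the bug violated:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        static char hdr[448], body[30000];
        struct iovec iov[2] = {
            { .iov_base = hdr,  .iov_len = sizeof(hdr)  },  /* 448 bytes   */
            { .iov_base = body, .iov_len = sizeof(body) },  /* 30000 bytes */
        };
        int fd = open("testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);

        if (fd < 0)
            return 1;
        memset(hdr, 'H', sizeof(hdr));
        memset(body, 'B', sizeof(body));
        /* both buffers must land contiguously; with the bug the first
         * 448 bytes could read back as nulls after an undo/retry */
        ssize_t n = writev(fd, iov, 2);
        printf("wrote %zd of %zu bytes\n", n, sizeof(hdr) + sizeof(body));
        close(fd);
        return 0;
    }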

* 3967020 (Tracking ID: 3940516)

SYMPTOM:
The file resize thread loops infinitely if a file is resized to a size
greater than 4TB.

DESCRIPTION:
Because of a vx_u32_t typecast in the vx_odm_resize() function, the
resize thread gets stuck inside an infinite loop.

RESOLUTION:
Removed the vx_u32_t typecast in vx_odm_resize() to handle such
scenarios.
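
The failure mode is ordinary integer truncation; in the stand-alone
illustration below, vx_u32_t is recreated as a plain 32-bit typedef
(an assumption, not the VxFS definition):

    #include <stdio.h>

    typedef unsigned int vx_u32_t;   /* assumed 32-bit type */

    int main(void)
    {
        unsigned long long size = 5ULL << 40;   /* 5TB resize target */
        vx_u32_t truncated = (vx_u32_t)size;    /* 5TB is a multiple of
                                                   4GB, so this wraps to 0 */
        printf("truncated = %u\n", truncated);
        /* a loop stepping by 'truncated' would never make progress */
        return 0;
    }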

* 3967021 (Tracking ID: 3894223)

SYMPTOM:
On servers having multiple TB of memory, the vxfs mount command takes a
long time to finish.

DESCRIPTION:
VxFS calculates the amount of buffer cache it needs depending on the
available physical memory. For big-memory systems, the amount of buffer
cache and the headers associated with it will be huge. During the mount
process the buffer headers need to be traversed, which can take a long
time for a big buffer cache.

RESOLUTION:
Put a high cap on the VxFS buffer cache size to keep it limited for
big-memory systems.

* 3967022 (Tracking ID: 3907587)

SYMPTOM:
Application may get blocked during File System de-fragmentation.

DESCRIPTION:
During de-fragmentation, the extents of a file are re-arranged. To avoid
inconsistency issues, application IOs might be blocked until all the
extents of the file are re-arranged. If the number of extents of the
file is large, the application may get blocked for a longer time.

RESOLUTION:
Code is modified to add a new option to the "fsadm" command that enables
de-fragmentation in a chunked fashion.

* 3967023 (Tracking ID: 3930267)

SYMPTOM:
Deadlock between fsq flush threads and writer threads.

DESCRIPTION:
On Linux, under certain circumstances (that is, to account for dirty
pages), a writer thread takes a lock on the inode and starts flushing
dirty pages, which needs the page lock. In this case, if an fsq flush
thread starts flushing a transaction on the same inode, it needs the
inode lock held by the writer thread. The page lock was taken by another
writer thread which is waiting for transaction space, which can only be
freed by the fsq flush thread. This leads to a deadlock between these
three threads.

RESOLUTION:
Code is modified to add a new flag which will skip dirty page accounting.

* 3967024 (Tracking ID: 3922259)

SYMPTOM:
A force umount hangs with a stack like this:
- vx_delay
- vx_idrop
- vx_quotaoff_umount2
- vx_detach_fset
- vx_force_umount
- vx_aioctl_common
- vx_aioctl
- vx_admin_ioctl
- vxportalunlockedkioctl
- vxportalunlockedioctl
- do_vfs_ioctl
- SyS_ioctl
- system_call_fastpath

DESCRIPTION:
An opened external quota file was preventing the force umount from continuing.

RESOLUTION:
Code has been changed so that an opened external quota file will be processed
properly during the force umount.

* 3967025 (Tracking ID: 2780633)

SYMPTOM:
VxFS does not support the Linux multiple device (MD) driver.

DESCRIPTION:
VxFS does not support the Linux multiple device (MD) driver.

RESOLUTION:
Added MD_MAJOR to VxFS's supported major numbers. It is recommended to
use this only on simple configurations.

* 3967026 (Tracking ID: 3896670)

SYMPTOM:
An intermittent CFS hang-like situation with many CFS pglock grant
messages pending on the LLT layer.

DESCRIPTION:
To optimize CFS locking, VxFS may send greedy pglock grant messages to
speed up the upcoming write operations. In certain scenarios created by
a particular read and write pattern across nodes, one node can send
these greedy messages far faster than the responses arrive. This may
build up a lot of messages on the CFS layer, delay the response to
other messages, and cause slowness in CFS operations.

RESOLUTION:
The fix is to send the next greedy message only after receiving the
response to the previous one. This way, at a given time, only one pglock
greedy message is in flight.

* 3967028 (Tracking ID: 3934175)

SYMPTOM:
A 4-node FSS CFS experienced an IO hang on all nodes.

DESCRIPTION:
IO requests are processed in LIFO order when they are handed off to a
worker thread in the case of low stack space. This is not expected; they
should be processed in FIFO order.

RESOLUTION:
Modified the code to pick up the older work items from the tail of the
queue.
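
A hedged sketch of the ordering fix using the kernel's list primitives;
the structure and function names are illustrative, not the VxFS source:

    #include <linux/list.h>

    struct io_work {
            struct list_head link;
            void (*fn)(struct io_work *);
    };

    /* If producers hand off work with list_add() (head insertion),
     * consuming from the head is LIFO; taking the oldest item from
     * the tail restores FIFO processing. */
    static struct io_work *next_work(struct list_head *q)
    {
            struct io_work *w;

            if (list_empty(q))
                    return NULL;
            w = list_entry(q->prev, struct io_work, link); /* oldest */
            list_del(&w->link);
            return w;
    }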

* 3967029 (Tracking ID: 3938256)

SYMPTOM:
When checking the file size through SEEK_HOLE, an incorrect offset/size
is returned when delayed allocation is enabled on the file.

DESCRIPTION:
In recent versions of RHEL7 onwards, the grep command uses the SEEK_HOLE
feature to check the current file size and then reads data depending on
this file size. In VxFS, when dalloc is enabled, the extent is allocated
to the file later, but the file size is incremented as soon as the write
completes. When checking the file size via SEEK_HOLE, VxFS did not
completely consider the dalloc case and returned a stale size, depending
on the extents allocated to the file, instead of the actual file size.
This resulted in reading less data than expected.

RESOLUTION:
Code is modified in such a way that VxFS now returns the correct size
when dalloc is enabled on a file and SEEK_HOLE is called on it.
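
The behavior is easy to check from user space with lseek(2): on a file
with no holes, SEEK_HOLE should report the logical EOF. A small
illustration (not VxFS code):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2)
            return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            return 1;
        /* with no holes, the first hole is at EOF; with the bug the
         * reported offset reflected allocated extents instead */
        off_t hole = lseek(fd, 0, SEEK_HOLE);
        off_t size = lseek(fd, 0, SEEK_END);
        printf("first hole at %lld, EOF at %lld\n",
               (long long)hole, (long long)size);
        close(fd);
        return 0;
    }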

* 3967183 (Tracking ID: 3958853)

SYMPTOM:
VxFS module failed to load on RHEL7.6.

DESCRIPTION:
RHEL7.6 is a new release, and the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for RHEL7.6.

* 3967189 (Tracking ID: 3917538)

SYMPTOM:
The system hangs when there is no space left on the file system.

DESCRIPTION:
When flushing data for inactive inodes, VxFS loops to check whether all
of the pages of the inode are flushed out. There is a scenario where the
page flush thread keeps looping; if a freeze operation happens at the
same time due to an ENOSPC error, this may cause a deadlock and result
in a system hang.

RESOLUTION:
Modified the code to avoid the infinite loop during inactive inode
flushing.

* 3967190 (Tracking ID: 3933763)

SYMPTOM:
Oracle was hung; both the plsql session and ssh were hanging.

DESCRIPTION:
There is a case in the reuse code path where a dead loop is hit while
processing the inactivation thread.

RESOLUTION:
To fix this loop, inactivation of inodes is now tried only once before a
new inode is allocated for structural/attribute inode processing.

Patch ID: VRTSvxfs-6.2.1.700

* 3945129 (Tracking ID: 3938544)

SYMPTOM:
VxFS module failed to load on RHEL7.5.

DESCRIPTION:
RHEL7.5 is a new release, and the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for RHEL7.5.

Patch ID: VRTSvxfs-6.2.1.500

* 3926161 (Tracking ID: 3923307)

SYMPTOM:
VxFS module failed to load on RHEL7.4.

DESCRIPTION:
RHEL7.4 is a new release, and the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for RHEL7.4.

Patch ID: VRTSvxfs-6.2.1.300

* 3817229 (Tracking ID: 3762174)

SYMPTOM:
When fsfreeze is used together with vxdump, the fsfreeze command times
out and the vxdump command fails.

DESCRIPTION:
The vxdump command may try to read the mount list file to get
information about the corresponding mount points. This behavior results
in taking a file system active level, in order to synchronize with file
system reinit. In the case of fsfreeze, taking the active level will
never succeed, since the file system is already frozen, so this causes a
deadlock and finally results in the fsfreeze timeout.

RESOLUTION:
Do not use the fsfreeze and vxdump commands together.

* 3896150 (Tracking ID: 3833816)

SYMPTOM:
In a CFS cluster, one node returns stale data.

DESCRIPTION:
In a 2-node CFS cluster, when node 1 opens the file and writes to
it, the locks are used with CFS_MASTERLESS flag set. But when node 2 tries to
open the file and write to it, the locks on node 1 are normalized as part of
HLOCK revoke. But after the Hlock revoke on node 1, when node 2 takes the PG
Lock grant to write, there is no PG lock revoke on node 1, so the dirty pages on
node 1 are not flushed and invalidated. The problem results in reads returning
stale data on node 1.

RESOLUTION:
The code is modified to cache the PG lock before normalizing it in
vx_hlock_putdata, so that after the normalizing, the cache grant is
still with node 1. When node 2 requests the PG lock, there is a revoke
on node 1 which flushes and invalidates the pages.

* 3896151 (Tracking ID: 3827491)

SYMPTOM:
Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.

DESCRIPTION:
Database table is not created correctly which results in an error on the database query. This affects the relocation policy of data and the files are not relocated properly.

RESOLUTION:
The code is modified to fix the database table creation issue. The
relocation-policy-based calculations are now done correctly.

* 3896154 (Tracking ID: 1428611)

SYMPTOM:
'vxcompress' command can cause many GLM block lock messages to be 
sent over the network. This can be observed with 'glmstat -m' output under the 
section "proxy recv", as shown in the example below -

bash-3.2# glmstat -m
         message     all      rw       g      pg       h     buf     oth    loop
master send:
           GRANT     194       0       0       0       2       0     192      98
          REVOKE     192       0       0       0       0       0     192      96
        subtotal     386       0       0       0       2       0     384     194

master recv:
            LOCK     193       0       0       0       2       0     191      98
         RELEASE     192       0       0       0       0       0     192      96
        subtotal     385       0       0       0       2       0     383     194

    master total     771       0       0       0       4       0     767     388

proxy send:
            LOCK      98       0       0       0       2       0      96      98
         RELEASE      96       0       0       0       0       0      96      96
      BLOCK_LOCK    2560       0       0       0       0    2560       0       0
   BLOCK_RELEASE    2560       0       0       0       0    2560       0       0
        subtotal    5314       0       0       0       2    5120     192     194

DESCRIPTION:
'vxcompress' creates placeholder inodes (called IFEMR inodes) to 
hold the compressed data of files. After the compression is finished, IFEMR 
inode exchange their bmap with the original file and later given to inactive 
processing. Inactive processing truncates the IFEMR extents (original extents 
of the regular file, which is now compressed) by sending cluster-wide buffer 
invalidation requests. These invalidations need GLM block lock. Regular file 
data need not be invalidated across the cluster, thus making these GLM block 
lock requests unnecessary.

RESOLUTION:
Pertinent code has been modified to skip the invalidation for the 
IFEMR inodes created during compression.

* 3896156 (Tracking ID: 3633683)

SYMPTOM:
"top" command output shows vxfs thread consuming high CPU while 
running an application that makes excessive sync() calls.

DESCRIPTION:
To process sync() system call vxfs scans through inode cache 
which is a costly operation. If an user application is issuing excessive 
sync() calls and there are vxfs file systems mounted, this can make vxfs 
sync 
processing thread to consume high CPU.

RESOLUTION:
Combine all the sync() requests issued in last 60 second into a 
single request.
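
A minimal stand-alone sketch of the batching idea; the helper names are
illustrative, not the VxFS source:

    #include <stdio.h>
    #include <time.h>

    static time_t last_scan;

    /* Fold sync() requests that arrive within a 60-second window into
     * one inode-cache scan instead of one scan per request. */
    static int sync_should_scan(void)
    {
        time_t now = time(NULL);

        if (now - last_scan < 60)
            return 0;   /* covered by the previous scan of this window */
        last_scan = now;
        return 1;       /* perform one real scan for the batch */
    }

    int main(void)
    {
        printf("%d %d\n", sync_should_scan(), sync_should_scan());
        return 0;
    }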

* 3896160 (Tracking ID: 3808033)

SYMPTOM:
After a service group is set offline via VOM or VCS, an Oracle process
is left in an unkillable state.

DESCRIPTION:
Whenever ODM issues an async request to FDD, FDD is required to do
iodone processing on it, regardless of how far the request gets. The
forced unmount causes FDD to take one of the early error branches, which
misses the iodone routine for this particular async request. From ODM's
perspective, the request is submitted, but iodone will never be called.
This has several bad consequences, one of which is that a user thread is
blocked uninterruptibly forever if it waits for the request.

RESOLUTION:
The code is modified to add iodone routine in the error handling code.

* 3896223 (Tracking ID: 3735697)

SYMPTOM:
vxrepquota reports error like,
# vxrepquota -u /vx/fs1
UX:vxfs vxrepquota: ERROR: V-3-20002: Cannot access 
/dev/vx/dsk/sfsdg/fs1:ckpt1: 
No such file or directory
UX:vxfs vxrepquota: ERROR: V-3-24996: Unable to get disk layout version

DESCRIPTION:
vxrepquota checks each mount point entry in the mounted file system
table. If any checkpoint mount point entry is present before the mount
point specified in the vxrepquota command, vxrepquota reports errors,
although the command can succeed.

RESOLUTION:
Checkpoint mount points are now skipped in the mounted file system
table.

* 3896231 (Tracking ID: 3708836)

SYMPTOM:
When using fallocate together with delayed extending write, data corruption may happen.

DESCRIPTION:
When doing fallocate after EOF, vxfs grows the file by splitting the last extent of the file into two parts, then converts the part after EOF to a ZFOD extent. During this procedure, a stale file size is used to calculate the start offset of the newly zeroed extent. This may overwrite the blocks which contain the unflushed data generated by the extending write and cause data corruption.

RESOLUTION:
The code is modified to use up-to-date file size instead of the stale file size, to make sure the new ZFOD extent is created correctly.
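
The triggering pattern can be expressed as a short user-space sequence;
this illustrates the interface use under the stated scenario and is not
a guaranteed reproducer:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile", O_CREAT | O_RDWR | O_TRUNC, 0644);
        char buf[8192];

        if (fd < 0)
            return 1;
        memset(buf, 'x', sizeof(buf));
        write(fd, buf, sizeof(buf));            /* extending write; may
                                                   still be delayed */
        /* grow the file past EOF; the fix makes the new ZFOD extent
         * start at the up-to-date EOF, not at a stale file size */
        fallocate(fd, 0, sizeof(buf), 1 << 20);
        fsync(fd);
        close(fd);
        return 0;
    }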

* 3896261 (Tracking ID: 3855726)

SYMPTOM:
Panic happens in vx_prot_unregister_all(). The stack looks like this:

- vx_prot_unregister_all
- vxportalclose
- __fput
- fput
- filp_close
- sys_close
- system_call_fastpath

DESCRIPTION:
The panic is caused by a NULL fileset pointer, which is due to referencing the
fileset before it's loaded, plus, there's a race on fileset identity array.

RESOLUTION:
Skip the fileset if it's not loaded yet. Add the identity array lock to prevent
the possible race.

* 3896267 (Tracking ID: 3861271)

SYMPTOM:
Due to the missing inode clear action, a page can be left in a strange
state. Also, the inode is not fully quiescent, which leads to races in
the inode code. Sometimes this can cause a panic from iput_final().

DESCRIPTION:
An inode clear operation was missing when a Linux inode is being
de-initialized on SLES11.

RESOLUTION:
Add the inode clear operation on SLES11.

* 3896269 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after a file system freeze during
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its functional
operation. It may happen that corruption is detected while the freeze is
in progress and the full fsck flag is set on the file system. However,
this does not stop vxupgrade from proceeding. At a later stage of
vxupgrade, after the structures related to the new disk layout are
updated on disk, vxfs frees up and zeroes out some of the old metadata
inodes. If an error occurs after this point (because of full fsck being
set), the file system needs to go back completely to the previous
version, at the time of the full fsck. Since the metadata corresponding
to the previous version is already cleared, the full fsck cannot proceed
and gives the errors.

RESOLUTION:
Check for the full fsck flag after freezing the file system during
vxupgrade. Also, disable the file system if an error occurs after
writing the new metadata on disk. This will force the newly written
metadata to be loaded into memory on the next mount.

* 3896270 (Tracking ID: 3707662)

SYMPTOM:
Race between reorg processing and fsadm timer thread (alarm expiry) leads to panic in vx_reorg_emap with the following stack::

vx_iunlock
vx_reorg_iunlock_rct_reorg
vx_reorg_emap
vx_extmap_reorg
vx_reorg
vx_aioctl_full
vx_aioctl_common
vx_aioctl
vx_ioctl
fop_ioctl
ioctl

DESCRIPTION:
When the timer expires (fsadm with -t option), vx_do_close() calls vx_reorg_clear() on local mount which performs cleanup on reorg rct inode. Another thread currently active in vx_reorg_emap() will panic due to null pointer dereference.

RESOLUTION:
When fop_close is called in the alarm handler context, the cleanup is
deferred until the kernel thread performing the reorg completes its
operation.

* 3896273 (Tracking ID: 3558087)

SYMPTOM:
When the stat system call is executed on a VxFS file system with the
delayed allocation feature enabled, it may take a long time or cause
high CPU consumption.

DESCRIPTION:
When the delayed allocation (dalloc) feature is turned on, the flushing
process takes a long time. The process keeps the get page lock held and
needs writers to keep the inode reader-writer lock held. The stat system
call may keep waiting for the inode reader-writer lock.

RESOLUTION:
The delayed allocation code is redesigned to keep the get page lock
unlocked while flushing.

* 3896277 (Tracking ID: 3691633)

SYMPTOM:
Remove the RCQ Full messages.

DESCRIPTION:
Too many unnecessary RCQ Full messages were being logged in the system
log.

RESOLUTION:
The RCQ Full messages are removed from the code.

* 3896281 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archive processes are running on a
clustered file system.

DESCRIPTION:
The cause of the poor read performance in this case was fragmentation.
Fragmentation mainly happens when there are multiple archivers running
on the same node. The allocation pattern of the Oracle archiver
processes is:

1. write the header with O_SYNC
2. ftruncate-up the file to its final size (a few GBs typically)
3. do lio_listio with 1MB iocbs

The problem occurs because all the allocations done in this manner go
through internal allocations, i.e. allocations below the file size,
instead of allocations past the file size. Internal allocations are done
at most 8 pages at once, so if there are multiple processes doing this,
they all get these 8 pages alternately and the file system becomes very
fragmented. (A sketch of this IO pattern follows this entry.)

RESOLUTION:
Added a tunable which will allocate ZFOD extents when ftruncate tries to
increase the size of the file, instead of creating a hole. This will
eliminate the allocations internal to the file size and thus the
fragmentation. Fixed the earlier implementation of the same fix, which
ran into locking issues, and also fixed the performance issue while
writing from the secondary node.
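
Under the stated assumptions (POSIX AIO, illustrative sizes and file
names), the archiver pattern sketched in the description looks roughly
like this; compile with -lrt:

    #include <aio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define NIOCB 4
    #define IOSZ  (1024 * 1024)

    int main(void)
    {
        int fd = open("arch.log", O_CREAT | O_WRONLY | O_SYNC, 0644);
        char hdr[512];
        static char buf[NIOCB][IOSZ];
        struct aiocb cbs[NIOCB], *list[NIOCB];

        if (fd < 0)
            return 1;
        memset(hdr, 0, sizeof(hdr));
        write(fd, hdr, sizeof(hdr));              /* 1. O_SYNC header  */
        ftruncate(fd, 2LL * 1024 * 1024 * 1024);  /* 2. truncate up    */
        for (int i = 0; i < NIOCB; i++) {         /* 3. 1MB iocbs      */
            memset(&cbs[i], 0, sizeof(cbs[i]));
            cbs[i].aio_fildes = fd;
            cbs[i].aio_buf    = buf[i];
            cbs[i].aio_nbytes = IOSZ;
            cbs[i].aio_offset = sizeof(hdr) + (off_t)i * IOSZ;
            cbs[i].aio_lio_opcode = LIO_WRITE;
            list[i] = &cbs[i];
        }
        lio_listio(LIO_WAIT, list, NIOCB, NULL);
        close(fd);
        return 0;
    }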

* 3896285 (Tracking ID: 3757609)

SYMPTOM:
High CPU usage because of contention over ODM_IO_LOCK

DESCRIPTION:
While performing ODM IO, ODM_IO_LOCK is taken to update some of the ODM
counters, which leads to contention when multiple iodones try to update
these counters at the same time. This results in high CPU usage.

RESOLUTION:
Code modified to remove the lock contention.

* 3896303 (Tracking ID: 3762125)

SYMPTOM:
The directory size sometimes keeps increasing even though the number of
files inside it does not increase.

DESCRIPTION:
This only happens on CFS. A variable in the directory inode structure
marks the start of directory free space. But when the directory
ownership changes, the variable may become stale, which can cause this
issue.

RESOLUTION:
The code is modified to reset this free space marking variable when
there is an ownership change. The space search then starts from the
beginning of the directory inode.

* 3896304 (Tracking ID: 3846521)

SYMPTOM:
cp -p fails with EINVAL for files with a 10-digit modification time. The
EINVAL error is returned if the value in the tv_nsec field is outside
the range of 0 to 999,999,999. VxFS supports the update in usec, but
when copying in user space, the usec is converted to nsec. In this case,
the usec has crossed its upper boundary limit, i.e. 999,999.

DESCRIPTION:
In a cluster, it is possible that the time differs across nodes. When
updating mtime, vxfs checks whether it is a cluster inode and whether
the node's mtime is newer than the current node's time; if so, it
increments tv_usec instead of changing mtime to an older time value.
There is a chance that the tv_usec counter overflows here, which results
in a 10-digit mtime.tv_nsec.

RESOLUTION:
Code is modified to reset the usec counter for mtime/atime/ctime when
the upper boundary limit, i.e. 999,999, is reached.
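
The arithmetic behind the EINVAL is simple to show; the helper below is
hypothetical and only demonstrates the reset at the 999,999 boundary:

    #include <stdio.h>

    /* utimensat(2) rejects tv_nsec outside 0..999,999,999 with EINVAL.
     * A usec counter past 999,999 converts to a 10-digit nsec value,
     * so it is reset before conversion (hedged sketch of the fix). */
    static long usec_to_nsec(long usec)
    {
        if (usec < 0 || usec > 999999)
            usec = 0;           /* reset the overflowed counter */
        return usec * 1000;     /* now within 0..999,999,000 */
    }

    int main(void)
    {
        /* without the reset this would be 1,000,001,000: EINVAL */
        printf("%ld\n", usec_to_nsec(1000001));
        return 0;
    }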

* 3896306 (Tracking ID: 3790721)

SYMPTOM:
High CPU usage on the vxfs thread process. The backtrace of such kind of threads
usually look like this:

schedule
schedule_timeout
__down
down
vx_send_bcastgetemapmsg_remaus
vx_send_bcastgetemapmsg
vx_recv_getemapmsg
vx_recvdele
vx_msg_recvreq
vx_msg_process_thread
vx_kthread_init
kernel_thread

DESCRIPTION:
The locking mechanism in vx_send_bcastgetemapmsg_process() is
inefficient: every time it is called, it performs a series of down-up
operations on a certain semaphore. This can result in a huge CPU cost
when multiple threads have contention on this semaphore.

RESOLUTION:
Optimized the locking mechanism in vx_send_bcastgetemapmsg_process() so
that it only does the down-up operation on the semaphore once.
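
A stand-alone sketch of the locking change using a POSIX semaphore; the
names are illustrative, not the VxFS source:

    #include <semaphore.h>

    static sem_t emap_sem;

    /* Take the semaphore once around the whole broadcast instead of a
     * down()/up() pair per target node inside the loop. */
    static void broadcast_getemap(int nnodes)
    {
        sem_wait(&emap_sem);          /* one down() for the batch */
        for (int n = 0; n < nnodes; n++) {
            /* send one getemap message to node n (omitted) */
        }
        sem_post(&emap_sem);          /* one up() */
    }

    int main(void)
    {
        sem_init(&emap_sem, 0, 1);
        broadcast_getemap(4);
        return 0;
    }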

* 3896308 (Tracking ID: 3695367)

SYMPTOM:
Unable to remove volume from multi-volume VxFS using "fsvoladm" command. It fails with "Invalid argument" error.

DESCRIPTION:
Volumes were not being added to the in-core volume list structure
correctly. Therefore, removing a volume from a multi-volume VxFS file
system using "fsvoladm" fails.

RESOLUTION:
The code is modified to add volumes in the in-core volume list structure correctly.

* 3896310 (Tracking ID: 3859032)

SYMPTOM:
System panics in vx_tflush_map() due to NULL pointer dereference.

DESCRIPTION:
When converting to VxFS using vxconvert, new blocks are allocated to the
structural files like the smap, and these can contain garbage. This is
done with the expectation that fsck will rebuild the correct smap; but
fsck missed distinguishing between EAUs that are fully EXPANDED and
those merely ALLOCATED. Because of this, if an allocation is done to a
file whose last allocation came from such an affected EAU, a
sub-transaction is created on an EAU which is in the allocated state.
Map buffers of such EAUs are not initialized properly in the VxFS
private buffer cache; as a result, these buffers are released back as
stale during the transaction commit. Later, if any file-system-wide sync
tries to flush the metadata, it can refer to these buffer pointers and
panic, as these buffers have already been released and reused.

RESOLUTION:
Code is modified in fsck to correctly set the state of the EAU on disk.
Also modified the involved code paths so as to avoid doing transactions
on unexpanded EAUs.

* 3896311 (Tracking ID: 3779916)

SYMPTOM:
vxfsconvert fails to upgrade the disk layout version for a VxFS file
system with a large number of inodes. The error message shows some inode
discrepancy.

DESCRIPTION:
vxfsconvert walks through the ilist and converts inodes. It stores
chunks of inodes in a buffer and processes them as a batch. The inode
number parameter for this inode buffer is of type unsigned integer. The
offset of a particular inode in the ilist is calculated by multiplying
the inode number with the size of the inode structure. For large inode
numbers, this product of inode_number * inode_size can overflow the
unsigned integer limit, thus giving a wrong offset within the ilist
file. vxfsconvert therefore reads the wrong inode and eventually fails.

RESOLUTION:
The inode number parameter is defined as unsigned long to avoid the
overflow.
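
The overflow is easy to demonstrate in isolation; INODE_SIZE below is an
assumed illustrative constant, not the real structure size:

    #include <stdio.h>

    #define INODE_SIZE 256

    int main(void)
    {
        unsigned int ino = 20000000;                 /* large inode number */
        unsigned int bad = ino * INODE_SIZE;         /* wraps at 2^32 */
        unsigned long long good =
            (unsigned long long)ino * INODE_SIZE;    /* widened, correct */
        printf("wrapped: %u  correct: %llu\n", bad, good);
        return 0;
    }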

* 3896312 (Tracking ID: 3811849)

SYMPTOM:
On cluster file system (CFS), due to a size mismatch in the cluster-wide buffers
containing hash bucket for large directory hashing (LDH), the system panics with
the following stack trace:
  
   vx_populate_bpdata()
   vx_getblk_clust()
   vx_getblk()
   vx_exh_getblk()
   vx_exh_get_bucket()
   vx_exh_lookup()
   vx_dexh_lookup()
   vx_dirscan()
   vx_dirlook()
   vx_pd_lookup()
   vx_lookup_pd()
   vx_lookup()
   
On some platforms, instead of panic, LDH corruption is reported. Full fsck
reports some meta-data inconsistencies as displayed in the following sample
messages:

fileset 999 primary-ilist inode 263 has invalid alternate directory index
        (fileset 999 attribute-ilist inode 8193), clear index? (ynq)y

DESCRIPTION:
On a highly fragmented file system with a file system block size of 1K, 2K or
4K, the bucket(s) of an LDH inode, which has a fixed size of 8K, can spread
across multiple small extents. Currently in-core allocation for bucket of LDH
inode happens in parallel to on-disk allocation, which results in small in-core
buffer allocations. Combination of these small in-core allocations will be
merged for final in memory representation of LDH inodes bucket. On two Cluster
File System (CFS) nodes, this may result in same LDH metadata/bucket represented
as in-core buffers of different sizes. This may result in system panic as LDH
inodes bucket are passed around the cluster, or this may result in on-disk
corruption of LDH inode's buckets, if these buffers are flushed to disk.

RESOLUTION:
The code is modified to separate the on-disk allocation and in-core buffer
initialization in LDH code paths, so that in-core LDH bucket will always be
represented by a single 8K buffer.

* 3896313 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message with
the direct command to clean the file system with full fsck is printed to
the user.

DESCRIPTION:
When mounting a file system with the full fsck flag set, the mount fails
and a message is printed asking the user to clean the file system with
full fsck. This message contains the direct command to run, which, if
run without collecting a file system metasave, will result in evidence
being lost. Also, since fsck removes the file system inconsistencies, it
may lead to undesired data being lost.

RESOLUTION:
A more generic message is given in the error message instead of the
direct command.

* 3896314 (Tracking ID: 3856363)

SYMPTOM:
vxfs reports mapbad errors in the syslog as below:
vxfs: msgcnt 15 mesg 003: V-2-3: vx_mapbad - vx_extfind - 
/dev/vx/dsk/vgems01/lvems01 file system free extent bitmap in au 0 marked 
bad.

And, full fsck reports following metadata inconsistencies:

fileset 999 primary-ilist inode 6 has invalid number of blocks 
(18446744073709551583)
fileset 999 primary-ilist inode 6 failed validation clear? (ynq)n
pass2 - checking directory linkage
fileset 999 directory 8192 block devid/blknum 0/393216 offset 68 references 
free 
inode
                                ino 6 remove entry? (ynq)n
fileset 999 primary-ilist inode 8192 contains invalid directory blocks
                                clear? (ynq)n
pass3 - checking reference counts
fileset 999 primary-ilist inode 5 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 5 clear? (ynq)n
fileset 999 primary-ilist inode 8194 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8194 clear? (ynq)n
fileset 999 primary-ilist inode 8195 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8195 clear? (ynq)n
pass4 - checking resource maps

DESCRIPTION:
While processing the VX_IEZEROEXT extop, VxFS frees the extent without
setting the VX_TLOGDELFREE flag. Similarly, there are other cases where
the VX_TLOGDELFREE flag is not set in the case of a delayed extent free.
This could result in mapbad errors and invalid block counts.

RESOLUTION:
Since the VX_TLOGDELFREE flag needs to be set on every extent free, the
code is modified to discard this flag and treat every extent free as a
delayed extent free implicitly.

* 3901379 (Tracking ID: 3897793)

SYMPTOM:
A panic happens because of a race where the mntlock ID is cleared while
the mntlock flag is still set.

DESCRIPTION:
The panic happens because of a race where the mntlock ID is NULL even
though the mntlock flag is set. The race is between an fsadm thread and
a proc mount show_option thread. The fsadm thread deinitializes the
mntlock ID first and then removes the mntlock flag. If another thread
races with this fsadm thread, it is possible to see the mntlock flag set
while the mntlock ID is NULL. The fix is to remove the flag first and
deinitialize the mntlock ID later.

RESOLUTION:
The code is modified to remove the mntlock flag first.

* 3903657 (Tracking ID: 3857254)

SYMPTOM:
Assert failure because of missed flush before taking filesnap of the file.

DESCRIPTION:
If the delayed extended write on the file is not completed but the snap of the file is taken, then the inode size is not updated correctly. This will trigger internal assert because of incorrect inode size.

RESOLUTION:
The code is modified to flush the delayed extended write before taking filesnap.

* 3904841 (Tracking ID: 3901318)

SYMPTOM:
VxFS module failed to load on RHEL7.3.

DESCRIPTION:
RHEL7.3 is a new release, and the VxFS module failed to load on it.

RESOLUTION:
Added VxFS support for RHEL7.3.

* 3905056 (Tracking ID: 3879761)

SYMPTOM:
Performance issue observed due to contention on vxfs spin lock vx_worklist_lk.

DESCRIPTION:
ODM IOs are performed asynchronously by queuing the ODM work items to
the worker threads. More worker threads than required are woken up after
enqueuing the ODM work items, which leads to contention on the
vx_worklist_lk spinlock.

RESOLUTION:
Modified the code such that it wakes up only one worker thread if only
one work item is enqueued.
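
A hedged pthread sketch of the wake-up policy; the names are
illustrative, not the VxFS source:

    #include <pthread.h>

    static pthread_mutex_t qlk = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qcv = PTHREAD_COND_INITIALIZER;
    static int nitems;

    /* Waking every worker for a single queued item causes a thundering
     * herd on the work-list lock; signal exactly one waiter instead. */
    static void enqueue_one(void)
    {
        pthread_mutex_lock(&qlk);
        nitems++;
        pthread_cond_signal(&qcv);   /* one item: wake one worker */
        pthread_mutex_unlock(&qlk);
    }

    int main(void)
    {
        enqueue_one();
        return 0;
    }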

* 3906148 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the
directory inode's ownership is switched between the nodes. When
ownership of the directory inode comes back to the node which previously
abdicated it, ACL permissions were not getting inherited correctly for
the newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3906846 (Tracking ID: 3872202)

SYMPTOM:
VxFS internal test hits an assert.

DESCRIPTION:
In the page create case, VxFS was taking the ipglock twice in a thread,
due to which the VxFS test hit the internal assert.

RESOLUTION:
Removed the ipglock from vx_wb_dio_write().

* 3906961 (Tracking ID: 3891801)

SYMPTOM:
Internal test hit debug assert.

DESCRIPTION:
A debug assert was hit while creating a page in the shared page cache
for a ZFOD extent, which is the same as creating one for a HOLE,
something VxFS does not do.

RESOLUTION:
Added a check in page creation so that shared pages are not created for
ZFOD extents.

* 3907350 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message with
the direct command to clean the file system with full fsck is printed to
the user.

DESCRIPTION:
When mounting a file system with the full fsck flag set, the mount fails
and a message is printed asking the user to clean the file system with
full fsck. This message contains the direct command to run, which, if
run without collecting a file system metasave, will result in evidence
being lost. Also, since fsck removes the file system inconsistencies, it
may lead to undesired data being lost.

RESOLUTION:
A more generic message is given in the error message instead of the
direct command.

* 3907359 (Tracking ID: 3907722)

SYMPTOM:
kernel BUG at fs/dcache.c:964

DESCRIPTION:
There is a race between the vxfs dcache pruner and the umount thread in
the RHEL7/SLES12 kernel. This race causes the kernel BUG at
fs/dcache.c:964.

RESOLUTION:
Added code to close the race between the vxfs dcache pruner and the
umount thread.

Patch ID: VRTSvxfs-6.2.1.100

* 3753724 (Tracking ID: 3731844)

SYMPTOM:
The umount -r option fails for vxfs 6.2 with the error "invalid options".

DESCRIPTION:
Until 6.2, vxfs did not have a umount helper on Linux. A helper was
added in 6.2; because of this, each call to Linux's umount is also
passed to the umount helper binary. Due to this, the -r option, which
was previously handled only by the Linux native umount, is forwarded to
the umount.vxfs helper, which exits while processing the option string
because readonly remounts are not supported.

RESOLUTION:
To solve this, the umount.vxfs code is changed to not exit on the "-r"
option, although readonly remounts are still not supported. So if
umount -r actually fails and the OS umount attempts a readonly remount,
the mount.vxfs binary will then exit with an error. This solves the
problem of Linux's default scripts not working for our fs.

* 3754492 (Tracking ID: 3761603)

SYMPTOM:
Full fsck flag will be set incorrectly at the mount time.

DESCRIPTION:
There is a possibility that extop processing will be deferred during
umount (that is, in case of a crash or disk failure) and kept on disk,
so that mount can process it. During mount, an inode can have multiple
extops set. Previously, if an inode had both the trim and reorg extops
set during mount, fullfsck was set incorrectly. This patch avoids this
situation.

RESOLUTION:
Code is modified to avoid such unnecessary setting of fullfsck.

* 3756002 (Tracking ID: 3764824)

SYMPTOM:
Internal cluster file system (CFS) testing hit a debug assert.

DESCRIPTION:
An internal debug assert is seen when there is a glm recovery while one
of the secondary nodes is doing a mount, specifically when the glm
recovery happens between attaching a file system and mounting the file
system.

RESOLUTION:
Code is modified to handle the glm reconfiguration issue.

* 3765324 (Tracking ID: 3736398)

SYMPTOM:
Panic in the lazy unmount path during deinit of VxFS-VxVM API.

DESCRIPTION:
The panic is caused when an exiting thread drops the last reference to a
lazy-unmounted VxFS file system, where that file system is the last VxFS
mount in the system. The exiting thread does the unmount, which then
calls into VxVM to de-initialize the private FS-VM API (as it is the
last VxFS-mounted file system). The function to be called in VxVM is
looked up via the files under /proc; this requires opening a file, but
the exit processing has removed the structs needed by the thread to open
a file.

RESOLUTION:
The solution is to cache the de-init function (vx_fsvm_api_deinit) when
the VxFS-VxVM API is initialized, so no function look-up is needed
during an unmount. The cached function pointer can then be called during
the last unmount, bypassing the need to open the file by the exiting
thread.

* 3765998 (Tracking ID: 3759886)

SYMPTOM:
In the case of a nested mount, a force umount of the parent leaves a
stale child entry in /etc/mtab even after a subsequent umount of the
child.

DESCRIPTION:
On RHEL6 and SLES11, in the case of a nested mount, if the parent mount
(say /mnt1) is removed/unmounted forcefully, then child mounts (like
/mnt1/dir) also get unmounted, but the "/etc/mtab" entry is not updated
accordingly for the child mount. Previously it was possible to remove
such child entries from "/etc/mtab" by using the OS's umount binary. But
from Shikra onwards, a helper umount binary has been added as
"/sbin/umount.vxfs". So now the OS's umount binary calls this helper
binary, which in turn calls vxumount for the child umount; this fails
since the path is not present. Hence the mtab entry does not get updated
and shows the child as mounted.

RESOLUTION:
Code is modified to update the mnttab when the ENOENT error is returned
by the umount() system call.

* 3769992 (Tracking ID: 3729158)

SYMPTOM:
fuser and other commands hang on vxfs file systems.

DESCRIPTION:
The hang is seen while two threads contend for two locks, the ILOCK and the
PLOCK. The writeadvise thread owns the ILOCK and is waiting for the PLOCK,
while the dalloc thread owns the PLOCK and is waiting for the ILOCK.

RESOLUTION:
The code is modified to take the locks in a consistent order: the PLOCK
followed by the ILOCK.
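
This is a classic lock-ordering deadlock. The sketch below (a generic pthread
illustration, not VxFS source) shows the rule the fix enforces, namely that
every path acquires the PLOCK before the ILOCK:

    #include <pthread.h>

    struct inode_locks {
        pthread_mutex_t plock;   /* page-related state */
        pthread_mutex_t ilock;   /* inode state */
    };

    /* Before the fix, one path took ILOCK first while another took
     * PLOCK first, so the two threads could block each other forever.
     * With a single global order there is no cycle, hence no deadlock. */
    void locked_section(struct inode_locks *l)
    {
        pthread_mutex_lock(&l->plock);      /* PLOCK first ... */
        pthread_mutex_lock(&l->ilock);      /* ... then ILOCK  */
        /* ... protected work ... */
        pthread_mutex_unlock(&l->ilock);
        pthread_mutex_unlock(&l->plock);
    }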

* 3793241 (Tracking ID: 3793240)

SYMPTOM:
The vxrestore command dumps core because of invalid Japanese strings.

DESCRIPTION:
The vxrestore command dumps core because invalid characters such as % and $
are present in the Japanese message strings.

RESOLUTION:
The code is modified to remove the extra characters from the Japanese
message strings.

* 3798437 (Tracking ID: 3812914)

SYMPTOM:
On RHEL 6.5 and the latest RHEL 6.4 kernel patch, the umount(8) operation
hangs if an application watches for inode events using the inotify(7) APIs.

DESCRIPTION:
On RHEL 6.5 and the latest RHEL 6.4 kernel patch, additional OS counters were
added in the super block to track inotify watches. These new counters were not
implemented in VxFS for the RHEL6.5/RHEL6.4 kernels. Hence, while doing umount,
the operation hangs until the counters in the superblock drop to zero, which
never happens since they are not handled in VxFS.

RESOLUTION:
The code is modified to handle the additional counters added in the super
block of the latest RHEL6.5/RHEL6.4 kernels.
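
For reference, the trigger scenario is simply an application holding an
inotify watch inside the mount while umount(8) runs; a minimal example (the
watched path is illustrative) is:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        int fd = inotify_init();
        if (fd < 0) { perror("inotify_init"); return 1; }
        /* The watched path is illustrative. */
        if (inotify_add_watch(fd, "/mnt/vxfs/file", IN_MODIFY) < 0)
            perror("inotify_add_watch");
        pause();   /* keep the watch alive while umount is attempted */
        return 0;
    }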

* 3808285 (Tracking ID: 3808284)

SYMPTOM:
The fsdedupadm status Japanese text includes a strange character.

DESCRIPTION:
The Japanese translation of the English string "FAILED" was incorrect: it
rendered as "I/O", which reads as "the failed I/O". So the translation from
English to Japanese was wrong.

RESOLUTION:
Corrected the translation of the "FAILED" string in Japanese.

* 3817120 (Tracking ID: 3804400)

SYMPTOM:
VRTS/bin/cp does not return any error when the quota hard limit is
reached and a partial write is encountered.

DESCRIPTION:
When the quota hard limit is reached, VRTS/bin/cp may encounter a
partial write, but it does not return any error to the upper-layer
application in that situation.

RESOLUTION:
VRTS/bin/cp is modified to detect the partial write caused by the quota
limit, and to return a proper error to the upper-layer application.
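
The underlying check is the standard partial-write pattern: write(2) can
succeed with a short byte count when the hard limit is hit mid-write, and the
short count must be surfaced as an error. A minimal sketch (not the actual
cp source):

    #include <unistd.h>

    ssize_t copy_chunk(int fd, const char *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);
        /* A short write here is the quota case: report it upward
         * instead of silently continuing. */
        if (n >= 0 && (size_t)n < len)
            return -1;
        return n;
    }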

* 3821688 (Tracking ID: 3821686)

SYMPTOM:
VxFS module might not get loaded on SLES11 SP4.

DESCRIPTION:
SLES11 SP4 is a new release, so the existing VxFS module failed to load
on it.

RESOLUTION:
Added VxFS support for SLES11 SP4.

Patch ID: VRTSvxfs-6.2.1.000

* 3093833 (Tracking ID: 3093821)

SYMPTOM:
The system panics due to a reference to a freed super block after the vx_unmount() function returns an error.

DESCRIPTION:
In the file system unmount process, once the Linux VFS calls into the VxFS-specific unmount processing, vx_unmount(), it does not expect an error from this call. So, once vx_unmount() returns, Linux frees the file system's corresponding super_block object. When there is no error, the file system's inodes are processed and dropped during vx_unmount(); but if any error is observed during vx_unmount(), the file system's inodes may be left in the VxFS inode cache as they are.

The file system inodes left in the VxFS inode cache still point to the freed super block object, so when these inodes are later cleaned up for freeing or reuse, they may in certain cases reference the freed super block, which can lead to a panic due to a NULL pointer dereference.

RESOLUTION:
vx_detach_fset() is modified to not return EIO or ENXIO errors when you unmount the file system. Instead of returning an error, the inodes are dropped from the inode cache.

* 3657150 (Tracking ID: 3604071)

SYMPTOM:
With the thin reclaim feature turned on, you can observe high CPU usage in the vxfs thread processes. The backtrace of such threads usually looks like this:
	 
	 - vx_dalist_getau
	 - vx_recv_bcastgetemapmsg
	 - vx_recvdele
	 - vx_msg_recvreq
	 - vx_msg_process_thread
	 - vx_kthread_init

DESCRIPTION:
In the routine that gets the broadcast information of a node holding the maps of Allocation Units (AUs) for which the node has delegations, the locking mechanism is inefficient. Every time the routine is called, it performs a series of down-up operations on a certain semaphore, which results in a huge CPU cost when many threads call the routine in parallel.

RESOLUTION:
The code is modified to optimize the locking mechanism in this routine, so that it performs only one down-up operation on the semaphore per call.
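
The shape of the optimization is the usual one of hoisting a per-iteration
semaphore pair out of the loop; a generic sketch (POSIX semaphores for
illustration, not VxFS source):

    #include <semaphore.h>

    void scan_delegated_aus(sem_t *sem, int nau)
    {
        sem_wait(sem);                   /* one down for the whole scan */
        for (int au = 0; au < nau; au++) {
            /* ... read the AU delegation map for this node ... */
        }
        sem_post(sem);                   /* one up, not one pair per AU */
    }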

* 3657151 (Tracking ID: 3513507)

SYMPTOM:
File system umount() may hang due to a deadlock in the kernel; the umount thread (T1) has the following stack trace:
vx_async_waitmsg
vx_msg_broadcast
vx_cwfa_step
vx_cwfreeze_common
vx_cwfreeze_all
vx_freeze
vx_detach_fset
vx_unmount
generic_shutdown_super
sys_umount

Meanwhile, a thread (T2) is stuck in down_read:
rwsem_down_failed_common
rwsem_down_read_failed
call_rwsem_down_read_failed
down_read
do_page_fault
page_fault
filldir64
vx_readdir_int
mntput_no_expire
vx_irwlock2
vx_readdir
vfs_readdir
sys_getdents64

A thread (T3) is waiting in down_write:
rwsem_down_failed_common
rwsem_down_write_failed
call_rwsem_down_write_failed
down_write
sys_munmap

A thread (T4) is doing vx_read:
vx_active_common_flush
vx_read
activate_task
vfs_read
sys_read
system_call_fastpath

DESCRIPTION:
This is a deadlock involving multiple threads. When the umount thread (T1)
is initiated, it blocks all new file operations from coming into the file
system and waits for the active threads to drop the active levels they hold.
One of the active threads (T2) takes a page fault, which attempts to acquire
the read-write semaphore of the process memory-map structure in read mode.
However, this semaphore is held in read mode by another thread (T4), while a
write request (T3) is waiting in the queue, preventing further read requests
from getting the semaphore. The current reader of the semaphore (T4) requires
the active level of the file system while holding the read semaphore, but the
file system has been frozen by the first thread (T1).

RESOLUTION:
The code is modified to adjust the active level to avoid the deadlock.

* 3657152 (Tracking ID: 3602322)

SYMPTOM:
The system may panic while flushing the dirty pages of an inode. The
following stack traces are observed:

vx_iflush_list()
vx_workitem_process()
vx_worklist_process()
vx_worklist_thread()

and

vx_vn_cache_deinit()
vx_inode_deinit
vx_ilist_chunkclean()
vx_inode_free_list()
vx_ifree_scan_list()
vx_workitem_process()
vx_worklist_process()
vx_worklist_thread()

DESCRIPTION:
Panic may occur due to a synchronization problem between one thread that
flushes the inode and another thread that frees the chunks that contain the
inodes on the freelist.

The thread that frees the chunks of inodes on the freelist grabs an inode and
clears/dereferences the inode pointer while deinitializing the inode. This
may result in an invalid pointer dereference if the flusher thread is working
on the same inode.

RESOLUTION:
The code is modified to resolve the race condition by taking proper locks on 
the inode and freelist, whenever a pointer in the inode is dereferenced. 

If the inode pointer is already de-initialized to NULL, then the flushing is 
attempted on the next inode.

* 3657153 (Tracking ID: 3622323)

SYMPTOM:
A cluster file system mounted as read-only panics when gathering sharing and/or compression statistics using the fsadm_vxfs(1M) command, with the following stack:
	 
	- vx_irwlock
	- vx_clust_fset_curused
	- vx_getcompstats
	- vx_aioctl_getcompstats
	- vx_aioctl_common
	- vx_aioctl
	- vx_unlocked_ioctl
	- vx_ioctl
	- vfs_ioctl
	- do_vfs_ioctl
	- sys_ioctl
	- system_call_fastpath

DESCRIPTION:
When the file system is mounted read-only, part of the initial setup is skipped, including the loading of a few internal structures. These structures are referenced while gathering sharing and/or compression statistics, resulting in a panic.

RESOLUTION:
The code is modified to only allow "fsadm -HS all" to gather sharing and/or compression statistics on read-write file systems. On read-only file systems, this command fails.

* 3657156 (Tracking ID: 3604750)

SYMPTOM:
The kernel loops during the extent re-org with the following stack trace:
vx_bmap_enter()
vx_reorg_enter_zfod()
vx_reorg_emap()
vx_extmap_reorg()
vx_reorg()
vx_aioctl_full()
$cold_vx_aioctl_common()
vx_aioctl()
vx_ioctl()
vno_ioctl()
ioctl()
syscall()

DESCRIPTION:
The extent re-org minimizes file system fragmentation. When a re-org 
request is issued for an inode with a lot of ZFOD extents, the extents of the 
original inode are reallocated to the re-org inode. During this, the ZFOD 
extents are preserved and entered into the re-org inode in a transaction. If 
the allocated extent is big, the transaction that enters the ZFOD extents 
becomes too big and returns an error; even when the transaction is retried, 
the same issue occurs. As a result, the kernel loops during the extent re-org.

RESOLUTION:
The code is modified to enter the Bmap (block map) of the allocated extent 
first and then perform the ZFOD processing. If a committable error occurs 
during the ZFOD enter, the transaction is committed and the ZFOD enter 
continues.

* 3657157 (Tracking ID: 3617191)

SYMPTOM:
Checkpoint creation may take hours.

DESCRIPTION:
During checkpoint creation, when an inode is marked for removal and is being overlaid, there may be a downstream clone, and VxFS starts pulling all the data. With Oracle this is evident because temporary files are deleted during checkpoint creation.

RESOLUTION:
The code is modified to pull the data selectively, only if a downstream push inode exists for the file.

* 3657158 (Tracking ID: 3601943)

SYMPTOM:
The block map tree of a file is corrupted across levels and, during truncation of the inode for the file, this may lead to an infinite loop.

DESCRIPTION:
For files larger than 64G, the truncation code first walks through the bmap tree to find the optimal offset from which to begin the truncation. 
If this offset falls within a corrupted range of the bmap, the actual truncation code, which relies on a binary search, cannot find the offset and returns empty. This makes the truncation code submit a dummy transaction that updates the file's inode with the latest ctime, without freeing the allocated extents.

RESOLUTION:
The truncation code is modified to detect the corruption, mark the inode bad, and mark the file system for full fsck. This makes the truncation possible for full fsck: on its next run, the truncation code is able to throw out the inode and free the extents.

* 3657159 (Tracking ID: 3633067)

SYMPTOM:
While converting an ext3 file system to VxFS using vxfsconvert, it is observed that many inodes are missing.

DESCRIPTION:
When vxfsconvert(1M) is run on an ext3 file system, it misses an entire block group of inodes. This happens because of an incorrect calculation of the block group number of a given inode in a border case. The inode which is the last inode of a given block group is calculated to have the correct inode offset, but is calculated to be in the next block group. This causes the entire next block group to be skipped when the code attempts to find the next consecutive inode.

RESOLUTION:
The code is modified to correct the calculation of the block group number.

* 3665980 (Tracking ID: 2059611)

SYMPTOM:
The system panics due to a NULL pointer dereference while flushing the
bitmaps to the disk and the following stack trace is displayed:


vx_unlockmap+0x10c
vx_tflush_map+0x51c
vx_fsq_flush+0x504
vx_fsflush_fsq+0x190
vx_workitem_process+0x1c
vx_worklist_process+0x2b0
vx_worklist_thread+0x78

DESCRIPTION:
The vx_unlockmap() function unlocks a map structure of the file
system. If the map is being used, the hold count is incremented. The
vx_unlockmap() function attempts to check whether the mlink doubly linked
list is empty. However, the asynchronous vx_mapiodone routine can change the
link at random even when the hold count is zero.

RESOLUTION:
The code is modified to change the evaluation rule inside the
vx_unlockmap() function, so that the further evaluation is skipped when the
map hold count is zero.

* 3665984 (Tracking ID: 2439261)

SYMPTOM:
When the vx_fiostats_tunable is changed from zero to non-zero, the
system panics with the following stack trace:
vx_fiostats_do_update
vx_fiostats_update
vx_read1
vx_rdwr
vno_rw
rwuio
pread

DESCRIPTION:
When vx_fiostats_tunable is changed from zero to non-zero, all the
incore-inode fiostats attributes are set to NULL. When these attributes are
accessed, the system panics due to a NULL pointer dereference.

RESOLUTION:
The code has been modified to check that the file I/O statistics attributes
are present before dereferencing the pointers.

* 3665990 (Tracking ID: 3567027)

SYMPTOM:
During the file system resize operation, the fullfsck flag is set, with the following message: vxfs: msgcnt 183168 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/sfsdg/vol file system fullfsck flag set - vx_fs_upgrade_reorg

DESCRIPTION:
File system resize requires some temporary inodes to swap the old inode and the converted inode. Before a structural inode is processed, the fullfsck flag is set in case a failure occurs during the metadata change; the flag is cleared after the swap is successfully completed.

If the temporary inode allocation fails, VxFS leaves the fullfsck flag on the 
disk. However, all temporary inodes are cleaned up when not in use, so these 
temporary inodes do not result in corruption.

RESOLUTION:
The code is modified to clear the fullfsck flag if the structural inode conversion cannot create its temporary inode.

* 3666007 (Tracking ID: 3594386)

SYMPTOM:
On installing VxFS 5.1SP1RP4P1HF1 on RHEL6u5, a system panic occurs when 
ftrace(1M) is enabled and multiple files are created while a large amount of 
space is consumed.

DESCRIPTION:
The system panic occurs due to a stack overflow in the code path where VxFS 
calls bio_alloc() and enters the block layer.

RESOLUTION:
The code is modified to add a new handoff routine that hands off the 
bio_alloc() task to a worker thread when the remaining stack is insufficient 
to proceed.
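
The handoff pattern the resolution describes can be sketched as follows (all
names and the threshold are hypothetical; the real check is kernel- and
platform-specific):

    struct io_req;                                  /* opaque request */
    void do_bio_alloc(struct io_req *req);          /* assumed worker body */
    void queue_to_worker(void (*fn)(struct io_req *), struct io_req *req);
    long stack_bytes_left(void);                    /* assumed platform helper */

    #define STACK_RESERVE 4096                      /* hypothetical threshold */

    void issue_io(struct io_req *req)
    {
        /* Too little stack left: defer to a worker thread that
         * starts with a fresh stack; otherwise call inline. */
        if (stack_bytes_left() < STACK_RESERVE)
            queue_to_worker(do_bio_alloc, req);
        else
            do_bio_alloc(req);
    }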

* 3666008 (Tracking ID: 3616907)

SYMPTOM:
While performing the garbage collection operation, VxFS causes the non-maskable 
interrupt (NMI) service to stall.

DESCRIPTION:
With a highly fragmented Reference Count Table (RCT), a garbage collection 
operation can occupy the CPU for a long duration, particularly when no entry 
that could be freed is identified.

RESOLUTION:
The code is modified such that the CPU is released after a specified time 
interval.
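
In Linux kernel terms this is typically done by yielding at intervals during
the long scan; a sketch of the pattern (the RCT entry type is hypothetical):

    #include <linux/sched.h>
    #include <linux/jiffies.h>

    struct rct_entry { struct rct_entry *next; /* ... */ };

    static void gc_scan(struct rct_entry *head)
    {
        unsigned long deadline = jiffies + HZ / 10;    /* ~100 ms slice */
        struct rct_entry *e;

        for (e = head; e; e = e->next) {
            /* ... examine the entry, free it if possible ... */
            if (time_after(jiffies, deadline)) {
                cond_resched();                        /* release the CPU */
                deadline = jiffies + HZ / 10;
            }
        }
    }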

* 3666010 (Tracking ID: 3233276)

SYMPTOM:
On a 40 TB file system, the fsclustadm setprimary command takes more than 2 minutes to execute, and the unmount operation that causes a primary migration also consumes more time.

DESCRIPTION:
The old primary needs to process the delegated allocation units while migrating
from primary to secondary. The inefficient implementation of the allocation
unit list made removing an element from the list costly. As the file system size increases, the allocation unit list grows as well, which results in additional migration time.

RESOLUTION:
The code is modified to process the allocation unit list efficiently. With this modification, the primary migration completes in about 1 second on a 40 TB file system.
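
One common way to make such list processing efficient, shown here only as an
illustration of the idea (not the actual VxFS change), is to embed each
allocation unit in a doubly linked list so removal is constant time instead
of a full scan:

    #include <linux/list.h>

    struct au_entry {
        struct list_head link;   /* doubly linked: O(1) unlink */
        /* ... AU delegation state ... */
    };

    static void release_au(struct au_entry *au)
    {
        list_del(&au->link);     /* constant-time removal from the list */
        /* ... hand the delegation back to the new primary ... */
    }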

* 3683470 (Tracking ID: 3682138)

SYMPTOM:
The VxVM symbols are not released until the VxFS module unload time, so the VxVM module cannot be unloaded or upgraded while VxFS is loaded.

DESCRIPTION:
On RHEL7, the VxFS module imports and releases VxVM symbols as follows:
1. All required VxVM symbols are imported on first access or on first mount.
   The refcount/hold on the VxVM module is incremented through VEKI on the
   symbol_get request so that the module does not get unloaded.
2. The symbols imported in step 1 are used as required.
3. VxFS puts/releases the VxVM symbols at VxFS unload time, when the refcount
   is decremented and the hold is released. As a result, the VxVM module
   cannot be unloaded, and the VxVM package cannot be manually upgraded,
   until the VxFS modules are unloaded.

RESOLUTION:
The code is modified to release the VxVM symbols before the VxFS module unload time.
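
The mechanism referred to is the Linux kernel symbol_get()/symbol_put()
interface; a minimal sketch (the VxVM export name is hypothetical):

    #include <linux/module.h>
    #include <linux/errno.h>

    extern int vxvm_fsvm_op(void);          /* hypothetical VxVM export */

    static int (*vm_fn)(void);

    static int grab_vxvm_symbols(void)
    {
        vm_fn = symbol_get(vxvm_fsvm_op);   /* bumps the VxVM refcount */
        return vm_fn ? 0 : -ENOENT;
    }

    static void release_vxvm_symbols(void)
    {
        if (vm_fn) {
            symbol_put(vxvm_fsvm_op);       /* drop the hold early so  */
            vm_fn = NULL;                   /* VxVM can unload/upgrade */
        }
    }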

* 3687679 (Tracking ID: 3685391)

SYMPTOM:
Execute permissions for a file are not honored correctly.

DESCRIPTION:
A user was able to execute a file even without having the execute permissions on it.

RESOLUTION:
The code is modified such that an error is reported when the execute permissions are not granted.

* 3697142 (Tracking ID: 3697141)

SYMPTOM:
Added support for SLES12.

DESCRIPTION:
Added support for SLES12.

RESOLUTION:
Added support for SLES12.

* 3697966 (Tracking ID: 3697964)

SYMPTOM:
When a file system is upgraded, the upgraded layout clears the superblock flags (fs_flags).

DESCRIPTION:
When a file system is upgraded, the new superblock structure gets populated with field values, most of which are inherited from the old superblock. In the process, the fs_flags value is overwritten and flags such as VX_SINGLEDEV are deleted from the superblock.

RESOLUTION:
The code is modified to restore the old superblock flags while upgrading the disk layout of a file system.

* 3698165 (Tracking ID: 3690078)

SYMPTOM:
The system panics at vx_dev_strategy() routine with the following stack trace:
vx_snap_strategy()
vx_logbuf_write() 
vx_logbuf_io()
vx_logbuf_flush() 
vx_logflush()
vx_mapstrategy() 
vx_snap_strategy() 
vx_clonemap() 
vx_unlockmap() 
vx_holdmap() 
vx_extmaptran() 
vx_extmapchange() 
vx_extprevfind() 
vx_extentalloc() 
vx_te_bmap_alloc() 
vx_bmap_alloc_typed() 
vx_bmap_alloc() 
vx_get_alloc() 
vx_cfs_pagealloc() 
vx_alloc_getpage() 
vx_do_getpage() 
vx_internal_alloc() 
vx_write_alloc() 
vx_write1() 
vx_write_common_slow()
vx_write_common() 
vx_vop_write()
vx_writev() 
vx_naio_write_v2() 
vfs_writev()

DESCRIPTION:
The issue was observed due to the low handoff limit of vx_extprevfind().

RESOLUTION:
The code is modified to avoid the stack overflow.

* 3706864 (Tracking ID: 3709947)

SYMPTOM:
The SSD cache fails to go offline due to additional slashes "//" in the dev path of the cache device.

DESCRIPTION:
The conv_to_bdev() function checks whether a path contains rdsk (character device) or dsk (disk device). 
If rdsk is present, the fs_convto_bspec() function calls the fs_pathcanon() function to remove the additional slashes.
If the path contains dsk (disk device), these functions are not called, and as a result the additional slashes appear in the final output path.

RESOLUTION:
The code is modified to enable VxFS to call the correct functions for both path forms.
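
Canonicalization of this kind amounts to squeezing repeated slashes; an
illustrative helper (similar in spirit to what fs_pathcanon() does, not its
actual source):

    /* Collapse runs of '/' in place: "dsk//c0d0" -> "dsk/c0d0". */
    void squeeze_slashes(char *p)
    {
        char *w = p;
        while (*p) {
            *w++ = *p;
            if (*p == '/')
                while (*p == '/')   /* skip the duplicate slashes */
                    p++;
            else
                p++;
        }
        *w = '\0';
    }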

* 3710794 (Tracking ID: 3710792)

SYMPTOM:
Unmount fails when the mount point contains a special character (colon).

DESCRIPTION:
For clones, the pseudo device name must contain <devicepath>:<clone name>. Thus, when using a cloned block device, the umount code path performs special processing that splits the name at the colon. However, the mount point itself can also be created with the special character (colon), in which case the mount point is wrongly split at the colon.

RESOLUTION:
The code is modified to unsplit the mount point if it is not a block device and it is a real mount point that can be found in the mounted file systems table (mtab).

* 3715567 (Tracking ID: 3715566)

SYMPTOM:
VxFS fails to report an error when the maxlink and nomaxlink options are set for disk layout versions (DLV) lower than 10.

DESCRIPTION:
The maxlink and nomaxlink options allow you to enable and disable the maxlink support feature respectively. The maxlink support feature operates only on DLV version 10 and higher. Due to an issue, the maxlink and nomaxlink options could wrongly be set on DLV versions lower than 10; however, when selected, the options did not take effect.

RESOLUTION:
The code is modified such that VxFS reports an error when you attempt to set the maxlink and nomaxlink options for a DLV version lower than 10.

* 3716627 (Tracking ID: 3622326)

SYMPTOM:
During a checkpoint promote, a file system is marked with a fullfsck flag because the corresponding inode is marked as bad.

DESCRIPTION:
The issue was observed because VxFS skipped moving the data to the clone inode. As a result, the inode is marked bad during a checkpoint promote and, as a consequence, the file system is marked with the fullfsck flag.

RESOLUTION:
The code is modified to move the correct data to the clone inode.

* 3717895 (Tracking ID: 2919310)

SYMPTOM:
During stress testing on cluster file system, an assertion failure was hit because of a missing linkage between the directory and the associated attribute inode.

DESCRIPTION:
As per the designed behavior, the node that owns the inode of the file receives the request to remove the file from the directory. If the directory has an alternate index (hash directory) present, then in the file-remove receive handler the attribute inode is read from the disk. However, VxFS does not create a linkage between the directory and the corresponding attribute inode, which results in the assert failure.

RESOLUTION:
The code is modified to set the directory inode's i_dirhash field to the attribute inode. This change is exercised while bringing the inode incore during file or directory removal.

* 3718542 (Tracking ID: 3269553)

SYMPTOM:
VxFS returns an inappropriate error for a read of a hole via ODM.

DESCRIPTION:
Sparse files containing temp or backup/restore data are sometimes created outside the Oracle database, and Oracle can read these files only through ODM. When ODM reads a hole in such a file, it fails with an ENOTSUP error.

RESOLUTION:
The code is modified to return zeros instead of an error.

* 3721458 (Tracking ID: 3721466)

SYMPTOM:
After a file system is upgraded from version 6 to 7, the vxupgrade(1M) command fails to set the VX_SINGLEDEV flag on a superblock.

DESCRIPTION:
The VX_SINGLEDEV flag was introduced in disk layout version 7.
The purpose of the flag is to indicate whether a file system resides only on a single device or a volume.
When the disk layout is upgraded from version 6 to 7, the flag is not inherited along with the other values since it was not supported in version 6.

RESOLUTION:
The code is modified to set the VX_SINGLEDEV flag when the disk layout is upgraded from version 6 to 7.

* 3726403 (Tracking ID: 3739618)

SYMPTOM:
The sfcache command with the "-i" option may not show the file system cache statistics periodically.

DESCRIPTION:
The sfcache command printed the file system cache statistics only once instead of repeating the output at the interval specified with the "-i" option.

RESOLUTION:
The code is modified to add a loop that prints the sfcache statistics at the specified interval.

* 3727166 (Tracking ID: 3727165)

SYMPTOM:
Enhance RHEV support so that SF devices can be identified in the guest.

DESCRIPTION:
A provision is needed to assign a serial number to a device while
attaching it. This number is passed on as the serial number of the SCSI
device to the guest, so that a SCSI inquiry on the device returns the
provided serial number in the vendor-specific data.

RESOLUTION:
The corresponding changes for supplying the serial number to the VxFS
hook have been made.

* 3729704 (Tracking ID: 3719523)

SYMPTOM:
'vxupgrade' does not clear the superblock replica of old layout versions.

DESCRIPTION:
While upgrading the file system to a new layout version, a new superblock inode is allocated and an extent is allocated for the replica superblock. After writing the new superblock (primary + replica), VxFS frees the extent of the old superblock replica.
Now, if the primary superblock gets corrupted, full fsck searches for a replica to repair the file system. If it finds the replica of the old superblock, it restores the file system to the old layout instead of the new one, which is wrong.
To take the file system to the new version, the replica of the old superblock should be cleared as part of vxupgrade, so that full fsck does not detect it later.

RESOLUTION:
The code is modified to clear the replica of the old superblock as part of vxupgrade.

* 3733811 (Tracking ID: 3729030)

SYMPTOM:
The fsdedupschd daemon failed to start on RHEL7.

DESCRIPTION:
The dedup service daemon failed to start because RHEL7 changed the service management mechanism to systemd. The daemon uses the new systemctl interface to start and stop the service, and for systemctl to properly start, stop, or query the service, it needs a service definition file under /usr/lib/systemd/system.

RESOLUTION:
The code is modified to create the fsdedupschd.service file while installing the VRTSfsadv package.

* 3733812 (Tracking ID: 3729030)

SYMPTOM:
The fsdedupschd daemon failed to start on RHEL7.

DESCRIPTION:
The dedup service daemon failed to start because RHEL7 changed the service management mechanism to systemd. The daemon uses the new systemctl interface to start and stop the service, and for systemctl to properly start, stop, or query the service, it needs a service definition file under /usr/lib/systemd/system.

RESOLUTION:
The code is modified to create the fsdedupschd.service file while installing the VRTSfsadv package.
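
For reference, a service definition of the kind described might look like the
sketch below; the binary path and options are illustrative, and the actual
fsdedupschd.service file is created by the VRTSfsadv installer:

    [Unit]
    Description=VxFS deduplication scheduler

    [Service]
    # The binary path below is hypothetical.
    ExecStart=/opt/VRTS/bin/fsdedupschd
    Type=simple

    [Install]
    WantedBy=multi-user.target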

* 3737330 (Tracking ID: 3737329)

SYMPTOM:
Added support for RHEL7.1

DESCRIPTION:
Added support for RHEL7.1

RESOLUTION:
Added support for RHEL7.1

* 3743913 (Tracking ID: 3743912)

SYMPTOM:
Users could create more than 64K sub-directories on disk layouts with versions lower than 10.

DESCRIPTION:
In this release, the maxlink feature enables users to create more than 64K sub-directories. This feature is supported on disk layouts whose versions are higher than or equal to 10. The macro VX_TUNEMAXLINK denotes the maximum limit on sub-directories, and its value was changed from 64K to 4 billion. Due to this, users could create more than 64K sub-directories on layout versions lower than 10 as well, which is undesirable.
This fix is applicable only on platforms other than AIX.

RESOLUTION:
The code is modified such that the sub-directory limit remains 64K for layouts whose versions are lower than 10.

* 3744425 (Tracking ID: 3744424)

SYMPTOM:
Rebundled the fix for the OpenSSL Common Vulnerability and Exposure (CVE) 
CVE-2014-3566 (POODLE) in 6.2.1.

DESCRIPTION:
The VRTSfsadv package used old versions of OpenSSL which are vulnerable to
POODLE (CVE-2014-3566) and Heartbleed (CVE-2014-0160). By upgrading to
OpenSSL 0.9.8zc, many security vulnerabilities have been fixed.

RESOLUTION:
The VRTSfsadv package is built with OpenSSL 0.9.8zc.

* 3745651 (Tracking ID: 3642314)

SYMPTOM:
The umount operation reports error code 255 in case of write-back cache.

DESCRIPTION:
The umount helper binary uses the umount or umount2 system calls to unmount a VxFS file system. In case of error, these system calls return -1. The umount helper returned this value to the system (OS) umount binary, which interpreted it as 255.

RESOLUTION:
The code is modified to maintain consistency with the operating system's behavior. In case of an umount failure, the helper returns MOUNT_EX_FAIL, which is defined as 32 for RHEL7 and 1 for RHEL6 or SLES11.
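
The 255 comes from truncating the exit status to 8 bits: returning -1 from
the helper's main() is reported as 255 by the shell. A minimal sketch of the
corrected mapping (not the shipped helper):

    #include <sys/mount.h>

    #define MOUNT_EX_FAIL 32    /* RHEL7 value; 1 on RHEL6/SLES11 */

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return MOUNT_EX_FAIL;
        /* Map the syscall's -1 to the documented failure status
         * instead of letting it become 255. */
        if (umount(argv[1]) != 0)
            return MOUNT_EX_FAIL;
        return 0;
    }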

* 3749727 (Tracking ID: 3750187)

SYMPTOM:
Internal noise testing hit a debug assert.

DESCRIPTION:
The issue is observed because Linux inodes are not initialized properly. Initialization of Linux inodes is done at certain VxFS entry points, such as vx_lookup. In write-back replay, an inode is created based on the inode number in the log, and similar initialization is required there as well.

RESOLUTION:
The code is modified to have proper initialization of inodes.

* 3749776 (Tracking ID: 3637636)

SYMPTOM:
Cluster File System (CFS) node initialization and protocol upgrade may hang during rolling upgrade with the following stack trace:
vx_svar_sleep_unlock()
vx_event_wait()
vx_async_waitmsg()
vx_msg_broadcast()
vx_msg_send_join_version()
vx_msg_send_join()
vx_msg_gab_register()
vx_cfs_init()
vx_cfs_reg_fsckd()
vx_cfsaioctl()
vxportalunlockedkioctl()
vxportalunlockedioctl()

And

vx_delay()
vx_recv_protocol_upgrade_intent_msg()
vx_recv_protocol_upgrade()
vx_ctl_process_thread()
vx_kthread_init()

DESCRIPTION:
CFS node initialization waits for the protocol upgrade to complete, while the protocol upgrade waits for the flag related to the CFS initialization to be cleared. As a result, a deadlock occurs.

RESOLUTION:
The code is modified so that the protocol upgrade process does not wait for the CFS initialization flag to be cleared.

* 3755796 (Tracking ID: 3756750)

SYMPTOM:
VxFS may leak memory when the File Design Driver (FDD) module is unloaded before the cache file system is taken offline.

DESCRIPTION:
When the FDD module is unloaded before the cache file system is taken offline, a few FDD-related structures in the cache file system remain unfreed. As a result, a memory leak is observed.

RESOLUTION:
The code is modified such that the FDD-related structures are not initialized for cache file systems.

Patch ID: VRTSvxfs-6.2.0.100

* 3703631 (Tracking ID: 3615043)

SYMPTOM:
At times, while writing to a file, data could be lost.

DESCRIPTION:
While writing to a file with delayed allocation on, Solaris could dishonor the NON_CLUSTERING flag and 
cluster pages beyond the range for which the flush was issued, leading to data loss.

RESOLUTION:
The code is modified to clear the flag and flush the exact range in case of dalloc.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch sfha-rhel7_x86_64-Patch-6.2.1.7300.tar.gz to /tmp
2. Untar sfha-rhel7_x86_64-Patch-6.2.1.7300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/sfha-rhel7_x86_64-Patch-6.2.1.7300.tar.gz
    # tar xf /tmp/sfha-rhel7_x86_64-Patch-6.2.1.7300.tar
3. Install the hotfix (this will cause downtime):
    # pwd
    /tmp/hf
    # ./installSFHA621P73 [<host1> <host2>...]

You can also install this patch together with the 6.2.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE