fs-rhel6_x86_64-Patch-6.0.5.600

 Basic information
Release type: Patch
Release date: 2018-10-17
OS update support: None
Technote: None
Documentation: None
Popularity: 800 viewed
Download size: 12.61 MB
Checksum: 4287598680

 Applies to one or more of the following products:
VirtualStore 6.0.1 On RHEL6 x86-64
Storage Foundation 6.0.1 On RHEL6 x86-64
Storage Foundation Cluster File System 6.0.1 On RHEL6 x86-64
Storage Foundation for Oracle RAC 6.0.1 On RHEL6 x86-64
Storage Foundation HA 6.0.1 On RHEL6 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches: Release date
fs-rhel6_x86_64-Patch-6.0.5.400 (obsolete) 2016-03-01
fs-rhel6_x86_64-6.0.5.200 (obsolete) 2014-12-18
fs-rhel6_x86_64-6.0.5.100 (obsolete) 2014-09-10

This patch requires: Release date
sfha-rhel6_x86_64-6.0.5 2014-04-15

 Fixes the following incidents:
2705336, 2912412, 2927359, 2928921, 2933290, 2933291, 2933292, 2933294, 2933296, 2933300, 2933309, 2933313, 2933330, 2933333, 2933335, 2933571, 2933729, 2933751, 2933822, 2937367, 2972674, 2976664, 2978227, 2978234, 2982161, 2983739, 2984589, 2987373, 2988749, 2999566, 3008450, 3027250, 3040130, 3056103, 3059000, 3100385, 3108176, 3248029, 3248031, 3248042, 3248046, 3248054, 3296988, 3310758, 3317118, 3338024, 3338026, 3338030, 3338063, 3338750, 3338762, 3338776, 3338779, 3338780, 3338781, 3338787, 3338790, 3339230, 3339884, 3339949, 3339963, 3339964, 3340029, 3340031, 3348459, 3349652, 3351939, 3351946, 3351947, 3356841, 3356845, 3356892, 3356895, 3356909, 3357264, 3357278, 3359278, 3364285, 3364289, 3364302, 3364305, 3364307, 3364317, 3364333, 3364335, 3364338, 3364349, 3364353, 3364355, 3369037, 3369039, 3370650, 3372896, 3372909, 3380905, 3381928, 3383150, 3383271, 3384781, 3396539, 3402484, 3405172, 3409692, 3411725, 3426009, 3429587, 3430687, 3436393, 3468413, 3469683, 3498950, 3498954, 3498963, 3498976, 3498978, 3498998, 3499005, 3499008, 3499011, 3499014, 3499030, 3514824, 3515559, 3515569, 3515588, 3515737, 3515739, 3517702, 3517707, 3557193, 3567855, 3579957, 3581566, 3584297, 3588236, 3590573, 3595894, 3597454, 3597560, 3622423, 3673958, 3678626, 3682640, 3690056, 3726112, 3796626, 3796630, 3796633, 3796637, 3796644, 3796652, 3796664, 3796671, 3796676, 3796684, 3796687, 3796727, 3796731, 3796733, 3796745, 3799999, 3821416, 3843470, 3843734, 3848508, 3851967, 3852297, 3852512, 3861518, 3862350, 3862425, 3862435, 3864139, 3864751, 3866970, 3952360, 3957178

 Patch ID:
VRTSvxfs-6.0.500.600-RHEL6

Readme file
                          * * * READ ME * * *
                 * * * Veritas File System 6.0.5 * * *
                      * * * Patch 6.0.5.600 * * *
                         Patch Date: 2018-09-11


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas File System 6.0.5 Patch 6.0.5.600


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL6 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxfs


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec VirtualStore 6.0.1
   * Veritas Storage Foundation 6.0.1
   * Veritas Storage Foundation Cluster File System HA 6.0.1
   * Veritas Storage Foundation for Oracle RAC 6.0.1
   * Veritas Storage Foundation HA 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 6.0.500.600
* 3957178 (3957852) VxFS support for RHEL6.10.
Patch ID: 6.0.500.500
* 3952360 (3952357) VxFS support for RHEL6.x retpoline kernels
Patch ID: 6.0.500.400
* 2927359 (2927357) Assert hit in internal testing.
* 2933300 (2933297) Compression support for the dedup ioctl.
* 2972674 (2244932) Internal assert failure during testing.
* 3040130 (3137886) Thin Provisioning Logging does not work for reclaim
operations triggered via fsadm command.
* 3248031 (2628207) Full fsck operation taking long time.
* 3682640 (3637636) Cluster File System (CFS) node initialization and protocol upgrade may hang
during the rolling upgrade.
* 3690056 (3689354) Users having write permission on file cannot open the file
with O_TRUNC if the file has setuid or setgid bit set.
* 3726112 (3704478) The library routine to get the mount point fails to return
the mount point of the root file system.
* 3796626 (3762125) Directory size increases abnormally.
* 3796630 (3736398) NULL pointer dereference panic in lazy unmount.
* 3796633 (3729158) Deadlock occurs due to incorrect locking order between write advise and dalloc flusher thread.
* 3796637 (3574404) Stack overflow during rename operation.
* 3796644 (3269553) VxFS returns inappropriate message for read of hole via 
Oracle Disk Manager (ODM).
* 3796652 (3686438) NMI panic in the vx_fsq_flush function.
* 3796664 (3444154) Reading from a de-duped file-system over NFS can result in data corruption seen 
on the NFS client.
* 3796671 (3596378) The copy of a large number of small files is slower on vxfs
compared to ext4
* 3796676 (3615850) Write system call hangs with invalid buffer length
* 3796684 (3601198) Replication makes the copies of 64-bit external quota files too.
* 3796687 (3604071) High CPU usage consumed by the vxfs thread process.
* 3796727 (3617191) Checkpoint creation takes a lot of time.
* 3796731 (3558087) The ls -l and other commands which use the stat system call may
take a long time to complete.
* 3796733 (3695367) Unable to remove volume from multi-volume VxFS using "fsvoladm" command.
* 3796745 (3667824) System panicked while delayed allocation(dalloc) flushing.
* 3799999 (3602322) System panics while flushing the dirty pages of the inode.
* 3821416 (3817734) A direct command to run fsck with the -y|Y option was mentioned in
the message displayed to the user when a file system mount fails.
* 3843470 (3867131) Kernel panic in internal testing.
* 3843734 (3812914) On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.
* 3848508 (3867128) Assert failed in internal native AIO testing.
* 3851967 (3852324) Assert failure during internal stress testing.
* 3852297 (3553328) During internal testing full fsck failed to clean the file
system cleanly.
* 3852512 (3846521) "cp -p" fails if modification time in nano seconds have 10 
digits.
* 3861518 (3549057) The "relatime" mount option is shown in /proc/mounts but it is
not supported by VxFS.
* 3862350 (3853338) Files on VxFS are corrupted while running the sequential
asynchronous write workload under high memory pressure.
* 3862425 (3859032) System panics in vx_tflush_map() due to NULL pointer 
de-reference.
* 3862435 (3833816) Read returns stale data on one node of the CFS.
* 3864139 (3869174) Write system call deadlock on rhel5 and sles10.
* 3864751 (3867147) Assert failed in internal dedup testing.
* 3866970 (3866962) Data corruption is seen when dalloc writes are in progress on a file
and fsync is started on the same file simultaneously.
Patch ID: 6.0.500.200
* 3622423 (3250239) Panic in vx_naio_worker
* 3673958 (3660422) On RHEL 6.6, umount(8) system call hangs if an application is watching for inode
events using inotify(7) APIs.
* 3678626 (3613048) Support vectored AIO on Linux
Patch ID: 6.0.500.100
* 3409692 (3402618) The mmap read performance on VxFS is slow
* 3426009 (3412667) The RHEL 6 system panics with Stack Overflow.
* 3469683 (3469681) File system is disabled while free space defragmentation is going on.
* 3498950 (3356947) When there are multi-threaded writes with fsync calls between them, VxFS becomes slow.
* 3498954 (3352883) During the rename operation, lots of nfsd threads hang.
* 3498963 (3396959) RHEL 6.4 system panics with stack overflow errors due to memory pressure.
* 3498976 (3434811) The vxfsconvert(1M) in VxFS 6.1 hangs.
* 3498978 (3424564) fsppadm fails with ENODEV and "file is encrypted or is not a 
database" errors
* 3498998 (3466020) File System is corrupted with error message "vx_direrr: vx_dexh_keycheck_1"
* 3499005 (3469644) System panics in the vx_logbuf_clean() function.
* 3499008 (3484336) The fidtovp() system call can panic in the vx_itryhold_locked () function.
* 3499011 (3486726) VFR logs too much data on the target node.
* 3499014 (3471245) The mongodb fails to insert any record.
* 3499030 (3484353) The file system may hang with a partitioned directory feature enabled.
* 3514824 (3443430) Fsck allocates too much memory.
* 3515559 (3498048) While the system is making a backup, the "ls -l" command on the same file system may hang.
* 3515569 (3430461) The nested unmounts fail if the parent file system is disabled.
* 3515588 (3294074) System call fsetxattr() is slower on Veritas File System (VxFS) than ext3 file system.
* 3515737 (3511232) Stack overflow causes kernel panic in the vx_write_alloc() function.
* 3515739 (3510796) System panics when VxFS cleans the inode chunks.
* 3517702 (3517699) Return code 240 for command fsfreeze(1M) is not documented in man page for fsfreeze.
* 3517707 (3093821) The system panics due to referring freed super block after the vx_unmount() function errors.
* 3557193 (3473390) Multiple stack overflows with VxFS on RHEL6 lead to panics/system crashes.
* 3567855 (3567854) On VxFS 6.0.5, the vxedquota(1M) and vxrepquota(1M) commands fail in certain scenarios.
* 3579957 (3233315) "fsck" utility dumps core, with full scan.
* 3581566 (3560968) The delicache_enable tunable is not persistent in the Cluster File System (CFS) environment.
* 3584297 (3583930) While external quota file is restored or over-written, old quota records are preserved.
* 3588236 (3604007) Stack overflow on SLES11
* 3590573 (3331010) Command fsck(1M) dumped core with segmentation fault
* 3595894 (3595896) While creating the OracleRAC 12.1.0.2 database, the node panics.
* 3597454 (3602386) vfradmin man page shows the incorrect info about default
behavior of -d option
* 3597560 (3597482) The pwrite(2) function fails with the EOPNOTSUPP error.
Patch ID: 6.0.500.000
* 2705336 (2059611) The system panics due to a NULL pointer dereference while
flushing bitmaps to the disk.
* 2978234 (2972183) The fsppadm(1M) enforce command takes a long time on the secondary nodes
compared to the primary nodes.
* 2982161 (2982157) During internal testing, the "f:vx_trancommit:4" debug assert was hit when the available transaction space is less than required.
* 2999566 (2999560) The 'fsvoladm'(1M) command fails to clear the 'metadataok' flag on a volume.
* 3027250 (3031901) The 'vxtunefs(1M)' command accepts garbage values for the 'max_buf_data_size' tunable.
* 3056103 (3197901) prevent duplicate symbol in VxFS libvxfspriv.a and 
vxfspriv.so
* 3059000 (3046983) Invalid CFS node number in ".__fsppadm_fclextract", causes the DST policy 
enforcement failure.
* 3108176 (2667658) The fscdsconv(1M) endian conversion operation fails because of a macro overflow.
* 3248029 (2439261) When the vx_fiostats_tunable value is changed from zero to
non-zero, the system panics.
* 3248042 (3072036) Read operations from secondary node in CFS can sometimes fail with the ENXIO 
error code.
* 3248046 (3092114) The information output displayed by the "df -i" command may be inaccurate for 
cluster mounted file systems.
* 3248054 (3153919) The fsadm (1M) command may hang when the structural file set re-organization is 
in progress.
* 3296988 (2977035) A debug assert issue was encountered in vx_dircompact() function while running an internal noise test in the Cluster File System (CFS) environment
* 3310758 (3310755) Internal testing hits a debug assert "vx_rcq_badrecord:9:corruptfs".
* 3317118 (3317116) Internal command conformance testing for the mount command on RHEL6 Update 4 hit a debug assert inside the vx_get_sb_impl() function.
* 3338024 (3297840) A metadata corruption is found during the file removal process.
* 3338026 (3331419) System panic because of kernel stack overflow.
* 3338030 (3335272) The mkfs (make file system) command dumps core when the log 
size provided is not aligned.
* 3338063 (3332902) While shutting down, the system running the fsclustadm(1M)
command panics.
* 3338750 (2414266) The fallocate(2) system call fails on Veritas File System (VxFS) file systems in
the Linux environment.
* 3338762 (3096834) Intermittent vx_disable messages are displayed in the system log.
* 3338776 (3224101) After you enable the optimization for updating the i_size across the cluster
nodes lazily, the system panics.
* 3338779 (3252983) On a high-end system greater than or equal to 48 CPUs, some file system operations may hang.
* 3338780 (3253210) File system hangs when it reaches the space limitation.
* 3338781 (3249958) When the /usr file system is mounted as a separate file 
system, Veritas File system (VxFS) fails to load.
* 3338787 (3261462) File system with size greater than 16TB corrupts with vx_mapbad messages in the system log.
* 3338790 (3233284) The fsck binary hangs while checking the Reference Count Table (RCT).
* 3339230 (3308673) A fragmented file system is disabled when delayed allocations
feature is enabled.
* 3339884 (1949445) The system is unresponsive when files are created in a large directory.
* 3339949 (3271892) Veritas File Replicator (VFR) jobs fail if the same Process
ID (PID) is associated with the multiple jobs working on different target file
systems.
* 3339963 (3071622) On SLES10, bcopy(3) with overlapping address does not work.
* 3339964 (3313756) The file replication daemon exits unexpectedly and dumps core
on the target side.
* 3340029 (3298041) With the delayed allocation feature enabled on a locally 
mounted file system, observable performance degradation might be experienced 
when writing to a file and extending the file size.
* 3340031 (3337806) The find(1) command may panic the systems with Linux kernels
with versions greater than 3.0.
* 3348459 (3274048) VxFS hangs when it requests a cluster-wide grant on an inode while holding a lock on the inode.
* 3351939 (3351937) Command vfradmin(1M) may fail while promoting job on locally
mounted VxFS file system due to "relatime" mount option.
* 3351946 (3194635) The internal stress test on a locally mounted file system exited with an error message.
* 3351947 (3164418) Internal stress test on a locally mounted VxFS filesystem results in data corruption in a no-space-on-device scenario while doing a split on a Zero Fill-On-Demand (ZFOD) extent.
* 3359278 (3364290) The kernel may panic in Veritas File System (VxFS) when it is
internally working on reference count queue (RCQ) record.
* 3364285 (3364282) The fsck(1M) command  fails to correct inode list file
* 3364289 (3364287) Debug assert may be hit in the vx_real_unshare() function in the cluster environment.
* 3364302 (3364301) Assert failure because of improper handling of inode lock while truncating a reorg inode.
* 3364305 (3364303) Internal stress test on a locally mounted file system hits a debug assert in VxFS File Device Driver (FDD).
* 3364307 (3364306) Stack overflow seen in extent allocation code path.
* 3364317 (3364312) The fsadm(1M) command is unresponsive while processing the VX_FSADM_REORGLK_MSG message.
* 3364333 (3312897) System can hang when the Cluster File System (CFS) primary node is disabled.
* 3364335 (3331109) The full fsck does not repair the corrupted reference count queue (RCQ) record.
* 3364338 (3331045) Kernel Oops in unlock code of map while referring freed mlink due to a race with iodone routine for delayed writes.
* 3364349 (3359200) Internal test on Veritas File System (VxFS) fsdedup(1M) feature in cluster file system environment results in
a hang.
* 3364353 (3331047) Memory leak occurs in the vx_followlink() function in error condition.
* 3364355 (3263336) Internal noise test on cluster file system hits the "f:vx_cwfrz_wait:2" and "f:vx_osdep_msgprint:panic" debug asserts.
* 3369037 (3349651) VxFS modules fail to load on RHEL6.5
* 3369039 (3350804) System panic on RHEL6 due to kernel stack overflow corruption.
* 3370650 (2735912) The performance of tier relocation using the fsppadm(1M)
enforce command degrades while migrating a large number of files.
* 3372896 (3352059) High memory usage occurs when VxFS uses Veritas File Replicator (VFR) on the target even when no jobs are running.
* 3372909 (3274592) Internal noise test on cluster file system is unresponsive while executing the fsadm(1M) command
* 3380905 (3291635) Internal testing found debug assert "vx_freeze_block_threads_all:7c" on locally mounted file systems while processing preambles for transactions.
* 3381928 (3444771) Internal noise test on cluster file system hits debug assert
while creating a file.
* 3383150 (3383147) The "C" operator precedence error may occur while turning off
delayed allocation.
* 3383271 (3433786) The vxedquota(1M) command fails to set quota limits  for some users.
* 3396539 (3331093) The MountAgent process for VxFS gets stuck while doing repeated
switchovers on HP-UX.
* 3402484 (3394803) A panic is observed in the VxFS vx_upgrade7() function
while running the vxupgrade(1M) command.
* 3405172 (3436699) An assert failure occurs because of a race condition between clone mount thread and directory removal thread while pushing data on clone.
* 3411725 (3415639) The type of the fsdedupadm(1M) command always shows as MANUAL even if it is launched by the fsdedupschd daemon.
* 3429587 (3463464) Internal kernel functionality conformance test hits a kernel panic due to null pointer dereference.
* 3430687 (3444775) Internal noise testing on cluster file system results in a kernel panic in function vx_fsadm_query() with an error message.
* 3436393 (3462694) The fsdedupadm(1M) command fails with error code 9 when it
tries to mount checkpoints on a cluster.
* 3468413 (3465035) The VRTSvxfs and VRTSfsadv packages display an incorrect "Provides" list.
Patch ID: 6.0.300.300
* 3384781 (3384775) Installing patch 6.0.3.200 on RHEL 6.4  or earlier RHEL 6.* versions fails with ERROR: 
No appropriate modules found.
Patch ID: 6.0.300.200
* 3349652 (3349651) VxFS modules fail to load on RHEL6.5
* 3356841 (2059611) The system panics due to a NULL pointer dereference while
flushing bitmaps to the disk.
* 3356845 (3331419) System panic because of kernel stack overflow.
* 3356892 (3259634) A Cluster File System (CFS) with blocks larger than 4GB may
become corrupt.
* 3356895 (3253210) File system hangs when it reaches the space limitation.
* 3356909 (3335272) The mkfs (make file system) command dumps core when the log 
size provided is not aligned.
* 3357264 (3350804) System panic on RHEL6 due to kernel stack overflow corruption.
* 3357278 (3340286) After a file system is resized, the tunable setting of
dalloc_enable gets reset to a default value.
Patch ID: 6.0.300.100
* 3100385 (3369020) The Veritas File System (VxFS) module fails to load in the
RHEL 6 Update 4 environment.
Patch ID: 6.0.300.000
* 2912412 (2857629) File system corruption can occur requiring a full fsck(1M) 
after cluster reconfiguration.
* 2928921 (2843635) Internal testing is having some failures.
* 2933290 (2756779) The code is modified to improve the fix for the read and write performance
concerns on Cluster File System (CFS) when it runs applications that rely on
the POSIX file-record using the fcntl lock.
* 2933291 (2806466) A reclaim operation on a file system that is mounted on a
Logical Volume Manager (LVM) may panic the system.
* 2933292 (2895743) Accessing named attributes for some files stored in CFS seems to be slow.
* 2933294 (2750860) Performance of the write operation with small request size
may degrade on a large file system.
* 2933296 (2923105) Removal of the VxFS module from the kernel takes a longer time.
* 2933309 (2858683) Reserve extent attributes change after vxrestore for files greater than
8192 bytes.
* 2933313 (2841059) full fsck fails to clear the corruption in attribute inode 15
* 2933330 (2773383) The read and write operations on memory-mapped files are
unresponsive.
* 2933333 (2893551) The file attribute value is replaced with question mark symbols when the 
Network File System (NFS) connections experience a high load.
* 2933335 (2641438) After a system is restarted, the modifications that are performed on the 
username space-extended attributes are lost.
* 2933571 (2417858) VxFS quotas do not support 64 bit limits.
* 2933729 (2611279) Filesystem with shared extents may panic.
* 2933751 (2916691) Customers experience hangs when doing dedup operations.
* 2933822 (2624262) Filestore:Dedup:fsdedup.bin hit oops at vx_bc_do_brelse
* 2937367 (2923867) Internal test hits an assert "f:xted_set_msg_pri1:1".
* 2976664 (2906018) The vx_iread errors are displayed after successful log replay and mount of the 
file system.
* 2978227 (2857751) The internal testing hits the assert "f:vx_cbdnlc_enter:1a".
* 2983739 (2857731) Internal testing hits an assert "f:vx_mapdeinit:1"
* 2984589 (2977697) A core dump is generated while you are removing the clone.
* 2987373 (2881211) File ACLs not preserved in checkpoints properly if file has hardlink.
* 2988749 (2821152) Internal stress test hit an assert "f:vx_dio_physio:4, 1" on a
locally mounted file system.
* 3008450 (3004466) Installation of 5.1SP1RP3 fails on RHEL 6.3


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 6.0.500.600

* 3957178 (Tracking ID: 3957852)

SYMPTOM:
VxFS support for RHEL6.10.

DESCRIPTION:
RHEL6.10 is a new release with a retpoline kernel; therefore, VxFS support is added for it.

RESOLUTION:
Added VxFS support for RHEL6.10.

Patch ID: 6.0.500.500

* 3952360 (Tracking ID: 3952357)

SYMPTOM:
VxFS support for RHEL6.x retpoline kernels

DESCRIPTION:
Red Hat released retpoline kernels for older RHEL6.x releases. The
VxFS module must be recompiled with a retpoline-aware GCC to support these
retpoline kernels.

RESOLUTION:
Compiled VxFS with retpoline GCC.

Patch ID: 6.0.500.400

* 2927359 (Tracking ID: 2927357)

SYMPTOM:
Assert hit in internal testing.

DESCRIPTION:
Got the assert in internal testing when attribute inode being purged is marked
bad or the file system is disabled

RESOLUTION:
Code is modified to return EIO in such cases.

* 2933300 (Tracking ID: 2933297)

SYMPTOM:
Compression support for the dedup ioctl.

DESCRIPTION:
Compression support for the dedup ioctl for NBU.

RESOLUTION:
Added limited support for compressed extents to the dedup ioctl for NBU.

* 2972674 (Tracking ID: 2244932)

SYMPTOM:
A deadlock is seen when trying to reuse the inode number in the presence of a
checkpoint.

DESCRIPTION:
In the presence of a checkpoint, an inode number can be assigned in such a way
that a parent->child relationship in one fileset becomes child->parent in a
different fileset, which leads to a deadlock. When a checkpoint is mounted,
VxFS should not try to take a blocking lock on the second inode (as is done in
the rename vnode operation), because that inode may have been reused in the
clone fileset and its lock may already have been taken; even if there is no
push set for the inode, the rwlock is taken on the whole clone chain, even if
the chain inodes are unrelated. In addition, in the rename vnode operation,
when a hidden hash directory needs to be created, the parent directory is
exclusively rwlocked (currently a blocking lock); two simultaneous renames
involving the same parent inode can therefore also cause a deadlock.

RESOLUTION:
Code is modified to avoid this deadlock.

* 3040130 (Tracking ID: 3137886)

SYMPTOM:
Thin Provisioning Logging does not work for reclaim operations
triggered via fsadm command.

DESCRIPTION:
Thin Provisioning Logging does not work for reclaim operations
triggered via fsadm command.

RESOLUTION:
Code is added to log reclamation issued by the fsadm command, to create a
backup log file once the size of the reclaim log file exceeds 1MB, and to save
the command string of the fsadm command.

* 3248031 (Tracking ID: 2628207)

SYMPTOM:
A full fsck operation on a file system with a large number of Access Control 
List (ACL) settings and checkpoints takes a long time (in some cases, more than 
a week) to complete.

DESCRIPTION:
The fsck operation is mainly blocked in pass1d phase. The process of pass1d is 
as follows:
1.	For each fileset, pass1d goes through all the inodes in the ilist of 
the fileset and then retrieves the inode information from the disk.
2.	It extracts the corresponding attribute inode number.
3.	Then, it reads the attribute inode and counts the number of rules 
inside it.
4.	Finally, it reads the rules into the buffer and performs the checking.

The attribute rules data can reside anywhere on the file system, and the
checkpoint link may have to be followed to locate the real inodes, which consumes a
significant amount of time.

RESOLUTION:
The code is modified to: 
1.	Add a read ahead mechanism for the attribute ilist which boosts the 
read for attribute inodes via buffering. 
2.	Add a bitmap to record those attribute inodes which have been already 
checked to avoid redundant checks. 
3.	Add an option to the fsck operation which enables it to check a 
specific fileset separately rather than checking the entire file system.

* 3682640 (Tracking ID: 3637636)

SYMPTOM:
Cluster File System (CFS) node initialization and protocol upgrade may hang
during rolling upgrade with the following stack trace:
vx_svar_sleep_unlock()
vx_event_wait()
vx_async_waitmsg()
vx_msg_broadcast()
vx_msg_send_join_version()
vx_msg_send_join()
vx_msg_gab_register()
vx_cfs_init()
vx_cfs_reg_fsckd()
vx_cfsaioctl()
vxportalunlockedkioctl()
vxportalunlockedioctl()

And

vx_delay()
vx_recv_protocol_upgrade_intent_msg()
vx_recv_protocol_upgrade()
vx_ctl_process_thread()
vx_kthread_init()

DESCRIPTION:
CFS node initialization waits for the protocol upgrade to complete, while the
protocol upgrade waits for the flag related to CFS initialization to be cleared.
As a result, a deadlock occurs.

RESOLUTION:
The code is modified so that the protocol upgrade process does not wait to clear
the CFS initialization flag.

* 3690056 (Tracking ID: 3689354)

SYMPTOM:
Users having write permission on file cannot open the file with O_TRUNC
if the file has setuid or setgid bit set.

DESCRIPTION:
On Linux, the kernel triggers an explicit mode change as part of
O_TRUNC processing to clear the setuid/setgid bits. Only the file owner or a
privileged user is allowed to perform a mode change operation. Hence, for a
non-privileged user who is not the file owner, the mode change fails,
causing the open() system call to return EPERM.

RESOLUTION:
The mode change request to clear the setuid/setgid bits as part of
O_TRUNC processing is now allowed for users other than the owner.
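
For illustration, a minimal C sketch of the failing pattern (the path and the
setuid file are hypothetical); a user with write permission who is not the owner
opens such a file with O_TRUNC, which previously returned EPERM on VxFS:

    /* Illustrative sketch only; /mnt/vxfs/suidfile is a hypothetical setuid
     * file owned by another user, on which the current user has write access. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_TRUNC makes the kernel clear the setuid/setgid bits; before this
         * fix, that implicit mode change failed with EPERM for a non-owner. */
        int fd = open("/mnt/vxfs/suidfile", O_WRONLY | O_TRUNC);
        if (fd < 0) {
            printf("open failed: %s\n", strerror(errno));
            return 1;
        }
        printf("open with O_TRUNC succeeded\n");
        close(fd);
        return 0;
    }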

* 3726112 (Tracking ID: 3704478)

SYMPTOM:
The library routine to get the mount point fails to return the mount point
of the root file system.

DESCRIPTION:
To get the mount point of a path, the input path name is scanned
and the nearest path which represents a mount point is returned. But when the file is
in the root file system ("/"), this function returns error code 1 and hence
does not return the mount point of that file. This is because of a bug in the
path name parsing logic, which neglects the root mount point while parsing.

RESOLUTION:
Fix the logic in path name parsing so that the mount point of root 
file system is returned.

* 3796626 (Tracking ID: 3762125)

SYMPTOM:
Directory size sometimes keeps increasing even though the number of files inside it doesn't 
increase.

DESCRIPTION:
This only happens to CFS. A variable in the directory inode structure marks the start of 
directory free space. But when the directory ownership changes, the variable may become stale, which 
could cause this issue.

RESOLUTION:
The code is modified to reset this free-space marker when there is an
ownership change. The space search now starts from the beginning of the directory inode.

* 3796630 (Tracking ID: 3736398)

SYMPTOM:
Panic occurs in the lazy unmount path during deinit of VxFS-VxVM API.

DESCRIPTION:
The panic occurs when an exiting thread drops the last reference to
a lazy-unmounted VxFS file system which is the last VxFS mount on the system. The
exiting thread performs the unmount, which makes a call into VxVM to de-initialize
the private FS-VM API as this is the last mounted VxFS file system. The function to
be called in VxVM is looked up via the files under /proc. This requires a file to be
opened, but the exit processing has already removed the structures needed by the
thread to open a file, because of which a panic is observed.

RESOLUTION:
The code is modified to pass the deinit work to worker thread.

* 3796633 (Tracking ID: 3729158)

SYMPTOM:
The fuser and other commands hang on VxFS file systems.

DESCRIPTION:
The hang is seen while two threads contend for two locks, ILOCK and PLOCK. The write-advise thread owns the ILOCK but is waiting for the PLOCK, while the dalloc thread owns the PLOCK and is waiting for the ILOCK.

RESOLUTION:
The code is modified to correct the locking order: PLOCK is now taken before ILOCK.

* 3796637 (Tracking ID: 3574404)

SYMPTOM:
System panics because of a stack overflow during rename operation.

DESCRIPTION:
The stack overflows by 88 bytes in the rename code path, and the thread_info
structure is overwritten with VxFS page buffer head addresses.

RESOLUTION:
We now use dynamic allocation of local structures. This saves 256 bytes and
gives enough room.

* 3796644 (Tracking ID: 3269553)

SYMPTOM:
VxFS returns inappropriate message for read of hole via ODM.

DESCRIPTION:
Sometimes sparse files containing temporary or backup/restore data are
created outside the Oracle database, and Oracle can read these files only through
ODM. When ODM reads a hole in such a file, it fails with an ENOTSUP error.

RESOLUTION:
The code is modified to return zeros instead of an error.

* 3796652 (Tracking ID: 3686438)

SYMPTOM:
System panicked with NMI during file system transaction flushing.

DESCRIPTION:
In vx_iflush_list we take the icachelock to traverse the icache list and flush
the dirty inodes on the icache list to the disk. In that context, when we do a
vx_iunlock we may sleep while flushing and holding the icachelock which is a
spinlock. The other processors which are busy waiting for the same icache lock 
spinlock have the interrupts disabled, and this results in the NMI panic.

RESOLUTION:
In vx_iflush_list, use VX_iUNLOCK_NOFLUSH instead of vx_iunlock which avoids
flushing and sleeping holding the spinlock.

* 3796664 (Tracking ID: 3444154)

SYMPTOM:
Reading from a de-duped file-system over NFS can result in data corruption 
seen on the NFS client.

DESCRIPTION:
This is not corruption on the file-system, but comes from a combination of 
VxFS's shared page cache and Linux's TCP stack.

RESOLUTION:
To avoid the corruption, a temporary page is used to avoid sending the same
physical page back-to-back down the TCP stack.

* 3796671 (Tracking ID: 3596378)

SYMPTOM:
The copy of a large number of small files is slower on Veritas File
System (VxFS) compared to EXT4.

DESCRIPTION:
VxFS implements the fsetxattr() system call in a synchronous way.
Hence, before returning from the system call, VxFS takes some time to
flush the data to the disk. In this way, VxFS guarantees file system
consistency in case of a file system crash. However, this implementation has the
side effect of serializing the whole processing, which takes more time.

RESOLUTION:
The code is modified to change the transaction to flush the data in
a delayed way.

* 3796676 (Tracking ID: 3615850)

SYMPTOM:
The write system call writes up to count bytes from the buffer pointed to by buf to the file referred to by the file descriptor fd:

ssize_t write(int fd, const void *buf, size_t count);

When the count parameter is invalid, it can sometimes cause write() to hang on a VxFS file system, e.g. with a 10000-byte buffer but count mistakenly set to 30000.

DESCRIPTION:
On recent Linux kernels, a page fault cannot be taken while holding a page locked, so as to avoid a deadlock. This means uiomove can copy less than requested, and any partially populated pages created in the routine that establishes a virtual mapping for the page are destroyed.
This can cause an infinite loop in the write code path when the given user buffer is not aligned with a page boundary and the length given to write() causes an EFAULT: uiomove() does a partial copy, segmap_release destroys the partially populated pages and unwinds the uio, and the operation is then repeated.

RESOLUTION:
The code is modified to move the pre-faulting to the buffered I/O write loops; the system either shortens the length of the copy if all of the requested pages cannot be faulted, or fails with EFAULT if no pages are pre-faulted. This prevents the infinite loop.
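
As a hedged illustration of the problem call pattern (file path hypothetical),
the sketch below passes a count larger than the actual buffer to write();
before the fix this could hang on VxFS, and with the fix it either performs a
short write or fails with EFAULT:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[10000];              /* 10000-byte buffer ...                 */
        size_t bad_count = 30000;     /* ... but count mistakenly set to 30000 */
        memset(buf, 'x', sizeof(buf));

        int fd = open("/mnt/vxfs/out", O_CREAT | O_WRONLY, 0644); /* hypothetical path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = write(fd, buf, bad_count);
        if (n < 0)
            printf("write failed: %s\n", strerror(errno));  /* e.g. EFAULT */
        else
            printf("short write of %zd bytes\n", n);
        close(fd);
        return 0;
    }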

* 3796684 (Tracking ID: 3601198)

SYMPTOM:
Replication copies the 64-bit external quota files ('quotas.64' and 
'quotas.grp.64') to the destination file system.

DESCRIPTION:
The external quota files hold the quota limits for users and
groups. While replicating the file system, the vfradmin(1M) command copies
these external quota files to the destination file system. But the quota limits
of the source file system may not be applicable to the destination file system.
Hence, the external quota files should ideally be skipped during replication.

RESOLUTION:
Exclude the 64-bit external quota files in the replication process.

* 3796687 (Tracking ID: 3604071)

SYMPTOM:
With the thin reclaim feature turned on, you can observe high CPU usage on the
vxfs thread process.

DESCRIPTION:
In the routine that gets the broadcast information of a node containing maps of
Allocation Units (AUs) for which the node holds the delegations, the locking
mechanism is inefficient. Every time this routine is called, it performs a
series of down-up operations on a certain semaphore. This can result in
a huge CPU cost when many threads call the routine in parallel.

RESOLUTION:
The code is modified to optimize the locking mechanism in this routine so that
it performs the down-up operation on the semaphore only once.

* 3796727 (Tracking ID: 3617191)

SYMPTOM:
Checkpoint creation may take hours.

DESCRIPTION:
During checkpoint creation, when an inode is marked for removal and is
being overlaid, there may be a downstream clone, and VxFS starts pulling all the
data. With Oracle this is evident because temporary files are deleted during
checkpoint creation.

RESOLUTION:
The code is modified to selectively pull the data, only if a
downstream push inode exists for the file.

* 3796731 (Tracking ID: 3558087)

SYMPTOM:
When the stat system call is executed on a VxFS file system with the delayed
allocation feature enabled, it may take a long time or cause high CPU
consumption.

DESCRIPTION:
When the delayed allocation (dalloc) feature is turned on, the
flushing process takes much time. The process keeps the getpage lock held and
needs writers to keep the inode reader-writer lock held. The stat system call
may keep waiting for the inode reader-writer lock.

RESOLUTION:
Delayed allocation code is redesigned to keep the get page lock
unlocked while flushing.

* 3796733 (Tracking ID: 3695367)

SYMPTOM:
Unable to remove volume from multi-volume VxFS using "fsvoladm" command. It fails with "Invalid argument" error.

DESCRIPTION:
Volumes are not being added to the in-core volume list structure correctly. Therefore, removing a volume from a multi-volume VxFS file system using "fsvoladm" fails.

RESOLUTION:
The code is modified to add volumes in the in-core volume list structure correctly.

* 3796745 (Tracking ID: 3667824)

SYMPTOM:
Race between vx_dalloc_flush() and other threads turning off dalloc 
and reusing the inode results in a kernel panic.

DESCRIPTION:
Dalloc flushing can race with dalloc being disabled on the 
inode. In vx_dalloc_flush we drop the dalloc lock, the inode is no longer held,
so the inode can be reused while the dalloc flushing is still in 
progress.

RESOLUTION:
Resolve the race by taking a hold on the inode before doing the 
actual dalloc flushing. The hold prevents the inode from getting reused and
hence prevents the kernel panic.

* 3799999 (Tracking ID: 3602322)

SYMPTOM:
System may panic while flushing the dirty pages of the inode.

DESCRIPTION:
Panic may occur due to the synchronization problem between one
thread that flushes the inode, and the other thread that frees the chunks that
contain the inodes on the freelist. 

The thread that frees the chunks of inodes on the freelist grabs an inode, and 
clears/de-reference the inode pointer while deinitializing the inode. This may 
result in the pointer de-reference, if the flusher thread is working on the same
inode.

RESOLUTION:
The code is modified to resolve the race condition by taking proper
locks on the inode and freelist, whenever a pointer in the inode is de-referenced. 

If the inode pointer is already de-initialized to NULL, then the flushing is 
attempted on the next inode.

* 3821416 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message containing the
direct command to clean the file system with full fsck is printed to the user.

DESCRIPTION:
When mounting a file system with the full fsck flag set, the mount fails and a
message is printed asking the user to clean the file system with full fsck. This
message contains the direct command to run, which, if run without first collecting
a file system metasave, results in evidence being lost. Also, since fsck removes
the file system inconsistencies, it may lead to undesired data loss.

RESOLUTION:
A more generic message is given in the error message instead of the direct
command.

* 3843470 (Tracking ID: 3867131)

SYMPTOM:
Kernel panic in internal testing.

DESCRIPTION:
In internal testing, vdl_fsnotify_sb is found to be NULL because it is not
allocated and initialized in the initialization routine, vx_fill_super().
vdl_fsnotify_sb is initialized in vx_fill_super() only when the kernel's
fsnotify feature is available, but the fsnotify feature is not available in the
RHEL5/SLES10 kernels.

RESOLUTION:
Code is added to check whether the fsnotify feature is available in the
running kernel.

* 3843734 (Tracking ID: 3812914)

SYMPTOM:
On RHEL 6.5 and RHEL 6.4 latest kernel patch, umount(8) system call hangs if an
application watches for inode events using inotify(7) APIs.

DESCRIPTION:
On the latest RHEL 6.5 and RHEL 6.4 kernel patches, additional OS counters were added
in the super block to track inotify watches. These new counters were not implemented
in VxFS for the RHEL6.5/RHEL6.4 kernels. Hence, while doing umount, the operation hangs
until the counter in the superblock drops to zero, which never happens since
the counters are not handled in VxFS.

RESOLUTION:
The code is modified to handle additional counters added in super block of
RHEL6.5/RHEL6.4 latest kernel.
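
For reference, an application only needs a few inotify(7) calls to create the
kind of watch described above; a minimal sketch (the watched path is hypothetical):

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];

        int fd = inotify_init();                 /* create an inotify instance */
        if (fd < 0) {
            perror("inotify_init");
            return 1;
        }
        /* Watch a directory on the VxFS mount (hypothetical path). While such
         * a watch is active, umount(8) of the file system used to hang. */
        int wd = inotify_add_watch(fd, "/mnt/vxfs/dir",
                                   IN_CREATE | IN_DELETE | IN_MODIFY);
        if (wd < 0) {
            perror("inotify_add_watch");
            return 1;
        }
        /* Block until at least one event arrives, then exit. */
        ssize_t n = read(fd, buf, sizeof(buf));
        printf("read %zd bytes of inotify events\n", n);
        close(fd);
        return 0;
    }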

* 3848508 (Tracking ID: 3867128)

SYMPTOM:
Assert failed in internal native AIO testing.

DESCRIPTION:
On RHEL5/SLES10, in native AIO (NAIO), the iovec comes from the kernel stack. So
when the work item is handed off to the worker thread, the work item points
to an iovec structure in a stack frame which no longer exists. The iovec
memory can therefore be corrupted when it is reused for a new stack frame.

RESOLUTION:
Code is modified to allocate the iovec dynamically in naio hand-off code and 
copy it into the work item before doing handoff.

* 3851967 (Tracking ID: 3852324)

SYMPTOM:
Assert failure during internal stress testing.

DESCRIPTION:
While reading a partitioned directory, the offset is right-shifted by 8,
but on retry the offset was not left-shifted back to its original value. This can
lead to an offset of 0, which results in the assert failure.

RESOLUTION:
Code is modified to left shift offset by 8 before retrying.

* 3852297 (Tracking ID: 3553328)

SYMPTOM:
During internal testing it was found that the per-node LCT file was
corrupted, due to which attribute inode reference counts mismatched,
resulting in fsck failure.

DESCRIPTION:
During clone creation LCT from 0th pindex is copied to the new
clone's LCT. Any update to this LCT file from non-zeroth pindex can cause count
mismatch in the new fileset.

RESOLUTION:
The code is modified to handle this issue.

* 3852512 (Tracking ID: 3846521)

SYMPTOM:
"cp -p" fails with EINVAL for files whose modification time has a 10-digit
tv_nsec value. EINVAL is returned if the value in the tv_nsec field is
outside the range 0 to 999,999,999. VxFS supports the
update in usec, but when copying in user space, the usec value is converted to
nsec. In this case, the usec value has crossed its upper boundary limit of
999,999.

DESCRIPTION:
In a cluster, it is possible that the time differs across nodes.
When updating mtime, VxFS checks whether it is a cluster inode and whether the
inode's mtime is newer than the current node time; if so, it increments tv_usec
instead of changing mtime to an older time value. The tv_usec counter can
overflow here, which results in a 10-digit mtime.tv_nsec.

RESOLUTION:
Code is modified to reset the usec counter for mtime/atime/ctime when the
upper boundary limit of 999,999 is reached.
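
The EINVAL behavior can be reproduced from user space with utimensat(2) when
tv_nsec falls outside 0..999,999,999; a small hedged sketch (the path is
hypothetical):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(void)
    {
        struct timespec times[2];

        times[0].tv_sec  = 0;
        times[0].tv_nsec = UTIME_OMIT;       /* leave atime unchanged          */
        times[1].tv_sec  = 1500000000;
        times[1].tv_nsec = 1000000000L;      /* 10-digit value, > 999,999,999  */

        /* The kernel rejects tv_nsec values outside 0..999999999 with EINVAL,
         * which is what "cp -p" runs into when the source mtime is malformed. */
        if (utimensat(AT_FDCWD, "/mnt/vxfs/file", times, 0) < 0)  /* hypothetical path */
            printf("utimensat failed: %s\n", strerror(errno));
        return 0;
    }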

* 3861518 (Tracking ID: 3549057)

SYMPTOM:
The "relatime" mount option wrongly shown in /proc/mounts.

DESCRIPTION:
The "relatime" mount option wrongly shown in /proc/mounts. VxFS does
not understand relatime mount option. It comes from Linux kernel.

RESOLUTION:
Code is modified to handle the issue.

* 3862350 (Tracking ID: 3853338)

SYMPTOM:
Files on VxFS are corrupted while running the sequential write workload
under high memory pressure.

DESCRIPTION:
VxFS may sometimes miss writes under excessive write workload.
Corruption occurs because of the race between the writer thread, which is doing
sequential asynchronous writes, and the flusher thread, which flushes the in-core
dirty pages. Due to an overlapping write, they are serialized
over a page lock. Because of an optimization, this lock is released, leading to
a small window where the waiting thread could race.

RESOLUTION:
The code is modified to fix the race by reloading the inode write
size after taking the page lock.

* 3862425 (Tracking ID: 3859032)

SYMPTOM:
System panics in vx_tflush_map() due to NULL pointer dereference.

DESCRIPTION:
When converting to VxFS using vxconvert, new blocks are allocated to
structural files like the smap, and these blocks can contain garbage. This is done
with the expectation that fsck will rebuild the correct smap. However, fsck failed
to distinguish between an EAU that is fully EXPANDED and one that is merely
ALLOCATED. Because of this, if an allocation is done to a file whose last allocation
came from such an affected EAU, a sub-transaction is created on an EAU that is still
in the allocated state. Map buffers of such EAUs are not initialized properly in the
VxFS private buffer cache; as a result, these buffers are released back as stale
during the transaction commit. Later, if any file-system-wide sync tries to flush
the metadata, it can refer to these buffer pointers and panic, because these buffers
have already been released and reused.

RESOLUTION:
Code is modified in fsck to correctly set the state of the EAU on
disk. The involved code paths are also modified to avoid doing transactions on
unexpanded EAUs.

* 3862435 (Tracking ID: 3833816)

SYMPTOM:
In a CFS cluster, one node returns stale data.

DESCRIPTION:
In a 2-node CFS cluster, when node 1 opens the file and writes to
it, the locks are used with CFS_MASTERLESS flag set. But when node 2 tries to
open the file and write to it, the locks on node 1 are normalized as part of
HLOCK revoke. But after the Hlock revoke on node 1, when node 2 takes the PG
Lock grant to write, there is no PG lock revoke on node 1, so the dirty pages on
node 1 are not flushed and invalidated. The problem results in reads returning
stale data on node 1.

RESOLUTION:
The code is modified to cache the PG lock before normalizing it in
vx_hlock_putdata, so that after the normalizing, the cache grant stays with
node 1. When node 2 requests the PG lock, there is a revoke on node 1, which
flushes and invalidates the pages.

* 3864139 (Tracking ID: 3869174)

SYMPTOM:
Write system call might get into deadlock on rhel5 and sles10.

DESCRIPTION:
The issue exists due to page-fault handling while holding the page lock.
On RHEL5 and SLES10, during a write we may hold page locks; if a page fault
then happens, the page-fault handler waits on a lock that is already held,
resulting in a deadlock.

RESOLUTION:
This behavior is now handled: the pages are pre-faulted so that the deadlock
is avoided.

* 3864751 (Tracking ID: 3867147)

SYMPTOM:
Assert failed in internal dedup testing.

DESCRIPTION:
If dedup is run on the same file twice simultaneously and an extent split
happens, the second extent split of the same file can cause an assert
failure. This is due to stale extent information being used for the second split
after the first one.

RESOLUTION:
Code is modified to look up the new bmap again if the same inode is detected.

* 3866970 (Tracking ID: 3866962)

SYMPTOM:
Data corruption is seen when dalloc writes are in progress on a file and
fsync is started on the same file simultaneously.

DESCRIPTION:
If dalloc writes are in progress on a file and synchronous flushing is
started simultaneously on the same file, the synchronous flush tries to flush all
the dirty pages of the file without considering the underlying allocation.
In this case, flushing can happen on unallocated blocks, and this can result in
data loss.

RESOLUTION:
Code is modified to flush data only up to the actual allocation in the case of
dalloc writes.

Patch ID: 6.0.500.200

* 3622423 (Tracking ID: 3250239)

SYMPTOM:
Panic in vx_naio_worker with the following stack trace:
crash_kexec()
__die ()
do_page_fault()
error_exit()
vx_naio_worker()
vx_kthread_init()
kernel_thread()

DESCRIPTION:
If a thread submitting Linux native AIO has POSIX sibling threads and it
simply exits after the submit, the kernel will not wait for the AIO to finish
in the exit processing. When the AIO completes, it may dereference the task_struct
of the exited thread, which causes the panic.

RESOLUTION:
A VxFS hook is installed in the task_struct, so when the thread exits, it waits
for its AIO to complete whether it is cloned or not. Set the module load-time
tunable vx_naio_wait to a non-zero value to turn on this fix.

* 3673958 (Tracking ID: 3660422)

SYMPTOM:
On RHEL 6.6, umount(8) system call hangs if an application is watching for inode
events using inotify(7) APIs.

DESCRIPTION:
On RHEL 6.6, additional counters were added in the super block to track inotify
watches; these new counters were not implemented in VxFS.
Hence, while doing umount, the operation hangs until the counter in the
superblock drops to zero, which never happens since the counters are not handled
in VxFS.

RESOLUTION:
Code is modified to handle additional counters added in RHEL6.6.

* 3678626 (Tracking ID: 3613048)

SYMPTOM:
System can panic with the following stack::
 
machine_kexec
crash_kexec
oops_end
die
do_invalid_op
invalid_op
aio_complete
vx_naio_worker
vx_kthread_init

DESCRIPTION:
VxFS does not correctly support the vectored AIO commands IOCB_CMD_PREADV and
IOCB_CMD_PWRITEV, which causes a BUG to fire in the kernel code (in
fs/aio.c:__aio_put_req()).

RESOLUTION:
Support is added for the vectored AIO commands, and the increment of ->ki_users
is fixed so that it is guarded by the required spinlock.
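
A hedged sketch of a vectored AIO read submitted through libaio (the file path
is hypothetical, and io_prep_preadv() requires a libaio version that provides
it); this is the kind of IOCB_CMD_PREADV request that previously hit the BUG:

    /* Build with: gcc -o preadv_aio preadv_aio.c -laio   (illustrative) */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdio.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;
        char a[4096], b[4096];
        struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };

        int fd = open("/mnt/vxfs/datafile", O_RDONLY);   /* hypothetical path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        int rc = io_setup(1, &ctx);
        if (rc < 0) {
            fprintf(stderr, "io_setup: %d\n", rc);
            return 1;
        }
        /* Vectored AIO read (IOCB_CMD_PREADV) of two buffers at offset 0. */
        io_prep_preadv(&cb, fd, iov, 2, 0);

        if (io_submit(ctx, 1, cbs) != 1 || io_getevents(ctx, 1, 1, &ev, NULL) != 1) {
            fprintf(stderr, "AIO submission/completion failed\n");
            return 1;
        }
        printf("AIO preadv completed, res=%ld\n", (long)ev.res);
        io_destroy(ctx);
        close(fd);
        return 0;
    }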

Patch ID: 6.0.500.100

* 3409692 (Tracking ID: 3402618)

SYMPTOM:
The mmap read performance on VxFS is slow.

DESCRIPTION:
The mmap read performance on VxFS is poor because the read-ahead operation is not triggered while mmap reads are executed.

RESOLUTION:
An enhancement has been made to the read-ahead operation. It helps improve the mmap read performance.
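
A minimal sketch of the workload in question (the file path is hypothetical):
the file is mapped and read sequentially, which is the access pattern that now
benefits from read-ahead:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        unsigned long sum = 0;

        int fd = open("/mnt/vxfs/bigfile", O_RDONLY);    /* hypothetical path */
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("open/fstat");
            return 1;
        }
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Sequential mmap reads: each page fault previously went unassisted
         * by read-ahead, making this loop slow on VxFS. */
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += (unsigned char)p[i];

        printf("checksum %lu\n", sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }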

* 3426009 (Tracking ID: 3412667)

SYMPTOM:
On RHEL 6, the inode update operation may create a deep stack and cause a system panic due to stack overflow. Below is the stack trace:
dequeue_entity()
dequeue_task_fair()
dequeue_task()
deactivate_task()
thread_return()
io_schedule()
get_request_wait()
blk_queue_bio()
generic_make_request()
submit_bio()
vx_dev_strategy()
vx_bc_bwrite()
vx_bc_do_bawrite()
vx_bc_bawrite()
 vx_bwrite()
vx_async_iupdat()
vx_iupdat_local()
vx_iupdat_clustblks()
vx_iupdat_local()
vx_iupdat()
vx_iupdat_tran()
vx_tflush_inode()
vx_fsq_flush()
vx_tranflush()
vx_traninit()
vx_get_alloc()
vx_tran_get_alloc()
vx_alloc_getpage()
vx_do_getpage()
vx_internal_alloc()
vx_write_alloc()
vx_write1()
vx_write_common_slow()
vx_write_common()
vx_vop_write()
vx_writev()
vx_naio_write_v2()
do_sync_readv_writev()
do_readv_writev()
vfs_writev()
nfsd_vfs_write()
nfsd_write()
nfsd3_proc_write()
nfsd_dispatch()
svc_process_common()
svc_process()
nfsd()
kthread()
kernel_thread()

DESCRIPTION:
Some VxFS operations may need an inode update. This may create a very deep stack and cause a system panic due to stack overflow.

RESOLUTION:
The code is modified to add a handoff point in the inode update function. If the stack usage reaches a threshold, it will start a separate thread to do the work to limit stack usage.

* 3469683 (Tracking ID: 3469681)

SYMPTOM:
Free space defragmentation results in an EBUSY error, and the file system is disabled.

DESCRIPTION:
While remounting the file system, the re-initialization gives an EBUSY error if the in-core and on-disk version numbers of an inode do not match. When pushing data blocks to the clone, the inode version of the immediate clone inode is bumped. But if there is another clone in the chain, then the ILIST extent of this immediate clone inode is not pushed onto that clone. This is not right because the inode has been modified.

RESOLUTION:
The code is modified so that the ILIST extents of the immediate clone inode are pushed onto the next clone in the chain.

* 3498950 (Tracking ID: 3356947)

SYMPTOM:
VxFS doesn't work as fast as expected when multi-threaded writes
interspersed with fsync calls are issued onto a file.

DESCRIPTION:
When multi-threaded writes are issued with fsync calls in between the writes, fsync can serialise the writes by taking the IRWLOCK on the inode and doing whole-file putpages. Therefore, out-of-the-box performance is relatively slow in terms of throughput.

RESOLUTION:
The code is fixed to remove fsync's serialisation with the IRWLOCK and make it conditional only for some cases.
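
The workload pattern in question is sketched below, assuming a hypothetical
mount point and file sizes: several writer threads write to the same file with
fsync calls interspersed between the writes.

    /* Build with: gcc -o fsync_writers fsync_writers.c -pthread  (illustrative) */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define NTHREADS 4
    #define NWRITES  100

    static int fd;

    static void *writer(void *arg)
    {
        char buf[8192];
        memset(buf, 'A' + (int)(intptr_t)arg, sizeof(buf));
        for (int i = 0; i < NWRITES; i++) {
            if (write(fd, buf, sizeof(buf)) < 0)
                perror("write");
            if (fsync(fd) < 0)              /* fsync interspersed with writes */
                perror("fsync");
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        fd = open("/mnt/vxfs/testfile", O_CREAT | O_WRONLY, 0644); /* hypothetical path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        for (intptr_t i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, writer, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        close(fd);
        return 0;
    }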

* 3498954 (Tracking ID: 3352883)

SYMPTOM:
During the rename operation, many nfsd threads waiting for a mutex operation hang with the following stack traces:
vxg_svar_sleep_unlock 
vxg_get_block
vxg_api_initlock  
vx_glm_init_blocklock
vx_cbuf_lookup  
vx_getblk_clust 
vx_getblk_cmn 
vx_getblk
vx_fshdchange
vx_unlinkclones
vx_clone_dispose
vx_workitem_process
vx_worklist_process
vx_worklist_thread
vx_kthread_init


vxg_svar_sleep_unlock 
vxg_grant_sleep 
vxg_cmn_lock 
vxg_api_trylock 
vx_glm_trylock 
vx_glmref_trylock 
vx_mayfrzlock_try 
vx_walk_fslist 
vx_log_sync 
vx_workitem_process 
vx_worklist_process 
vx_worklist_thread 
vx_kthread_init

DESCRIPTION:
A race condition is observed between the NFS rename and the additional dentry alias created by the current vx_splice_alias() function.
This race condition causes two different directory dentries to point to the same inode, which results in a mutex deadlock in the lock_rename() function.

RESOLUTION:
The code is modified to change the vx_splice_alias() function to prevent the creation of the additional dentry alias.

* 3498963 (Tracking ID: 3396959)

SYMPTOM:
On RHEL 6.4 with insufficient free memory, creating a file may panic the system with the following stack trace: 

shrink_mem_cgroup_zone()
shrink_zone()
do_try_to_free_pages()
try_to_free_pages()
__alloc_pages_nodemask()
alloc_pages_current()
__get_free_pages()
vx_getpages()
vx_alloc()
vx_bc_getfreebufs()
vx_bc_getblk()
vx_getblk_bp()
vx_getblk_cmn()
vx_getblk()
vx_iread()
vx_local_iread()
vx_iget()
vx_ialloc()
vx_dirmakeinode()
vx_dircreate()
vx_dircreate_tran()
vx_pd_create()
vx_create1_pd()
vx_do_create()
vx_create1()
vx_create_vp()
vx_create()
vfs_create()
do_filp_open()
do_sys_open() 
sys_open()
system_call_fastpath()

DESCRIPTION:
VxFS estimates the stack that is required to perform various kernel operations and creates hand-off threads if the estimated stack usage goes above the allowed kernel limit. However, the estimation may go wrong when the system is under heavy memory pressure, as some Linux kernel changes in RHEL 6.4 increase the depth of stack. There might be additional functions that are called in case of getpage to alleviate the situation, which leads to increased stack usage.

RESOLUTION:
The code is modified to adjust the stack depth calculation so that stack usage
is correctly estimated under memory pressure conditions.

* 3498976 (Tracking ID: 3434811)

SYMPTOM:
In VxFS 6.1, the vxfsconvert(1M) command hangs within the vxfsl3_getext()
function with the following stack trace:

search_type()
bmap_typ()
vxfsl3_typext()
vxfsl3_getext()
ext_convert()
fset_convert()
convert()

DESCRIPTION:
There is a type casting problem for extent size. It may cause a non-zero value to overflow and turn into zero by mistake. This further leads to infinite looping inside the function.

RESOLUTION:
The code is modified to remove the intermediate variable and avoid type casting.

* 3498978 (Tracking ID: 3424564)

SYMPTOM:
fsppadm fails with ENODEV and "file is encrypted or is not a database" 
errors

DESCRIPTION:
The ENODEV error handler was missing in the code path that processes only
directory inodes, and the database was corrupted, which caused the second error.

RESOLUTION:
An error handler is added to ignore ENODEV while processing only directory
inodes. For the database corruption, a log message is added to capture all the
db logs to understand why the corruption happened.

* 3498998 (Tracking ID: 3466020)

SYMPTOM:
File System is corrupted with the following error message in the log:

WARNING: msgcnt 28 mesg 008: V-2-8: vx_direrr: vx_dexh_keycheck_1 - /TraceFile
file system dir inode 3277090 dev/block 0/0 diren
 WARNING: msgcnt 27 mesg 008: V-2-8: vx_direrr: vx_dexh_keycheck_1 - /TraceFile
file system dir inode 3277090 dev/block 0/0 diren
 WARNING: msgcnt 26 mesg 008: V-2-8: vx_direrr: vx_dexh_keycheck_1 - /TraceFile
file system dir inode 3277090 dev/block 0/0 diren
 WARNING: msgcnt 25 mesg 096: V-2-96: vx_setfsflags -
 /dev/vx/dsk/a2fdc_cfs01/trace_lv01 file system fullfsck flag set - vx_direr
 WARNING: msgcnt 24 mesg 008: V-2-8: vx_direrr: vx_dexh_keycheck_1 - /TraceFile
file system dir inode 3277090 dev/block 0/0 diren

DESCRIPTION:
In case an error is returned from the vx_dirbread() function via the vx_dexh_keycheck1() function, the FULLFSCK flag is set on the file system unconditionally. A corrupted LDH can lead to the reading of the wrong block, which results in the setting of the FULLFSCK flag. The system doesn't verify whether it is reading the wrong value due to a corrupted LDH, so the FULLFSCK flag is set unnecessarily, even though a corrupted LDH can be fixed online by recreating the hash.

RESOLUTION:
The code is modified such that when a corruption of the LDH is detected, the system removes the Large Directory hash instead of setting FULLFSCK. The Large Directory Hash will then be recreated the next time the directory is modified.

* 3499005 (Tracking ID: 3469644)

SYMPTOM:
System panics in the vx_logbuf_clean() function while traversing chain of transactions off the intent log buffer. The stack trace is as follows:


vx_logbuf_clean ()
vx_logadd ()
vx_log()
vx_trancommit()
vx_exh_hashinit ()
vx_dexh_create ()
vx_dexh_init ()
vx_pd_rename ()
vx_rename1_pd()
vx_do_rename ()
vx_rename1 ()
vx_rename ()
vx_rename_skey ()

DESCRIPTION:
The system panics as the vx_logbuf_clean() function tries to access an already freed transaction from the transaction chain to flush it to the log.

RESOLUTION:
The code is modified to make sure that transaction gets flushed to the log before it is freed.

* 3499008 (Tracking ID: 3484336)

SYMPTOM:
The fidtovp() system call can panic in the vx_itryhold_locked() function with the following stack trace:

vx_itryhold_locked
vx_iget
vx_common_vget
vx_do_vget
vx_vget_skey
vfs_vget
fidtovp
kernel_add_gate_cstack
nfs3_fhtovp
rfs3_getattr
rfs_dispatch
svc_getreq
threadentry
[kdb_read_mem]

DESCRIPTION:
Some VxFS operations like the vx_vget() function try to get a hold on an in-core inode using the vx_itryhold_locked() function, but they don't take the lock on the corresponding directory inode. This may lead to a race condition when this inode is present on the delicache list and is inactivated. This results in a panic when the vx_itryhold_locked() function tries to remove it from a free list. This is actually a known issue, but the last fix was not complete; it missed some functions which may also cause the race condition.

RESOLUTION:
The code is modified to take inode list lock inside the vx_inactive_tran(), vx_tranimdone() and vx_tranuninode() functions to avoid race condition.

* 3499011 (Tracking ID: 3486726)

SYMPTOM:
VFR logs too much data on the target node.

DESCRIPTION:
On the target node, VFR logs debug-level messages even if the debug mode is off. Also, it doesn't consider the debug mode specified at the time of job creation.

RESOLUTION:
The code is modified to not log the debug level messages on the target node if the specified debug mode is set off.

* 3499014 (Tracking ID: 3471245)

SYMPTOM:
MongoDB fails to insert any record because lseek fails to seek to the EOF.

DESCRIPTION:
On Linux, fallocate doesn't update the inode's i_size, which leaves lseek unable to seek to the EOF.

RESOLUTION:
Before returning from the vx_fallocate() function, the vx_getattr() function is called to update the Linux inode with the VxFS inode.
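
The user-visible behavior can be checked with a small program. The sketch below is only an illustration of the symptom described above (the file path and sizes are arbitrary assumptions), not part of the VxFS fix.

/*
 * Minimal user-space sketch: after fallocate(2) extends a file, lseek(2) with
 * SEEK_END is expected to land on the new end of file.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/mnt/vxfs/fallocate_test";
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Extend the file to 1 MB; mode 0 means the file size must grow. */
    if (fallocate(fd, 0, 0, 1024 * 1024) != 0) {
        perror("fallocate");
        return 1;
    }

    /* With the fix, SEEK_END should report the preallocated size. */
    off_t eof = lseek(fd, 0, SEEK_END);
    printf("lseek(SEEK_END) reports %lld bytes\n", (long long)eof);

    close(fd);
    return 0;
}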

* 3499030 (Tracking ID: 3484353)

SYMPTOM:
It is a self-deadlock caused by a missing unlock of DIRLOCK. Its typical stack trace is like the following:
 
slpq_swtch_core()
real_sleep()
sleep_one()
vx_smp_lock()
vx_dirlock()
vx_do_rename()
vx_rename1()
vx_rename()
vn_rename()
rename()
syscall()

DESCRIPTION:
When a partitioned directory feature (PD) of Veritas File System (VxFS) is enabled, there is a possibility of self-deadlock when there are multiple renaming threads operating on the same target directory.
The issue is due to the fact that there is a missing unlock of DIRLOCK in the vx_int_rename() function.

RESOLUTION:
The code is modified to add the missing unlock of the directory lock in the vx_int_rename() function.

* 3514824 (Tracking ID: 3443430)

SYMPTOM:
Fsck allocates too much memory.

DESCRIPTION:
Since Storage Foundation 6.0, parallel inode list processing with multiple threads is introduced to help reduce the fsck time. However, the parallel threads have to allocate redundant memory instead of reusing buffers in the buffer cache efficiently when inode list has many holes.

RESOLUTION:
The code is fixed to make each thread maintain its own buffer cache from which it can reuse free memory.

* 3515559 (Tracking ID: 3498048)

SYMPTOM:
While the system is making a backup, the 'ls -l' command on the same file system may hang.

DESCRIPTION:
When the dalloc (delayed allocation) feature is turned on, flushing takes quite a long time while holding the getpage lock, which is needed by writers that hold the read-write lock on inodes. The 'ls -l' command needs ACLs (access control lists) to display information. But in Veritas File System (VxFS), ACLs are accessed only under protection of the inode read-write lock, which results in the hang.

RESOLUTION:
The code is modified to turn dalloc off and improve write throttling by restricting the kernel flusher from updating the internal counter for write page flush.

* 3515569 (Tracking ID: 3430461)

SYMPTOM:
Nested unmounts as well as force unmounts fail if the parent file system is disabled, which further inhibits the unmounting of the child file system.

DESCRIPTION:
If a file system is mounted inside another vxfs mount, and if the parent file system gets disabled, then it is not possible to sanely unmount the child even with the force unmounts. This issue is observed because a disabled file system does not allow directory look up on it. On Linux, a file system can be unmounted only by providing the path of the mount point.

RESOLUTION:
The code is modified to allow an exceptional path lookup for unmounts. These are read-only operations and hence are safer. This makes it possible for the unmount of the child file system to proceed.
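
For reference, a forced unmount on Linux goes through the umount2(2) system call. The sketch below is illustrative only (the nested mount point path is an assumption) and simply issues the MNT_FORCE request that this fix allows to proceed when the parent file system is disabled.

/*
 * Illustrative sketch: request a forced unmount of a nested mount point on
 * Linux. The mount point path is an assumption for the example.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    const char *child_mnt = "/parent_mnt/child_mnt";   /* hypothetical path */

    /* MNT_FORCE asks the file system to abort pending requests and unmount. */
    if (umount2(child_mnt, MNT_FORCE) != 0) {
        perror("umount2(MNT_FORCE)");
        return 1;
    }
    printf("%s force-unmounted\n", child_mnt);
    return 0;
}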

* 3515588 (Tracking ID: 3294074)

SYMPTOM:
System call fsetxattr() is slower on Veritas File System (VxFS) than ext3 file system.

DESCRIPTION:
VxFS implements the fsetxattr() system call in a synchronous way. Hence, it takes some time to flush the data to the disk before returning from the system call, to guarantee file system consistency in case of a file system crash.

RESOLUTION:
The code is modified to allow the transaction to flush the data in a delayed way.
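
For context, fsetxattr(2) is the standard Linux extended-attribute call whose latency is discussed above. The sketch below is a minimal illustration with an arbitrary file path and attribute name, and is not VxFS code.

/*
 * Minimal sketch of the fsetxattr(2) call discussed above. The file path and
 * attribute name are arbitrary examples.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/vxfs/testfile", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char *value = "bar";
    /* Set a user-namespace extended attribute on the open file. */
    if (fsetxattr(fd, "user.example", value, strlen(value), 0) != 0) {
        perror("fsetxattr");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}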

* 3515737 (Tracking ID: 3511232)

SYMPTOM:
The kernel panics in the vx_write_alloc() function with the following stack trace:

__schedule_bug
thread_return
schedule_timeout
wait_for_common
wait_for_completion
vx_getpages_handoff
vx_getpages
vx_alloc
vx_mklbtran
vx_te_bufalloc
vx_te_bmap_split
vx_te_bmap_alloc
vx_bmap_alloc_typed
vx_bmap_alloc
vx_get_alloc
vx_tran_get_alloc
vx_alloc_getpage
vx_do_getpage
vx_internal_alloc
vx_write_alloc
vx_write1
vx_write_common_slow
vx_write_common
vx_write
vx_naio_write
vx_naio_write_v2
aio_rw_vect_retry
aio_run_iocb
do_io_submit
sys_io_submit
system_call_fastpath

DESCRIPTION:
During a memory allocation, the stack overflows  and corrupts the
thread_info structure of the thread. Then when the system sleeps for a handed-off result, the corruption caused by the overflow is detected and the system panics.

RESOLUTION:
The code is fixed to detect thread_info corruptions, as well as to change some stack allocations to dynamic allocations to save stack usage.

* 3515739 (Tracking ID: 3510796)

SYMPTOM:
System panics in Linux 3.0.x kernel when VxFS cleans the inode chunks.

DESCRIPTION:
On Linux, the kernel swap daemon (kswapd) takes a reference hold on a page but not the owning inode. In vx_softcnt_flush(), when the inode's final softcnt drops, the kernel calls the vx_real_destroy_inode() function. On 3.x.x kernels, vx_real_destroy_inode() temporarily clears the address-space operations. Therefore a window is created where kswapd works on a page without VxFS's address-space operations set, and results in a panic.

RESOLUTION:
The code is fixed to flush all pages for an inode before it calls the vx_softcnt_flush() function.

* 3517702 (Tracking ID: 3517699)

SYMPTOM:
Return code 240 for command fsfreeze(1M) is not documented in man page for fsfreeze.

DESCRIPTION:
Return code 240 for command fsfreeze(1M) is not documented in man page for fsfreeze.

RESOLUTION:
The man page for fsfreeze(1M) is modified to document return code 240.

* 3517707 (Tracking ID: 3093821)

SYMPTOM:
The system panics due to a reference to a freed super block after the vx_unmount() function returns an error.

DESCRIPTION:
In the file system unmount process, once the Linux VFS calls into the VxFS-specific unmount processing, vx_unmount(), it does not expect an error from this call. So, once the vx_unmount() function returns, Linux frees the file system's corresponding super_block object. But if any error is observed during vx_unmount(), the file system's inodes may be left in the VxFS inode cache as they are. When there is no error, the file system's inodes are processed and dropped during vx_unmount().

The file system's inodes left in the VxFS inode cache would still point to the freed super_block object. When these inodes in the inode cache are later cleaned up to be freed or reused, they may refer to the freed super block in certain cases, which might lead to a panic due to a NULL pointer dereference.

RESOLUTION:
The code is modified not to return an EIO or ENXIO error from vx_detach_fset() when the file system is unmounted. Instead of returning an error, the inodes are processed and dropped from the inode cache.

* 3557193 (Tracking ID: 3473390)

SYMPTOM:
In memory pressure scenarios, we see panics/system crashes due to stack overflows.

DESCRIPTION:
Specifically on RHEL6, the memory allocation routines consume much higher memory than other distributions like SLES, or even RHEL5.  Due to this, multiple overflows are reported for the RHEL6 platform. Most of these overflows occur when Veritas File System (VxFS) tries to allocate memory under memory pressure.

RESOLUTION:
The code is modified to fix multiple overflows by adding handoff codepaths, adjusting handoff limits, removing on-stack structures and reducing the number of function frames on stack wherever possible.

* 3567855 (Tracking ID: 3567854)

SYMPTOM:
On Veritas File System (VxFS) 6.0.5:
- If the file system has no external quota file, the vxedquota(1M) command gives an error.
- If the first mounted VxFS file system in the mnttab file contains no quota file, the vxrepquota(1M) command fails.

DESCRIPTION:
- The vxedquota(1M) issue:
When the vxedquota(1M) command is executed, it expects the 32-bit quota file to be present in the mount point, even if the file system does not contain any external quota file.
This results in an error message while you are editing the quotas.
This issue is fixed by processing only the file systems on which quota files are present.
- The vxrepquota(1M) issue:
When vxrepquota(1M) is executed, the command scans the mnttab file to look for the VxFS file systems which have external quota files. But if the first VxFS file system doesn't contain any quota file, the command gives an error and does not report any quota information.
This issue is fixed by looping through the mnttab file until a mount point in which quota files exist is found.

RESOLUTION:
- For the vxedquota(1M) issue, the code is modified to skip the file systems which don't contain quota files.
- For the vxrepquota(1M) issue, the code is modified to obtain a VxFS mount point (with quota files) from the mnttab file, as pictured in the sketch below.
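
The mnttab scanning described above can be pictured with the standard getmntent(3) interface. The sketch below is illustrative only (it is not the vxrepquota source) and simply keeps walking the mount table past the first vxfs entry.

/*
 * Illustrative sketch: iterate over all mounted file systems instead of
 * stopping at the first vxfs entry, so a vxfs mount without quota files does
 * not end the search.
 */
#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *mtab = setmntent("/etc/mtab", "r");
    if (mtab == NULL) {
        perror("setmntent");
        return 1;
    }

    struct mntent *ent;
    while ((ent = getmntent(mtab)) != NULL) {
        if (strcmp(ent->mnt_type, "vxfs") != 0)
            continue;
        /* A real tool would now check this mount point for quota files and
         * stop only once a usable mount point is found. */
        printf("vxfs mount point: %s\n", ent->mnt_dir);
    }

    endmntent(mtab);
    return 0;
}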

* 3579957 (Tracking ID: 3233315)

SYMPTOM:
"fsck" utility dumps core, while checking the RCT file.

DESCRIPTION:
"fsck" utility dumps core, while checking the RCT file. "bmap_search_typed()"
function is passed with wrong parameter, and results in the core dump with the
following stack trace:

bmap_get_typeparms ()
bmap_search_typed_raw()
bmap_search_typed()
rct_walk()
bmap_check_typed_raw()
rct_check()
main()

RESOLUTION:
Fixed the code to pass the correct parameters to "bmap_search_typed()" function.

* 3581566 (Tracking ID: 3560968)

SYMPTOM:
The delicache_enable tunable is inconsistent in the CFS environment.

DESCRIPTION:
On the secondary nodes, the tunable values are exported from the primary mount, while the delicache_enable tunable value comes from the "tunefstab" file. Therefore the tunable values are not persistent.

RESOLUTION:
The code is fixed to read the "tunefstab" file only for the delicache_enable tunable during mount and set the value accordingly.

* 3584297 (Tracking ID: 3583930)

SYMPTOM:
When external quota file is over-written or restored from backup, new settings which were added after the backup still remain.

DESCRIPTION:
The purpose of the quotaon operation is to copy the quota limits from external to internal quota file, because internal quota file is not always updated with correct limits. To complete the copy operation, the extent of external file is compared to the extent of internal file at the corresponding offset.
     Now, if external quota file is overwritten (or restored to its original copy) and the size of internal file is more than that of external, the quotaon operation does not clear the additional (stale) quota records in the internal file. Later, the sync operation (part of quotaon) copies these stale records from internal to external file. Hence, both internal and external files contain stale records.

RESOLUTION:
The code is modified to get rid of the stale records in the internal file at the time of quotaon.

* 3588236 (Tracking ID: 3604007)

SYMPTOM:
Stack overflow was observed during extent allocation code path.

DESCRIPTION:
In the extent allocation code path during a write, a stack overflow is seen on SLES11. A hand-off point already exists in the code path, but the hand-off was not happening because the remaining stack was a few bytes more than the threshold needed to trigger the hand-off.

RESOLUTION:
The value of the trigger point is changed so that the hand-off can take place.

* 3590573 (Tracking ID: 3331010)

SYMPTOM:
The fsck(1M) command dumped core with a segmentation fault. The following stack trace is observed:

fakebmap()
rcq_apply_op()
rct_process_pending_tasklist()
process_device()
main()

DESCRIPTION:
While working on the device in the process_device() function, the fsck command tries to access already freed device-related structures available in the pending task list during the retry code path.

RESOLUTION:
The code is modified to free up the pending task list before retrying in the process_device() function.

* 3595894 (Tracking ID: 3595896)

SYMPTOM:
While creating the OracleRAC 12.1.0.2 database, the node panics with the following stack:
aio_complete()
vx_naio_do_work()
vx_naio_worker()
vx_kthread_init()

DESCRIPTION:
For a zero-size request (with a correctly aligned buffer), Veritas File System (VxFS) erroneously queues the work internally and returns -EIOCBQUEUED. The kernel calls the aio_complete() function for this zero-size request. However, while VxFS is performing the queued work internally, the aio_complete() function gets called again. The double call of aio_complete() results in the panic.

RESOLUTION:
The code is modified such that zero size requests will not queue elements inside VxFS work queue.
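
The request shape involved is easiest to see with the Linux native AIO interface. The sketch below is an illustrative reproducer only, assuming libaio (link with -laio) and an arbitrary file path; it submits a correctly aligned write of zero bytes, the kind of request described above.

/*
 * Illustrative reproducer sketch (assumes libaio; build with -laio). It
 * submits a zero-length, aligned pwrite through io_submit(2). It is not part
 * of the VxFS fix itself.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/vxfs/aiofile", O_CREAT | O_WRONLY | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *buf = NULL;
    if (posix_memalign(&buf, 4096, 4096) != 0) {   /* correctly aligned buffer */
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    io_context_t ctx = 0;
    if (io_setup(1, &ctx) != 0) {                  /* returns negative errno on failure */
        fprintf(stderr, "io_setup failed\n");
        return 1;
    }

    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, fd, buf, 0, 0);            /* note: a zero-byte request */

    if (io_submit(ctx, 1, cbs) < 0) {
        fprintf(stderr, "io_submit failed\n");
    } else {
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);        /* reap the single completion */
    }

    io_destroy(ctx);
    close(fd);
    free(buf);
    return 0;
}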

* 3597454 (Tracking ID: 3602386)

SYMPTOM:
The vfradmin man page shows incorrect information about the default behavior of the -d option.

DESCRIPTION:
When the vfradmin command is run without the -d option, debugging is in ENABLED mode by default, but the man page indicates that the default debugging mode should be DISABLED.

RESOLUTION:
The man page of vfradmin is changed to reflect the correct default behavior.

* 3597560 (Tracking ID: 3597482)

SYMPTOM:
The pwrite(2) function fails with EOPNOTSUPP when the write range is in two indirect extents.

DESCRIPTION:
When the range of pwrite() falls in two indirect extents, where one ZFOD extent belongs to DB2 pre-allocated files created with the setext( , VX_GROWFILE, ) ioctl and the other DATA extent belongs to the adjacent INDIR, the write fails with EOPNOTSUPP.
The reason is that Veritas File System (VxFS) tries to coalesce extents which belong to different indirect address extents as part of this transaction. Such a metadata change consumes more transaction resources than the VxFS transaction engine can support in the current implementation.

RESOLUTION:
The code is modified to retry the write transaction without combining the extents.

Patch ID: 6.0.500.000

* 2705336 (Tracking ID: 2059611)

SYMPTOM:
The system panics due to a NULL pointer dereference while flushing the
bitmaps to the disk and the following stack trace is displayed:
...
vx_unlockmap+0x10c
vx_tflush_map+0x51c
vx_fsq_flush+0x504
vx_fsflush_fsq+0x190
vx_workitem_process+0x1c
vx_worklist_process+0x2b0
vx_worklist_thread+0x78

DESCRIPTION:
The vx_unlockmap() function unlocks a map structure of the file
system. If the map is being used, the hold count is incremented. The
vx_unlockmap() function attempts to check whether this is an empty mlink doubly
linked list. The asynchronous vx_mapiodone routine can change the link at random
even though the hold count is zero.

RESOLUTION:
The code is modified to change the evaluation rule inside the
vx_unlockmap() function, so that further evaluation can be skipped over when map
hold count is zero.

* 2978234 (Tracking ID: 2972183)

SYMPTOM:
"fsppadm enforce"  takes longer than usual time force update the secondary 
nodes than it takes to force update the primary nodes.

DESCRIPTION:
The ilist is force updated on secondary node. As a result the performance on 
the secondary becomes low.

RESOLUTION:
Force update the ilist file on Secondary nodes only on error condition.

* 2982161 (Tracking ID: 2982157)

SYMPTOM:
During internal testing, the "f:vx_trancommit:4" debug assert was hit when the available transaction space is less than the required space.

DESCRIPTION:
The "f:vx_trancommit:4" assert is hit when the available transaction space is less than required. During file truncate operations, when VxFS calculates the transaction space, it doesn't consider the transaction space required in case the file has shared extents. As a result, the "f:vx_trancommit:4" debug assert is hit.

RESOLUTION:
The code is modified to take into account the extra transaction buffer space required when the file being truncated has shared extents.

* 2999566 (Tracking ID: 2999560)

SYMPTOM:
While trying to clear the 'metadataok' flag on a volume of the volume set, the 'fsvoladm'(1M) command gives error.

DESCRIPTION:
The 'fsvoladm'(1M) command sets and clears the 'dataonly' and 'metadataok' flags on a volume in a vset on which VxFS is mounted.
The 'fsvoladm'(1M) command fails while clearing the 'metadataok' flag and reports an EINVAL (invalid argument) error for certain volumes. This failure occurs because, while clearing the flag, VxFS reinitializes the reorg structure for some volumes. During re-initialization, VxFS frees the existing FS structures. However, it still refers to the stale device structure, resulting in an EINVAL error.

RESOLUTION:
The code is modified to let the in-core device structure point to the updated and correct data.

* 3027250 (Tracking ID: 3031901)

SYMPTOM:
The 'vxtunefs(1M)' command accepts a garbage value for the 'max_buf_data_size' tunable.

DESCRIPTION:
When a garbage value for the 'max_buf_data_size' tunable is specified using 'vxtunefs(1M)', the tunable accepts the value and gives a successful update message, but the value does not actually get reflected in the system. The error returned from parsing the command-line value of the 'max_buf_data_size' tunable is not identified; hence the garbage value for this tunable is also accepted.

RESOLUTION:
The code is modified to handle the error returned from parsing the command line value of the 'max_buf_data_size' tunable.
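
The kind of command-line validation the fix adds can be illustrated with standard C parsing. The helper below is hypothetical (it is not the vxtunefs(1M) source) and simply rejects values that are not fully numeric.

/*
 * Hypothetical illustration of strict numeric parsing for a tunable value;
 * a value such as "128kfoo" or "abc" is rejected instead of being silently
 * accepted.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_tunable(const char *arg, unsigned long long *out)
{
    char *end = NULL;

    errno = 0;
    unsigned long long val = strtoull(arg, &end, 10);

    /* Reject empty strings, trailing garbage, and out-of-range values. */
    if (end == arg || *end != '\0' || errno == ERANGE)
        return -1;

    *out = val;
    return 0;
}

int main(int argc, char **argv)
{
    unsigned long long val;

    if (argc < 2 || parse_tunable(argv[1], &val) != 0) {
        fprintf(stderr, "invalid tunable value\n");
        return 1;
    }
    printf("parsed value: %llu\n", val);
    return 0;
}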

* 3056103 (Tracking ID: 3197901)

SYMPTOM:
fset_get fails for the mentioned configuration.

DESCRIPTION:
A duplicate symbol, fs_bmap, exists in the VxFS libvxfspriv.a and vxfspriv.so libraries.

RESOLUTION:
The duplicate symbol fs_bmap in the VxFS libvxfspriv.a and vxfspriv.so libraries has been fixed by renaming it to fs_bmap_priv in libvxfspriv.a.

* 3059000 (Tracking ID: 3046983)

SYMPTOM:
There is an invalid CFS node number (<inode number>) 
in ".__fsppadm_fclextract". This causes the Dynamic Storage Tiering (DST) 
policy enforcement to fail.

DESCRIPTION:
DST policy enforcement sometimes depends on the extraction of the File Change 
Log (FCL). When the FCL change log is processed, it reads the FCL records from 
the change log into the buffer. If it finds that the buffer is not big enough 
to hold the records, it will do some rollback and pass out the needed buffer 
size. However, the rollback is not complete, which results in the problem.

RESOLUTION:
The code is modified to add code to roll back the content of "fh_bp1->fb_addr" and "fh_bp2->fb_addr".

* 3108176 (Tracking ID: 2667658)

SYMPTOM:
An attempt to perform an fscdsconv endian conversion from the SPARC big-endian byte order to the x86 little-endian byte order fails because of a macro overflow.

DESCRIPTION:
Using the fscdsconv(1M) command to perform an endian conversion from the SPARC big-endian (any SPARC architecture machine) byte order to the x86 little-endian (any x86 architecture machine) byte order fails. The write operation for the recovery file results in an overflow of the control data offset (a macro hard coded to 500MB).

RESOLUTION:
The code is modified to take an estimate of the control-data offset explicitly 
and dynamically while creating and writing the recovery file.

* 3248029 (Tracking ID: 2439261)

SYMPTOM:
When the vx_fiostats_tunable is changed from zero to non-zero, the
system panics with the following stack trace:
vx_fiostats_do_update
vx_fiostats_update
vx_read1
vx_rdwr
vno_rw
rwuio
pread

DESCRIPTION:
When vx_fiostats_tunable is changed from zero to non-zero, all the
incore-inode fiostats attributes are set to NULL. When these attributes are
accessed, the system panics due to the NULL pointer dereference.

RESOLUTION:
The code has been modified to check the file I/O stat attributes are
present before dereferencing the pointers.

* 3248042 (Tracking ID: 3072036)

SYMPTOM:
Reads from secondary node in CFS can sometimes fail with ENXIO (No such device 
or address).

DESCRIPTION:
The incore attribute ilist on secondary node is out of sync with that of the 
primary.

RESOLUTION:
The code is modified such that incore attribute ilist on secondary node is force
updated with data from primary node.

* 3248046 (Tracking ID: 3092114)

SYMPTOM:
The information output by the "df -i" command can often be inaccurate for 
cluster mounted file systems.

DESCRIPTION:
In Cluster File System 5.0 release a concept of delegating metadata to nodes in 
the cluster is introduced. This delegation of metadata allows CFS secondary 
nodes to update metadata without having to ask the CFS primary to do it. This 
provides greater node scalability. 
However, the "df -i" information is still collected by the CFS primary 
regardless of which node (primary or secondary) the "df -i" command is executed 
on.

For inodes the granularity of each delegation is an Inode Allocation Unit 
[IAU], thus IAUs can be delegated to nodes in the cluster.
When using a VxFS 1Kb file system block size each IAU will represent 8192 
inodes.
When using a VxFS 2Kb file system block size each IAU will represent 16384 
inodes.
When using a VxFS 4Kb file system block size each IAU will represent 32768 
inodes.
When using a VxFS 8Kb file system block size each IAU will represent 65536 
inodes.
Each IAU contains a bitmap that determines whether each inode it represents is 
either allocated or free, the IAU also contains a summary count of the number 
of inodes that are currently free in the IAU.
The ""df -i" information can be considered as a simple sum of all the IAU 
summary counts.
Using a 1Kb block size IAU-0 will represent inodes numbers      0 -  8191
Using a 1Kb block size IAU-1 will represent inodes numbers   8192 - 16383
Using a 1Kb block size IAU-2 will represent inodes numbers  16384 - 32768
etc.
The inaccurate "df -i" count occurs because the CFS primary has no visibility 
of the current IAU summary information for IAU that are delegated to Secondary 
nodes.
Therefore the number of allocated inodes within an IAU that is currently 
delegated to a CFS Secondary node is not known to the CFS Primary.  As a 
result, the "df -i" count information for the currently delegated IAUs is 
collected from the Primary's copy of the IAU summaries. Since the Primary's 
copy of the IAU is stale, the "df -i" count is only accurate when no IAUs are 
currently delegated to CFS secondary nodes.
In other words, the IAUs currently delegated to CFS secondary nodes will cause 
the "df -i" count to be inaccurate.
Once an IAU is delegated to a node, it can "timeout" after 3 minutes of 
inactivity. However, not all IAU delegations will timeout. One IAU will always 
remain delegated to each node for performance reasons. Also, an IAU whose 
inodes are all allocated (so no free inodes remain in the IAU) would not 
timeout either.
The issue can best be summarized as:
The more IAUs that remain delegated to CFS secondary nodes, the greater the 
inaccuracy of the "df -i" count.

RESOLUTION:
Allow the delegations for IAU's whose inodes are all allocated (so no free 
inodes in the IAU) to "timeout" after 3 minutes of inactivity.
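
For reference, the per-block-size figures quoted in the description follow a simple proportionality of 8192 inodes per IAU for each 1Kb of block size. The snippet below only restates that arithmetic for illustration and is not taken from the product sources.

/*
 * Restates the IAU arithmetic quoted in the description above: 8192 inodes
 * per IAU at a 1Kb block size, scaling linearly with the block size.
 */
#include <stdio.h>

int main(void)
{
    const unsigned int block_sizes_kb[] = { 1, 2, 4, 8 };

    for (unsigned int i = 0; i < 4; i++) {
        unsigned int inodes_per_iau = 8192 * block_sizes_kb[i];
        printf("%uKb block size: %u inodes per IAU\n",
               block_sizes_kb[i], inodes_per_iau);
    }
    return 0;
}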

* 3248054 (Tracking ID: 3153919)

SYMPTOM:
The fsadm(1M) command may hang when the structural file set re-organization is
in progress. The following stack trace is observed:
vx_event_wait
vx_icache_process
vx_switch_ilocks_list
vx_cfs_icache_process
vx_switch_ilocks
vx_fs_reinit
vx_reorg_dostruct
vx_extmap_reorg
vx_struct_reorg 
vx_aioctl_full
vx_aioctl_common
vx_aioctl
vx_ioctl
vx_compat_ioctl
compat_sys_ioctl

DESCRIPTION:
During the structural file set re-organization, due to some race condition, the
VX_CFS_IOWN_TRANSIT flag is set on the inode. At the final stage of the
structural file set re-organization, all the inodes are re-initialized. Since, 
the VX_CFS_IOWN_TRANSIT flag is set improperly, the re-initialization fails to
proceed. This causes the hang.

RESOLUTION:
The code is modified such that VX_CFS_IOWN_TRANSIT flag is cleared.

* 3296988 (Tracking ID: 2977035)

SYMPTOM:
While running an internal noise test in a Cluster File System (CFS) environment, a debug assert issue was observed in the vx_dircompact() function.

DESCRIPTION:
Compacting of directory blocks is avoided if the inode has "extop" (extended operation) flags, such as deferred inode removal and pass-through truncation, set. The issue is caused when the inode has extended pass-through truncation and is considered for compacting.

RESOLUTION:
The code is modified to avoid compacting the directory blocks of the inode if it has an extended operation of pass-through truncation set.

* 3310758 (Tracking ID: 3310755)

SYMPTOM:
When the system processes an indirect extent, if it finds the first record as Zero Fill-On-Demand (ZFOD) extent (or first n records are ZFOD records), then it hits the assert.

DESCRIPTION:
In case of indirect extents, the reference count mechanism (shared block count) for files having shared ZFOD extents does not behave correctly.

RESOLUTION:
The code for the reference count queue (RCQ) handling of the shared indirect ZFOD extents is modified, and the fsck(1M) issues with a snapshot of a file that has ZFOD extents have been fixed.

* 3317118 (Tracking ID: 3317116)

SYMPTOM:
An internal command conformance test for the mount command on RHEL6 Update4 hit a debug assert inside the vx_get_sb_impl() function.

DESCRIPTION:
In RHEL6 Update4 kernel security update 2.6.32-358.18.1, Redhat changed the flag used to save mount status of dentry from d_flags to d_mounted. This resulted in debug assert in the vx_get_sb_impl() function, as d_flags was used to check mount status of dentry in RHEL6.

RESOLUTION:
The code is modified to use d_flags if OS is RHEL6 update2, else use d_mounted to determine mount status for dentry.

* 3338024 (Tracking ID: 3297840)

SYMPTOM:
A metadata corruption is found during the file removal process with the inode block count getting negative.

DESCRIPTION:
When the user removes or truncates a file having the shared indirect blocks, there can be an instance where the block count can be updated to reflect the removal of the shared indirect blocks when the blocks are not removed from the file. The next iteration of the loop updates the block count again while removing these blocks. This will eventually lead to the block count being a negative value after all the blocks are removed from the file. The removal code expects the block count to be zero before updating the rest of the metadata.

RESOLUTION:
The code is modified to update the block count and other tracking metadata in the same transaction as the blocks are removed from the file.

* 3338026 (Tracking ID: 3331419)

SYMPTOM:
Machine panics with the following stack trace.

 #0 [ffff883ff8fdc110] machine_kexec at ffffffff81035c0b
 #1 [ffff883ff8fdc170] crash_kexec at ffffffff810c0dd2
 #2 [ffff883ff8fdc240] oops_end at ffffffff81511680
 #3 [ffff883ff8fdc270] no_context at ffffffff81046bfb
 #4 [ffff883ff8fdc2c0] __bad_area_nosemaphore at ffffffff81046e85
 #5 [ffff883ff8fdc310] bad_area at ffffffff81046fae
 #6 [ffff883ff8fdc340] __do_page_fault at ffffffff81047760
 #7 [ffff883ff8fdc460] do_page_fault at ffffffff815135ce
 #8 [ffff883ff8fdc490] page_fault at ffffffff81510985
    [exception RIP: print_context_stack+173]
    RIP: ffffffff8100f4dd  RSP: ffff883ff8fdc548  RFLAGS: 00010006
    RAX: 00000010ffffffff  RBX: ffff883ff8fdc6d0  RCX: 0000000000002755
    RDX: 0000000000000000  RSI: 0000000000000046  RDI: 0000000000000046
    RBP: ffff883ff8fdc5a8   R8: 000000000002072c   R9: 00000000fffffffb
    R10: 0000000000000001  R11: 000000000000000c  R12: ffff883ff8fdc648
    R13: ffff883ff8fdc000  R14: ffffffff81600460  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff883ff8fdc540] print_context_stack at ffffffff8100f4d1
#10 [ffff883ff8fdc5b0] dump_trace at ffffffff8100e4a0
#11 [ffff883ff8fdc650] show_trace_log_lvl at ffffffff8100f245
#12 [ffff883ff8fdc680] show_trace at ffffffff8100f275
#13 [ffff883ff8fdc690] dump_stack at ffffffff8150d3ca
#14 [ffff883ff8fdc6d0] warn_slowpath_common at ffffffff8106e2e7
#15 [ffff883ff8fdc710] warn_slowpath_null at ffffffff8106e33a
#16 [ffff883ff8fdc720] hrtick_start_fair at ffffffff810575eb
#17 [ffff883ff8fdc750] pick_next_task_fair at ffffffff81064a00
#18 [ffff883ff8fdc7a0] schedule at ffffffff8150d908
#19 [ffff883ff8fdc860] __cond_resched at ffffffff81064d6a
#20 [ffff883ff8fdc880] _cond_resched at ffffffff8150e550
#21 [ffff883ff8fdc890] vx_nalloc_getpage_lnx at ffffffffa041afd5 [vxfs]
#22 [ffff883ff8fdca80] vx_nalloc_getpage at ffffffffa03467a3 [vxfs]
#23 [ffff883ff8fdcbf0] vx_do_getpage at ffffffffa034816b [vxfs]
#24 [ffff883ff8fdcdd0] vx_do_read_ahead at ffffffffa03f705e [vxfs]
#25 [ffff883ff8fdceb0] vx_read_ahead at ffffffffa038ed8a [vxfs]
#26 [ffff883ff8fdcfc0] vx_do_getpage at ffffffffa0347732 [vxfs]
#27 [ffff883ff8fdd1a0] vx_getpage1 at ffffffffa034865d [vxfs]
#28 [ffff883ff8fdd2f0] vx_fault at ffffffffa03d4788 [vxfs]
#29 [ffff883ff8fdd400] __do_fault at ffffffff81143194
#30 [ffff883ff8fdd490] handle_pte_fault at ffffffff81143767
#31 [ffff883ff8fdd570] handle_mm_fault at ffffffff811443fa
#32 [ffff883ff8fdd5e0] __get_user_pages at ffffffff811445fa
#33 [ffff883ff8fdd670] get_user_pages at ffffffff81144999
#34 [ffff883ff8fdd690] vx_dio_physio at ffffffffa041d812 [vxfs]
#35 [ffff883ff8fdd800] vx_dio_rdwri at ffffffffa02ed08e [vxfs]
#36 [ffff883ff8fdda20] vx_write_direct at ffffffffa044f490 [vxfs]
#37 [ffff883ff8fddaf0] vx_write1 at ffffffffa04524bf [vxfs]
#38 [ffff883ff8fddc30] vx_write_common_slow at ffffffffa0453e4b [vxfs]
#39 [ffff883ff8fddd30] vx_write_common at ffffffffa0454ea8 [vxfs]
#40 [ffff883ff8fdde00] vx_write at ffffffffa03dc3ac [vxfs]
#41 [ffff883ff8fddef0] vfs_write at ffffffff81181078
#42 [ffff883ff8fddf30] sys_pwrite64 at ffffffff81181a32
#43 [ffff883ff8fddf80] system_call_fastpath at ffffffff8100b072

DESCRIPTION:
The panic is due to kernel referring to corrupted thread_info structure from the
scheduler, thread_info got corrupted by stack overflow. While doing direct I/O
write, user-space pages need to be pre-faulted using __get_user_pages() code
path. This code path is very deep can end up consuming lot of stack space.

RESOLUTION:
Reduced the kernel stack consumption by ~400-500 bytes in this code path by
making various changes in the way pre-faulting is done.

* 3338030 (Tracking ID: 3335272)

SYMPTOM:
The mkfs (make file system) command dumps core when the log size 
provided is not aligned. The following stack trace is displayed:

(gdb) bt
#0  find_space ()
#1  place_extents ()
#2  fill_fset ()
#3  main ()
(gdb)

DESCRIPTION:
While creating the VxFS file system using the mkfs command, if the 
log size provided is not aligned properly, miscalculations can occur while 
placing the RCQ extents and no place is found for them. This leads to an 
illegal memory access of the AU bitmap and results in the core dump.

RESOLUTION:
The code is modified to place the RCQ extents in the same AU where 
log extents are allocated.

* 3338063 (Tracking ID: 3332902)

SYMPTOM:
The system running the fsclustadm(1M) command panics while shutting
down. The following stack trace is logged along with the panic:
machine_kexec
crash_kexec
oops_end
page_fault [exception RIP: vx_glm_unlock]
vx_cfs_frlpause_leave [vxfs]
vx_cfsaioctl [vxfs]
vxportalkioctl [vxportal]
vfs_ioctl
do_vfs_ioctl
sys_ioctl
system_call_fastpath

DESCRIPTION:
There exists a race-condition between "fsclustadm(1M) cfsdeinit"
and "fsclustadm(1M) frlpause_disable". The "fsclustadm(1M) cfsdeinit" fails
after cleaning the Group Lock Manager (GLM),  without downgrading the CFS state.
Under the false CFS state, the "fsclustadm(1M) frlpause_disable" command enters
and accesses the GLM lock, which "fsclustadm(1M) cfsdeinit" frees resulting in a
panic.

There exists another race between the code in vx_cfs_deinit() and the code in
fsck. It leads to a situation where, although fsck holds a reservation, this
cannot prevent vx_cfs_deinit() from freeing vx_cvmres_list, because there is
no check for vx_cfs_keepcount.

RESOLUTION:
The code is modified to add appropriate checks in the
"fsclustadm(1M) cfsdeinit" and "fsclustadm(1M) frlpause_disable" to avoid the
race-condition.

* 3338750 (Tracking ID: 2414266)

SYMPTOM:
The fallocate(2) system call fails on VxFS file systems in the Linux environment.

DESCRIPTION:
The fallocate(2) system call, which is used for pre-allocating the file space on
Linux, is not supported on VxFS.

RESOLUTION:
The code is modified to support the fallocate(2) system call on VxFS in the
Linux environment.

* 3338762 (Tracking ID: 3096834)

SYMPTOM:
Intermittent vx_disable messages are displayed in system log.

DESCRIPTION:
VxFS displays intermittent vx_disable messages. The file system is
not corrupt and the fsck(1M) command does not indicate any problem with the file
system. However, the file system gets disabled.

RESOLUTION:
The code is modified to make the vx_disable message verbose with
stack trace information to facilitate further debugging.

* 3338776 (Tracking ID: 3224101)

SYMPTOM:
On a file system that is mounted by a cluster, the system panics after you
enable the lazy optimization for updating the i_size across the cluster nodes.
The stack trace may look as follows:
vxg_free()
vxg_cache_free4()
vxg_cache_free()
vxg_free_rreq()
vxg_range_unlock_body()
vxg_api_range_unlock()
vx_get_inodedata()
vx_getattr()
vx_linux_getattr()
vxg_range_unlock_body()
vxg_api_range_unlock()
vx_get_inodedata()
vx_getattr()
vx_linux_getattr()

DESCRIPTION:
On a file system that is mounted by a cluster with the -o cluster option, read
operations or write operations take a range lock to synchronize updates across
the different nodes. The lazy optimization incorrectly enables a node to release
a range lock which is not acquired and panic the node.

RESOLUTION:
The code has been modified to release only those range locks which are acquired.

* 3338779 (Tracking ID: 3252983)

SYMPTOM:
On a high-end system greater than or equal to 48 CPUs, some file-system
operations may hang with the following stack trace:
vx_ilock()
vx_tflush_inode()
vx_fsq_flush()
vx_tranflush()
vx_traninit()
vx_tran_iupdat()
vx_idelxwri_done()
vx_idelxwri_flush()
vx_delxwri_flush()
vx_workitem_process()
vx_worklist_process()
vx_worklist_thread()

DESCRIPTION:
The function to get an inode returns an incorrect error value if there are no
free inodes available in-core. This error value causes an inode to be
allocated on-disk instead of in-core. As a result, the same function is called
again, resulting in a continuous loop.

RESOLUTION:
The code is modified to return the correct error code.

* 3338780 (Tracking ID: 3253210)

SYMPTOM:
When the file system reaches the space limitation, it hangs with the following stack trace:
vx_svar_sleep_unlock()
default_wake_function()
wake_up()
vx_event_wait()
vx_extentalloc_handoff()
vx_te_bmap_alloc()
vx_bmap_alloc_typed()
vx_bmap_alloc()
vx_bmap()
vx_exh_allocblk()
vx_exh_splitbucket()
vx_exh_split()
vx_dopreamble()
vx_rename_tran()
vx_pd_rename()

DESCRIPTION:
When large directory hash is enabled through the vx_dexh_sz(5M) tunable,  Veritas File System (VxFS) uses the large directory hash for directories.
When you rename a file, a new directory entry is inserted to the hash table, which results in hash split. The hash split fails the current transaction and retries after some housekeeping jobs complete. These jobs include allocating more space for the hash table. However, VxFS doesn't check the return value of the preamble job. And thus, when VxFS runs out of space, the rename transaction is re-entered permanently without knowing if more space is allocated by preamble jobs.

RESOLUTION:
The code is modified to enable VxFS to exit looping when ENOSPC is returned from the preamble job.

* 3338781 (Tracking ID: 3249958)

SYMPTOM:
When the /usr file system is mounted as a separate file system, 
Veritas File system (VxFS) fails to load.

DESCRIPTION:
There is a dependency between the two file systems, as VxFS uses a 
non-vital system utility command found in the /usr file system at boot up.

RESOLUTION:
The code is modified by changing the VxFS init script to load VxFS 
only after /usr file system is up to resolve the dependency.

* 3338787 (Tracking ID: 3261462)

SYMPTOM:
File system with size greater than 16TB corrupts with vx_mapbad messages in the system log.

DESCRIPTION:
The corruption results from the combination of the following two conditions:
a.	Two or more threads race against each other to allocate around the same offset range. As a result, VxFS returns the buffer locked only in shared mode for all the threads which fail in allocating the extent.
b.	Since the allocated extent is from a region beyond 16TB, the threads need to convert the buffer to a different type in order to accommodate the new extent's start value.

The buffer overrun happens because VxFS erroneously tries to unconditionally convert the buffer to the new type, even though the buffer might not be able to accommodate the converted data.

RESOLUTION:
When the race condition is detected, VxFS returns proper retry errors to the caller, so that the whole operation is retried from the beginning. Also, the code is modified to ensure that VxFS doesn't try to convert the buffer to the new type when it cannot accommodate the new data. In case this check fails, VxFS performs the proper split logic, so that a buffer overrun doesn't happen when the operation is retried.

* 3338790 (Tracking ID: 3233284)

SYMPTOM:
FSCK binary hangs while checking Reference Count Table (RCT) with the following stack trace:
bmap_search_typed_raw()
bmap_check_typed_raw()
rct_check()
process_device()
main()

DESCRIPTION:
The FSCK binary hangs due to the looping in the bmap_search_typed_raw() function. This function searches for extent entry in the indirect buffer for a given offset. In this case, the given offset is less than the start offset of the first extent entry. This unhandled corner case causes the infinite loop.

RESOLUTION:
The code is modified to handle the following cases:
1. Searching in empty indirect block.
2. Searching for an offset, which is less than the start offset of the first entry in the indirect block.
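
The two corner cases above can be pictured with a simplified extent lookup. The sketch below is hypothetical (the real code operates on on-disk VxFS bmap records, not this structure) and only shows the empty-block and offset-before-first-entry checks.

/*
 * Hypothetical, simplified extent lookup illustrating the two corner cases
 * above.
 */
#include <stdio.h>

struct extent {
    long long start;    /* starting file offset covered by this extent */
    long long len;      /* length of the extent */
};

/* Return the index of the extent containing 'off', or -1 if none does. */
static int find_extent(const struct extent *ext, int nent, long long off)
{
    if (nent == 0)                 /* case 1: empty indirect block */
        return -1;
    if (off < ext[0].start)        /* case 2: offset before first entry */
        return -1;

    for (int i = 0; i < nent; i++) {
        if (off >= ext[i].start && off < ext[i].start + ext[i].len)
            return i;
    }
    return -1;
}

int main(void)
{
    struct extent ext[] = { { 100, 50 }, { 200, 25 } };

    printf("offset 10  -> index %d\n", find_extent(ext, 2, 10));   /* -1 */
    printf("offset 120 -> index %d\n", find_extent(ext, 2, 120));  /*  0 */
    return 0;
}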

* 3339230 (Tracking ID: 3308673)

SYMPTOM:
With the delayed allocation feature enabled for a locally mounted file system
having highly fragmented free space, the file system is disabled with the
following message seen in the system log:
WARNING: msgcnt 1 mesg 031: V-2-31: vx_disable - /dev/vx/dsk/testdg/testvol file
system disabled

DESCRIPTION:
A VxFS transaction performs multiple extent allocations to fulfill one
allocation request for a file system that has high free space fragmentation.
Thus, the allocation transaction becomes large and fails to commit. After
retrying the transaction a defined number of times, the file system is
disabled with the above mentioned error.

RESOLUTION:
The code is modified to commit the part of the transaction which is committable and to retry the remaining part.

* 3339884 (Tracking ID: 1949445)

SYMPTOM:
System is unresponsive when files are created on large directory. The following stack is logged:

vxg_grant_sleep()                                             
vxg_cmn_lock()
vxg_api_lock()                                             
vx_glm_lock()
vx_get_ownership()                                                  
vx_exh_coverblk()  
vx_exh_split()                                                 
vx_dexh_setup() 
vx_dexh_create()                                              
vx_dexh_init() 
vx_do_create()

DESCRIPTION:
For large directories, the large directory hash (LDH) is enabled to improve lookups. When a system takes ownership of the LDH inode twice in the same thread context (while building the hash for a directory), it becomes unresponsive.

RESOLUTION:
The code is modified to avoid taking ownership again if we already have the ownership of the LDH inode.

* 3339949 (Tracking ID: 3271892)

SYMPTOM:
VFR fails if the same process ID (PID) is associated with multiple jobs working on different target file systems.

DESCRIPTION:
The single-threaded VFR jobs cause bottlenecks that result in scalability
issues. Also, they may fail if there are multiple jobs doing replication on
different file systems and associated with a single process. If one of the
jobs finishes, it may unregister all other jobs, and the replication operation
may fail.

RESOLUTION:
The code is modified to add multithreading, so that multiple jobs
can be served per thread basis. This modification allows better scalability. The
code is also modified to fix the VFR job unregister method.

* 3339963 (Tracking ID: 3071622)

SYMPTOM:
On SLES10, bcopy(3) with overlapping address does not work.

DESCRIPTION:
The bcopy() function is a deprecated API, and it fails to handle overlapping addresses.

RESOLUTION:
The code is modified to replace the bcopy() function with the memmove() function, which handles overlapping address ranges.
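
The replacement relies on standard C library behavior: memmove(3) is defined to handle overlapping source and destination ranges. The snippet below is a generic illustration of that property and is not VxFS code.

/*
 * Generic illustration: memmove(3) is defined for overlapping ranges, which
 * is why it replaces the deprecated bcopy().
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[] = "abcdefgh";

    /* Shift the first six bytes two places to the right within the buffer. */
    memmove(buf + 2, buf, 6);
    printf("%s\n", buf);    /* prints "ababcdef" */
    return 0;
}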

* 3339964 (Tracking ID: 3313756)

SYMPTOM:
The file replication daemon exits unexpectedly and dumps core on the target side, with the following stack trace:

rep_cleanup_session()
replnet_server_dropchan()
replnet_client_connstate()
replnet_conn_changestate()
replnet_conn_evalpoll()
vxev_loop()
main()

DESCRIPTION:
The replication daemon tries to close a session which is already closed, and hence dumps core while accessing a NULL pointer.

RESOLUTION:
The code is modified to check the state of the session before trying to close it.

* 3340029 (Tracking ID: 3298041)

SYMPTOM:
While performing "delayed extent allocations" by writing to a file 
sequentially and extending the file's size, or performing a mixture of 
sequential write I/O and random write I/O which extend a file's size, the write 
I/O performance to the file can suddenly degrade significantly.

DESCRIPTION:
The 'dalloc' feature allows VxFS to allocate extents (file system 
blocks) to a file in a delayed fashion when extending a file size. Asynchronous 
writes that extend a file's size will create and dirty memory pages, new 
extents can therefore be allocated when the dirty pages are flushed to disk 
(via background processing) rather than allocating the extents in the same 
context as the write I/O. However, in some cases, with the delayed allocation 
on, the flushing of dirty pages may occur synchronously in the foreground in 
the same context as the write I/O, when triggered the foreground flushing can 
significantly slow the write I/O performance.

RESOLUTION:
The code is modified to avoid the foreground flushing of data in 
the same write context.

* 3340031 (Tracking ID: 3337806)

SYMPTOM:
On Linux kernels newer than 3.0, while running the find(1) command, the kernel may panic in the link_path_walk() function with the following stack trace:

do_page_fault
page_fault
link_path_walk
path_lookupat
do_path_lookup
user_path_at_empty
vfs_fstatat
sys_newfstatat
system_call_fastpath

DESCRIPTION:
VxFS overloads the dentry flag bit at 0x1000 for internal usage. Linux did not
use this bit until kernel version 3.0. Therefore it is possible that both
Linux and VxFS use this bit, which panics the kernel.

RESOLUTION:
The code is modified not to use the 0x1000 bit in the dentry flags.

* 3348459 (Tracking ID: 3274048)

SYMPTOM:
VxFS hangs when it requests a cluster-wide grant on an inode while holding a lock on the inode.

DESCRIPTION:
In the vx_switch_ilocks_list() function, VxFS requests the cluster-wide grant on an inode when it holds a lock on the inode, which may result in a deadlock.

RESOLUTION:
The code is modified to release a lock on an inode before asking for the cluster-wide grant on the inode, and recapture lock on the inode after cluster-wide grant is obtained.

* 3351939 (Tracking ID: 3351937)

SYMPTOM:
The vfradmin(1M) command may fail when promoting a job on the locally
mounted file system due to the "relatime" mount option.

DESCRIPTION:
If /etc/mtab is a symlink to /proc/mounts, the vfradmin(1M) command fails because the mount command is unable to handle the "relatime" option returned by new Linux versions, and because the fstype "vxclonefs" used for checkpoints is not handled.

RESOLUTION:
The code is modified to suppress the "relatime" mount option while remounting the file system, and to properly handle the fstype "vxclonefs" for checkpoints.
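
The option handling can be pictured with the standard mount-table interface. The sketch below is illustrative only: it reads /proc/mounts with getmntent(3) and uses hasmntopt(3) to detect the "relatime" option that newer kernels report.

/*
 * Illustrative sketch: report which vxfs mounts in /proc/mounts carry the
 * "relatime" option that newer kernels add automatically.
 */
#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = setmntent("/proc/mounts", "r");
    if (fp == NULL) {
        perror("setmntent");
        return 1;
    }

    struct mntent *ent;
    while ((ent = getmntent(fp)) != NULL) {
        if (strcmp(ent->mnt_type, "vxfs") != 0)
            continue;
        if (hasmntopt(ent, "relatime") != NULL)
            printf("%s is mounted with relatime\n", ent->mnt_dir);
    }

    endmntent(fp);
    return 0;
}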

* 3351946 (Tracking ID: 3194635)

SYMPTOM:
An internal stress test on a locally mounted file system exited with an error message.

DESCRIPTION:
With a file having Zero-Filled on Demand (ZFOD) extents, a write operation in ZFOD extent area may lead to the coalescing of extent of type SHARED or COMPRESSED, or both with new extent of type DATA. The new DATA extent may be coalesced with the adjacent extent, if possible. If this happens without unsharing for shared extent or uncompressing for compressed extent case, data or metadata corruption may occur.

RESOLUTION:
The code is modified such that adjacent shared, compressed or pseudo-compressed extent is not coalesced.

* 3351947 (Tracking ID: 3164418)

SYMPTOM:
An internal stress test on a locally mounted VxFS file system results in data corruption in a no-space-on-device scenario while doing a split on a Zero Fill-On-Demand (ZFOD) extent.

DESCRIPTION:
When the split operation on Zero Fill-On-Demand(ZFOD) extent fails because of the ENOSPC(no space on device) error, then it erroneously processes the original ZFOD extent and returns no error. This may result in data corruption.

RESOLUTION:
The code is modified to return the ZFOD extent to its original state, if the ZFOD split operation fails due to ENOSPC error.

* 3359278 (Tracking ID: 3364290)

SYMPTOM:
The kernel may panic in Veritas File System (VxFS) when it is internally working
on reference count queue(RCQ) record.

DESCRIPTION:
The work item spawned by VxFS in the kernel to process RCQ records during an
RCQ-full situation is passed a file system pointer as an argument. Since no
active level is held, this file system pointer is not guaranteed to be valid
by the time the work item starts processing. This may result in the panic.

RESOLUTION:
The code is modified to pass externally visible file system structure, as this
structure is guaranteed to be valid since the creator of the work item takes a
reference held on the structure which is released after the workitem exits.

* 3364285 (Tracking ID: 3364282)

SYMPTOM:
The fsck(1M) command  fails to correct inode list file.

DESCRIPTION:
The fsck(1M) command fails to write metadata for the inode list file after writing an extent for the inode list file to disk, even when the write operation is successful.

RESOLUTION:
The fsck(1M) command is modified to write metadata for the inode list file after successful write operations of an extent for the inode list file.

* 3364289 (Tracking ID: 3364287)

SYMPTOM:
Debug assert may be hit in the vx_real_unshare() function in the
cluster environment.

DESCRIPTION:
The vx_extend_unshare() function wrongly looks at the offset immediately after
the current unshare length boundary. Instead, it should look at the offset that
falls on the last byte of current unshare length. This may result in hitting
debug asserts in the vx_real_unshare() function.

RESOLUTION:
The code is modified for the shared compressed extent. When the
vx_extend_unshare() function tries to extend the unshared region, it does not
look up the first byte immediately after the unshared region. Instead, it
looks up the last byte that was unshared.

* 3364302 (Tracking ID: 3364301)

SYMPTOM:
Assert failure because of improper handling of inode lock while truncating a reorg inode.

DESCRIPTION:
While truncating the reorg extent, there may be a case where an unlock on the inode is called even when the lock on the inode is not taken. While truncating the reorg inode, the held locks are released, and before acquiring them again, the code checks if the inode is a cluster inode. If true, it goes on to take the delegation hold lock. If there was an error while taking the delegation hold lock, it comes to the error code path. Here it checks if there was any transaction and if it had a tran-committable error. It commits the transaction and, on success, calls the unlock to release locks which were not held.

RESOLUTION:
The code is modified to check whether lock is taken or not before unlocking.

* 3364305 (Tracking ID: 3364303)

SYMPTOM:
Internal stress test on a locally mounted file system hits a debug assert in VxFS File Device Driver (FDD).

DESCRIPTION:
In the VxFS File Device Driver (FDD), dentry operations are explicitly set using an assignment, which causes flags related to deletion to be incorrectly populated in the dentry. This prevents the dentry from being shrunk from the cache immediately after use. As a result, a debug assert is hit.

RESOLUTION:
The code is modified to use the d_set_d_op() operating system function to initialize dentry operations. This function also takes care of the related flags inside dentry.

* 3364307 (Tracking ID: 3364306)

SYMPTOM:
Stack overflow seen in extent allocation code path.

DESCRIPTION:
Stack overflow appears in the vx_extprevfind() code path.

RESOLUTION:
The code is modified to hand-off the extent allocation to a worker thread when stack consumption reaches 4k.

* 3364317 (Tracking ID: 3364312)

SYMPTOM:
The fsadm(1M) command is unresponsive while processing the VX_FSADM_REORGLK_MSG message. The following stack trace may be seen while processing VX_FSADM_REORGLK_MSG:

vx_tranundo()
vx_do_rct_gc()
vx_rct_setup_gc()
vx_reorg_complete_gc()
vx_reorg_complete()
vx_reorg_clear_rct()
vx_reorg_clear()
vx_reorg_clear()
vx_recv_fsadm_reorglk()
vx_recv_fsadm()
vx_msg_recvreq()
vx_msg_process_thread()
vx_thread_base()

DESCRIPTION:
In the vx_do_rct_gc() function, the flag for in-directory cleanup is set for a shared indirect extent (SHR_IADDR_EXT). If the truncation fails, the vx_do_rct_gc() function does not clear the in-directory cleanup flag. As a result, the caller ends up calling the vx_do_rct_gc() function repeatedly, leading to a never-ending loop.

RESOLUTION:
The code is modified to reset the value of in-directory cleanup flag in case of truncation error inside the vx_do_rct_gc() function.

* 3364333 (Tracking ID: 3312897)

SYMPTOM:
In Cluster File System (CFS), system can hang while trying to perform any administrative operation when the primary node is disabled.

DESCRIPTION:
In CFS, when node 1 tries to do some administrative operation which freezes and thaws the file system (e.g. turning on/off fcl), a deadlock can occur between the thaw and recovery (which started due to CFS primary being disabled) threads. The thread on node 1 trying to thaw is blocked while waiting for node 2 to reply to the loadfs message. The thread processing the loadfs message is waiting to complete the recovery operation. The recovery thread on node 2 is waiting for lock on an extent map (emap) buffer. This lock is held on node 1, as part of a transaction that was committed during the freeze, which results into a deadlock.

RESOLUTION:
The code is modified such as to flush any transactions that were committed during a freeze before starting the thawing process.

* 3364335 (Tracking ID: 3331109)

SYMPTOM:
The full fsck does not repair corrupted reference count queue (RCQ) record.

DESCRIPTION:
When the RCQ record is corrupted due to an I/O error or log error, there is no code in full fsck which handles this corruption.
As a result, some further operations related to RCQ might fail.

RESOLUTION:
The code is modified to repair the corrupt RCQ entry during a full fsck.

* 3364338 (Tracking ID: 3331045)

SYMPTOM:
A kernel oops occurs in the map unlock code while referring to a freed mlink, due to a race with the iodone routine for delayed writes.

DESCRIPTION:
After async I/O is issued on the map buffer, there is a possible race between the vx_unlockmap() function and the vx_mapiodone() function. Due to this race, the vx_unlockmap() function refers to an mlink after it gets freed.

RESOLUTION:
The code is modified to handle such race condition.

* 3364349 (Tracking ID: 3359200)

SYMPTOM:
Internal test on Veritas File System (VxFS) fsdedup(1M) feature in cluster filesystem environment results in
a hang.

DESCRIPTION:
The thread which processes the fsdedup(1M) request is taking the delegation lock on extent map which itself is waiting to acquire a lock on cluster-wide reference count queue(RCQ) buffer. While other internal VxFS thread is working on RCQ takes lock on cluster-wide RCQ buffer and is waiting to acquire delegation lock on extent map causinga deadlock.

RESOLUTION:
The code is modified to correct the lock hierarchy such that the  delegation lock on extent map is taken before taking lock on cluster-wide RCQ buffer.

* 3364353 (Tracking ID: 3331047)

SYMPTOM:
Memory leak occurs in the vx_followlink() function in error condition.

DESCRIPTION:
The vx_followlink() function doesn't free the allocated buffer, which results in a memory leak.

RESOLUTION:
The code is modified to free the buffer in the vx_followlink() function in error condition.

* 3364355 (Tracking ID: 3263336)

SYMPTOM:
Internal noise test on cluster file system hits
"f:vx_cwfrz_wait:2" and "f:vx_osdep_msgprint:panic" debug asserts.

DESCRIPTION:
VxFS hits the "f:vx_cwfrz_wait:2" and "f:vx_osdep_msgprint:panic" debug asserts due to a deadlock between the work list thread and a freeze. As a result, while processing delicache items (vx_delicache_process), all of the work threads loop on the file control log (FCL) items in work list 1, and they can never reach work list 0. Thus, the delicache items are never processed, the cluster freeze never finishes, and the FCL item never gets its active level.

RESOLUTION:
The code is modified to remove the force flag judgment in  vx_delicache_process, so that the thread which enqueues delicache work item can always help in processing its own item.

* 3369037 (Tracking ID: 3349651)

SYMPTOM:
VxFS modules fail to load on RHEL6.5 and following error messages are
reported in system log.
kernel: vxfs: disagrees about version of symbol putname
kernel: vxfs: disagrees about version of symbol getname

DESCRIPTION:
In RHEL6.5 the kernel interfaces for getname and putname used by
VxFS have changed.

RESOLUTION:
Code modified to use latest definitions of getname and putname
kernel interfaces.

* 3369039 (Tracking ID: 3350804)

SYMPTOM:
On RHEL6, VxFS can sometimes report a system panic with errors such as "Thread
overruns stack" or stack corruption.

DESCRIPTION:
On RHEL6, the low-stack memory allocations consume significant memory,
especially when the system is under memory pressure and takes the page
allocator route. This breaks the earlier assumptions of the stack depth
calculations.

RESOLUTION:
The code is modified to add a check for the available stack size before doing low-stack allocations in case of RHEL6. VxFS re-tunes the various stack depth calculations for each distribution separately to minimize performance penalties.

* 3370650 (Tracking ID: 2735912)

SYMPTOM:
The performance of tier relocation for moving a large number of files is poor 
when the `fsppadm enforce' command is used.  When looking at the fsppadm(1M) 
command in the kernel, the following stack trace is observed:

vx_cfs_inofindau 
vx_findino
vx_ialloc
vx_reorg_ialloc
vx_reorg_isetup
vx_extmap_reorg
vx_reorg
vx_allocpolicy_enforce
vx_aioctl_allocpolicy
vx_aioctl_common
vx_ioctl
vx_compat_ioctl

DESCRIPTION:
For each file located in Tier 1 that is to be relocated to Tier 2, Veritas 
File System (VxFS) allocates a new reorg inode and all its extents in Tier 2. 
VxFS then swaps the content of these two files and deletes the original file. 
This new inode allocation, which involves a lot of processing, can result in 
poor performance when a large number of files are moved.

RESOLUTION:
The code is modified to develop a reorg inode pool or cache instead of 
allocating it each time.

* 3372896 (Tracking ID: 3352059)

SYMPTOM:
Due to memory leak, high memory usage occurs with vxfsrepld on target when no jobs are running.

DESCRIPTION:
On the target side, high memory usage may occur even when
there are no jobs running because the memory allocated for some structures is not freed for every job iteration.

RESOLUTION:
The code is modified to resolve the memory leaks.

* 3372909 (Tracking ID: 3274592)

SYMPTOM:
An internal noise test on Cluster File System (CFS) is unresponsive while executing the fsadm(1M) command.

DESCRIPTION:
In CFS, the fsadm(1M) command hangs in the kernel while processing the fsadm-reorganization message on a secondary node. The hang results from a race with the thread processing the fsadm-query message for mounting the primary fileset on the secondary node, where the thread processing the fsadm-query message wins the race.

RESOLUTION:
The code is modified to synchronize the processing of fsadm-query message and fsadm-reorganization message on the primary node. This synchronization ensures that they are processed in the order in which they were received.

* 3380905 (Tracking ID: 3291635)

SYMPTOM:
Internal testing found the "vx_freeze_block_threads_all:7c" debug assert on locally mounted file systems while processing preambles for transactions.

DESCRIPTION:
While processing preambles for transactions, if the reference count queue (RCQ) is full, VxFS may hamper the processing of the RCQ to free some records. This may result in hitting the debug assert.

RESOLUTION:
The code is modified to ignore the Reference count queue (RCQ) full errors when VxFS processes preambles for transactions.

* 3381928 (Tracking ID: 3444771)

SYMPTOM:
Internal noise test on cluster file system hits debug assert while
creating a file.

DESCRIPTION:
While creating a file, if the inode is allotted from the delicache list,
the security field used for SELinux is found to be null.

RESOLUTION:
The code is modified to allocate the security structure when reusing the
inode from delicache list.

* 3383150 (Tracking ID: 3383147)

SYMPTOM:
The "C" operator precedence error may occur while turning "off" delayed
allocation.

DESCRIPTION:
Due to a C operator precedence issue, VxFS evaluates a condition
incorrectly.

RESOLUTION:
The code is modified to evaluate the condition correctly.
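The exact expression is not shown in this README, but the classic C pitfall of
this kind is sketched below (the flag name and values are hypothetical): '=='
binds more tightly than '&', so a missing pair of parentheses silently changes
which condition is evaluated.

    #include <stdio.h>

    #define DALLOC_OFF 0x4                  /* hypothetical flag bit            */

    int main(void)
    {
        unsigned int flags = DALLOC_OFF;

        /* Buggy test: parsed as flags & (DALLOC_OFF == 0), which is always 0. */
        if (flags & DALLOC_OFF == 0)
            printf("never printed, even though the bit is set\n");

        /* Parentheses force the intended test on the flag bit. */
        if ((flags & DALLOC_OFF) == 0)
            printf("delayed allocation bit is clear\n");
        else
            printf("delayed allocation bit is set\n");
        return 0;
    }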

* 3383271 (Tracking ID: 3433786)

SYMPTOM:
The vxedquota(1M) command fails to set quota limits for some  users.

DESCRIPTION:
When vxedquota(1M) is invoked to set quota limits for users, it scans all the mounted VxFS file systems that have quota enabled and gets the quota file paths. The buffer meant for the user quota file path contained the group quota file path, so the records were set for groups instead of users. In addition, passing the device name instead of the mount point name leads to ioctl failure.

RESOLUTION:
The code is modified to use the correct buffer for the user quota file name and to use the mount point for all ioctls related to quota.

* 3396539 (Tracking ID: 3331093)

SYMPTOM:
The MountAgent gets stuck during repeated switchovers due to the current
VxFS-AMF notification/unregistration design, with the following stack trace:

sleep_spinunlock+0x61 ()
vx_delay2+0x1f0 ()
vx_unreg_callback_funcs_impl+0xd0 ()
disable_vxfs_api+0x190 ()
text+0x280 ()
amf_event_release+0x230 ()
amf_fs_event_lookup_notify_multi+0x2f0 ()
amf_vxfs_mount_opt_change_callback+0x190 ()
vx_aioctl_unsetmntlock+0x390 ()
cold_vx_aioctl_common+0x7c0 ()
vx_aioctl+0x300 ()
vx_admin_ioctl+0x610 ()
vxportal_ioctl+0x690 ()
spec_ioctl+0xf0 () 
vno_ioctl+0x350 ()
ioctl+0x410 ()
syscall+0x5b0 ()

DESCRIPTION:
This issue is related to VxFS-AMF interface. VxFS provides
notifications to AMF for certain events like FS being disabled or mount options
change. While VxFS has called into AMF, AMF event handling mechanism can trigger
an unregistration of VxFS in the same context since VxFS's notification
triggered the last event notification registered with AMF.

Before VxFS calls into AMF, a variable vx_fsamf_busy is set to 1 and it is reset
when the callback returns. The unregistration loops if it finds that
vx_fsamf_busy is set to 1. Since the unregistration was called from the same context
as the notification callback, vx_fsamf_busy was never reset to 0 and the loop
went on endlessly, causing the command that triggered the notification to hang.

RESOLUTION:
A delayed unregistration mechanism is employed. The fix addresses
the case where AMF requests unregistration in the context of a callback from
VxFS to AMF. In such a scenario, the unregistration is marked for a later time.
When all the notifications return, if a delayed unregistration is marked, the
unregistration routine is explicitly called.
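The following minimal C sketch illustrates the deferred-unregistration pattern
described above; the names and locking are hypothetical simplifications, not
the actual VxFS/AMF code: a callback in progress sets a busy flag, an
unregistration request that arrives during the callback is only recorded, and
the real unregistration runs after the last callback returns.

    void amf_callback(void);      /* assumed external callback into AMF        */
    void do_unregister(void);     /* assumed real unregistration routine       */

    static int busy;              /* a notification callback is in progress    */
    static int unreg_pending;     /* unregistration requested during callback  */

    void notify_amf(void)         /* locking omitted for brevity               */
    {
        busy = 1;
        amf_callback();           /* may call request_unregister() internally  */
        busy = 0;

        if (unreg_pending) {      /* perform the deferred request now          */
            unreg_pending = 0;
            do_unregister();
        }
    }

    void request_unregister(void)
    {
        if (busy)                 /* called from callback context: defer       */
            unreg_pending = 1;    /* instead of looping on the busy flag       */
        else
            do_unregister();
    }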

* 3402484 (Tracking ID: 3394803)

SYMPTOM:
The vxupgrade(1M) command causes VxFS to panic with the following stack trace:
panic_save_regs_switchstack()
panic
bad_kern_reference()
$cold_pfault()
vm_hndlr()
bubbleup()
vx_fs_upgrade()
vx_upgrade()
$cold_vx_aioctl_common()
vx_aioctl()
vx_ioctl()
vno_ioctl()
ioctl()
syscall()

DESCRIPTION:
The panic is caused by dereferencing a NULL device pointer (one
of the devices in the DEVLIST is showing as a NULL device).

RESOLUTION:
The code is modified to skip the NULL devices when the devices in the DEVLIST are
processed.

* 3405172 (Tracking ID: 3436699)

SYMPTOM:
Assert failure occurs because of race condition between clone mount thread and directory removal thread while pushing data on clone.

DESCRIPTION:
There is a race condition between the clone mount thread and the directory removal thread (while pushing modified directory data on the clone). On AIX, vnodes are added to the VFS vnode list (a linked list of vnodes). The first entry in this vnode list must be the root's vnode, which is added during the mount process. While mounting a clone, the mount thread is scheduled out before adding the root's vnode to this list. During this window, the second thread takes the VFS lock on the same VFS list and tries to enter the directory's vnode into this vnode list. As there is no root vnode present at the start of the list, this directory vnode is assumed to be the root vnode, and while cross-checking this with the VROOT flag, the assert fails.

RESOLUTION:
The code is modified to handle the race condition by attaching root vnode into VFS vnode list before setting VFS pointer into file set.

* 3411725 (Tracking ID: 3415639)

SYMPTOM:
The type of the fsdedupadm(1M) command always shows as MANUAL even when it is launched by the fsdedupschd daemon.

DESCRIPTION:
The deduplication tasks scheduled by the scheduler do not show their type
as "SCHEDULED"; instead, they show it as "MANUAL". This is because the fsdedupschd daemon, while calling fsdedup, does not set the -d flag which would set the correct status.

RESOLUTION:
The code is modified so that the flag is set properly.

* 3429587 (Tracking ID: 3463464)

SYMPTOM:
Internal kernel functionality conformance test hits a kernel panic due to null pointer dereference.

DESCRIPTION:
In the vx_fsadm_query() function, the error handling code path incorrectly sets the nodeid to "null" in the file system structure. As a result of clearing the nodeid, any subsequent access to this field results in a kernel panic.

RESOLUTION:
The code is modified to improve the error handling code path.

* 3430687 (Tracking ID: 3444775)

SYMPTOM:
Internal noise testing on Cluster File System (CFS) results in a kernel panic in the function vx_fsadm_query() with the following error message: "Unable to handle kernel paging request".

DESCRIPTION:
The issue occurs due to simultaneous asynchronous access or modification by two threads to inode list extent array. As a result, memory freed by one thread is accessed by the other thread, resulting in the panic.

RESOLUTION:
The code is modified to add relevant locks to synchronize access or modification of inode list extent array.

* 3436393 (Tracking ID: 3462694)

SYMPTOM:
The fsdedupadm(1M) command fails with error code 9 when it tries to
mount checkpoints on a cluster.

DESCRIPTION:
While mounting checkpoints, the fsdedupadm(1M) command fails to
parse the cluster mount option correctly, resulting in the mount failure.

RESOLUTION:
The code is modified to parse cluster mount options correctly in the
fsdedupadm(1M) operation.

* 3468413 (Tracking ID: 3465035)

SYMPTOM:
The VRTSvxfs and VRTSfsadv packages display incorrect "Provides" list.

DESCRIPTION:
The VRTSvxfs and VRTSfsadv packages show incorrect "Provides" entries such as libexpat, libssh2, etc. These libraries are used internally by VRTSvxfs and VRTSfsadv. Since the libraries are private copies, they are not available to non-Veritas products.

RESOLUTION:
The code is modified to disable the "Provides" list in VRTSvxfs and VRTSfsadv packages.

Patch ID: 6.0.300.300

* 3384781 (Tracking ID: 3384775)

SYMPTOM:
Installing patch 6.0.3.200 on RHEL 6.4  or earlier RHEL 6.* versions fails with ERROR: No appropriate modules 
found.
# /etc/init.d/vxfs start
ERROR: No appropriate modules found.
Error in loading module "vxfs". See documentation.
Failed to create /dev/vxportal
ERROR: Module fdd does not exist in /proc/modules
ERROR: Module vxportal does not exist in /proc/modules
ERROR: Module vxfs does not exist in /proc/modules

DESCRIPTION:
The VRTSvxfs and VRTSodm rpms ship four different sets of modules: one for RHEL 6.1 and RHEL 6.2, 
one for RHEL 6.3, one for RHEL 6.4, and one for RHEL 6.5.
However, the current patch only contains the RHEL 6.5 module. Hence, installation on earlier RHEL 6.* versions 
fails.

RESOLUTION:
A superseding patch, 6.0.3.300, will be released to include all the modules for RHEL 6.* versions; 
it will be available on SORT for download.

Patch ID: 6.0.300.200

* 3349652 (Tracking ID: 3349651)

SYMPTOM:
VxFS modules fail to load on RHEL6.5 and the following error messages are
reported in the system log.
kernel: vxfs: disagrees about version of symbol putname
kernel: vxfs: disagrees about version of symbol getname

DESCRIPTION:
In RHEL6.5 the kernel interfaces for getname and putname used by
VxFS have changed.

RESOLUTION:
Code modified to use latest definitions of getname and putname
kernel interfaces.

* 3356841 (Tracking ID: 2059611)

SYMPTOM:
The system panics due to a NULL pointer dereference while flushing the
bitmaps to the disk and the following stack trace is displayed:
...
vx_unlockmap+0x10c
vx_tflush_map+0x51c
vx_fsq_flush+0x504
vx_fsflush_fsq+0x190
vx_workitem_process+0x1c
vx_worklist_process+0x2b0
vx_worklist_thread+0x78

DESCRIPTION:
The vx_unlockmap() function unlocks a map structure of the file
system. If the map is being used, the hold count is incremented. The
vx_unlockmap() function attempts to check whether this is an empty mlink doubly
linked list. The asynchronous vx_mapiodone routine can change the link at random
even though the hold count is zero.

RESOLUTION:
The code is modified to change the evaluation rule inside the
vx_unlockmap() function, so that further evaluation can be skipped over when map
hold count is zero.

* 3356845 (Tracking ID: 3331419)

SYMPTOM:
Machine panics with the following stack trace.

 #0 [ffff883ff8fdc110] machine_kexec at ffffffff81035c0b
 #1 [ffff883ff8fdc170] crash_kexec at ffffffff810c0dd2
 #2 [ffff883ff8fdc240] oops_end at ffffffff81511680
 #3 [ffff883ff8fdc270] no_context at ffffffff81046bfb
 #4 [ffff883ff8fdc2c0] __bad_area_nosemaphore at ffffffff81046e85
 #5 [ffff883ff8fdc310] bad_area at ffffffff81046fae
 #6 [ffff883ff8fdc340] __do_page_fault at ffffffff81047760
 #7 [ffff883ff8fdc460] do_page_fault at ffffffff815135ce
 #8 [ffff883ff8fdc490] page_fault at ffffffff81510985
    [exception RIP: print_context_stack+173]
    RIP: ffffffff8100f4dd  RSP: ffff883ff8fdc548  RFLAGS: 00010006
    RAX: 00000010ffffffff  RBX: ffff883ff8fdc6d0  RCX: 0000000000002755
    RDX: 0000000000000000  RSI: 0000000000000046  RDI: 0000000000000046
    RBP: ffff883ff8fdc5a8   R8: 000000000002072c   R9: 00000000fffffffb
    R10: 0000000000000001  R11: 000000000000000c  R12: ffff883ff8fdc648
    R13: ffff883ff8fdc000  R14: ffffffff81600460  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff883ff8fdc540] print_context_stack at ffffffff8100f4d1
#10 [ffff883ff8fdc5b0] dump_trace at ffffffff8100e4a0
#11 [ffff883ff8fdc650] show_trace_log_lvl at ffffffff8100f245
#12 [ffff883ff8fdc680] show_trace at ffffffff8100f275
#13 [ffff883ff8fdc690] dump_stack at ffffffff8150d3ca
#14 [ffff883ff8fdc6d0] warn_slowpath_common at ffffffff8106e2e7
#15 [ffff883ff8fdc710] warn_slowpath_null at ffffffff8106e33a
#16 [ffff883ff8fdc720] hrtick_start_fair at ffffffff810575eb
#17 [ffff883ff8fdc750] pick_next_task_fair at ffffffff81064a00
#18 [ffff883ff8fdc7a0] schedule at ffffffff8150d908
#19 [ffff883ff8fdc860] __cond_resched at ffffffff81064d6a
#20 [ffff883ff8fdc880] _cond_resched at ffffffff8150e550
#21 [ffff883ff8fdc890] vx_nalloc_getpage_lnx at ffffffffa041afd5 [vxfs]
#22 [ffff883ff8fdca80] vx_nalloc_getpage at ffffffffa03467a3 [vxfs]
#23 [ffff883ff8fdcbf0] vx_do_getpage at ffffffffa034816b [vxfs]
#24 [ffff883ff8fdcdd0] vx_do_read_ahead at ffffffffa03f705e [vxfs]
#25 [ffff883ff8fdceb0] vx_read_ahead at ffffffffa038ed8a [vxfs]
#26 [ffff883ff8fdcfc0] vx_do_getpage at ffffffffa0347732 [vxfs]
#27 [ffff883ff8fdd1a0] vx_getpage1 at ffffffffa034865d [vxfs]
#28 [ffff883ff8fdd2f0] vx_fault at ffffffffa03d4788 [vxfs]
#29 [ffff883ff8fdd400] __do_fault at ffffffff81143194
#30 [ffff883ff8fdd490] handle_pte_fault at ffffffff81143767
#31 [ffff883ff8fdd570] handle_mm_fault at ffffffff811443fa
#32 [ffff883ff8fdd5e0] __get_user_pages at ffffffff811445fa
#33 [ffff883ff8fdd670] get_user_pages at ffffffff81144999
#34 [ffff883ff8fdd690] vx_dio_physio at ffffffffa041d812 [vxfs]
#35 [ffff883ff8fdd800] vx_dio_rdwri at ffffffffa02ed08e [vxfs]
#36 [ffff883ff8fdda20] vx_write_direct at ffffffffa044f490 [vxfs]
#37 [ffff883ff8fddaf0] vx_write1 at ffffffffa04524bf [vxfs]
#38 [ffff883ff8fddc30] vx_write_common_slow at ffffffffa0453e4b [vxfs]
#39 [ffff883ff8fddd30] vx_write_common at ffffffffa0454ea8 [vxfs]
#40 [ffff883ff8fdde00] vx_write at ffffffffa03dc3ac [vxfs]
#41 [ffff883ff8fddef0] vfs_write at ffffffff81181078
#42 [ffff883ff8fddf30] sys_pwrite64 at ffffffff81181a32
#43 [ffff883ff8fddf80] system_call_fastpath at ffffffff8100b072

DESCRIPTION:
The panic is due to the kernel referring to a corrupted thread_info structure from the
scheduler; the thread_info got corrupted by a stack overflow. While doing a direct I/O
write, user-space pages need to be pre-faulted using the __get_user_pages() code
path. This code path is very deep and can end up consuming a lot of stack space.

RESOLUTION:
The kernel stack consumption in this code path is reduced by ~400-500 bytes by
making various changes in the way pre-faulting is done.

* 3356892 (Tracking ID: 3259634)

SYMPTOM:
In CFS, each node that mounts the cluster file system has its own intent
log in the file system. A CFS with more than 4,294,967,296 file system blocks
can zero out an incorrect location resulting from an incorrect typecast. For
example, such a CFS can incorrectly zero out 65536 file system blocks at the
block offset of 1,537,474,560 (file system blocks) with an 8-KB file system
block size and an intent log with the size of 65536 file system blocks. This
issue can only occur if an intent log is located above an offset of
4,294,967,296 file system blocks. This situation can occur when you add a new
node to the cluster and mount an additional CFS secondary for the first time,
which needs to create and zero a new intent log. It can also happen if you
resize a file system or intent log and clear an intent log.

The problem occurs only with the following combinations of file system size
and FS block size (a short worked example follows the list):

1 KB block size and FS size > 4 TB
2 KB block size and FS size > 8 TB
4 KB block size and FS size > 16 TB
8 KB block size and FS size > 32 TB
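These thresholds follow directly from 32-bit arithmetic: 4,294,967,296 (2^32)
is the largest count a 32-bit block number can hold, so multiplying it by the
block size gives the file system size beyond which an intent log can sit above
that boundary. The small illustrative C program below (not the VxFS code) also
shows how a 64-bit block number silently truncates when cast to 32 bits:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t blocks = 4294967296ULL;            /* 2^32 file system blocks */
        unsigned int bsize[] = { 1024, 2048, 4096, 8192 };

        for (int i = 0; i < 4; i++) {
            uint64_t fs_bytes = blocks * bsize[i];
            printf("%u KB blocks -> issue possible above %llu TB\n",
                   bsize[i] / 1024, (unsigned long long)(fs_bytes >> 40));
        }

        /* The corruption comes from truncation of a block number: */
        uint64_t logstart  = 4294967296ULL + 12345; /* block above 2^32        */
        uint32_t truncated = (uint32_t)logstart;    /* silently becomes 12345  */
        printf("64-bit block %llu truncates to 32-bit %u\n",
               (unsigned long long)logstart, truncated);
        return 0;
    }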

For example, the message log can contain the following messages:

The full fsck flag is set on a file system with the following type of messages:
 

2013 Apr 17 14:52:22 sfsys kernel: vxfs: msgcnt 5 mesg 096: V-2-96:
vx_setfsflags - /dev/vx/dsk/sfsdg/vol1 file system fullfsck flag set - vx_ierror

2013 Apr 17 14:52:22 sfsys kernel: vxfs: msgcnt 6 mesg 017: V-2-17:
vx_attr_iget - /dev/vx/dsk/sfsdg/vol1 file system inode 13675215 marked bad 
incore

2013 Jul 17 07:41:22 sfsys kernel: vxfs: msgcnt 47 mesg 096:  V-2-96:
vx_setfsflags - /dev/vx/dsk/sfsdg/vol1 file system fullfsck  flag set - 
vx_ierror 

2013 Jul 17 07:41:22 sfsys kernel: vxfs: msgcnt 48 mesg 017:  V-2-17:
vx_dirbread - /dev/vx/dsk/sfsdg/vol1 file system inode 55010476  marked bad 
incore

DESCRIPTION:
In CFS, each node that mounts the cluster file system has its own
intent log in the file system. When an additional node mounts the file system as
a CFS secondary, CFS creates an intent log. Note that intent logs are never
removed; they are reused.

When you clear an intent log, Veritas File System (VxFS) passes an incorrect
block number to the log clearing routine, which zeros out an incorrect location.
The incorrect location might point to the file data or file system metadata. Or,
the incorrect location might be part of the file system's available free space.
This is silent corruption. If the file system metadata corrupts, VxFS can detect
the corruption when it subsequently accesses the corrupt metadata and marks the
file system for full fsck.

RESOLUTION:
The code is modified so that VxFS can pass the correct block number
to the log clearing routine.

* 3356895 (Tracking ID: 3253210)

SYMPTOM:
When the file system reaches the space limitation, it hangs with the following stack trace:
vx_svar_sleep_unlock()
default_wake_function()
wake_up()
vx_event_wait()
vx_extentalloc_handoff()
vx_te_bmap_alloc()
vx_bmap_alloc_typed()
vx_bmap_alloc()
vx_bmap()
vx_exh_allocblk()
vx_exh_splitbucket()
vx_exh_split()
vx_dopreamble()
vx_rename_tran()
vx_pd_rename()

DESCRIPTION:
When the large directory hash is enabled through the vx_dexh_sz(5M) tunable, Veritas File System (VxFS) uses the large directory hash for directories.
When you rename a file, a new directory entry is inserted into the hash table, which can result in a hash split. The hash split fails the current transaction and retries after some housekeeping jobs complete. These jobs include allocating more space for the hash table. However, VxFS does not check the return value of the preamble job. Thus, when VxFS runs out of space, the rename transaction is re-entered indefinitely without knowing whether more space was allocated by the preamble jobs.

RESOLUTION:
The code is modified to enable VxFS to exit looping when ENOSPC is returned from the preamble job.
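The fix follows a simple pattern: the transaction retry loop inspects the
preamble's return value and stops retrying on ENOSPC. A hedged C sketch with
hypothetical names (run_preamble, try_transaction, and ERETRY are illustrative,
not the VxFS identifiers):

    #include <errno.h>

    int run_preamble(void);        /* assumed: grows the hash table, may fail  */
    int try_transaction(void);     /* assumed: returns ERETRY to ask for retry */
    #define ERETRY 1000            /* hypothetical retry indicator             */

    int rename_with_hash_split(void)
    {
        int err;

        for (;;) {
            err = try_transaction();
            if (err != ERETRY)
                return err;          /* success or a hard error                */

            err = run_preamble();    /* the fix: check the preamble's result   */
            if (err == ENOSPC)
                return err;          /* no space left: stop looping            */
        }
    }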

* 3356909 (Tracking ID: 3335272)

SYMPTOM:
The mkfs (make file system) command dumps core when the log size 
provided is not aligned. The following stack trace is displayed:

(gdb) bt
#0  find_space ()
#1  place_extents ()
#2  fill_fset ()
#3  main ()
(gdb)

DESCRIPTION:
While creating a VxFS file system using the mkfs command, if the 
log size provided is not aligned properly, the placement calculations for the 
RCQ extents can go wrong and find no place for them. This leads to an illegal 
memory access of the AU bitmap and results in a core dump.

RESOLUTION:
The code is modified to place the RCQ extents in the same AU where 
log extents are allocated.

* 3357264 (Tracking ID: 3350804)

SYMPTOM:
On RHEL6, VxFS can sometimes report a system panic with errors such as "Thread
overruns stack" or "stack corrupts".

DESCRIPTION:
On RHEL6, the low-stack memory allocations consume significant
memory, especially when the system is under memory pressure and takes the page allocator route. This breaks the earlier assumptions in the stack depth calculations.

RESOLUTION:
The code is modified to check the available stack size before doing low-stack allocations on RHEL6. VxFS also re-tunes the various stack depth calculations for each distribution separately to minimize performance penalties.

* 3357278 (Tracking ID: 3340286)

SYMPTOM:
The tunable setting of dalloc_enable gets reset to a default value
after a file system is resized.

DESCRIPTION:
The file system resize operation triggers the file system re-initialization
process. 
During this process, the tunable value of dalloc_enable gets reset to the
default value instead of retaining the old tunable value.

RESOLUTION:
The code is fixed such that the old tunable value of dalloc_enable is retained.

Patch ID: 6.0.300.100

* 3100385 (Tracking ID: 3369020)

SYMPTOM:
The Veritas File System (VxFS) module fails to load in RHEL 6 Update 4
environments.

DESCRIPTION:
The module fails to load because of two kABI incompatibilities with
the RHEL 6 Update 4 environment.

RESOLUTION:
The code is modified to ensure that the VxFS module is supported in
RHEL 6 Update 4 environments.

Patch ID: 6.0.300.000

* 2912412 (Tracking ID: 2857629)

SYMPTOM:
When a new node takes over as primary for the file system, it could 
process stale shared extent records in a per-node queue.  The primary will 
detect a bad record and set the full fsck flag.  It will also disable the file 
system to prevent further corruption.

DESCRIPTION:
Every node in the cluster that adds or removes references to shared extents 
adds the shared extent records to a per-node queue.  The primary node in the 
cluster processes the records in the per-node queues and maintains reference 
counts in a global shared extent device.  In certain cases the primary node 
might process bad or stale records in the per-node queue.  Two situations under 
which bad or stale records could be processed are:
    1. Clone creation initiated from a secondary node immediately after primary 
migration to a different node.
    2. Queue wraparound on any node and takeover of the primary by a new node 
immediately afterwards.
Full fsck might not be able to rectify the file system corruption.

RESOLUTION:
Update the per node shared extent queue head and tail pointers to correct 
values 
on primary before starting processing of shared extent records.

* 2928921 (Tracking ID: 2843635)

SYMPTOM:
During VxFS internal testing, there are some failures during the reorg operation of
structural files.

DESCRIPTION:
While the reorg is in progress, the error value that is to be returned from a
certain ioctl is overwritten, which results in an incorrect error value and test
failures.

RESOLUTION:
The code is modified so that the error value is not overwritten.

* 2933290 (Tracking ID: 2756779)

SYMPTOM:
Write and read performance concerns on Cluster File System (CFS) when running 
applications that rely on POSIX file-record locking (fcntl).

DESCRIPTION:
The usage of fcntl on CFS leads to high messaging traffic across nodes thereby 
reducing the performance of readers and writers.

RESOLUTION:
The code is modified to cache the ranges that are being file-record locked on 
the node. This is tried whenever possible to avoid broadcasting of messages 
across the nodes in the cluster.

* 2933291 (Tracking ID: 2806466)

SYMPTOM:
A reclaim operation on a file system that is mounted on an LVM volume using the
fsadm(1M) command with the -R option may panic the system. And the following
stack trace is displayed:
vx_dev_strategy+0xc0() 
vx_dummy_fsvm_strategy+0x30() 
vx_ts_reclaim+0x2c0() 
vx_aioctl_common+0xfd0() 
vx_aioctl+0x2d0() 
vx_ioctl+0x180()

DESCRIPTION:
Thin reclamation supports only mounted file systems on a VxVM volume.

RESOLUTION:
The code is modified to return errors without panicking the system if the
underlying volume is LVM.

* 2933292 (Tracking ID: 2895743)

SYMPTOM:
It takes longer than usual for many Windows 7 clients to log off in 
parallel if the user profiles are stored in a Cluster File System (CFS).

DESCRIPTION:
Veritas File System (VxFS) keeps the file creation time and full ACL information 
for Samba clients in an extended attribute that is implemented via named streams. 
VxFS reads the named stream for each of the ACL objects. Reading a named stream is 
a costly operation, as it results in an open, an opendir, a lookup, and another 
open to get the fd. The VxFS function vx_nattr_open() holds the exclusive 
rwlock to read an ACL object that is stored as an extended attribute. This may 
cause heavy lock contention when many threads want the same lock. They might get 
blocked until one of the nattr_open calls releases it, which takes time since 
nattr_open is very slow.

RESOLUTION:
The code is modified so that it takes the rwlock in shared mode instead of 
exclusive mode.
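On Linux the same idea can be pictured with a reader-writer semaphore; this is
a hedged sketch only (the lock name and helpers are illustrative, not the VxFS
implementation): readers of the extended-attribute object take the lock in
shared mode so they no longer serialize each other, while writers stay exclusive.

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(nattr_rwlock);  /* illustrative lock, not the VxFS one */

    void read_acl_object(void)
    {
        down_read(&nattr_rwlock);   /* shared: many readers may proceed at once */
        /* ... look up the named stream and read the ACL object ... */
        up_read(&nattr_rwlock);
    }

    void modify_acl_object(void)
    {
        down_write(&nattr_rwlock);  /* exclusive: writers still serialize       */
        /* ... update the ACL stored in the named stream ... */
        up_write(&nattr_rwlock);
    }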

* 2933294 (Tracking ID: 2750860)

SYMPTOM:
Performance of the write operation with small request size may degrade
on a large Veritas File System (VxFS) file system. Many threads may be found
sleeping with the following stack trace:

vx_sleep_lock
vx_lockmap
vx_getemap
vx_extfind
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_uplevel
vx_searchau+0x600
vx_extentalloc_device
vx_extentalloc
vx_te_bmap_alloc
vx_bmap_alloc_typed
vx_bmap_alloc
vx_write_alloc3
vx_recv_prealloc
vx_recv_rpc
vx_msg_recvreq
vx_msg_process_thread
kthread_daemon_startup

DESCRIPTION:
A VxFS allocation unit (AU) is composed of 32768 disk blocks, and can
be expanded when it is partially allocated, or non-expanded when the AU is fully
occupied or completely unused. The extent map for a large file system with 1 KB
block size is organized as a big tree. For example, a 4-TB file system with 1 KB
file system block size can have up to 128K AUs. To find an appropriate extent,
the VxFS extent allocation algorithm first searches the expanded AUs, to avoid
causing free space fragmentation, by traversing the free extent map tree. If that
fails, it does the same with the non-expanded AUs. When there are too many
requests for small extents (less than 32768 blocks), and all the small free extents
are used up but a large number of AU-size extents (32768 blocks) are available,
the file system can run into this hang. Because no small extents are available
in the expanded AUs, VxFS looks for some larger non-expanded extents, namely
AU-size extents, which are not what VxFS wants (an expanded AU is
expected). As a result, each request walks the big extent map tree for
every AU-size extent and ends up failing. The requested
extent is eventually obtained during the second attempt for non-expanded AUs,
but the unnecessary work consumes a lot of CPU resources.

RESOLUTION:
The code is modified to optimize the free-extent-search algorithm by
skipping certain AU-size extents to reduce the overall search time.
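The AU count quoted in the description is simple arithmetic; a tiny
illustrative C program (not the VxFS code) makes it concrete:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t fs_bytes   = 4ULL << 40;        /* 4 TB file system          */
        uint64_t block_size = 1024;              /* 1 KB block size           */
        uint64_t au_blocks  = 32768;             /* disk blocks per AU        */

        uint64_t blocks = fs_bytes / block_size; /* 2^32 blocks               */
        uint64_t aus    = blocks / au_blocks;    /* 2^17 = 131072 = 128K AUs  */

        printf("%llu blocks, %llu AUs\n",
               (unsigned long long)blocks, (unsigned long long)aus);
        return 0;
    }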

* 2933296 (Tracking ID: 2923105)

SYMPTOM:
Removing the Veritas File System (VxFS) module using rmmod(8) on a system 
having heavy buffer cache usage may hang.

DESCRIPTION:
When a large number of buffers are allocated from the buffer cache, at the time 
of removing VxFS module, the process of freeing the buffers takes a long time.

RESOLUTION:
The code is modified to use an improved algorithm which stops traversing the 
free lists once it has found the free chunk. Instead, it breaks out of the 
search and frees that buffer.

* 2933309 (Tracking ID: 2858683)

SYMPTOM:
The reserve-extent attributes are changed after the vxrestore(1M) operation, 
for files that are greater than 8192 bytes.

DESCRIPTION:
A local variable that holds the number of reserve bytes is reused during the 
vxrestore(1M) operation for the subsequent VX_SETEXT ioctl call for files that 
are greater than 8 KB. As a result, the attribute information is changed.

RESOLUTION:
The code is modified to preserve the original variable value till the end of 
the function.

* 2933313 (Tracking ID: 2841059)

SYMPTOM:
The file system gets marked for a full fsck operation and the following message 
is displayed in the system log:

V-2-96: vx_setfsflags 
<volume name> file system fullfsck flag set - vx_ierror

vx_setfsflags+0xee/0x120 
vx_ierror+0x64/0x1d0 [vxfs]
vx_iremove+0x14d/0xce0 
vx_attr_iremove+0x11f/0x3e0
vx_fset_pnlct_merge+0x482/0x930
vx_lct_merge_fs+0xd1/0x120
vx_lct_merge_fs+0x0/0x120 
vx_walk_fslist+0x11e/0x1d0
vx_lct_merge+0x24/0x30 
vx_workitem_process+0x18/0x30
vx_worklist_process+0x125/0x290
vx_worklist_thread+0x0/0xc0
vx_worklist_thread+0x6d/0xc0
vx_kthread_init+0x9b/0xb0 

 V-2-17: vx_iremove_2
<volume name>: file system inode 15 marked bad incore

DESCRIPTION:
Due to a race condition, the thread tries to remove an attribute inode that has 
already been removed by another thread. Hence, the file system is marked for a 
full fsck operation and the attribute inode is marked as 'bad ondisk'.

RESOLUTION:
The code is modified to check if the attribute inode that a thread is trying to 
remove has already been removed.

* 2933330 (Tracking ID: 2773383)

SYMPTOM:
A deadlock involving two threads is observed. One thread holds the mmap semaphore
and waits for the 'irwlock', while the other holds the 'irwlock' and waits for
the 'mmap' semaphore. The following stack trace is displayed:
vx_rwlock 
vx_naio_do_work
vx_naio_worker 
vx_kthread_init 
kernel_thread.

DESCRIPTION:
The hang in 'down_read' occurs while waiting for the mmap_sem. The
thread holding the mmap_sem is waiting for the RWLOCK, which is held by one
of the threads wanting the mmap_sem; hence the deadlock. An
enhancement was made to not take the mmap_sem for CIO and mmap, but it was not
complete: the mmap_sem was still taken for native asynchronous I/O calls
when the nommapcio option was used.

RESOLUTION:
The code is modified to skip taking the mmap_sem in case of
native-io if the file has CIO advisory set.

* 2933333 (Tracking ID: 2893551)

SYMPTOM:
When the Network File System (NFS) connections experience a high load, the file 
attribute values are replaced with question mark symbols. This issue occurs 
because the ls -l command receives an EACCES error when the cached entries from 
the NFS server are deleted.

DESCRIPTION:
The Veritas File System (VxFS) uses capabilities, such as, CAP_CHOWN to 
override the default inode permissions and allow users to search for 
directories. VxFS allows users to perform the search operation even when 
the 'r' or 'x' bits are not set as permissions. 
When the nfsd file system uses these capabilities to perform a dentry reconnect 
to connect to the dentry tree, some of the Linux file systems use the 
inode_permission() function to verify if a user is authorized to perform the 
operation.

When a dentry reconnect happens on behalf of the disconnected dentries, the nfsd file 
system enables all capabilities without setting the on-wire user ID as the fsuid. 
Hence, VxFS fails to understand these capabilities and reports an error stating 
that the user does not have permissions on the directory.

RESOLUTION:
The code is modified to enable the vx_iaccess() function to check whether Linux 
capabilities are in effect before returning EACCES. This 
modification adds minimal capability support for the nfsd file system.
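A hedged sketch of this kind of check using the standard kernel capability
helpers; the function and its parameters are illustrative assumptions, not the
actual vx_iaccess() code:

    #include <linux/capability.h>
    #include <linux/errno.h>

    /* Illustrative only; the real permission path is more involved. */
    static int access_check(int mode_bits_ok, int want_exec)
    {
        if (mode_bits_ok)
            return 0;                        /* normal permission bits suffice */

        /* nfsd raises override capabilities when reconnecting dentries, so
         * honour them before reporting a failure.                            */
        if (!want_exec && capable(CAP_DAC_READ_SEARCH))
            return 0;
        if (capable(CAP_DAC_OVERRIDE))
            return 0;

        return -EACCES;
    }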

* 2933335 (Tracking ID: 2641438)

SYMPTOM:
When a system is shut down unexpectedly, on restart you may lose the 
modifications that were performed on the user namespace extended attributes 
("user").

DESCRIPTION:
The modification of a user namespace extended attribute leads to an 
asynchronous write operation. As a result, these modifications can get lost during 
an unexpected system shutdown.

RESOLUTION:
The code is modified such that the modifications performed on the user namespace 
extended attributes are made synchronous, so they are not lost across an unexpected shutdown.

* 2933571 (Tracking ID: 2417858)

SYMPTOM:
When the hard/soft quota limit is specified above 1 TB, the command 
fails and gives an error.

DESCRIPTION:
The quota records corresponding to users are stored in the external and 
internal quota files. In the external quota file, the records are structures 
with 32-bit fields, so the block limits can only be specified up to a 32-bit 
value (1 TB). This limit was insufficient in many cases.

RESOLUTION:
The code is modified to use 64-bit structures and 64-bit limit macros to let users 
have usage/limits greater than 1 TB.
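A hedged sketch of the difference, with hypothetical structure and field names
(not the actual VxFS quota records): widening the limit fields from 32 to 64
bits is what lifts the 1 TB ceiling.

    #include <stdint.h>

    /* Hypothetical record layouts, for illustration only. */
    struct quota_rec32 {
        uint32_t bhardlimit;     /* 32-bit field: limit tops out around 1 TB   */
        uint32_t bsoftlimit;
    };

    struct quota_rec64 {
        uint64_t bhardlimit;     /* 64-bit field: limits well beyond 1 TB      */
        uint64_t bsoftlimit;
    };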

* 2933729 (Tracking ID: 2611279)

SYMPTOM:
A file system with shared extents may panic with the following stack trace.
  page_fault 
  vx_overlay_bmap
  vx_bmap_lookup
  vx_bmap
  vx_local_get_sharedblkcnt 
  vx_get_sharedblkcnt 
  vx_aioctl_get_sharedblkcnt 
  vx_aioctl_common 
  mntput_no_expire 
  vx_aioctl 
  vx_ioctl

DESCRIPTION:
The mechanism that manages shared extents uses a special file. A HOLE is never
expected in this special file; if a HOLE is present, a panic may occur while
working on this file.

RESOLUTION:
The code is modified to check whether a HOLE is present in the special file.
If a HOLE is found, its processing is skipped and thus the panic is avoided.

* 2933751 (Tracking ID: 2916691)

SYMPTOM:
The fsdedup operation enters an infinite loop with the following stack:

#5 [ffff88011a24b650] vx_dioread_compare at ffffffffa05416c4 
#6 [ffff88011a24b720] vx_read_compare at ffffffffa05437a2 
#7 [ffff88011a24b760] vx_dedup_extents at ffffffffa03e9e9b 
#11 [ffff88011a24bb90] vx_do_dedup at ffffffffa03f5a41 
#12 [ffff88011a24bc40] vx_aioctl_dedup at ffffffffa03b5163

DESCRIPTION:
vx_dedup_extents() does the following to dedup two files:

1. Compare the data extents of the two files that need to be deduped.
2. Split both files' bmaps to make them share the first file's common data 
extent.
3. Free the duplicate data extent of the second file.

In step 2, during the bmap split, vx_bmap_split() might need to allocate space for
the inode's bmap to add new bmap entries, which adds an emap to this
transaction. (This condition is more likely to be hit if the dedup is run on
two large files that have interleaved duplicate/different data extents; the
files' bmaps need to be split more in this case.)

In step 3, vx_extfree1() does not support a multi-AU extent free if there is
already an emap in the same transaction.
In this case, it returns VX_ETRUNCMAX. (Please see incident e569695 for the
history of this limitation.)

VX_ETRUNCMAX is a retriable error, so vx_dedup_extents() undoes everything in
the transaction and retries from the beginning, then hits the same error again.
Thus the infinite loop.

RESOLUTION:
The code is modified so that vx_te_bmap_split() always registers a transaction
preamble for the bmap split operation in dedup, and vx_dedup_extents() performs
the preamble in a separate transaction before it retries the dedup operation.
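A hedged sketch of the changed flow with hypothetical names (bmap_split_preamble,
dedup_transaction and EAGAIN_DEDUP are illustrative, not VxFS identifiers):
instead of retrying the whole dedup and hitting VX_ETRUNCMAX again, the bmap
growth runs in its own transaction before the dedup is retried.

    /* Illustrative pseudologic in C; real VxFS transaction handling differs. */
    int bmap_split_preamble(void);   /* assumed: grows the bmap on its own     */
    int dedup_transaction(void);     /* assumed: compare/split/free extents    */
    #define EAGAIN_DEDUP 1001        /* hypothetical "undo and retry" code     */

    int dedup_extents(void)
    {
        int err;

        for (;;) {
            err = dedup_transaction();
            if (err != EAGAIN_DEDUP)
                return err;

            /* Commit the registered preamble in a separate transaction first,
             * so the retried dedup no longer mixes bmap growth with the
             * multi-AU extent free. */
            err = bmap_split_preamble();
            if (err)
                return err;
        }
    }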

* 2933822 (Tracking ID: 2624262)

SYMPTOM:
A panic is hit in the vx_bc_do_brelse() function while executing dedup 
functionality, with the following backtrace:
vx_bc_do_brelse()
vx_mixread_compare()
vx_dedup_extents()
enqueue_entity()
__alloc_pages_slowpath()
__get_free_pages()
vx_getpages()
vx_do_dedup()
vx_aioctl_dedup()
vx_aioctl_common()
vx_rwunlock()
vx_aioctl()
vx_ioctl()
vfs_ioctl()
do_vfs_ioctl()
sys_ioctl()

DESCRIPTION:
While executing the vx_mixread_compare() function in the dedup code path, an 
error is hit, due to which an allocated data structure remains uninitialized.
The panic occurs due to writing to this uninitialized data structure in 
the vx_mixread_compare() function.

RESOLUTION:
The code is modified to free the memory allocated to the data structure when 
exiting on the error path.

* 2937367 (Tracking ID: 2923867)

SYMPTOM:
An assert is hit because VX_RCQ_PROCESS_MSG has a numerically lower
priority than VX_IUPDATE_MSG.

DESCRIPTION:
When the primary is about to send a VX_IUPDATE_MSG message to the owner
of an inode about an update to the inode's non-transactional fields, it compares
the current messaging priority (for VX_RCQ_PROCESS_MSG) with the
priority of the message being sent (VX_IUPDATE_MSG) to avoid a possible deadlock.
In this case the VX_RCQ_PROCESS_MSG priority was numerically lower than 
VX_IUPDATE_MSG, so the assert was hit.

RESOLUTION:
The VX_RCQ_PROCESS_MSG priority is changed to be numerically higher
than VX_IUPDATE_MSG, thus avoiding the assert.

* 2976664 (Tracking ID: 2906018)

SYMPTOM:
In the event of a system crash, the fsck intent log is not replayed and the file 
system is marked clean. Subsequently, when the file system is mounted, the 
extended operations are not completed.

DESCRIPTION:
Only file systems that contain PNOLTs and are mounted locally (mounted 
without using 'mount -o cluster') are potentially exposed to this issue. 

The reason why fsck silently skips the intent-log replay is that each PNOLT has 
a flag to identify whether the intent log is dirty or not - in the event of a 
system crash this flag signifies whether intent-log replay is required or not. 
However, whilst the file system is mounted locally, the PNOLTs are not utilized. 
After a system crash, the fsck intent-log replay will still check for 
the flags in the PNOLTs; however, these are the wrong flags to check if the 
file system was locally mounted. The fsck intent-log replay therefore assumes 
that the intent logs are clean (because the PNOLTs are not marked dirty) and it 
therefore skips the replay of the intent log altogether.

RESOLUTION:
The code is modified such that when PNOLTs exist in the file system, VxFS will 
set the dirty flag in the CFS primary PNOLT while mounting locally. With this 
change, in the event of system crash whilst a file system is locally mounted, 
the subsequent fsck intent-log replay will correctly utilize the PNOLT 
structures and successfully replay the intent log.

* 2978227 (Tracking ID: 2857751)

SYMPTOM:
Internal testing hits the assert "f:vx_cbdnlc_enter:1a" while an upgrade is
in progress.

DESCRIPTION:
The clone/fileset should be mounted before an entry is added to
the dnlc. If the clone/fileset is not mounted and there is still an attempt to
add an entry to the dnlc, the entry is not valid.

RESOLUTION:
The code is modified to check whether the fileset is mounted before adding an entry to the dnlc.

* 2983739 (Tracking ID: 2857731)

SYMPTOM:
Internal testing hits an assert "f:vx_mapdeinit:1" while the file
system is frozen and not disabled.

DESCRIPTION:
While performing deinit for a free inode map, the delegation state
should not be set. This is actually a race between the freeze/release-delegation/fs-reinit
sequence and the processing of extops during reconfig.

RESOLUTION:
The code is modified to take the appropriate locks during extop processing so that
the file system structures remain quiesced during the switch.

* 2984589 (Tracking ID: 2977697)

SYMPTOM:
Deleting checkpoints of file systems with character special device
files viz. /dev/null using fsckptadm may panic the machine with the following
stack trace:
vx_idetach
vx_inode_deinit
vx_idrop
vx_inull_list
vx_workitem_process
vx_worklist_process
vx_worklist_thread
vx_kthread_init

DESCRIPTION:
During the checkpoint removal operation, the type of the inodes is
converted to 'pass-through inode'. During this conversion, VxFS tries to refer to the
device reference for the special file, which is invalid in the clone context,
leading to a panic.

RESOLUTION:
The code is modified to remove device reference of the special character files
during the clone removal operation thus preventing the panic.

* 2987373 (Tracking ID: 2881211)

SYMPTOM:
File ACLs are not preserved properly in checkpoints if the file has a hardlink. 
File ACLs of files without hardlinks work fine.

DESCRIPTION:
This issue is with the attribute inode. When an ACL entry is added, if it is in the
immediate area it is propagated to the clone. But when an attribute inode
is created, it is not propagated to the checkpoint. The push is missing in
the context of the attribute inode, which causes this issue.

RESOLUTION:
The code is modified to propagate the ACL entries (attribute inode case) to the clone.

* 2988749 (Tracking ID: 2821152)

SYMPTOM:
The internal stress test hits the assert "f:vx_dio_physio:4, 1" on a locally mounted file
system.

DESCRIPTION:
Two pages were getting allocated because the block size of the file system is
8 KB. In the direct I/O (dio) path, each allocated page is tested for whether it is
good for I/O (VX_GOOD_IO_PAGE), that is, its _count should be greater than zero
(page_count(pp) > 0), or the page should be compound (PageCompound(pp)), or
reserved (PageReserved(pp)). The first page passed the assert because its _count
was greater than zero, while the second page hit the assert because all three
conditions failed for it. The second page did not have _count greater than 0
because a compound page was allocated for it, and in the case of a compound page
only the head page maintains the count.

RESOLUTION:
The code is modified so that compound allocation is not done for allocations greater
than PAGESIZE; VX_BUF_KMALLOC() is used instead.
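The "good for I/O" test described above can be pictured with the standard
kernel page helpers; this is a hedged illustration only, not the VxFS
VX_GOOD_IO_PAGE macro itself:

    #include <linux/mm.h>

    /* Illustrative check resembling the test described in the incident. */
    static int page_ok_for_io(struct page *pp)
    {
        return page_count(pp) > 0 ||    /* head pages carry the reference count */
               PageCompound(pp)   ||    /* part of a compound allocation        */
               PageReserved(pp);        /* reserved pages are also acceptable   */
    }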

* 3008450 (Tracking ID: 3004466)

SYMPTOM:
Installation of 5.1SP1RP3 fails on RHEL 6.3

DESCRIPTION:
Installation of 5.1SP1RP3 fails on RHEL 6.3

RESOLUTION:
Updated the install script to handle the installation failure.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch fs-rhel6_x86_64-Patch-6.0.5.600.tar.gz to /tmp
2. Untar fs-rhel6_x86_64-Patch-6.0.5.600.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/fs-rhel6_x86_64-Patch-6.0.5.600.tar.gz
    # tar xf /tmp/fs-rhel6_x86_64-Patch-6.0.5.600.tar
3. Install the hotfix (please note that the installation of this P-Patch will cause downtime).
    # pwd 
    /tmp/hf
    # ./installFS605P6 [<host1> <host2>...]

You can also install this patch together with the 6.0.1 GA release and the 6.0.5 patch release:
    # ./installFS605P6 -base_path [<601 path>] -mr_path [<605 path>] [<host1> <host2>...]
where -mr_path should point to the 6.0.5 image directory and -base_path to the 6.0.1 image directory.

Install the patch manually:
--------------------------
#rpm -Uvh VRTSvxfs-6.0.500.600-RHEL6.x86_64.rpm


REMOVING THE PATCH
------------------
#rpm -e VRTSvxfs-6.0.500.600-RHEL6.x86_64


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE