fs-sol10_sparc-Patch-6.2.1.300

 Basic information
Release type: Patch
Release date: 2016-12-30
OS update support: None
Technote: None
Documentation: None
Popularity: 1648 viewed
Download size: 16.41 MB
Checksum: 708225452

 Applies to one or more of the following products:
File System 6.2 On Solaris 10 SPARC
Storage Foundation 6.2 On Solaris 10 SPARC
Storage Foundation Cluster File System 6.2 On Solaris 10 SPARC
Storage Foundation for Oracle RAC 6.2 On Solaris 10 SPARC
Storage Foundation for Sybase ASE CE 6.2 On Solaris 10 SPARC
Storage Foundation HA 6.2 On Solaris 10 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:       Release date
fs-sol10_sparc-Patch-6.2.1.100 (obsolete)          2015-09-02

This patch requires:                               Release date
sfha-sol10_sparc-MR-6.2.1                          2015-04-24

 Fixes the following incidents:
3657150, 3657152, 3657153, 3657156, 3657157, 3657158, 3657491, 3665980, 3665984, 3665990, 3666009, 3666010, 3677165, 3688210, 3697966, 3699953, 3703631, 3715567, 3718542, 3721458, 3725347, 3725569, 3726403, 3729111, 3729704, 3734750, 3736133, 3743913, 3751050, 3754492, 3755796, 3756002, 3769992, 3817120, 3817229, 3896150, 3896151, 3896154, 3896156, 3896160, 3896218, 3896223, 3896248, 3896249, 3896250, 3896261, 3896267, 3896269, 3896270, 3896273, 3896277, 3896281, 3896285, 3896303, 3896304, 3896306, 3896308, 3896310, 3896311, 3896312, 3896313, 3896314, 3901379, 3903583, 3905055, 3905056, 3906148, 3907038, 3907350

 Patch ID:
151230-03

Readme file
                          * * * READ ME * * *
                 * * * Symantec File System 6.2.1 * * *
                         * * * Patch 300 * * *
                         Patch Date: 2016-12-23


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
Symantec File System 6.2.1 Patch 300


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 10 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxfs


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec File System 6.2
   * Symantec Storage Foundation 6.2
   * Symantec Storage Foundation Cluster File System HA 6.2
   * Symantec Storage Foundation for Oracle RAC 6.2
   * Symantec Storage Foundation HA 6.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 151230-03
* 3734750 (3608239) System panics when deinitializing voprwlock in Solaris.
* 3817229 (3762174) fsfreeze and vxdump commands may not work together.
* 3896150 (3833816) Read returns stale data on one node of the CFS.
* 3896151 (3827491) Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.
* 3896154 (1428611) 'vxcompress' can spew many GLM block lock messages over the 
LLT network.
* 3896156 (3633683) vxfs thread consumes high CPU while running an application that makes excessive sync() calls.
* 3896160 (3808033) When using 6.2.1 ODM on RHEL7, Oracle resource cannot be killed after forced umount via VCS.
* 3896218 (3751049) The umountall operation fails on Solaris.
* 3896223 (3735697) vxrepquota reports errors.
* 3896248 (3876223) Truncate(fcntl F_FREESP*) on newly created file doesn't update 
time stamp.
* 3896249 (3861713) High %sys CPU seen on Large CPU/Memory configurations.
* 3896250 (3870832) Panic due to a race between force umount and nfs lock manager vnode get operation.
* 3896261 (3855726) Panic in vx_prot_unregister_all().
* 3896267 (3861271) Missing an inode clear operation when a Linux inode is being de-initialized on
SLES11.
* 3896269 (3879310) File System may get corrupted after a failed vxupgrade.
* 3896270 (3707662) Race between reorg processing and fsadm timer thread (alarm expiry) leads to panic in vx_reorg_emap.
* 3896273 (3558087) The ls -l and other commands which use the stat system call may take a long time to complete.
* 3896277 (3691633) Remove RCQ Full messages
* 3896281 (3830300) Degraded CPU performance during backup of Oracle archive logs
on CFS vs local filesystem
* 3896285 (3757609) CPU usage going high because of contention over ODM_IO_LOCK
* 3896303 (3762125) Directory size increases abnormally.
* 3896304 (3846521) "cp -p" fails if the modification time in nanoseconds has 10 digits.
* 3896306 (3790721) High cpu usage caused by vx_send_bcastgetemapmsg_remaus
* 3896308 (3695367) Unable to remove volume from multi-volume VxFS using "fsvoladm" command.
* 3896310 (3859032) System panics in vx_tflush_map() due to NULL pointer 
de-reference.
* 3896311 (3779916) vxfsconvert fails to upgrade the layout version for a vxfs file system with a large number of inodes.
* 3896312 (3811849) On cluster file system (CFS), while executing lookup() function in a directory
with Large Directory Hash (LDH), the system panics and displays an error.
* 3896313 (3817734) The direct command to run fsck with the -y|Y option was mentioned in the message displayed to the user when a file system mount fails.
* 3896314 (3856363) Filesystem inodes have incorrect blocks.
* 3901379 (3897793) Panic happens because of a race where the mntlock ID is cleared while the mntlock flag is still set.
* 3903583 (3905607) Internal assert failed during migration.
* 3905055 (3880113) Mapbad scenario in case of deletion of cloned files having shared ZFOD extents
* 3905056 (3879761) Performance issue observed due to contention on vxfs spin lock
vx_worklist_lk.
* 3906148 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3907038 (3879799) Due to inconsistent LCT (Link Count Table), Veritas File System (VxFS) mount
prompts for full fsck every time.
* 3907350 (3817734) The direct command to run fsck with the -y|Y option was mentioned in the message displayed to the user when a file system mount fails.
Patch ID: 151230-02
* 3754492 (3761603) Internal assert failure because of invalid extop processing 
at the mount time.
* 3756002 (3764824) Internal cluster file system (CFS) testing hit a debug assert.
* 3769992 (3729158) Deadlock due to incorrect locking order between write advise
and dalloc flusher thread.
* 3817120 (3804400) VRTS/bin/cp does not return any error when quota hard 
limit is reached and partial write is encountered.
Patch ID: 151230-01
* 3657150 (3604071) High CPU usage consumed by the vxfs thread process.
* 3657152 (3602322) Panic while flushing the dirty pages of the inode
* 3657153 (3622323) Cluster Filesystem mounted as read-only panics when it gets sharing and/or compression statistics with the fsadm_vxfs(1M) command.
* 3657156 (3604750) The kernel loops during the extent re-org.
* 3657157 (3617191) Checkpoint creation takes a lot of time.
* 3657158 (3601943) Truncating corrupted block map of a file may lead to an infinite loop.
* 3657491 (3657482) Stress test on cluster file system fails due to data corruption
* 3665980 (2059611) The system panics due to a NULL pointer dereference while
flushing bitmaps to the disk.
* 3665984 (2439261) When the vx_fiostats_tunable value is changed from zero to
non-zero, the system panics.
* 3665990 (3567027) During the File System resize operation, the fullfsck flag is set.
* 3666009 (3647749) On Solaris, an obsolete v_path is created for the VxFS vnode.
* 3666010 (3233276) With a large file system, primary to secondary migration takes longer duration.
* 3677165 (2560032) System panics after SFHA is upgraded from 5.1SP1 to 
5.1SP1RP2 or from 6.0.1 to 6.0.5
* 3688210 (3689104) The module version of the vxcafs module is not displayed after the modinfo vxcafs command is run on Solaris.
* 3697966 (3697964) The vxupgrade(1M) command fails to retain the fs_flags after upgrading a file system.
* 3699953 (3702136) Link Count Table (LCT) corruption is observed while mounting a file system on a secondary node.
* 3715567 (3715566) VxFS fails to report an error when the maxlink and nomaxlink options are set on file systems having disk layout version (DLV) lower than 10.
* 3718542 (3269553) VxFS returns inappropriate message for read of hole via Oracle Disk Manager (ODM).
* 3721458 (3721466) After a file system is upgraded from version 6 to 7, the vxupgrade(1M) command fails to set the VX_SINGLEDEV flag on a superblock.
* 3725347 (3725346) Trimming of the underlying SSD volume was not supported for AIX and Solaris using the "fsadm -R -o ssd" command.
* 3725569 (3731678) During an internal test, a debug assert was observed while handling the error scenario.
* 3726403 (3739618) sfcache command with "-i" option may not show filesystem cache statistics periodically.
* 3729111 (3729104) Man page changes missing for the smartiomode option of mount_vxfs(1M).
* 3729704 (3719523) 'vxupgrade' retains the superblock replica of old layout versions.
* 3736133 (3736772) The sfcache(1M) command does not automatically enable write-back caching on a file system once the cache size is increased to enable write-back caching.
* 3743913 (3743912) Users could create sub-directories more than 64K for disk layouts having versions lower than 10.
* 3755796 (3756750) VxFS may leak memory when File Design Driver (FDD) module is unloaded before the cache file system is taken offline.
Patch ID: 151773-01
* 3703631 (3615043) Data loss when writing to a file while dalloc is on.
* 3751050 (3751049) umountall failed on Solaris.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 151230-03

* 3734750 (Tracking ID: 3608239)

SYMPTOM:
System panics when deinitializing voprwlock in Solaris.

DESCRIPTION:
On earlier Solaris releases, it was mandatory to implement the vnode rwlock 
operation (voprwlock), which also exposed this lock to other modules through 
VOP_RWLOCK. If a module took this lock and missed the corresponding unlock, the 
system could panic when the voprwlock was deinitialized. With the latest Solaris 
release, the implementation of this lock is optional, so the lock can be removed. 
This reduces extra locking and, in turn, improves performance.

RESOLUTION:
The code is modified to remove the voprwlock implementation.

* 3817229 (Tracking ID: 3762174)

SYMPTOM:
When fsfreeze is used together with vxdump, the fsfreeze command times out and the vxdump command fails.

DESCRIPTION:
The vxdump command may try to read the mount list file to get information about the corresponding mount points. This takes a file system active level in order to synchronize with file system reinit. In the fsfreeze case, taking the active level can never succeed because the file system is already frozen, so this causes a deadlock and finally results in the fsfreeze timeout.

RESOLUTION:
Do not use the fsfreeze and vxdump commands together.

* 3896150 (Tracking ID: 3833816)

SYMPTOM:
In a CFS cluster, one node returns stale data.

DESCRIPTION:
In a 2-node CFS cluster, when node 1 opens the file and writes to
it, the locks are used with CFS_MASTERLESS flag set. But when node 2 tries to
open the file and write to it, the locks on node 1 are normalized as part of
HLOCK revoke. But after the Hlock revoke on node 1, when node 2 takes the PG
Lock grant to write, there is no PG lock revoke on node 1, so the dirty pages on
node 1 are not flushed and invalidated. The problem results in reads returning
stale data on node 1.

RESOLUTION:
The code is modified to cache the PG lock before normalizing it in
vx_hlock_putdata, so that after the normalization the cache grant remains with
node 1. When node 2 requests the PG lock, there is a revoke on node 1 which
flushes and invalidates the pages.

* 3896151 (Tracking ID: 3827491)

SYMPTOM:
Data relocation is not executed correctly if the IOTEMP policy is set to AVERAGE.

DESCRIPTION:
Database table is not created correctly which results in an error on the database query. This affects the relocation policy of data and the files are not relocated properly.

RESOLUTION:
The code is modified to fix the database table creation issue. The relocation policy-based calculations are now done correctly.

* 3896154 (Tracking ID: 1428611)

SYMPTOM:
'vxcompress' command can cause many GLM block lock messages to be 
sent over the network. This can be observed with 'glmstat -m' output under the 
section "proxy recv", as shown in the example below -

bash-3.2# glmstat -m
         message     all      rw       g      pg       h     buf     oth    loop
master send:
           GRANT     194       0       0       0       2       0     192      98
          REVOKE     192       0       0       0       0       0     192      96
        subtotal     386       0       0       0       2       0     384     194

master recv:
            LOCK     193       0       0       0       2       0     191      98
         RELEASE     192       0       0       0       0       0     192      96
        subtotal     385       0       0       0       2       0     383     194

    master total     771       0       0       0       4       0     767     388

proxy send:
            LOCK      98       0       0       0       2       0      96      98
         RELEASE      96       0       0       0       0       0      96      96
      BLOCK_LOCK    2560       0       0       0       0    2560       0       0
   BLOCK_RELEASE    2560       0       0       0       0    2560       0       0
        subtotal    5314       0       0       0       2    5120     192     194

DESCRIPTION:
'vxcompress' creates placeholder inodes (called IFEMR inodes) to hold the 
compressed data of files. After the compression finishes, the IFEMR inodes 
exchange their bmap with the original files and are later given to inactive 
processing. Inactive processing truncates the IFEMR extents (the original 
extents of the regular file, which is now compressed) by sending cluster-wide 
buffer invalidation requests. These invalidations need the GLM block lock. 
Regular file data need not be invalidated across the cluster, making these GLM 
block lock requests unnecessary.

RESOLUTION:
Pertinent code has been modified to skip the invalidation for the 
IFEMR inodes created during compression.

* 3896156 (Tracking ID: 3633683)

SYMPTOM:
"top" command output shows vxfs thread consuming high CPU while 
running an application that makes excessive sync() calls.

DESCRIPTION:
To process sync() system call vxfs scans through inode cache 
which is a costly operation. If an user application is issuing excessive 
sync() calls and there are vxfs file systems mounted, this can make vxfs 
sync 
processing thread to consume high CPU.

RESOLUTION:
All the sync() requests issued in the last 60 seconds are combined into a 
single request.
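
The coalescing can be sketched in user space as below. This is a minimal
sketch only: the names are illustrative rather than actual VxFS symbols,
and real sync processing must still preserve durability guarantees.

    #include <stdio.h>
    #include <time.h>

    static time_t last_scan;

    /* If a full inode-cache scan ran within the last 60 seconds,
     * piggyback on it instead of starting another one. */
    static void sync_sketch(void)
    {
        time_t now = time(NULL);

        if (now - last_scan < 60)
            return;             /* coalesced with a recent request */
        last_scan = now;
        /* ... expensive walk of the inode cache would happen here ... */
        printf("scanning inode cache\n");
    }

    int main(void)
    {
        sync_sketch();          /* performs the scan */
        sync_sketch();          /* within 60s: coalesced, no second scan */
        return 0;
    }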

* 3896160 (Tracking ID: 3808033)

SYMPTOM:
After a service group is set offline via VOM or VCS, the Oracle process is left in an unkillable state.

DESCRIPTION:
Whenever ODM issues an async request to FDD, FDD is required to do iodone processing on it, regardless of how far the request gets. A forced unmount causes FDD to take one of the early error branches, which misses the iodone routine for that async request. From ODM's perspective the request is submitted, but iodone will never be called. This has several bad consequences, one of which is that a user thread waiting on the request is blocked uninterruptibly forever.

RESOLUTION:
The code is modified to call the iodone routine in the error handling code.
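
The corrected contract can be illustrated with a small user-space analogue.
The names async_req, r_iodone and submit_async are hypothetical, not the
actual FDD/ODM symbols; the point is that every exit from the submit path,
including early error branches, must run the completion routine so a
waiting thread is woken.

    #include <stdio.h>

    struct async_req {
        int r_error;
        void (*r_iodone)(struct async_req *);
    };

    static void odm_iodone(struct async_req *req)
    {
        /* In the kernel this wakes the thread waiting on the request. */
        printf("iodone: error=%d\n", req->r_error);
    }

    static int submit_async(struct async_req *req, int early_error)
    {
        if (early_error) {
            req->r_error = early_error;
            req->r_iodone(req);  /* the call the patch adds: complete the
                                  * request even on the error branch */
            return early_error;
        }
        req->r_error = 0;
        req->r_iodone(req);      /* normal completion path */
        return 0;
    }

    int main(void)
    {
        struct async_req req = { 0, odm_iodone };
        submit_async(&req, 5);   /* error path still signals completion */
        return 0;
    }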

* 3896218 (Tracking ID: 3751049)

SYMPTOM:
The umountall operation fails on Solaris with error "V-3-20358: cannot open mnttab"

DESCRIPTION:
On Solaris, fopen() normally returns an EMFILE error for 32-bit applications if it attempts to associate a stream with a file accessed by a file descriptor with a value greater than 255. When umountall is used to unmount more than 256 file systems, the command forks child processes and opens more than 256 file descriptors at the same time, crossing this limit and causing the operation to fail.

RESOLUTION:
Use "F" mode in fopen call to avoid the 256 file descriptor limitation.

* 3896223 (Tracking ID: 3735697)

SYMPTOM:
vxrepquota reports errors like the following:
# vxrepquota -u /vx/fs1
UX:vxfs vxrepquota: ERROR: V-3-20002: Cannot access 
/dev/vx/dsk/sfsdg/fs1:ckpt1: 
No such file or directory
UX:vxfs vxrepquota: ERROR: V-3-24996: Unable to get disk layout version

DESCRIPTION:
vxrepquota checks each mount point entry in the mounted file system 
table. If any checkpoint mount point entry is present before the mount point 
specified in the vxrepquota command, vxrepquota reports errors, although the 
command can still succeed.

RESOLUTION:
Checkpoint mount points are now skipped in the mounted file system table.

* 3896248 (Tracking ID: 3876223)

SYMPTOM:
Truncate (fcntl F_FREESP*) on a newly created file doesn't update the time 
stamp.

DESCRIPTION:
On Solaris, F_FREESP64 (truncate) does nothing and simply returns if the 
"truncate from" offset matches the size of the file.

RESOLUTION:
The code is modified to update the mtime and ctime of the file in the above scenario.
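
The scenario can be reproduced on Solaris roughly as below (the file name
is illustrative). Before the fix, an F_FREESP whose offset equals the
current file size returned without touching the timestamps:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("newfile", O_CREAT | O_RDWR, 0644);
        struct flock fl = { 0 };

        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;     /* "truncate from" offset == current size */
        fl.l_len    = 0;     /* zero length means "to end of file"     */
        if (fd < 0 || fcntl(fd, F_FREESP, &fl) == -1) {
            perror("F_FREESP");
            return 1;
        }
        /* With the fix, mtime/ctime are updated even though the offset
         * matched the file size and no blocks were actually freed. */
        close(fd);
        return 0;
    }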

* 3896249 (Tracking ID: 3861713)

SYMPTOM:
Contention observed on vx_sched_lk and vx_worklist_lk spinlock when profiled using lockstats.

DESCRIPTION:
Internal worker threads take a lock to sleep on a CV while waiting for work.
This lock is global. On systems with many CPUs and many worker threads,
contention can be seen on vx_sched_lk and vx_worklist_lk in lockstat output,
along with increased %sys CPU.

RESOLUTION:
The lock is made more scalable in large CPU configurations.
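
One common way to make such a global lock scalable is to split it into
hashed buckets, one per group of workers. The sketch below shows the
general pattern only and is not the actual VxFS change:

    #include <pthread.h>
    #include <stdio.h>

    #define NBUCKETS 64   /* assumption: sized relative to the CPU count */

    static pthread_mutex_t work_lk[NBUCKETS];

    static void locks_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&work_lk[i], NULL);
    }

    /* Hash each worker onto its own bucket instead of funneling all
     * workers through one global lock. */
    static pthread_mutex_t *lock_for(unsigned worker_id)
    {
        return &work_lk[worker_id % NBUCKETS];
    }

    int main(void)
    {
        locks_init();
        pthread_mutex_lock(lock_for(7));
        /* ... sleep on / wake a per-bucket condition variable ... */
        pthread_mutex_unlock(lock_for(7));
        printf("per-bucket locks ready\n");
        return 0;
    }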

* 3896250 (Tracking ID: 3870832)

SYMPTOM:
System panics due to a race between force umount and the nfs lock manager
thread trying to get a vnode, with the stack as below:

vx_active_common_flush
vx_do_vget
vx_vget
fsop_vget
lm_nfs3_fhtovp
lm_get_vnode
lm_unlock
lm_nlm4_dispatch
svc_getreq
svc_run
svc_do_run
nfssys

DESCRIPTION:
When the nfs mounted filesystem is unshared and force unmounted, 
if
there is a file that was locked from the nfs client before that, there could be 
a
panic. In nfs3 the unshare does not clear the existing locks or clear/kill the
lock manager threads, so when the force umount wins the race, it would go and 
free
the vx_fsext and vx_vfs structures. Later when the lockmanager threads try to 
get
the vnode of this force unmounted filesystem it panics on the vx_fsext 
structure
that is freed.

RESOLUTION:
The code is modified to mark the Solaris vfs with the VFS_UNMOUNTED flag 
during a force umount. This flag is later checked in the vx_vget function 
when the lock manager thread comes to get a vnode; if the flag is set, an 
error is returned.
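
The added check can be pictured with a user-space analogue. Only
VFS_UNMOUNTED and vx_vget are names from the fix itself; the flag value
and structure below are illustrative:

    #include <errno.h>
    #include <stdio.h>

    #define VFS_UNMOUNTED 0x100   /* illustrative value; the real flag is
                                   * defined by the Solaris kernel */

    struct vfs_sketch {
        unsigned vfs_flag;
    };

    /* Analogue of the check added to vx_vget(): refuse to hand out a
     * vnode once a forced unmount has marked the vfs, rather than
     * dereferencing the freed vx_fsext structure. */
    static int vget_sketch(struct vfs_sketch *vfsp)
    {
        if (vfsp->vfs_flag & VFS_UNMOUNTED)
            return EIO;
        return 0;   /* ... normal vnode lookup would continue ... */
    }

    int main(void)
    {
        struct vfs_sketch v = { VFS_UNMOUNTED };
        printf("vget after forced umount -> %d\n", vget_sketch(&v));
        return 0;
    }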

* 3896261 (Tracking ID: 3855726)

SYMPTOM:
Panic happens in vx_prot_unregister_all(). The stack looks like this:

- vx_prot_unregister_all
- vxportalclose
- __fput
- fput
- filp_close
- sys_close
- system_call_fastpath

DESCRIPTION:
The panic is caused by a NULL fileset pointer, which is due to referencing the
fileset before it is loaded; in addition, there is a race on the fileset
identity array.

RESOLUTION:
Skip the fileset if it's not loaded yet. Add the identity array lock to prevent
the possible race.

* 3896267 (Tracking ID: 3861271)

SYMPTOM:
Due to the missing inode clear action, a page can be left in an inconsistent
state. The inode is also not fully quiescent, which leads to races in the inode
code and can sometimes cause a panic from iput_final().

DESCRIPTION:
An inode clear operation is missing when a Linux inode is deinitialized on
SLES11.

RESOLUTION:
Add the inode clear operation on SLES11.

* 3896269 (Tracking ID: 3879310)

SYMPTOM:
The file system may get corrupted after the file system freeze during
vxupgrade. The full fsck gives the following errors:

UX:vxfs fsck: ERROR: V-3-20451: No valid device inodes found
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate

DESCRIPTION:
vxupgrade requires the file system to be frozen during its operation.
Corruption may be detected while the freeze is in progress, setting the full
fsck flag on the file system; however, this does not stop vxupgrade from
proceeding. At a later stage of vxupgrade, after the structures related to the
new disk layout are updated on disk, vxfs frees and zeroes out some of the old
metadata inodes. If an error occurs after this point (because full fsck is
set), the file system needs to go back to the previous version at the time of
the full fsck. Since the metadata corresponding to the previous version is
already cleared, the full fsck cannot proceed and gives the errors above.

RESOLUTION:
Check for the full fsck flag after freezing the file system during
vxupgrade. Also, disable the file system if an error occurs after writing the 
new metadata on disk. This forces the newly written metadata to be loaded in 
memory on the next mount.

* 3896270 (Tracking ID: 3707662)

SYMPTOM:
Race between reorg processing and the fsadm timer thread (alarm expiry) leads to panic in vx_reorg_emap with the following stack:

vx_iunlock
vx_reorg_iunlock_rct_reorg
vx_reorg_emap
vx_extmap_reorg
vx_reorg
vx_aioctl_full
vx_aioctl_common
vx_aioctl
vx_ioctl
fop_ioctl
ioctl

DESCRIPTION:
When the timer expires (fsadm with the -t option), vx_do_close() calls vx_reorg_clear() on the local mount, which performs cleanup on the reorg rct inode. Another thread currently active in vx_reorg_emap() then panics due to a NULL pointer dereference.

RESOLUTION:
When fop_close is called in alarm handler context, the cleanup is deferred until the kernel thread performing the reorg completes its operation.

* 3896273 (Tracking ID: 3558087)

SYMPTOM:
When the stat system call is executed on a VxFS file system with the delayed
allocation feature enabled, it may take a long time or cause high CPU
consumption.

DESCRIPTION:
When the delayed allocation (dalloc) feature is turned on, the flushing
process takes much time. The process keeps the get page lock held and needs
writers to keep the inode reader-writer lock held, so the stat system call may
keep waiting for the inode reader-writer lock.

RESOLUTION:
Delayed allocation code is redesigned to keep the get page lock
unlocked while flushing.

* 3896277 (Tracking ID: 3691633)

SYMPTOM:
Remove RCQ Full messages

DESCRIPTION:
Too many unnecessary RCQ Full messages were being logged in the system log.

RESOLUTION:
The RCQ Full messages are removed from the code.

* 3896281 (Tracking ID: 3830300)

SYMPTOM:
Heavy CPU usage while Oracle archive processes are running on a clustered
file system.

DESCRIPTION:
The poor read performance in this case is caused by fragmentation, which
mainly happens when multiple archivers run on the same node. The allocation
pattern of the Oracle archiver processes is:

1. Write the header with O_SYNC.
2. ftruncate the file up to its final size (a few GBs, typically).
3. Issue lio_listio with 1MB iocbs.

The problem occurs because all allocations done in this manner are internal
allocations, i.e. allocations below the file size instead of past the file
size. Internal allocations are done at most 8 pages at a time, so if multiple
processes do this they each get these 8 pages alternately and the file system
becomes very fragmented.

RESOLUTION:
A tunable is added which allocates ZFOD extents when ftruncate grows the
file, instead of creating a hole. This eliminates the allocations internal to
the file size and thus the fragmentation (the write pattern is sketched after
this entry). The earlier implementation of this fix, which ran into locking
issues, is corrected, as is a performance issue when writing from the
secondary node.
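
For reference, the three-step archiver pattern from the description looks
roughly like this in POSIX AIO terms (file name and sizes are illustrative).
With the new tunable, step 2 allocates ZFOD extents rather than leaving a
hole, so the 1MB writes in step 3 no longer trigger small internal
allocations:

    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[1024 * 1024];           /* one 1MB iocb (step 3) */
        char hdr[512] = "log header";
        int fd = open("archlog", O_CREAT | O_WRONLY | O_SYNC, 0644);

        if (fd < 0) { perror("open"); return 1; }
        write(fd, hdr, sizeof(hdr));            /* 1. header with O_SYNC */
        ftruncate(fd, 1024L * 1024 * 1024);     /* 2. truncate up to the
                                                 *    final size */

        struct aiocb cb;
        struct aiocb *list[1] = { &cb };
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes     = fd;
        cb.aio_buf        = buf;
        cb.aio_nbytes     = sizeof(buf);
        cb.aio_offset     = sizeof(hdr);
        cb.aio_lio_opcode = LIO_WRITE;
        if (lio_listio(LIO_WAIT, list, 1, NULL) == -1)  /* 3. 1MB iocbs */
            perror("lio_listio");
        close(fd);
        return 0;
    }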

* 3896285 (Tracking ID: 3757609)

SYMPTOM:
High CPU usage because of contention over ODM_IO_LOCK

DESCRIPTION:
While performing ODM IO, ODM_IO_LOCK is taken to update some of the ODM
counters. This leads to contention when multiple iodones try to update these
counters at the same time, resulting in high CPU usage.

RESOLUTION:
The code is modified to remove the lock contention.

* 3896303 (Tracking ID: 3762125)

SYMPTOM:
Directory size sometimes keeps increasing even though the number of files inside it doesn't 
increase.

DESCRIPTION:
This only happens on CFS. A variable in the directory inode structure marks 
the start of the directory free space, but when the directory ownership 
changes the variable may become stale, which causes this issue.

RESOLUTION:
The code is modified to reset this free space marking variable when there is 
an ownership change. The space search then starts from the beginning of the 
directory inode.

* 3896304 (Tracking ID: 3846521)

SYMPTOM:
"cp -p" fails with EINVAL for files with a 10-digit modification time. 
EINVAL is returned if the value in the tv_nsec field is outside the range 
0 to 999,999,999. VxFS updates times in usec, but when copying in user 
space the usec value is converted to nsec. In this case the usec value 
has crossed its upper boundary of 999,999.

DESCRIPTION:
In a cluster, it is possible that time differs across nodes. When updating 
mtime, VxFS checks whether the inode is a cluster inode; if the other node's 
mtime is newer than the current node's time, it increments tv_usec instead 
of moving mtime back to an older value. The tv_usec counter can overflow 
here, which results in a 10-digit mtime.tv_nsec.

RESOLUTION:
The code is modified to reset the usec counter for mtime/atime/ctime when 
the upper boundary of 999,999 is reached.
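
The boundary handling can be sketched as below. Whether the counter is
clamped or wrapped at the boundary is an internal detail of the patch, so
the clamp here is an assumption:

    #include <stdio.h>
    #include <sys/time.h>

    /* Keep tv_usec within 0..999,999 so a later usec-to-nsec conversion
     * (multiply by 1000) stays within 0..999,999,999. */
    static void reset_usec(struct timeval *tv)
    {
        if (tv->tv_usec > 999999)
            tv->tv_usec = 999999;   /* assumption: clamp at the limit */
    }

    int main(void)
    {
        struct timeval tv = { 0, 1000123 };   /* overflowed usec counter */

        reset_usec(&tv);
        printf("tv_nsec after conversion: %ld\n", (long)tv.tv_usec * 1000L);
        return 0;
    }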

* 3896306 (Tracking ID: 3790721)

SYMPTOM:
High CPU usage on the vxfs thread process. The backtrace of such threads
usually looks like this:

schedule
schedule_timeout
__down
down
vx_send_bcastgetemapmsg_remaus
vx_send_bcastgetemapmsg
vx_recv_getemapmsg
vx_recvdele
vx_msg_recvreq
vx_msg_process_thread
vx_kthread_init
kernel_thread

DESCRIPTION:
The locking mechanism in vx_send_bcastgetemapmsg_process() is inefficient:
every call performs a series of down-up operations on a certain semaphore.
This results in a huge CPU cost when multiple threads contend on the
semaphore.

RESOLUTION:
The locking mechanism in vx_send_bcastgetemapmsg_process() is optimized so
that it performs the down-up operation on the semaphore only once.
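
The before/after locking pattern, sketched with POSIX semaphores in user
space (illustrative only, not the kernel code):

    #include <semaphore.h>
    #include <stdio.h>

    static sem_t emap_sem;

    /* Old pattern: a down/up pair per message, causing heavy contention
     * when many threads broadcast at once. */
    static void broadcast_per_msg(int nmsgs)
    {
        for (int i = 0; i < nmsgs; i++) {
            sem_wait(&emap_sem);
            /* ... send one getemap message ... */
            sem_post(&emap_sem);
        }
    }

    /* New pattern: one down/up pair around the whole batch. */
    static void broadcast_batched(int nmsgs)
    {
        sem_wait(&emap_sem);
        for (int i = 0; i < nmsgs; i++) {
            /* ... send one getemap message ... */
        }
        sem_post(&emap_sem);
    }

    int main(void)
    {
        sem_init(&emap_sem, 0, 1);
        broadcast_per_msg(4);
        broadcast_batched(4);
        sem_destroy(&emap_sem);
        printf("done\n");
        return 0;
    }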

* 3896308 (Tracking ID: 3695367)

SYMPTOM:
Unable to remove volume from multi-volume VxFS using "fsvoladm" command. It fails with "Invalid argument" error.

DESCRIPTION:
Volumes were not being added to the in-core volume list structure correctly, so removing a volume from a multi-volume VxFS using the "fsvoladm" command fails.

RESOLUTION:
The code is modified to add volumes in the in-core volume list structure correctly.

* 3896310 (Tracking ID: 3859032)

SYMPTOM:
System panics in vx_tflush_map() due to NULL pointer dereference.

DESCRIPTION:
When converting a file system to VxFS using vxconvert, new blocks are 
allocated to structural files such as the smap; these can contain garbage, 
with the expectation that fsck will rebuild a correct smap. However, fsck 
failed to distinguish between EAUs that are fully EXPANDED and those that 
are merely ALLOCATED. Because of this, an allocation to a file whose last 
allocation came from such an affected EAU creates a sub-transaction on an 
EAU that is still in the allocated state. Map buffers of such EAUs are not 
initialized properly in the VxFS private buffer cache, so these buffers are 
released back as stale during the transaction commit. Later, if a 
file-system-wide sync tries to flush the metadata, it can reference these 
buffer pointers and panic, as the buffers have already been released and 
reused.

RESOLUTION:
The fsck code is modified to correctly set the state of the EAU on disk. 
The involved code paths are also modified to avoid doing transactions on 
unexpanded EAUs.

* 3896311 (Tracking ID: 3779916)

SYMPTOM:
vxfsconvert fails to upgrade the layout version for a vxfs file system with 
a large number of inodes. The error message shows an inode discrepancy.

DESCRIPTION:
vxfsconvert walks through the ilist and converts inodes. It stores chunks 
of inodes in a buffer and processes them as a batch. The inode number 
parameter for this inode buffer is of type unsigned integer. The offset of a 
particular inode in the ilist is calculated by multiplying the inode number 
by the size of the inode structure. For large inode numbers, the product 
inode_number * inode_size can overflow the unsigned integer limit, giving a 
wrong offset within the ilist file. vxfsconvert therefore reads the wrong 
inode and eventually fails.

RESOLUTION:
The inode number parameter is defined as unsigned long to avoid 
overflow.

* 3896312 (Tracking ID: 3811849)

SYMPTOM:
On cluster file system (CFS), due to a size mismatch in the cluster-wide buffers
containing hash bucket for large directory hashing (LDH), the system panics with
the following stack trace:
  
   vx_populate_bpdata()
   vx_getblk_clust()
   vx_getblk()
   vx_exh_getblk()
   vx_exh_get_bucket()
   vx_exh_lookup()
   vx_dexh_lookup()
   vx_dirscan()
   vx_dirlook()
   vx_pd_lookup()
   vx_lookup_pd()
   vx_lookup()
   
On some platforms, instead of panic, LDH corruption is reported. Full fsck
reports some meta-data inconsistencies as displayed in the following sample
messages:

fileset 999 primary-ilist inode 263 has invalid alternate directory index
        (fileset 999 attribute-ilist inode 8193), clear index? (ynq)y

DESCRIPTION:
On a highly fragmented file system with a file system block size of 1K, 2K or
4K, the bucket(s) of an LDH inode, which has a fixed size of 8K, can spread
across multiple small extents. In-core allocation for an LDH inode bucket
happens in parallel to the on-disk allocation, which results in small in-core
buffer allocations that are merged for the final in-memory representation of
the LDH inode's bucket. On two Cluster File System (CFS) nodes, this may
result in the same LDH metadata/bucket being represented as in-core buffers of
different sizes. This may cause a system panic as LDH inode buckets are passed
around the cluster, or on-disk corruption of the LDH inode's buckets if these
buffers are flushed to disk.

RESOLUTION:
The code is modified to separate the on-disk allocation and in-core buffer
initialization in LDH code paths, so that in-core LDH bucket will always be
represented by a single 8K buffer.

* 3896313 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message containing
the direct command to clean the file system with full fsck is printed to the
user.

DESCRIPTION:
When mounting a file system with the full fsck flag set, the mount fails and
a message is printed asking the user to clean the file system with full fsck.
This message contains the direct command to run; if that command is run
without first collecting a file system metasave, evidence is lost. Also, since
fsck removes the file system inconsistencies, it may lead to undesired data
loss.

RESOLUTION:
A more generic message is now given instead of the direct command.

* 3896314 (Tracking ID: 3856363)

SYMPTOM:
vxfs reports mapbad errors in the syslog as below:
vxfs: msgcnt 15 mesg 003: V-2-3: vx_mapbad - vx_extfind - /dev/vx/dsk/vgems01/lvems01 file system free extent bitmap in au 0 marked bad.

And, full fsck reports the following metadata inconsistencies:

fileset 999 primary-ilist inode 6 has invalid number of blocks (18446744073709551583)
fileset 999 primary-ilist inode 6 failed validation clear? (ynq)n
pass2 - checking directory linkage
fileset 999 directory 8192 block devid/blknum 0/393216 offset 68 references free inode
                                ino 6 remove entry? (ynq)n
fileset 999 primary-ilist inode 8192 contains invalid directory blocks
                                clear? (ynq)n
pass3 - checking reference counts
fileset 999 primary-ilist inode 5 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 5 clear? (ynq)n
fileset 999 primary-ilist inode 8194 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8194 clear? (ynq)n
fileset 999 primary-ilist inode 8195 unreferenced file, reconnect? (ynq)n
fileset 999 primary-ilist inode 8195 clear? (ynq)n
pass4 - checking resource maps

DESCRIPTION:
While processing the VX_IEZEROEXT extop, VxFS frees the extent without 
setting the VX_TLOGDELFREE flag. There are other cases, too, where the flag 
is not set on a delayed extent free; this can result in mapbad errors and 
invalid block counts.

RESOLUTION:
Since the VX_TLOGDELFREE flag needs to be set on every extent free, the code 
is modified to discard this flag and treat every extent free as a delayed 
extent free implicitly.

* 3901379 (Tracking ID: 3897793)

SYMPTOM:
Panic happens because of a race where the mntlock ID is cleared while the 
mntlock flag is still set.

DESCRIPTION:
The panic happens when the mntlock ID is NULL even though the mntlock flag 
is set. The race is between the fsadm thread and the proc mount show_option 
thread. The fsadm thread deinitializes the mntlock ID first and then removes 
the mntlock flag; if another thread races with the fsadm thread, it can 
observe the mntlock flag set while the mntlock ID is already NULL. The fix 
is to remove the flag first and deinitialize the mntlock ID later.

RESOLUTION:
The code is modified to remove the mntlock flag first.
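
The ordering invariant can be sketched in user space as below. The names
are illustrative, and the real kernel code pairs this ordering with
additional synchronization that is elided here:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int  mntlock_flag;
    static const char *mntlock_id;

    /* Fixed teardown order: clear the flag first, then the id, so a
     * reader that treats "flag set" as "id valid" no longer sees the
     * flag while the id is already NULL. */
    static void mntlock_release(void)
    {
        atomic_store(&mntlock_flag, 0);   /* 1. flag off               */
        mntlock_id = NULL;                /* 2. id deinitialized after */
    }

    static void show_mount_options(void)
    {
        if (atomic_load(&mntlock_flag))
            printf("mntlock=%s\n", mntlock_id);
    }

    int main(void)
    {
        mntlock_id = "lock0";
        atomic_store(&mntlock_flag, 1);
        show_mount_options();
        mntlock_release();
        show_mount_options();   /* prints nothing: flag already clear */
        return 0;
    }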

* 3903583 (Tracking ID: 3905607)

SYMPTOM:
Internal assert failed during migration.

DESCRIPTION:
In the latest Solaris release, the Solaris vnode structure changed: a new 
field was added at the end of the structure. The migration vnode embeds a 
Solaris vnode, and because of this newly added field the migration vnode's 
forw pointer gets corrupted, resulting in the assert failure.

RESOLUTION:
The code is modified to introduce a new padding field in the migration 
vnode structure so that forw is not corrupted.

* 3905055 (Tracking ID: 3880113)

SYMPTOM:
Mapbad scenario in case of deletion of cloned files having shared ZFOD extents

DESCRIPTION:
In a cloned filesystem, if two overlay inodes have shared ZFOD extents,
writes to both inodes can allocate the same ZFOD extents to both files. This
may further lead to mapbad scenarios when both files are deleted.

RESOLUTION:
The code has been modified so that shared ZFOD extents are not pushed on a
cloned filesystem.

* 3905056 (Tracking ID: 3879761)

SYMPTOM:
Performance issue observed due to contention on vxfs spin lock vx_worklist_lk.

DESCRIPTION:
ODM IOs are performed asynchronously by queuing the ODM work items to the
worker threads. More worker threads than required were woken up after
enqueuing the ODM work items, which leads to contention on the vx_worklist_lk
spinlock.

RESOLUTION:
The code is modified to wake up only one worker thread when a single work
item is enqueued.
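
The wake-up policy can be sketched with pthreads (illustrative names, not
the VxFS worker code): signal a single worker for a single item, and
broadcast only for a batch:

    #include <pthread.h>

    static pthread_mutex_t worklist_lk = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  worklist_cv = PTHREAD_COND_INITIALIZER;
    static int nqueued;

    /* Waking every worker for one item only makes the losers reacquire
     * the lock and go back to sleep; that is the contention fixed here. */
    static void enqueue_work(int nitems)
    {
        pthread_mutex_lock(&worklist_lk);
        nqueued += nitems;
        if (nitems == 1)
            pthread_cond_signal(&worklist_cv);     /* wake one worker */
        else
            pthread_cond_broadcast(&worklist_cv);  /* wake them all   */
        pthread_mutex_unlock(&worklist_lk);
    }

    int main(void)
    {
        enqueue_work(1);   /* one worker woken  */
        enqueue_work(3);   /* all workers woken */
        return 0;
    }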

* 3906148 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the 
directory inode's ownership switches between nodes. When ownership of the 
directory inode comes back to the node which previously abdicated it, ACL 
permissions were not inherited correctly by newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3907038 (Tracking ID: 3879799)

SYMPTOM:
Due to inconsistent LCT (Link Count Table), Veritas File System (VxFS) mount
prompts for full fsck every time, and displays the following error message:
UX:vxfs mount: ERROR: V-3-26881: Cannot be mounted until it has been cleaned by 
fsck. Please run "fsck -F vxfs -y <volume>" before mounting.

DESCRIPTION:
LCT corruption occurs due to events taking place outside of the VxFS code,
causing the mount command to fail. The VxFS fsck utility was unable to handle
and correct these kinds of LCT inconsistencies.

RESOLUTION:
The fsck(1M) command is modified to handle LCT inconsistency and rectify the
state of file system.

* 3907350 (Tracking ID: 3817734)

SYMPTOM:
If a file system with the full fsck flag set is mounted, a message containing
the direct command to clean the file system with full fsck is printed to the
user.

DESCRIPTION:
When mounting a file system with the full fsck flag set, the mount fails and
a message is printed asking the user to clean the file system with full fsck.
This message contains the direct command to run; if that command is run
without first collecting a file system metasave, evidence is lost. Also, since
fsck removes the file system inconsistencies, it may lead to undesired data
loss.

RESOLUTION:
A more generic message is now given instead of the direct command.

Patch ID: 151230-02

* 3754492 (Tracking ID: 3761603)

SYMPTOM:
Full fsck flag will be set incorrectly at the mount time.

DESCRIPTION:
Extop processing may be deferred during umount (for example, in case of a 
crash or disk failure) and kept on disk so that mount can process it. During 
mount, an inode can have multiple extops set. Previously, if an inode had 
both the trim and reorg extops set during mount, the full fsck flag was set 
incorrectly. This patch avoids that situation.

RESOLUTION:
The code is modified to avoid such unnecessary setting of the fullfsck flag.

* 3756002 (Tracking ID: 3764824)

SYMPTOM:
Internal cluster file system (CFS) testing hit a debug assert.

DESCRIPTION:
The internal debug assert is seen when GLM recovery occurs while one of the 
secondary nodes is mounting, specifically when GLM recovery happens between 
attaching the file system and mounting it.

RESOLUTION:
The code is modified to handle the GLM reconfiguration issue.

* 3769992 (Tracking ID: 3729158)

SYMPTOM:
fuser and other commands hang on vxfs file systems.

DESCRIPTION:
The hang is seen while two threads contend for two locks, ILOCK and PLOCK.
The writeadvise thread owns the ILOCK and is waiting for the PLOCK, while
the dalloc thread owns the PLOCK and is waiting for the ILOCK.

RESOLUTION:
The correct locking order is PLOCK followed by ILOCK.
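
A minimal pthreads sketch of the single-order rule the fix enforces (names
illustrative): both paths take PLOCK before ILOCK, so neither can hold one
lock while waiting for a thread that holds the other:

    #include <pthread.h>

    static pthread_mutex_t plock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ilock = PTHREAD_MUTEX_INITIALIZER;

    /* Both the writeadvise path and the dalloc flusher would use this
     * helper, so the lock order is identical everywhere. */
    static void do_locked(void (*op)(void))
    {
        pthread_mutex_lock(&plock);   /* PLOCK always first  */
        pthread_mutex_lock(&ilock);   /* ILOCK always second */
        op();
        pthread_mutex_unlock(&ilock);
        pthread_mutex_unlock(&plock);
    }

    static void work(void) { /* file system work */ }

    int main(void)
    {
        do_locked(work);
        return 0;
    }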

* 3817120 (Tracking ID: 3804400)

SYMPTOM:
VRTS/bin/cp does not return any error when quota hard limit is 
reached and partial write is encountered.

DESCRIPTION:
When the quota hard limit is reached, VRTS/bin/cp may encounter a partial 
write but not return any error to the upper-layer application.

RESOLUTION:
VRTS/bin/cp is adjusted to detect the partial write caused by the quota 
limit and return a proper error to the upper-layer application.

Patch ID: 151230-01

* 3657150 (Tracking ID: 3604071)

SYMPTOM:
With the thin reclaim feature turned on, you can observe high CPU usage on the vxfs thread process. The backtrace of such kind of threads usually look like this:
	 
	 - vx_dalist_getau
	 - vx_recv_bcastgetemapmsg
	 - vx_recvdele
	 - vx_msg_recvreq
	 - vx_msg_process_thread
	 - vx_kthread_init

DESCRIPTION:
In the routine that gets the broadcast information of a node holding the maps of Allocation Units (AUs) for which it has delegations, the locking mechanism is inefficient: every call performs a series of down-up operations on a certain semaphore. This can result in a huge CPU cost when many threads call the routine in parallel.

RESOLUTION:
The code is modified to optimize the locking mechanism in this routine so that it performs the down-up operation on the semaphore only once.

* 3657152 (Tracking ID: 3602322)

SYMPTOM:
Panic while flushing the dirty pages of the inode with backtrace,

do_page_fault
error_exit
vx_iflush
vx_workitem_process
vx_worklist_process
vx_worklist_thread
vx_kthread_init
kernel_thread

DESCRIPTION:
The race between the vx_iflush and vx_ilist_chunkclean on the 
same 
inode. The vx_ilist_chunkclean takes the inode and clears the inode pointers 
while deiniting, which causes NULL pointer dereference in the flusher 
thread.

RESOLUTION:
The race is resolved by taking the ilock, along with the icache lock, 
whenever a pointer in the inode is dereferenced. If the inode pointer is 
already NULL/deinitialized, move on to the next inode and try to flush it.

* 3657153 (Tracking ID: 3622323)

SYMPTOM:
Cluster Filesystem mounted as read-only panics when it gets sharing and/or compression statistics using the fsadm_vxfs(1M) command with the following stack:
	 
	- vx_irwlock
	- vx_clust_fset_curused
	- vx_getcompstats
	- vx_aioctl_getcompstats
	- vx_aioctl_common
	- vx_aioctl
	- vx_unlocked_ioctl
	- vx_ioctl
	- vfs_ioctl
	- do_vfs_ioctl
	- sys_ioctl
	- system_call_fastpath

DESCRIPTION:
When a file system is mounted as read-only, part of the initial setup is skipped, including the loading of a few internal structures. These structures are referenced while gathering statistics for sharing and/or compression. As a result, a panic occurs.

RESOLUTION:
The code is modified to only allow "fsadm -HS all" to gather sharing and/or compression statistics on read-write file systems. On read-only file systems, this command fails.

* 3657156 (Tracking ID: 3604750)

SYMPTOM:
The kernel loops during the extent re-org with the following stack trace:
vx_bmap_enter()
vx_reorg_enter_zfod()
vx_reorg_emap()
vx_extmap_reorg()
vx_reorg()
vx_aioctl_full()
$cold_vx_aioctl_common()
vx_aioctl()
vx_ioctl()
vno_ioctl()
ioctl()
syscall()

DESCRIPTION:
The extent re-org minimizes file system fragmentation. When a re-org request 
is issued for an inode with a lot of ZFOD extents, the extents of the 
original inode are reallocated to the re-org inode. During this, the ZFOD 
extents are preserved and entered into the re-org inode in a transaction. If 
the allocated extent is big, the transaction that enters the ZFOD extents 
becomes big and returns an error. Even when the transaction is retried, the 
same issue occurs, so the kernel loops during the extent re-org.

RESOLUTION:
The code is modified to enter the Bmap (block map) of the allocated extent 
and then perform the ZFOD processing. If a committable error occurs during 
the ZFOD enter, the transaction is committed and the ZFOD enter continues.

* 3657157 (Tracking ID: 3617191)

SYMPTOM:
Checkpoint creation may take hours.

DESCRIPTION:
During checkpoint creation, with an inode marked for removal and being overlaid, there may be a downstream clone, and VxFS starts pulling all the data. With Oracle this is evident because temporary files are deleted during checkpoint creation.

RESOLUTION:
The code is modified to selectively pull the data, only if a downstream push inode exists for the file.

* 3657158 (Tracking ID: 3601943)

SYMPTOM:
The block map tree of a file is corrupted across the levels, and truncating the inode of the file may lead to an infinite loop.

DESCRIPTION:
For files larger than 64G, the truncation code first walks through the bmap tree to find the optimal offset from which to begin the truncation. If this offset falls within a corrupted range of the bmap, the truncation code, which relies on a binary search, cannot find the offset and returns empty. This makes the truncation code submit a dummy transaction that updates the inode with the latest ctime, without freeing the allocated extents.

RESOLUTION:
The truncation code is modified to detect the corruption, mark the inode bad and mark the file system for full fsck. This makes it possible for the next full fsck run to throw out the inode and free the extents.

* 3657491 (Tracking ID: 3657482)

SYMPTOM:
Stress test on cluster file system fails due to data corruption

DESCRIPTION:
In the direct I/O write code path, there is an optimization which avoids invalidation of any in-core pages in the range; instead, in-core pages are updated with the new data together with the disk write. This optimization comes into the picture when cached qio is enabled on the file. When an in-core page is modified this way, it does not get marked dirty, so if the page was not already dirty the in-core changes might be lost if the page is reused. This can cause corruption if the page is read again before the disk update completes.

RESOLUTION:
In case of cached qio/ODM, disable the page overwrite optimization.

* 3665980 (Tracking ID: 2059611)

SYMPTOM:
The system panics due to a NULL pointer dereference while flushing the
bitmaps to the disk and the following stack trace is displayed:


vx_unlockmap+0x10c
vx_tflush_map+0x51c
vx_fsq_flush+0x504
vx_fsflush_fsq+0x190
vx_workitem_process+0x1c
vx_worklist_process+0x2b0
vx_worklist_thread+0x78

DESCRIPTION:
The vx_unlockmap() function unlocks a map structure of the file
system. If the map is being used, the hold count is incremented. The
vx_unlockmap() function attempts to check whether this is an empty mlink doubly
linked list. The asynchronous vx_mapiodone routine can change the link at random
even though the hold count is zero.

RESOLUTION:
The code is modified to change the evaluation rule inside the
vx_unlockmap() function, so that further evaluation can be skipped over when map
hold count is zero.

* 3665984 (Tracking ID: 2439261)

SYMPTOM:
When the vx_fiostats_tunable is changed from zero to non-zero, the
system panics with the following stack trace:
vx_fiostats_do_update
vx_fiostats_update
vx_read1
vx_rdwr
vno_rw
rwuio
pread

DESCRIPTION:
When vx_fiostats_tunable is changed from zero to non-zero, all the
incore-inode fiostats attributes are set to NULL. When these attributes are
accessed, the system panics due to the NULL pointer dereference.

RESOLUTION:
The code has been modified to check the file I/O stat attributes are
present before dereferencing the pointers.

* 3665990 (Tracking ID: 3567027)

SYMPTOM:
During the file system resize operation, the fullfsck flag is set with the following message: vxfs: msgcnt 183168 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/sfsdg/vol file system fullfsck flag set - vx_fs_upgrade_reorg

DESCRIPTION:
File system resize requires some temporary inodes to swap the old inode and the converted inode. Before a structural inode is processed, the fullfsck flag is set in case a failure occurs during the metadata change; the flag is cleared after the swap completes successfully.

If the temporary inode allocation fails, VxFS leaves the fullfsck flag on the 
disk. However, all temporary inodes can be cleaned up when not in use, so 
these temporary inodes do not result in corruption.

RESOLUTION:
The code is modified to clear the fullfsck flag if the structural inode conversion cannot create its temporary inode.

* 3666009 (Tracking ID: 3647749)

SYMPTOM:
An obsolete v_path is created for the VxFS node when the following steps are performed:
	
	1) Create a file (file1).
	2) Delete the file (file1).
	3) Create a new file (file2, which has the same inode number as file1).
	4) The vnode of file2 has an obsolete v_path; it still shows file1.

DESCRIPTION:
When VxFS reuses an inode, it performs clear or reset operations to clean up the obsolete information. However, the corresponding Solaris vnode may not be properly handled, which leads to the obsolete v_path.

RESOLUTION:
The code is modified to call the vn_recycle() function in the VxFS inode clear routine to reset the corresponding Solaris vnode.

* 3666010 (Tracking ID: 3233276)

SYMPTOM:
On a 40 TB file system, the fsclustadm setprimary command consumes more than 2 minutes for execution. And, the unmount operation consumes more time causing a primary migration.

DESCRIPTION:
The old primary needs to process the delegated allocation units while migrating
from primary to secondary. The inefficient implementation of the allocation unit
list is consuming more time while removing the element from the list. As the file system size increases, the allocation unit list also increases, which results in additional migration time.

RESOLUTION:
The code is modified to process the allocation unit list efficiently. With this modification, the primary migration is completed in 1 second on the 40 TB file system.

* 3677165 (Tracking ID: 2560032)

SYMPTOM:
The system may panic while upgrading VRTSvxfs in the presence of a zone
mounted on VxFS.

DESCRIPTION:
When the upgrade happens from the base version to the target version, the 
post install script unloads the base-level fdd module and loads the 
target-level fdd module while the VxFS module is still at the base version 
level. This leads to an inconsistency between the file device driver (fdd) 
and VxFS modules.

RESOLUTION:
The post install script is modified to avoid this inconsistency.

* 3688210 (Tracking ID: 3689104)

SYMPTOM:
The module version of the vxcafs module is not displayed when the "modinfo vxcafs" command is run.

DESCRIPTION:
When the "modinfo vxcafs" command is run, the output is not able to get the  module version. However, the version is displayed for VxFS, fdd, vxportal, and other VxFS kernel modules.

RESOLUTION:
The code is modified so that the module version for vxcafs is displayed like the other VxFS kernel modules.

* 3697966 (Tracking ID: 3697964)

SYMPTOM:
When a file system is upgraded, the upgraded layout clears the superblock flags (fs_flags).

DESCRIPTION:
When a file system is upgraded, the new superblock structure gets populated with the field values. Most of these values are inherited from the old superblock. As a result, the fs_flags values are overwritten and the flags such as VX_SINGLEDEV are deleted from the superblock.

RESOLUTION:
The code is modified to restore the old superblock flags while upgrading the disk layout of a file system.

* 3699953 (Tracking ID: 3702136)

SYMPTOM:
LCT corruption is observed while mounting the file system on a secondary node.

DESCRIPTION:
While mounting the file system on the secondary node, the primary node allocates the extent for the PNOLT (per node OLT) entry. The primary node can either allocate a new extent or extend the previously allocated extent.
In the second case, it zeroes out the whole extent, including the previously allocated part, erasing existing valid data and resulting in LCT corruption.

RESOLUTION:
The code is modified such that when an existing PNOLT extent is expanded to make space for PNOLT entry then only the newly allocated part of the extent is zeroed out.

* 3715567 (Tracking ID: 3715566)

SYMPTOM:
VxFS fails to report an error when the maxlink and nomaxlink options are set for disk layout version (DLV) lower than 10.

DESCRIPTION:
The maxlink and nomaxlink options allow you to enable and disable the maxlink support feature respectively. The maxlink support feature operates only on DLV version 10. Due to an issue, the maxlink and nomaxlink options wrongly appear in DLV versions lower than 10. However, when selected the options do not take effect.

RESOLUTION:
The code is modified such that VxFS reports an error when you attempt to  set the maxlink and nomaxlink options for DLV version lower than 10.

* 3718542 (Tracking ID: 3269553)

SYMPTOM:
VxFS returns inappropriate message for read of hole via ODM.

DESCRIPTION:
Sometimes sparse files containing temp or backup/restore files are created outside the Oracle database. And, Oracle can read these files only using the ODM. As a result, ODM fails with an ENOTSUP error.

RESOLUTION:
The code is modified to return zeros instead of an error.

* 3721458 (Tracking ID: 3721466)

SYMPTOM:
After a file system is upgraded from version 6 to 7, the vxupgrade(1M) command fails to set the VX_SINGLEDEV flag on a superblock.

DESCRIPTION:
The VX_SINGLEDEV flag was introduced in disk layout version 7.
The purpose of the flag is to indicate whether a file system resides only on a single device or a volume.
When the disk layout is upgraded from version 6 to 7, the flag is not inherited along with the other values since it was not supported in version 6.

RESOLUTION:
The code is modified to set the VX_SINGLEDEV flag when the disk layout is upgraded from version 6 to 7.

* 3725347 (Tracking ID: 3725346)

SYMPTOM:
Trimming of the underlying SSD volume was not supported on AIX and Solaris 
using the "fsadm -R -o ssd" command.

DESCRIPTION:
The fsadm command with the -o ssd option ("fsadm -R -o ssd") is used to initiate the 
TRIM command on an underlying SSD volume, which was not supported on AIX and Solaris.

RESOLUTION:
The code is modified on AIX and Solaris to support the TRIM command on an underlying 
SSD volume.

* 3725569 (Tracking ID: 3731678)

SYMPTOM:
During an internal test, a debug assert was observed while handling the error scenario.

DESCRIPTION:
The issue occurred during write stabilization (writing data to both fscache and HDD). The asserting function finds the entry corresponding to the current IO request and sets the error bits appropriately. A mismatch was observed between the address of the buffer used for the IO and the buffer used by the original user: bp_baddr stores the address of the buffer corresponding to the IO, and bp_origbaddr stores the original buffer address corresponding to the user request.

RESOLUTION:
The code is modified to handle the error scenario.

* 3726403 (Tracking ID: 3739618)

SYMPTOM:
The sfcache command with the "-i" option may not show file system cache statistics periodically.

DESCRIPTION:
The sfcache command with the "-i" option may not show file system cache statistics periodically.

RESOLUTION:
The code is modified to add a loop to print sfcache statistics at the specified interval.

* 3729111 (Tracking ID: 3729104)

SYMPTOM:
Man page changes are missing for the smartiomode option of mount_vxfs(1M).

DESCRIPTION:
The smartiomode option for mount_vxfs is missing from the man page.

RESOLUTION:
The man page is modified to document the smartiomode option of mount_vxfs.

* 3729704 (Tracking ID: 3719523)

SYMPTOM:
'vxupgrade' does not clear the superblock replica of old layout versions.

DESCRIPTION:
While upgrading the file system to a new layout version, a new superblock inode is allocated and an extent is allocated for the replica superblock. After writing the new superblock (primary + replica), VxFS frees the extent of the old superblock replica.
Now, if the primary superblock corrupts, the full fsck searches for replica to repair the file system. If it finds the replica of old superblock, it restores the file system to the old layout, instead of creating a new one. This behavior is wrong.
In order to take the file system to a new version, we should clear the replica of old superblock as part of vxupgrade, so that full fsck won't detect it later.

RESOLUTION:
Clear the replica of old superblock as part of vxupgrade.

* 3736133 (Tracking ID: 3736772)

SYMPTOM:
The sfcache(1M) command does not automatically enable write-back caching on a file system once the cache size is increased to enable write-back caching.

DESCRIPTION:
When a file system mounted with write-back caching enabled does not have sufficient caching space for write-back, read caching gets enabled instead. This behavior persists even when the cache area is grown, and write-back fails to be automatically reactivated on file systems mounted with smartiomode=writeback.

RESOLUTION:
The code is modified such that whenever cache area grows, all the file systems are scanned and write-back enable message is sent to file systems that are mounted with the write-back mode.

* 3743913 (Tracking ID: 3743912)

SYMPTOM:
Users could create sub-directories more than 64K for disk layouts having versions lower than 10.

DESCRIPTION:
In this release, the maxlink feature enables users to create more than 64K sub-directories. This feature is supported on disk layouts whose versions are higher than or equal to 10. The macro VX_TUNEMAXLINK denotes the maximum limit on sub-directories, and its value was changed from 64K to 4 billion. Due to this, users could create more than 64K sub-directories for layout versions lower than 10 as well, which is undesirable.
This fix is applicable only on platforms other than AIX.

RESOLUTION:
The code is modified so that the sub-directory limitation is set to 64K for layouts whose versions are lower than 10.

* 3755796 (Tracking ID: 3756750)

SYMPTOM:
VxFS may leak memory when File Design Driver (FDD) module is unloaded before the cache file system is taken offline.

DESCRIPTION:
When the FDD module is unloaded before the cache file system is taken offline, a few FDD-related structures in the cache file system remain unfreed. As a result, a memory leak is observed.

RESOLUTION:
The code is modified so that the FDD-related structure is not initialized for cache file systems.

Patch ID: 151773-01

* 3703631 (Tracking ID: 3615043)

SYMPTOM:
At times, while writing to a file, data could be missed.

DESCRIPTION:
While writing to a file with delayed allocation on, Solaris could dishonour
the NON_CLUSTERING flag and cluster pages beyond the range for which the
flushing was issued, leading to data loss.

RESOLUTION:
The code is modified to clear the flag and flush the exact range in the dalloc case.

* 3751050 (Tracking ID: 3751049)

SYMPTOM:
umountall operation failed on Solaris with error "V-3-20358: cannot open 
mnttab"

DESCRIPTION:
On Solaris, fopen() normally returns an EMFILE error for 32-bit 
applications when attempting to associate a stream with a file accessed by 
a file descriptor with a value greater than 255. When umountall is used to 
unmount more than 256 file systems, the command forks child processes and 
opens more than 256 file descriptors at the same time, thus breaking the 
limitation and failing the operation.

RESOLUTION:
Use "F" mode in fopen call to avoid the 256 file descriptor limitation.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch fs-sol10_sparc-Patch-6.2.1.300.tar.gz to /tmp
2. Untar fs-sol10_sparc-Patch-6.2.1.300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/fs-sol10_sparc-Patch-6.2.1.300.tar.gz
    # tar xf /tmp/fs-sol10_sparc-Patch-6.2.1.300.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # cd /tmp/hf
    # ./installVRTSvxfs621P3 [<host1> <host2>...]

You can also install this patch together with the 6.2.1 maintenance release using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 6.2.1 directory and invoke the installmr script
   with the -patch_path option, where -patch_path points to the patch directory
    # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
For the Solaris 10 release, refer to the online manual pages for
instructions on using 'patchadd' and 'patchrm' scripts provided with
Solaris.  Any other special or non-generic installation instructions
should be described below as special instructions.  The following
example installs a patch to a standalone machine:
       example# patchadd /var/spool/patch/151230-03


REMOVING THE PATCH
------------------
Run the Uninstaller script to automatically remove the patch:
------------------------------------------------------------
To uninstall the patch perform the following step on at least one node in the cluster:
    # /opt/VRTS/install/uninstallVRTSvxfs621P3 [<host1> <host2>...]

Remove the patch manually:
-------------------------
The following example removes a patch from a standalone system:
       example# patchrm 151230-03


KNOWN ISSUES
------------
* Tracking ID: 3896260

SYMPTOM: Oracle database start failure, with trace log like this:

ORA-63999: data file suffered media failure
ORA-01114: IO error writing block to file 304 (block # 722821)
ORA-01110: data file 304: <file_name>
ORA-17500: ODM err:ODM ERROR V-41-4-2-231-28 No space left on device

WORKAROUND: No

* Tracking ID: 3896276

SYMPTOM: IO service times increased with IO intensive workload on high end 
server.

WORKAROUND: No

* Tracking ID: 3904464

SYMPTOM: Sequential reads slowed after scaling the conditional 
variables on which worker threads in VxFS sleep

WORKAROUND: No



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE