fs-hpux1123-5.0MP2RP6
Obsolete
The latest patch(es): fs-hpux1123-5.0MP2RP10

 Basic information
Release type: Rolling Patch
Release date: 2012-03-09
OS update support: None
Technote: None
Documentation: None
Popularity: 3734 viewed
Download size: 30.24 MB
Checksum: 3350074237

 Applies to one or more of the following products:
File System 5.0MP2 On HP-UX 11i v2 (11.23)
Storage Foundation 5.0MP2 On HP-UX 11i v2 (11.23)
Storage Foundation Cluster File System 5.0MP2 On HP-UX 11i v2 (11.23)
Storage Foundation for Oracle 5.0MP2 On HP-UX 11i v2 (11.23)
Storage Foundation for Oracle RAC 5.0MP2 On HP-UX 11i v2 (11.23)
Storage Foundation HA 5.0MP2 On HP-UX 11i v2 (11.23)

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by (release date):
fs-hpux1123-5.0MP2RP10 2014-06-15
fs-hpux1123-5.0MP2RP9 (obsolete) 2013-12-03
fs-hpux1123-5.0MP2RP7 (obsolete) 2012-08-13

This patch supersedes the following patches (release date):
fs-hpux1123-5.0MP2RP5 (obsolete) 2011-08-30
fs-hpux1123-5.0MP2RP4 (obsolete) 2011-04-12

This patch requires (release date):
fs-hpux1123-5.0MP1RP4 (obsolete) 2008-12-29

 Fixes the following incidents:
2244365, 2405530, 2494597, 2556095, 2558844, 2607352, 2647802, 2654474, 2669100, 2684626

 Patch ID:
PHKL_42729
PHCO_42730

Readme file
                          * * * READ ME * * *
                * * * Veritas File System 5.0 MP2 * * *
                      * * * Rolling Patch 6 * * *
                         Patch Date: 2012-03-09


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas File System 5.0 MP2 Rolling Patch 6


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxfs


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas File System 5.0 MP2
   * Veritas Storage Foundation for Oracle RAC 5.0 MP2
   * Veritas Storage Foundation Cluster File System 5.0 MP2
   * Veritas Storage Foundation 5.0 MP2
   * Veritas Storage Foundation High Availability 5.0 MP2
   * Veritas Storage Foundation for Oracle 5.0 MP2


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
HP-UX 11i v2 (11.23)


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: PHCO_42730, PHKL_42729

* 2244365 (Tracking ID: 1143552)

SYMPTOM:
During re-tune operations on vx_ninode, a memory hang occurs and the
system stops responding.

DESCRIPTION:
Dynamic reconfiguration added several locks to serialize re-tuning
operations on a different platform; these were not suitable on HP-UX.
Among the locks added, one of the dynamic reconfiguration locks was
acquired in ways that caused performance degradation on large systems.

RESOLUTION:
A re-tune lock has now been added to correct the memory 
hang.

* 2405530 (Tracking ID: 2274267)

SYMPTOM:
The vxfs_bc_bufhwm tunable cannot be set on systems with more than
227 GB of memory. Attempting to tune it displays the following error:

root@zu8003hp:/#kctune vxfs_bc_bufhwm=65536 
ERROR:   mesg 113: V-2-113: The specified value for vx_bc_bufhwm is 
	 greater than 90% of the total kernel memory 267444224   Kbytes.

DESCRIPTION:
The error is caused by an integer overflow in the calculation of the upper
bound for the tunable value.

RESOLUTION:
The integer overflow in the calculation of the tunable's upper bound has
been fixed.
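The overflow can be sketched as follows. This is an illustrative reconstruction of the bug class, not the actual VxFS source; the function names are invented, and the 32-bit wrap is modeled with explicit casts so its behavior is well defined.

```c
#include <assert.h>
#include <stdint.h>

/* Buggy variant: the 90%-of-memory bound is computed in kilobytes, and
 * on systems with more than ~227 GB the intermediate product
 * total_kb * 9 no longer fits in a signed 32-bit integer. */
static int32_t upper_bound_buggy(int32_t total_kb)
{
    /* the cast chain models the 32-bit wrap of the product */
    int32_t product = (int32_t)(uint32_t)((int64_t)total_kb * 9);
    return product / 10;
}

/* Fixed variant: widen to 64 bits before multiplying. */
static int64_t upper_bound_fixed(int64_t total_kb)
{
    return total_kb * 9 / 10;
}
```

With the 267444224 KB (~255 GB) figure from the error message above, the product 2,406,998,016 exceeds INT32_MAX and wraps to a negative bound, so any requested value appears "greater than 90% of the total kernel memory".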

* 2494597 (Tracking ID: 2429566)

SYMPTOM:
Memory used for the VxFS internal buffer cache may grow significantly
after 497 days of uptime, when LBOLT (a global value holding the current
system time in ticks) wraps around.

DESCRIPTION:
The age of a buffer is calculated from LBOLT: age = (current LBOLT -
LBOLT when the buffer was added to the list). A buffer is reused once its
age exceeds a threshold.

When LBOLT wraps, the current LBOLT becomes a very small value and the
computed age becomes negative. VxFS then never considers the buffer old,
so it is never reused, and buffer cache memory usage grows because
buffers are not recycled.

RESOLUTION:
We now check whether LBOLT has wrapped around. If it has, we reassign the
buffer's timestamp to the current LBOLT so that the buffer is reused after
some time.
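The wrap and the fix can be sketched like this; the tick width, the threshold value, and the function names are illustrative assumptions, not taken from the VxFS source.

```c
#include <assert.h>
#include <stdint.h>

#define AGE_THRESHOLD 1000   /* ticks a buffer must idle before reuse */

/* Buggy check: after a 32-bit tick counter wraps (~497 days at 100 Hz),
 * now < stamped, so the signed difference is hugely negative and the
 * buffer never looks old enough to reuse. */
static int is_reusable_buggy(int32_t now, int32_t stamped)
{
    return (now - stamped) > AGE_THRESHOLD;
}

/* Fixed check, mirroring the patch: detect the wrap and re-stamp the
 * buffer with the current tick so it becomes reusable a bit later. */
static int is_reusable_fixed(int32_t now, int32_t *stamped)
{
    if (now < *stamped)          /* tick counter wrapped around */
        *stamped = now;
    return (now - *stamped) > AGE_THRESHOLD;
}
```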

* 2556095 (Tracking ID: 2515380)

SYMPTOM:
The ff command hangs and eventually exits after exceeding its memory
limit, with the following error:

# ff -F vxfs   /dev/vx/dsk/bernddg/testvol 
UX:vxfs ff: ERROR: V-3-24347: program limit of 30701385 exceeded for directory
data block list
UX:vxfs ff: ERROR: V-3-20177: /dev/vx/dsk/bernddg/testvol

DESCRIPTION:
The 'ff' command lists all files on a VxFS file system device, performing
directory lookups as it goes. One function saves the block addresses for
a directory by traversing all of its directory blocks. Another function
tracks the buffer into which directory blocks are read and the extent up
to which they have been read; given an offset, it returns the offset up
to which the directory blocks have been read. The offset passed to it
must be relative to the extent, but the logical offset, which can exceed
the extent size, was being passed instead. As a result, the returned
offset wrapped to 0, the caller concluded that nothing had been read, and
the command looped indefinitely.

RESOLUTION:
The call to the function that maintains buffer offsets for reading data
has been removed; it was incorrect and redundant, since the same function
is already called correctly from one of the calling functions.
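The failure mode reduces to simple modular arithmetic. The helper below is hypothetical (not the real ff code) and the extent size is an assumed value, chosen only to show the wrap.

```c
#include <assert.h>
#include <stdint.h>

#define EXTENT_SIZE 8192   /* illustrative extent size in bytes */

/* The tracking function expects an offset relative to the start of the
 * current extent, so its result is effectively reduced modulo the
 * extent size.  Passing the file-logical offset instead makes the
 * result wrap: whenever that offset is an exact multiple of the extent
 * size, the caller is told 0 bytes were read and never advances. */
static int64_t offset_within_extent(int64_t off)
{
    return off % EXTENT_SIZE;
}
```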

* 2558844 (Tracking ID: 2532934)

SYMPTOM:
Performance of FTP transfers that use the 'VOP_BREAD()' vnode operation
for reads can be poor when max_buf_data_size is set to 64 KB.

DESCRIPTION:
Reads done using 'VOP_BREAD()' were not triggering read-ahead, even for
sequential I/O, when max_buf_data_size was set to 64 KB, which caused the
performance degradation. Read-ahead was skipped because the wrong read
length was passed to the read-ahead detection function.

The detection function stores the last read offset plus the amount read
into the inode, then compares the next read's start offset with this
value; for sequential reads they match. Instead of supplying
max_buf_data_size as the length, the code supplied VX_MAXBSIZE (8192).
The detection function therefore never recognized a valid sequential
pattern when max_buf_data_size was 65536, no read-ahead was performed,
and the resulting cache misses hurt performance.

RESOLUTION:
The correct read length is now passed to the read-ahead detection function
so that read-ahead on sequential I/O is triggered correctly.
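The detection scheme described above can be sketched as follows; the struct and function names are hypothetical, not the VxFS identifiers.

```c
#include <assert.h>
#include <stdint.h>

/* Per-inode state: where the previous read ended. */
struct inode_ra {
    int64_t next_off;   /* last read offset + last read length */
};

/* Returns 1 when this read starts exactly where the previous one
 * ended, i.e. the I/O pattern is sequential.  If the caller reports a
 * length smaller than what was really read (8192 instead of the 65536
 * buffer size), next_off falls short and sequential reads are
 * misclassified as random, so read-ahead never triggers. */
static int detect_sequential(struct inode_ra *ip, int64_t off, int64_t len)
{
    int seq = (off == ip->next_off);
    ip->next_off = off + len;    /* remember where this read ends */
    return seq;
}
```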

* 2607352 (Tracking ID: 2334061)

SYMPTOM:
When a file system is mounted with the tranflush option, operations
requiring metadata updates take considerably longer.

DESCRIPTION:
When a VxFS file system is mounted with the tranflush option, transaction
metadata is flushed to disk with a 100-millisecond wait before the next
transaction is flushed. This delay severely affects the operation of
various commands on the VxFS file system.

RESOLUTION:
Since the flushing is synchronous and performed in a loop, a
100-millisecond delay per transaction is excessive; a command issuing
1,000 transactions, for example, would spend 100 seconds just waiting.
The delay has been reduced from 100 milliseconds to a more reasonable
2 milliseconds.

* 2647802 (Tracking ID: 1067468)

SYMPTOM:
Enhancement of the fsck(1M) full file system check to reduce its memory
consumption.

DESCRIPTION:
1. The fsck code path currently allocates one flat bitmap per device.
This can be optimized by allocating a compressed bitmap instead.

2. The fsck code path currently uses 3 arrays to track inode dotdot
linkage. This can be optimized by eliminating 2 of these 3 redundant
arrays.

3. The fsck code path currently allocates memory for inode tables based
on the value of fsh_ninode. In MTS the ilist file can become sparse,
inflating fsh_ninode. This allocation can be optimized by sizing the
memory to the actual number of inodes present in the ilist file.

RESOLUTION:
1. Changed the code to allocate a compressed bitmap instead of a flat
bitmap.
2. Changed the code to eliminate 2 of the 3 redundant arrays used to
track inode dotdot linkages.
3. Changed the code to allocate memory for inode tables based on the
actual number of inodes present in the ilist file.
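The first change, replacing the flat per-device bitmap, can be sketched with a lazily allocated chunked bitmap. This is an illustrative design under assumed chunk sizes, not the fsck source (which may use a different compression scheme); error handling is omitted for brevity.

```c
#include <assert.h>
#include <stdlib.h>

#define CHUNK_BITS  4096
#define CHUNK_BYTES (CHUNK_BITS / 8)

/* Backing storage is allocated per 4096-bit chunk only when a bit in
 * that chunk is first set, so sparse bit patterns (typical for fsck
 * metadata) cost memory proportional to the bits actually used rather
 * than to the device size. */
struct cbitmap {
    unsigned char **chunks;   /* NULL entry = all-zero, unallocated chunk */
    size_t nchunks;
};

static struct cbitmap *cb_create(size_t nbits)
{
    struct cbitmap *cb = malloc(sizeof *cb);
    cb->nchunks = (nbits + CHUNK_BITS - 1) / CHUNK_BITS;
    cb->chunks = calloc(cb->nchunks, sizeof *cb->chunks);
    return cb;
}

static void cb_set(struct cbitmap *cb, size_t bit)
{
    size_t c = bit / CHUNK_BITS, b = bit % CHUNK_BITS;
    if (!cb->chunks[c])
        cb->chunks[c] = calloc(1, CHUNK_BYTES);  /* allocate on demand */
    cb->chunks[c][b / 8] |= (unsigned char)(1u << (b % 8));
}

static int cb_test(const struct cbitmap *cb, size_t bit)
{
    size_t c = bit / CHUNK_BITS, b = bit % CHUNK_BITS;
    return cb->chunks[c] ? (cb->chunks[c][b / 8] >> (b % 8)) & 1 : 0;
}
```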

* 2654474 (Tracking ID: 2651922)

SYMPTOM:
The "ls -l" command on a local VxFS file system runs slowly, and high CPU
usage is seen on the HP platform.

DESCRIPTION:
This issue occurs when the system is under inode pressure and most of the
inodes on the inode free lists are CFS inodes. On the HP-UX platform, CFS
inodes are currently not allowed to be reused as local inodes, to avoid a
GLM deadlock while a VxFS reconfiguration is in progress. So when the
system needs a VxFS local inode and the free lists are almost entirely
filled with CFS inodes, it spends a considerable amount of time looping
through all the inode free lists to find one.

RESOLUTION:
1. Added a global counter, "vxi_icache_cfsinodes", to count CFS inodes in
the inode cache.
2. Relaxed the condition for converting a cluster inode to a local inode:
conversion is now allowed when the number of in-core CFS inodes exceeds
the threshold "vx_clreuse_threshold" and no reconfiguration is in
progress.
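The relaxed conversion condition reduces to a simple predicate. The sketch below reuses the counter and threshold names mentioned above, but the function itself and its parameter types are hypothetical.

```c
#include <assert.h>

/* A cluster (CFS) inode may be converted to a local inode when CFS
 * inodes dominate the inode cache and no VxFS reconfiguration is in
 * progress (conversion during reconfiguration risks the GLM deadlock
 * the original restriction guarded against). */
static int can_reuse_cfs_inode(long vxi_icache_cfsinodes,
                               long vx_clreuse_threshold,
                               int reconfig_in_progress)
{
    return vxi_icache_cfsinodes > vx_clreuse_threshold
        && !reconfig_in_progress;
}
```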

* 2669100 (Tracking ID: 848619)

SYMPTOM:
High CPU utilization occurs when several processes are performing direct I/Os 
on a VxFS file.

DESCRIPTION:
The high CPU utilization occurs because of high contention for a global spinlock
that protects the linked list containing the pages that are participating in the
direct I/Os.

RESOLUTION:
The hash list function used by VxFS code has been modified to ensure that the 
lock contention is reduced.
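The idea behind the fix, splitting one global lock into hashed per-bucket locks (lock striping), can be sketched as follows. This is not the VxFS implementation; the bucket count, page-shift value, and names are assumptions for illustration.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define NBUCKETS   64
#define PAGE_SHIFT 12          /* 4 KB pages, illustrative */

/* Instead of one global spinlock over a single list of direct-I/O
 * pages, hash each page to one of several buckets, each protected by
 * its own lock.  Threads working on different pages then usually take
 * different locks, cutting contention roughly by the bucket count. */
struct pagelist_bucket {
    pthread_mutex_t lock;
    void *head;                /* pages hashed to this bucket */
};

static struct pagelist_bucket buckets[NBUCKETS];

static void pagelist_init(void)
{
    for (int i = 0; i < NBUCKETS; i++) {
        pthread_mutex_init(&buckets[i].lock, NULL);
        buckets[i].head = NULL;
    }
}

/* Pages are assigned to buckets by hashing their address. */
static unsigned page_bucket(uintptr_t page_addr)
{
    return (unsigned)((page_addr >> PAGE_SHIFT) % NBUCKETS);
}
```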

* 2684626 (Tracking ID: 1180759)

SYMPTOM:
The system panics due to a deadlock caused by improper lock handling.

DESCRIPTION:
This issue can occur because vx_lockctl does not allow locks to be
released once the file system has been disabled. A process may therefore
exit without releasing its lock, and another process trying to acquire
the lock then hits a deadlock, causing the panic.

RESOLUTION:
All file locks of the vnode are now released when the file system is
disabled, satisfying an assertion in HP-UX.


INSTALLING THE PATCH
--------------------
To install the VxFS 5.0-MP2RP6 patch:

a) To install this patch on a CVM cluster, install it one
 system at a time so that all the nodes are not brought down
 simultaneously.

b) VxFS 5.0 (GA) must be installed before applying these
  patches.

c) To verify the VERITAS file system level, execute:

     # swlist -l product | egrep -i 'VRTSvxfs'

  VRTSvxfs     5.0.01.04        VERITAS File System

Note: VRTSfsman is a corequisite for VRTSvxfs. So, VRTSfsman also
needs to be installed with VRTSvxfs.

    # swlist -l product | egrep -i 'VRTS'

  VRTSvxfs      5.0.01.04      Veritas File System
  VRTSfsman     5.0.01.02      Veritas File System Manuals

d) All prerequisite/corequisite patches must be installed. The Kernel patch
  requires a system reboot for both installation and removal.

e) To install the patch, execute the following command:

# swinstall -x autoreboot=true -s <patch_directory>  PHKL_42729 PHCO_42730

If the patch is not registered, you can register it
using the following command:

# swreg -l depot <patch_directory> 

The <patch_directory>  is the absolute path where the patch resides.


REMOVING THE PATCH
------------------
To remove the VxFS 5.0-MP2RP6 patch:

a) Execute the following command:

# swremove -x autoreboot=true PHKL_42729 PHCO_42730


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE