fs-hpux1131-6.0.1.200

 Basic information
Release type: Patch
Release date: 2012-09-20
OS update support: None
Technote: None
Documentation: None
Popularity: 1295 viewed
Download size: 75.88 MB
Checksum: 3289248876
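The checksum above appears to be a POSIX cksum value. As a quick sanity check after downloading, you can compare it against the archive before installing. A minimal sketch (the archive filename in the usage comment is an assumption; use whatever name you downloaded):

```shell
# Compare a downloaded file's POSIX cksum CRC against an expected value.
verify_checksum() {
    # $1 = path to the downloaded patch archive, $2 = expected cksum value
    actual=$(cksum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH: got $actual, expected $2"
    fi
}

# Hypothetical usage with this patch's published checksum:
# verify_checksum fs-hpux1131-6.0.1.200.tar.gz 3289248876
```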

 Applies to one or more of the following products:
Storage Foundation 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation Cluster File System 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation for Oracle RAC 6.0.1 On HP-UX 11i v3 (11.31)
Storage Foundation HA 6.0.1 On HP-UX 11i v3 (11.31)

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
2912412, 2912435, 2923805

 Patch ID:
PVCO_03965
PVKL_03964

Readme file
                          * * * READ ME * * *
                 * * * Veritas File System 6.0.1 * * *
                      * * * Public Hot Fix 2 * * *
                         Patch Date: 2012-09-18


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas File System 6.0.1 Public Hot Fix 2


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Storage Foundation 6.0.1
   * Storage Foundation Cluster File System 6.0.1
   * Storage Foundation for Oracle RAC 6.0.1
   * Storage Foundation HA 6.0.1

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
HP-UX 11i v3 (11.31)


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: PVKL_03964, PVCO_03965

* 2912412 (Tracking ID: 2857629)

SYMPTOM:
When a new node takes over as primary for the file system, it might 
process stale shared extent records in a per-node queue.  The primary detects 
the bad record and sets the full fsck flag.  It also disables the file 
system to prevent further corruption.

DESCRIPTION:
Every node in the cluster that adds or removes references to shared extents 
appends the shared extent records to a per-node queue.  The primary node in the 
cluster processes the records in the per-node queues and maintains reference 
counts in a global shared extent device.  In certain cases the primary node 
might process bad or stale records in a per-node queue.  Two situations in 
which bad or stale records can be processed are:
    1. Clone creation initiated from a secondary node immediately after primary 
migration to a different node.
    2. Queue wraparound on any node, followed immediately by takeover of the 
primary by a new node.
A full fsck might not be able to rectify the resulting file system corruption.

RESOLUTION:
Update the per-node shared extent queue head and tail pointers to the correct 
values on the primary before it starts processing shared extent records.

* 2912435 (Tracking ID: 2885592)

SYMPTOM:
vxdump of a file system that has been compressed with vxcompress aborts.

DESCRIPTION:
vxdump aborts because of a malloc() failure.  malloc() fails due to a 
memory leak in the vxdump command code while handling compressed extents.

RESOLUTION:
Fixed the memory leak.

* 2923805 (Tracking ID: 2590918)

SYMPTOM:
When a new node in the cluster takes over as primary of the file system, 
there might be a significant delay in freeing up unshared extents.  This problem 
occurs only when shared extent additions or deletions take place immediately 
after the primary switches over to a different node in the cluster.

DESCRIPTION:
When a new node in the cluster takes over as primary for the file 
system, a file system thread in the new primary performs a full scan of the 
shared extent device file to free up any shared extents that have become 
completely unshared.  If heavy shared extent activity, such as additional 
sharing or unsharing of extents, occurs anywhere in the cluster while the 
full scan is being performed, the full scan can be interrupted.  Due to a 
bug, the interrupted full scan is nevertheless marked as completed, and further 
scheduled scans of the shared extent device are only partial scans.  This causes 
a substantial delay in freeing up some of the unshared extents in the device 
file.

RESOLUTION:
If the first full scan of the shared extent device upon primary takeover 
is interrupted, do not mark the full scan as complete.


INSTALLING THE PATCH
--------------------

 To install the VxFS 6.0.100.200 patch using the installer:

a) Change to the patch top-level directory and run the install script as
   shown below.
   
    # cd <patch_dir> && ./installFS601P2 

   You will need to reboot each system after it is successfully patched.


 To install the VxFS 6.0.100.200 patch manually:

a) If you install this patch on a CVM cluster, install it one system at a
   time so that all the nodes are not brought down simultaneously.

b) VxFS 6.0.1 (GA) must be installed before applying this patch.

c) To verify the VERITAS file system level, enter:

    # swlist -l product | egrep -i "VRTSvxfs"

   The resulting output should be:

    VRTSvxfs              6.0.100.000    VERITAS File System

d) All prerequisite/corequisite patches have to be installed.  The Kernel
   patch requires a system reboot for both installation and removal.

e) To install the patch, enter the following command:

    # swinstall -x autoreboot=true -s <patch_directory> PVKL_03964 PVCO_03965

   If the patch depot is not registered, you can register it using the
   following command:

    # swreg -l depot <patch_directory> 

   where  <patch_directory>  is the absolute path where the patch resides.
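If you are patching many nodes, the version check in step (c) can be scripted. A minimal sketch that parses swlist-style output read on stdin (the expected revision string is the one shown in step (c) of this readme):

```shell
# Print the VRTSvxfs revision found in `swlist -l product` output on stdin,
# and report whether it matches the expected pre-patch level.
check_vxfs_level() {
    expected="6.0.100.000"
    rev=$(awk '$1 == "VRTSvxfs" {print $2; exit}')
    if [ "$rev" = "$expected" ]; then
        echo "VRTSvxfs $rev: ready to patch"
    else
        echo "VRTSvxfs level is '$rev', expected $expected"
    fi
}

# On a live HP-UX system you would run:
#   swlist -l product | egrep -i "VRTSvxfs" | check_vxfs_level
```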


REMOVING THE PATCH
------------------

 To remove the VxFS 6.0.100.200 patch using the uninstall script:

a) Change to the directory  /opt/VRTS/install  and run the uninstall script 
   as shown below.
   
    # cd /opt/VRTS/install && ./uninstallFS601P2 

   You will need to reboot each system after the patch is successfully removed.


 To remove the VxFS 6.0.100.200 patch manually:

a) Enter the following command:

    # swremove -x autoreboot=true PVKL_03964 PVCO_03965
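After removal, you may want to confirm that this patch's filesets are gone. A small sketch that scans `swlist -l patch`-style output on stdin for the patch IDs listed at the top of this readme:

```shell
# Return success (0) if either of this patch's IDs appears in the
# swlist output supplied on stdin, failure (1) otherwise.
patch_still_installed() {
    grep -E 'PVKL_03964|PVCO_03965' >/dev/null
}

# On a live HP-UX system:
#   if swlist -l patch | patch_still_installed; then
#       echo "patch filesets still present"
#   fi
```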


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE