fs-sol10_sparc-6.0.1.200
Obsolete
The latest patch(es): sfha-sol10_sparc-6.0.5

 Basic information
Release type: Patch
Release date: 2012-09-20
OS update support: None
Technote: None
Documentation: None
Popularity: 1630 viewed
Download size: 14.01 MB
Checksum: 1037402232

 Applies to one or more of the following products:
VirtualStore 6.0.1 On Solaris 10 SPARC
Storage Foundation 6.0.1 On Solaris 10 SPARC
Storage Foundation Cluster File System 6.0.1 On Solaris 10 SPARC
Storage Foundation for Oracle RAC 6.0.1 On Solaris 10 SPARC
Storage Foundation for Sybase ASE CE 6.0.1 On Solaris 10 SPARC
Storage Foundation HA 6.0.1 On Solaris 10 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: sfha-sol10_sparc-6.0.3 (obsolete), release date 2013-02-01

 Fixes the following incidents:
2912412, 2912435, 2923805

 Patch ID:
148481-01

Readme file
                          * * * READ ME * * *
                 * * * Veritas File System 6.0.1 * * *
                      * * * Public Hot Fix 2 * * *
                         Patch Date: 2012-09-18


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas File System 6.0.1 Public Hot Fix 2


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
VirtualStore 6.0.1
Storage Foundation 6.0.1
Storage Foundation Cluster File System 6.0.1
Storage Foundation for Oracle RAC 6.0.1
Storage Foundation for Sybase ASE CE 6.0.1
Storage Foundation HA 6.0.1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 10 SPARC


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 148481-01

* 2912412 (Tracking ID: 2857629)

SYMPTOM:
When a new node takes over as primary for the file system, it could 
process stale shared extent records in a per-node queue.  The primary 
detects the bad record and sets the full fsck flag.  It also disables the 
file system to prevent further corruption.

DESCRIPTION:
Every node in the cluster that adds or removes references to shared extents 
adds the shared extent records to a per-node queue.  The primary node in the 
cluster processes the records in the per-node queues and maintains reference 
counts in a global shared extent device.  In certain cases the primary node 
might process bad or stale records in a per-node queue.  Two situations under 
which bad or stale records could be processed are:
    1. Clone creation initiated from a secondary node immediately after the 
primary migrates to a different node.
    2. Queue wraparound on any node, followed immediately by a new node 
taking over as primary.
A full fsck might not be able to rectify the resulting file system corruption.

RESOLUTION:
Update the per-node shared extent queue head and tail pointers to the correct 
values on the primary before it starts processing shared extent records.
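
For context, a minimal sketch of how the CFS primary of a file system can be 
identified or moved (the mount point /mnt1 is hypothetical):

        # fsclustadm -v showprimary /mnt1     (report which node is currently primary)
        # fsclustadm -v setprimary /mnt1      (make the local node the primary)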

* 2912435 (Tracking ID: 2885592)

SYMPTOM:
vxdump of a file system containing files compressed with vxcompress aborts.

DESCRIPTION:
vxdump aborts due to a malloc() failure.  malloc() fails because of a 
memory leak in the vxdump command code while handling compressed extents.

RESOLUTION:
Fixed the memory leak.
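
As an illustration of the affected scenario only (file and device paths are 
hypothetical), compressing files with vxcompress and then backing up the 
file system with vxdump might look like:

        # vxcompress /mnt1/data/*             (compress files on a VxFS file system)
        # vxdump -0uf /dev/rmt/0 /mnt1        (full vxdump of that file system)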

* 2923805 (Tracking ID: 2590918)

SYMPTOM:
When a new node in the cluster takes over as primary of the file system, 
there might be a significant delay in freeing up unshared extents.  This 
problem can occur only when shared extents were added or deleted immediately 
after the primary switched over to a different node in the cluster.

DESCRIPTION:
When a new node in the cluster takes over as primary for the file 
system, a file system thread on the new primary performs a full scan of the 
shared extent device file to free up any shared extents that have become 
completely unshared.  If heavy shared extent activity, such as additional 
sharing or unsharing of extents, occurs anywhere in the cluster while the 
full scan is being performed, the full scan can get interrupted.  Due to a 
bug, the interrupted scan is nevertheless marked as completed, and further 
scheduled scans of the shared extent device are only partial scans.  This 
causes a substantial delay in freeing up some of the unshared extents in 
the device file.

RESOLUTION:
If the first full scan of the shared extent device upon primary takeover 
gets interrupted, do not mark the full scan as complete.


INSTALLING THE PATCH
--------------------
    If the currently installed VRTSvxfs is below 6.0.100.000, you must 
    upgrade VRTSvxfs to the 6.0.100.000 level before installing this patch.
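
    One way to check the currently installed VRTSvxfs version (illustrative; 
    the exact output format may vary):

        # pkginfo -l VRTSvxfs | grep VERSION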

    A system reboot MAY be required after installing this patch.

    The VRTSvxfs patch 148481-01 may be installed either manually (see 
    below) or using the included hotfix installer.

 * To apply patch using the hotfix installer:

    Execute the commands
        # cd <hotfix_location> && ./installFS601P2 

    where <hotfix_location> is the top of the hotfix directory tree.


 * To apply the patch manually:

 1. It is suggested that you quiesce or terminate all applications that 
    perform I/O on VxFS filesystems.  After VxFS has completed queued 
    requests, unmount all quiescent VxFS filesystems (see the example 
    after step 2).

 2. On each system to be patched, execute the commands 

        # cd <patch_location> && patchadd  <patch_location>/148481-01 

    where  <patch_location>  is the  patches/  directory immediately 
    beneath  <hotfix_location> . 
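
    A sketch of the surrounding steps (mount points and output are 
    illustrative only):

        # mount -v | grep vxfs                (list mounted VxFS file systems)
        # umount /mnt1                        (unmount each quiescent VxFS file system)
        # showrev -p | grep 148481-01         (after patchadd, confirm the patch is installed)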


REMOVING THE PATCH
------------------
    A system reboot MAY be required after removing this patch.

    The patch may be "rolled back" (removed) either manually or by using 
    the uninstall script created during installation.

 * To remove the patch using the uninstall script:

    Execute the commands
        # cd /opt/VRTS/install && ./uninstallFS601P2 


 * To remove the patch manually:

 1. It is suggested that you quiesce or terminate all applications that 
    perform I/O through VxFS and unmount all quiescent VxFS filesystems.
    Refer to step 1 of "To apply the patch manually" above.

 2. On each system from which the patch is to be removed, execute the 
    command 
        # patchrm 148481-01 
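
    To confirm the patch is no longer installed (illustrative; the command 
    should return no output after a successful removal):

        # showrev -p | grep 148481-01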


For additional examples please see the appropriate manual pages.


SPECIAL INSTRUCTIONS
--------------------
Sun introduced a page-ordering vnode optimization in Solaris 9 and 10.
The optimization includes a new vnode flag, VMODSORT, which, when turned on,
indicates that the Virtual Memory (VM) subsystem should maintain the v_pages
list in an order that depends on whether a page is modified or unmodified.

Veritas File System (VxFS) can now take advantage of that flag,
which can result in significant performance improvements on operations
that depend on flushing, such as fsync.

This optimization requires the fixes for Sun BugIDs 6393251 and 6538758,
which are included in the Solaris kernel patch listed below.
Enabling VxFS VMODSORT functionality without the correct OS kernel patches
can result in data corruption.

Required operating system patches:

  (Solaris 9 SPARC)
     122300-11  (or greater)
        dependent patches:
        112233-12
        117171-17
        118558-39


To enable VxFS VMODSORT functionality, the following line must be added
to the  /etc/system  file after the vxfs forceload:

        set vxfs:vx_vmodsort=1
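
A sketch of the resulting /etc/system entries and a patch-level check 
(the forceload line shown assumes the standard VxFS entry; verify against 
your own configuration and operating system release):

        # grep vxfs /etc/system
        forceload: fs/vxfs
        set vxfs:vx_vmodsort=1

        # showrev -p | grep 122300            (Solaris 9 SPARC: verify the kernel patch level)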


OTHERS
------
NONE