* * * READ ME * * *
* * * Veritas File System 6.0.1 * * *
* * * Public Hot Fix 3 * * *

Patch Date: 2012-12-05

This document provides the following information:

* PATCH NAME
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
Veritas File System 6.0.1 Public Hot Fix 3

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxfs

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* Veritas Storage Foundation for Oracle RAC 6.0.1
* Veritas Storage Foundation Cluster File System 6.0.1
* Veritas Storage Foundation 6.0.1
* Veritas Storage Foundation High Availability 6.0.1
* Symantec VirtualStore 6.0.1
* Veritas Storage Foundation for Sybase ASE CE 6.0.1

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL6 x86-64

INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 6.0.100.300

* 3008454 (Tracking ID: 3004466)

SYMPTOM:
Installation of 5.1SP1RP3 fails on RHEL 6.3.

DESCRIPTION:
The installation of 5.1SP1RP3 fails on RHEL 6.3.

RESOLUTION:
The install script is updated to handle the installation failure.

Patch ID: 6.0.100.200

* 2912412 (Tracking ID: 2857629)

SYMPTOM:
When a new node takes over as primary for the file system, it can process stale shared extent records from a per-node queue. The primary detects the bad record, sets the full fsck flag, and disables the file system to prevent further corruption.

DESCRIPTION:
Every node in the cluster that adds or removes references to shared extents appends shared extent records to a per-node queue. The primary node in the cluster processes the records in the per-node queues and maintains reference counts in a global shared extent device. In certain cases the primary node can process bad or stale records from a per-node queue. Two situations in which this can happen are:
1. Clone creation is initiated from a secondary node immediately after the primary migrates to a different node.
2. A queue wraps around on any node and a new node takes over as primary immediately afterwards.
A full fsck might not be able to rectify the resulting file system corruption.

RESOLUTION:
The head and tail pointers of each per-node shared extent queue are set to correct values on the primary before it starts processing shared extent records.

* 2912435 (Tracking ID: 2885592)

SYMPTOM:
vxdump of a file system that was compressed using vxcompress aborts.

DESCRIPTION:
vxdump aborts because of a malloc() failure. malloc() fails due to a memory leak in the vxdump command code that handles compressed extents.

RESOLUTION:
The memory leak is fixed.
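For context, the failing operation was a backup of a file system containing files compressed with vxcompress, along the lines of the following sketch (the mount point and dump file paths are hypothetical):

    vxcompress /mnt1/data/file1            # compress a file on the VxFS file system
    vxdump -0 -f /backup/mnt1.dump /mnt1   # level-0 dump of that file system; the
                                           # command that aborted before the fix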
* 2923805 (Tracking ID: 2590918)

SYMPTOM:
After a new node in the cluster takes over as primary of the file system, there might be a significant delay in freeing up unshared extents. This problem can occur only when shared extent additions or deletions occur immediately after the primary switches over to a different node in the cluster.

DESCRIPTION:
When a new node in the cluster takes over as primary for the file system, a file system thread on the new primary performs a full scan of the shared extent device file to free up any shared extents that have become completely unshared. If heavy shared extent activity, such as additional sharing or unsharing of extents, occurs anywhere in the cluster while the full scan is in progress, the scan can be interrupted. Due to a bug, the interrupted scan is still marked as completed, and subsequently scheduled scans of the shared extent device are only partial scans. This causes a substantial delay in freeing up some of the unshared extents in the device file.

RESOLUTION:
If the first full scan of the shared extent device after a primary takeover is interrupted, the scan is no longer marked as complete.

Patch ID: 6.0.100.100

* 2907912 (Tracking ID: 2907908)

SYMPTOM:
VxFS and VxVM components fail to install on the SLES11 SP2 kernel versions released after the GA kernel, namely:
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION:
Starting with the 3.0 kernel, kernel versions have three numbers, whereas 2.6 kernel versions had four. The logic that treats the first three numbers as the 'kern_major' has to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION:
The code is changed to treat only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* 2907921 (Tracking ID: 2907919)

SYMPTOM:
VxFS and VxVM components fail to install on the SLES11 SP2 kernel versions released after the GA kernel, namely:
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION:
Starting with the 3.0 kernel, kernel versions have three numbers, whereas 2.6 kernel versions had four. The logic that treats the first three numbers as the 'kern_major' has to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION:
The code is changed to treat only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* 2907924 (Tracking ID: 2907923)

SYMPTOM:
VxFS and VxVM components fail to install on the SLES11 SP2 kernel versions released after the GA kernel, namely:
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION:
Starting with the 3.0 kernel, kernel versions have three numbers, whereas 2.6 kernel versions had four. The logic that treats the first three numbers as the 'kern_major' has to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION:
The code is changed to treat only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* 2907932 (Tracking ID: 2907930)

SYMPTOM:
VxFS and VxVM components fail to install on the SLES11 SP2 kernel versions released after the GA kernel, namely:
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION:
Starting with the 3.0 kernel, kernel versions have three numbers, whereas 2.6 kernel versions had four. The logic that treats the first three numbers as the 'kern_major' has to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION:
The code is changed to treat only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* 2910648 (Tracking ID: 2905579)

SYMPTOM:
VxVM rpm installation on SLES11 SP2 fails for kernel version 3.0.26-0.7.6 and above with the following error message:
"This release of vxdmp does not contain any modules which are suitable for your 3.0.31-0.9-default kernel.
error: %post(VRTSvxvm-6.0.100.000-GA_SLES11.x86_64) scriptlet failed, exit status 1"

DESCRIPTION:
For SLES11 SP2, SUSE released the kernel version updates 3.0.26-0.7.6, 3.0.31-0.9.1, 3.0.34-0.7.9, and 3.0.38-0.5.1, following its earlier kernel version 3.0.13-0.27.1. The VxVM kernel modules are built against 3.0.13-0.27.1 and are compatible with the other kernel updates as well. However, the installation scripts used to treat the first three numbers of the version (for example, 3.0.13) as the short kernel version, which must match the kernel version of the machine. For the updated kernel versions the short kernel version does not match, and the installation fails with an error saying the modules are not suitable for the kernel.

RESOLUTION:
The installation scripts are updated so that, for the 3.0 kernel series, the first two numbers of the kernel version are treated as the short kernel version.
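The corrected version-matching logic amounts to taking a shorter prefix of the running kernel version on 3.x kernels. A minimal sketch of the idea in shell (variable names and structure are illustrative assumptions, not the actual scriptlet code):

    KVER=$(uname -r)    # e.g. 3.0.31-0.9-default or 2.6.32.12-0.7-default
    case "$KVER" in
    2.6.*)
        # 2.6 series: the first three numbers form the short kernel version, e.g. 2.6.32
        KERN_MAJOR=$(echo "$KVER" | cut -d- -f1 | cut -d. -f1-3)
        ;;
    3.*)
        # 3.0 series onwards: only the first two numbers form the short kernel version, e.g. 3.0
        KERN_MAJOR=$(echo "$KVER" | cut -d. -f1-2)
        ;;
    esac
    echo "short kernel version: $KERN_MAJOR"

With the old logic, 3.0.31-0.9-default would yield a short kernel version of 3.0.31, which does not match the 3.0.13 that the modules were built against; with the fix, both yield 3.0 and the modules install.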
INSTALLING THE PATCH
--------------------
rpm -Uvh VRTSvxfs-6.0.100.300-RHEL6.x86_64.rpm

A quick verification sketch appears at the end of this file.

REMOVING THE PATCH
------------------
rpm -e rpm_name

where rpm_name is the name of the package to remove (VRTSvxfs for this patch).

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE
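As a quick check, standard rpm queries can confirm the patch level after installation (the expected version comes from the patch rpm name above):

    rpm -qa | grep VRTSvxfs    # list the installed VxFS package
    rpm -q VRTSvxfs            # should report version 6.0.100.300 once the patch is installed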