                          * * * READ ME * * *
 * * * Cluster Server Agent Extension for SFCFS Distribution 5.1 SP1 RP2 * * *
                         * * * P-patch 1 * * *
                         Patch Date: 2012-06-27


This document provides the following information:

   * PATCH NAME
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Cluster Server Agent Extension for SFCFS Distribution 5.1 SP1 RP2 P-patch 1


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTScavf


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation for Oracle RAC 5.1 SP1
   * Veritas Storage Foundation Cluster File System 5.1 SP1
   * Symantec VirtualStore 5.1 SP1


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 10 X86


INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:

Patch ID: 143275-08

* 2694495 (Tracking ID: 2669724)

SYMPTOM:
The CFSMount agent hit an assert on a monitor timeout, with a stack similar to the following:

VCSAssert()
VCSAgThreadTbl::add()
VCSAgRes::call_entry_point()
VCSAgIntState::process_res_fault()
VCSAgISMonitoring::timedout()
VCSAgRes::process_cmd()
vcsag_service_thread_join()
start_thread()

Messages similar to the following also appear in the engine log:

Thread(4149205920) Agent is calling clean for resource(cfsmount12) because 1 successive invocations of the monitor procedure did not complete within the expected time.
2010 Jul 23 21:30:09 swsfs2_01 AgentFramework[3942]: ASSERTION FAILED

DESCRIPTION:
Pthread cancellation is not processed cleanly by the glibc library routines used in the monitor entry point. As a result, the cancellation cleanup handler was not called.

RESOLUTION:
The glibc functions are replaced with UNIX system calls to avoid the issue; for example, fopen() is replaced with open() and fgets() with read(). An illustrative sketch of this approach follows the Patch ID 143275-08 incident list below.

* 2788309 (Tracking ID: 2684573)

SYMPTOM:
The performance of the cfsumount(1M) command for the VRTScavf package is slow when some checkpoints are deleted.

DESCRIPTION:
When a checkpoint is removed asynchronously, a kernel thread is started to handle the job in the background. If an unmount command is issued before these checkpoint removal jobs are completed, the command waits for the completion of these jobs. A forced unmount can interrupt the process of checkpoint deletion, and the remaining work is left to the next mount.

RESOLUTION:
The code is modified to add a counter in the vxfsstat(1M) command to determine the number of checkpoint removal threads in the kernel. The '-c' option is added to the cfsumount(1M) command to force unmount a mounted file system while checkpoint removal jobs are running.

* 2824916 (Tracking ID: 2824895)

SYMPTOM:
The vcscvm cfsumount test failed when unmounting a Storage Checkpoint instance through the cfsumount(1M) command.

DESCRIPTION:
When a Storage Checkpoint instance mounted on one node of the cluster was unmounted through cfsumount, the command did not succeed on the nodes where the parent resource is not ONLINE.

RESOLUTION:
Proper checks are added to the check_offline function to detect the failed unmount case and return the correct unmount exit code.
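
The following is a minimal, illustrative sketch of the approach described in the resolution for incident 2694495 above; it is not the shipped VRTScavf agent code, and the thread function, file path, and buffer handling are assumptions made for this example. It shows file I/O performed through the open(2)/read(2) system calls rather than the stdio routines fopen(3)/fgets(3), with a pthread cancellation cleanup handler registered so that a monitor thread cancelled on timeout still releases its file descriptor.

/*
 * Illustrative sketch only -- not the shipped agent code.
 * Reads a file with open(2)/read(2) instead of fopen(3)/fgets(3), and
 * registers a pthread cancellation cleanup handler so the file
 * descriptor is closed even if the thread is cancelled mid-read.
 * Compile with, e.g.: cc sketch.c -o sketch -lpthread
 * (the exact compiler/linker flags vary by platform).
 */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static void close_fd(void *arg)
{
    int fd = *(int *)arg;
    if (fd >= 0)
        close(fd);              /* runs on normal exit and on cancellation */
}

static void *monitor_read(void *arg)
{
    const char *path = arg;     /* path is an assumption for the demo */
    char buf[4096];
    ssize_t n;
    int fd = open(path, O_RDONLY);                  /* instead of fopen() */

    pthread_cleanup_push(close_fd, &fd);
    if (fd >= 0) {
        /* read(2) is a cancellation point; if the thread is cancelled
         * here, close_fd() is still invoked by the cleanup machinery. */
        while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {   /* instead of fgets() */
            buf[n] = '\0';
            /* parse the monitored data here */
        }
    }
    pthread_cleanup_pop(1);     /* pop and execute the cleanup handler */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    char path[] = "/etc/mnttab";    /* any readable file works for the demo */

    pthread_create(&tid, NULL, monitor_read, path);
    pthread_join(tid, NULL);
    return 0;
}

As in the fix described above, the essential point is that the raw system calls interact predictably with pthread cancellation, so the registered cleanup handler runs when the agent framework cancels a timed-out monitor thread.
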
Patch ID: 143275-07

* 2406572 (Tracking ID: 2146573)

SYMPTOM:
The customer reported this as a performance issue: the application ran slower when the problem occurred, and the SAR utility on HP-UX showed some processes in the PRI state.

DESCRIPTION:
(1) Complexity: This is an extremely complex performance concern in an environment where different applications compete heavily for the same system-wide resources.
(2) Performance overheads: Some competition for system resources is expected; however, intense competition can create large overheads due to increased repetitive initialization. The issue was approached in multiple ways, and the customer was provided with tuning recommendations and application changes.

RESOLUTION:
No product bug was identified on the Symantec side. However, a GLM patch was provided to the customer which does two things:
1. Different names are given to the different GLM spin locks, so that the various locks can be identified in the spin watcher output.
2. A spin lock that only keeps a count of how many allocations were done in GLM is no longer taken. Removing this lock does not cause any regression. Note that this change is not claimed to solve the performance problem.


INSTALLING THE PATCH
--------------------
For the Solaris 10 release, refer to the online manual pages for instructions on using the 'patchadd' and 'patchrm' scripts provided with Solaris. Any other special or non-generic installation instructions are described below as special instructions.

The following example installs a patch to a standalone machine:

       example# patchadd /var/spool/patch/143275-08


REMOVING THE PATCH
------------------
The following example removes a patch from a standalone system:

       example# patchrm 143275-08

For additional examples, please see the appropriate manual pages.


SPECIAL INSTRUCTIONS
--------------------
Sun introduced a page ordering vnode optimization in Solaris 9 and 10. The optimization includes a new vnode flag, VMODSORT, which, when turned on, indicates that the Virtual Memory (VM) subsystem should maintain the v_pages list in an order that depends on whether a page is modified or unmodified.

Veritas File System (VxFS) can now take advantage of that flag, which can result in significant performance improvements on operations that depend on flushing, such as fsync.

This optimization requires the fixes for Sun BugIDs 6393251 and 6538758, which are included in the Solaris kernel patch listed below. Enabling VxFS VMODSORT functionality without the correct OS kernel patches can result in data corruption.

Required operating system patches:

   (Solaris 9 SPARC) 122300-11 (or greater)
   Dependent patches: 112233-12 117171-17 118558-39

To enable VxFS VMODSORT functionality, the following line must be added to the /etc/system file after the vxfs forceload:

set vxfs:vx_vmodsort


OTHERS
------
NONE