sfrac-sol_sparc-5.0MP3HF2

 Basic information
Release type: Hot Fix
Release date: 2009-09-18
OS update support: None
Technote: None
Documentation: None
Popularity: 1343 viewed    downloaded
Download size: 7.76 MB
Checksum: 3518470227

 Applies to one or more of the following products:
Storage Foundation for Oracle RAC 5.0MP3 On Solaris 10 SPARC
Storage Foundation for Oracle RAC 5.0MP3 On Solaris 8 SPARC
Storage Foundation for Oracle RAC 5.0MP3 On Solaris 9 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
1766110, 1839455

 Patch ID:
141757-02, 141756-02, 141758-02

Readme file
OS: Solaris
OS Version: Solaris Sparc 5.8, 5.9, 5.10
Fixes Applied for Products:
	VRTSdbac - Veritas SF Oracle RAC by Symantec

Additional Instructions:
Please read the instructions below before installing the patch.

 PATCH VRTSdbac 5.0MP3HF2 for Veritas SF Oracle RAC 5.0MP3
===============================================================

 Patch Date:  September, 2009

This README provides information on:
 * BEFORE GETTING STARTED
 * CRC AND BYTE COUNT
 * FIXES AND ENHANCEMENTS INCLUDED IN THE PATCH
 * PACKAGES AFFECTED BY THE PATCH
 * INSTALLING THE PATCH
 * UNINSTALLING THE PATCH


BEFORE GETTING STARTED:
----------------------
This patch only applies to:
	VRTSdbac 5.0MP3 running on Solaris Sparc 5.8, 5.9 or 5.10

Ensure that you are running the supported configurations before
installing this patch.


FIXES AND ENHANCEMENTS INCLUDED IN THE PATCH: 
--------------------------------------------
Etrack Incidents: 1839455, 1766110

SDRs of Fixed Symantec Incidents:
--------------------------------
Symantec Incident: 1839455
Symptom:
	The VCS log shows that the cssd monitor timed out and eventually 
	the cssd resource faulted.
Defect Description:
	The cssd agent uses the ps(1M) command to determine whether the 
	three Oracle clusterware daemons are running. The "ps -ef" command 
	was executed once for each Oracle clusterware daemon. On a heavily 
	loaded system with a large number of processes, each "ps -ef" 
	invocation can take significant time, causing the cssd monitor 
	to time out.
Resolution:
	The cssd monitor script was optimized to use a single ps(1M) 
	command with the "-o [args]" flag, reducing execution time.
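The shape of this optimization can be sketched in shell. This is an illustrative sketch only, not the shipped agent code: the daemon names passed in and the exact ps options are assumptions for the example.

```shell
# Illustrative sketch: capture one process listing and reuse it for
# every daemon check, instead of running "ps -ef" once per daemon.
check_daemons() {
    proclist=$(ps -eo args)   # single ps(1) invocation
    missing=0
    for d in "$@"; do
        echo "$proclist" | grep -w "$d" >/dev/null || {
            echo "missing: $d"
            missing=1
        }
    done
    return $missing
}
```

On a node running Oracle clusterware, something like `check_daemons ocssd crsd evmd` would succeed only when all the named daemons appear in the single listing.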

SDR of VRTSdbac 5.0MP3HF1:
-------------------------
Symantec Incident: 1766110
Symptom:
	Oracle takes a long time to start with VCSIPC compared to 
	Oracle UDP/IPC.
Defect Description:
	The post/wait behavior is very sensitive to acknowledging posts 
	sent outside of the actual system calls. The VCSIPC implementation 
	did not remember posts performed outside of the receiving 
	process's actual wait call. There is no way to know whether 
	Oracle should call post without being generally aggressive, 
	which would degrade performance under high load.
Resolution:
	The fix makes the following changes:
	1. Remember posts even if they are performed outside of the 
	   receiving process's actual wait call.
	2. If multiple posts are received before wait is called, only one 
	   is acknowledged (that is, a flag rather than a counter is used 
	   to remember posts).
	3. If a post is received before the wait call, the IPC state is 
	   still progressed, rather than returning POSTED immediately.
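Item 2 can be illustrated with a toy shell sketch. This is not the actual VCSIPC code (which lives in the kernel IPC layer); it only shows why a flag, rather than a counter, collapses repeated posts into a single pending wake-up.

```shell
# Toy illustration only: pending posts are remembered as a flag, not a
# counter, so several posts before the next wait yield one wake-up.
posted=0
do_post() { posted=1; }    # repeated posts collapse into one flag
do_wait() {
    if [ "$posted" -eq 1 ]; then
        posted=0
        echo "woken"
    else
        echo "would block"
    fi
}
```

After `do_post; do_post; do_post`, the first `do_wait` prints `woken` and the second prints `would block`; with a counter instead of a flag, the waiter would be woken three times.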


PACKAGES AFFECTED BY THE PATCH:
-------------------------------
This patch updates the following SF Oracle RAC package: 
	VRTSdbac, from 5.0MP3 or higher to 5.0MP3HF2


INSTALLING THE PATCH:
--------------------
The following steps should be run on all nodes in the VCS cluster:

Stopping services in the cluster node:
--------------------------------------
The following steps should be run on each node in the cluster, one at a time:

1. Shut down Oracle instances on all nodes of the cluster.
   a) If the database instances are configured under VCS control, offline 
   the corresponding VCS database resource. As superuser, enter:
	# /opt/VRTSvcs/bin/hagrp -offline [oracle-group] -sys [node_name]

   Make sure the [oracle-resource] is offline.
   From any one node of the cluster enter:
	# /opt/VRTSvcs/bin/hares -state [oracle-resource]

   b) If the database instances are not configured under VCS control, 
   enter the following on any one node in the cluster (as Oracle user):
	$ $ORACLE_HOME/bin/srvctl stop database -d [database_name]

2. For Oracle 9i, stop gsd. As Oracle user, enter:
	$ $ORACLE_HOME/bin/gsdctl stop

3. For Oracle 10g or Oracle 11g, stop Oracle clusterware.
    As superuser, enter:
	# hares -offline [cssd-resource] -sys [node-name]

4. On each node of the cluster, unconfigure and unload VCSMM.
    Unconfigure VCSMM:
	# /etc/init.d/vcsmm stop
    Verify that port 'o' has been closed:
	# /sbin/gabconfig -a
    The display should not have port 'o' listed.
    Unload VCSMM:
	# modinfo | grep vcsmm
    Take note of VCSMM module id from the output.
	# modunload -i [vcsmm_module_id]

5. On each node of the cluster, unconfigure and unload LMX.
    Unconfigure LMX:
	# /etc/init.d/lmx stop
    Unload LMX:
	# modinfo | grep lmx
    Take note of LMX module id from the output.
	# modunload -i [lmx_module_id]
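Steps 4 and 5 both read a module id out of modinfo output by hand; a small helper can do that parsing. This is illustrative only: the Solaris modinfo output format (module id in the first column) is assumed.

```shell
# Illustrative helper: print the module id (first field) of the matching
# modinfo line, suitable for passing to "modunload -i".
module_id() {
    # $1 = module name; modinfo output is read from stdin
    grep " $1 " | awk '{print $1; exit}'
}
```

For example, `modinfo | module_id vcsmm` would print the VCSMM module id, which can then be given to `modunload -i`.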


Installing the Patch:
--------------------
1. Uncompress the downloaded patch from Symantec.
   Change directory to the unzipped patch location. 
   Install the VRTSdbac 5.0MP3HF2 patch using the following command:
		# patchadd [patch-id]
   where [patch-id] is one of the following, depending on the 
   operating system release:
	For SunOS Release 5.8, [patch-id] is 141756-02.
	For SunOS Release 5.9, [patch-id] is 141757-02.
	For SunOS Release 5.10, [patch-id] is 141758-02.
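The release-to-patch-ID mapping above can be wrapped in a small shell helper. The helper itself is illustrative; the patch IDs come from this README.

```shell
# Illustrative helper: map a SunOS release string (as printed by
# "uname -r") to the matching VRTSdbac 5.0MP3HF2 patch ID.
patch_id_for_release() {
    case "$1" in
        5.8)  echo "141756-02" ;;
        5.9)  echo "141757-02" ;;
        5.10) echo "141758-02" ;;
        *)    echo "unsupported release: $1" >&2; return 1 ;;
    esac
}
```

On a supported system this allows, for example, `patchadd "$(patch_id_for_release "$(uname -r)")"`.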

2. Verify that the new patch has been installed:
	# showrev -p | grep [patch-id]
   If the patch is installed properly, the output is similar to 
   the following:
Patch: [patch-id] Obsoletes:  Requires: [5.0MP3-patch-id] Incompatibles:  
Packages: VRTSdbac
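The verification in step 2 can also be scripted; this sketch assumes the "showrev -p" line format shown above.

```shell
# Illustrative check: succeed only when the given patch id appears at
# the start of a "Patch:" line in showrev -p output (read from stdin).
patch_installed() {
    grep "^Patch: $1 " >/dev/null
}
```

For example, `showrev -p | patch_installed 141758-02 && echo installed` would print `installed` only when the patch is present.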


Re-starting services in the cluster node:
-----------------------------------------
1. Relink Oracle with the SF Oracle RAC 5.0MP3HF2 libraries:

      Refer to the section "Relinking the SF Oracle RAC libraries to 
      Oracle RAC" in the "Veritas Storage Foundation for Oracle RAC 
      Installation and Configuration Guide Solaris 5.0 Maintenance Pack 3".

2. On each node of the cluster, start VCSMM. As a superuser, enter:
	# /etc/init.d/vcsmm start
   Verify that port 'o' is up:
	# /sbin/gabconfig -a
	The display should have port 'o' listed.

3. On each node of the cluster, start LMX. As a superuser, enter:
	# /etc/init.d/lmx start

4. For Oracle 9i, start gsd on all nodes of the cluster.
   As Oracle user, enter:
	$ $ORACLE_HOME/bin/gsdctl start

5. For Oracle 10g or Oracle 11g, start Oracle clusterware. As superuser, enter:
	# hares -online [cssd-resource] -sys [node-name]

6. Start Oracle instances on all nodes of the cluster.
   a) If the database instances are configured under VCS control, 
   online the corresponding VCS database resource. 
   As superuser, enter:
	# /opt/VRTSvcs/bin/hagrp -online [oracle-group] -sys [node_name]

   Make sure the [oracle-resource] is online.
   From any one node of the cluster enter:
	# /opt/VRTSvcs/bin/hares -state [oracle-resource]

   b) If the database instances are not configured under VCS control, 
   run the following on any one node in the cluster (as Oracle user):
	$ $ORACLE_HOME/bin/srvctl start database -d [database_name]


UNINSTALLING THE PATCH:
-----------------------
Follow the steps below on each cluster node to remove 
the patch from the cluster:

Steps to remove the Patch from a cluster node:
---------------------------------------------
1. Follow the steps provided under "Stopping services in the cluster node" 
   section above, to stop the services on the node.

2. Remove the patch using the following command:
	# patchrm [patch-id]
   where [patch-id] is one of the following, depending on the 
   operating system release:
	For SunOS Release 5.8, [patch-id] is 141756-02.
	For SunOS Release 5.9, [patch-id] is 141757-02.
	For SunOS Release 5.10, [patch-id] is 141758-02.

3. Verify that the patch has been removed from the system:
	# showrev -p | grep [patch-id]
   The command should produce no output.

4. Restart the node following the steps under 
   "Re-starting services in the cluster node" section above.