sfha-sles9_ia64-4.1MP4RP2

 Basic information
Release type: Rolling Patch
Release date: 2008-05-16
OS update support: None
Technote: 304004
Documentation: sfha_readfirst.pdf
Download size: 49.58 MB
Checksum: 2869812135
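The checksum listed above appears to be a cksum-style (CRC) value. If so, you can verify the downloaded archive before installing; this is a minimal sketch, and the archive name sfha-sles9_ia64-4.1MP4RP2.tar.gz is only a placeholder for the actual file name you downloaded:
# cksum sfha-sles9_ia64-4.1MP4RP2.tar.gz
The first field of the output should match the checksum listed above.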

 Applies to one or more of the following products:
Cluster Server 4.1 MP4 On SLES9 ia64
File System 4.1 MP4 On SLES9 ia64
Storage Foundation 4.1 MP4 On SLES9 ia64
Storage Foundation Cluster File System 4.1 MP4 On SLES9 ia64
Storage Foundation HA 4.1 MP4 On SLES9 ia64
Volume Manager 4.1 MP4 On SLES9 ia64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
1135784, 1163182, 1168032, 1169621, 1183235, 1198311, 1199279, 1205965, 1207161, 1210268, 1210799, 1210800, 1211385, 1211445, 1211541, 1256572

 Patch ID:
VRTSgms-4.1.40.20-MP4RP2_SLES9
VRTSodm-common-4.1.40.20-MP4RP2_SLES9
VRTSvcsdr-4.1.40.20-MP4RP2_SLES9
VRTSvxfen-4.1.40.20-MP4RP2_SLES9
VRTSvxfs-platform-4.1.40.20-MP4RP2_SLES9
VRTSvxvm-platform-4.1.40.20-MP4RP2_SLES9
VRTSvxvm-common-4.1.40.20-MP4RP2_SLES9
VRTSvxfs-common-4.1.40.20-MP4RP2_SLES9
VRTSglm-4.1.40.20-MP4RP2_SLES9
VRTSgab-4.1.40.20-MP4RP2_SLES9
VRTSllt-4.1.40.20-MP4RP2_SLES9
VRTSfspro-4.1.40.20-MP4RP2_SLES9
VRTSodm-platform-4.1.40.20-MP4RP2_SLES9

Readme file
Date: 2008-05-16

OS: Linux

OS Version: sles9_ia64

Etrack Incidents: 1211385, 1211445, 1211541, 1168032, 1183235, 1205965, 1210268, 1135784, 1163182, 1169621, 1198311, 1199279, 1207161, 1256572, 1210799, 1210800

Install/Uninstall Instructions:

Installing RP2 on a Cluster
An upgrade on a cluster requires stopping cluster failover functionality during the
entire procedure. However, if you use CFS and CVM, the CFS and CVM services
remain available. The upgrade is performed in several stages:
- Freeze service group operations and stop VCS on the cluster.
- Select a group of one or more cluster nodes to upgrade, and leave a group of one
or more nodes running.
- Take the first group offline and install the software patches.
- Bring the first group (with the newly installed patches) online to restart cluster
failover services.
- Upgrade the nodes in the second group, and bring them online. The cluster is
fully restored.
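Before freezing service group operations, it can be useful to confirm that the cluster is healthy. The following optional check is a minimal sketch; it assumes the standard VCS, LLT, GAB, and fencing utilities are in your PATH, and the fencing check applies only if I/O fencing is configured:
# hastatus -summary
# lltstat -nvv
# gabconfig -a
# vxfenadm -d
All nodes should appear in the LLT and GAB membership, and all service groups should be in their expected states before you proceed.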
Depending on your cluster type, use one of the following procedures to install RP2:
- Installing RP2 on a VCS Cluster
- Installing RP2 on an SFCFS cluster
- Installing RP2 on an SFRAC cluster
Installing RP2 on a VCS Cluster
To install RP2 on a cluster:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Switch the service group to a node that is running.
# hagrp -switch service_group -to nodename
4. Make the VCS configuration writable. On a node that you want to upgrade, type:
# haconf -makerw
5. Freeze the HA service group operations. Enter the following command on each
node if you selected a group of nodes to upgrade:
# hasys -freeze -persistent nodename
6. Make the VCS configuration read-only:
# haconf -dump -makero
7. Select the group of nodes that are to be upgraded first, and follow step 8 through
step 18 for these nodes.
8. Stop VCS. Enter the following command on each node in the group that is
upgraded:
# hastop -local
9. Stop the VCS command server:
# killall CmdServer
10. Stop cluster fencing, GAB, and LLT.
# /etc/init.d/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
11. If required, you can upgrade the nodes at this stage, and patch them to a
supported kernel version.
See "Supported Platforms" on page 2.
12. On each node, run the following commands to upgrade to 4.1 MP4 RP2.
# rpm -Uvh VRTSllt-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSgab-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfen-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvcsdr-4.1.40.20-MP4RP2_dist.arch.rpm
where dist is the supported Linux distribution and arch is the supported
Linux architecture.
See "Supported Platforms" on page 2.
See "Packages for Cluster Server" on page 6.
13. Shut down and reboot each of the upgraded nodes. After the nodes come back up,
application failover capability is available for that group.
14. Run the following commands to start VCS:
# /etc/init.d/llt start
# /etc/init.d/gab start
# /etc/init.d/vxfen start
# /etc/init.d/vcs start
15. Make the VCS configuration writable again from any node in the upgraded
group:
# haconf -makerw
16. Unfreeze the service group operations. Perform this task on each node if you
upgraded a group of nodes:
# hasys -unfreeze -persistent nodename
17. Make the VCS configuration read-only:
# haconf -dump -makero
18. Switch the service group to the original node:
# hagrp -switch service_group -to nodename
19. Repeat step 8 through step 18 for the second group of nodes.
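After both groups of nodes have been upgraded and brought online, you can optionally confirm the result. This is a minimal verification sketch; the package names correspond to the rpms installed in step 12:
# rpm -q VRTSllt VRTSgab VRTSvxfen VRTSvcsdr
# gabconfig -a
# hastatus -summary
The rpm query should report the 4.1.40.20-MP4RP2 versions on every node, and all nodes should appear in the GAB membership.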

Installing RP2 on an SFCFS cluster
To install RP2 on a cluster:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Switch the service group to a node that is running.
# hagrp -switch service_group -to nodename
4. From any node in the cluster, make the VCS configuration writable:
# haconf -makerw
5. Enter the following command to freeze HA service group operations on each
node:
# hasys -freeze -persistent nodename
6. Make the configuration read-only:
# haconf -dump -makero
7. Select the group of nodes that are to be upgraded first, and follow step 8 through
step 34 for these nodes.
8. Stop VCS by entering the following command on each node in the group being
upgraded:
# hastop -local
9. Stop the VCS command server:
# killall CmdServer
10. Unregister CFS from GAB.
# fsclustadm cfsdeinit
11. Stop cluster fencing, GAB, and LLT.
# /etc/init.d/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
12. Check if each node's root disk is under VxVM control by running this command.
# df -v /
The root disk is under VxVM control if /dev/vx/dsk/rootvol is listed as being
mounted as the root (/) file system. If so, unmirror and unencapsulate the root
disk as described in the following steps:
a. Use the vxplex command to remove all the plexes of the volumes rootvol,
swapvol, usr, var, opt and home that are on disks other than the root disk.
For example, the following command removes the plexes mirrootvol-01,
and mirswapvol-01 that are configured on a disk other than the root disk:
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
Note Do not remove the plexes on the root disk that correspond to the original
disk partitions.
b. Enter the following command to convert all the encapsulated volumes in the
root disk back to being accessible directly through disk partitions instead of
through volume devices. There must be at least one other disk in the rootdg
disk group in addition to the root disk for vxunroot to succeed.
# /etc/vx/bin/vxunroot
Following the removal of encapsulation, the system is rebooted from the
unencapsulated root disk.
13. If required, you can upgrade the nodes at this stage, and patch them to a
supported kernel version.
See "Supported Platforms" on page 2.
14. On each node, use the following command to check if any Storage Checkpoints
are mounted:
# df -T | grep vxfs
If any Storage Checkpoints are mounted, on each node in the cluster unmount all
Storage Checkpoints.
# umount /checkpoint_name
15. On each node, use the following command to check if any VxFS file systems are
mounted:
# df -T | grep vxfs
a. If any VxFS file systems are present, on each node in the cluster unmount all
the VxFS file systems:
# umount /filesystem
b. On each node, verify that all file systems have been cleanly unmounted:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
flags 0 mod 0 clean clean_value
A clean_value value of 0x5a indicates the file system is clean, 0x3c indicates
the file system is dirty, and 0x69 indicates the file system is dusty. A dusty
file system has pending extended operations.
c. If a file system is not clean, enter the following commands for that file system:
# fsck -t vxfs filesystem
# mount -t vxfs filesystem mountpoint
# umount mountpoint
This should complete any extended operations that were outstanding on the
file system and unmount the file system cleanly.
There may be a pending large fileset clone removal extended operation if the
umount command fails with the following error:
file system device busy
You know for certain that an extended operation is pending if the following
message is generated on the console:
Storage Checkpoint asynchronous operation on file_system
file system still in progress.
d. If an extended operation is pending, you must leave the file system mounted
for a longer time to allow the operation to complete. Removing a very large
fileset clone can take several hours.
e. Repeat the following command to verify that the unclean file system is now
clean:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
flags 0 mod 0 clean clean_value
16. If you have created any Veritas Volume Replicator (VVR) replicated volume
groups (RVGs) on your system, perform the following steps:
a. Stop all applications that are involved in replication. For example, if a data
volume contains a file system, unmount it.
b. Use the vxrvg stop command to stop each RVG individually:
# vxrvg -g diskgroup stop rvg_name
c. On the Primary node, use the vxrlink status command to verify that all
RLINKs are up-to-date:
# vxrlink -g diskgroup status rlink_name
Caution To avoid data corruption, do not proceed until all RLINKs are
up-to-date.
17. Stop activity to all VxVM volumes.
For example, stop any applications such as databases that access the volumes, and
unmount any file systems that have been created on the volumes.
18. On each node, stop all VxVM volumes by entering the following command for
each disk group:
# vxvol -g diskgroup stopall
To verify that no volumes remain open, use the following command:
# vxprint -Aht -e v_open
19. Check if the VEA service is running:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is running, stop it:
# /opt/VRTS/bin/vxsvcctrl stop
20. On each node, run the following commands to upgrade to 4.1 MP4 RP2.
See "Supported Platforms" on page 2.
See "Packages for Storage Foundation" on page 7.
# rpm -Uvh VRTSvxvm-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxvm-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfs-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfs-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSllt-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSgab-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfen-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvcsdr-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSglm-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSfspro-4.1.40.20-MP4RP2_dist.arch.rpm
where dist is RHEL4, RHEL5, SLES9, or SLES10 and arch is i586, i686, ia64, or
x86_64 as appropriate.
21. Shut down and reboot each of the upgraded nodes. After the nodes come back up,
application failover capability is available for that group.
22. If you need to re-encapsulate and mirror the root disk on each of the nodes, follow
the procedures in the "Administering Disks" chapter of the Veritas Volume
Manager Administrator's Guide.
23. If necessary, reinstate any missing mount points in the /etc/fstab file on each
node.
24. Run the following commands to start the SFCFS processes:
# /etc/init.d/llt start
# /etc/init.d/gab start
# /etc/init.d/vxfen start
# /etc/init.d/vcs start
25. Make the VCS configuration writable again from any node in the upgraded
group:
# haconf -makerw
26. Enter the following command on each node in the upgraded group to unfreeze
HA service group operations:
# hasys -unfreeze -persistent nodename
27. Make the configuration read-only:
# haconf -dump -makero
28. Switch the service group to the original node:
# hagrp -switch service_group -to nodename
29. Bring the CVM service group online on each node in the upgraded group:
# hagrp -online cvm -sys nodename
30. Restart all the volumes by entering the following command for each disk group:
# vxvol -g diskgroup startall
31. If you stopped any RVGs in step 16, restart each RVG:
# vxrvg -g diskgroup start rvg_name
32. Remount all VxFS file systems on all nodes:
# mount /filesystem
33. Remount all Storage Checkpoints on all nodes:
# mount /checkpoint_name
34. Check if the VEA service was restarted:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is not running, restart it:
# /opt/VRTS/bin/vxsvcctrl start
35. Repeat step 8 through step 34 for the second group of nodes.
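After both groups of nodes have been upgraded, you can optionally verify the SFCFS stack. This is a minimal sketch; adjust the package list to the rpms you installed in step 20:
# rpm -qa | grep "^VRTS" | sort
# vxdctl -c mode
# df -T | grep vxfs
# hastatus -summary
The vxdctl -c mode output should show that the node has joined the cluster (master or slave), and the cluster file systems remounted in step 32 should be listed.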

Installing RP2 on an SFRAC cluster
To install RP2 on a cluster:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Switch the service group to a node that is running.
# hagrp -switch service_group -to nodename
4. From any node in the cluster, make the VCS configuration writable:
# haconf -makerw
5. Enter the following command to freeze HA service group operations on each
node:
# hasys -freeze -persistent nodename
6. Make the configuration read-only:
# haconf -dump -makero
7. Select the group of nodes that are to be upgraded first, and follow step 8 through
step 33 for these nodes.
8. Stop all Oracle resources and databases, if any, on all nodes.
If you use Oracle 10g, you must also stop CRS on all nodes:
a. If CRS is controlled by VCS:
As superuser, enter the following command on each node in the cluster.
# hares -offline cssd-resource -sys nodename
b. If CRS is not controlled by VCS:
Use the following command on each node to stop CRS.
# /etc/init.d/init.crs stop
After stopping CRS, if any gsd-related processes remain running, kill them
manually.
9. Stop VCS by entering the following command on each node in the group being
upgraded:
# hastop -local
10. Stop the VCS command server:
# killall CmdServer
11. Stop VCSMM and LMX if they are running.
# /etc/init.d/vcsmm stop
# /etc/init.d/lmx stop
12. Unregister CFS from GAB.
# fsclustadm cfsdeinit
13. Stop cluster fencing, GAB, and LLT.
# /etc/init.d/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
14. Check if each node’s root disk is under VxVM control by running this command.
# df -v /
The root disk is under VxVM control if /dev/vx/dsk/rootvol is listed as being
mounted as the root (/) file system. If so, unmirror and unencapsulate the root
disk as described in the following steps:
a. Use the vxplex command to remove all the plexes of the volumes rootvol,
swapvol, usr, var, opt and home that are on disks other than the root disk.
For example, the following command removes the plexes mirrootvol-01,
and mirswapvol-01 that are configured on a disk other than the root disk:
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
Note Do not remove the plexes on the root disk that correspond to the original
disk partitions.
b. Enter the following command to convert all the encapsulated volumes in the
root disk back to being accessible directly through disk partitions instead of
through volume devices. There must be at least one other disk in the rootdg
disk group in addition to the root disk for vxunroot to succeed.
# /etc/vx/bin/vxunroot
Following the removal of encapsulation, the system is rebooted from the
unencapsulated root disk.
15. If required, you can upgrade the nodes at this stage, and patch them to a
supported kernel version.
Note If you are upgrading an SFRAC cluster, you must upgrade the nodes at this
stage to one of the operating system versions that this RP release supports.
See "Supported Platforms" on page 2.
16. On each node, use the following command to check if any VxFS file systems are
mounted:
# df -T | grep vxfs
a. If any VxFS file systems are present, on each node in the cluster unmount all
the VxFS file systems:
# umount /filesystem
b. On each node, verify that all file systems have been cleanly unmounted:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
flags 0 mod 0 clean clean_value
A clean_value value of 0x5a indicates the file system is clean, 0x3c indicates
the file system is dirty, and 0x69 indicates the file system is dusty. A dusty
file system has pending extended operations.
c. If a file system is not clean, enter the following commands for that file system:
# fsck -t vxfs filesystem
# mount -t vxfs filesystem mountpoint
# umount mountpoint
This should complete any extended operations that were outstanding on the
file system and unmount the file system cleanly.
There may be a pending large fileset clone removal extended operation if the
umount command fails with the following error:
file system device busy
You know for certain that an extended operation is pending if the following
message is generated on the console:
Storage Checkpoint asynchronous operation on file_system
file system still in progress.
d. If an extended operation is pending, you must leave the file system mounted
for a longer time to allow the operation to complete. Removing a very large
fileset clone can take several hours.
e. Repeat the following command to verify that the unclean file system is now
clean:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
flags 0 mod 0 clean clean_value
17. Stop activity to all VxVM volumes.
For example, stop any applications such as databases that access the volumes, and
unmount any file systems that have been created on the volumes.
18. On each node, stop all VxVM volumes by entering the following command for
each disk group:
# vxvol -g diskgroup stopall
To verify that no volumes remain open, use the following command:
# vxprint -Aht -e v_open
19. Check if the VEA service is running:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is running, stop it:
# /opt/VRTS/bin/vxsvcctrl stop
20. On each node, run the following commands to upgrade to 4.1 MP4 RP2.
# rpm -Uvh VRTSvxvm-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxvm-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfs-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfs-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSllt-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSgab-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvxfen-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSvcsdr-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSglm-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSgms-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSodm-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSodm-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSdbac-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -Uvh VRTSfspro-4.1.40.20-MP4RP2_dist.arch.rpm
where dist is RHEL4 or SLES9 and arch is i686 or x86_64 as appropriate.
See "Supported Platforms" on page 2.
See "Packages Included in this Rolling Patch" on page 6.
21. Shut down and reboot each of the upgraded nodes. After the nodes come back up,
application failover capability is available for that group.
22. If you need to re-encapsulate and mirror the root disk on each of the nodes, follow
the procedures in the "Administering Disks" chapter of the Veritas Volume
Manager Administrator's Guide.
23. If necessary, reinstate any missing mount points in the /etc/fstab file on each
node.
24. Run the following commands to start the SFRAC processes:
# /etc/init.d/llt start
# /etc/init.d/gab start
# /etc/init.d/vxfen start
# /etc/init.d/vcsmm start
# /etc/init.d/lmx start
# /etc/init.d/vcs start
25. Make the VCS configuration writable again from any node in the upgraded
group:
# haconf -makerw
26. Enter the following command on each node in the upgraded group to unfreeze
HA service group operations:
# hasys -unfreeze -persistent nodename
27. Make the configuration read-only:
# haconf -dump -makero
28. Switch the service group to the original node:
# hagrp -switch service_group -to nodename
29. Bring the CVM service group online on each node in the upgraded group:
# hagrp -online cvm -sys nodename
30. Restart all the volumes by entering the following command for each disk group:
# vxvol -g diskgroup startall
31. If CRS is not controlled by VCS, use the following command on each node to start
CRS.
# /etc/init.d/init.crs start
32. Remount all VxFS file systems on all nodes:
# mount /filesystem
33. Check if the VEA service was restarted:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is not running, restart it:
# /opt/VRTS/bin/vxsvcctrl start
34. Repeat step 8 through step 33 for the second group of nodes.
35. Relink Oracle's CRS and database libraries for SFRAC:
a. Run:
# /opt/VRTS/install/installsfrac -configure
b. Choose the correct relinking option for your version of Oracle:
- Relink SFRAC for Oracle 9i (Only for RHEL4)
- Relink SFRAC for Oracle 10g Release 1
- Relink SFRAC for Oracle 10g Release 2
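After the relinking completes, you can optionally verify the SFRAC stack on each node. This is a minimal sketch; cssd-resource is the example resource name used earlier in this procedure, and the last check applies only if CRS is controlled by VCS:
# rpm -q VRTSdbac VRTSglm VRTSgms VRTSodm-common
# gabconfig -a
# hares -state cssd-resource
The rpm query should report the 4.1.40.20-MP4RP2 versions, and the cssd resource should be online on each node.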

Installing RP2 on a Standalone System
You can use this procedure to install RP2 on a standalone system that runs SF.
To install RP2 on a standalone system:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Check if the root disk is under VxVM control by running this command:
# df -v /
The root disk is under VxVM control if /dev/vx/dsk/rootvol is listed as being
mounted as the root (/) file system. If so, unmirror and unencapsulate the root
disk as described in the following steps:
a. Use the vxplex command to remove all the plexes of the volumes rootvol,
swapvol, usr, var, opt and home that are on disks other than the root disk.
For example, the following command removes the plexes mirrootvol-01,
and mirswapvol-01 that are configured on a disk other than the root disk:
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
Note Do not remove the plexes on the root disk that correspond to the original
disk partitions.
b. Enter the following command to convert all the encapsulated volumes in the
root disk back to being accessible directly through disk partitions instead of
through volume devices. There must be at least one other disk in the rootdg
disk group in addition to the root disk for vxunroot to succeed.
# /etc/vx/bin/vxunroot
Following the removal of encapsulation, the system is rebooted from the
unencapsulated root disk.
4. If required, you can upgrade the system at this stage, and patch it to a supported
kernel version.
5. Use the following command to check if any VxFS file systems or Storage
Checkpoints are mounted:
# df -T | grep vxfs
6. Unmount all Storage Checkpoints and file systems:
# umount /checkpoint_name
# umount /filesystem
7. Verify that all file systems have been cleanly unmounted:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
flags 0 mod 0 clean clean_value
A clean_value value of 0x5a indicates the file system is clean, 0x3c indicates the
file system is dirty, and 0x69 indicates the file system is dusty. A dusty file system
has pending extended operations.
a. If a file system is not clean, enter the following commands for that file system:
# fsck -t vxfs filesystem
# mount -t vxfs filesystem mountpoint
# umount mountpoint
This should complete any extended operations that were outstanding on the
file system and unmount the file system cleanly.
There may be a pending large fileset clone removal extended operation if the
umount command fails with the following error:
file system device busy
You know for certain that an extended operation is pending if the following
message is generated on the console:
Storage Checkpoint asynchronous operation on file_system
file system still in progress.
b. If an extended operation is pending, you must leave the file system mounted
for a longer time to allow the operation to complete. Removing a very large
fileset clone can take several hours.
c. Repeat step 7 to verify that the unclean file system is now clean.
8. If you have created any Veritas Volume Replicator (VVR) replicated volume
groups (RVGs) on your system, perform the following steps:
a. Stop all applications that are involved in replication. For example, if a data
volume contains a file system, unmount it.
b. Use the vxrvg stop command to stop each RVG individually:
# vxrvg -g diskgroup stop rvg_name
c. On the Primary node, use the vxrlink status command to verify that all
RLINKs are up-to-date:
# vxrlink -g diskgroup status rlink_name
Caution To avoid data corruption, do not proceed until all RLINKs are
up-to-date.
9. Stop activity to all VxVM volumes. For example, stop any applications such as
databases that access the volumes, and unmount any file systems that have been
created on the volumes.
10. Stop all VxVM volumes by entering the following command for each disk group:
# vxvol -g diskgroup stopall
To verify that no volumes remain open, use the following command:
# vxprint -Aht -e v_open
11. Check if the VEA service is running:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is running, stop it:
# /opt/VRTS/bin/vxsvcctrl stop
12. Use the following commands to upgrade to 4.1 MP4 RP2.
# rpm -U VRTSvxvm-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -U VRTSvxvm-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -U VRTSvxfs-common-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -U VRTSvxfs-platform-4.1.40.20-MP4RP2_dist.arch.rpm
# rpm -U VRTSfspro-4.1.40.20-MP4RP2_dist.arch.rpm
where dist is RHEL4, RHEL5, SLES9 or SLES10, and arch is i586, i686, ia64 or
x86_64 as appropriate. (See "Packages Included in this Rolling Patch" on
page 6.)
13. Shut down and reboot the system.
14. If necessary, reinstate any missing mount points in the /etc/fstab file.
15. Restart all the volumes by entering the following command for each disk group:
# vxvol -g diskgroup startall
16. If you stopped any RVGs in step 8, restart each RVG:
# vxrvg -g diskgroup start rvg_name
17. Remount all VxFS file systems and Storage Checkpoints:
# mount /filesystem
# mount /checkpoint_name
18. Check if the VEA service was restarted:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is not running, restart it:
# /opt/VRTS/bin/vxsvcctrl start
19. If you need to re-encapsulate and mirror the root disk, follow the procedures in
the "Administering Disks" chapter of the Veritas Volume Manager Administrator's
Guide.
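To confirm the upgrade on the standalone system, you can optionally run the following checks. This is a minimal sketch based on the packages installed in step 12:
# rpm -q VRTSvxvm-common VRTSvxfs-common VRTSfspro
# vxdctl mode
# df -T | grep vxfs
The rpm query should report the 4.1.40.20-MP4RP2 versions, vxdctl mode should report that the volume configuration daemon is enabled, and the file systems remounted in step 17 should be listed.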
Removing the RP2 packages
Rolling back the RP2 packages to the 4.1 MP4 versions is not supported. Instead,
follow the steps in the following sections to remove all the installed Veritas packages,
and then perform a complete reinstallation of the 4.1 MP4 software.
- Removing the RP2 packages for VCS
- Removing the RP2 packages for SF or SFCFS
- Removing the RP2 packages for SFRAC

Removing the RP2 packages for VCS
Perform the following procedure to uninstall the RP2 packages from a VCS cluster.
To uninstall the Veritas software:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Stop VCS along with all the resources. Then, stop the remaining resources
manually:
# /etc/init.d/vcs stop
4. Stop the VCS command server:
# killall CmdServer
5. Uninstall VCS:
# cd /opt/VRTS/install
# ./uninstallvcs [-usersh]
6. If vxfen was originally configured in enabled mode, type the following on all the
nodes:
# rm /etc/vxfenmode
7. Reboot all nodes.
After uninstalling the packages, refer to the Veritas Cluster Server Release Notes for 4.1
MP4 to reinstall the 4.1 MP4 software.
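Before reinstalling the 4.1 MP4 software, you can optionally confirm which Veritas packages remain on the nodes. A minimal check:
# rpm -qa | grep "^VRTS"
Any packages still listed were not removed by the uninstaller.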
Removing the RP2 packages for SF or SFCFS
Perform the following procedure to uninstall the RP2 packages from SF or SFCFS
systems.

To uninstall the Veritas software:
1. Log in as superuser.
2. Verify that /opt/VRTS/bin is in your PATH so you can execute all product
commands.
3. Stop VCS along with all the resources. Then, stop the remaining resources
manually:
# /etc/init.d/vcs stop
4. Stop the VCS command server:
# killall CmdServer
5. Uninstall VCS:
# cd /opt/VRTS/install
# ./uninstallvcs [-usersh]
6. If cluster fencing was originally configured in enabled mode, type the following
on all the nodes:
# rm /etc/vxfenmode
7. Check if the root disk is under VxVM control by running this command:
# df -v /
The root disk is under VxVM control if /dev/vx/dsk/rootvol is listed as being
mounted as the root (/) file system. If so, unmirror and unencapsulate the root
disk as described in the following steps:
a. Use the vxplex command to remove all the plexes of the volumes rootvol,
swapvol, usr, var, opt and home that are on disks other than the root disk.
For example, the following command removes the plexes mirrootvol-01,
and mirswapvol-01 that are configured on a disk other than the root disk:
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
Note Do not remove the plexes on the root disk that correspond to the original
disk partitions.
b. Enter the following command to convert all the encapsulated volumes in the
root disk back to being accessible directly through disk partitions instead of
through volume devices. There must be at least one other disk in the rootdg
disk group in addition to the root disk for vxunroot to succeed.
# /etc/vx/bin/vxunroot
Following the removal of encapsulation, the system is rebooted from the
unencapsulated root disk.
8. Use the following command to check if any VxFS file systems or Storage
Checkpoints are mounted:
# df -T | grep vxfs
9. Unmount all Storage Checkpoints and file systems:
# umount /checkpoint_name
# umount /filesystem
10. If you have created any Veritas Volume Replicator (VVR) replicated volume
groups (RVGs) on your system, perform the following steps:
a. Stop all applications that are involved in replication. For example, if a data
volume contains a file system, unmount it.
b. Use the vxrvg stop command to stop each RVG individually:
# vxrvg -g diskgroup stop rvg_name
c. On the Primary node, use the vxrlink status command to verify that all
RLINKs are up-to-date:
# vxrlink -g diskgroup status rlink_name
Caution To avoid data corruption, do not proceed until all RLINKs are
up-to-date.
11. Stop activity to all VxVM volumes. For example, stop any applications such as
databases that access the volumes, and unmount any file systems that have been
created on the volumes.
12. Stop all VxVM volumes by entering the following command for each disk group:
# vxvol -g diskgroup stopall
To verify that no volumes remain open, use the following command:
# vxprint -Aht -e v_open
13. Check if the VEA service is running:
# /opt/VRTS/bin/vxsvcctrl status
If the VEA service is running, stop it:
# /opt/VRTS/bin/vxsvcctrl stop
14. To shut down and remove the installed Veritas packages, use the appropriate
command in the /opt/VRTS/install directory. For example, to uninstall the
Storage Foundation or Veritas Storage Foundation for DB2 packages, use the
following commands:
# cd /opt/VRTS/install
# ./uninstallsf [-usersh]
You can use this command to remove the packages from one or more systems.
The -usersh option is required if you are using the remote shell (RSH) rather
than the secure shell (SSH) to uninstall the software simultaneously on several
systems.
Note Provided that the remote shell (RSH) or secure shell (SSH) has been
configured correctly, this command can be run on a single node of the
cluster to uninstall the software from all the cluster nodes.
After uninstalling the Veritas software, refer to the appropriate product's 4.1 MP4
Release Notes document to reinstall the 4.1 MP4 software.
Removing the RP2 packages for SFRAC
Perform the following procedure to uninstall the RP2 packages from SFRAC systems.

To uninstall the Veritas software:
1. Stop Oracle and CRS on each cluster node.
- If CRS is controlled by VCS, log in as superuser on any system in the cluster
and enter the following command for each node in the cluster:
# /opt/VRTSvcs/bin/hares -offline cssd-resource -sys galaxy
where galaxy is the name of the cluster node.
- If CRS is not controlled by VCS, use the following command on each node to
stop CRS:
# /etc/init.d/init.crs stop
2. Unencapsulate the root disk if necessary. Check whether it is under VxVM control:
# df -v /
The root disk is under VxVM control if /dev/vx/dsk/rootvol is listed as being
mounted as the root (/) file system.
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
# /etc/vx/bin/vxunroot
3. Unmount all VxFS file systems and all other file systems on VxVM volumes.
4. Stop all volumes for each disk group.
# vxvol -g diskgroup stopall
5. Stop VCS along with all the resources. Then stop remaining resources manually.
# hastop -all
6. Back up current configuration files on each cluster node. Note that some of the
files may not exist.
# mkdir -p /var/sfrac41mp4-config-save/etc/vx/vras
# mkdir -p /var/sfrac41mp4-config-save/etc/VRTSvcs/conf/config
# cp -p /etc/llttab /etc/llthosts /etc/gabtab /etc/vxfendg \
  /etc/vxfenmode /var/sfrac41mp4-config-save/etc/
# cp -p /etc/VRTSvcs/conf/config/main.cf \
  /var/sfrac41mp4-config-save/etc/VRTSvcs/conf/config/
# cp -p /etc/vx/vxddl.exclude /etc/vx/darecs /etc/vx/disk.info \
  /etc/vx/jbod.info /etc/vx/.aascsi3 /etc/vx/.apscsi3 /etc/vx/volboot \
  /etc/vx/array.info /etc/vx/ddl.support /etc/vx/disks.exclude \
  /etc/vx/cntrls.exclude /etc/vx/enclr.exclude /etc/vx/.newnames \
  /etc/vx/guid.state /etc/vx/vxvm_tunables /etc/vx/vxdmp_tunables \
  /etc/vx/vvrports /var/sfrac41mp4-config-save/etc/vx
# cp -p /etc/vx/vras/.rdg /etc/vx/vras/vras_env \
  /var/sfrac41mp4-config-save/etc/vx/vras/
7. Uninstall SFRAC:
# cd /opt/VRTS/install
# ./uninstallsfrac galaxy nebula
where galaxy and nebula are the names of the cluster nodes.
8. Uninstall all the remaining infrastructure VRTS rpms on each cluster node:
# ./uninstallinfr galaxy nebula
After uninstalling the packages, refer to the Storage Foundation for Oracle RAC
Release Notes for 4.1 MP4 to reinstall the 4.1 MP4 software. After reinstalling the
4.1 MP4 software, restore the configuration files from the backup created in step 6,
as shown in the example below.
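The following is a minimal restore sketch based on the backup directory created in step 6; restore only the files that actually exist in your backup, and apply the same pattern to the /etc/vx files you saved:
# cp -p /var/sfrac41mp4-config-save/etc/llttab /etc/llttab
# cp -p /var/sfrac41mp4-config-save/etc/llthosts /etc/llthosts
# cp -p /var/sfrac41mp4-config-save/etc/gabtab /etc/gabtab
# cp -p /var/sfrac41mp4-config-save/etc/vxfendg /etc/vxfendg
# cp -p /var/sfrac41mp4-config-save/etc/vxfenmode /etc/vxfenmode
# cp -p /var/sfrac41mp4-config-save/etc/VRTSvcs/conf/config/main.cf \
  /etc/VRTSvcs/conf/config/main.cf
Repeat these commands on each cluster node.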