The shared storage for SF Oracle RAC must support SCSI-3 persistent reservations to enable I/O fencing. To review general guidelines on the process of checking disks in the SF Oracle RAC configuration menu, see Viewing guidelines for checking SCSI-3 support.
SF Oracle RAC involves two types of shared storage: data disks to store shared data, and coordinator disks, which are small LUNs (typically three per cluster), to control access to data disks by the nodes. Both data disks and the disks used as coordinator disks must be SCSI-3 compliant.
Setting up I/O fencing involves verifying that the shared storage supports SCSI-3 persistent reservations, configuring coordinator disks, and enabling fencing in the VCS configuration.
Verifying that a disk or LUN supports SCSI-3 persistent reservations requires that two nodes have simultaneous access to the same disk.
To verify node access to the same disk
Use the vxdisk scandisks command to scan all disk drives and their attributes, update the VxVM device list, and reconfigure DMP with the new devices. For example, type:
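# vxdisk scandisks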
See the Veritas Volume Manager documentation for details on adding and configuring disks.
Use the vxdmpadm getdmpnode command to determine the VxVM name by which a disk drive (or LUN) is known. In the following example, the device /dev/hdisk75 is identified by VxVM as EMC0_17:
# vxdmpadm getdmpnode nodename=hdisk75
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME
Use the vxdisk list vxvm_device_name command to see additional information about the disk, including the AIX device name. For example:
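# vxdisk list EMC0_17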
Use the vxfenadm command with the -i option to verify that the same serial number for the LUN is generated on all paths to the LUN. For example, an EMC array is accessible by the /dev/rhdisk75 path on node A and by the /dev/rhdisk76 path on node B.
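For example, from node A you might enter:
# vxfenadm -i /dev/rhdisk75
The output includes the serial number information for the LUN.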
Expect the same serial number details to appear when you enter the equivalent command on node B using the /dev/rhdisk76 path.
On a disk from another manufacturer, Hitachi Data Systems, the output is different and may resemble:
Before using the vxfentsthdw utility to test whether the shared storage arrays support SCSI-3 persistent reservations and I/O fencing, make sure to test the disks that serve as coordinator disks (see Configuring coordinator disks). Keep in mind that the tests overwrite and destroy data on the disks unless you use the -r option. Review these guidelines on testing support for SCSI-3:
To ensure that both nodes are connected to the same disk during the testing, use the vxfenadm -i diskpath command to verify the disk serial number.
The nodes must communicate with each other using ssh (default) or remsh. If you use remsh, launch the vxfentsthdw utility with the -n option.
The vxfentsthdw utility has additional options suitable for testing many disks. You can test disks without destroying data by using the -r option. The options for testing disk groups (-g) and disks listed in a file (-f) are described in detail:
To run the vxfentsthdw utility
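From one node, start the utility. For example, if the utility is installed in /opt/VRTSvcs/vxfen/bin (the installation path may vary by release):
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw
The utility displays a warning and prompts you for the disks to check: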
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
Do you still want to continue : [y/n] (default: n) y
Enter the disk name to be checked for SCSI-3 PGR on node galaxy in the format: /dev/rhdiskx
Enter the disk name to be checked for SCSI-3 PGR on node nebula in the format: /dev/rhdiskx
Make sure it's the same disk as seen by nodes galaxy and nebula
Whether or not the disk names are identical, they must refer to the same physical disk for the testing to be valid.
If the verification succeeds, the vxfentsthdw utility reports that the disk is ready for I/O fencing on each node. Run the vxfentsthdw utility for each disk you intend to verify.
Note
If you have checked disks before configuring SF Oracle RAC components, return to Configuring SF Oracle RAC Components to continue.
If disks cannot be successfully verified
If the vxfentsthdw utility cannot successfully verify that the storage devices support SCSI-3 PR, you may need to remove keys that were written to the disk during the testing. For troubleshooting:
See Removing existing keys from disks.
Note
SF Oracle RAC I/O fencing and EMC together do not support the use of gatekeeper devices as coordinator disks. Such administrative devices are intended for EMC use only.
I/O fencing requires coordinator disks that are configured in a disk group and accessible to each node. These disks enable the vxfen driver to resolve potential split-brain conditions and prevent data corruption. For a description of I/O fencing and the role of coordinator disks:
See I/O fencing.
Because coordinator disks are not used to store data, configure them as the smallest possible LUNs on a disk array to avoid wasting space. Symantec recommends using hardware-based mirroring for coordinator disks.
Review these requirements and make sure you already added and initialized disks for use as coordinator disks:
The disks must belong to the coordinator disk group (vxfencoorddg). Set the coordinator attribute when creating the disk group to prevent the disks in the group from being used for other purposes.
Configuring coordinator disks involves three phases:
Creating vxfencoorddg, the coordinator disk group
Testing the coordinator disk group with the vxfentsthdw -c utility
Creating the /etc/vxfendg file
SF Oracle RAC uses a "coordinator" attribute for disk groups. The vxfen driver uses this attribute to prevent the reassignment of coordinator disks to other disk groups. The procedure that follows includes the setting of this attribute.
Refer to the Veritas Volume Manager documentation for more information on the coordinator attribute.
Creating the coordinator disk group (vxfencoorddg)
From one node, create a disk group named vxfencoorddg. This group must contain an odd number of disks or LUNs and a minimum of three disks.
For example, assume the disks have the device names EMC0_12, EMC0_16, and EMC0_17.
To create the coordinator disk group
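For example, from one node you might create the disk group and set the coordinator attribute in one step, using the example device names above (confirm the exact syntax for your Veritas Volume Manager release):
# vxdg -o coordinator=on init vxfencoorddg EMC0_12 EMC0_16 EMC0_17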
Refer to the Veritas Volume Manager documentation for details on creating disk groups.
Testing the coordinator disk group with vxfentsthdw -c
Review these requirements before testing the coordinator disk group (vxfencoorddg) with the vxfentsthdw utility:
The vxfencoorddg disk group must be accessible from two nodes.
The two nodes must have rsh permission set such that each node has root user access to the other. Temporarily modify the /.rhosts file to enable cluster communications for the vxfentsthdw utility, placing a "+" character in the first line of the file. You can also limit the remote access to specific systems. Refer to the manual page for the /.rhosts file for more information.
To ensure that both nodes are connected to the same disks during the testing, use the vxfenadm -i diskpath command to verify the serial number. See Verifying the nodes see the same disk.
In the procedure, the vxfentsthdw utility tests the three disks one disk at a time from each node. From the galaxy node, the disks are:
From the nebula node, the same disks are seen as:
To test the coordinator disk group
Use the vxfentsthdw command with the -c option, specifying the name of the coordinator disk group. For example, type:
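# vxfentsthdw -c vxfencoorddg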
After the utility verifies the disks, the vxfencoorddg disk group is ready for use.
If a disk in the coordinator disk group fails verification, complete these operations:
Use the vxdiskadm utility to remove the failed disk or LUN from the vxfencoorddg disk group. Refer to the Veritas Volume Manager documentation.
If you need to replace a disk in an active coordinator disk group, refer to the topic in the troubleshooting section.
See Adding or removing coordinator disks.
After setting up and testing the coordinator disk group, configure it for use.
Import the disk group with the -t option to avoid automatically importing it when the nodes restart:
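# vxdg -t import vxfencoorddg
Then specify the name of the coordinator disk group in the /etc/vxfendg file: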
# echo "vxfencoorddg" > /etc/vxfendg
Do not use spaces between the quotes in the "vxfencoorddg" text.
This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group. Based on the contents of the /etc/vxfendg file, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts. The rc script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. The /etc/vxfentab file is generated; do not modify it.
Reviewing a sample /etc/vxfentab file
On each node, the list of coordinator disks is in the /etc/vxfentab file. The same disks may appear using different names on each node.
On node galaxy, the /etc/vxfentab file resembles:
On node nebula, the /etc/vxfentab file resembles:
If you must remove or add disks in an existing coordinator disk group, see the procedure in the troubleshooting chapter.
See Adding or removing coordinator disks.
Enabling fencing involves editing the UseFence attribute in the VCS configuration file (main.cf), verifying the configuration file syntax, copying the main.cf file to other nodes, setting the contents of the vxfenmode file (DMP or raw), and restarting the fencing driver and VCS.
The VCS configuration file is /etc/VRTSvcs/conf/config/main.cf.
To edit the main.cf file:
On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning it the value SCSI3:
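A minimal illustrative fragment might resemble the following; the cluster name and the other attributes shown are examples only, and only the UseFence line is added:
cluster rac_cluster1 (
        Administrators = { admin }
        CounterInterval = 5
        UseFence = SCSI3
        )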
Verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:
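# hacf -verify /etc/VRTSvcs/conf/config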
Using rcp or another utility, copy the VCS configuration file from a node (for example, galaxy) to the remaining cluster nodes. On each remaining node, type:
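# rcp galaxy:/etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config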