Setting up I/O fencing

The shared storage for SF Oracle RAC must support SCSI-3 persistent reservations to enable I/O fencing. To review general guidelines on the process of checking disks in the SF Oracle RAC configuration menu, see Viewing guidelines for checking SCSI-3 support.

SF Oracle RAC involves two types of shared storage: data disks to store shared data, and coordinator disks, which are small LUNs (typically three per cluster), to control access to data disks by the nodes. Both data disks and the disks used as coordinator disks must be SCSI-3 compliant.

Setting up I/O fencing involves:

  1. Adding data disks and coordinator disks, and verifying that the systems see the same disks
  2. Testing data disks and coordinator disks for SCSI-3 compliance
  3. Configuring coordinator disks
  4. Enabling I/O fencing in the VCS configuration

If you are installing SF Oracle RAC and want to check the disks for SCSI-3 compliance before you configure the SF Oracle RAC components, use the procedures that follow.

If you have already tested that some or all of the disks you have added are SCSI-3 compliant and have configured the SF Oracle RAC components, go to the procedure Configuring coordinator disks.
Verifying the nodes see the same disk

Confirming that a disk or LUN supports SCSI-3 persistent reservations requires that two nodes have simultaneous access to the same disk.

 To verify node access to the same disk

  1. Use the devfsadm command to make the newly attached disks visible to the operating system:

    # devfsadm

  2. Use the vxdisk scandisks command to scan all disk drives and their attributes, update the VxVM device list, and reconfigure DMP with the new devices. For example, type:

    # vxdisk scandisks

    See the Veritas Volume Manager documentation for details on adding and configuring disks.

  3. Initialize the disks as VxVM disks, using either the interactive vxdiskadm utility or the vxdisksetup command; see the example below.
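
    For example, a minimal sketch using vxdisksetup and the illustrative device name that appears in the next step (on Solaris, vxdisksetup typically resides in /etc/vx/bin):

    # vxdisksetup -i c2t13d0
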
  4. To confirm whether a disk or LUN supports SCSI-3 persistent reservations, two nodes must have simultaneous access to the same disks. Because a shared disk is likely to have a different name on each node, check the serial number to verify the identity of the disk. Use the vxfenadm command with the -i option to verify the same serial number for the LUN is generated on all paths to the LUN.

    For example, an EMC array is accessible by the /dev/rdsk/c2t13d0s2 path on node A and the /dev/rdsk/c2t11d0s2 path on node B.

    From node A, type:

    # vxfenadm -i /dev/rdsk/c2t13d0s2

    Vendor id : EMC

    Product id : SYMMETRIX

    Revision : 5567

    Serial Number : 42031000a

    Expect the same serial number details to appear when you enter the equivalent command on node B using the /dev/rdsk/c2t11d0s2 path.

    On a disk from another manufacturer, Hitachi Data Systems, the output is different and may resemble:

    # vxfenadm -i /dev/rdsk/c2t0d2s2

    Vendor id : HITACHI

    Product id : OPEN-3 -SUN

    Revision : 0117

    Serial Number : 0401EB6F0002

    Refer to the vxfenadm(1M) manual page for more information.

Testing the disks using the vxfentsthdw script

The vxfentsthdw utility verifies that the shared storage arrays support SCSI-3 persistent reservations and I/O fencing. Make sure to also test the disks that will serve as coordinator disks (see Configuring coordinator disks). Keep in mind that the tests overwrite and destroy data on the disks unless you use the -r option. Review the guidelines on testing support for SCSI-3 before you proceed.
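
For example, to run a non-destructive check on a disk that already contains data, you can use the -r option mentioned above; the utility path shown here matches the one used later in this chapter for the -c option and may differ in your installation:

    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r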

 To run the vxfentsthdw utility

  1. Make sure system-to-system communication is functioning properly before performing this step.

    See Setting up inter-system communication.

  2. From one node, start the utility.
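
    For example, assuming the same installation path that appears later in this chapter for the -c option:

    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw
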
  3. After reviewing the overview and the warning about overwriting data on the disks, confirm that you want to continue, and enter the node names.

    ******** WARNING!!!!!!!! ********

    THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

    Do you still want to continue : [y/n] (default: n) y

    Enter the first node of the cluster: galaxy

    Enter the second node of the cluster: nebula

  4. Enter the name of the disk you are checking. On each node, the same disk may be known by a different name.

    Enter the disk name to be checked for SCSI-3 PGR on node galaxy in the format: /dev/rdsk/cxtxdxsx

    /dev/rdsk/c2t13d0s2

    Enter the disk name to be checked for SCSI-3 PGR on node nebula in the format: /dev/rdsk/cxtxdxsx

    Make sure it's the same disk as seen by nodes galaxy and nebula

    /dev/rdsk/c2t13d0s2

    Whether or not the disk names are identical, the names must refer to the same physical disk for the testing to be valid.

  5. After the check completes, make sure the vxfentsthdw utility reports that the disk is ready for I/O fencing on each node.
  6. Run the vxfentsthdw utility for each disk you intend to verify.

      Note   If you have checked disks before configuring SF Oracle RAC components, return to Configuring SF Oracle RAC Components to continue.



If disks cannot be successfully verified

If the vxfentsthdw utility cannot successfully verify that the storage devices can support SCSI-3 PR, you may need to remove keys that are written to the disk during the testing. For troubleshooting:

See Removing existing keys from disks.


  Note   When you use EMC storage, SF Oracle RAC I/O fencing does not support the use of gatekeeper devices as coordinator disks. Such administrative devices are intended for EMC use only.


Configuring coordinator disks

I/O fencing requires coordinator disks that are configured in a disk group and accessible to each node. These disks enable the vxfen driver to resolve potential split-brain conditions and prevent data corruption. For a description of I/O fencing and the role of coordinator disks:

See I/O fencing.

Because coordinator disks are not used to store data, configure them as the smallest possible LUN on a disk array to avoid wasting space. Symantec recommends using hardware-based mirroring for coordinator disks.

Review the coordinator disk requirements and make sure that you have already added and initialized the disks you intend to use as coordinator disks.

Configuring coordinator disks involves three phases: creating the coordinator disk group (vxfencoorddg), testing the disk group with the vxfentsthdw -c command, and creating the /etc/vxfendg file.


Coordinator attribute

SF Oracle RAC uses a "coordinator" attribute for disk groups. The vxfen driver uses this attribute to prevent the reassignment of coordinator disks to other disk groups. The procedure that follows includes the setting of this attribute.

Refer to the Veritas Volume Manager documentation for more information on the coordinator attribute.


Creating the coordinator disk group (vxfencoorddg)

From one node, create a disk group named vxfencoorddg. This group must contain an odd number of disks or LUNs and a minimum of three disks.

For example, assume the disks have the device names c1t1d0, c2t1d0, and c3t1d0.

  To create the coordinator disk group

  1. On one node, create the disk group by specifying the device name of one of the disks; the option coordinator=on sets the coordinator attribute:

    # vxdg -o coordinator=on init vxfencoorddg c1t1d0

  2. Add the other two disks to the disk group:

    # vxdg -g vxfencoorddg adddisk c2t1d0

    # vxdg -g vxfencoorddg adddisk c3t1d0

Refer to the Veritas Volume Manager documentation for details on creating disk groups.
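
To confirm which disks were added to the group, one option (not part of the documented procedure, but a standard Veritas Volume Manager command) is to list the disks along with the disk groups they belong to:

    # vxdisk -o alldgs list

The GROUP column in the output should show vxfencoorddg for the three disks.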


Testing the coordinator disk group with vxfentsthdw -c

Before testing the coordinator disk group (vxfencoorddg) with the vxfentsthdw utility, review the guidelines in Testing the disks using the vxfentsthdw script.

In the procedure, the vxfentsthdw utility tests the three disks one disk at a time from each node. From the galaxy node, the disks are:

/dev/rdsk/c1t1d0s2, /dev/rdsk/c2t1d0s2, and /dev/rdsk/c3t1d0s2

From the nebula node, the same disks are seen as:

/dev/rdsk/c4t1d0s2, /dev/rdsk/c5t1d0s2, and /dev/rdsk/c6t1d0s2

 To test the coordinator disk group

  1. Use the vxfentsthdw command with the -c option. For example, type:

    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg

  2. Enter the nodes you are using to test the coordinator disks.
  3. Review the output to ensure the tests are successful. After testing all disks in the disk group, the vxfencoorddg disk group is ready for use.

    If a disk in the coordinator disk group fails verification, remove the failed disk or LUN from the vxfencoorddg disk group, replace it with another disk, and retest the disk group.

If you need to replace a disk in an active coordinator disk group, refer to the topic in the troubleshooting section.

See Adding or removing coordinator disks.


Creating the vxfendg file

After setting up and testing the coordinator disk group, configure it for use.

 To create the vxfendg file

  1. Deport the disk group:

    # vxdg deport vxfencoorddg

  2. Import the disk group with the -t option to avoid automatically importing it when the nodes restart:

    # vxdg -t import vxfencoorddg

  3. Deport the disk group. This operation prevents the coordinator disks from serving other purposes:

    # vxdg deport vxfencoorddg

  4. On all nodes, type:

    # echo "vxfencoorddg" > /etc/vxfendg

    Do not use spaces between the quotes in the "vxfencoorddg" text.

    This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group. Based on the contents of the /etc/vxfendg file, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts. The rc script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. The /etc/vxfentab file is a generated file; do not modify it.


Reviewing a sample /etc/vxfentab file

On each node, the list of coordinator disks is in the /etc/vxfentab file. The same disks may appear using different names on each node.
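
For example, assuming a raw device configuration and the illustrative device names used earlier in this section, the /etc/vxfentab file on one node might resemble the following (the exact paths depend on your configuration):

    /dev/rdsk/c1t1d0s2
    /dev/rdsk/c2t1d0s2
    /dev/rdsk/c3t1d0s2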

If you must remove or add disks in an existing coordinator disk group, see the procedure in the troubleshooting chapter.

See Adding or removing coordinator disks.

Enabling fencing in the VCS configuration

Enabling fencing involves editing the UseFence attribute in the VCS configuration file (main.cf), verifying the configuration file syntax, copying the main.cf to other nodes, setting the contents of the vxfenmode file (DMP or raw), and restarting the fencing driver and VCS.

 To enable I/O fencing

  1. Save the existing VCS configuration file, /etc/VRTSvcs/conf/config/main.cf:

    # haconf -dump -makero

  2. Stop VCS on all nodes with the command:

    # hastop -all

  3. On each node, enter the following command:

    # /etc/init.d/vxfen stop

  4. Make a backup copy of the main.cf file:

    # cd /etc/VRTSvcs/conf/config

    # cp main.cf main.orig

  5. On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning its value of SCSI3:

    cluster rac_cluster1 (

    UserNames = { admin = "cDRpdxPmHpzS." }

    Administrators = { admin }

    HacliUserLevel = COMMANDROOT

    CounterInterval = 5

    UseFence = SCSI3

    )

  6. Save and close the file.
  7. Verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:

    # hacf -verify /etc/VRTSvcs/conf/config

  8. Using rcp or another utility, copy the VCS configuration file from a node (for example, galaxy) to the remaining cluster nodes. On each remaining node, type:

    # rcp galaxy:/etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config

  9. Depending on whether you want to use the DMP configuration or the raw device configuration, set up the /etc/vxfenmode file on each node, as shown in the sketch below.
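
    One common approach is to copy one of the vxfenmode template files that typically ship under /etc/vxfen.d; the file names below are typical defaults, so verify them in your installation. For the DMP configuration:

    # cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode

    For the raw device configuration:

    # cp /etc/vxfen.d/vxfenmode_scsi3_raw /etc/vxfenmode
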
  10. On each node, enter the following sequence of commands. This example assumes the DMP configuration:

    # echo vxfencoorddg > /etc/vxfendg

    # /etc/init.d/vxfen start

    # /opt/VRTS/bin/hastart
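
After VCS starts on all the nodes, you can optionally confirm that the fencing driver is running. For example, the following commands (not part of the procedure above, but standard VCS and GAB utilities) display the fencing mode and the GAB port membership, where port b corresponds to the fencing driver:

    # vxfenadm -d

    # gabconfig -a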