
Before you configure coordinator disks

I/O fencing requires coordinator disks to be configured in a disk group that each cluster system can access. The use of coordinator disks enables the vxfen driver to resolve potential split-brain conditions and prevent data corruption. A coordinator disk is not used for data storage, so it can be configured as the smallest LUN on a disk array to avoid wasting space.

Coordinator disks must meet the following requirements:

  * All cluster nodes must be able to access the disks.
  * The disks must support SCSI-3 persistent reservations.
  * The coordinator disk group must contain an odd number of disks, with a minimum of three.
  * Because coordinator disks store no data, use the smallest available disks or LUNs.
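Because each node must see the same physical disks, it is worth confirming that a candidate disk reports the same serial number on every node before you use it as a coordinator disk. The following check is only a sketch: it assumes the vxfenadm utility's -i inquiry option is available in your release and uses /dev/rdsk/c1t1d0s2 as a placeholder device path. Run the command on each node and compare the serial numbers reported:

    # vxfenadm -i /dev/rdsk/c1t1d0s2

If the serial numbers differ across nodes, the nodes are not addressing the same physical LUN, and the disk should not be used as a coordinator disk.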

Setting up the disk group for coordinator disks

If you have already added and initialized disks you intend to use as coordinator disks, you can begin the following procedure at step 4.

 To set up the disk group for coordinator disks

  1. Physically add the three disks you intend to use for coordinator disks. All cluster nodes should physically share them. Veritas recommends that you use the smallest size disks or LUNs, so that space for data is not wasted.
  2. If necessary, use the vxdisk scandisks command to scan the disk drives and their attributes. This command updates the VxVM device list and reconfigures DMP with the new devices. For example:

    # vxdisk scandisks

  3. Use the vxdisksetup command to initialize a disk as a VxVM disk. The example command that follows specifies the CDS format:

    # vxdisksetup -i vxvm_device_name format=cdsdisk

    For example:

    # vxdisksetup -i /dev/rdsk/c2t0d2s2 format=cdsdisk

    Repeat this command for each disk you intend to use as a coordinator disk.

  4. From one node, create a disk group named vxfencoorddg. This group must contain an odd number of disks or LUNs and a minimum of three disks. Symantec recommends that you use only three coordinator disks, and that you use the smallest size disks or LUNs to conserve disk space.

    For example, assume the disks have the device names c1t1d0, c2t1d0, and c3t1d0.

  5. On any node, create the disk group by specifying the device name of one of the disks.

    # vxdg -o coordinator=on init vxfencoorddg c1t1d0

  6. Add the other two disks to the disk group.

    # vxdg -g vxfencoorddg adddisk c2t1d0

    # vxdg -g vxfencoorddg adddisk c3t1d0
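At this point the vxfencoorddg disk group contains all three coordinator disks. As an optional check (not part of the documented procedure), you can list the disk group and confirm that c1t1d0, c2t1d0, and c3t1d0 are all associated with it:

    # vxdg list vxfencoorddg

    # vxdisk -o alldgs list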

See the Veritas Volume Manager Administrator's Guide.

Requirements for testing the coordinator disk group

Review the requirements for testing the coordinator disk group.


Run the vxfentsthdw utility

Review the guidelines on testing support for SCSI-3 before you run the utility.

Testing the coordinator disk group

After you set up the disk group for coordinator disks, test the coordinator disk group.

 To test the coordinator disk group

  1. To start the utility from one node, type the following command:

    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw

    Make sure system-to-system communication is functioning properly before performing this step.

    See the vxfentsthdw(1M) man page.

  2. After you review the overview and the warning about overwriting data on the disks, confirm that you want to continue and enter the node names.
  3. Enter the name of the disk you are checking.

    For example, /dev/rdsk/c4t8d0s2.
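Depending on the release, the vxfentsthdw utility can also test all of the disks in a coordinator disk group in a single run. The following invocation is a sketch that assumes the utility supports a -c option taking the disk group name; check the vxfentsthdw(1M) man page for your version before relying on it:

    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg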

Creating the vxfendg file

After you test the coordinator disk group, configure it for use.

 To create the vxfendg file

  1. To deport the disk group, type the following command:

    # vxdg deport vxfencoorddg

  2. Import the disk group with the -t option to avoid automatically importing it when the nodes restart:

    # vxdg -t import vxfencoorddg

  3. Deport the disk group. This operation prevents the coordinator disks from serving other purposes:

    # vxdg deport vxfencoorddg

  4. On all nodes, type:

    # echo "vxfencoorddg" > /etc/vxfendg

    Do not use spaces between the quotes in the "vxfencoorddg" text.

    This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group. Based on the contents of the /etc/vxfendg file, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts. The rc script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. /etc/vxfentab is a generated file; do not modify this file.
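For reference, the generated /etc/vxfentab file simply lists one coordinator disk path per line. The exact paths depend on the disk policy in use, so the following contents are only a hypothetical example based on the raw device paths of the example disks used earlier:

    /dev/rdsk/c1t1d0s2

    /dev/rdsk/c2t1d0s2

    /dev/rdsk/c3t1d0s2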

Enabling fencing in the VCS configuration

After I/O fencing has been configured on all cluster nodes, copy the sample vxfenmode file over the /etc/vxfenmode file:

# cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
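You can confirm the result by displaying the file on each node:

    # cat /etc/vxfenmode

The output should show the fencing mode set to scsi3 with the dmp disk policy; in typical releases the relevant entries are vxfen_mode=scsi3 and scsi3_disk_policy=dmp, although the exact key names may vary, so verify them against the template shipped with your product.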

Enabling fencing involves editing the UseFence attribute in the VCS configuration file (main.cf), verifying the configuration file syntax, copying the main.cf to other nodes, and rebooting all nodes to start the fencing driver and VCS with fencing enabled.

 To enable I/O fencing

  1. Save the existing VCS configuration file, /etc/VRTSvcs/conf/config/main.cf:

    # haconf -dump -makero

  2. Stop VCS on all nodes:

    # hastop -all

  3. Make a backup copy of the main.cf file:

    # cd /etc/VRTSvcs/conf/config

    # cp main.cf main.orig

  4. On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning it a value of SCSI3:

    cluster rac_cluster1 (

    UserNames = { admin = "cDRpdxPmHpzS." }

    Administrators = { admin }

    HacliUserLevel = COMMANDROOT

    CounterInterval = 5

    UseFence = SCSI3

    )

  5. Save and close the file.
  6. Verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:

    # hacf -verify /etc/VRTSvcs/conf/config

  7. Using rcp or another utility, copy the VCS configuration file from a node (for example, galaxy) to the remaining cluster nodes. On each remaining node, type:

    # rcp galaxy:/etc/VRTSvcs/conf/config/main.cf \
    /etc/VRTSvcs/conf/config

  8. On all nodes, edit /etc/vxfenmode and change the fencing mode from disabled to scsi3 so that the mode entry reads:

    vxfen_mode=scsi3

  9. Stop VCS on all nodes:

    # hastop -all

  10. With the configuration file in place on each system, shut down and restart each node. For example, type:

    # shutdown -y -i6 -g0

    To ensure that I/O fencing is properly shut down, use the shutdown command instead of the reboot command.
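After the nodes restart, you can verify that the fencing driver started in SCSI-3 mode. The following commands are a sketch of a typical post-reboot check; the exact output format varies by release:

    # vxfenadm -d

    # gabconfig -a

The vxfenadm -d output should report the fencing mode as SCSI3 and list every cluster node as a member, and the gabconfig -a output should show port b membership, which indicates that the fencing driver is configured.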