I/O fencing requires coordinator disks to be configured in a disk group that each cluster system can access. The use of coordinator disks enables the vxfen
driver to resolve potential split-brain conditions and prevent data corruption. A coordinator disk is not used for data storage, so it can be configured as the smallest LUN on a disk array to avoid wasting space.
Coordinator disks must meet the following requirements:
There must be at least three coordinator disks, and the total number of coordinator disks must be odd. This ensures that one subcluster can always gain a majority of the disks.
Each of the coordinator disks must use a physically separate disk or LUN.
Each of the coordinator disks should be on a different disk array, if possible.
Coordinator disks in a disk array should use hardware-based mirroring.
The coordinator disks must support SCSI-3 PR. Note that the use of the vxfentsthdw
utility to test for SCSI-3 PR support requires that disks be 1MB or greater. Smaller disks can be tested manually. Contact Veritas support (http://support.veritas.com) for the procedure.
If you have already added and initialized the disks you intend to use as coordinator disks, you can skip ahead in the following procedure to creating the disk group.
To set up the disk group for coordinator disks
Use the vxdisk scandisks command to scan the disk drives and their attributes. This command updates the VxVM device list and reconfigures DMP with the new devices. For example:
# vxdisk scandisks
Use the vxdisksetup command to initialize each disk as a VxVM disk. The example command that follows specifies the CDS format:
# vxdisksetup -i device_name format=cdsdisk
Create the disk group vxfencoorddg. This group must contain an odd number of disks or LUNs and a minimum of three disks. Symantec recommends that you use only three coordinator disks, and that you use the smallest size disks or LUNs to conserve disk space.
For example, assume the disks have the device names /dev/sdz, /dev/sdaa, and /dev/sdab.
See the Veritas Volume Manager Administrator's Guide.
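Assuming the example device names above and the standard vxdg syntax (the exact disk access names may differ on your system), the disk group can then be created from one node:

# vxdg init vxfencoorddg sdz sdaa sdab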
Running the vxfentsthdw utility
Review these guidelines on testing support for SCSI-3:
If you did not configure ssh, enable each node to have remote rsh access to the other nodes during installation and disk verification. On each node, placing a "+" character in the first line of the /.rhosts
file gives remote access to the system running the install program. You can limit the remote access to specific nodes. Refer to the manual page for the /.rhosts
file for more information. Remove the remote rsh access permissions after the installation and disk verification process.
If you use rsh rather than ssh for communication, run the utility with the vxfentsthdw -n command; the -n option makes the utility use rsh.
To confirm that both nodes are connected to the same disk during the testing, use the vxfenadm -i diskpath command to verify the disk serial number.
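For example, to check the serial number of one of the example disks (device path as above; the output varies by disk array and is omitted here):

# vxfenadm -i /dev/sdz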
The vxfentsthdw utility has additional options suitable for testing many disks. You can test disks without destroying data by using the -r option. Options are also available for testing all disks in a disk group (-g) and for testing the disks listed in a file (-f).
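For instance, assuming the options can be combined as shown, a non-destructive test of all disks in the coordinator disk group could be run as:

# vxfentsthdw -rg vxfencoorddg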
After you set up the coordinator disk group, test it.
To test the coordinator disk group
Make sure system-to-system communication is functioning properly before performing this step. Then run the utility:
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw
After setting up and testing the coordinator disk group, configure it for use. First deport the disk group:
# vxdg deport vxfencoorddg
Then import the disk group with the -t option to avoid automatically importing it when the nodes restart:
# vxdg -t import vxfencoorddg
Deport the disk group again:
# vxdg deport vxfencoorddg
On each node, enter:
# echo "vxfencoorddg" > /etc/vxfendg
Do not use spaces between the quotes in the "vxfencoorddg" text.
This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group. Based on the contents of the /etc/vxfendg file, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts. The rc script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. /etc/vxfentab is a generated file; do not modify it.
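The relationship between the two files can be sketched in shell. This is a simplified illustration only, not the actual rc script: temporary paths stand in for /etc/vxfendg and /etc/vxfentab, and the device paths are hypothetical (in reality the rc script obtains them from VxVM).

```shell
# Stand-ins for the real file paths (illustration only).
vxfendg=/tmp/example_vxfendg
vxfentab=/tmp/example_vxfentab

# The administrator records the coordinator disk group name once:
echo "vxfencoorddg" > "$vxfendg"

# At startup, the rc script reads the group name from the file ...
dgname=$(cat "$vxfendg")

# ... and generates vxfentab with the group's disk paths (hypothetical
# device paths here; the real script resolves them through VxVM).
printf '%s\n' /dev/sdz /dev/sdaa /dev/sdab > "$vxfentab"

echo "vxfen driver for $dgname uses: $(tr '\n' ' ' < "$vxfentab")"
```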