This section describes how to configure server-based I/O fencing for the VCS cluster. With server-based I/O fencing, a combination of CP servers and SCSI-3 compliant coordinator disks can act as coordination points for I/O fencing.
To configure the VCS cluster with server-based I/O fencing
Ensure that the CP server(s) are configured and reachable from the cluster. If coordinator disks are to be used as coordination points, ensure that they are SCSI-3 compliant.
Run the installvcs -fencing command to configure fencing.
/opt/VRTS/install/installvcs -fencing
The installer creates a vxfenmode file on each node. The file is located at /etc/vxfenmode.
The following procedure can be used as an example to configure server-based I/O fencing. In this procedure example, there is one CP server and two disks acting as the coordination points.
To configure CP client-based fencing using the installer
After installing and configuring VCS on the VCS cluster, issue the installvcs -fencing command to configure fencing.
After issuing the command, the installer displays Symantec copyright information and the location of log files for the configuration process.
Access and review these log files if there is any problem with the installation process. The following is an example of the command output:
Logs for installvcs are being created in /var/tmp/installvcs-LqwKwB.
Next, the installer displays the current cluster information for verification purposes. The following is an example of the command output:
Cluster information verification:
	Cluster Name: clus1
	Cluster ID Number: 4445
	Systems: galaxy nebula
The cluster name, systems, and ID number are all displayed.
You are then asked whether you want to configure I/O fencing for the cluster. Enter "y" for yes. The installer then checks rsh (or ssh) communication with the cluster nodes.
Next, you are prompted to select one of the following options for your fencing configuration:
Fencing configuration
	1) Configure CP client based fencing
	2) Configure disk based fencing
	3) Configure fencing in disabled mode
Select the fencing mechanism to be configured in this Application Cluster [1-3,q]
Enter the total number of coordination points, including both CP servers and disks. This number must be at least three.
Enter the total number of co-ordination points including both CP servers and disks: [b] (3)
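The minimum of three coordination points follows from how fencing arbitrates a network partition: each subcluster races to win control of the coordination points, and only a subcluster that wins a majority of them survives. The following toy sketch (illustrative Python, not part of the product) shows the majority rule and why an even count would allow a tie in which neither side survives:

```python
# Toy illustration of coordination-point arbitration (not Veritas code).
# During a partition, each subcluster races to eject the other side's
# registrations from every coordination point; a subcluster survives
# only if it wins strictly more than half of the points.

def survivors(race_winners):
    """race_winners maps each coordination point to the subcluster
    ('A' or 'B') that won the race on it. Returns the sorted list of
    subclusters holding a majority (at most one)."""
    majority = len(race_winners) // 2 + 1
    wins = {}
    for side in race_winners.values():
        wins[side] = wins.get(side, 0) + 1
    return sorted(s for s, w in wins.items() if w >= majority)

# With three coordination points (one CP server and two disks, as in
# this example), one side always holds a majority and the other panics:
print(survivors({"cps1": "A", "disk1": "A", "disk2": "B"}))  # ['A']

# With an even count, the races can split evenly so that neither side
# survives, which is why an odd number of at least three is required:
print(survivors({"cps1": "A", "disk1": "B"}))  # []
```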
Enter the total number of coordinator disks among the coordination points. In this example, there are two coordinator disks.
Enter the total number of disks among these: [b] (0) 2
Enter the virtual IP address or fully qualified host name for each of the coordination point servers.
Note: The installer assumes these values to be identical as viewed from all the client cluster nodes.
Enter the Virtual IP address/fully qualified host name for the Co-ordination Point Server #1: [b] 10.209.80.197
Enter the port on which the CP server listens, or accept the suggested default.
Enter the port in the range [49152, 65535] which the Co-ordination Point Server 10.209.80.197 would be listening on or simply accept the default port suggested: [b] (14250)
Enter the fencing mechanism for the disk or disks.
Enter fencing mechanism for the disk(s) (raw/dmp): [b,q,?] raw
The installer then displays a list of available disks from which to choose the coordination points.
Select disk number 1 for co-ordination point
	1) c3t0d0s2
	2) c3t1d0s3
	3) c3t2d0s4
Please enter a valid disk which is available from all the cluster nodes for co-ordination point [1-3,q] 1
Select a disk from the displayed list.
Ensure that the selected disk is available from all the VCS cluster nodes.
Read the displayed recommendation from the installer to verify the disks prior to proceeding:
It is strongly recommended to run the 'VxFen Test Hardware' utility located at '/opt/VRTSvcs/vxfen/bin/vxfentsthdw' in another window before continuing. The utility verifies if the shared storage you intend to use is configured to support I/O fencing. Use the disk you just selected for this verification. Come back here after you have completed the above step to continue with the configuration.
Symantec recommends that you verify that the disks you are using as coordination points have been configured to support I/O fencing. Press Enter to continue.
You are then prompted to confirm your disk selection after performing a 'vxfentsthdw' test.
The installer again displays the list of available disks to set up as coordination points.
Select a disk from the displayed list for the second coordination point.
Ensure that the selected disk is available from all the VCS cluster nodes.
Read the displayed recommendation from the installer to verify the disks before proceeding.
You are then prompted to confirm your disk selection after performing a 'vxfentsthdw' test.
Enter a disk group name for the coordinator disks, or accept the default.
Enter the disk group name for coordinating disk(s): [b] (vxfencoorddg)
The installer now begins verification of the coordination points. At the end of the verification process, the following information is displayed:
Total number of coordination points being used: 3
CP Server (Port):
	1. 10.209.80.197 (14250)
SCSI-3 disks:
	1. c3t0d0s2
	2. c3t1d0s3
Disk Group name for the disks in customized fencing: vxfencoorddg
Disk mechanism used for customized fencing: raw
You are then prompted to accept the above information. Press Enter to accept the default (y) and continue.
The disks and disk group are initialized, and the disk group is deported on the VCS cluster node.
The installer now automatically determines the security configuration of the CP server side and takes the appropriate action:
While it is recommended to configure secure communication between the CP servers and the client cluster, the client cluster must be in the same mode (secure or non-secure) as the CP servers.
Since the CP servers are configured in secure mode, the installer will configure the client cluster also as a secure cluster. Press [Enter] to continue:
Trying to configure Security on the cluster:
All systems already have established trust within the Symantec Product Authentication Service domain root@galaxy.symantec.com
Indicate whether you are using different root brokers for the CP servers and the VCS cluster.
If you are using different root brokers, then the installer tries to establish trust between the authentication brokers of the CP servers and the VCS cluster nodes for their communication.
After entering "y" for yes or "n" for no, press Enter to continue.
If you entered "y" for yes in the previous step, then you are also prompted for the following information:
Hostname for the authentication broker for any one of the CP servers
Port number where the authentication broker for the CP server is listening for establishing trust
Hostname for the authentication broker for any one of the VCS cluster nodes
Port number where the authentication broker for the VCS cluster is listening for establishing trust
The installer then displays your I/O fencing configuration and prompts you to indicate whether the displayed I/O fencing configuration information is correct.
If the information is correct, enter "y" for yes.
CPS Admin utility location: /opt/VRTScps/bin/cpsadm
Cluster ID: 2122
Cluster Name: clus1
UUID for the above cluster: {ae5e589a-1dd1-11b2-dd44-00144f79240c}
The installer then updates the VCS cluster information on each of the CP Servers to ensure connectivity between them.
The installer then populates the /etc/vxfenmode file with the above details on each of the VCS cluster nodes.
Updating client cluster information on CP Server 10.210.80.199
Adding the client cluster to the CP Server 10.210.80.199 .................. Done
Registering client node galaxy with CP Server 10.210.80.199 .............. Done
Adding CPClient user for communicating to CP Server 10.210.80.199 ......... Done
Adding cluster clus1 to the CPClient user on CP Server 10.210.80.199 ... Done
Registering client node nebula with CP Server 10.210.80.199 ............. Done
Adding CPClient user for communicating to CP Server 10.210.80.199 ......... Done
Adding cluster clus1 to the CPClient user on CP Server 10.210.80.199 ... Done
Updating /etc/vxfenmode file on galaxy .................................. Done
Updating /etc/vxfenmode file on nebula .................................. Done
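Based on the values chosen in this example, the populated /etc/vxfenmode file on each node might resemble the following sketch (field names follow the vxfenmode format; exact contents and comments vary by release):

```
# vxfen_mode determines the fencing mode; "customized" with the "cps"
# mechanism enables server-based (CP client) fencing.
vxfen_mode=customized
vxfen_mechanism=cps

# Disk policy for the coordinator disks (raw was selected above).
scsi3_disk_policy=raw

# CP server coordination point: [virtual IP or host name]:port
cps1=[10.209.80.197]:14250

# Disk group containing the two coordinator disks.
vxfendg=vxfencoorddg
```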
For additional information about the vxfenmode file in mixed disk and CP server mode, or in pure server-based mode, see the vxfenmode documentation.
You are then prompted to configure the CP agent on the client cluster.
Do you want to configure CP Agent on the client cluster? [y,n,q] (y)
Enter a non-existing name for the service group for CP Agent: [b] (vxfen)
Adding CP Agent via galaxy ........................ Done
VCS and the fencing process are then stopped and restarted on each VCS cluster node, and the I/O fencing configuration process finishes.
Stopping VCS on galaxy ............................ Done
Stopping Fencing on galaxy ........................ Done
Stopping VCS on nebula ............................ Done
Stopping Fencing on nebula ........................ Done
At the end of this process, the installer displays the location of the configuration log files, summary files, and response files.