Configuring server-based I/O fencing

This section describes how to configure server-based I/O fencing for the VCS cluster. With server-based I/O fencing, a combination of CP servers and SCSI-3 compliant coordinator disks can act as coordination points for I/O fencing.

To configure the VCS cluster with server-based I/O fencing

  1. Ensure that the CP server(s) are configured and reachable from the cluster. If coordinator disks are to be used as coordination points, ensure that they are SCSI-3 compliant.
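
    For example, you can confirm that a CP server is reachable from each cluster node with the cpsadm utility. This is a sketch; the IP address is the example CP server used later in this procedure:

    # Check connectivity to the CP server from a cluster node
    /opt/VRTScps/bin/cpsadm -s 10.209.80.197 -a ping_cps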

  2. Run the installvcs -fencing command to configure fencing.

    For example:

    /opt/VRTS/install/installvcs -fencing

    The installer creates a vxfenmode file on each node. The file is located at /etc/vxfenmode.
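
    The following is a minimal sketch of what the /etc/vxfenmode file might contain after a server-based (customized) configuration. The exact entries depend on the choices you make during configuration; the IP address, port, and disk group name here are examples:

    # I/O fencing in customized mode with CP server coordination points
    vxfen_mode=customized
    vxfen_mechanism=cps
    # CP server coordination point (virtual IP and port)
    cps1=[10.209.80.197]:14250
    # Coordinator disk group and disk policy, when disks are also used
    vxfendg=vxfencoorddg
    scsi3_disk_policy=raw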

The following example procedure configures server-based I/O fencing with one CP server and two coordinator disks acting as the coordination points.

To configure CP client-based fencing using the installer

  1. After installing and configuring VCS on the VCS cluster, run the following command to configure fencing:

    /opt/VRTS/install/installvcs -fencing

  2. After issuing the command, the installer displays Symantec copyright information and the location of log files for the configuration process.

    Access and review these log files if there is any problem with the configuration process. The following is an example of the command output:

    Logs for installvcs are being created in /var/tmp/installvcs-LqwKwB.
  3. Next, the installer displays the current cluster information for verification purposes. The following is an example of the command output:

    Cluster information verification:
    
    Cluster Name: clus1
    Cluster ID Number: 4445
    Systems: galaxy nebula
    

    The cluster name, systems, and ID number are all displayed.

    You are then asked whether you want to configure I/O fencing for the cluster. Enter "y" for yes. The installer then checks rsh (or ssh) communication with the cluster nodes.

  4. Next, you are prompted to select one of the following options for your fencing configuration:

    Fencing configuration
         1)  Configure CP client based fencing
         2)  Configure disk based fencing
         3)  Configure fencing in disabled mode
    
    Select the fencing mechanism to be configured in this 
    Application Cluster [1-3,q]

    Select the first option for CP client-based fencing.

  5. Enter the total number of coordination points, including both CP servers and disks. This number should be at least three.

    For example:

    Enter the total number of co-ordination points including both 
    CP servers and disks: [b] (3)
  6. Enter the total number of coordinator disks among the coordination points. In this example, there are two coordinator disks.

    For example:

    Enter the total number of disks among these: 
    [b] (0) 2
  7. Enter the virtual IP address or fully qualified host name for each of the coordination point servers.

    Note:

    The installer assumes that these values are identical as viewed from all the client cluster nodes.

    For example:

    Enter the Virtual IP address/fully qualified host name
    for the Co-ordination Point Server #1:: 
    [b] 10.209.80.197
  8. Enter the port that the CP server listens on.

    For example:

    Enter the port in the range [49152, 65535] which the 
    Co-ordination Point Server 10.209.80.197 
    would be listening on or simply accept the default port suggested: 
    [b] (14250)
  9. Enter the fencing mechanism for the disk or disks: raw uses the raw device paths, while dmp uses Veritas Dynamic Multi-Pathing device paths.

    For example:

    Enter fencing mechanism for the disk(s) (raw/dmp): 
    [b,q,?] raw
  10. The installer then displays a list of available disks to choose from to set up as coordination points.

    Select disk number 1 for co-ordination point
    
    1)  c3t0d0s2
    2)  c3t1d0s3
    3)  c3t2d0s4
    
    Please enter a valid disk which is available from all the 
    cluster nodes for co-ordination point [1-3,q] 1

    Select a disk from the displayed list.

    Ensure that the selected disk is available from all the VCS cluster nodes.
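
    One way to confirm this is to compare the disk serial number that vxfenadm reports on each node; the numbers should match across all nodes. This is a sketch, and the device path is one of the example disks listed above:

    # Run on each cluster node and compare the reported serial numbers
    vxfenadm -i /dev/rdsk/c3t0d0s2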

  11. Read the displayed recommendation from the installer to verify the disks before proceeding:

    It is strongly recommended to run the 'VxFen Test Hardware' utility 
    located at '/opt/VRTSvcs/vxfen/bin/vxfentsthdw' in another window 
    before continuing. The utility verifies if the shared storage 
    you intend to use is configured to support I/O
    fencing. Use the disk you just selected for this
    verification. Come back here after you have completed
    the above step to continue with the configuration.

    Symantec recommends that you verify that the disks you are using as coordination points have been configured to support I/O fencing. Press Enter to continue.
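
    A sketch of invoking the utility for this check follows. By default the test writes to the disk and can destroy data, so run it only on disks you have set aside as coordinator disks; the -r option, where your release supports it, performs a non-destructive read-only test:

    # Verify that the selected disk supports SCSI-3 persistent
    # reservations; the utility prompts for node names and the disk path
    /opt/VRTSvcs/vxfen/bin/vxfentsthdw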

    You are then prompted to confirm your disk selection after performing a 'vxfentsthdw' test.

    Press Enter to accept the default (y) and continue.

  12. The installer then displays a list of available disks to choose from to set up as the second coordination point.

    Select a disk from the displayed list for the second coordination point.

    Ensure that the selected disk is available from all the VCS cluster nodes.

  13. Read the displayed recommendation from the installer to verify the disks before proceeding.

    Press Enter to continue.

  14. You are then prompted to confirm your disk selection after performing a 'vxfentsthdw' test.

    Press Enter to accept the default (y) and continue.

  15. Enter a disk group name for the coordinator disks, or accept the default.

    Enter the disk group name for coordinating disk(s): 
    [b] (vxfencoorddg) 
  16. The installer now begins verification of the coordination points. At the end of the verification process, the following information is displayed:

    • Total number of coordination points being used

    • CP Server Virtual IP/hostname and port number

    • SCSI-3 disks

    • Disk Group name for the disks in customized fencing

    • Disk mechanism used for customized fencing

    For example:

    Total number of coordination points being used: 3
    CP Server (Port): 
        1. 10.209.80.197 (14250)
    SCSI-3 disks:
        1. c3t0d0s2
        2. c3t1d0s3
    Disk Group name for the disks in customized fencing: vxfencoorddg
    Disk mechanism used for customized fencing: raw

    You are then prompted to accept the above information. Press Enter to accept the default (y) and continue.

    The disks and disk group are initialized, and the disk group is deported on the VCS cluster node.
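
    To confirm the result, you can list the disk groups from a cluster node; a deported disk group typically appears in parentheses in the vxdisk output. This is a sketch using standard Veritas Volume Manager commands:

    # List all disks with their disk groups, including deported groups
    vxdisk -o alldgs list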

  17. The installer now automatically determines the security configuration on the CP server side and takes the appropriate action:

    • If the CP server side is configured for security, then the VCS cluster side is also configured for security.

    • If the CP server side is not configured for security, then the VCS cluster side is not configured for security.

    For example:

    While it is recommended to have secure communication 
    configured between CP Servers and CP client cluster, the client cluster 
    must be in the same mode (secure or non-secure) as the CP servers are.
    Since the CP servers are configured in secure mode, the installer 
    will configure the client cluster also as a secure cluster.
    
    Press [Enter] to continue: 
    
    Trying to configure Security on the cluster:
    
    All systems already have established trust within the 
    Symantec Product Authentication Service domain
    root@galaxy.symantec.com
  18. Enter whether you are using different root brokers for the CP servers and the VCS cluster.

    If you are using different root brokers, then the installer tries to establish trust between the authentication brokers of the CP servers and the VCS cluster nodes for their communication.

    After entering "y" for yes or "n" for no, press Enter to continue.

  19. If you entered "y" for yes in step 18, then you are also prompted for the following information:

    • Hostname for the authentication broker for any one of the CP servers

    • Port number where the authentication broker for the CP server is listening for establishing trust

    • Hostname for the authentication broker for any one of the VCS cluster nodes

    • Port number where the authentication broker for the VCS cluster is listening for establishing trust

    Press Enter to continue.

  20. The installer then displays your I/O fencing configuration and prompts you to indicate whether the displayed I/O fencing configuration information is correct.

    If the information is correct, enter "y" for yes.

    For example:

    CPS Admin utility location: /opt/VRTScps/bin/cpsadm     
    Cluster ID: 2122
    Cluster Name: clus1
    UUID for the above cluster: {ae5e589a-1dd1-11b2-dd44-00144f79240c}
  21. The installer then updates the VCS cluster information on each of the CP Servers to ensure connectivity between them.

    The installer then populates the /etc/vxfenmode file with the above details on each of the VCS cluster nodes.

    For example:

    Updating client cluster information on CP Server 10.210.80.199
    
    
    Adding the client cluster to the CP Server 10.210.80.199 .................. Done
    
    Registering client node galaxy with CP Server 10.210.80.199.............. Done
    Adding CPClient user for communicating to CP Server 10.210.80.199 ......... Done
    Adding cluster clus1 to the CPClient user on CP Server 10.210.80.199 ... Done
    
    Registering client node nebula with CP Server 10.210.80.199 ............. Done
    Adding CPClient user for communicating to CP Server 10.210.80.199 ......... Done
    Adding cluster clus1 to the CPClient user on CP Server 10.210.80.199 ... Done
    
    Updating /etc/vxfenmode file on galaxy .................................. Done
    Updating /etc/vxfenmode file on nebula ......... ........................ Done

    For additional information about the vxfenmode file in mixed disk and CP server mode, or pure server-based mode:

    See About I/O fencing configuration files.
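
    You can also cross-check the registrations on the CP server side with the cpsadm utility. This is a sketch; the server address is the example used above:

    # List the client cluster nodes registered with the CP server
    /opt/VRTScps/bin/cpsadm -s 10.210.80.199 -a list_nodes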

  22. You are then prompted to configure the CP agent on the client cluster.

    Do you want to configure CP Agent on the client cluster? [y,n,q]
    (y) 
    
    Enter a non-existing name for the service group for CP Agent: 
    [b] (vxfen) 
        
    Adding CP Agent via galaxy ........................ Done
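
    After the agent is added, you can verify the service group state with standard VCS commands. This is a sketch; vxfen is the default service group name shown above:

    # Check that the CP agent service group is online
    hagrp -state vxfen
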
  23. VCS and the fencing process are then stopped and restarted on each VCS cluster node, and the I/O fencing configuration process then finishes.

    Stopping VCS on galaxy ............................ Done
    Stopping Fencing on galaxy ........................ Done
    Stopping VCS on nebula ............................ Done
    Stopping Fencing on nebula ........................ Done
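
    Once fencing restarts, you can confirm the fencing mode and cluster membership from any node. This is a sketch; the exact output varies by release:

    # Display the I/O fencing mode and current cluster membership
    vxfenadm -d
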
  24. At the end of this process, the installer displays the location of the configuration log files, summary files, and response files.