Configuring clusters for global cluster setup

Perform the following steps to configure the clusters for disaster recovery:

Configuring global cluster components at the primary site

Perform the following steps to configure global cluster components at the primary site. If you have already completed these steps during the VCS cluster configuration at the primary site, then proceed to the next task to set up a VCS cluster at the secondary site.

See Installing and configuring VCS at the secondary site.

Run the GCO Configuration wizard to create or update the ClusterService group. The wizard verifies your configuration and validates it for a global cluster setup. You must have installed the required licenses on all nodes in the cluster.

See Installing a VCS license.
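
For example, you can review the installed licenses on each node with the vxlicrep utility and confirm that the Global Cluster Option feature is enabled (the exact feature name in the report varies by product release):

    # vxlicrep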

To configure global cluster components at the primary site

  1. Start the GCO Configuration wizard.

    # gcoconfig

  2. The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used for the global cluster. Specify the name of the device and press Enter.
  3. If you do not have NIC resources in your configuration, the wizard asks you whether the specified NIC will be the public NIC used by all systems. Enter y if it is the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on all systems.
  4. Enter the virtual IP to be used for the global cluster.

    You can specify either an IPv4 or an IPv6 address. VCS does not support a global cluster in which the clusters use different Internet Protocol versions.

  5. If you do not have IP resources in your configuration, the wizard does the following:
  6. The wizard prompts for the values for the network hosts. Enter the values.
  7. The wizard runs commands to create or update the ClusterService group and displays messages that indicate the status of these commands. When the commands complete, the wizard brings the ClusterService group online.
  8. Verify that the gcoip resource that monitors the virtual IP address for inter-cluster communication is online.

    # hares -state gcoip
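
    The resource should report a State of ONLINE on one node in the cluster; the output is similar to the following (system names are examples):

    #Resource    Attribute    System    Value
    gcoip        State        sysA      ONLINE
    gcoip        State        sysB      OFFLINE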

Installing and configuring VCS at the secondary site

Perform the following steps to set up a VCS cluster at the secondary site.

To install and configure VCS at the secondary site

  1. At the secondary site, install and configure the VCS cluster.

    Note the following points for this task:

  2. Verify that the gcoip resource that monitors the virtual IP address for inter-cluster communication is online.

    # hares -state gcoip

Securing communication between the wide-area connectors

Perform the following steps to configure secure communication between the wide-area connectors.

To secure communication between the wide-area connectors

  1. Verify that the Symantec Product Authentication Service (AT) is running in both clusters.
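
    One way to check is to confirm that the authentication broker daemon is running on the cluster nodes; in a default AT deployment the daemon is named vxatd (an assumption to verify for your release):

    ps -ef | grep vxatd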
  2. If the clusters use different root brokers, establish trust between the clusters.

    For example, in a VCS global cluster environment with two clusters, perform the following steps to establish trust between the clusters:
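
    A sketch of the trust setup, assuming root broker host names rootbroker1 and rootbroker2 and the default AT port 2821 (placeholder values; substitute your own):

    On the root broker of the first cluster:

    vssat setuptrust --broker rootbroker2:2821 --securitylevel high

    On the root broker of the second cluster:

    vssat setuptrust --broker rootbroker1:2821 --securitylevel high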

  3. On each cluster, run the following command to take the wac resource offline on the node where it is online:

    hares -offline wac -sys node_where_wac_is_online

  4. Update the values of the StartProgram and MonitorProcesses attributes of the wac resource:

    hares -modify wac StartProgram \
    "/opt/VRTSvcs/bin/wacstart -secure"

    hares -modify wac MonitorProcesses \
    "/opt/VRTSvcs/bin/wac -secure"

  5. On each cluster, run the following command on any node to bring the wac resource online:

    hares -online wac -sys systemname
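
    Optionally, confirm that the resource is online on each cluster:

    hares -state wac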

Configuring remote cluster objects

After you set up the VCS and replication infrastructure at both sites, you must link the two clusters. You must configure remote cluster objects at each site to link the two clusters. The Remote Cluster Configuration wizard provides an easy interface to link clusters.

To configure remote cluster objects
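
The Remote Cluster Configuration wizard guides you through linking the clusters. If you prefer the command line, the following sketch outlines the underlying step of adding a remote cluster object on one cluster, assuming a remote cluster named clus2 with the virtual IP address 10.11.10.102 (placeholder values); the wizard performs additional validation and configuration that this sketch omits:

    haconf -makerw
    haclus -add clus2 10.11.10.102
    haconf -dump -makero

Repeat the equivalent configuration on the other cluster, and then verify cluster status with the hastatus -sum command.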

Configuring additional heartbeat links (optional)

You can configure additional heartbeat links to exchange ICMP heartbeats between the clusters.

To configure an additional heartbeat between the clusters (optional)

  1. On Cluster Explorer's Edit menu, click Configure Heartbeats.
  2. In the Heartbeat configuration dialog box, enter the name of the heartbeat and select the check box next to the name of the cluster.
  3. Click the icon in the Configure column to open the Heartbeat Settings dialog box.
  4. Specify the value of the Arguments attribute and various timeout and interval fields. Click + to add an argument value; click - to delete it.

    If you specify IP addresses in the Arguments attribute, make sure the IP addresses have DNS entries.

  5. Click OK.
  6. Click OK in the Heartbeat configuration dialog box.

    Now, you can monitor the state of both clusters from the Java Console.
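
If you prefer the command line to Cluster Explorer, an equivalent sketch that adds an Icmp heartbeat toward a remote cluster named clus2 at a placeholder address is shown below; confirm the exact syntax against the hahb manual page for your release:

    haconf -makerw
    hahb -add Icmp
    hahb -modify Icmp ClusterList clus2
    hahb -modify Icmp Arguments 10.11.10.102 -clus clus2
    haconf -dump -makero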

Configuring the Steward process (optional)

In a two-cluster global cluster setup, you can configure a Steward process to prevent potential split-brain conditions, provided the proper network infrastructure exists.

See The Steward process: Split-brain in two-cluster global clusters.

To configure the Steward process for clusters not running in secure mode

  1. Identify a system that will host the Steward process.

    Make sure both clusters can connect to the system through a ping command.
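
    For example, from a node in each cluster (the host name is a placeholder):

    ping steward_host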

  2. Copy the file steward from a node in the cluster to the Steward system. The file resides at the following path:

    /opt/VRTSvcs/bin/

  3. In both clusters, set the Stewards attribute to the IP address of the system running the Steward process. For example:

    cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = {"10.212.100.165"}
    )
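
    Alternatively, you can set the attribute from the command line on each cluster instead of editing main.cf directly (a sketch, using the same example address):

    haconf -makerw
    haclus -modify Stewards 10.212.100.165
    haconf -dump -makero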

  4. On the system designated to host the Steward, start the Steward process:

    steward -start

To configure the Steward process for clusters running in secure mode

  1. Verify that the prerequisites for securing Steward communication are met.

    See Prerequisites for clusters running in secure mode.

    To verify that the wac process runs in secure mode, do the following:
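
    For example, check that the StartProgram and MonitorProcesses attributes of the wac resource include the -secure option, as set when you secured the wide-area connectors:

    hares -value wac StartProgram

    The output should include /opt/VRTSvcs/bin/wacstart -secure.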

  2. Identify a system that will host the Steward process.

    Make sure both clusters can connect to the system through a ping command.

  3. Copy the steward file from a node in the cluster to the Steward system. The file resides at the following path:

    /opt/VRTSvcs/bin/

  4. Install the Symantec Product Authentication Services client on the system that is designated to run the Steward process.

    See the Symantec Product Authentication Service documentation for instructions.

  5. Create an account for the Steward in any authentication broker of the clusters that are part of the global cluster. All cluster nodes serve as authentication brokers when the cluster runs in secure mode.

    vssat addprpl --pdrtype local --domain HA_SERVICES --prplname Steward_GCO_systemname --password password --prpltype service

    When creating the account, make sure the following conditions are met:

  6. Note the password used to create the account.
  7. Retrieve the broker hash for the account.

    vssat showbrokerhash

  8. Create a credential package (steward.cred) for this account. Note that the credential package will be bound to a system.

    vssat createpkg --prplname Steward_GCO_systemname --domain vx:HA_SERVICES@<fully_qualified_name_of_cluster_node_on_which_this_command_is_being_run> --broker systemname:2821 --password password --hash <brokerhash_obtained_in_above_step> --out steward.cred --host_ctx systemname_on_which_steward_will_run

  9. Copy the file steward.cred to the system designated to run the Steward process.

    Copy the file to the directory where the steward is installed.

  10. Execute the credential package on the system designated to run the Steward process.

    vssat execpkg --in <path_to_credential>\steward.cred --ob --host_ctx

    The variable <path_to_credential> represents the directory to which you copied the steward credentials.

  11. On the Steward system, create a file called Steward.conf and populate it with the following information:

    broker=system_name
    accountname=accountname
    domain=HA_SERVICES@FQDN_of_system_that_issued_the_certificate
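
    For example, a populated Steward.conf might look like the following (host and account names are placeholders):

    broker=broker1.example.com
    accountname=Steward_GCO_steward1
    domain=HA_SERVICES@broker1.example.com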

  12. In both clusters, set the Stewards attribute to the IP address of the system that runs the Steward process. For example:

    cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = {"10.212.100.165"}
    )

  13. On the system designated to run the Steward, start the Steward process:

    steward -start -secure

To stop the Steward process
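
  1. On the system that runs the Steward process, stop the Steward. The following commands mirror the start commands shown earlier (use the -secure option if the Steward was started in secure mode); confirm them against your product documentation:

    steward -stop

    steward -stop -secure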