Setting up replication between parallel global cluster sites

You have configured Cluster Server (VCS) service groups for the database on each cluster. Each cluster requires an additional virtual IP address for cross-cluster communication; defining this IP address is typically part of the VCS installation and the creation of the ClusterService group.

Configure a global cluster by completing the following tasks:

Table: Tasks for configuring a parallel global cluster

Task: Prepare to configure a parallel global cluster

Before you configure a global cluster, review the following requirements:

  • Cluster names on the primary and secondary sites must be unique.

  • Node and resource names must be unique within a cluster but not across clusters.

  • Each cluster requires a virtual IP address associated with the cluster. The VCS installation and creation of the ClusterService group typically involves defining this IP address. If you did not configure the ClusterService group when you installed your SFHA Solutions product, configure it when you configure global clustering.

  • At least one WAN (Wide Area Network) heartbeat must travel between the clusters so that each cluster has the means to monitor the health of the remote cluster. You must configure the heartbeat resource manually.

  • All database user and group IDs must be the same on all nodes; see the example after this list.

  • The database, which is replicated from the storage on the primary site to the secondary site, must be defined in a global group having the same name on each cluster. Each resource in the group may differ from cluster to cluster, but clients redirected to a remote cluster after a wide-area failover must see the same application as the one in the primary cluster.
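
For example, a quick way to compare IDs is to run the id command for the database user on every node in both clusters (this example assumes a database user named oracle; substitute your own user name):

    # id oracle

The uid and gid values in the output must match on every node.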

Task: Configure a global cluster using the global clustering wizard

See “To modify the ClusterService group for global clusters using the global clustering wizard”.

Task: Define the remote global cluster and heartbeat objects

See “To define the remote cluster and heartbeat”.

Task: Configure global service groups for database resources

See “To configure global service groups for database resources”.

Task: Start replication between the sites

For software-based replication using Volume Replicator (VVR):

See “About configuring a parallel global cluster using Volume Replicator (VVR) for replication”.

For replication using Oracle Data Guard, see the Oracle Data Guard documentation.

For hardware-based replication, see the replication agent guide for your hardware.

See the Cluster Server Bundled Agents Guide.

Task: Test the HA/DR configuration before putting it into production

See “Testing a parallel global cluster configuration”.

The global clustering wizard discovers the network resources needed for cross-cluster communication and creates or updates the ClusterService group, as the following procedure describes.

To modify the ClusterService group for global clusters using the global clustering wizard

  1. On the primary cluster, start the GCO Configuration wizard:
    # /opt/VRTSvcs/bin/gcoconfig
  2. The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used for the global cluster. Specify the name of the device and press Enter.
  3. If you do not have NIC resources in your configuration, the wizard asks you whether the specified NIC will be the public NIC used by all the systems. Enter y if it is the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on all systems.
  4. Enter the virtual IP address for the local cluster.
  5. If you do not have IP resources in your configuration, the wizard prompts you for the netmask associated with the virtual IP. The wizard detects the netmask; you can accept the suggested value or enter another one.

    The wizard starts running commands to create or update the ClusterService group. Various messages indicate the status of these commands. After running these commands, the wizard brings the ClusterService failover group online on any one of the nodes in the cluster.
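
    To confirm the result, you can check the state of the ClusterService group with the standard hagrp command (an optional verification, not part of the wizard):

    # hagrp -state ClusterService

    The group should show ONLINE on exactly one node in the cluster.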

After configuring global clustering, add the remote cluster object to define the IP address of the cluster on the secondary site, and the heartbeat object to define the cluster-to-cluster heartbeat. Heartbeats monitor the health of remote clusters. VCS can communicate with the remote cluster only after you set up the heartbeat resource on both clusters.

To define the remote cluster and heartbeat

  1. On the primary site, enable write access to the configuration:
    # haconf -makerw
  2. On the primary site, define the remote cluster and its virtual IP address.

    In this example, the remote cluster is clus2 and its IP address is 10.11.10.102:

    # haclus -add clus2 10.11.10.102
  3. Complete step 1 and step 2 on the secondary site using the name and IP address of the primary cluster.

    In this example, the primary cluster is clus1 and its IP address is 10.10.10.101:

    # haclus -add clus1 10.10.10.101
  4. On the primary site, add the heartbeat object for the cluster. In this example, the heartbeat method is ICMP ping.
    # hahb -add Icmp
  5. Define the following attributes for the heartbeat resource:

    • ClusterList lists the remote cluster.

    • Arguments defines the virtual IP address of the remote cluster.

    For example:
    # hahb -modify Icmp ClusterList clus2
    # hahb -modify Icmp Arguments 10.11.10.102 -clus clus2
  6. Save the configuration and change the access to read-only on the local cluster:
    # haconf -dump -makero
  7. Complete steps 4 through 6 on the secondary site, using appropriate values to define the primary-site cluster and its IP address as the remote cluster for the secondary cluster.
  8. It is advisable to set the OnlineRetryLimit and OfflineWaitLimit attributes of the IP resource type to 1 on both clusters:
    # hatype -modify IP OnlineRetryLimit 1
    # hatype -modify IP OfflineWaitLimit 1
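    You can verify the new values with the hatype -value command (an optional quick check):
    # hatype -value IP OnlineRetryLimit
    # hatype -value IP OfflineWaitLimit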
  9. Verify cluster status with the hastatus -sum command on both clusters.
    # hastatus -sum

    For example, for SF Oracle RAC, the final output from the primary cluster (clus1 in this example) should resemble the following:

    # hastatus -sum
    .........
    -- WAN HEARTBEAT STATE
    -- Heartbeat       To                   State

    L  Icmp            clus2                ALIVE

    -- REMOTE CLUSTER STATE
    -- Cluster         State

    M  clus2           RUNNING

    -- REMOTE SYSTEM STATE
    -- cluster:system       State                Frozen

    N  clus2:sys3           RUNNING              0
    N  clus2:sys4           RUNNING              0
  10. Display the global setup by executing the haclus -list command:
    # haclus -list
    clus1
    clus2

    Example of heartbeat additions to the main.cf file on the primary site:

    .
    .
    remotecluster clus2 (
        ClusterAddress = "10.11.10.102"
        )
    heartbeat Icmp (
        ClusterList = { clus2 }
        Arguments @clus2 = { "10.11.10.102" }
        )
    
    system sys1 (
        )
    .
    .

    Example of heartbeat additions to the main.cf file on the secondary site:

    .
    .
    remotecluster clus1 (
        Cluster Address = "10.10.10.101"
        )
    
    heartbeat Icmp (
        ClusterList = { clus1 }
        Arguments @clus1 = { "10.10.10.101" }
        )
    
    system sys3 (
        )
    .
    .

    See the Cluster Server Administrator's Guide for more details on configuring the required and optional attributes of the heartbeat object.
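
    For example, to tune how frequently VCS issues the heartbeat's are-you-alive query, you can modify the optional AYAInterval attribute of the heartbeat object (the 30-second value here is only an illustration; as in the earlier steps, the configuration must be writable):

    # haconf -makerw
    # hahb -modify Icmp AYAInterval 30
    # haconf -dump -makero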

To configure global service groups for database resources

  1. Configure and enable global groups for databases and resources; a minimal command sketch follows this procedure.
  2. To test the configuration with real data in an environment where HA/DR has been configured, schedule a planned migration to the secondary site.

    For example:

    See “To migrate the role of primary site to the remote site”.

    See “To migrate the role of new primary site back to the original primary site”.

  3. Upon successful testing, bring the environment into production.
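
The following is a minimal sketch of step 1, assuming a service group named database_grp (a hypothetical name; substitute your own group) and the cluster names used throughout this section. ClusterList pairs each cluster with a priority, and setting ClusterFailOverPolicy to Manual keeps cross-site failover under operator control:

    # haconf -makerw
    # hagrp -modify database_grp ClusterList clus1 0 clus2 1
    # hagrp -modify database_grp ClusterFailOverPolicy Manual
    # haconf -dump -makero

Configure the group with the same name on the secondary cluster so that both sites define the same global group.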

For more information about VCS replication agents:

See the Cluster Server Bundled Agents Guide.

For complete details on using VVR in a shared disk environment:

See the Veritas InfoScale™ Replication Administrator's Guide.