Taking over the primary role by the remote cluster

Takeover occurs when the remote cluster on the secondary site starts the application that uses replicated data. This situation may occur if the secondary site perceives the primary site as dead, or if the primary site becomes inaccessible (perhaps for a known reason). For a more detailed description of the concepts of taking over the primary role:

See the Veritas Volume Replicator Administrator's Guide.

Before enabling the secondary site to take over the primary role, the administrator on the secondary site must "declare" the type of failure at the remote (primary, in this case) site, designating the failure type with one of the options of the haclus command.
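The declaration is made with the -declare option of haclus. The following line is a general sketch of the form the command takes; the failure type is one of the four options described in the table below, and clus1 stands for the name of the remote cluster, matching the examples later in this section:

    # haclus -declare disaster|outage|disconnect|replica -clus clus1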

Table: Options for the remote cluster to take over the primary role

Each takeover option is listed below, followed by its description.

Disaster

When the cluster on the primary site is inaccessible and appears dead, the administrator declares the failure type as "disaster." For example, a fire may destroy the data center, including the primary site and all data in its volumes. After making this declaration, the administrator can bring the service group online on the secondary site, which then takes on the primary role.

Outage

When the primary site is inaccessible for a known reason, such as a temporary power outage, the administrator of the secondary site may declare the failure as an "outage." Typically, the administrator expects the primary site to return to its original state.

After the outage is declared, the RVGSharedPri agent enables DCM logging while the secondary site maintains the primary replication role. When the original primary site comes back up and returns to its original state, DCM logging makes it possible to use fast fail back resynchronization when data is resynchronized to the original cluster.

Before attempting to resynchronize the data from the current primary site to the original primary site with the fast fail back option, take the precaution of making a snapshot of the original data at the original primary site. The snapshot provides a valid copy of the data at the original primary site in case the current primary site fails before the resynchronization is complete.

Disconnect

When both clusters are functioning properly and the heartbeat link between the clusters fails, a split-brain condition exists. In this case, the administrator can declare the failure as "disconnect," which means the secondary site makes no attempt to take over the primary role. This declaration is advisory only; it generates a message in the VCS log indicating that the failure results from a network outage rather than a server outage.

Replica

In the rare case where the current primary site becomes inaccessible while data is being resynchronized from that site to the original primary site using the fast fail back method, the administrator at the original primary site may resort to a data snapshot (if one exists) that was taken before the fast fail back operation started. In this case, the failure type is declared as "replica."
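Before declaring a failure type, it may help to review what VCS itself reports about the remote cluster and the inter-cluster heartbeats. The following commands are a sketch of that kind of status check using standard global cluster commands (output varies by VCS version) and are not part of the takeover procedure:

    # haclus -state
    # hahb -display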

The following examples illustrate the steps required for an outage takeover and resynchronization.

To take over after an outage

  1. From any node of the secondary site, issue the haclus command:
    # haclus -declare outage -clus clus1
    			
  2. After declaring the state of the remote cluster, bring the database_grp service group online on the secondary site. For example:
    # hagrp -online -force database_grp -any
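    To confirm that the takeover succeeded, you can check the state of the service group on the secondary site. This is an optional status check, shown here as a sketch:

    # hagrp -state database_grp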
    			

To resynchronize after an outage

  1. On the original primary site, create a snapshot of the RVG before resynchronizing it in case the current primary site fails during the resynchronization. Assuming the disk group is data_disk_group and the RVG is rac1_rvg, type:
    # vxrvg -g data_disk_group -F snapshot rac1_rvg
    				

    See the Veritas Storage Foundation and High Availability Solutions Replication Administrator's Guide for details on RVG snapshots.

  2. Resynchronize the RVG. From any node of the current primary site, issue the hares command with the -action option and the fbsync action token to resynchronize the RVGSharedPri resource. For example:
    # hares -action ora_vvr_shpri fbsync -sys mercury
    				
  3. Perform one of the following, depending on whether the resynchronization of data from the current primary site to the original primary site succeeds:

    • If the resynchronization of data is successful, use the vxrvg command with the snapback option to reattach the snapshot volumes on the original primary site to the original volumes in the specified RVG:

      # vxrvg -g data_disk_group snapback rac1_rvg
      						
    • If the resynchronization of data fails (for example, a disaster hits the primary RVG while resynchronization is in progress), the data could be left inconsistent.

      In this case, restore the contents of the RVG data volumes from the snapshot taken in step 1:

      # vxrvg -g data_disk_group snaprestore rac1_rvg
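
After the resynchronization and the snapback (or snapshot restore) complete, you can verify that replication between the sites is again consistent and up to date. The following command is a sketch only; it assumes the RVG belongs to a replicated data set of the same name and that vradmin is configured on the hosts:

    # vradmin -g data_disk_group repstatus rac1_rvg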