Microsoft campus cluster failure scenarios

Different failure and recovery scenarios can occur with a Microsoft campus cluster and InfoScale Storage installed.

The following considerations apply to the site scenarios that can occur when there is a cluster server failure:

Manual failover of a cluster between two sites should be performed only after coordination between the two sites confirms that the primary server has in fact failed. If the primary server is still active and you manually import a cluster disk group containing the cluster quorum on the secondary (failover) server, a split-brain situation occurs. If split-brain occurs, data loss is possible because each plex of the mirrored volume may be updated independently while the same disk group is imported on both nodes.

For additional details on the manual failover scenario, see the following topic:

See Microsoft cluster quorum and quorum arbitration.
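To make the split-brain risk concrete, the following simplified model (plain Python; the node, plex, and block names are hypothetical, and nothing here calls InfoScale or Microsoft clustering interfaces) shows how the two plexes of a mirrored volume diverge once each node has imported the same disk group and applies its own writes independently.

```python
# Simplified model of split-brain on a mirrored volume.
# All names and values are hypothetical; this is an illustration only.

class Plex:
    """One mirror copy of a volume, modeled as a map of block number to contents."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block, value):
        self.blocks[block] = value


def apply_writes(plex, writes):
    # A node that has imported the disk group applies its writes only to the
    # plex it can still reach on its own site.
    for block, value in writes:
        plex.write(block, value)


plex_a = Plex("site_A_plex")   # mirror copy on the Site A array
plex_b = Plex("site_B_plex")   # mirror copy on the Site B array

# Split-brain: both nodes have imported the same disk group, so each node
# updates only the plex on its own site, independently of the other node.
apply_writes(plex_a, [(100, "update from node A"), (101, "second update from node A")])
apply_writes(plex_b, [(100, "conflicting update from node B")])

all_blocks = set(plex_a.blocks) | set(plex_b.blocks)
diverged = sorted(b for b in all_blocks
                  if plex_a.blocks.get(b) != plex_b.blocks.get(b))
print("blocks where the mirrors no longer agree:", diverged)   # -> [100, 101]
```

Once the plexes disagree, there is no single authoritative copy of the changed blocks, which is the data loss risk the warning above describes.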

The following table lists failure situations and the outcomes that occur.

Table: List of failure situations and possible outcomes

Each entry below lists the failure situation, the outcome, and comments.

Failure situation: Application fault. May mean that the services stopped for an application, a NIC failed, or a database table went offline.
Outcome: Failover.
Comments: If the services stop because of an application failure, the application automatically fails over to the other site.

Failure situation: Server failure (Site A). May mean that a power cord was unplugged, a system hang occurred, or another failure caused the system to stop responding.
Outcome: Failover.
Comments: Assuming a two-node cluster pair, failing a single node results in a cluster failover. There will be a temporary service interruption for cluster resources that are moved from the failed node to the remaining live node.

Failure situation: Server failure (Site B). May mean that a power cord was unplugged, a system hang occurred, or another failure caused the system to stop responding.
Outcome: No interruption of service.
Comments: Failure of the passive site (Site B) does not interrupt service to the active site (Site A).

Failure situation: Partial SAN network failure. May mean that SAN Fibre Channel cables were disconnected to the Site A or Site B storage.
Outcome: No interruption of service.
Comments: Assuming that each of the cluster nodes has some type of Dynamic Multi-Pathing (DMP) solution, removing one Fibre Channel cable from a single cluster node should not affect any cluster resources running on that node, because the underlying DMP solution seamlessly handles the Fibre Channel path failover.

Failure situation: Private IP heartbeat network failure. May mean that the private NICs or the connecting network cables failed.
Outcome: No interruption of service.
Comments: With the standard two-NIC configuration for a cluster node (one NIC for the public cluster network and one NIC for the private heartbeat network), disabling the NIC for the private heartbeat network should not affect the cluster software or the cluster resources, because the cluster software simply routes the heartbeat packets through the public network.

Failure situation: Public IP network failure. May mean that the public NIC or LAN network has failed.
Outcome: Failover. Mirroring continues.
Comments: When the public NIC on the active node or the public LAN fails, clients cannot access the active node, and failover occurs.

Failure situation: Public and private IP or network failure. May mean that the LAN network, including both private and public NIC connections, has failed.
Outcome: No interruption of service. No public LAN access. Mirroring continues.
Comments: The site that owned the quorum resource right before the network partition remains the owner of the quorum resource and is the only surviving cluster node. The cluster software running on the other cluster node self-terminates because it has lost the cluster arbitration for the quorum resource.

Failure situation: Loss of network connection (SAN and LAN), failing both the heartbeat and the connection to storage. May mean that all network and SAN connections are severed, for example if a single pipe is used between buildings for the Ethernet and storage connections.
Outcome: No interruption of service. Disks on the same node are functioning. Mirroring is not working.
Comments: The node/site that owned the quorum resource right before the network partition remains the owner of the quorum resource and is the only surviving cluster node. The cluster software running on the other cluster node self-terminates because it has lost the cluster arbitration for the quorum resource. By default, the Microsoft clustering service (clussvc) tries to auto-start every minute, so after LAN/SAN communication has been re-established, clussvc auto-starts and rejoins the existing cluster.

Failure situation: Storage array failure on Site A or on Site B. May mean that a power cord was unplugged, or that a storage array failure caused the array to stop responding.
Outcome: No interruption of service. Disks on the same node are functioning. Mirroring is not working.
Comments: The campus cluster is divided equally between the two sites, with one array at each site. Completely failing one storage array should not affect the cluster or any cluster resources that are currently online. However, you cannot move any cluster resources between nodes after this storage failure, because neither node can obtain a majority of disks within the cluster disk group (see the sketch after this table).

Failure situation: Site A failure (power). Means that all access to Site A, including the server and storage, is lost.
Outcome: Manual failover.
Comments: If the failed site contains the cluster node that owned the quorum resource, the overall cluster is offline and cannot be brought online on the remaining live site without manual intervention.

Failure situation: Site B failure (power). Means that all access to Site B, including the server and storage, is lost.
Outcome: No interruption of service. Disks on the same node are functioning. Mirroring is not working.
Comments: If the failed site did not contain the cluster node that owned the quorum resource, the cluster remains alive with whatever cluster resources were online on that node right before the site failure.
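Several of the scenarios above turn on whether a node can still claim a majority of the disks in the cluster disk group. The following sketch (plain Python, with a hypothetical four-disk, two-site layout) only illustrates that counting rule; the actual arbitration is performed by the cluster and storage software, not by user code.

```python
# Simplified illustration of the "majority of disks" rule for the cluster disk group.
# The disk counts and site layout are hypothetical.

def can_import(disks_reachable, disks_total):
    """A node can take ownership of the cluster disk group only when it can
    reserve a strict majority of the disks in that group."""
    return disks_reachable > disks_total // 2

# A campus cluster disk group mirrored across two sites, two disks per array.
disks_total = 4

# One storage array has failed: each node can reach only the two disks on its
# own site, so neither node holds a majority and cluster resources cannot be
# moved between nodes.
print(can_import(disks_reachable=2, disks_total=disks_total))   # False

# Both arrays are visible again once the failed array is restored.
print(can_import(disks_reachable=4, disks_total=disks_total))   # True
```

With one array down, each node reaches only half of the disks, which is why cluster resources cannot be moved between nodes until the failed array is restored.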