Failure: Application fault
Description: May mean that the services stopped for an application, a NIC failed, or a database table went offline.
Outcome: Failover
Details: If the services stop because of an application failure, the application automatically fails over to the other site.
Failure: Server failure (Site A)
Description: May mean that a power cord was unplugged, a system hang occurred, or another failure caused the system to stop responding.
Outcome: Failover
Details: Assuming a two-node cluster pair, failing a single node results in a cluster failover. Service is temporarily interrupted for cluster resources that are moved from the failed node to the remaining live node.
Failure: Server failure (Site B)
Description: May mean that a power cord was unplugged, a system hang occurred, or another failure caused the system to stop responding.
Outcome: No interruption of service
Details: Failure of the passive site (Site B) does not interrupt service to the active site (Site A).
Failure: Partial SAN network failure
Description: May mean that SAN Fibre Channel cables were disconnected to Site A or Site B storage.
Outcome: No interruption of service
Details: Assuming that each cluster node has some type of Dynamic Multi-Pathing (DMP) solution, removing one SAN fiber cable from a single cluster node should not affect any cluster resources running on that node, because the underlying DMP solution should handle the SAN fiber path failover seamlessly.
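The path-failover behavior described above can be sketched in a few lines. This is a minimal conceptual model only: the `MultiPathDevice` class and path names are hypothetical, and a real DMP solution performs this failover inside the kernel I/O stack, transparently to applications.

```python
# Conceptual sketch of dynamic multi-pathing (DMP) failover.
# The MultiPathDevice class and path names are hypothetical; a real
# DMP driver does this in the kernel, invisibly to cluster resources.

class MultiPathDevice:
    def __init__(self, paths):
        self.paths = list(paths)   # e.g. two Fibre Channel paths to one array
        self.failed = set()

    def fail_path(self, path):
        """Mark one path as down (cable pulled, switch port failed)."""
        self.failed.add(path)

    def read(self, block):
        # Try each healthy path in preference order; I/O fails only when
        # *all* paths are gone, so losing one cable is invisible.
        for path in self.paths:
            if path not in self.failed:
                return f"block {block} via {path}"
        raise IOError("all paths to storage failed")

dev = MultiPathDevice(["fc_path_A", "fc_path_B"])
dev.fail_path("fc_path_A")        # pull one SAN fiber cable
print(dev.read(42))               # I/O continues over the surviving path
```

The key property is that failover happens per I/O request below the file system, which is why the cluster resources on the node never notice the cable pull.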
Failure: Private IP heartbeat network failure
Description: May mean that the private NICs or the connecting network cables failed.
Outcome: No interruption of service
Details: With the standard two-NIC configuration for a cluster node (one NIC for the public cluster network and one for the private heartbeat network), disabling the NIC for the private heartbeat network should not affect the cluster software or the cluster resources, because the cluster software simply routes the heartbeat packets through the public network.
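The heartbeat rerouting described above amounts to picking the first surviving network from an ordered preference list. A minimal sketch, with hypothetical network names (the actual cluster software performs this selection internally):

```python
# Sketch of heartbeat routing across redundant cluster networks.
# Network names are hypothetical; real cluster software handles this
# failover itself, with the private network preferred for heartbeats.

def next_heartbeat_route(networks, failed):
    """Return the first network still carrying traffic.

    networks: ordered preference list, private heartbeat network first.
    failed:   set of networks known to be down.
    """
    for net in networks:
        if net not in failed:
            return net
    return None   # total network loss -> partition handling takes over

routes = ["private_heartbeat", "public_lan"]
assert next_heartbeat_route(routes, set()) == "private_heartbeat"
# Disable the private heartbeat NIC: heartbeat packets shift to the
# public LAN, so the cluster and its resources are unaffected.
assert next_heartbeat_route(routes, {"private_heartbeat"}) == "public_lan"
```

Only when every network in the list has failed (the partition scenarios later in this table) does the cluster fall back to quorum arbitration.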
Failure: Public IP network failure
Description: May mean that the public NIC or the LAN network has failed.
Outcome: Failover; mirroring continues
Details: When the public NIC on the active node or the public LAN fails, clients cannot access the active node, and failover occurs.
Failure: Public and private IP or network failure
Description: May mean that the LAN network, including both private and public NIC connections, has failed.
Outcome: Network partition; the quorum owner survives
Details: The site that owned the quorum resource right before the network partition remains the owner of the quorum resource and is the only surviving cluster node. The cluster software running on the other cluster node self-terminates because it has lost the cluster arbitration for the quorum resource.
Failure: Loss of network connections (SAN and LAN), failing both the heartbeat and the connection to storage
Description: May mean that all network and SAN connections are severed; for example, if a single pipe is used between buildings for both Ethernet and storage.
Outcome: Network partition; the quorum owner survives
Details: The node/site that owned the quorum resource right before the network partition remains the owner of the quorum resource and is the only surviving cluster node. The cluster software running on the other cluster node self-terminates because it has lost the cluster arbitration for the quorum resource. By default, the Microsoft clustering service (clussvc) tries to auto-start every minute, so after LAN/SAN communication has been re-established, clussvc auto-starts and rejoins the existing cluster.
Failure: Storage array failure on Site A or on Site B
Description: May mean that a power cord was unplugged, or a storage array failure caused the array to stop responding.
Outcome: No interruption of service
Details: The campus cluster is divided equally between the two sites, with one array at each site. Completely failing one storage array should have no effect on the cluster or on any cluster resources that are online. However, you cannot move any cluster resources between nodes after this storage failure, because neither node can obtain a majority of disks within the cluster disk group.
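The "majority of disks" constraint above is worth making concrete. With the disk group split evenly across two arrays, losing either array leaves each node able to see exactly half the disks, which is not a strict majority. A sketch with illustrative disk counts (the counts and function are hypothetical; the majority rule itself is what the passage describes):

```python
# Sketch of the disk-majority rule for a campus cluster disk group.
# Counts are illustrative: two disks at Site A plus two at Site B.

def can_import_disk_group(visible_disks, total_disks):
    """A node may take over the disk group only with a strict majority."""
    return visible_disks > total_disks // 2

total = 4                                   # 2 disks per site, 2 sites
assert can_import_disk_group(4, total)      # all disks visible: moves allowed
assert not can_import_disk_group(2, total)  # one array lost: exactly half,
                                            # not a majority, so resources
                                            # cannot be moved between nodes
```

This explains the asymmetry in the row above: resources that are already online keep running against the surviving mirror, but a *move* requires re-importing the disk group, which the majority rule blocks.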
Failure: Site A failure (power)
Description: Means that all access to Site A, including server and storage, is lost.
Outcome: Manual failover
Details: If the failed site contains the cluster node that owned the quorum resource, the overall cluster is offline and cannot be brought online on the remaining live site without manual intervention.
Failure: Site B failure (power)
Description: Means that all access to Site B, including server and storage, is lost.
Outcome: No interruption of service
Details: If the failed site did not contain the cluster node that owned the quorum resource, the cluster remains alive with whatever cluster resources were online on that node right before the site failure.