This section describes how VCS works with VxVM to provide high availability in a campus cluster environment.
In a campus cluster setup, VxVM automatically mirrors volumes across sites. To enhance read performance, VxVM reads from the plexes at the local site where the application is running. VxVM writes to the plexes at both sites.
In the event of a storage failure at a site, VxVM detaches all the disks at the failed site from the disk group to maintain data consistency. When the failed storage comes back online, VxVM automatically reattaches the site to the disk group and recovers the plexes.
See the Veritas Volume Manager Administrator's Guide for more information.
When service group or system faults occur, VCS fails over service groups based on the values you set for the service group attributes SystemZones and AutoFailOver.
For a campus cluster setup, you must define the SystemZones attribute in such a way that the nodes at each site are grouped together. Depending on the value of the AutoFailOver attribute, VCS failover behavior is as follows: if the value is set to 0, VCS requires administrator intervention to initiate a failover when a node or an application faults; if the value is set to 1, VCS fails over the service group automatically, and the SystemZones attribute directs VCS to prefer a failover target within the same zone before it selects a node in the other zone.
A sample definition for these service group attributes in the VCS main.cf file is as follows:
SystemList = { node1=0, node2=1, node3=2, node4=3 }
SystemZones = { node1=0, node2=0, node3=1, node4=1 }
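These attributes typically appear inside a service group definition in main.cf. The following is a minimal sketch under that assumption; the group name oragroup and the AutoFailOver value shown here are hypothetical:

group oragroup (
    SystemList = { node1=0, node2=1, node3=2, node4=3 }
    SystemZones = { node1=0, node2=0, node3=1, node4=1 }
    AutoFailOver = 1
    )

In this definition, node1 and node2 form zone 0 at one site, and node3 and node4 form zone 1 at the other site. With AutoFailOver set to 1, VCS prefers to fail the service group over within a site before it selects a node at the other site.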
The following failure scenarios list the possible failures in a campus cluster and describe how VCS recovers from each of them.
Node failure: A node at a site fails, or all the nodes at a site fail. If the value of the AutoFailOver attribute is set to 1, VCS fails over the service group to another available node, preferring a node within the same zone as defined by the SystemZones attribute. If the value of the AutoFailOver attribute is set to 0, VCS requires administrator intervention to initiate a failover in both cases of node failure.
Storage failure (one or more disks at a site fail): VCS does not fail over the service group when such a storage failure occurs. VxVM detaches the site from the disk group if any volume in that disk group does not have at least one valid plex at the site where the disks failed. VxVM does not detach the site from the disk group in the following cases: when none of the plexes are configured on the failed disks, or when some of the plexes are configured on the failed disks and at least one plex for each volume survives at each site.
If only some of the failed disks come online and the vxrelocd daemon is running, VxVM relocates the remaining failed disks to any available disks. VxVM then automatically reattaches the site to the disk group and resynchronizes the plexes to recover the volumes. If all the failed disks come online, VxVM automatically reattaches the site to the disk group and resynchronizes the plexes to recover the volumes.
Storage failure (all disks at all sites fail): VCS acts based on the value of the DiskGroup agent's PanicSystemOnDGLoss attribute. See the Veritas Bundled Agents Reference Guide for more information.
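As an illustration, this attribute is set on the DiskGroup resource in main.cf. The following is a minimal sketch; the resource name, the disk group name, and the attribute value are hypothetical, and the range of supported values depends on your VCS release:

DiskGroup oradg_res (
    DiskGroup = oradatadg
    PanicSystemOnDGLoss = 1
    )

With a nonzero value, the agent panics the system when it detects that the disk group is lost, which forces the application to fail over rather than continue running without its storage.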
Site failure: All nodes and storage at a site fail. Depending on the value of the AutoFailOver attribute, VCS fails over the Oracle service group as follows: if the value is set to 1, VCS fails over the Oracle service group to a system at the other site; if the value is set to 0, VCS requires administrator intervention to initiate the failover.
Because the storage at the failed site is inaccessible, VCS imports the disk group in the application service group with all devices at the failed site marked as NODEVICE. When the storage at the failed site comes online, VxVM automatically reattaches the site to the disk group and resynchronizes the plexes to recover the volumes.
Network failure (nodes at each site lose connectivity to the nodes at the other site): The failure of the private interconnects between the nodes can result in a split-brain scenario and cause data corruption. Review the details on other possible causes of split brain and how I/O fencing protects shared data from corruption. Symantec recommends that you configure I/O fencing to prevent data corruption in campus clusters.
Network and storage failure (nodes at each site lose connectivity to the storage and to the nodes at the other site): Symantec recommends that you configure I/O fencing to prevent split-brain and serial split-brain conditions.
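I/O fencing is enabled at the cluster level in main.cf. The following is a minimal sketch, assuming SCSI-3 based fencing; the cluster name is hypothetical, and fencing also requires the vxfen driver and coordination points to be configured outside of main.cf:

cluster clus1 (
    UseFence = SCSI3
    )

With UseFence set to SCSI3, the subcluster that wins the race for the coordination points continues to run after a network partition, and the losing subcluster panics, which prevents both sides from writing to the shared storage.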