Connectivity policy of shared disk groups

A shared disk group provides concurrent read and write access to the volumes that it contains for all nodes in a cluster. A shared disk group can only be created on the master node.

The practical implication of this design is that I/O failure on any node results in the configuration of all nodes being changed. This is known as the global detach policy. However, in some cases, it is not desirable to have all nodes react in this way to I/O failure. To address this, an alternate way of responding to I/O failures, known as the local detach policy, was introduced.

The local detach policy is intended for use with shared mirrored volumes in a cluster. This policy prevents I/O failure on a single slave node from causing a plex to be detached, which would require the plex to be resynchronized when it is subsequently reattached. The local detach policy is available for disk groups that have a version number of 70 or greater.

For small mirrored volumes, non-mirrored volumes, volumes that use hardware mirrors, and volumes in private disk groups, there is no benefit in configuring the local detach policy. In most cases, it is recommended that you use the default global detach policy.

The detach policies have no effect if the master node loses access to all copies of the configuration database and logs in a disk group. If this happened in releases prior to 4.1, the master node always disabled the disk group. Release 4.1 introduces the disk group failure policy, which allows you to change this behavior for critical disk groups. This policy is only available for disk groups that have a version number of 120 or greater.
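Both the local detach policy and the disk group failure policy therefore require a sufficiently recent disk group version. As a minimal sketch (diskgroup is a placeholder name, and the exact output format varies by release), you can display the current version with vxdg list and raise it with vxdg upgrade:

# vxdg list diskgroup
# vxdg upgrade diskgroup

The vxdg upgrade command raises the disk group to the highest version supported by the installed release, so run it only after confirming that all nodes in the cluster are running a compatible release.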

See "Global detach policy" on page 409.

See "Local detach policy" on page 409.

See "Disk group failure policy" on page 410.

See "Guidelines for choosing detach and failure policies" on page 411.

Global detach policy
Warning: The global detach policy must be selected when Dynamic MultiPathing (DMP) is used to manage multipathing on Active/Passive arrays. This ensures that all nodes correctly coordinate their use of the active path.

The global detach policy is the traditional and default policy for all nodes in the configuration. If there is a read or write I/O failure on a slave node, the master node performs the usual I/O recovery operations to repair the failure, and the plex is detached cluster-wide. All nodes remain in the cluster and continue to perform I/O, but the redundancy of the mirrors is reduced. When the problem that caused the I/O failure has been corrected, the mirrors that were detached must be recovered before the redundancy of the data can be restored.
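When the failure has been repaired, detached plexes are typically recovered either automatically or with the vxrecover utility. A hedged sketch, assuming a disk group named diskgroup (recovery may also be started automatically, depending on the configuration):

# vxrecover -b -g diskgroup

The -b option runs the recovery operations in the background.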

Local detach policy
Warning: Do not use the local detach policy if you use the VCS agents that monitor the cluster functionality of Veritas Volume Manager, and which are provided with Veritas Storage Foundation™ for Cluster File System HA and Veritas Storage Foundation for databases HA. These agents do not notify VCS about local failures.

The local detach policy is designed to support failover applications in large clusters where the redundancy of the volume is more important than the number of nodes that can access the volume. If there is a write failure on a slave node, the master node performs the usual I/O recovery operations to repair the failure, and additionally contacts all the nodes to see if the disk is still accessible to them. If the write failure is not seen by all the nodes, I/O is stopped for the node that first saw the failure, and the application using the volume is also notified about the failure.

If required, configure the cluster management software to move the application to a different node, and/or remove the node that saw the failure from the cluster. The volume continues to return write errors, as long as one mirror of the volume has an error. The volume continues to satisfy read requests as long as one good plex is available.

If the reason for the I/O error is corrected and the node is still a member of the cluster, it can resume performing I/O from/to the volume without affecting the redundancy of the data.

The vxdg command can be used to set the disk detach policy on a shared disk group.

See "Setting the disk detach policy on a shared disk group" on page 431.

Cluster behavior under I/O failure to a mirrored volume for different disk detach policies summarizes the effect on a cluster of I/O failure to the disks in a mirrored volume.

Cluster behavior under I/O failure to a mirrored volume for different disk detach policies

Failure of path to one disk in a volume for a single node
  Local (diskdetpolicy=local): Reads fail only if no plexes remain available to the affected node. Writes to the volume fail.
  Global (diskdetpolicy=global): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.

Failure of paths to all disks in a volume for a single node
  Local (diskdetpolicy=local): I/O fails for the affected node.
  Global (diskdetpolicy=global): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.

Failure of one or more disks in a volume for all nodes
  Local (diskdetpolicy=local): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
  Global (diskdetpolicy=global): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.

Disk group failure policy

The local detach policy by itself is insufficient to determine the desired behavior if the master node loses access to all disks that contain copies of the configuration database and logs. In this case, the disk group is disabled. As a result, the other nodes in the cluster also lose access to the volume. In release 4.1, the disk group failure policy is introduced to determine the behavior of the master node in such cases.

Behavior of master node for different failure policies shows how the behavior of the master node changes according to the setting of the failure policy.

Behavior of master node for different failure policies

Master node loses access to all copies of the logs
  Leave (dgfailpolicy=leave): The master node panics with the message "klog update failed" for a failed kernel-initiated transaction, or "cvm config update failed" for a failed user-initiated transaction.
  Disable (dgfailpolicy=dgdisable): The master node disables the disk group.

The behavior of the master node under the disk group failure policy is independent of the setting of the disk detach policy. If the disk group failure policy is set to leave, all nodes panic in the unlikely case that none of them can access the log copies.

The vxdg command can be used to set the failure policy on a shared disk group.

See "Setting the disk group failure policy on a shared disk group" on page 431.

Guidelines for choosing detach and failure policies

In most cases it is recommended that you use the global detach policy. In particular, use the global detach policy if DMP is used to manage multipathing on Active/Passive arrays, if you use the VCS agents that monitor the cluster functionality of Veritas Volume Manager, or if the disk group contains only small mirrored, non-mirrored, or hardware-mirrored volumes.

The local detach policy may be suitable for failover applications in large clusters where the redundancy of the volume is more important than the number of nodes that can access it.

If you have a critical disk group that you do not want to become disabled if the master node loses access to the copies of the logs, set the disk group failure policy to leave. This prevents I/O failure on the master node from disabling the disk group. However, critical applications running on the master node fail if they lose access to the other shared disk groups. In such a case, it may be preferable to set the policy to dgdisable, and to allow the disk group to be disabled.

The default settings for the detach and failure policies are global and dgdisable respectively. You can use the vxdg command to change both the detach and failure policies on a shared disk group, as shown in this example:

# vxdg -g diskgroup set diskdetpolicy=local dgfailpolicy=leave
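To confirm the policies that are currently in effect, you can display the detailed record for the disk group; in recent releases the output includes the detach and failure policy settings, although the exact field names may vary:

# vxdg list diskgroup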