Specifying the I/O policy

You can use the vxdmpadm setattr command to change the I/O policy for distributing I/O load across multiple paths to a disk array or enclosure. You can set policies for an enclosure (for example, HDS01), for all enclosures of a particular type (such as HDS), or for all enclosures of a particular array type (such as A/A for Active/Active, or A/P for Active/Passive).
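
To display an enclosure's default and current policy before changing it, use the vxdmpadm getattr command (the same form appears in the load-balancing example later in this section). For example:

# vxdmpadm getattr enclosure enc1 iopolicy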

Warning: Starting with release 4.1 of VxVM, I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are persistent across reboots of the system. Do not edit this file yourself.

The following policies may be set:

adaptive 

This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time. For example, I/O from/to a database may exhibit both long transfers (table scans) and short transfers (random lookups). The policy is also useful for a SAN environment where different paths may have different numbers of hops. No further configuration is possible as this policy is automatically managed by DMP. 

In this example, the adaptive I/O policy is set for the enclosure enc1:

# vxdmpadm setattr enclosure enc1 \
  iopolicy=adaptive

balanced [partitionsize=size]

This policy is designed to optimize the use of caching in disk drives and RAID controllers. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware. During normal operation, the disks (or LUNs) are logically divided into a number of regions (or partitions), and I/O from/to a given region is sent on only one of the active paths. Should that path fail, the workload is automatically redistributed across the remaining paths. 

You can use the size argument to the partitionsize attribute to specify the partition size. The partition size in blocks is adjustable in powers of 2 from 2 up to 2^31. 

The default value for the partition size is 2048 blocks (1MB). A value that is not a power of 2 is silently rounded down to the nearest acceptable value. Specifying a partition size of 0 is equivalent to the default partition size of 2048 blocks (1MB). For example, the suggested partition size for a Hitachi HDS 9960 A/A array is from 32,768 to 131,072 blocks (16MB to 64MB) for an I/O activity pattern that consists mostly of sequential reads or writes. 
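
For example, a command of the following form would apply the lower end of that suggested range (the enclosure name HDS0 is illustrative; substitute the name reported for your array):

# vxdmpadm setattr enclosure HDS0 \
  iopolicy=balanced partitionsize=32768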


Note: The benefit of this policy is lost if the value is set larger than the cache size.


The default value can be changed by adjusting the value of the dmp_pathswitch_blks_shift tunable parameter. 

See "Tunable parameters" on page 476. 

The next example sets the balanced I/O policy with a partition size of 4096 blocks (2MB) on the enclosure enc0: 

# vxdmpadm setattr enclosure enc0 \
  iopolicy=balanced partitionsize=4096

minimumq 

This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. This is suitable for low-end disks or JBODs where a significant track cache does not exist. No further configuration is possible as DMP automatically determines the path with the shortest queue. 

The following example sets the I/O policy to minimumq for a JBOD: 

# vxdmpadm setattr enclosure Disk \
  iopolicy=minimumq

This is the default I/O policy for A/A arrays. 

priority 

This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually. You can assign priorities to each path based on your knowledge of the configuration and performance characteristics of the available paths, and of other aspects of your system. 

See "Setting the attributes of the paths to an enclosure" on page 157. 

In this example, the I/O policy is set to priority for all SENA arrays: 

# vxdmpadm setattr arrayname SENA \
  iopolicy=priority

round-robin 

This policy shares I/O equally between the paths in a round-robin sequence. For example, if there are three paths, the first I/O request would use one path, the second would use a different path, the third would be sent down the remaining path, the fourth would go down the first path, and so on. No further configuration is possible as this policy is automatically managed by DMP. 

The next example sets the I/O policy to round-robin for all Active/Active arrays: 

# vxdmpadm setattr arraytype A/A \
  iopolicy=round-robin

This is the default I/O policy for A/P and Asymmetric Active/Active (A/A-A) arrays. 

singleactive 

This policy routes I/O down the single active path. This policy can be configured for A/P arrays with one active path per controller, where the other paths are used in case of failover. If configured for A/A arrays, there is no load balancing across the paths, and the alternate paths are only used to provide high availability (HA). If the currently active path fails, I/O is switched to an alternate active path. No further configuration is possible as the single active path is selected by DMP. 

The following example sets the I/O policy to singleactive for JBOD disks: 

# vxdmpadm setattr arrayname DISK \
  iopolicy=singleactive

Scheduling I/O on the paths of an Asymmetric Active/Active array

You can specify the use_all_paths attribute in conjunction with the adaptive, balanced, minimumq, priority and round-robin I/O policies to specify whether I/O requests are to be scheduled on the secondary paths in addition to the primary paths of an Asymmetric Active/Active (A/A-A) array. Depending on the characteristics of the array, the consequent improved load balancing can increase the total I/O throughput. However, this feature should only be enabled if recommended by the array vendor. It has no effect for array types other than A/A-A.

For example, the following command sets the balanced I/O policy with a partition size of 4096 blocks (2MB) on the enclosure enc0, and allows scheduling of I/O requests on the secondary paths:

# vxdmpadm setattr enclosure enc0 iopolicy=balanced \
  partitionsize=4096 use_all_paths=yes

The default setting for this attribute is use_all_paths=no.
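
If your release supports querying the attribute directly, the current setting can be displayed by specifying use_all_paths to the vxdmpadm getattr command:

# vxdmpadm getattr enclosure enc0 use_all_paths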

Example of applying load balancing in a SAN

This example describes how to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches. As can be seen in this sample output from the vxdisk list command, the device c3t2d15s2 has eight primary paths:

# vxdisk list c3t2d15s2
Device: c3t2d15s2
...
numpaths: 8
c2t0d15s2 state=enabled type=primary
c2t1d15s2 state=enabled type=primary
c3t1d15s2 state=enabled type=primary
c3t2d15s2 state=enabled type=primary
c4t2d15s2 state=enabled type=primary
c4t3d15s2 state=enabled type=primary
c5t3d15s2 state=enabled type=primary
c5t4d15s2 state=enabled type=primary

In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
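
One way to confirm this layout (output details vary by release) is to list the volume record with the vxprint command:

# vxprint -g mydg -ht myvol1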

The first step is to enable the gathering of DMP statistics:

# vxdmpadm iostat start

Next, the dd command is used to apply an input workload from the volume:

# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &

By running the vxdmpadm iostat command to display the DMP statistics for the device, it can be seen that all I/O is being directed to one path, c5t4d15s2:

# vxdmpadm iostat show dmpnodename=c3t2d15s2 interval=5 count=2
...
cpu usage = 11294us    per cpu memory = 32768b
                     OPERATIONS          KBYTES        AVG TIME(ms)
PATHNAME         READS   WRITES     READS   WRITES     READS     WRITES
c2t0d15s2            0        0         0        0   0.000000   0.000000
c2t1d15s2            0        0         0        0   0.000000   0.000000
c3t1d15s2            0        0         0        0   0.000000   0.000000
c3t2d15s2            0        0         0        0   0.000000   0.000000
c4t2d15s2            0        0         0        0   0.000000   0.000000
c4t3d15s2            0        0         0        0   0.000000   0.000000
c5t3d15s2            0        0         0        0   0.000000   0.000000
c5t4d15s2        10986        0      5493        0   0.411069   0.000000

The vxdmpadm command is used to display the I/O policy for the enclosure that contains the device:

# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME     DEFAULT         CURRENT
============================================
ENC0           Round-Robin     Single-Active

This shows that the policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.

To balance the I/O load across the multiple primary paths, the policy is set to round-robin as shown here:

# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME     DEFAULT         CURRENT
============================================
ENC0           Round-Robin     Round-Robin

The DMP statistics are now reset:

# vxdmpadm iostat reset

With the workload still running, the effect of changing the I/O policy to balance the load across the primary paths can now be seen:

# vxdmpadm iostat show dmpnodename=c3t2d15s2 interval=5 count=2
...
cpu usage = 14403us    per cpu memory = 32768b
                     OPERATIONS          KBYTES        AVG TIME(ms)
PATHNAME         READS   WRITES     READS   WRITES     READS     WRITES
c2t0d15s2         2041        0      1021        0   0.396670   0.000000
c2t1d15s2         1894        0       947        0   0.391763   0.000000
c3t1d15s2         2008        0      1004        0   0.393426   0.000000
c3t2d15s2         2054        0      1027        0   0.402142   0.000000
c4t2d15s2         2171        0      1086        0   0.390424   0.000000
c4t3d15s2         2095        0      1048        0   0.391221   0.000000
c5t3d15s2         2073        0      1036        0   0.390927   0.000000
c5t4d15s2         2042        0      1021        0   0.392752   0.000000

The enclosure can be returned to the single active I/O policy by entering the following command:

# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive
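
The getattr command used earlier can be repeated to confirm that the current policy is again Single-Active:

# vxdmpadm getattr enclosure ENC0 iopolicy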