Example of applying load balancing in a SAN

This example describes how to use Dynamic Multi-Pathing (DMP) to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches.

As shown in this sample output from the vxdisk list command, the device hdisk18 has eight primary paths:

# vxdisk list hdisk18

Device: hdisk18
  .
  .
  .
numpaths: 8
hdisk11 state=enabled type=primary
hdisk12 state=enabled type=primary
hdisk13 state=enabled type=primary
hdisk14 state=enabled type=primary
hdisk15 state=enabled type=primary
hdisk16 state=enabled type=primary
hdisk17 state=enabled type=primary
hdisk18 state=enabled type=primary

In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
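If you want to confirm the path count from a script rather than by eye, the enabled primary paths can be counted with a simple awk filter. This is an illustrative sketch, not part of the product: it embeds the sample path listing shown above; in practice you would pipe the output of vxdisk list hdisk18 into the same filter.

```shell
#!/bin/sh
# Sketch: count enabled primary paths in `vxdisk list` output.
# The sample text below is the path listing from this example.
vxdisk_output='hdisk11 state=enabled type=primary
hdisk12 state=enabled type=primary
hdisk13 state=enabled type=primary
hdisk14 state=enabled type=primary
hdisk15 state=enabled type=primary
hdisk16 state=enabled type=primary
hdisk17 state=enabled type=primary
hdisk18 state=enabled type=primary'

# Count lines that are both enabled and primary.
printf '%s\n' "$vxdisk_output" |
    awk '/state=enabled/ && /type=primary/ { n++ } END { print n }'
# prints: 8
```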

The first step is to enable the gathering of DMP statistics:

# vxdmpadm iostat start

Next, use the dd command to apply a read workload to the volume:

# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &
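Because the workload runs in the background, it is worth capturing its process ID so it can be stopped cleanly when the test is over. The sketch below substitutes /dev/zero for the volume path purely so it runs on any system; against a real volume you would use /dev/vx/rdsk/mydg/myvol1 as above.

```shell
#!/bin/sh
# Sketch: start a background read workload and record its PID.
# /dev/zero stands in for the example volume /dev/vx/rdsk/mydg/myvol1
# so that this sketch is runnable anywhere.
dd if=/dev/zero of=/dev/null bs=64k count=1000 2>/dev/null &
WORKLOAD_PID=$!

# ... observe DMP statistics while the workload runs ...

# Stop the workload (or let it finish and reap it with wait).
wait "$WORKLOAD_PID"
```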

Running the vxdmpadm iostat command to display the DMP statistics for the device shows that all I/O is being directed to one path, hdisk18:

# vxdmpadm iostat show dmpnodename=hdisk18 interval=5 count=2
    .
    .
    .
cpu usage = 11294us per cpu memory = 32768b
             OPERATIONS           KBYTES         AVG TIME(ms)
PATHNAME   READS   WRITES   READS   WRITES   READS      WRITES
hdisk11    0       0        0       0         0.00        0.00
hdisk12    0       0        0       0         0.00        0.00
hdisk13    0       0        0       0         0.00        0.00
hdisk14    0       0        0       0         0.00        0.00
hdisk15    0       0        0       0         0.00        0.00
hdisk16    0       0        0       0         0.00        0.00
hdisk17    0       0        0       0         0.00        0.00
hdisk18    10986   0        5493    0         0.41        0.00
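A script can pick out the busy path from this output automatically. The following sketch (again illustrative, using a shortened copy of the sample statistics above and assuming the column layout shown, with READS and WRITES operation counts in columns 2 and 3) prints every path with nonzero I/O:

```shell
#!/bin/sh
# Sketch: find which path is carrying I/O in `vxdmpadm iostat show` output.
# Sample rows from the statistics above (truncated to three paths).
iostat_output='hdisk11    0       0        0       0         0.00        0.00
hdisk17    0       0        0       0         0.00        0.00
hdisk18    10986   0        5493    0         0.41        0.00'

# Print any path whose read or write operation count is nonzero.
printf '%s\n' "$iostat_output" |
    awk '$2 + $3 > 0 { print $1 }'
# prints: hdisk18
```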

Use the vxdmpadm getattr command to display the I/O policy for the enclosure that contains the device:

# vxdmpadm getattr enclosure ENC0 iopolicy

ENCLR_NAME     DEFAULT          CURRENT
============================================
ENC0           MinimumQ         Single-Active

This output shows that the I/O policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.

To balance the I/O load across the multiple primary paths, the policy is set to round-robin as shown here:

# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy

ENCLR_NAME    DEFAULT            CURRENT
============================================
ENC0          MinimumQ           Round-Robin
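In an automated procedure it is useful to confirm that the policy change took effect. The sketch below extracts the CURRENT column from the getattr output; it embeds the sample output shown above and assumes the three-column layout, so in practice you would pipe vxdmpadm getattr enclosure ENC0 iopolicy into the same filter.

```shell
#!/bin/sh
# Sketch: extract the CURRENT I/O policy for enclosure ENC0 from
# `vxdmpadm getattr` output (sample copied from above).
getattr_output='ENCLR_NAME    DEFAULT            CURRENT
============================================
ENC0          MinimumQ           Round-Robin'

# The CURRENT policy is the third field on the ENC0 row.
current=$(printf '%s\n' "$getattr_output" | awk '$1 == "ENC0" { print $3 }')
echo "$current"
# prints: Round-Robin
```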

The DMP statistics are now reset:

# vxdmpadm iostat reset

With the workload still running, the effect of changing the I/O policy to balance the load across the primary paths can now be seen:

# vxdmpadm iostat show dmpnodename=hdisk18 interval=5 count=2
    .
    .
    .
cpu usage = 14403us per cpu memory = 32768b
               OPERATIONS           KBYTES          AVG TIME(ms)
PATHNAME     READS   WRITES   READS    WRITES   READS      WRITES
hdisk11      2041    0        1021     0         0.39        0.00
hdisk12      1894    0        947      0         0.39        0.00
hdisk13      2008    0        1004     0         0.39        0.00
hdisk14      2054    0        1027     0         0.40        0.00
hdisk15      2171    0        1086     0         0.39        0.00
hdisk16      2095    0        1048     0         0.39        0.00
hdisk17      2073    0        1036     0         0.39        0.00
hdisk18      2042    0        1021     0         0.39        0.00
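How evenly the load is spread can be quantified by comparing the smallest and largest per-path read counts. This sketch embeds the sample statistics above (column 2 holds the READS operation count) and reports the minimum and maximum, plus their ratio as a rough balance measure:

```shell
#!/bin/sh
# Sketch: measure read balance across paths from the statistics above.
iostat_output='hdisk11      2041    0        1021     0         0.39        0.00
hdisk12      1894    0        947      0         0.39        0.00
hdisk13      2008    0        1004     0         0.39        0.00
hdisk14      2054    0        1027     0         0.40        0.00
hdisk15      2171    0        1086     0         0.39        0.00
hdisk16      2095    0        1048     0         0.39        0.00
hdisk17      2073    0        1036     0         0.39        0.00
hdisk18      2042    0        1021     0         0.39        0.00'

# Track the smallest and largest read counts; a max/min ratio near 1.0
# indicates the round-robin policy is spreading the load evenly.
printf '%s\n' "$iostat_output" |
    awk 'NR == 1 { min = max = $2 }
         { if ($2 < min) min = $2; if ($2 > max) max = $2 }
         END { printf "min=%d max=%d ratio=%.2f\n", min, max, max / min }'
```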

The enclosure can be returned to the single active I/O policy by entering the following command:

# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive