Migrating from PowerPath to DMP on a Virtual I/O server for a dual-VIOS configuration

The following example procedure illustrates a migration from PowerPath to DMP on the Virtual I/O server in a configuration with two VIO servers.

Example configuration values:

Managed System:  dmpviosp6 
VIO server1: dmpvios1 
VIO server2: dmpvios2 
VIO clients: dmpvioc1 
SAN LUNs: EMC Clariion array 
Current multi-pathing solution on VIO server: EMC PowerPath

To migrate dmpviosp6 from PowerPath to DMP

  1. Before migrating, back up the Virtual I/O server so that you can revert the system if issues occur.

    See the IBM website for information about backing up the Virtual I/O server.
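
    For example, you can create a backup from the padmin shell with the backupios command; the backup file name shown here is only an illustration:

    $ backupios -file /home/padmin/dmpvios1_backup.mksysb -mksysb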

  2. Shut down all of the VIO clients that are serviced by the VIO server.
    dmpvioc1$ halt
  3. Log in to the VIO server partition. Use the following command to access the non-restricted root shell. All subsequent commands in this procedure must be invoked from the non-restricted shell.
    $ oem_setup_env
  4. The following command shows the lsmap output before the PowerPath VTD devices are migrated to DMP:
    dmpvios1$ /usr/ios/cli/ioscli lsmap -all
    SVSA           Physloc                      Client Partition ID 
    -------------- ---------------------------- -------------------- 
    vhost0         U9117.MMA.0686502-V2-C11     0x00000004 
    
    VTD              P0 
    Status           Available 
    LUN              0x8100000000000000 
    Backing device   hdiskpower0 
    Physloc          U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4003403700000000
    
    VTD              P1 
    Status           Available 
    LUN              0x8200000000000000 
    Backing device   hdiskpower1
    Physloc          U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L400240C100000000
    
    VTD              P2 
    Status           Available 
    LUN              0x8300000000000000 
    Backing device   hdiskpower2
    Physloc          U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4002409A00000000
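
    Optionally, you can also record the PowerPath path state at this point so that you can compare it with the DMP view after the migration; powermt display is the standard PowerPath query command:

    dmpvios1$ powermt display dev=all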
  5. Unconfigure all VTD devices from all virtual adapters on the system:
    dmpvios1$ rmdev -p vhost0
    P0 Defined
    P1 Defined
    P2 Defined

    Repeat this step for all other virtual adapters.
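
    A minimal sketch that covers every virtual SCSI server adapter in one pass, run from the non-restricted root shell and assuming the adapters follow the default vhostN naming:

    for vadapter in $(lsdev -Cc adapter -F name | grep "^vhost")
    do
        # Move the VTDs on this adapter to the Defined state
        rmdev -p $vadapter
    done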

  6. Migrate the devices from PowerPath to DMP.

    Unmount any file systems and vary off the volume groups that reside on the PowerPath devices (a short unmount sketch appears at the end of this step).

    Display the volume groups (vgs) in the configuration:

    dmpvios1$ lsvg 
    rootvg 
    brunovg 
    dmpvios1$ lsvg -p brunovg
    brunovg: 
    PV_NAME     PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION 
    hdiskpower3 active   511       501    103..92..102..102..102

    Use the varyoffvg command on all affected vgs:

    dmpvios1$ varyoffvg brunovg

    Unmanage the EMC Clariion array from PowerPath control:

    dmpvios1$ powermt unmanage class=clariion
    hdiskpower0 deleted
    hdiskpower1 deleted
    hdiskpower2 deleted
    hdiskpower3 deleted
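
    If any file systems are mounted on the PowerPath devices, unmount them before you run varyoffvg. A minimal sketch for the example volume group brunovg, run from the non-restricted root shell; adjust the volume group name for your configuration:

    for fs in $(lsvgfs brunovg)
    do
        # Unmount each file system that belongs to the volume group
        umount $fs
    done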
  7. Reboot VIO server1.
    dmpvios1$ reboot
  8. After VIO server1 reboots, verify that all of the existing volume groups and MPIO VTDs on VIO server1 have successfully migrated to DMP.
    dmpvios1$ lsvg -p brunovg
    brunovg: 
    PV_NAME      PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION 
    emc_clari0_138 active  511      501    103..92..102..102..102

    Verify the mappings of the LUNs on the migrated volume groups:

    dmpvios1$ lsmap -all
    SVSA           Physloc                    Client Partition ID 
    -------------- -------------------------- ------------------ 
    vhost0         U9117.MMA.0686502-V2-C11   0x00000000 
    VTD              P0 
    Status           Available 
    LUN              0x8100000000000000 
    Backing device   emc_clari0_130
    Physloc 
    
    VTD              P1 
    Status           Available 
    LUN              0x8200000000000000 
    Backing device   emc_clari0_136
    Physloc 
    
    VTD              P2 
    Status           Available 
    LUN              0x8300000000000000 
    Backing device   emc_clari0_137 
    Physloc 
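
    You can also confirm from the DMP side that the array is now claimed by DMP. A minimal check; the enclosure name emc_clari0 is inferred from the device names in the example output and may differ on your system:

    dmpvios1$ vxdmpadm listenclosure all
    dmpvios1$ vxdmpadm getdmpnode enclosure=emc_clari0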
  9. Repeat step 1 to step 8 for VIO server2.
  10. Start all of the VIO clients.
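
    After each client comes back up, you can verify that its virtual SCSI disks again have paths through both VIO servers. A minimal check from the client, assuming standard AIX MPIO over virtual SCSI:

    dmpvioc1$ lspath

    Each virtual disk should report an Enabled path through each of the two vscsi adapters (one per VIO server).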