How to implement physical to virtual migration (P2V)

To migrate data from a physical server to a virtualized guest, the LUNs are first physically connected to the host, and then mapped in KVM from the host to the guest.

This use case is very similar to the server consolidation use case, and the procedures are largely the same; physical to virtual migration is the process used to achieve server consolidation.

This use case requires Veritas Storage Foundation HA or Veritas Storage Foundation Cluster File System HA in the KVM host and Veritas Storage Foundation in the KVM guest. For setup information:

See Installing and configuring storage solutions in the host.

See Installing and configuring storage solutions in the KVM guest.

There are two options:

To implement physical to virtual migration with Storage Foundation in the host and guest

  1. Find the Linux device IDs of the devices which need mapping.
    # vxdg list diskgroup
  2. For each disk in the disk group, find its subpaths and the corresponding persistent device links (a combined sketch follows this procedure):
    # vxdmpadm getsubpaths dmpnodename=device
    # ls -al /dev/disk/by-id/* | grep subpath
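
The two commands above can be combined in a small shell loop. The following is a minimal sketch only: the disk group name data_dg is an assumption, and the awk field positions assume the default vxdisk and vxdmpadm output layout, so adjust them if your output differs.

    # List the by-id links for every subpath of every disk in the
    # (assumed) disk group data_dg.
    for dmpnode in $(vxdisk -g data_dg list | awk 'NR>1 {print $1}'); do
        for subpath in $(vxdmpadm getsubpaths dmpnodename=$dmpnode | \
                awk 'NR>2 {print $1}'); do
            ls -al /dev/disk/by-id/* | grep "$subpath"
        done
    done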

If Storage Foundation is not installed on the host, identify the LUNs which require mapping by their serial numbers before decommissioning the physical server. The LUNs can then be mapped to the guest using the persistent "by-path" device links.

To implement physical to virtual migration if Storage Foundation is not installed in the host

  1. On the physical server, identify the LUNs which must be mapped on the KVM host.
    • Collect a list of disks and associated disk groups.

      # vxdisk -o alldgs list
      DEVICE       TYPE           DISK      GROUP     STATUS
      disk_1       auto:none      -         -         online invalid
      sda          auto:none      -         -         online invalid
      3pardata0_2  auto:cdsdisk   disk01    data_dg   online
      3pardata0_3  auto:cdsdisk   disk02    data_dg   online
    • Collect a list of the disks and the disks' serial numbers.

      # vxdisk -p -x LUN_SERIAL_NO list
      DEVICE               LUN_SERIAL_NO
      disk_1               3JA9PB27
      sda                  0010B9FF111B5205
      3pardata0_2          2AC00002065C
      3pardata0_3          2AC00003065C
  2. Deport the disk group on the physical machine.
  3. Map the LUNs to the virtualization host.

    On the virtualization host, identify the LUNs which were part of the disk group by their serial numbers. The udev database can be used to identify the devices on the host which need to be mapped.

    # udevadm info --export-db | grep -v part | \
          grep -i DEVLINKS=.*200173800013420d0.* | cut -d\  -f 4
    /dev/disk/by-path/pci-0000:0a:03.0-fc-0x20210002ac00065c:0x0020000
    /dev/disk/by-path/pci-0000:0a:03.1-fc-0x21210002ac00065c:0x0020000

    # udevadm info --export-db | grep -v part | \
          grep -i DEVLINKS=.*200173800013420d0.* | cut -d\  -f 4
    /dev/disk/by-path/pci-0000:0a:03.0-fc-0x20210002ac00065c:0x0040000
    /dev/disk/by-path/pci-0000:0a:03.1-fc-0x21210002ac00065c:0x0040000

    Map the LUNs to the guest. As there are multiple paths in this example, the by-path symlinks can be used to ensure consistent device mapping for all four paths.

    # virsh attach-disk guest1 \
    /dev/disk/by-path/pci-0000:0a:03.0-fc-0x20210002ac00065c:0x0020000 \
          vdb
    # virsh attach-disk guest1 \
    /dev/disk/by-path/pci-0000:0a:03.1-fc-0x21210002ac00065c:0x0020000 \
          vdc
    # virsh attach-disk guest1 \
    /dev/disk/by-path/pci-0000:0a:03.0-fc-0x20210002ac00065c:0x0040000 \
          vdd
    # virsh attach-disk guest1 \
    /dev/disk/by-path/pci-0000:0a:03.1-fc-0x21210002ac00065c:0x0040000 \
          vde
  4. Verify that the devices are correctly mapped to the guest (a verification sketch follows this procedure). The configuration changes can be made persistent by redefining the guest.
    # virsh dumpxml guest1 > /tmp/guest1.xml
    # virsh define /tmp/guest1.xml
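
Inside the guest, a quick hedged check is to confirm that the new virtio block devices are visible under the target names used above (vdb through vde in this example):

    # ls -l /dev/vd[b-e]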

In the following procedure example, the disk group data_dg is mapped to guest1 using DMP devices to map the storage.

To implement physical to virtual migration with Storage Foundation in the guest and host

  1. Map the LUNs to the virtualization host.
  2. On the virtualization host, identify the devices which require mapping. For example, the devices in the disk group data_dg are mapped to guest1.
    # vxdisk -o alldgs list |grep data_dg
    3pardata0_1  auto:cdsdisk    -            (data_dg)    online
    3pardata0_2  auto:cdsdisk    -            (data_dg)    online
  3. Map the devices to the guest.
    # virsh attach-disk guest1 /dev/vx/dmp/3pardata0_1 vdb
    Disk attached successfully
    
    # virsh attach-disk guest1 /dev/vx/dmp/3pardata0_2 vdc
    Disk attached successfully
    
  4. In the guest, verify that all devices are correctly mapped and that the disk group is available.
    # vxdisk scandisks
    # vxdisk -o alldgs list |grep data_dg
    3pardata0_1  auto:cdsdisk    -            (data_dg)    online
    3pardata0_2  auto:cdsdisk    -            (data_dg)    online
    
  5. On the virtualization host, make the mapping persistent by redefining the guest (a quick check follows this procedure):
    # virsh dumpxml guest1 > /tmp/guest1.xml
    # virsh define /tmp/guest1.xml
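
One hedged way to spot-check the persistent definition from the host (assuming a libvirt version that provides the domblklist command) is to list the block devices now recorded for the guest:

    # virsh domblklist guest1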

To use a Veritas Volume Manager volume as a boot device when configuring a new virtual machine

  1. Follow Red Hat's recommended steps to install and boot a VM guest.

    When requested to select managed or existing storage for the boot device, use the full path to the VxVM storage volume block device, for example /dev/vx/dsk/boot_dg/bootdisk-vol.

  2. If using the virt-install utility, enter the full path to the VxVM volume block device with the --disk parameter, for example, --disk path=/dev/vx/dsk/boot_dg/bootdisk-vol. A hedged example invocation follows this procedure.
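
For example, a minimal virt-install invocation might look like the following sketch. The guest name, memory and CPU sizing, install ISO path, and OS variant shown here are assumptions to be replaced with site-specific values; only the --disk path is the VxVM volume block device from the step above.

    # virt-install --name newguest \
          --ram 4096 --vcpus 2 \
          --disk path=/dev/vx/dsk/boot_dg/bootdisk-vol \
          --cdrom /var/lib/libvirt/images/install.iso \
          --os-variant rhel6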