When migrating data from a physical server to a virtualized guest, the LUNs are first physically connected to the host, and then the LUNs are mapped in KVM from the host to the guest.
This use case closely resembles the server consolidation use case; in fact, physical to virtual migration is the process used to achieve server consolidation.
This use case requires Storage Foundation HA or Storage Foundation Cluster File System HA in the KVM host and Storage Foundation in the KVM guest. For setup information:
See Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment.
There are several options:
If Veritas InfoScale Solutions products are installed on both the physical server and the virtual host, it is easy to identify the LUNs which require mapping. Once the LUNs are connected to the virtual host, run vxdisk -o alldgs list to identify the devices in the disk group which require mapping.
If Veritas InfoScale Solutions products are not installed on the virtual host and the physical server is a Linux system, the devices which need mapping can be identified by using the device IDs on the physical server.
If Veritas InfoScale Solutions products are installed only on the physical server, and the SF administration utility for RHEV, vxrhevadm, is installed on the RHEV-M machine, you can identify the exact DMP device mapping on the guest. However, for volume and file system mappings, run heuristics to identify the exact device mappings on the host.
If Storage Foundation is not installed on the host, identify the LUNs which require mapping by their device serial numbers before decommissioning the physical server. The LUNs can then be mapped to the guest using the persistent "by-path" device links.
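As a sketch of the serial-number approach, the serial can be read from the udev database on the physical server before it is decommissioned. The extract_serial helper below is a hypothetical illustration, not part of the product; the device name and serial value are made up.

```shell
# Hypothetical helper: pull the ID_SERIAL property out of
# `udevadm info --query=property` KEY=VALUE output.
extract_serial() {
    awk -F= '/^ID_SERIAL=/ { print $2 }'
}

# On a live physical server you would run, per device:
#   udevadm info --query=property --name=/dev/sdb | extract_serial
# The parsing is shown here on captured sample output:
printf 'DEVNAME=/dev/sdb\nID_SERIAL=36006016239a0188400001\n' | extract_serial
```

Recording each LUN's serial this way lets you match the same LUNs on the KVM host after they are physically reconnected.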
To implement physical to virtual migration if Storage Foundation is not installed in the host (KVM-only)
The udev database can be used to identify the devices on the host which need to be mapped.
# udevadm info --export-db | grep '/dev/disk/by-path' | \
cut -d' ' -f4
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-1
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-2
Map the LUNs to the guest. As there are multiple paths in this example, the by-path symlinks can be used to ensure consistent device mapping for all paths.
# virsh attach-disk guest1 \
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-1 \
vdb
# virsh attach-disk guest1 \
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-2 \
vdc
Make the mapping persistent by redefining the guest:
# virsh dumpxml guest1 > /tmp/guest1.xml
# virsh define /tmp/guest1.xml
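The attached by-path disks can then be confirmed from the guest definition. A minimal sketch, assuming `virsh domblklist`-style output; the count_bypath_disks helper is an assumption for illustration, not a product command.

```shell
# Hypothetical helper: count attached disk sources that use persistent
# /dev/disk/by-path links, e.g. in `virsh domblklist` output.
count_bypath_disks() {
    grep -c '/dev/disk/by-path/'
}

# On a live host:  virsh domblklist guest1 | count_bypath_disks
# Shown here on captured sample output for the two LUNs above:
printf '%s\n%s\n' \
    'vdb /dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-1' \
    'vdc /dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-2' \
    | count_bypath_disks
```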
To implement physical to virtual migration with Storage Foundation in the guest and host (KVM-only)
On the host, identify the devices in the disk group which require mapping:
# vxdisk -o alldgs list | grep data_dg
3pardata0_1 auto:cdsdisk - (data_dg) online
3pardata0_2 auto:cdsdisk - (data_dg) online
Map the DMP devices to the guest:
# virsh attach-disk guest1 /dev/vx/dmp/3pardata0_1 vdb
Disk attached successfully
# virsh attach-disk guest1 /dev/vx/dmp/3pardata0_2 vdc
Disk attached successfully
In the guest, scan for and verify the newly mapped devices:
# vxdisk scandisks
# vxdisk -o alldgs list | grep data_dg
3pardata0_1 auto:cdsdisk - (data_dg) online
3pardata0_2 auto:cdsdisk - (data_dg) online
Make the mapping persistent by redefining the guest:
# virsh dumpxml guest1 > /tmp/guest1.xml
# virsh define /tmp/guest1.xml
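For disk groups with many LUNs, the attach steps above can be scripted. A minimal sketch, assuming guest target names are assigned in order; the next_target helper is a hypothetical illustration, and the loop is shown as a comment because it requires a live host.

```shell
# Hypothetical helper: compute the Nth guest device name, counting
# from 0, so 0 -> vdb, 1 -> vdc, and so on.
next_target() {
    printf 'vd%b' "$(printf '\\%03o' $((98 + $1)))"
}

# On a live host (illustrative, names assumed):
#   i=0
#   for disk in $(vxdisk -o alldgs list | awk '/\(data_dg\)/ {print $1}'); do
#       virsh attach-disk guest1 /dev/vx/dmp/$disk "$(next_target $i)"
#       i=$((i + 1))
#   done
next_target 0   # vdb
```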
To implement physical to virtual migration with Storage Foundation only in the guest and the SF administration utility for RHEV, vxrhevadm, on the RHEV Manager
Identify the DMP nodes, volumes, and files on the physical server which require mapping:
# vxdisk -g <data_dg> list          <- DMP nodes
# vxprint -g <data_dg> -v           <- volumes
<file>                              <- file created on a VxFS file system
# ./vxrhevadm -p <password> -n <VM name> -d <dmpnode> attach
Attached a dmp node to the specified virtual machine
# ./vxrhevadm -p <password> -n <VM name> -v <volume> attach
Attached a volume device to the specified virtual machine
# ./vxrhevadm -p <password> -n <VM name> -f <file>:raw attach
Attached a file system device to the specified virtual machine
To use a Veritas Volume Manager volume as a boot device when configuring a new virtual machine
When requested to select managed or existing storage for the boot device, use the full path to the VxVM storage volume block device, for example /dev/vx/dsk/boot_dg/bootdisk-vol.
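For instance, with virt-install the volume path can be passed directly as the guest's boot disk. A minimal sketch; only the volume path comes from the text, while the guest name, memory size, and virtio bus are illustrative assumptions.

```shell
# Build a virt-install invocation that boots from a VxVM volume.
# Guest name, memory, and bus are assumptions; the volume path
# /dev/vx/dsk/boot_dg/bootdisk-vol is the example from the text.
BOOTVOL=/dev/vx/dsk/boot_dg/bootdisk-vol
CMD="virt-install --name guest1 --memory 2048 --import --disk path=$BOOTVOL,bus=virtio"

# Review the command before running it on a live host:
echo "$CMD"
```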
To use a Storage Foundation component as a boot device when configuring a new virtual machine
When requested to select managed or existing storage for the boot device, use the full path to the VxVM storage volume block device, file system device, or DMP node.
For example, /dev/vx/dsk/boot_dg/bootdisk-vol, /dev/vx/dsk/boot_dg/bootdisk-file, or /dev/vx/dsk/boot_dg/bootdisk-dmpnode.
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -d <dmpnode-path> attach
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -v <volume-path> attach
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -f <file-path:raw> | <file-path:qcow2> attach