Using Veritas Volume Manager snapshots for cloning Logical Domain boot disks

The following procedure describes how to clone the boot disk of an existing LDom using VxVM third-mirror break-off snapshots.

See Provisioning Veritas Volume Manager volumes as boot disks for guest Logical Domains.

Figure: Example of using Veritas Volume Manager snapshots for cloning Logical Domain boot disks illustrates this process.

Figure: Example of using Veritas Volume Manager snapshots for cloning Logical Domain boot disks

Before this procedure, ldom1 has its boot disk contained in a large volume, /dev/vx/dsk/boot_dg/bootdisk1-vol.

This procedure involves the following steps:

To clone the boot disk using Veritas Volume Manager snapshots

  1. Create a third-mirror break-off snapshot of the source volume bootdisk1-vol. To create the snapshot, you can either use some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume:

    primary# vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \
        [alloc=storage_attributes]

    By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors. The mirrors remain in the SNAPATT state until they are fully synchronized. The -b option can be used to perform the synchronization in the background. Once synchronized, the mirrors are placed in the SNAPDONE state.

    For example, the following command adds two mirrors to the volume, bootdisk1-vol, on disks mydg10 and mydg11:

    primary# vxsnap -g boot_dg addmir bootdisk1-vol nmirror=2 \
        alloc=mydg10,mydg11

    If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the snapshot plexes to complete, as shown in the following example:

    primary# vxsnap -g boot_dg snapwait bootdisk1-vol nmirror=2
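    Before proceeding, you can confirm that the added mirrors have reached the SNAPDONE state (a sketch using the disk group and volume names from this example; the exact vxprint output format varies by Veritas Volume Manager version):

    ```shell
    # List the volume, plex, and subdisk records for bootdisk1-vol; the
    # newly added snapshot mirrors should report SNAPDONE once the
    # background synchronization has finished.
    primary# vxprint -g boot_dg bootdisk1-vol
    ```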
  2. To create a third-mirror break-off snapshot, use the following form of the vxsnap make command.

    Caution:

    Shut down the guest domain before executing the vxsnap command to take the snapshot.

    primary# vxsnap [-g diskgroup] make \
        source=volume[/newvol=snapvol] \
        {/plex=plex1[,plex2,...]|/nmirror=number}

    Either of the following attributes may be specified to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume:

    plex

    Specifies the plexes in the existing volume that are to be broken off. This attribute can only be used with plexes that are in the ACTIVE state.

    nmirror

    Specifies how many plexes are to be broken off. This attribute can only be used with plexes that are in the SNAPDONE state. Such plexes could have been added to the volume by using the vxsnap addmir command.

    Snapshots that are created from one or more ACTIVE or SNAPDONE plexes in the volume are already synchronized by definition.

    For backup purposes, a snapshot volume with one plex should be sufficient.

    For example,

    primary# vxsnap -g boot_dg make \
        source=bootdisk1-vol/newvol=SNAP-bootdisk1-vol/nmirror=1

    Here, bootdisk1-vol is the source volume, SNAP-bootdisk1-vol is the new snapshot volume, and the nmirror value is 1.

    The block device for the snapshot volume will be /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol.

  3. Configure a service by exporting the /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol file as a virtual disk:

    primary# ldm add-vdiskserverdevice \ 
    /dev/vx/dsk/boot_dg/SNAP-bootdisk1-vol vdisk2@primary-vds0
  4. Add the exported disk to ldom1 first:

    primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom1
  5. Bind and start ldom1, which boots from its primary boot disk, vdisk1:

    primary# ldm bind ldom1
    primary# ldm start ldom1
  6. If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, run the devfsadm command in the guest domain:

    ldom1# devfsadm -C

    In this example, vdisk2 appears as the c0d2s# device:

    ldom1# ls /dev/dsk/c0d2s*
    /dev/dsk/c0d2s0 /dev/dsk/c0d2s2 /dev/dsk/c0d2s4 /dev/dsk/c0d2s6
    /dev/dsk/c0d2s1 /dev/dsk/c0d2s3 /dev/dsk/c0d2s5 /dev/dsk/c0d2s7
  7. Mount the root file system of c0d2s0 and modify the /etc/vfstab entries so that all c#d#s# entries are changed to c0d0s#. This is necessary because ldom2 is a new LDom, and the first disk in its OS device tree is always named c0d0s#.
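    The device-name rewrite can be expressed as a single sed substitution. The following sketch shows the pattern applied to a sample vfstab line; the /mnt mount point is an illustrative assumption, and you should review the edited file before unmounting:

    ```shell
    # Replace any c#d# controller/disk pair with c0d0, preserving the
    # slice suffix, e.g. /dev/dsk/c0d2s0 -> /dev/dsk/c0d0s0.
    # Applied inside ldom1 to the mounted snapshot, this would be:
    #   sed 's|c[0-9][0-9]*d[0-9][0-9]*\(s[0-9]\)|c0d0\1|g' /mnt/etc/vfstab
    echo "/dev/dsk/c0d2s0 /dev/rdsk/c0d2s0 / ufs 1 no -" |
        sed 's|c[0-9][0-9]*d[0-9][0-9]*\(s[0-9]\)|c0d0\1|g'
    # -> /dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
    ```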

  8. After you change the vfstab file, unmount the file system and remove vdisk2 from ldom1:

    primary# ldm remove-vdisk vdisk2 ldom1
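    If ldom1 is still active, the virtual disk generally cannot be removed. A sketch of stopping and unbinding the domain first, using standard ldm subcommands:

    ```shell
    # Stop and unbind ldom1 so that vdisk2 can be removed; removing a
    # virtual disk from an active domain requires dynamic reconfiguration
    # support, which may not be available.
    primary# ldm stop ldom1
    primary# ldm unbind ldom1
    ```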
  9. Bind vdisk2 to ldom2 and then start and boot ldom2.

    primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom2
    primary# ldm bind ldom2
    primary# ldm start ldom2

    After booting, ldom2 appears as ldom1 on the console because the other host-specific parameters, such as the hostname and IP address, are still those of ldom1.

    ldom1 console login:
  10. To change these parameters, bring ldom2 to single-user mode and run the sys-unconfig command.
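    A minimal sketch of this step from the ldom2 console (the ok prompt is the OpenBoot PROM of the guest; sys-unconfig is the standard Solaris unconfiguration command, and it halts the domain when it finishes):

    ```shell
    ok boot -s
    ...
    ldom2# sys-unconfig
    ```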

  11. Reboot ldom2.

    During the reboot, the operating system prompts you to configure the host-specific parameters, such as the hostname and IP address; enter the values corresponding to ldom2.

  12. After you have specified all these parameters, ldom2 boots successfully.