
Reinstalling the system and recovering VxVM

To reinstall the system and recover the Veritas Volume Manager configuration, perform the tasks described in the following sections:

Prepare the system for reinstallation

To prevent the loss of data on disks not involved in the reinstallation, involve only the root disk and any other disks that contain portions of the operating system in the reinstallation procedure. For example, if the /usr file system is configured on a separate disk, leave that disk connected. Several of the automatic options for installation access disks other than the root disk without requiring confirmation from the administrator.

Disconnect all other disks containing volumes (or other data that should be preserved) prior to reinstalling the operating system. For example, if you originally installed the operating system with the home file system on a separate disk, disconnect that disk to ensure that the home file system remains intact.

Reinstall the operating system

Once any failed or failing disks have been replaced and disks not involved with the reinstallation have been detached, reinstall the operating system as described in your operating system documentation. Install the operating system prior to installing VxVM.

Ensure that no disks other than the root disk are accessed in any way while the operating system installation is in progress. If anything is written on a disk other than the root disk, the Veritas Volume Manager configuration on that disk may be destroyed.

  Note   During reinstallation, you can change the system's host name (or host ID). It is recommended that you keep the existing host name, as this is assumed by the procedures in the following sections.

Reinstalling Veritas Volume Manager

 To reinstall Veritas Volume Manager

  1. Reinstall the Veritas software from the installation disks.

    See the Installation Guide.

    Warning: Do not use vxinstall to initialize VxVM.
  2. If required, use the vxlicinst command to install the Veritas Volume Manager license key.

    See the vxlicinst(1) manual page.

Recovering the Veritas Volume Manager configuration

Once the Veritas Volume Manager packages have been loaded, and you have installed the license for VxVM, recover the Veritas Volume Manager configuration.

 To recover the Veritas Volume Manager configuration

  1. Touch /etc/vx/reconfig.d/state.d/install-db.
  2. Shut down the system.
  3. Reattach the disks that were removed from the system.
  4. Reboot the system.
  5. When the system comes up, bring the system to single-user mode using the following command:

# exec init S

  6. When prompted, enter the password and press Return to continue.
  7. Remove the install-db file created in step 1, which is no longer needed, using the following command:

# rm -rf /etc/vx/reconfig.d/state.d/install-db

  8. Start some Veritas Volume Manager I/O daemons using the following command:

    # vxiod set 10

  9. Start the Veritas Volume Manager configuration daemon, vxconfigd, in disabled mode using the following command:

# vxconfigd -m disable

  10. Initialize the vxconfigd daemon using the following command:

# vxdctl init

  11. Initialize the DMP subsystem using the following command:

    # vxdctl initdmp

  12. Enable vxconfigd using the following command:

# vxdctl enable
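The daemon-restart portion of the recovery (steps 7 through 12 above) can be collected into a short script. This is a sketch, not part of VxVM: the RUN variable makes it print each command for review (a dry run) instead of executing it; clear RUN only when running as root on the system being recovered.

```shell
#!/bin/sh
# Sketch of recovery steps 7-12.  With RUN=echo (the default here),
# each command is printed rather than executed; set RUN to the empty
# string to execute the commands for real (as root).
RUN=${RUN:-echo}

$RUN rm -rf /etc/vx/reconfig.d/state.d/install-db  # remove install marker
$RUN vxiod set 10                                  # start 10 I/O daemons
$RUN vxconfigd -m disable                          # start config daemon disabled
$RUN vxdctl init                                   # initialize vxconfigd
$RUN vxdctl initdmp                                # initialize the DMP subsystem
$RUN vxdctl enable                                 # enable vxconfigd
```

Reviewing the printed commands before clearing RUN guards against running the sequence on the wrong system.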

The configuration preserved on the disks not involved with the reinstallation has now been recovered. However, because the root disk has been reinstalled, it does not appear to VxVM as a VM disk. The configuration of the preserved disks does not include the root disk as part of the VxVM configuration.

If the root disk of your system and any other disks involved in the reinstallation were not under VxVM control at the time of failure and reinstallation, then the reconfiguration is complete at this point.

If the root disk (or another disk) was involved with the reinstallation, any volumes or mirrors on that disk (or other disks no longer attached to the system) are now inaccessible. If a volume had only one plex contained on a disk that was reinstalled, removed, or replaced, then the data in that volume is lost and must be restored from backup.

Cleaning up the system configuration

After reinstalling VxVM, you must clean up the system configuration.

 To clean up the system configuration

  1. Remove any volumes associated with rootability. This must be done if the root disk (and any other disk involved in the system boot process) was under Veritas Volume Manager control.

    The following volumes must be removed:

    rootvol

    Contains the root file system.

    swapvol

    Contains the swap area.

    standvol (if present)

    Contains the stand file system.

    usr (if present)

    Contains the usr file system.

    To remove the root volume, use the vxedit command:

# vxedit -fr rm rootvol

Repeat this command, using swapvol, standvol, and usr in place of rootvol, to remove the swap, stand, and usr volumes.
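The four removals can be expressed as a loop. As a precaution, the sketch below only prints the vxedit commands (a dry run) so the list can be reviewed first; remove the echo to execute them. Note that standvol and usr exist only on some systems.

```shell
# Dry run: print the vxedit removal command for each rootability
# volume instead of executing it.  Remove "echo" after reviewing.
# standvol and usr are present only on some systems.
for vol in rootvol swapvol standvol usr; do
    echo vxedit -fr rm "$vol"
done
```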

  2. After completing the rootability cleanup, you must determine which volumes need to be restored from backup. The volumes to be restored include those with all mirrors (all copies of the volume) residing on disks that have been reinstalled or removed. These volumes are invalid and must be removed, recreated, and restored from backup. If only some mirrors of a volume exist on reinstalled or removed disks, these mirrors must be removed. The mirrors can be re-added later.

    Establish which VM disks have been removed or reinstalled using the following command:

    # vxdisk list

    This displays a list of system disk devices and the status of these devices. For example, for a reinstalled system with three disks and a reinstalled root disk, the output of the vxdisk list command is similar to this:


    DEVICE    TYPE     DISK   GROUP   STATUS
    c0t0d0s2  sliced   -      -       error
    c0t1d0s2  sliced   disk02 mydg    online
    c0t2d0s2  sliced   disk03 mydg    online
    -         -        disk01 mydg    failed was:c0t0d0s2

The display shows that the reinstalled root device, c0t0d0s2, is not associated with a VM disk and is marked with a status of error. The disks disk02 and disk03 were not involved in the reinstallation and are recognized by VxVM and associated with their devices (c0t1d0s2 and c0t2d0s2). The former disk01, which was the VM disk associated with the replaced disk device, is no longer associated with the device (c0t0d0s2).

If other disks (with volumes or mirrors on them) had been removed or replaced during reinstallation, those disks would also have a disk device listed in error state and a VM disk listed as not associated with a device.
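The disks needing attention can also be picked out of the vxdisk list output mechanically. The following awk filter is an illustration, not part of VxVM; it assumes the standard five-column output shown above, with the status in the fifth column.

```shell
# Filter "vxdisk list" output for disks that need attention:
#   - devices whose STATUS column is "error"
#   - VM disks whose STATUS column is "failed"
# Assumes the standard five-column layout (DEVICE TYPE DISK GROUP STATUS).
vxdisk list | awk '$5 == "error"  { print "error device:", $1 }
                   $5 == "failed" { print "failed VM disk:", $3 }'
```

For the sample output above, this prints the error device c0t0d0s2 and the failed VM disk disk01.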

  3. Once you know which disks have been removed or replaced, locate all the mirrors on failed disks using the following command:

# vxprint [-g diskgroup] -sF "%vname" -e'sd_disk = "disk"'

where disk is the disk media name of a disk with a failed status. Be sure to enclose the disk name in quotes in the command; otherwise, the command returns an error message. The vxprint command returns a list of volumes that have mirrors on the failed disk. Repeat this command for every disk with a failed status.

For example, to list the volumes that have mirrors on the failed disk disk01 in the disk group mydg:

# vxprint -g mydg -sF "%vname" -e'sd_disk = "disk01"'


  4. Check the status of each volume and print volume information using the following command:

# vxprint -th volume

where volume is the name of the volume to be examined. The vxprint command displays the status of the volume, its plexes, and the portions of disks that make up those plexes. For example, a volume named v01 with only one plex resides on the reinstalled disk named disk01. The vxprint -th v01 command produces the following output:




v v01 - DISABLED ACTIVE 24000 SELECT - fsgen

pl v01-01 v01 DISABLED NODEVICE 24000 CONCAT - RW

sd disk01-06 v01-01 disk01 245759 24000 0 c1t5d1 ENA

The only plex of the volume is shown in the line beginning with pl. The STATE field for the plex named v01-01 is NODEVICE. The plex has space on a disk that has been replaced, removed, or reinstalled. The plex is no longer valid and must be removed.
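Whether a volume can be salvaged comes down to whether it retains at least one plex that is not in the NODEVICE state. As an illustration (not a VxVM facility), the awk sketch below counts usable plexes in vxprint -th output; the volume name v01 is the example from above.

```shell
# Count plexes (lines beginning "pl") whose STATE column is not
# NODEVICE.  Zero usable plexes means the volume is lost and must be
# removed and restored from backup; otherwise only the NODEVICE
# plexes need to be removed.
vxprint -th v01 | awk '$1 == "pl" && $5 != "NODEVICE" { n++ }
                       END { print n + 0, "usable plex(es)" }'
```

For v01 above this reports 0 usable plexes; for a mirrored volume with one surviving plex it would report 1.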

  5. Because v01-01 was the only plex of the volume, the volume contents are irrecoverable except by restoring the volume from a backup. The volume must also be removed. If a backup copy of the volume exists, you can restore the volume later. Keep a record of the volume name and its length, as you will need it for the backup procedure.

    Remove irrecoverable volumes (such as v01) using the following command:

    # vxedit -r rm v01

  6. It is possible that only part of a plex is located on the failed disk. If the volume has a striped plex, its data is divided among several disks. For example, the volume named v02 has one plex striped across three disks, one of which is the reinstalled disk disk01. The vxprint -th v02 command produces the following output:




v v02 - DISABLED ACTIVE 30720 SELECT v02-01 fsgen

pl v02-01 v02 DISABLED NODEVICE 30720 STRIPE 3/128 RW

sd disk02-02 v02-01 disk02 424144 10240 0/0 c1t5d2 ENA

sd disk01-05 v02-01 disk01 620544 10240 1/0 c1t5d3 DIS

sd disk03-01 v02-01 disk03 620544 10240 2/0 c1t5d4 ENA

The display shows three disks, across which the plex v02-01 is striped (the lines starting with sd represent the stripes). One of the stripe areas is located on a failed disk. This disk is no longer valid, so the plex named v02-01 has a state of NODEVICE. Since this is the only plex of the volume, the volume is invalid and must be removed. If a copy of v02 exists on the backup media, it can be restored later. Keep a record of the volume name and length of any volume you intend to restore from backup.

Remove invalid volumes (such as v02) using the following command:

# vxedit -r rm v02

  7. A volume that has one mirror on a failed disk can also have other mirrors on disks that are still valid. In this case, the volume does not need to be restored from backup, since the data is still valid on the valid disks.

    The output of the vxprint -th command for a volume with one plex on a failed disk (disk01) and another plex on a valid disk (disk02) is similar to the following:




v v03 - DISABLED ACTIVE 30720 SELECT - fsgen

pl v03-01 v03 DISABLED ACTIVE 30720 CONCAT -  RW

sd disk02-01 v03-01 disk02 620544 30720 0 c1t5d5 ENA

pl v03-02 v03 DISABLED NODEVICE 30720 CONCAT - RW

sd disk01-04 v03-02 disk01 262144 30720 0 c1t5d6 DIS

This volume has two plexes, v03-01 and v03-02. The first plex (v03-01) does not use any space on the invalid disk, so it can still be used. The second plex (v03-02) uses space on invalid disk disk01 and has a state of NODEVICE. Plex v03-02 must be removed. However, the volume still has one valid plex containing valid data. If the volume needs to be mirrored, another plex can be added later. Note the name of the volume to create another plex later.

To remove an invalid plex, use the vxplex command to dissociate and then remove the plex from the volume. For example, to dissociate and remove the plex v03-02, use the following command:

# vxplex -o rm dis v03-02

  8. Once all invalid volumes and plexes have been removed, the disk configuration can be cleaned up. Each disk that was removed, reinstalled, or replaced (as determined from the output of the vxdisk list command) must be removed from the configuration.

    To remove a disk, use the vxdg command. For example, to remove the failed disk disk01, use the following command:

    # vxdg rmdisk disk01

    If the vxdg command returns an error message, invalid mirrors exist.

    Repeat step 2 through step 7 until all invalid volumes and mirrors are removed.

  9. Once all the invalid disks have been removed, the replacement or reinstalled disks can be added to Veritas Volume Manager control. If the root disk was originally under Veritas Volume Manager control or you now wish to put the root disk under Veritas Volume Manager control, add this disk first.

    To add the root disk to Veritas Volume Manager control, use the vxdiskadm command:

    # vxdiskadm

    From the vxdiskadm main menu, select menu item 2 (Encapsulate a disk). Follow the instructions and encapsulate the root disk for the system.

  10. When the encapsulation is complete, reboot the system to multi-user mode.
  11. Once the root disk is encapsulated, any other disks that were replaced should be added using the vxdiskadm command. If the disks were reinstalled during the operating system reinstallation, they should be encapsulated; otherwise, they can be added.
  12. Once all the disks have been added to the system, any volumes that were completely removed as part of the configuration cleanup can be recreated and their contents restored from backup. The volume recreation can be done by using the vxassist command or the graphical user interface.

    For example, to recreate the volumes v01 and v02, use the following command:

    # vxassist make v01 24000

    # vxassist make v02 30720 layout=stripe nstripe=3

Once the volumes are created, they can be restored from backup using normal backup/restore procedures.
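If you kept a record of each removed volume's name, length, and layout as suggested earlier, the vxassist commands can be generated from that record. The sketch below assumes a simple whitespace-separated file named volumes.rec; both the file name and its format are illustrative assumptions, not VxVM conventions. It prints the commands for review; remove the echo to execute them.

```shell
# volumes.rec format (assumed, one volume per line):
#     name length [extra vxassist attributes...]
# e.g.:
#     v01 24000
#     v02 30720 layout=stripe nstripe=3
while read -r name len args; do
    # Print each recreation command for review; drop "echo" to run it.
    echo vxassist make "$name" "$len" $args
done < volumes.rec
```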

  13. Recreate any plexes for volumes that had plexes removed as part of the volume cleanup. To replace the plex removed from volume v03, use the following command:

    # vxassist mirror v03

    Once you have restored the volumes and plexes lost during reinstallation, recovery is complete and your system is configured as it was prior to the failure.

  14. Start up hot-relocation, if required, either by rebooting the system or by manually starting the relocation watch daemon, vxrelocd (this also starts the vxnotify process).
    Warning: Hot-relocation should only be started when you are sure that it will not interfere with other reconfiguration procedures.

    To determine if hot-relocation has been started, use the following command to search for its entry in the process table:

    # ps -ef | grep vxrelocd

    See the Veritas Volume Manager Administrator's Guide.

    See the vxrelocd(1M) manual page.
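One caveat when checking the process table: a plain grep vxrelocd also matches the grep command itself, which appears in the ps output. The bracket pattern below is a standard shell idiom (not VxVM-specific) that avoids this false positive:

```shell
# "[v]xrelocd" still matches the vxrelocd daemon, but not the grep
# process itself, because grep's own command line contains the
# literal brackets rather than the plain string "vxrelocd".
ps -ef | grep '[v]xrelocd'
```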