Replacing the coordinator disk group in a cluster that is online

You can also replace the coordinator disk group using the vxfenswap utility. The following example replaces the coordinator disk group vxfencoorddg with a new disk group vxfendg.

To replace the coordinator disk group

  1. Make sure system-to-system communication is functioning properly.
  2. Determine the value of the FaultTolerance attribute.

    # hares -display coordpoint -attribute FaultTolerance -localclus

  3. Estimate the number of coordination points you plan to use as part of the fencing configuration.
  4. Set the value of the FaultTolerance attribute to 0.

    Note:

    You must set the value to 0 because, later in the procedure, you reset this attribute to a value that is lower than the number of coordination points. This ensures that the CoordPoint agent does not fault.

  5. Check the existing value of the LevelTwoMonitorFreq attribute.
    # hares -display coordpoint -attribute LevelTwoMonitorFreq -localclus

    Note:

    Make a note of the attribute value before you proceed to the next step. After the migration, when you re-enable the attribute, set it to this same value.

  6. Disable level-two monitoring of the CoordPoint agent.
    # haconf -makerw
    # hares -modify coordpoint LevelTwoMonitorFreq 0
    # haconf -dump -makero
  7. Make sure that the cluster is online.
    # vxfenadm -d
    I/O Fencing Cluster Information:
    ================================
    Fencing Protocol Version: 201
    Fencing Mode: SCSI3
    Fencing SCSI3 Disk Policy: dmp
    Cluster Members:
    		* 0 (sys1)
    		1 (sys2)
    RFSM State Information:
    		node 0 in state 8 (running)
    		node 1 in state 8 (running)
  8. Find the name of the current coordinator disk group (typically vxfencoorddg) that is in the /etc/vxfendg file.
    # cat /etc/vxfendg
    vxfencoorddg
  9. Find the alternative disk groups available to replace the current coordinator disk group.
    # vxdisk -o alldgs list
    DEVICE     TYPE           DISK   GROUP            STATUS
    rhdisk64   auto:cdsdisk   -      (vxfendg)        online
    rhdisk65   auto:cdsdisk   -      (vxfendg)        online
    rhdisk66   auto:cdsdisk   -      (vxfendg)        online
    rhdisk75   auto:cdsdisk   -      (vxfencoorddg)   online
    rhdisk76   auto:cdsdisk   -      (vxfencoorddg)   online
    rhdisk77   auto:cdsdisk   -      (vxfencoorddg)   online
  10. Validate the new disk group for I/O fencing compliance. Run the following command:
    # vxfentsthdw -c vxfendg

    See Testing the coordinator disk group using the -c option of vxfentsthdw.

  11. If the new disk group is not already deported, run the following command to deport the disk group:
    # vxdg deport vxfendg
  12. Perform one of the following:
    • Create the /etc/vxfenmode.test file with the new fencing mode and disk policy information.

    • Edit the existing /etc/vxfenmode file with the new fencing mode and disk policy information, and remove any preexisting /etc/vxfenmode.test file.

    Note that the format of the /etc/vxfenmode.test file and the /etc/vxfenmode file is the same.

    See the Cluster Server Installation Guide for more information.
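
    For reference, a minimal /etc/vxfenmode.test for SCSI-3 fencing with the DMP disk policy might look like the fragment below. Treat the keys and values as illustrative and confirm them against the vxfenmode template files shipped with your release:

    ```
    vxfen_mode=scsi3
    scsi3_disk_policy=dmp
    ```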

  13. From any node, start the vxfenswap utility. For example, if vxfendg is the new disk group that you want to use as the coordinator disk group:
    # vxfenswap -g vxfendg [-n]

    The utility performs the following tasks:

    • Backs up the existing /etc/vxfentab file.

    • Creates a test file, /etc/vxfentab.test, on each node for the modified disk group.

    • Reads the disk group you specified in the vxfenswap command and adds the disk group to the /etc/vxfentab.test file on each node.

    • Verifies that the serial numbers of the new disks are identical on all the nodes. The script terminates if the check fails.

    • Verifies that the new disk group can support I/O fencing on each node.
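
    The serial-number consistency check can be illustrated with a generic shell sketch. The file names and serial values below are hypothetical stand-ins for the per-node disk lists that vxfenswap gathers internally:

    ```shell
    #!/bin/sh
    # Hypothetical serial-number lists, one serial per line, as collected
    # from two cluster nodes (sys1 and sys2). In the real check, vxfenswap
    # gathers these from every node in the cluster.
    cat > /tmp/serials_sys1.txt <<'EOF'
    600A0B8000112233
    600A0B8000445566
    600A0B8000778899
    EOF
    cp /tmp/serials_sys1.txt /tmp/serials_sys2.txt

    # Sort each list and compare; any difference means the nodes do not
    # see the same coordinator disks, and the operation must abort.
    sort /tmp/serials_sys1.txt > /tmp/serials_sys1.sorted
    sort /tmp/serials_sys2.txt > /tmp/serials_sys2.sorted
    if diff /tmp/serials_sys1.sorted /tmp/serials_sys2.sorted > /dev/null
    then
        echo "serial numbers match on all nodes"
    else
        echo "serial number mismatch; aborting" >&2
        exit 1
    fi
    ```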

  14. If the disk verification passes, the utility reports success and asks if you want to replace the coordinator disk group.
  15. Confirm whether you want to clear the keys on the coordination points and proceed with the vxfenswap operation.

    Do you want to clear the keys on the coordination points 
    and proceed with the vxfenswap operation? [y/n] (default: n) y
  16. Review the message that the utility displays and confirm that you want to replace the coordinator disk group. Otherwise, skip to step 21.
    Do you wish to commit this change? [y/n] (default: n) y

    If the commit succeeds, the utility moves the /etc/vxfentab.test file to the /etc/vxfentab file.

    The utility also updates the /etc/vxfendg file with this new disk group.

  17. If the new disk group is not already imported, import it before you set the coordinator flag to "on".
    # vxdg -t import vxfendg 
  18. Set the coordinator attribute value to "on" for the new coordinator disk group.
    # vxdg -g vxfendg set coordinator=on

    Set the coordinator attribute value to "off" for the old disk group.

    # vxdg -g vxfencoorddg set coordinator=off
  19. Deport the new disk group.
    # vxdg deport vxfendg
  20. Verify that the coordinator disk group has changed.
    # cat /etc/vxfendg
    vxfendg

    The swap operation for the coordinator disk group is complete now.

  21. If you do not want to replace the coordinator disk group, answer n at the prompt.

    The vxfenswap utility rolls back any changes to the coordinator disk group.

  22. Re-enable the LevelTwoMonitorFreq attribute of the CoordPoint agent. Set it to the value that you noted in step 5, before you disabled the attribute.
    # haconf -makerw
    # hares -modify coordpoint LevelTwoMonitorFreq Frequencyvalue
    # haconf -dump -makero

    where Frequencyvalue is the value of the attribute.

  23. Set the FaultTolerance attribute to a value that is lower than 50% of the total number of coordination points.

    For example, if there are four (4) coordination points in your configuration, the attribute value must be lower than two (2). If you set it to two (2) or higher, the CoordPoint agent faults.
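
    As a quick sanity check, the highest valid FaultTolerance value can be computed with plain shell arithmetic (a sketch; the coordination-point count here is an example):

    ```shell
    #!/bin/sh
    # Example coordination-point count; substitute your configuration's total.
    npoints=4

    # FaultTolerance must be strictly lower than 50% of npoints, so the
    # highest valid value is (npoints - 1) / 2 using integer division.
    max_ft=$(( (npoints - 1) / 2 ))

    echo "$max_ft"    # prints 1 for four coordination points
    ```

    You would then apply the chosen value using the same hares -modify pattern shown in step 22.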