Configuring Storage Foundation Cluster File System and Cluster Volume Manager agents on the new node

This section describes how to configure SFCFS and CVM agents on the new node.

To configure SFCFS and CVM agents on the new node

  1. Before starting VCS, verify that the /etc/VRTSvcs/conf/config/.stale file is present.

    If the file is not present, create it:

    # touch /etc/VRTSvcs/conf/config/.stale
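    The check-then-create in step 1 can be collapsed into one idempotent command. A minimal sketch; STALE_DIR defaults to a temporary directory here so the sketch is runnable anywhere, but on the node it would be /etc/VRTSvcs/conf/config:

    ```shell
    #!/bin/sh
    # Directory holding the VCS configuration; a temp dir is used here so the
    # sketch can run anywhere (on a real node: /etc/VRTSvcs/conf/config).
    STALE_DIR="${STALE_DIR:-$(mktemp -d)}"
    # Create .stale only if it does not already exist; re-running is harmless.
    [ -f "$STALE_DIR/.stale" ] || touch "$STALE_DIR/.stale"
    ls "$STALE_DIR/.stale"
    ```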
  2. Start LLT, GAB, and vxfen on the new node:

    • Start LLT and GAB on the new node:

      # /etc/init.d/llt start
      # /etc/init.d/gab start
    • To start vxfen in disabled mode, run the following commands on system03:

      # echo vxfen_mode=disabled > /etc/vxfenmode
      # /etc/init.d/vxfen start
    • To start vxfen in enabled mode:

      • Copy the following files from one of the existing cluster nodes to the new node:

        /etc/vxfenmode
        /etc/vxfendg
      • Run the following command:

        # /etc/init.d/vxfen start
  3. On the new node, verify that the GAB port memberships include ports a and b. Run the following command:

    # /sbin/gabconfig -a
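    The port check can also be scripted rather than read by eye. A sketch, assuming the usual `Port <x> gen <n> membership <ids>` output format; GABOUT below is illustrative sample output, and on the node you would capture it with GABOUT=$(/sbin/gabconfig -a):

    ```shell
    #!/bin/sh
    # Illustrative `gabconfig -a` output; on the node:
    #   GABOUT=$(/sbin/gabconfig -a)
    GABOUT="${GABOUT:-Port a gen a36e0003 membership 012
    Port b gen a36e0006 membership 012}"
    # Confirm both required ports (a and b) report a membership.
    for port in a b; do
        if printf '%s\n' "$GABOUT" | grep -q "Port $port "; then
            echo "port $port: membership present"
        else
            echo "port $port: MISSING"
        fi
    done
    ```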
  4. Determine the CVM master node:

    # vxdctl -c mode
  5. Make a backup copy of the main.cf file. Enter the following commands:

    # cd /etc/VRTSvcs/conf/config
    # cp main.cf main.cf.2node
  6. Open the VCS configuration for writing and add the new node. For example:

    # haconf -makerw 
    # hasys -add system03
  7. Add the new node to the SystemList of the cvm service group, assigning it the next priority value X:

    # hagrp -modify cvm SystemList -add system03 X

    where X is one more than the index of the last system in the SystemList attribute of the cvm service group in /etc/VRTSvcs/conf/config/main.cf.
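
    The value of X can be derived from main.cf instead of counted by hand. A sketch, assuming the usual `SystemList = { system01 = 0, system02 = 1 }` syntax; SYSLIST below is an illustrative sample line, and on the cluster you would grep it out of /etc/VRTSvcs/conf/config/main.cf:

    ```shell
    #!/bin/sh
    # Illustrative SystemList line; on the cluster, for example:
    #   SYSLIST=$(grep 'SystemList' /etc/VRTSvcs/conf/config/main.cf)
    SYSLIST="${SYSLIST:-SystemList = { system01 = 0, system02 = 1 }}"
    # X is one more than the highest existing priority index.
    X=$(printf '%s\n' "$SYSLIST" | tr -d '{},' | awk '
        { for (i = 1; i <= NF; i++) if ($i ~ /^[0-9]+$/ && $i + 0 >= m) m = $i + 1 }
        END { print m }')
    echo "X = $X"
    ```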

  8. Add the new node to the CVM AutoStartList:

    # hagrp -modify cvm AutoStartList -add system03
  9. Add the new node, system03, and its node ID to the cvm_clus resource. The node ID can be obtained from the CVMNodeId attribute in /etc/VRTSvcs/conf/config/main.cf; in this example it is 2:

    # hares -modify cvm_clus CVMNodeId -add system03 2
  10. Write the new VCS configuration to disk:

    # haconf -dump -makero
  11. Verify the syntax of the main.cf file:

    # hacf -verify .
  12. To enable the existing cluster to recognize the new node, run the following commands on all nodes in the cluster:

    # /etc/vx/bin/vxclustadm -m vcs -t gab reinit
    # /etc/vx/bin/vxclustadm nidmap
  13. Start CVM on the newly added node.

    • Determine the node ID:

      # cat /etc/llthosts
    • Verify that this host ID is seen by the GAB module:

      # gabconfig -a
    • Start the VCS engine.

      • If ports f, u, v, or w are present on the newly added node before running hastart, the node must be rebooted for VCS to start properly:

        # shutdown -r
      • If ports f, u, v, or w were not present on the newly added node before running hastart, start VCS with the following command:

        # hastart
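    The decision between rebooting and running hastart can be scripted from the same `gabconfig -a` output used earlier. A sketch; GABOUT is illustrative sample output (here deliberately including a port f line), and on the node you would use GABOUT=$(gabconfig -a):

    ```shell
    #!/bin/sh
    # Illustrative `gabconfig -a` output; on the node: GABOUT=$(gabconfig -a)
    GABOUT="${GABOUT:-Port a gen a36e0003 membership 012
    Port b gen a36e0006 membership 012
    Port f gen a36e000d membership 01}"
    # If any of ports f, u, v, or w is already up, a reboot is required
    # before VCS can start properly; otherwise hastart is sufficient.
    if printf '%s\n' "$GABOUT" | grep -Eq 'Port [fuvw] '; then
        echo "ports f/u/v/w present: reboot the node (shutdown -r)"
    else
        echo "no f/u/v/w ports: run hastart"
    fi
    ```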
  14. Check the system status to see whether the new node is online:

        # hastatus -sum
        -- SYSTEM STATE
        -- System         State       Frozen 
        A      system01   RUNNING     0
        A      system02   RUNNING     0
        A      system03   RUNNING     0
    
        -- GROUP STATE
        -- Group   System     Probed  AutoDisabled  State
        B cvm      system01   Y       N             ONLINE
        B cvm      system02   Y       N             ONLINE
        B cvm      system03   Y       N             ONLINE
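
    Rather than scanning the summary by eye, the new node's state can be checked with grep. A sketch; HASTATUS holds sample `hastatus -sum` lines for illustration, and on the cluster you would use HASTATUS=$(hastatus -sum):

    ```shell
    #!/bin/sh
    # Illustrative `hastatus -sum` lines; on the cluster: HASTATUS=$(hastatus -sum)
    HASTATUS="${HASTATUS:-A      system03   RUNNING     0
    B cvm      system03   Y       N             ONLINE}"
    # The node has fully joined when its system state is RUNNING and the
    # cvm group is ONLINE on it.
    printf '%s\n' "$HASTATUS" | grep -q 'system03  *RUNNING' && echo "system03: RUNNING"
    printf '%s\n' "$HASTATUS" | grep -q 'cvm  *system03.*ONLINE' && echo "cvm on system03: ONLINE"
    ```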
  15. Add the shared disk groups to the cluster configuration. For example, for a shared disk group named cfsdg:

    # cfsdgadm add cfsdg system03=sw
  16. Create the mount point /mnt on system03 and run the following command for each shared mount point:

    # cfsmntadm modify /mnt add system03=rw

    See the cfsmntadm(1M) manual page.
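    When several shared mount points exist, the cfsmntadm step can be looped. A sketch; the mount-point list here is hypothetical, and the echo prefix is kept so the sketch only prints the commands it would run:

    ```shell
    #!/bin/sh
    # Hypothetical list of shared mount points; replace with your own.
    MOUNTS="${MOUNTS:-/mnt /mnt2}"
    NEWNODE="${NEWNODE:-system03}"
    for mp in $MOUNTS; do
        # Drop the echo to execute for real on the cluster.
        echo cfsmntadm modify "$mp" add "$NEWNODE=rw"
    done
    ```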

  17. Use the cfsmount command to cluster mount /mnt on the new node:

    # cfsmount /mnt