
Configuring SFCFS and CVM agents on the new node

After rebooting the new system, you must configure the SFCFS and CVM agents.

 To configure SFCFS and CVM agents on the new node

  1. Before starting VCS, verify that the /etc/VRTSvcs/conf/config/.state file is present.

    If the /etc/VRTSvcs/conf/config/.state file is not present, enter:

    # touch /etc/VRTSvcs/conf/config/.state
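The check in step 1 can be made idempotent with a small guard. This is a sketch, not part of the product tooling; `ensure_state` is a hypothetical helper name.

```shell
# Hypothetical helper: create the VCS .state marker file only if it
# is missing. The default path matches the step above.
STATE_FILE=${STATE_FILE:-/etc/VRTSvcs/conf/config/.state}

ensure_state() {
    # $1: path to the .state file
    [ -f "$1" ] || touch "$1"
}

# On the new node (needs root):
# ensure_state "$STATE_FILE"
```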

  2. Start the VCS server and vxfen on system03.
    1. Use hastart on system03 to start the VCS server.
    2. To start vxfen in disabled mode, run the following commands on system03:

      # echo vxfen_mode=disabled > /etc/vxfenmode

      # /etc/init.d/vxfen start

    3. To start vxfen in enabled mode:
      • Copy the following files from one of the existing cluster nodes to system03:

        /etc/vxfenmode

        /etc/vxfendg
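The enabled-mode setup can be sketched as a short sequence. The function below only prints the commands so they can be reviewed before running on a live cluster; the source node name `system01` is an assumption — any existing cluster node that already has the fencing files will do.

```shell
# Sketch: print the commands for enabled-mode fencing setup on the
# new node. SRC_NODE is an assumption; override it for your cluster.
SRC_NODE=${SRC_NODE:-system01}

fence_setup_cmds() {
    # Copy the fencing configuration files from an existing node,
    # then start the fencing driver.
    for f in /etc/vxfenmode /etc/vxfendg; do
        echo "scp ${SRC_NODE}:${f} ${f}"
    done
    echo "/etc/init.d/vxfen start"
}

fence_setup_cmds    # review, then execute each line on system03 as root
```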

  3. Check that there are no service groups dependent on CVM, such as SFCFS, that are still online:

# hagrp -dep cvm

  4. If there are any dependencies, take them offline on all the nodes:

# hagrp -offline cvm -sys system01

# hagrp -offline cvm -sys system02

  5. Open the VCS configuration for writing:

# haconf -makerw

  6. Add the new node to the CVM system list and specify a failover priority of X:

# hagrp -modify cvm SystemList -add system03 X

where X is one more than the index of the last system in the SystemList attribute of the CVM service group in /etc/VRTSvcs/conf/config/main.cf.
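When the existing priorities are contiguous and zero-based, X equals the number of systems already in the SystemList. The sketch below derives it from main.cf; it assumes the SystemList attribute fits on a single line, and `next_cvm_priority` is a hypothetical helper, not a product command.

```shell
# Sketch: count the entries already in the cvm SystemList to derive X.
# Assumes the attribute sits on one line of main.cf, e.g.:
#   SystemList = { system01 = 0, system02 = 1 }
next_cvm_priority() {
    # Total '=' signs on the SystemList line, minus the one used by
    # the attribute assignment itself, equals the number of systems.
    awk '/SystemList[ \t]*=/ { print gsub(/=/, "=") - 1; exit }' "$1"
}

# Illustrative sample fragment (your main.cf will differ):
sample=$(mktemp)
printf 'SystemList = { system01 = 0, system02 = 1 }\n' > "$sample"
next_cvm_priority "$sample"    # prints 2, so: hagrp -modify cvm SystemList -add system03 2
```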

  7. Add the new node to the CVM AutoStartList:

# hagrp -modify cvm AutoStartList system01 system02 system03

  8. Obtain the node ID from the CVMNodeId attribute in /etc/VRTSvcs/conf/config/main.cf. Add the new node, system03, and its node ID (2 in this example) to the cvm_clus resource:

# hares -modify cvm_clus CVMNodeId -add system03 2

  9. Write the new VCS configuration to disk:

# haconf -dump -makero

  10. Bring the CVM resources back online, in the following order:

# hagrp -online cvm -sys system01

# hagrp -online cvm -sys system02

# hagrp -online cvm -sys system03
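Because the order matters, the three commands can be generated with a loop. This sketch only prints them for review; the node names match the example cluster above.

```shell
# Print the commands to bring cvm online node by node, in order.
online_cmds() {
    for sys in system01 system02 system03; do
        echo "hagrp -online cvm -sys ${sys}"
    done
}

online_cmds    # review, then run each command as root
```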

  11. Check the system status to see whether the new node is online:

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  system01             RUNNING              0
A  system02             RUNNING              0
A  system03             RUNNING              0

-- GROUP STATE
-- Group      System               Probed     AutoDisabled    State

B  cvm        system01             Y          N               ONLINE
B  cvm        system02             Y          N               ONLINE
B  cvm        system03             Y          N               ONLINE
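A quick scripted check of the same output can confirm that every system line reports RUNNING. This is a sketch; `all_running` is a hypothetical helper, and on a live cluster you would feed it real output with `hastatus -sum | all_running`.

```shell
# Sketch: scan `hastatus -sum` output and fail if any system line
# (lines whose first field is "A") is not in the RUNNING state.
all_running() {
    awk '$1 == "A" && $3 != "RUNNING" { bad = 1 } END { exit bad }'
}

# Check against the sample output shown above:
printf 'A system01 RUNNING 0\nA system02 RUNNING 0\nA system03 RUNNING 0\n' \
    | all_running && echo "all systems RUNNING"
```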

  12. Add shared disk groups to the cluster configuration:

    # cfsdgadm add cfsdg system03=sw

  13. Create a mount point /mnt on system03 and run the following command:

    # cfsmntadm modify /mnt add system03=rw

    Refer to the cfsmntadm manual page for more details.

  14. Use the cfsmount command to cluster-mount /mnt back on all the nodes:

    # cfsmount /mnt
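The last three steps can be summarized as one ordered sequence. The sketch below only prints the commands; the disk group name cfsdg and mount point /mnt match the examples above and should be adjusted for your configuration.

```shell
# Print the cluster file system commands for the new node, in order.
cfs_add_cmds() {
    echo "cfsdgadm add cfsdg system03=sw"         # shared disk group access
    echo "cfsmntadm modify /mnt add system03=rw"  # add node to mount entry
    echo "cfsmount /mnt"                          # cluster mount on all nodes
}

cfs_add_cmds    # review, then run each command as root
```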