Migrating from SFHA to SFCFSHA 6.0

This section describes how to migrate Storage Foundation High Availability (SFHA) 6.0 to Storage Foundation Cluster File System High Availability (SFCFSHA) 6.0.

The product installer does not support a direct upgrade from a previous version of SFHA to SFCFSHA 6.0. Ensure that you upgrade the existing SFHA installation to 6.0 before beginning this procedure.

To migrate from SFHA 6.0 to SFCFSHA 6.0

  1. Back up the existing SFHA main.cf file before beginning the upgrade.
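
    For example, you can copy the file to a backup name. The path below assumes the default VCS configuration directory; adjust it if your configuration is stored elsewhere:

    # cp /etc/VRTSvcs/conf/config/main.cf \
      /etc/VRTSvcs/conf/config/main.cf.save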
  2. Confirm that the storage disks are visible on all the nodes in the 6.0 SFHA cluster.
  3. Bring all the failover service groups offline, using the following command:
    # hagrp -offline group_name -any

    This command brings the service group offline on the node where it is currently online.
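
    To confirm that the failover service groups are offline on all nodes, you can check the group states. The following output is illustrative only; the group and system names (sg_app, node01, node02) are hypothetical:

    # hagrp -state
    #Group        Attribute    System    Value
    sg_app        State        node01    |OFFLINE|
    sg_app        State        node02    |OFFLINE|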

  4. Unmount all the VxFS file systems which are not under VCS control. If the local file systems are under VCS control, then VCS unmounts the file systems when the failover service group is brought offline.

    On the nodes that have any mounted VxFS local file systems that are not under VCS control:

    # umount -F vxfs -a
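
    If you are unsure which VxFS file systems are currently mounted, a command similar to the following (platform-dependent) lists them before you unmount:

    # mount -v | grep vxfs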
  5. Stop all the activity on the volumes and deport the local disk groups. If the local disk groups are part of VCS failover service groups, then VCS deports the disk groups when the failover service group is brought offline in step 3.
    # vxvol -g dg_name stopall
    # vxdg deport dg_name
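
    For example, for a hypothetical local disk group named localdg that is not under VCS control:

    # vxvol -g localdg stopall
    # vxdg deport localdg
    # vxdg list

    After the deport completes, localdg should no longer appear in the vxdg list output.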
  6. Upgrade the existing SFHA to SFCFSHA 6.0:

    For SFCFSHA:

    # ./installsfcfsha
  7. After installation is completed, reboot all the nodes.
  8. After all the nodes are rebooted, bring up CVM and the resources as described in the following steps.
  9. Verify that all SFCFSHA processes have started, using the following commands:
    # gabconfig -a
    # hastatus -sum
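
    Illustrative gabconfig -a output for a healthy two-node cluster is shown below; the generation numbers are hypothetical, and the exact set of ports depends on the features that are configured:

    GAB Port Memberships
    ===============================================================
    Port a gen   a36e0003 membership 01
    Port h gen   a36e0006 membership 01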
  10. Configure CVM and the resources:
    # /opt/VRTS/bin/cfscluster config

    This command automatically detects the cluster configuration, such as the node names, the cluster name, and the cluster ID, and brings up the CVM resources on all the nodes in the cluster.

    To verify:

    # gabconfig -a
    # hastatus -sum
  11. Find out which node in the cluster is the master node:
    # /opt/VRTS/bin/vxclustadm nidmap
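
    The output indicates each node's role. The following output is illustrative; node01 and node02 are hypothetical node names:

    # /opt/VRTS/bin/vxclustadm nidmap
    Name                 CVM Nid    CM Nid    State
    node01               0          0         Joined: Master
    node02               1          1         Joined: Slave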
  12. On the master node, import disk groups:
    # vxdg -s import dg_name
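
    For example, for a hypothetical disk group named appdg:

    # vxdg -s import appdg
    # vxdg list

    In the vxdg list output, the STATE column for appdg should include the shared flag.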

    This release supports running certain commands, such as vxdg -s import dg_name, from a slave node.

    See the Storage Foundation Cluster File System Administrator's Guide for more information.

  13. Start all the volumes in the imported disk group by running the following command:
    # vxvol -g dg_name startall
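
    For example, for the hypothetical shared disk group appdg, you can start its volumes and then confirm that they are enabled:

    # vxvol -g appdg startall
    # vxprint -g appdg -v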
  14. To bring the VxFS file system under VCS control, run the following command:
    # cfsmntadm add shared_diskgroup_name volume_name \
      mount_point all=cluster_mount_options

    This command creates parallel service groups in VCS that contain the specified disk group, volume, and mount point.
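
    For example, with a hypothetical shared disk group appdg, volume appvol, and mount point /app, mounted read-write on all nodes:

    # cfsmntadm add appdg appvol /app all=rw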

  15. Mount the CFS file system on all the nodes in the cluster:
    # cfsmount mount_point
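
    Continuing the hypothetical example from step 14, mount /app on all the nodes and then check its status:

    # cfsmount /app
    # cfscluster status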
  16. On the CVM master node, import the remaining disk groups that must be in shared mode:

    Import all other local disk groups that were not imported in shared mode in step 12.

    # vxdg -s import dg_name
  17. Start all the volumes in the disk groups that have been imported as shared, using the following command:
    # vxvol -g dg_name startall
  18. Repeat steps 14 and 15 for any VxFS file systems that VCS needs to monitor through failover service groups.