Modifying the VCS Configuration on the Secondary Site

The following are highlights of the procedure to modify the existing VCS configuration on the secondary site:

These steps are similar to those performed on the primary site.

Note:

The example procedure illustrates the configuration process using a manual file-editing method. If you are using the Java Console, some steps do not apply in the same order.

To modify VCS on the secondary site

  1. Log in to one of the nodes on the secondary site as root.
  2. Use the following command to save the existing configuration to disk, and make the configuration read-only while making changes:
    # haconf -dump -makero
  3. Use the following command to make a backup copy of the main.cf file:
    # cd /etc/VRTSvcs/conf/config
    # cp main.cf main.orig
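    A small variant of step 3 (a sketch, not part of the documented procedure): keep timestamped backups so repeated edits never overwrite the previous copy. The default path is the standard VCS configuration directory; the directory argument exists so the function can be pointed elsewhere.

```shell
# Sketch: timestamped backup of main.cf (assumption: standard config path).
backup_maincf() {
    cfgdir=${1:-/etc/VRTSvcs/conf/config}
    # Copy main.cf to main.cf.<timestamp> in the same directory.
    cp "$cfgdir/main.cf" "$cfgdir/main.cf.$(date +%Y%m%d%H%M%S)"
}
```

    Running `backup_maincf` from any node then leaves the original main.orig-style backup untouched while preserving each intermediate edit.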
  4. Use vi or another text editor to edit the main.cf file. Edit the CVM group on the secondary site.

    Review the sample configuration file after the SFCFSHA installation to see the CVM configuration.

    In our example, the secondary site has clus2 consisting of the nodes mercury and jupiter. To modify the CVM service group on the secondary site, use the CVM group on the primary site as your guide.

  5. Add a failover service group using the appropriate values for your cluster and nodes. Include the following resources:

    • RVGLogowner resource. The node on which the group is online functions as the log owner (the node connected to the second cluster for the purpose of replicating data).

    • IP resource

    • NIC resources

    Example RVGLogowner service group:

    group rlogowner (
        SystemList = { mercury = 0, jupiter = 1 }
        AutoStartList = { mercury, jupiter }
        )

    IP logowner_ip (
        Device = bge0
        Address = "10.11.9.102"
        NetMask = "255.255.255.0"
        )

    NIC nic (
        Device = bge0
        NetworkHosts = { "10.10.8.1" }
        NetworkType = ether
        )

    RVGLogowner logowner (
        RVG = rac1_rvg
        DiskGroup = oradatadg
        )

    requires group RVGgroup online local firm
    logowner requires logowner_ip
    logowner_ip requires nic
  6. Add the RVG service group using the appropriate values for your cluster and nodes.

    The following is an example RVGgroup service group:

    group RVGgroup (
        SystemList = { mercury = 0, jupiter = 1 }
        Parallel = 1
        AutoStartList = { mercury, jupiter }
        )

    RVGShared racdata_rvg (
        RVG = rac1_rvg
        DiskGroup = oradatadg
        )

    CVMVolDg racdata_voldg (
        CVMDiskGroup = oradatadg
        CVMActivation = sw
        )

    requires group cvm online local firm
    racdata_rvg requires racdata_voldg
  7. Add an application service group. Use the application service group on the primary site as a model for the application service group on the secondary site.

    • Define the application service group as a global group by specifying the clusters on the primary and secondary sites as values for the ClusterList group attribute.

      Note:

      This action must be performed on the primary or secondary site, but not on both.

    • Assign this global group the same name as the group on the primary site; for example, database_grp.

    • Include the ClusterList and ClusterFailOverPolicy cluster attributes. Symantec recommends the Manual value for ClusterFailOverPolicy.

    • Add the RVGSharedPri resource to the group configuration.

    • Remove the CVMVolDg resource, if it has been configured in your previous configuration. This resource is now part of the RVG service group.

    • Specify that the service group depends (online local firm) on the RVG service group.

    Example of the application group on the secondary site:

    group database_grp (
        SystemList = { mercury = 0, jupiter = 1 }
        ClusterList = { clus2 = 0, clus1 = 1 }
        Parallel = 1
        OnlineRetryInterval = 300
        ClusterFailOverPolicy = Manual
        Authority = 1
        AutoStartList = { mercury, jupiter }
        )

    RVGSharedPri ora_vvr_shpri (
        RvgResourceName = racdata_rvg
        OnlineRetryLimit = 0
        )

    CFSMount oradata_mnt (
        MountPoint = "/oradata"
        BlockDevice = "/dev/vx/dsk/oradatadg/racdb_vol"
        Critical = 0
        )

    Process vxfend (
        PathName = "/sbin/vxfend"
        Arguments = "-m sybase -k /tmp/vcmp_socket"
        )

    requires group RVGgroup online local firm
    oradata_mnt requires ora_vvr_shpri
  8. Save and close the main.cf file.
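    An optional sanity check after editing (a sketch, not part of the documented procedure): resource definitions in main.cf have the form "Type name (", so an accidentally duplicated resource block shows up as a repeated type/name pair, which hacf -verify would reject.

```shell
# Sketch: report any resource (or group) defined more than once in a main.cf.
# A definition line looks like "RVGSharedPri ora_vvr_shpri (".
check_dup_resources() {
    awk '$3 == "(" && $1 ~ /^[A-Za-z]/ { count[$1 " " $2]++ }
         END { for (r in count) if (count[r] > 1) print r }' "$1"
}
```

    For example, `check_dup_resources /etc/VRTSvcs/conf/config/main.cf` prints nothing when every resource is defined exactly once.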
  9. Use the following command to verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:
      # hacf -verify /etc/VRTSvcs/conf/config
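    In a script, the restart in step 10 can be gated on the verification result, since hacf exits nonzero on a syntax error. A minimal sketch; the hacf path is the standard install location (an assumption, adjust for your environment), and the HACF override exists only so the sketch can be exercised outside a cluster node.

```shell
# Sketch: run hacf -verify and refuse to continue on failure.
verify_config() {
    if "${HACF:-/opt/VRTSvcs/bin/hacf}" -verify "${1:-/etc/VRTSvcs/conf/config}"; then
        echo "main.cf verified"
    else
        echo "verification failed; do not restart VCS" >&2
        return 1
    fi
}
```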
  10. Stop and restart VCS.
    # hastop -all -force

    Wait for port h to stop on all nodes, and then restart VCS with the new configuration on all nodes:

    # hastart
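    The wait for port h can be scripted by polling GAB membership, since gabconfig -a lists the open GAB ports. A sketch only: the gabconfig path and the polling interval are assumptions, and the GABCONFIG override exists so the loop can be dry-run.

```shell
# Sketch: block until GAB port h (VCS) is no longer listed on this node.
wait_port_h() {
    while "${GABCONFIG:-/sbin/gabconfig}" -a 2>/dev/null | grep -q "Port h"; do
        sleep 5
    done
}
```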
  11. Verify that VCS brings all resources online. On one node, enter the following command:
    # hagrp -display

    On the primary site, the application, RVG, and CVM groups are online on both nodes, and the RVGLogowner and ClusterService groups are online on one node of the cluster. The same applies on the secondary site, except that the application group must remain offline. If the RVG group or the RVGLogowner group is only partially online, bring it online manually with the hagrp -online command.
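    Bringing a partially online group up by hand can be sketched as follows, using the example group and node names from this procedure (adjust to your cluster). The HAGRP indirection is not part of the documented procedure; it exists only so the sketch can be dry-run, for example with HAGRP=echo.

```shell
# Sketch: bring the example groups online with hagrp -online.
online_example_groups() {
    # RVGgroup is parallel, so online it on every node.
    for sys in mercury jupiter; do
        "${HAGRP:-hagrp}" -online RVGgroup -sys "$sys"
    done
    # rlogowner is a failover group: online on one node only.
    "${HAGRP:-hagrp}" -online rlogowner -sys mercury
}
```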

  12. Verify that the service groups and their resources are brought online. On one node, enter the following command:
    # hagrp -display

    The application service group is offline on the secondary site, but the ClusterService, CVM, RVG log owner, and RVG groups are online.

    This completes the setup for an SFCFSHA global cluster using VVR for replication. Symantec recommends testing a global cluster before putting it into production.