Modifying the VCS configuration on the secondary site

The following are highlights of the procedure to modify the existing VCS configuration on the secondary site:

The following steps are similar to those performed on the primary site.

To modify VCS on the secondary site

  1. Log into one of the nodes on the secondary site as root.
  2. Use the following command to save the existing configuration to disk and make the configuration read-only before you edit the main.cf file:
    # haconf -dump -makero
  3. Use the following command to make a backup copy of the main.cf file:
    # cd /etc/VRTSvcs/conf/config
    # cp main.cf main.orig
  4. Use vi or another text editor to edit the main.cf file. Edit the CVM group on the secondary site.

    Review the sample configuration file after the VCS installation to see the CVM configuration.

    See “To view sample configuration files for SF Oracle RAC”.

    See “To view sample configuration files for SF Sybase CE”.

    In our example, the secondary site has clus2 consisting of the nodes sys3 and sys4. To modify the CVM service group on the secondary site, use the CVM group on the primary site as your guide.

  5. Add a failover service group using the appropriate values for your cluster and nodes. Include the following resources:

    • RVGLogowner resource. The node on which the group is online functions as the log owner (node connected to the second cluster for the purpose of replicating data).

    • IP resource

    • NIC resources

    Example RVGLogowner service group:
    group rlogowner (
        SystemList = { sys3 = 0, sys4 = 1 }
        AutoStartList = { sys3, sys4 }
        )

    IP logowner_ip (
        Device = eth0
        Address = "10.11.9.102"
        NetMask = "255.255.255.0"
        )

    NIC nic (
        Device = eth0
        NetworkHosts = { "10.10.8.1" }
        NetworkType = ether
        )

    RVGLogowner logowner (
        RVG = dbdata_rvg
        DiskGroup = dbdatadg
        )

    requires group RVGgroup online local firm
    logowner requires logowner_ip
    logowner_ip requires nic
  6. Add the RVG service group using the appropriate values for your cluster and nodes.

    The following is an example RVGgroup service group:

    group RVGgroup (
        SystemList = { sys3 = 0, sys4 = 1 }
        Parallel = 1
        AutoStartList = { sys3, sys4 }
        )

    RVGShared dbdata_rvg (
        RVG = dbdata_rvg
        DiskGroup = dbdatadg
        )

    CVMVolDg dbdata_voldg (
        CVMDiskGroup = dbdatadg
        CVMActivation = sw
        )

    requires group cvm online local firm
    dbdata_rvg requires dbdata_voldg
  7. It is advisable to set the OnlineRetryLimit and OfflineWaitLimit attributes of the IP resource type to 1 on both clusters:
    # hatype -modify IP  OnlineRetryLimit  1
    # hatype -modify IP  OfflineWaitLimit  1
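
    The hatype -modify commands change the cluster configuration, so the configuration must be writable when you run them. The following is a sketch of the full sequence, assuming the configuration is currently read-only:

    # haconf -makerw
    # hatype -modify IP OnlineRetryLimit 1
    # hatype -modify IP OfflineWaitLimit 1
    # haconf -dump -makero

    You can confirm the new values afterwards with hatype -display IP.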
  8. Add a database service group. Use the database service group on the primary site as a model for the database service group on the secondary site.

    • Define the database service group as a global group by specifying the clusters on the primary and secondary sites as values for the ClusterList group attribute.

    • Assign this global group the same name as the group on the primary site. For example, database_grp.

    • Include the ClusterList and ClusterFailOverPolicy cluster attributes. Symantec recommends the Manual value for the ClusterFailOverPolicy attribute.

    • Add the RVGSharedPri resource to the group configuration.

    • Remove the CVMVolDg resource, if it was part of your previous configuration. This resource is now part of the RVG service group.

    • Specify the service group to depend (online, local, firm) on the RVG service group.

    See configuration examples below.
  9. Save and close the main.cf file.
  10. Use the following command to verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:
      # hacf -verify /etc/VRTSvcs/conf/config
  11. Stop and restart VCS.
    # hastop -all -force

    Wait for port h to stop on all nodes, and then restart VCS with the new configuration on each node, one at a time.

    # hastart
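
    Before running hastart, you can confirm that port h (the VCS engine) has stopped by checking the GAB port membership on each node. For example:

    # gabconfig -a

    When no line for Port h appears in the membership output on any node, start VCS on each node in turn with hastart.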
  12. Verify that VCS brings all resources online. On one node, enter the following command:
    # hagrp -display

    On the primary site, the database, RVG, and CVM groups are online on both nodes, and the RVGLogowner and ClusterService groups are online on one node of the cluster. If either the RVG group or the RVGLogowner group is partially online, manually bring the group online using the hagrp -online command. The same applies to the secondary site, except that the database group must remain offline.
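
    For example, to manually bring a partially online group online on a specific node (group and node names here follow the sample configuration):

    # hagrp -online RVGgroup -sys sys3
    # hagrp -online rlogowner -sys sys3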

  13. Verify which service groups and resources have been brought online. On one node, enter the following command:
    # hagrp -display

    The database service group is offline on the secondary site, but the ClusterService, CVM, RVG log owner, and RVG groups are online.
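
    The state of an individual group can also be checked directly; for example (group names follow the Oracle RAC sample configuration):

    # hagrp -state database_grp
    # hagrp -state RVGgroup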

    This completes the setup for a global cluster using VVR for replication. Symantec recommends testing a global cluster before putting it into production.

Example of the Oracle RAC database group on the secondary site:

group database_grp (
    SystemList = { sys3 = 0, sys4 = 1 }
    ClusterList = { clus2 = 0, clus1 = 1 }
    Parallel = 1
    OnlineRetryInterval = 300
    ClusterFailOverPolicy = Manual
    Authority = 1
    AutoStartList = { sys3, sys4 }
    )

CFSMount dbdata_mnt (
    MountPoint = "/dbdata"
    BlockDevice = "/dev/vx/dsk/dbdatadg/dbdata_vol"
    Critical = 0
    )

RVGSharedPri dbdata_vvr_shpri (
    RvgResourceName = dbdata_rvg
    OnlineRetryLimit = 0
    )

Oracle rac_db (
    Sid @sys3 = vrts1
    Sid @sys4 = vrts2
    Owner = oracle
    Home = "/oracle/orahome"
    Pfile @sys3 = "/oracle/orahome/dbs/initvrts1.ora"
    Pfile @sys4 = "/oracle/orahome/dbs/initvrts2.ora"
    StartUpOpt = SRVCTLSTART
    ShutDownOpt = SRVCTLSTOP
    )

requires group RVGgroup online local firm
dbdata_mnt requires dbdata_vvr_shpri
rac_db requires dbdata_mnt

Example of the Sybase ASE CE database group on the secondary site:

group sybase (
    SystemList = { sys3 = 0, sys4 = 1 }
    ClusterList = { clus2 = 0, clus1 = 1 }
    Parallel = 1
    OnlineRetryInterval = 300
    ClusterFailOverPolicy = Manual
    Authority = 1
    # AutoStart = 0 here so faulting will not happen
    AutoStartList = { sys3, sys4 }
    )
 
CFSMount dbdata_mnt (
    MountPoint = "/dbdata"
    BlockDevice = "/dev/vx/dsk/dbdatadg/dbdata_vol"
    )

RVGSharedPri dbdata_vvr_shpri (
    RvgResourceName = dbdata_rvg
    OnlineRetryLimit = 0
    )

CFSMount quorum_101_quorumvol_mnt (
    MountPoint = "/quorum"
    BlockDevice = "/dev/vx/dsk/quorum_101/quorumvol"
    )

CVMVolDg quorum_101_voldg (
    CVMDiskGroup = quorum_101
    CVMVolume = { quorumvol }
    CVMActivation = sw
    )

Sybase ase (
    Sid @sys3 = ase1
    Sid @sys4 = ase2
    Owner = sybase
    Home = "/sybase"
    Version = 15
    SA = sa
    Quorum_dev = "/quorum/q.dat"
    )

requires group RVGgroup online local firm
dbdata_mnt requires dbdata_vvr_shpri
ase requires vxfend
ase requires dbdata_mnt
ase requires quorum_101_quorumvol_mnt
quorum_101_quorumvol_mnt requires quorum_101_voldg