The setup requirements for the secondary site parallel the requirements for the primary site with a few additions or exceptions as noted below.
Table: Tasks for setting up a parallel global cluster at the secondary site
Task | Description
---|---
Set up the cluster | See “To set up the cluster on the secondary site”.
Set up the database | See “To set up the SFCFSHA database for the secondary site”. See “To set up the Oracle RAC database for the secondary site”. See “To set up the Sybase ASE CE database for the secondary site”.
Important requirements for parallel global clustering:
Cluster names on the primary and secondary sites must be unique.
You must use the same OS user and group IDs for the database installation and configuration on both the primary and secondary clusters.
For Oracle RAC, you must use the same directory structure, names, and permissions for the CRS/GRID and database binaries.
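A quick way to confirm that the user and group IDs match is to compare the `id` output on a node at each site. A minimal sketch; `DBUSER` is an assumed variable naming the database software owner (for example, oracle), falling back to the current user so the sketch runs anywhere:

```shell
# Run on one node at each site and compare the output line for line;
# the uid and gid numbers must be identical on both clusters.
# DBUSER is a placeholder for the database owner; default to the current user.
DBUSER="${DBUSER:-$(id -un)}"
id "$DBUSER"
```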
You can use an existing parallel cluster or you can install a new cluster for your secondary site.
Consult your product installation guide for planning information as well as specific configuration guidance for the steps below.
See the Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide.
See the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.
See the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.
To set up the cluster on the secondary site
For a multi-node cluster, configure I/O fencing.
For SFCFSHA, you will need to set up:
Local storage for database software
Shared storage for resources which are not replicated as part of the hardware-based or host-based replication
Replicated storage for database files
You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary site.
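One way to check that the directory structure and permissions match the primary is to print them in a diff-friendly form on a node at each site. A minimal sketch using GNU `stat`; `/tmp/demo_home` is a stand-in for the real binary installation path:

```shell
# Print permissions, owner, group, and path for a binary directory in a
# form that is easy to diff between the primary and secondary sites.
# BIN_DIR is a placeholder; /tmp/demo_home stands in for the real path.
DIR="${BIN_DIR:-/tmp/demo_home}"
mkdir -p "$DIR"
stat -c '%A %U %G %n' "$DIR"
```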
For SF Oracle RAC, you will need to set up:
Local storage for Oracle RAC and CRS binaries
Shared storage for OCR and Vote disk which is not replicated as part of the hardware-based or host-based replication
Replicated shared storage for database files
You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries as on the primary site.
For SF Sybase CE, you will need to set up:
Shared storage for the file system and Cluster File System for the Sybase ASE CE binaries, which is not replicated
Shared storage for the quorum device which is not replicated
Replicated storage for database files
You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary site.
Verify the configuration using procedures in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.
After successful database installation and configuration, verify that database resources are up on all nodes.
Install the following software:
Oracle Clusterware/Grid Infrastructure software
Oracle RAC database software
The Oracle RAC binary versions must be exactly the same on both sites.
After successful Oracle RAC installation and configuration, verify that CRS daemons and resources are up on all nodes.
$ $GRID_HOME/bin/crsctl stat res -t
Do not create the database. The database will be replicated from the primary site.
To set up the SFCFSHA database for the secondary site
Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database data and control files when a failover occurs and the secondary is promoted to become the primary site.
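The directory creation can be sketched as follows. The paths are hypothetical placeholders; substitute the mount-point paths actually used on your primary site:

```shell
# Hypothetical CFS mount-point paths; use the same paths as on the primary.
# CFS_BASE defaults to a /tmp location so the sketch runs anywhere; in
# production these would be top-level paths such as /oradata and /archive.
BASE="${CFS_BASE:-/tmp/cfs_mounts}"
mkdir -p "$BASE/oradata" "$BASE/archive"
ls -d "$BASE/oradata" "$BASE/archive"
```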
To set up the Oracle RAC database for the secondary site
Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database data and control files when a failover occurs and the secondary is promoted to become the primary site.
For example, copy init$ORACLE_SID.ora and orapw$ORACLE_SID from $ORACLE_HOME/dbs on the primary to $ORACLE_HOME/dbs on the secondary.
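The copy step can be sketched as below. The SID and directory paths are placeholders, and stand-in files are created so the sketch is self-contained; in practice the copy runs between sites (for example, with scp), not within one host:

```shell
# Stage the primary's parameter and password files on the secondary.
# ORACLE_SID, PRIMARY_DBS, and SECONDARY_DBS are placeholders for this
# sketch; the real source and destination are $ORACLE_HOME/dbs at each site.
ORACLE_SID="${ORACLE_SID:-racdb1}"
SRC="${PRIMARY_DBS:-/tmp/primary/dbs}"
DST="${SECONDARY_DBS:-/tmp/secondary/dbs}"
mkdir -p "$SRC" "$DST"
: > "$SRC/init${ORACLE_SID}.ora"   # stand-ins for the real files
: > "$SRC/orapw${ORACLE_SID}"
cp "$SRC/init${ORACLE_SID}.ora" "$SRC/orapw${ORACLE_SID}" "$DST/"
```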
$ mkdir -p $ORACLE_BASE/diag
$ mkdir -p $ORACLE_BASE/admin
$ mkdir -p $ORACLE_BASE/admin/adump
On both the primary and secondary sites, edit the file:
$ORACLE_HOME/dbs/init$ORACLE_SID.ora
as follows:
remote_listener = 'SCAN_NAME:1521'
SPFILE=<SPFILE NAME>
You need to register the database only once, from any node in the secondary cluster. Use the following commands as the Oracle database software owner:
$ $ORACLE_HOME/bin/srvctl add database -d database_name -o oracle_home
$ $ORACLE_HOME/bin/srvctl modify database -d database_name -y manual
You need only perform this change once from any node in the cluster.
If the secondary cluster has more than one node, you must add the instances using the srvctl command:
$ $ORACLE_HOME/bin/srvctl add instance -d database_name \
  -i instance_name -n node-name
For example, if the database name is racdb, the instance name on sys3 is racdb1 and the instance name on sys4 is racdb2.
$ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb1 -n sys3
$ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb2 -n sys4
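When the secondary cluster has several nodes, the per-node add-instance step can be scripted. A minimal sketch that generates (rather than runs) the commands, using the example names from the text; pipe the output to sh to execute it:

```shell
# Print one "srvctl add instance" command per node for review.
# racdb, sys3, and sys4 are the example names used in this section.
DB=racdb
i=1
for node in sys3 sys4; do
  echo "\$ORACLE_HOME/bin/srvctl add instance -d $DB -i ${DB}${i} -n $node"
  i=$((i + 1))
done
```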
To set up the Sybase ASE CE database for the secondary site