Configuring the secondary site

The setup requirements for the secondary site parallel the requirements for the primary site with a few additions or exceptions as noted below.

Table: Tasks for setting up a parallel global cluster at the secondary site

  Task                    Description
  ----------------------  ----------------------------------------------------------------
  Set up the cluster      See “To set up the cluster on the secondary site”.
  Set up the database     See “To set up the SFCFSHA database for the secondary site”.
                          See “To set up the Oracle RAC database for the secondary site”.
                          See “To set up the Sybase ASE CE database for the secondary site”.

Important requirements for parallel global clustering:

You can use an existing parallel cluster or install a new cluster for your secondary site.

Consult your product installation guide for planning information as well as specific configuration guidance for the steps below:

See the Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide.

See the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.

See the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.

To set up the cluster on the secondary site

  1. Install and configure servers and storage.
  2. If you are using hardware-based replication, install the software for managing your array.
  3. Verify that you have the correct installation options enabled, whether you are using keyless licensing or installing keys manually. You must have the GCO option enabled for a global cluster. If you are using VVR for replication, you must have the VVR option enabled.
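
    To confirm which options are enabled, you can run the Veritas licensing utilities as a quick check. The paths below assume the default installation location, and the output format varies by release: vxkeyless display lists the keyless product levels in use, and vxlicrep reports license keys that were installed manually.

    # /opt/VRTSvlic/bin/vxkeyless display
    # /opt/VRTSvlic/bin/vxlicrep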
  4. Prepare, install, and configure your Storage Foundation and High Availability (SFHA) Solutions product according to the directions in your product's installation guide.

    For a multi-node cluster, configure I/O fencing.

  5. For a single-node cluster, do not enable I/O fencing. Fencing will run in disabled mode.
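
    Whether fencing is enabled (multi-node) or running in disabled mode (single-node), you can confirm the current fencing mode from any node. This is a quick check; the exact output varies by release.

    # vxfenadm -d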
  6. Prepare systems and storage for a global cluster. Identify the hardware and storage requirements before installing your database software.

    For SFCFSHA, you will need to set up:

    • Local storage for database software

    • Shared storage for resources which are not replicated as part of the hardware-based or host-based replication

    • Replicated storage for database files

    • You must use the same directory structure, names, and permissions for the database binaries as on the primary site.

    For SF Oracle RAC, you will need to set up:

    • Local storage for Oracle RAC and CRS binaries

    • Shared storage for OCR and Vote disk which is not replicated as part of the hardware-based or host-based replication

    • Replicated shared storage for database files

    • You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries as on the primary site.

    For SF Sybase CE, you will need to set up:

    • Shared storage for the File System and Cluster File System for the Sybase ASE CE binaries, which is not replicated

    • Shared storage for the quorum device which is not replicated

    • Replicated storage for database files

    • You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary site.

    • Verify the configuration using procedures in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.

    Note:

    You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries as on the primary site.
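
    The following is a minimal sketch of creating non-replicated shared storage on the secondary site. The disk group, disk, volume, and size (bindg, disk01, binvol, 10g) are placeholders, and the mkfs syntax shown is for Linux (use -F vxfs on Solaris); run the vxdg and vxassist commands from the CVM master node and adjust them to your configuration.

    # vxdg -s init bindg disk01
    # vxassist -g bindg make binvol 10g
    # mkfs -t vxfs /dev/vx/rdsk/bindg/binvol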

  7. For SFCFSHA, install and configure your database binaries. Consult your database documentation.

    Note:

    Resources which will not be replicated must be on non-replicated shared storage.

    After successful database installation and configuration, verify that database resources are up on all nodes.
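
    For example, if the database resources are under VCS control, you can verify their state from any node with the following command (a quick check; group and resource names depend on your configuration):

    # hastatus -sum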

  8. For Oracle RAC, see the instructions in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide for installing and configuring:
    • Oracle Clusterware/Grid Infrastructure software

    • Oracle RAC database software

    • The Oracle RAC binary versions must be exactly the same on both sites.

    Note:

    OCR and Vote disk must be on non-replicated shared storage.

    After successful Oracle RAC installation and configuration, verify that CRS daemons and resources are up on all nodes.

    $GRID_HOME/bin/crsctl stat res -t
  9. For SF Sybase CE, see the instructions in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide for installing and configuring Sybase ASE CE binaries.

    Note the following configuration requirements:

    • The quorum device must be on non-replicated shared storage.

    • The Sybase binary versions must be exactly the same on both sites, including the ESD versions.

    • Manually configure the mounts and volumes for the Sybase binaries under VCS control on the secondary site.
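
    One way to bring the binary mount under VCS control is with the cluster file system administration commands, as in this sketch. The disk group, volume, and mount point (sybbindg, sybbinvol, /sybase) are placeholders, and the exact cfsmntadm options depend on your release; follow the procedure in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.

    # cfsmntadm add sybbindg sybbinvol /sybase all=suid,rw
    # cfsmount /sybase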

Do not create the database. The database will be replicated from the primary site.

To set up the SFCFSHA database for the secondary site

  1. If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.

    Create the directory for the CFS mount point which will host the database data and control files.

  2. If you are using VVR for replication, create an identical disk group and volumes for the replicated content, with the same names and sizes as on the primary site.

    Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.
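
    As a sketch, you can record the volume names and sizes on the primary site and then recreate them on the secondary site. The disk group, disk, volume, size, and mount point (dbdatadg, disk01, dbdatavol, 20g, /data) are placeholders; run the vxdg and vxassist commands from the CVM master node on the secondary site.

    On the primary site, record the volume names and sizes:

    # vxprint -g dbdatadg -v

    On the secondary site, recreate the disk group and volumes to match, and create the CFS mount point directory:

    # vxdg -s init dbdatadg disk01
    # vxassist -g dbdatadg make dbdatavol 20g
    # mkdir -p /data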

  3. Create subdirectories for the database as you did on the primary site.

To set up the Oracle RAC database for the secondary site

  1. If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.

    Create the directory for the CFS mount point which will host the database data and control files.

  2. If you are using VVR for replication, create an identical disk group and volumes for the replicated content, with the same names and sizes as on the primary site.

    Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.

  3. On each node in the cluster, copy the initialization files (pfiles, spfiles) from the primary cluster to the secondary cluster, maintaining the same directory path.

    For example, copy init$ORACLE_SID.ora and orapw$ORACLE_SID.ora from $ORACLE_HOME/dbs at the primary to $ORACLE_HOME/dbs at the secondary.
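
    A minimal sketch using scp, run from a node on the primary cluster with the Oracle environment set. The node name sys3 is illustrative, and this assumes $ORACLE_HOME is the same path on both clusters:

    $ scp $ORACLE_HOME/dbs/init$ORACLE_SID.ora sys3:$ORACLE_HOME/dbs/
    $ scp $ORACLE_HOME/dbs/orapw$ORACLE_SID.ora sys3:$ORACLE_HOME/dbs/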

  4. As the Oracle user, create the following subdirectories on the secondary site to parallel the directories on the primary site:
    $ mkdir -p $ORACLE_BASE/diag
    $ mkdir -p $ORACLE_BASE/admin
    $ mkdir -p $ORACLE_BASE/admin/adump 

    On both the primary and secondary sites, edit the file:

    $ORACLE_HOME/dbs/init$ORACLE_SID.ora

    to include the following entries:

    remote_listener = 'SCAN_NAME:1521'
    SPFILE=<SPFILE NAME>
  5. Configure listeners on the secondary site with the same names as on the primary site. You can do this by one of the following methods:
    • Copy the listener.ora and tnsnames.ora files from the primary site and update the names as appropriate for the secondary site.

    • Use Oracle's netca utility to configure the listener.ora and tnsnames.ora files on the secondary site.
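
    For the first method, a minimal sketch run from a node on the primary cluster; sys3 is an illustrative secondary node, and this assumes the network files are in the default $ORACLE_HOME/network/admin directory:

    $ scp $ORACLE_HOME/network/admin/listener.ora sys3:$ORACLE_HOME/network/admin/
    $ scp $ORACLE_HOME/network/admin/tnsnames.ora sys3:$ORACLE_HOME/network/admin/

    After copying, update the host names and any other site-specific entries for the secondary site.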

  6. On the secondary site, register the database using the srvctl command as the database software owner.

    Registering the database has to be done only once, from any node in the secondary cluster. Use the following command as the Oracle database software owner:

    $ $ORACLE_HOME/bin/srvctl add database -d database_name -o oracle_home
  7. To prevent automatic database instance restart, change the management policy for the database from AUTOMATIC to MANUAL using the srvctl command:
    $ $ORACLE_HOME/bin/srvctl modify database -d database_name -y manual
    					

    You need only perform this change once from any node in the cluster.

  8. Register the instances using the srvctl command. Execute the following command for each node:
    $ $ORACLE_HOME/bin/srvctl add instance -d database_name \
    -i instance_name -n node-name
    					

    If the secondary cluster has more than one node, you must add an instance for each node using the srvctl command.

    For example, if the database name is racdb, the instance name on sys3 is racdb1 and the instance name on sys4 is racdb2.

    $ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb1 -n sys3
    
    $ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb2 -n sys4
    					
  9. Register all other resources (for example, the listener, ASM, and services) that are present in the cluster/GRID at the primary site on the secondary site, using the srvctl command or crs_register. For command details, see the Oracle documentation on Metalink.
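
    For example, a hypothetical sketch of registering a database service that exists on the primary site; the service, database, and instance names (rac_srv, racdb, racdb1, racdb2) are placeholders, and the exact srvctl syntax depends on your Oracle version:

    $ $ORACLE_HOME/bin/srvctl add service -d racdb -s rac_srv -r racdb1,racdb2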

To set up the Sybase ASE CE database for the secondary site

  1. Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database files when the failover occurs and the secondary is promoted to become the primary site.
  2. Create an identical disk group and volumes for the replicated content, with the same names and sizes as on the primary site.