Configuring and starting up ASM on remaining nodes for 11gR2 or 12c

This procedure is applicable when Oracle Grid Infrastructure is installed on all nodes and ASM is configured on the first node. See Configuring Oracle ASM on the first node of the cluster.

Configure the remaining nodes by following the procedure given below.

To create and start ASM on the remaining nodes:

  1. Copy the ASM server parameter file (spfile) from the ASM disk group so that it can be used on the remote node.

    For example, spget displays the location of the registered ASM spfile, which spcopy then copies to the local file system:

    ASMCMD> spget
    +DATA/asm/asmparameterfile/<registry_file>

    ASMCMD> spcopy +DATA/asm/asmparameterfile/<registry_file> /u01/oraHome/dbs/spfileASM.ora

  2. Stop the database and dismount the ASM disk group on the first node.
  3. Copy the spfile from the first node to the remote node.
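    For example, assuming the remote node's host name is sys2 (a placeholder), the spfile copied out in step 1 can be transferred with scp:

    $ scp /u01/oraHome/dbs/spfileASM.ora sys2:/u01/oraHome/dbs/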
  4. Copy $ORACLE_BASE/admin/SID* from the first node to the remote node.
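    For example, assuming the database SID is oradb and the remote node is sys2 (both placeholders):

    $ scp -r $ORACLE_BASE/admin/oradb* sys2:$ORACLE_BASE/admin/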
  5. Add an ASM instance on the remote node.

    For example: $/u01/product/11.2.0/grid/bin/srvctl add asm -p /u01/oraHome/dbs/spfileASM.ora

  6. Start the ASM instance using the srvctl command.
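    For example, using the same grid home path as in the previous step (the exact options required depend on your configuration; run srvctl start asm -h for the full list):

    $ /u01/product/11.2.0/grid/bin/srvctl start asm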
  7. Import the VxVM disk group on the remote node.
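    For example, assuming the VxVM disk group that backs the ASM disks is named asm_dg (a placeholder), import it and start its volumes:

    $ vxdg import asm_dg
    $ vxvol -g asm_dg startall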

To configure the database on remaining nodes that use ASM disk groups (11gR2 or 12c):

  1. Run the srvctl add database command to register the Oracle database(s) running on the node.

    Use the credentials of the Oracle software owner to register the database. For the complete list of parameters, refer to the Oracle documentation.
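    For example, assuming a database named oradb with Oracle home /u01/app/oracle/product/11.2.0/dbhome_1 and an spfile stored in the DATA disk group (all placeholders):

    $ srvctl add database -d oradb -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/oradb/spfileoradb.ora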

  2. Log in to the Oracle ASM instance running on the remote node.
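    For example, set the environment to the ASM instance SID (typically +ASM on a single-instance node) and connect with the SYSASM privilege:

    $ export ORACLE_SID=+ASM
    $ sqlplus / as sysasm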
  3. Run the SQL> ALTER DISKGROUP <DGname> MOUNT command to mount the Oracle ASM disk groups.
  4. Repeat Step 3 to mount all the required disk groups.
  5. Run the $GRID_HOME/bin/crsctl stat res -t -init command to check if the disk groups are auto-registered to OHASD.

    The output displays ora.<DGname>.dg for the registered disk groups.

  6. Run the $GRID_HOME/bin/srvctl modify database -d <db_name> -a <diskgroup_list> command to add the Oracle ASM disk groups as a dependency to the database(s).
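    For example, assuming the database is named oradb and depends on disk groups DATA and FRA (all placeholders):

    $ $GRID_HOME/bin/srvctl modify database -d oradb -a "DATA,FRA"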