Creating Oracle Clusterware and Oracle database home directories manually

You can create the Oracle Clusterware and Oracle database home directories on either local storage or shared storage. This step is mandatory if you plan to place the directories on shared VxVM disks. If you plan to place the directories on storage that is local to each node, this step is optional: when the installer prompts for the home directories during the Oracle Clusterware and Oracle database installation, it creates the directories locally on each node if they do not already exist.

Use one of the following options to create the directories:

Local storage

See “To create the file system and directories on local storage for Oracle Clusterware and Oracle RAC database”.

Shared storage

See “To create the file system and directories on shared storage for Oracle Clusterware and Oracle database”.

To create the file system and directories on local storage for Oracle Clusterware and Oracle RAC database

The sample commands in the procedure are for node galaxy. Repeat the steps on each node of the cluster.

  1. As the root user, create a local VxVM disk group bindg_hostname on each node:

    # vxdg init bindg_galaxy Disk_1
  2. Create a volume binvol_hostname on each node:

    # vxassist -g bindg_galaxy make binvol_galaxy 12G
  3. Create a file system on the volume binvol_hostname on each node:

    # mkfs -F vxfs /dev/vx/rdsk/bindg_galaxy/binvol_galaxy
  4. Mount the file system at /app on each node:

    # mount -F vxfs /dev/vx/dsk/bindg_galaxy/binvol_galaxy \
    /app
  5. Create the following directories for Oracle RAC (ORACLE_BASE, CRS_HOME, ORACLE_HOME) on each node:

    # mkdir -p /app/oracle
    # mkdir -p /app/crshome
    # mkdir -p /app/oracle/orahome
  6. Change the ownership and permissions on each node:

    # chown -R oracle:oinstall /app
    # chmod -R 744 /app
  7. Add an entry for the file system to the /etc/vfstab file on each node:

    Edit the /etc/vfstab file, add an entry for the new file system, and specify "yes" in the mount-at-boot column on each node:

    #device                 device                  mount  FS    fsck  mount    mount
    #to mount               to fsck                 point  type  pass  at boot  options
    #
    .
    /dev/vx/dsk/bindg_galaxy/binvol_galaxy \
    /dev/vx/rdsk/bindg_galaxy/binvol_galaxy \
    /app vxfs 1 yes -
  8. Repeat all the steps on each node of the cluster.
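
Optionally, before you proceed, you can verify the local disk group, volume, and mount on the node. The following is a minimal check and is not part of the procedure itself; it uses the galaxy names from the sample commands above, so adjust the names for each node:

    # vxdg list bindg_galaxy
    # vxprint -g bindg_galaxy binvol_galaxy
    # df -k /app

The vxdg list and vxprint commands confirm that the disk group and volume exist, and df -k confirms that the file system is mounted at /app.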

To create the file system and directories on shared storage for Oracle Clusterware and Oracle database

Unless a step indicates otherwise, perform the following steps on one node in the cluster.

  1. As the root user, create a VxVM shared disk group bindg:

    # vxdg -s init bindg Disk_1
  2. Create separate volumes for Oracle Clusterware (crsbinvol) and Oracle database (orabinvol):

    # vxassist -g bindg make crsbinvol 5G
    # vxassist -g bindg make orabinvol 7G
  3. Create the following directories for Oracle RAC (ORACLE_BASE, CRS_HOME, ORACLE_HOME).

    The file system and directories created on shared storage in this procedure are based on the following layout:

    $ORACLE_BASE is /app/oracle. Both /app and /app/oracle are on local storage.

    $CRS_HOME is /app/crshome. /app is on local storage; /app/crshome is on shared storage.

    $ORACLE_HOME is /app/oracle/orahome. /app/oracle is on local storage; /app/oracle/orahome is on shared storage.

    # mkdir -p /app/oracle
    # mkdir -p /app/crshome
    # mkdir -p /app/oracle/orahome
  4. Create file systems on the volumes crsbinvol and orabinvol:

    # mkfs -F vxfs /dev/vx/rdsk/bindg/crsbinvol
    # mkfs -F vxfs /dev/vx/rdsk/bindg/orabinvol
  5. Mount the file systems. Perform this step on each node.

    # mount -F vxfs -o cluster /dev/vx/dsk/bindg/crsbinvol \
    /app/crshome
    # mount -F vxfs -o cluster /dev/vx/dsk/bindg/orabinvol \
    /app/oracle/orahome
  6. Change the ownership and permissions on all nodes of the cluster.

    Note:

    You must change the ownership and permissions on all nodes of the cluster: /app/oracle must be owned by oracle:oinstall on every node; otherwise, /app/oracle/oraInventory is not created correctly on all the nodes, which can cause the Oracle Universal Installer to fail.

    # chown -R oracle:oinstall /app
    # chmod -R 744 /app
  7. Add the CVMVolDg and CFSMount resources to the VCS configuration.

    See “To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI”.
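
Before you add the resources to the VCS configuration, you can optionally confirm that the shared disk group is imported and that the file systems are mounted and owned correctly on each node. The following is a minimal check and is not part of the procedure itself; it uses the names from the steps above. Note that shared disk group operations such as vxdg -s init are typically run from the CVM master node, which vxdctl -c mode reports:

    # vxdctl -c mode
    # vxdg list bindg
    # df -k /app/crshome /app/oracle/orahome
    # ls -ld /app/crshome /app/oracle/orahome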

To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI

  1. Change the VCS configuration to read-write mode:

    # haconf -makerw
  2. Configure the CVM volumes under VCS:

    # hares -add crsorabin_voldg CVMVolDg cvm
    # hares -modify crsorabin_voldg Critical 0
    # hares -modify crsorabin_voldg CVMDiskGroup bindg
    # hares -modify crsorabin_voldg CVMVolume -add crsbinvol
    # hares -modify crsorabin_voldg CVMVolume -add orabinvol
    # hares -modify crsorabin_voldg CVMActivation sw
  3. Set up the file systems under VCS:

    # hares -add crsbin_mnt CFSMount cvm
    # hares -modify crsbin_mnt Critical 0
    # hares -modify crsbin_mnt MountPoint "/app/crshome"
    # hares -modify crsbin_mnt BlockDevice \
    "/dev/vx/dsk/bindg/crsbinvol"
    # hares -add orabin_mnt CFSMount cvm
    # hares -modify orabin_mnt Critical 0
    # hares -modify orabin_mnt MountPoint "/app/oracle/orahome"
    # hares -modify orabin_mnt BlockDevice \
    "/dev/vx/dsk/bindg/orabinvol"
  4. Link the parent and child resources:

    # hares -link crsorabin_voldg cvm_clus
    # hares -link crsbin_mnt crsorabin_voldg
    # hares -link crsbin_mnt vxfsckd
    # hares -link orabin_mnt crsorabin_voldg
    # hares -link orabin_mnt vxfsckd
  5. Enable the resources:

    # hares -modify crsorabin_voldg Enabled 1
    # hares -modify crsbin_mnt Enabled 1
    # hares -modify orabin_mnt Enabled 1
    # haconf -dump -makero
  6. Verify the resource configuration in the main.cf file.

    CFSMount crsbin_mnt (
        Critical = 0
        MountPoint = "/app/crshome"
        BlockDevice = "/dev/vx/dsk/bindg/crsbinvol"
        )

    CFSMount orabin_mnt (
        Critical = 0
        MountPoint = "/app/oracle/orahome"
        BlockDevice = "/dev/vx/dsk/bindg/orabinvol"
        )

    CVMVolDg crsorabin_voldg (
        Critical = 0
        CVMDiskGroup = bindg
        CVMVolume = { crsbinvol, orabinvol }
        CVMActivation = sw
        )

    crsbin_mnt requires crsorabin_voldg
    crsbin_mnt requires vxfsckd
    orabin_mnt requires crsorabin_voldg
    orabin_mnt requires vxfsckd
    crsorabin_voldg requires cvm_clus

  7. Verify that the resources are online on all systems in the cluster.

    # hares -state crsorabin_voldg
    # hares -state crsbin_mnt
    # hares -state orabin_mnt

    Note:

    At this point, the crsorabin_voldg resource is reported offline, and the underlying volumes are online. Therefore, you need to manually bring the resource online on each node.

    To bring the resource online manually:

    # hares -online crsorabin_voldg -sys galaxy
    # hares -online crsorabin_voldg -sys nebula
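
After the resource is online on each node, you can optionally re-check its state and view a cluster-wide summary. This check is a suggestion rather than part of the procedure; hastatus -sum displays a summary of the cluster status, including the state of each system and service group:

    # hares -state crsorabin_voldg
    # hastatus -sum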