You can create the Oracle Clusterware and Oracle database home directories on either local storage or shared storage. This step is mandatory if you plan to place the directories on shared VxVM disks. If you plan to place the directories on storage local to each node, this step is optional: when the installer prompts for the home directories during the Oracle Clusterware and Oracle database installation, it creates the directories locally on each node if they do not exist.
Use one of the following options to create the directories:
Local storage | See “To create the file system and directories on local storage for Oracle Clusterware and Oracle RAC database”. |
Shared storage | See “To create the file system and directories on shared storage for Oracle Clusterware and Oracle database”. |
To create the file system and directories on local storage for Oracle Clusterware and Oracle RAC database
The sample commands in the procedure are for node galaxy. Repeat the steps on each node of the cluster.
As the root user, create a VxVM local disk group bindg_hostname on each node:
# vxdg init bindg_galaxy Disk_1
Create a volume binvol_hostname on each node:
# vxassist -g bindg_galaxy make binvol_galaxy 12G
Create a file system on the volume binvol_hostname on each node:
# mkfs -F vxfs /dev/vx/rdsk/bindg_galaxy/binvol_galaxy
Mount the file system (/app) on each node:
# mount -F vxfs /dev/vx/dsk/bindg_galaxy/binvol_galaxy /app
Create the following directories for Oracle RAC (ORACLE_BASE, CRS_HOME, ORACLE_HOME) on each node:
# mkdir -p /app/oracle
# mkdir -p /app/crshome
# mkdir -p /app/oracle/orahome
Change the ownership and permissions on each node:
# chown -R oracle:oinstall /app
# chmod -R 744 /app
Add an entry for the new file system in the /etc/vfstab file on each node, specifying "yes" in the mount-at-boot column:
#device                device                  mount  FS    fsck  mount    mount
#to mount              to fsck                 point  type  pass  at boot  options
#
/dev/vx/dsk/bindg_galaxy/binvol_galaxy /dev/vx/rdsk/bindg_galaxy/binvol_galaxy \
/app vxfs 1 yes -
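Because the disk group and volume names embed the node's host name, the per-node steps above can be sketched as one parameterized script. This is only a sketch: the emit_local_cmds helper is hypothetical, and the Disk_1 access name and 12G size are taken from the samples above and must be adjusted for your environment. The function prints the commands rather than running them, so you can review the output before executing it as root on each node.

```shell
#!/bin/sh
# Sketch: print the local-storage setup commands for one node.
# emit_local_cmds is a hypothetical helper; Disk_1 and 12G come from
# the samples above and are assumptions to adjust per environment.
emit_local_cmds() {
    host=$1          # node name, for example galaxy
    disk=$2          # disk access name, for example Disk_1
    dg="bindg_${host}"
    vol="binvol_${host}"
    echo "vxdg init ${dg} ${disk}"
    echo "vxassist -g ${dg} make ${vol} 12G"
    echo "mkfs -F vxfs /dev/vx/rdsk/${dg}/${vol}"
    echo "mount -F vxfs /dev/vx/dsk/${dg}/${vol} /app"
    echo "mkdir -p /app/oracle /app/crshome /app/oracle/orahome"
    echo "chown -R oracle:oinstall /app"
    echo "chmod -R 744 /app"
}

# Print the commands for node galaxy; pipe to sh as root to execute.
emit_local_cmds galaxy Disk_1
```

Remember to add the vfstab entry separately; the script only prints the setup commands for review.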
To create the file system and directories on shared storage for Oracle Clusterware and Oracle database
Perform the following steps on one of the nodes in the cluster.
As the root user, create a VxVM shared disk group bindg:
# vxdg -s init bindg Disk_1
Create separate volumes for Oracle Clusterware (crsbinvol) and Oracle database (orabinvol):
# vxassist -g bindg make crsbinvol 5G
# vxassist -g bindg make orabinvol 7G
Create the following directories for Oracle (ORACLE_BASE, CRS_HOME, ORACLE_HOME). The file system and directories created on shared storage in this procedure are based on the following layout:
# mkdir -p /app/oracle
# mkdir -p /app/crshome
# mkdir -p /app/oracle/orahome
Create file systems on the volumes crsbinvol and orabinvol:
# mkfs -F vxfs /dev/vx/rdsk/bindg/crsbinvol
# mkfs -F vxfs /dev/vx/rdsk/bindg/orabinvol
Mount the file systems. Perform this step on each node.
# mount -F vxfs -o cluster /dev/vx/dsk/bindg/crsbinvol \
/app/crshome
# mount -F vxfs -o cluster /dev/vx/dsk/bindg/orabinvol \
/app/oracle/orahome
Change the ownership and permissions on all nodes of the cluster.
# chown -R oracle:oinstall /app
# chmod -R 744 /app
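The one-time shared-storage steps above (shared disk group, volumes, directories, and file systems) can be sketched the same way. The emit_shared_cmds helper is hypothetical, and the Disk_1 access name and the 5G/7G sizes are assumptions taken from the samples above; the cluster mounts and ownership changes still run on every node.

```shell
#!/bin/sh
# Sketch: print the shared-storage setup commands (run once, on one node).
# Disk_1 and the 5G/7G sizes are assumptions from the samples above.
emit_shared_cmds() {
    dg=$1            # shared disk group name, for example bindg
    disk=$2          # disk access name, for example Disk_1
    echo "vxdg -s init ${dg} ${disk}"
    echo "vxassist -g ${dg} make crsbinvol 5G"
    echo "vxassist -g ${dg} make orabinvol 7G"
    echo "mkdir -p /app/oracle /app/crshome /app/oracle/orahome"
    echo "mkfs -F vxfs /dev/vx/rdsk/${dg}/crsbinvol"
    echo "mkfs -F vxfs /dev/vx/rdsk/${dg}/orabinvol"
}

# Print the commands for review; pipe to sh as root to execute.
emit_shared_cmds bindg Disk_1
```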
Add the CVMVolDg and CFSMount resources to the VCS configuration.
See “To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI”.
To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI
Make the VCS configuration writable:
# haconf -makerw
Configure the CVM volumes under VCS:
# hares -add crsorabin_voldg CVMVolDg cvm
# hares -modify crsorabin_voldg Critical 0
# hares -modify crsorabin_voldg CVMDiskGroup bindg
# hares -modify crsorabin_voldg CVMVolume -add crsbinvol
# hares -modify crsorabin_voldg CVMVolume -add orabinvol
# hares -modify crsorabin_voldg CVMActivation sw
Set up the file system under VCS:
# hares -add crsbin_mnt CFSMount cvm
# hares -modify crsbin_mnt Critical 0
# hares -modify crsbin_mnt MountPoint "/app/crshome"
# hares -modify crsbin_mnt BlockDevice \
"/dev/vx/dsk/bindg/crsbinvol"
# hares -add orabin_mnt CFSMount cvm
# hares -modify orabin_mnt Critical 0
# hares -modify orabin_mnt MountPoint "/app/oracle/orahome"
# hares -modify orabin_mnt BlockDevice \
"/dev/vx/dsk/bindg/orabinvol"
Link the parent and child resources:
# hares -link crsorabin_voldg cvm_clus
# hares -link crsbin_mnt crsorabin_voldg
# hares -link crsbin_mnt vxfsckd
# hares -link orabin_mnt crsorabin_voldg
# hares -link orabin_mnt vxfsckd
# hares -modify crsorabin_voldg Enabled 1
# hares -modify crsbin_mnt Enabled 1
# hares -modify orabin_mnt Enabled 1
# haconf -dump -makero
Verify the resource configuration in the main.cf file.
CFSMount crsbin_mnt (
    Critical = 0
    MountPoint = "/app/crshome"
    BlockDevice = "/dev/vx/dsk/bindg/crsbinvol"
    )

CFSMount orabin_mnt (
    Critical = 0
    MountPoint = "/app/oracle/orahome"
    BlockDevice = "/dev/vx/dsk/bindg/orabinvol"
    )

CVMVolDg crsorabin_voldg (
    Critical = 0
    CVMDiskGroup = bindg
    CVMVolume = { crsbinvol, orabinvol }
    CVMActivation = sw
    )

crsbin_mnt requires crsorabin_voldg
crsbin_mnt requires vxfsckd
orabin_mnt requires crsorabin_voldg
orabin_mnt requires vxfsckd
crsorabin_voldg requires cvm_clus
Verify that the resources are online on all systems in the cluster.
# hares -state crsorabin_voldg
# hares -state crsbin_mnt
# hares -state orabin_mnt
Note: At this point, the crsorabin_voldg resource is reported offline even though the underlying volumes are online. You must therefore bring the resource online manually on each node.
To bring the resource online manually:
# hares -online crsorabin_voldg -sys galaxy
# hares -online crsorabin_voldg -sys nebula
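If the cluster has more nodes than the two in this sample, the manual online step can be looped over the node list. This is a sketch: online_voldg is a hypothetical helper, and the node names galaxy and nebula are the sample names from this procedure; substitute your own. The function prints the commands rather than running them.

```shell
#!/bin/sh
# Sketch: print the hares -online command for each node in the cluster.
# The node list passed in is an assumption; use your cluster's node names.
online_voldg() {
    for node in "$@"; do
        echo "hares -online crsorabin_voldg -sys ${node}"
    done
}

# Pipe the output to sh as root to bring the resource online everywhere,
# then confirm the states with: hares -state crsorabin_voldg
online_voldg galaxy nebula
```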