**Agent**: A process that starts, stops, and monitors all configured resources of a type, and reports their status to VCS.

**Active/Active Configuration**: A failover configuration in which each system runs a service group. If either system fails, the other takes over and runs both service groups. Also known as a symmetric configuration.

**Active/Passive Configuration**: A failover configuration consisting of one service group on a primary system and one dedicated backup system. Also known as an asymmetric configuration.
**Cluster**: One or more computers linked together for the purpose of multiprocessing and high availability. The term is used synonymously with VCS cluster, meaning one or more computers that are part of the same GAB membership.

**Cluster Manager (Java Console)**: A Java-based graphical user interface for managing VCS clusters. It provides complete administration capabilities for a cluster and can run on any system inside or outside the cluster, on any operating system that supports Java.

**Cluster Manager (Web Console)**: A Web-based graphical user interface for monitoring and administering the cluster.

**Disaster Recovery**: Administrators with clusters in physically disparate areas can set the policy for migrating applications from one location to another if clusters in one geographic area become unavailable due to an unforeseen event. Disaster recovery requires heartbeating and replication.

**gabdisk**: A mechanism that improves cluster resiliency by enabling a heartbeat to be placed on a physical disk shared by all systems in the cluster.

**Failover**: A failover occurs when a service group faults and is migrated to another system.

**GAB**: Group Atomic Broadcast (GAB) is a communication mechanism of the VCS engine that manages cluster membership, monitors heartbeat communication, and distributes information throughout the cluster.

**Global Service Group**: A VCS service group that spans two or more clusters. The ClusterList attribute for the group contains the list of clusters over which the group spans.

**hashadow Process**: A process that monitors and, when required, restarts had.

**high availability daemon (had)**: The core VCS process that runs on each system. The had process maintains and communicates information about the resources running on the local system and receives information about resources running on other systems in the cluster.

**Jeopardy**: A node is in jeopardy when it is missing one of the two required heartbeat connections. When a node is running with only one heartbeat (in jeopardy), VCS does not restart its applications on a new node. Disabling failover in this way is a safety mechanism that prevents data corruption.

**LLT**: Low Latency Transport (LLT) is a communication mechanism of the VCS engine that provides kernel-to-kernel communications and monitors network communications.

**main.cf**: The file in which the cluster configuration is stored.
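To make the format concrete, here is a minimal sketch of a main.cf fragment. The cluster, system, group, and resource names (demo, sysa, sysb, websg, webip) and the attribute values are hypothetical, not taken from this guide:

```
include "types.cf"

cluster demo (
        )

system sysa (
        )

system sysb (
        )

group websg (
        SystemList = { sysa = 0, sysb = 1 }
        AutoStartList = { sysa }
        )

        IP webip (
                Device = lan0
                Address = "192.168.1.10"
                )
```

Each system is declared once; each service group names the systems it can run on (SystemList), followed by its resources as typed definitions with attribute values.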
**Monitor Program**: The Monitor Program informs the application agent whether the application process is online or offline, and whether it is properly returning service requests.

**Network Partition**: If all network connections between any two groups of systems fail simultaneously, a network partition occurs. When this happens, systems on both sides of the partition can restart applications from the other side, resulting in duplicate services, or "split-brain." A split-brain occurs when two independent systems configured in a cluster each assume they have exclusive access to a given resource (usually a file system or volume). The most serious problem caused by a network partition is that it affects the data on shared disks. See Jeopardy and Seeding.

**Node**: The physical host or system on which applications and service groups reside. When systems are linked by VCS, they become nodes in a cluster.

**N-to-1**: An N-to-1 configuration is based on the concept that multiple, simultaneous server failures are unlikely; therefore, a single backup server can protect multiple active servers. When a server fails, its applications move to the backup server. For example, in a 4-to-1 configuration, one server can protect four servers, which reduces redundancy cost at the server level from 100 percent to 25 percent.

**N-to-N**: N-to-N refers to multiple service groups running on multiple servers, with each service group capable of being failed over to different servers in the cluster. For example, consider a four-node cluster with each node supporting three critical database instances. If any node fails, each instance is started on a different node, ensuring no single node becomes overloaded.

**N-to-M**: N-to-M (or Any-to-Any) refers to multiple service groups running on multiple servers, with each service group capable of being failed over to different servers in the same cluster, and also to different servers in a linked cluster. For example, consider a four-node cluster with each node supporting three critical database instances and a linked two-node back-up cluster. If all nodes in the four-node cluster fail, each instance is started on a node in the linked back-up cluster.

**Replication**: Replication is the synchronization of data between systems where shared storage is not feasible. The systems that are copied may be in local backup clusters or remote failover sites. The major advantage of replication, when compared to traditional backup methods, is that current data is continuously available.

**Resources**: Individual components that work together to provide application services to the public network. A resource may be a physical component such as a disk or network interface card, a software component such as Oracle8i or a Web server, or a configuration component such as an IP address or mounted file system.

**Resource Dependency**: A dependency between resources is indicated by the keyword "requires" between two resource names. This indicates that the second resource (the child) must be online before the first resource (the parent) can be brought online. Conversely, the parent must be taken offline before the child can be taken offline. Also, faults of the children are propagated to the parent.
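As a main.cf sketch (resource names and attribute values hypothetical), a Mount resource acting as the parent can require a DiskGroup child, so the disk group is imported before the file system is mounted:

```
DiskGroup oradg (
        DiskGroup = oradatadg
        )

Mount oramnt (
        MountPoint = "/oradata"
        BlockDevice = "/dev/vx/dsk/oradatadg/oravol"
        FSType = vxfs
        FsckOpt = "-y"
        )

oramnt requires oradg
```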
**Resource Types**: Each resource in a cluster is identified by a unique name and classified according to its type. VCS includes a set of predefined resource types for storage, networking, and application services.

**Seeding**: Seeding is used to protect a cluster from a preexisting network partition. By default, when a system comes up, it is not seeded. Systems can be seeded automatically or manually. Only systems that have been seeded can run VCS. Systems are seeded automatically only when an unseeded system communicates with a seeded system, or when all systems in the cluster are unseeded and able to communicate with each other. See Network Partition.
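As a sketch of how seeding is typically controlled (the two-system count and paths are illustrative), automatic seeding is driven by the gabconfig line in /etc/gabtab, and an administrator can seed a node manually to override that count when the missing systems are known to be down rather than partitioned:

```
# /etc/gabtab -- start GAB; seed automatically once 2 systems are communicating
/sbin/gabconfig -c -n 2

# Manual seed, run by an administrator; use only when the absent
# systems are verified to be down, not merely unreachable.
/sbin/gabconfig -c -x
```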
**Service Group**: A service group is a collection of resources working together to provide application services to clients. It typically includes multiple hardware- and software-based resources working together to provide a single service.

**Service Group Dependency**: A mechanism by which two service groups can be linked by a dependency rule, similar to the way resources are linked.
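As a sketch (group names hypothetical), a group dependency appears in main.cf as a "requires group" rule within the parent group's definition; "online local firm" is one of several dependency types:

```
group appsg (
        SystemList = { sysa = 0, sysb = 1 }
        )

        requires group dbsg online local firm
```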
**Shared Storage**: Storage devices that are connected to and used by two or more systems.

**SNMP**: Simple Network Management Protocol (SNMP), a protocol developed to manage nodes on an IP network.

**State**: The current activity status of a resource, group, or system. Resource states are given relative to both systems.

**System**: The physical system on which applications and service groups reside. When a system is linked by VCS, it becomes a node in a cluster. See Node.

**types.cf**: The types.cf file describes standard resource types to the VCS engine; specifically, the data required to control a specific resource.
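As a minimal sketch, the types.cf entry for a simple bundled type such as FileOnOff declares the attributes the agent uses and the argument list passed to its entry points (details vary by release, so treat this as illustrative):

```
type FileOnOff (
        static str ArgList[] = { PathName }
        str PathName
        )
```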
**Virtual IP Address**: A unique IP address associated with the cluster. It may be brought up on any system in the cluster, along with the other resources of the service group. This address, also known as the IP alias, should not be confused with the base IP address, which is the IP address that corresponds to the host name of a system.
Symbols
.rhosts, editing to remove remsh permissions
A
adding a node
  procedure for Oracle 10g
agents
  CFSMount
  CVMCluster
  CVMVolDg
  CVMVxconfigd
attributes
  CFSMount agent
  CVMCluster agent
  CVMVolDg agent
  CVMVxconfigd
  for CVM and Oracle
  UseFence
C
centralized cluster management
CFSMount agent
  description
clone databases, creating
Cluster File System (CFS)
  creating databases on
  overview
cluster management
cluster nodes
  adding
  removing
Cluster Volume Manager (CVM)
  overview
commands
  dbed_analyzer
  dbed_ckptcreate
  dbed_ckptdisplay
  dbed_ckptremove
  dbed_ckptrollback
  dbed_ckptumount
  dbed_clonedb
  dbed_update
  format
  gcoconfig
  vcsmmconfig
  vradmin
  vxassist
  vxdctl enable
  vxdg list
  vxedit
  vxedit (set shared volume mode)
  vxfen start
  vxfenadm
  vxfenclearpre
  vxprint
  vxstorage_stats
  vxvol
communications
  GAB
configurations
  backing up main.cf for SF Oracle RAC upgrade
  copying main.cf for SF Oracle RAC upgrade
  editing for removed nodes
  modifying
  of service groups
configuring CVM
configuring CVM and Oracle service groups manually
configuring Oracle
  modifying the VCS configuration
configuring Oracle 10g
  checking the configuration
configuring VCS
  Cluster Connector
  Cluster Management Console
coordinator disks
  defined
  for I/O fencing
  setting up
cron
  scheduling Storage Checkpoints
crontab file
CSSD agent
CVMCluster agent
  description
  type definition
CVMTypes.cf file
CVMVolDg agent
  description
CVMVxconfigd agent
  description
D
data disks
  for I/O fencing
database
  creating
database instance
  stopping
databases
  creating
  creating for Oracle9i
  upgrading
dbca
  creating databases on CFS
  creating databases on raw volumes
  description
dbed_analyzer command
dbed_ckptcreate command
dbed_ckptdisplay command
dbed_ckptremove command
dbed_ckptrollback command
dbed_ckptumount command
dbed_clonedb command
dbed_update command
disk groups
  overview
disk groups and volumes
disk space
  required for SF Oracle RAC
disks
  adding and initializing
  coordinator
  testing with vxfentsthdw
  verifying node access
drivers
  tunable parameters
E
enterprise agent for Oracle
environment variables
  MANPATH
  PATH
error messages
  LMX
  node ejection
  VXFEN
  VxVM errors related to I/O fencing
F
file
  Oracle agent log file location
format command
G
GAB
  overview
  port memberships
gcoconfig command
getcomms
  troubleshooting SF Oracle RAC
getdbac
  troubleshooting SF Oracle RAC
Global Cluster Option (GCO)
  overview
global clustering
  adding VVR types to VCS configuration
  configuring GCO
  configuring VCS to replicate database volumes
  illustration of dependencies
  in SF Oracle RAC environment
  migration and takeover
  setting up replication
gsd
  stopping
H
hagetcf
  troubleshooting SF Oracle RAC
I
I/O fencing
  event scenarios
  operations
  overview
  setting up
  starting
  testing and scenarios
installation
  of Oracle9i
  of SF Oracle RAC
  utilities
installing
  Root Broker
installing Oracle 10g
installing SF Oracle RAC
  removing temporary remsh access permissions
IP address
  configuring public virtual IP addresses
L
licenses
  obtaining
  removing keys
Listener
  description
LMX
  error messages
  tunable parameters
logs
  for VCS and Oracle agents
M
main.cf
  after SF Oracle RAC and Oracle installation
  after SF Oracle RAC installation
managing clusters, centrally
MANPATH environment variable
migrating Oracle
N
nodes
  adding
  removing
O
OCR
  creating directories on CFS
  creating volumes on raw volumes
operating systems
  supported
Oracle
  adding a node for Oracle 10g
  adding or removing a node for Oracle 10g
  agent log file location
  applying patchsets
  configuring virtual IP addresses
  creating volumes for OCR and Vote-disk for Oracle 10g
  migrating
  removing a node for Oracle 10g
  supported versions
  upgrading
Oracle 10g
  editing the CVM group
Oracle Disk Manager (ODM)
  disabling library
  overview
Oracle Enterprise Manager
Oracle instance
  definition
Oracle service group
  configuring
  creating
Oracle9i
  creating databases
  creating storage location for SRVM
  disabling ODM library
  installing Release 2
  overview of installation
  upgrading databases
P
patches
  for HP
PATH environment variable
preparing to install Oracle 10g
PrivNIC agent
R
registrations
  for I/O fencing
  key formatting
removing a node
  from a cluster
remsh permissions
  removing
reservations
  description
Root Broker
  installing
S
scheduling Storage Checkpoints
SCSI-3 persistent reservations
  verifying
service groups
  configuring for Oracle and CVM
  configuring for SF Oracle RAC
  Oracle
  overview of CVM
  using configuration wizard
SF Oracle RAC
  adding nodes
  configuring
  configuring GCO
  configuring VCS service groups
  coordinator disks
  disabling ODM library
  error messages
  high-level view
  I/O fencing
  information required during installation
  overview of components
  overview of installation methods
  phases of installation and configuration
  removing nodes
  sample configuration files
  shared storage
  Storage Checkpoints
  Storage Mapping
  Storage Rollback
  troubleshooting
  tunable parameters of kernel drivers
  upgrading from 3.5 to 4.1
  using Storage Checkpoints
  using uninstallsfrac
SFRAC
  adding VVR types to VCS configuration
  configuring VCS to replicate database volumes
  illustration of dependencies
  migration and takeover
  setting up replication
split brain
  description
SRVM
  creating storage location
storage
  for I/O fencing
Storage Checkpoints
  backing up and recovering databases
  creating
  description
  determining space requirements
  displaying
  performance
  removing
  scheduling
  unmounting
  using the CLI
  verifying
Storage Mapping
  configuring arrays
  dbed_analyzer command
  description
  displaying information for a list of tablespaces
  enabling Oracle file mapping
  Oracle Enterprise Manager
  ORAMAP
  using the vxstorage_stats command
  verifying feature setup
  verifying Oracle file mapping setup
  views
Storage Rollback
  description
  guidelines for recovery
Symantec Product Authentication Service
T
troubleshooting
U
uninstalling DBE/AC
uninstalling SF Oracle RAC
uninstallsfrac
  procedures
upgrade
  disk layout
  operating system
  SF Oracle RAC
V
VCS (Veritas Cluster Server)
  agent log file location
VCSIPC
  errors in trace/log files
  overview
  warnings in trace files
vcsmmconfig command
Veritas Volume Replicator (VVR)
  overview
volumes
  creating databases on
  overview
Vote-disk
  creating directories on CFS
  creating volumes on raw volumes
vradmin command
vxassist command
vxdctl command
vxdg list command
vxedit command
VXFEN
  informational messages
  tunable parameters
vxfen command
vxfenadm command
vxfenclearpre command
vxfentsthdw utility
vxprint command
vxstorage_stats command
VxVM
  error messages related to I/O fencing
vxvol command