Use the following procedure to perform a planned replacement of customized coordination points (CP servers or SCSI-3 disks) without incurring application downtime on an online client cluster.
Note: If multiple clusters share the same CP server, you must perform this replacement procedure in each cluster.
You can use the vxfenswap utility to replace coordination points when fencing is running in customized mode (vxfen_mechanism=cps) in an online cluster. The utility does not support migration from server-based fencing (vxfen_mode=customized) to disk-based fencing (vxfen_mode=scsi3), or vice versa, in an online cluster.
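For reference, server-based (customized) fencing corresponds to the following two entries in the /etc/vxfenmode file. This is a minimal sketch; the remaining entries in the file depend on your configuration:

vxfen_mode=customized
vxfen_mechanism=cps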
However, in a cluster that is offline, you can migrate from disk-based fencing to server-based fencing and vice versa:
- To migrate from disk-based fencing to server-based fencing, perform the tasks on the CP server and on the VCS cluster nodes as described in the "Enable fencing in a client cluster with a new CP server" scenario.
- To migrate from server-based fencing to disk-based fencing, perform the tasks on the VCS cluster nodes as described in the "Enable fencing in a client cluster with a new CP server" scenario.
See "Deployment and migration scenarios for CP server."
You can cancel the coordination point replacement operation at any time using the vxfenswap -a cancel command.
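For example, to abort a replacement operation that is in progress, run the following from one of the cluster nodes (a sketch based on the syntax given above):

# vxfenswap -a cancel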
See "About the vxfenswap utility."
To replace coordination points for an online cluster
1. Ensure that the VCS cluster nodes and users have been added to the new CP server(s). You can check this by running the following commands:
# cpsadm -s cpserver -a list_nodes
# cpsadm -s cpserver -a list_users
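For example, with a hypothetical CP server host named cps1.example.com (substitute the virtual IP address or fully qualified host name of your own CP server):

# cpsadm -s cps1.example.com -a list_nodes
# cpsadm -s cps1.example.com -a list_users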
If the client cluster nodes are not present here, prepare the new CP server(s) for use by the client cluster.
2. Ensure that fencing is running on the cluster in customized mode using the old set of coordination points. For example, enter the following command:
# vxfenadm -d
The output begins as follows:
I/O Fencing Cluster Information:
================================
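The remainder of the output varies by release, but for server-based fencing it should report the customized mode and the cps mechanism, along the lines of this illustrative sketch:

Fencing Mode: Customized
Fencing Mechanism: cps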
3. Update the values in the /etc/vxfenmode file on all the nodes in the VCS cluster. Review and, if necessary, update the vxfenmode parameters for security, for the coordination points, and, if applicable to your configuration, for vxfendg. Refer to the comments within the vxfenmode file for additional information about these parameters and their possible new values.
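As an illustration, the coordination point entries for a configuration with three CP servers might look like the following. The host names, port, and security value shown here are assumptions; adapt them to your environment:

security=1
cps1=[cps1.example.com]:14250
cps2=[cps2.example.com]:14250
cps3=[cps3.example.com]:14250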
4. Run the vxfenswap utility from one of the nodes of the cluster. The vxfenswap utility requires a secure ssh connection to all the cluster nodes.
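For example, run the utility without arguments so that it picks up the new coordination points from the updated /etc/vxfenmode file (a sketch; verify the available options against the vxfenswap manual page for your release):

# vxfenswap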