Set up replication according to the following best practices:
Create one RVG for each application, rather than for each server. For example, if a server is running three separate databases that are being replicated, create one RVG for each database, for a total of three RVGs. Creating three separate RVGs helps to avoid write-order dependency between the applications and provides three separate SRLs for maximum performance per application.
Create one RVG per disk group. Creating one RVG per disk group enables you to efficiently implement application clustering for high availability, where only one RVG needs to be failed over by the service group. If the disk group contains more than one RVG, the applications using the other RVGs would have to be stopped to facilitate the failover. You can use the Disk Group Split feature to migrate application volumes to their own disk groups before associating the volumes to the RVG.
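As an illustrative sketch only, migrating an application's volumes into their own disk group with Disk Group Split and then creating the Primary RVG might look like the following. All disk group, volume, and RVG names (maindg, hrdg, hr_dv01, hr_dv02, hr_srl, hr_rvg) are hypothetical; substitute your own.

```shell
# Split the application's data volumes and its SRL out of the
# shared disk group into a dedicated disk group (hypothetical
# names: source "maindg", target "hrdg").
vxdg split maindg hrdg hr_dv01 hr_dv02 hr_srl

# Create one Primary RVG in that disk group, associating the
# application's data volumes and its dedicated SRL.
vradmin -g hrdg createpri hr_rvg hr_dv01,hr_dv02 hr_srl
```

Because the disk group now contains exactly one RVG, a cluster service group can fail over that RVG without stopping any other application.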
Plan the size and layout of the data volumes based on the requirements of your application.
Plan the bandwidth of the network between the Primary and each Secondary host.
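As a rough rule of thumb, the sustained replication bandwidth should at least match the application's peak write rate, with headroom for protocol overhead and bursts. A minimal sketch of that arithmetic; the 5 MB/s write rate and 30% headroom figure are hypothetical values, not measurements:

```shell
# Hypothetical peak application write rate, in MB/s (for example,
# as observed with vxstat over a representative period).
peak_mb_s=5

# Convert MB/s to Mbps (x8) and add 30% headroom for overhead.
awk -v r="$peak_mb_s" 'BEGIN { printf "%.1f Mbps\n", r * 8 * 1.3 }'
```

If the link cannot sustain the peak write rate, the SRL fills during bursts and must drain afterward, which lengthens the window during which the Secondary is behind the Primary.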
Lay out the SRL appropriately to support the performance characteristics needed by the application. Because all writes to the data volumes in an RVG are first written to the SRL, the total write performance of an RVG is bound by the total write performance of the SRL. For example, dedicate separate disks, and if possible separate controllers, to the SRL.
Size the SRL appropriately to avoid overflow.
See Sizing the SRL.
The Volume Replicator Advisor (VRAdvisor), a tool to collect and analyze samples of data, can help you determine the optimal size of the SRL.
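Alongside VRAdvisor, a back-of-the-envelope estimate is often useful: the SRL must be able to absorb all application writes for the longest outage (Secondary down or network down) you intend to ride out, plus a safety margin against overflow. A minimal sketch; the 4 MB/s write rate, 4-hour outage window, and 20% margin are hypothetical planning inputs:

```shell
# Hypothetical inputs: peak write rate in MB/s and the longest
# Secondary/network outage to tolerate without SRL overflow.
peak_mb_s=4
outage_secs=$((4 * 3600))

# SRL size = rate x outage duration x 1.2 safety margin, in GB.
awk -v r="$peak_mb_s" -v t="$outage_secs" \
    'BEGIN { printf "%.1f GB\n", r * t * 1.2 / 1024 }'
```

VRAdvisor refines this by replaying actual collected write samples instead of a single assumed peak rate.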
Include all the data volumes used by the application in the same RVG. This is mandatory.
Provide dedicated bandwidth for VVR over a separate network. The RLINK replicates data critical to the survival of the business. Compromising the RLINK compromises the business recovery plan.
Use the same names for the data volumes on the Primary and Secondary nodes. If the data volumes on the Primary and Secondary have different names, you must map the name of the Secondary data volume to the appropriate Primary data volume.
See Mapping the name of a Secondary data volume to a differently named Primary data volume.
Use the same name and size for the SRLs on the Primary and Secondary nodes because the Secondary SRL becomes the Primary SRL when the Primary role is transferred.
Mirror all data volumes and SRLs. This is optional if you use hardware-based mirroring.
The vradmin utility creates RVGs on the Secondary with the same names as the corresponding RVGs on the Primary. If you choose to use the vxmake command to create RVGs, use the same names for corresponding RVGs on the Primary and Secondary nodes.
Associate a DCM with each data volume on the Primary and the Secondary if the DCMs have been removed for any reason. By default, the vradmin createpri and vradmin addsec commands add DCMs if they do not exist.
If you are setting up replication in a shared environment, first determine which node performs the most writes by running the vxstat command on each node for a suitable period of time. After you set up replication, specify that node as the logowner. Note that the logowner is not supported as Secondary.
In a shared disk group environment, the cluster master node is selected as the logowner by default.
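A hedged sketch of the sampling step described above; the disk group name shareddg, the 30-second interval, and the sample count are hypothetical and should be tuned to cover a representative workload period:

```shell
# Run on each node of the shared disk group: report volume I/O
# statistics every 30 seconds, 10 times.
vxstat -g shareddg -i 30 -c 10
```

Compare the write operation counts reported on each node; the node with the most writes is the best candidate for logowner, because writes issued on the logowner avoid the extra inter-node messaging that writes from other nodes require.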
Do not use the host's on-board write cache with VVR. The application must also store data to disk rather than maintaining it in memory. The takeover system, whether a peer Primary node in a clustered configuration or the Secondary site, must be able to access all required information. This requirement precludes keeping anything inside a single system where the peer cannot reach it. NVRAM accelerator boards and other disk caching mechanisms are acceptable for performance, but they must reside on the external array, not on the local host.