When the CFS primary node fails

If the server on which the Cluster File System (CFS) primary node is running fails, the remaining cluster nodes elect a new primary node. The new primary node reads the file system intent log and completes any metadata updates that were in process at the time of the failure. Application I/O from other nodes may block during this process and cause a delay. When the file system is again consistent, application processing resumes.
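To confirm which node currently holds the CFS primary role, for example to verify which node took over after a failover, the fsclustadm command can be used. The mount point /mnt/cfs1 below is a placeholder; substitute the mount point of your cluster-mounted file system:

    # fsclustadm -v showprimary /mnt/cfs1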

Because nodes that mount a cluster file system as secondaries do not update file system metadata directly, failure of a secondary node does not require metadata repair. CFS recovery from a secondary node failure is therefore faster than recovery from a primary node failure.

See the fsclustadm(1M) manual page for more information.
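As a related sketch, the primary role can also be moved deliberately, for example to make the local node the primary before the current primary is taken down for planned maintenance. The mount point /mnt/cfs1 is again a placeholder, and the exact options are described in the fsclustadm(1M) manual page:

    # fsclustadm -v setprimary /mnt/cfs1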