About Flexible Storage Sharing

Flexible Storage Sharing (FSS) enables cluster-wide network sharing of local storage. The local storage can be in the form of Direct Attached Storage (DAS) or internal disk drives. Network shared storage is enabled by using a network interconnect between the nodes of a cluster.

FSS allows network shared storage to coexist with physically shared storage, and logical volumes can be created using both types of storage, creating a common storage namespace. Logical volumes that use network shared storage provide data redundancy, high availability, and disaster recovery capabilities without requiring physically shared storage, transparently to file systems and applications.
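
An exported local disk can then be used in a shared disk group much like a SAN disk. The following is a minimal sketch, using placeholder disk, disk group, and volume names (disk1, fssdg, vol1); the FSS disk group option may vary by release:

    # vxdisk export disk1
    # vxdg -s -o fss=on init fssdg disk1
    # vxassist -g fssdg make vol1 10g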

FSS is supported for CVM protocol versions 140 and above.
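
You can check the current cluster protocol version, and upgrade it after a cluster upgrade, with the vxdctl utility. A minimal sketch:

    # vxdctl protocolversion
    # vxdctl upgrade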

Figure: Flexible Storage Sharing Environment shows a Flexible Storage Sharing environment.

Figure: Flexible Storage Sharing Environment

Flexible Storage Sharing use cases

The following list describes several use cases for the FSS feature:

Use of local storage in current use cases

The FSS feature supports all current use cases of the Symantec Storage Foundation for Oracle RAC (SF Oracle RAC) stack without requiring SAN-based storage.

Off-host processing

Data Migration:

  • From shared (SAN) storage to network shared storage

  • From network shared storage to SAN storage

  • From storage connected to one node or cluster (DAS) to storage connected to a different node or cluster that does not share the storage (see the sketch after this list)
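
A typical migration attaches a mirror on the target storage and then removes the plex on the source storage. The following is a minimal sketch, using placeholder names for the volume (datavol), the target disks (disk3, disk4), and the source plex (datavol-01):

    # vxassist -g diskgroup mirror datavol disk3 disk4
    # vxplex -g diskgroup -o rm dis datavol-01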

Back-up/Snapshots:

An additional node can take a backup by joining the cluster and reading from volumes or snapshots hosted on the DAS or shared storage that is connected to one or more nodes of the cluster, but not to the node taking the backup.
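
For example, a volume can be prepared for instant snapshots and a full-sized snapshot taken, which the backup node can then read. A minimal sketch, using placeholder volume and snapshot names:

    # vxsnap -g diskgroup prepare datavol
    # vxsnap -g diskgroup make source=datavol/newvol=snapvol/nmirror=1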

DAS SSD benefits leveraged with existing SF Oracle RAC features

  • Mirroring across DAS SSDs connected to individual nodes of the cluster. DAS SSDs provide better performance than SAN storage (including SSDs). FSS provides a way to share these SSDs across the cluster (see the example after this list).

  • Keeping one mirror on the SSD and another on SAN storage provides faster read access due to the SSDs, and also provides high availability of data due to the SAN storage.

  • There are several best practices for using SSDs with Storage Foundation. All of these use cases are possible with SAN-attached SSDs in a clustered environment. With FSS, DAS SSDs can also be used for the same purposes.
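
For example, once the DAS SSDs on each node are exported for network sharing, a volume can be mirrored across them. A minimal sketch, using placeholder disk group and volume names:

    # vxassist -g fssdg make vol1 10g layout=mirror nmirror=2 mediatype:ssd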

FSS with SmartIO for file system caching

If the nodes in the cluster have internal SSDs as well as HDDs, the HDDs can be shared over the network using FSS. You can use SmartIO to set up a read/write-back cache using the SSDs. The read cache can service volumes created using the network-shared HDDs.
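
As a rough sketch, a cache area can be created on a local SSD with the SmartIO sfcache utility and then verified; the exact sfcache syntax depends on your release, and the device name here (ssd0_0) is a placeholder:

    # sfcache create ssd0_0
    # sfcache list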

Campus cluster configuration

Campus clusters can be set up without the need for Fibre Channel (FC) SAN connectivity between sites.
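
For example, each node can be tagged with a site name and site consistency enabled on the disk group so that mirrors are kept on separate sites. A hedged sketch, using placeholder site and disk group names:

    # vxdctl set site=site1
    # vxdg -g fssdg set siteconsistent=on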

See Administering Flexible Storage Sharing.

Limitations of Flexible Storage Sharing

Note the following limitations for using Flexible Storage Sharing (FSS):

  • FSS is only supported on clusters of up to 8 nodes.

  • Disk initialization operations should be performed only on nodes with local connectivity to the disk.

  • FSS does not support the use of boot disks, opaque disks, or non-VxVM disks for network sharing.

  • Hot-relocation is disabled on FSS disk groups.

  • The vxresize operation is not supported on volumes and file systems from the slave node.

  • FSS does not support non-SCSI3 disks connected to multiple hosts.

  • Dynamic LUN Expansion (DLE) is not supported.

  • FSS supports only instant data change object (DCO) logs, created either by using the vxsnap operation or by specifying the "logtype=dco dcoversion=20" attributes during volume creation.

  • By default, creating a mirror between an SSD and an HDD is not supported through vxassist, as the underlying media types are different. To work around this issue, you can create the volume with one media type, for instance the HDD, which is the default media type, and then add a mirror on the SSD.

    For example:

    # vxassist -g diskgroup make volume size init=none
    # vxassist -g diskgroup mirror volume mediatype:ssd
    # vxvol -g diskgroup init active volume
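
    In this sequence, init=none creates the volume without initializing its plexes, the second command attaches a mirror on SSD storage, and vxvol init active then marks the volume active. The diskgroup, volume, and size values are placeholders.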

    See Administering mirrored volumes using vxassist.