About Flexible Storage Sharing

Flexible Storage Sharing (FSS) enables cluster-wide network sharing of local storage. The local storage can be in the form of Direct Attached Storage (DAS) or internal disk drives. Network shared storage is enabled by using a network interconnect between the nodes of a cluster.

FSS allows network shared storage to coexist with physically shared storage; logical volumes can be created using both types of storage, forming a common storage namespace. Logical volumes that use network shared storage provide data redundancy, high availability, and disaster recovery capabilities without requiring physically shared storage, and they do so transparently to file systems and applications.
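
For example, a DAS disk that is local to one node can be exported for network sharing and then added to a shared disk group that spans the cluster. The following is a minimal sketch; the disk name disk1 and the disk group name fssdg are placeholders, and the fss=on disk group option is assumed to be available in your release:

    # vxdisk export disk1
    # vxdg -s -o fss=on init fssdg disk1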

FSS can be used with SmartIO technology for remote caching to service nodes that may not have local SSDs.

FSS is supported for CVM protocol versions 140 and above.
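
You can verify the cluster protocol version, and upgrade it if required, from the CVM master node. The following is a minimal sketch; whether an upgrade is needed depends on your installation:

    # vxdctl protocolversion
    # vxdctl upgrade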

Figure: Flexible Storage Sharing Environment shows a Flexible Storage Sharing environment.

Figure: Flexible Storage Sharing Environment

Flexible Storage Sharing use cases

The following list describes several use cases for which you may want to use the FSS feature:

Use of local storage in current use cases

The FSS feature supports all current use cases of the Storage Foundation for Oracle RAC (SF Oracle RAC) stack without requiring SAN-based storage.

Off-host processing

Data Migration:

  • From shared (SAN) storage to network shared storage

  • From network shared storage to SAN storage

  • From storage (DAS) connected to one node or cluster to storage (DAS) connected to a different node or cluster, where the source and destination do not share the storage

Backups/Snapshots:

An additional node can take a backup by joining the cluster and reading from volumes or snapshots that are hosted on DAS or shared storage that is connected to one or more nodes of the cluster, but not to the node taking the backup.

DAS SSD benefits leveraged with existing SF Oracle RAC features

  • Mirroring across DAS SSDs connected to individual nodes of the cluster. DAS SSDs provide better performance than SAN storage (including SSDs). FSS provides a way to share these SSDs across the cluster.

  • Keeping one mirror on the SSD and another on the SAN storage provides faster read access due to the SSDs, and also provides high availability of data due to the SAN storage.

  • There are several best practices for using SSDs with Storage Foundation. All of these use cases are possible with SAN-attached SSDs in a clustered environment. With FSS, DAS SSDs can also be used for the same purposes.

FSS with SmartIO for file system caching

If the nodes in the cluster have internal SSDs as well as HDDs, the HDDs can be shared over the network using FSS. You can use SmartIO to set up a read/write-back cache using the SSDs. The read cache can service volumes created using the network-shared HDDs.
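
For example, a VxFS cache area can be created on a node's local SSD with the sfcache command, and the data volumes built on network-shared HDDs are then serviced by that cache. The following is a minimal sketch; the device name ssd0_0 is a placeholder, and the exact sfcache options may differ in your release:

    # sfcache create -t VxFS ssd0_0
    # sfcache list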

FSS with SmartIO for remote caching

FSS works with SmartIO to provide caching services for nodes that do not have local SSD devices.

In this scenario, Flexible Storage Sharing (FSS) exports SSDs from the nodes that have a local SSD. FSS then creates a pool of the exported SSDs in the cluster. From this shared pool, a cache area is created for each node in the cluster. Each cache area is accessible only to the particular node for which it is created. The cache area can be of type VxVM or VxFS.

The cluster must be a CVM cluster.

The volume layout of the cache area on remote SSDs follows the simple stripe layout, not the default FSS allocation policy of mirroring across hosts. If the caching operation degrades performance on a particular volume, then caching is disabled for that volume. The volumes that are used to create cache areas must be created on disk groups with disk group version 200 or later. However, data volumes that are created on disk groups with disk group version 190 or later can access the cache area created on FSS exported devices.
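
To satisfy the disk group version requirement, you can check the current version and upgrade an existing disk group before creating the cache volumes. The following is a minimal sketch, assuming a disk group named cachedg:

    # vxdg list cachedg
    # vxdg upgrade cachedg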

Note:

CFS write-back caching is not supported for cache areas created on remote SSDs.

Apart from the CVM/CFS license, the SmartIO license is required to create cache areas on the exported devices.

For more information, see the document Veritas InfoScale SmartIO for Solid State Drives Solutions Guide.

Campus cluster configuration

Campus clusters can be set up without the need for Fibre Channel (FC) SAN connectivity between sites.

See Administering Flexible Storage Sharing.

Limitations of Flexible Storage Sharing

Note the following limitations for using Flexible Storage Sharing (FSS):

  • FSS is supported only on clusters of up to 8 nodes.

  • Disk initialization operations should be performed only on nodes with local connectivity to the disk.

  • FSS does not support the use of boot disks, opaque disks, and non-VxVM disks for network sharing.

  • Hot-relocation is disabled on FSS disk groups.

  • The vxresize operation is not supported on volumes and file systems from the slave node.

  • FSS does not support non-SCSI3 disks connected to multiple hosts.

  • Dynamic LUN Expansion (DLE) is not supported.

  • FSS supports only instant data change objects (DCOs), created using the vxsnap operation or by specifying the "logtype=dco dcoversion=20" attributes during volume creation (see the example at the end of this list).

  • By default, creating a mirror between an SSD and an HDD is not supported through vxassist, because the underlying mediatypes are different. To work around this issue, you can create a volume with one mediatype, for instance the HDD (the default mediatype), and then add a mirror on the SSD.

    For example:

    # vxassist -g diskgroup make volume size init=none
    # vxassist -g diskgroup mirror volume mediatype:ssd
    # vxvol -g diskgroup init active volume

    See Administering mirrored volumes using vxassist.
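
As noted in the DCO limitation above, a version 20 DCO can be attached at volume creation time. The following is a minimal sketch, assuming a disk group named diskgroup and a volume named datavol:

    # vxassist -g diskgroup make datavol 10g logtype=dco dcoversion=20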