

mount_vxfs(1M)

NAME

mount_vxfs - mount a VxFS file system

SYNOPSIS

mount [-t vxfs] [generic_options] [-o specific_options]
     {special | mount_point}

mount [-t vxfs] [generic_options] [-o specific_options]
     special mount_point

AVAILABILITY

VRTSvxfs

DESCRIPTION

The mount command notifies the system that special, a VxFS block special device, is available to users from mount_point, which must exist before mount is invoked. mount_point becomes the name of the root of the newly mounted file system special.

You can specify multiple -o options using a comma-separated list.

Only a privileged user can mount file systems.

NOTES

The mount command automatically runs the VxFS fsck command to clean up the intent log if the mount command detects a dirty log in the file system. This functionality is only supported on file systems mounted on a Veritas Volume Manager (VxVM) volume.

Cluster File System Issues

You cannot cluster mount a file system that has disk layout Version 6, 7, or 8. You must first mount the file system on one node and upgrade the file system to a supported disk layout version using the vxupgrade command. After upgrading the disk layout version, you can cluster mount the file system.

The mount command reserves a shared volume when invoked. If the shared volume is in use by another command, the mount command fails.

Attempting to cluster mount a file system that is too small might fail due to a lack of free space, as mounting a cluster file system sometimes allocates metadata.

Use crw, nomtime, and seconly only on cluster-mounted file systems.

Several options behave differently on cluster mounts than on local mounts. See the CLUSTER FILE SYSTEM BEHAVIOR section.

Be careful when accessing shared volumes with utilities such as dd that can write data to disk. Such utilities can destroy data being accessed from other nodes.

OPTIONS

generic_options
  Supported by the generic mount command. See the mount(8) manual page.
-t vxfs Specifies the VxFS file system type.
-o Specifies the VxFS-specific options in a comma-separated list. The available options are:
ckptautomnt=off|ro|rw
  Sets the Storage Checkpoint automount behavior. The default value is off, in which case all Storage Checkpoints must be mounted manually with the mount command.
If you specify ro, all file system Storage Checkpoints are automatically mounted read-only in a hidden directory named .checkpoint in the root directory of the file system. This directory does not appear in directory listings. The .checkpoint directory contains a mount point named for each Storage Checkpoint.
If you specify rw, the behavior is the same as with ro, but the Storage Checkpoints are writable.
During a remount, the ckptautomnt option cannot be changed from rw to ro.
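For example, the following command (the device and mount point names are placeholders) mounts a file system so that its Storage Checkpoints are automounted read-only under the hidden .checkpoint directory:

# mount -t vxfs -o ckptautomnt=ro /dev/vx/dsk/dg_name/volname /mnt1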
blkclear
  Clears all data extents before allocating the extents to a file. Clearing data extents requires the synchronous zeroing of certain newly allocated extents. blkclear improves security by preventing previously deleted data from accidentally appearing in newly allocated space after a system crash. blkclear degrades performance. Most file systems match the VxFS default of off because this corner case is difficult to exploit deterministically and because the risk is limited to users possibly viewing previously deleted data they might not be authorized to see.
cio Mounts the file system for concurrent reads and writes. Concurrent I/O (CIO) improves performance for well-behaved major commercial databases by relaxing file-level write locking and bypassing buffering. Only such databases, and other applications that do not need POSIX concurrency guarantees, should use CIO.
The cio option cannot be disabled by remounting the file system. To disable the cio option, unmount the file system, then mount the file system without the cio option.
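For example, the following commands (placeholder device and mount point names) mount a file system with Concurrent I/O enabled and later disable it by unmounting and remounting without the option:

# mount -t vxfs -o cio /dev/vx/dsk/dg_name/volname /mnt1
# umount /mnt1
# mount -t vxfs /dev/vx/dsk/dg_name/volname /mnt1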
ckpt=ckpt_name
  Mounts the Storage Checkpoint of a VxFS file system. ckpt_name is the name of a file system Storage Checkpoint previously created by the fsckptadm command. See the fsckptadm(1M) manual page. mount_point is the directory on which to mount the Storage Checkpoint. special is the Storage Checkpoint pseudo device. Storage Checkpoints are mounted on pseudo devices that do not appear in the system name space. The pseudo devices exist only while the Storage Checkpoint is mounted. Storage Checkpoint pseudo device names have the following format:
 

original_file_system_device_path:ckpt_name

Storage Checkpoints are mounted read-only by default when you use the /sbin/mount.vxfs command. Due to the behavior of the Linux operating system, Storage Checkpoints are mounted read/write by default when you use the /bin/mount command. You must specify the ro option to mount a Storage Checkpoint as read-only when you use the /bin/mount command. A file system must be mounted before mounting any of its Storage Checkpoints. A file system can be unmounted only after all of its Storage Checkpoints are unmounted.
cluster
  Mounts a file system in shared mode. See the CLUSTER FILE SYSTEM SPECIFIC OPTIONS section.
convosync=direct|dsync|unbuffered|closesync|delay
  Alters the file system caching behavior for O_SYNC and O_DSYNC I/O operations.
The direct value handles any reads or writes with the O_SYNC or O_DSYNC flags as if the VX_DIRECT caching advisory is set.
The dsync value handles any writes with the O_SYNC flag as if the VX_DSYNC caching advisory is set. The dsync value does not modify behavior for writes with O_DSYNC set.
The unbuffered value handles any reads or writes with the O_SYNC or O_DSYNC flags as if the VX_UNBUFFERED caching advisory is set.
The closesync value delays O_SYNC or O_DSYNC writes so that the writes do not take effect immediately.
If the closesync, dsync, direct, or unbuffered value is set and a file is written to using a file descriptor with the O_SYNC or O_DSYNC flag set, the equivalent of an fdatasync(2) call is performed on the final close of the descriptor.
The delay value delays O_SYNC or O_DSYNC writes so that the writes do not take effect immediately. With this option, VxFS changes O_SYNC or O_DSYNC writes into delayed writes. No special action is performed when closing a file. This option effectively cancels data integrity guarantees typically provided by opening a file with O_SYNC or O_DSYNC.
See the vxfsio(7) manual page for information about VX_DIRECT, VX_DSYNC, and VX_UNBUFFERED.
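For example, the following command (placeholder device and mount point names) mounts a file system so that O_SYNC and O_DSYNC I/O is handled as if the VX_DIRECT caching advisory is set:

# mount -t vxfs -o convosync=direct /dev/vx/dsk/dg_name/volname /mnt1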
crw See the CLUSTER FILE SYSTEM SPECIFIC OPTIONS section.
datainlog | nodatainlog
  Generally, VxFS does O_SYNC or O_DSYNC writes by logging the data and the time change to the inode (datainlog). If the nodatainlog option is used, the logging of synchronous writes is disabled; O_SYNC writes the data into the file and updates the inode synchronously before returning to the user.
delayfsck
  Allows mounting a corrupted file system in read-write mode, provided that log replay succeeds (if the log is dirty). Log replay is triggered automatically as part of the mount operation. A corrupted file system mounted with this option behaves like a file system in which corruption is detected while it is mounted. Note that any file system with corruption must still be repaired offline using "fsck -o full"; this option only allows continued use of the file system in its degraded state.
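For example, the following commands (placeholder device and mount point names; the exact fsck invocation can vary, see the fsck_vxfs(1M) manual page) mount a corrupted file system for continued use in its degraded state, then unmount it for the required offline repair:

# mount -t vxfs -o delayfsck /dev/vx/dsk/dg_name/volname /mnt1
# umount /mnt1
# fsck -t vxfs -o full /dev/vx/dsk/dg_name/volname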
fixinternalext=size
  For internal allocations of a file, an extent of a fixed size (or a multiple of that size) is allocated. The value of size is in bytes and must be a multiple of the file system block size. The size can be specified as a number of bytes, such as 1024 or 2048, or in human-readable form, such as 1K, 2K, or 1M.
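For example, the following command (placeholder device and mount point names) mounts a file system so that internal allocations use fixed 1M extents:

# mount -t vxfs -o fixinternalext=1M /dev/vx/dsk/dg_name/volname /mnt1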
ioerror=disable|nodisable|wdisable|mwdisable|mdisable
  Sets the policy for handling I/O errors on a mounted file system. VxFS offers multiple error policies to handle different storage technologies for which a single approach is inadequate. mwdisable is the default ioerror mount option for local mounts. On cluster mounts, mdisable is the default ioerror mount option.
I/O errors can occur while reading or writing file data, or while reading or writing metadata. The file system can respond to these I/O errors either by halting or by gradually degrading. ioerror provides five policies that determine how the file system responds to the various errors. All five policies limit data corruption, either by stopping the file system or by marking a corrupted inode as bad.
The following matrix shows the file system’s response to the various errors depending on the policy set:

              file       file       metadata   metadata
              read       write      read       write
           ------------------------------------------------
disable    |  disable |  disable |  disable |  disable |
nodisable  |  degrade |  degrade |  degrade |  disable |
wdisable   |  degrade |  disable |  degrade |  disable |
mwdisable  |  degrade |  degrade |  degrade |  disable |
mdisable   |  degrade |  degrade |  disable |  disable |
           ------------------------------------------------

If disable is selected, VxFS disables the file system after detecting any I/O error. You must then unmount the file system and correct the condition causing the I/O error. After the problem is repaired, run fsck and mount the file system again. In most cases, replay fsck is sufficient to repair the file system. A full fsck is required only if the file system’s metadata was structurally damaged. Select disable when using redundant underlying storage, such as RAID-5 or mirrored disks.
If the nodisable option is selected, the behavior is the same as with the mwdisable ioerror policy. For more information, see the mwdisable option.
For file data read and write errors, VxFS sets the VX_DATAIOERR flag in the super-block. For metadata read errors, VxFS sets the VX_FULLFSCK flag in the super-block. For metadata write errors, VxFS sets the VX_FULLFSCK and VX_METAIOERR flags in the super-block and may mark associated metadata as bad on disk. VxFS then prints the appropriate error messages to the console. See the Storage Foundation Administrator’s Guide for information on what actions to take for specific errors.
You should stop the file system as soon as possible and repair the condition causing the I/O error. After the problem is repaired, run fsck and mount the file system again.
If wdisable (write disable) or mwdisable (metadata-write disable) is selected, the file system is disabled or degraded, as shown in the matrix, depending on the type of error encountered. Select wdisable or mwdisable for environments where read errors are more likely to persist than write errors. Examples include environments where read errors likely signal a connection failure to redundant storage, as well as environments using non-redundant storage.
If there is serious damage to the file system or if there is structural corruption of file system metadata, VxFS marks the file system for full fsck regardless of which I/O error policy is in effect.
If the policy selected is mdisable (metadata disable), the file system is disabled if a metadata read or write fails. However, the file system continues to operate if the failure is confined to data extents.
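For example, the following command (placeholder device and mount point names) selects the wdisable policy, which may suit environments where read errors are likely to persist:

# mount -t vxfs -o ioerror=wdisable /dev/vx/dsk/dg_name/volname /mnt1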
largefiles | nolargefiles
  These options do not turn largefiles capability on and off, and are only provided for compatibility with the UNIX Large File Summit standards. Most users should turn on largefiles via the mkfs or fsadm command and ignore this mount option.
These options test whether a file system is largefiles capable. If nolargefiles is specified and the mount succeeds, the file system does not contain any files two gigabytes or larger and such files cannot be created. If largefiles is specified and the mount succeeds, the file system can contain files two gigabytes or larger and large files can be created. For a mount to succeed, the option must match the largefiles flag as specified by mkfs or fsadm. If no option is specified, VxFS automatically uses the persistent value set by mkfs or fsadm.
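For example, the following command (placeholder device and mount point names) succeeds only if the persistent largefiles flag set by mkfs or fsadm is on, which confirms that files two gigabytes or larger can be created:

# mount -t vxfs -o largefiles /dev/vx/dsk/dg_name/volname /mnt1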
log | delaylog | tmplog
  Controls the timing of flushing the VxFS intent log and other metadata to disk, which affects when operations are guaranteed persistent after a system failure. The default is delaylog.
In the following description, the term "effects of system calls" refers to changes to file system data and metadata caused by the system call, excluding changes to st_atime. See the stat(2) manual page for more information about st_atime.
In log mode, all system calls other than write(2), writev(2), and pwrite(2) are guaranteed to be persistent once the system call returns to the application.
In delaylog mode, the effects of most system calls other than write(2), writev(2), and pwrite(2) are guaranteed to be persistent approximately 3 seconds after the system call returns to the application. This provides better consistency guarantees than most other file systems in which most system calls are not persistent until approximately 30 seconds or more after the call returns.
In tmplog mode, the effects of system calls have persistence guarantees that are similar to those in delaylog mode. In addition, enhanced flushing of delayed extending writes is disabled, which results in better performance but increases the chances of data loss or of uninitialized data appearing in a file that was being actively written at the time of a system failure. This mode is recommended only for temporary file systems.
In delaylog and log mode, the rename(2) system call flushes the source file to disk to guarantee the persistence of the file data before renaming it. In both modes, the rename is also guaranteed to be persistent when the system call returns. Some scripts and programs try to update a file atomically by writing the new file contents to a temporary file and then renaming it on top of the target file. Both these modes let such scripts and programs achieve atomicity.
In all cases, VxFS is fully POSIX compliant. The effects of the fsync(2) and fdatasync(2) system calls are guaranteed to be persistent once the calls return. The persistence guarantees for data or metadata modified by write(2), writev(2), or pwrite(2) are not affected by the logging mount options. The effects of these system calls are guaranteed to be persistent only if the O_SYNC, O_DSYNC, VX_DSYNC, or VX_DIRECT flag, as modified by the convosync= mount option, has been specified for the file descriptor.
The log, delaylog, and tmplog mount options do not alter NFS server behavior. In all cases, VxFS complies with the persistency requirements of the NFS v2, v3, and v4 standards.
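For example, the following commands (placeholder device and mount point names) mount a scratch file system in tmplog mode and mount another file system in log mode for workloads that need persistence as soon as system calls return:

# mount -t vxfs -o tmplog /dev/vx/dsk/dg_name/tmpvol /tmpfs1
# mount -t vxfs -o log /dev/vx/dsk/dg_name/volname /mnt1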
logiosize=size
  Read-modify-write storage devices can see improved write performance when writes are multiples of the sector size, because storage can avoid the read-modify step. With the logiosize mount option, VxFS writes its intent log in multiples of size bytes, to improve performance with such devices. The values for size can be 512, 1024, 2048, 4096, or 8192. The default value is the device’s sector size. Similarly, the file system block size, as set by mkfs_vxfs(1M), should be the same as or larger than the device’s sector size.
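For example, on a device with a 4096-byte sector size, the following command (placeholder device and mount point names) writes the intent log in 4096-byte multiples:

# mount -t vxfs -o logiosize=4096 /dev/vx/dsk/dg_name/volname /mnt1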
mincache=direct|dsync|unbuffered|closesync|tmpcache
  Alters the caching behavior of the file system.
The direct value handles any reads without the O_SYNC flag, and any writes without the O_SYNC flag and without the VX_DSYNC, VX_DIRECT, or VX_UNBUFFERED caching advisories, as if the VX_DIRECT caching advisory was set.
The unbuffered value handles any reads without the O_SYNC flag, and any writes without the O_SYNC flag and without the VX_DSYNC, VX_DIRECT, or VX_UNBUFFERED caching advisories, as if the VX_UNBUFFERED caching advisory was set.
The dsync value handles any writes without the O_SYNC flag and without the VX_DIRECT, VX_DSYNC, or VX_UNBUFFERED caching advisories as if the VX_DSYNC caching advisory was set.
For the closesync, dsync, unbuffered, and direct values, when the final close of a file descriptor referencing a file is performed, the equivalent of an fdatasync(2) call is performed.
The tmpcache value disables delayed extending writes, trading off integrity for performance. If blkclear is used in conjunction with tmpcache, newly allocated extents are not zeroed. If the system crashes, uninitialized data may appear in files that were being written at the time of a system crash.
See the vxfsio(7) manual page for more information on VX_DIRECT, VX_DSYNC, and VX_UNBUFFERED.
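For example, the following command (placeholder device and mount point names) mounts a file system with the closesync value so that the equivalent of fdatasync(2) is performed on the final close of each file:

# mount -t vxfs -o mincache=closesync /dev/vx/dsk/dg_name/volname /mnt1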
mntlock=ID
  Specifies that the mount point is locked with the specified identifier, ID, which prevents the file system from being unmounted unless the umount command is invoked with the mntunlock option. The identifier can be any ASCII string of up to 31 bytes.
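For example, the following commands (placeholder names; the identifier mylock is arbitrary, and a umount command that accepts VxFS-specific -o options is assumed) lock a mount point and later unlock and unmount it:

# mount -t vxfs -o mntlock=mylock /dev/vx/dsk/dg_name/volname /mnt1
# umount -o mntunlock=mylock /mnt1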
noatime
  Directs the file system to ignore file access time updates except when they coincide with updates to ctime or mtime. See the stat(1u) manual page. By default, the file system records access time (atime). The noatime option can improve performance on file systems where access times are not important.
noauto
  Allows the file system to be mounted explicitly. That is, the -a option will not cause the file system to be mounted. This option is normally used for file systems listed in the /etc/fstab file, which should not be mounted automatically at boot time.
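For example, an /etc/fstab entry such as the following (placeholder device and mount point names) keeps the file system from being mounted automatically at boot time while still allowing an explicit mount:

/dev/vx/dsk/dg_name/volname /mnt1 vxfs noauto 0 0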
nomtime
  See the CLUSTER FILE SYSTEM BEHAVIOR section.
protected=off|on
  When set to on, makes the file system read-only for all applications except file replication. Use this option to protect a VFR target from accidental modification. While a file system is protected, some administrative commands are unavailable.
quota | grpquota | usrquota
  Enables disk quotas. The quota option is valid only on file systems that are mounted read/write (rw).
The quota option enables both user and group quotas. The usrquota option enables only user quotas. The grpquota option enables only group quotas.
User quotas require a user quota file owned by root in the file system root directory. Group quotas require a group quota file. If the appropriate file does not exist, the file is created. The user and group quota files are layout version dependent, as follows:

Layout Version    User Quota File    Group Quota File    Quota Support
-----------------------------------------------------------------------
10              |  quotas.64       |  quotas.grp.64    |  64-bit |
9 (or earlier)  |  quotas          |  quotas.grp       |  32-bit |
-----------------------------------------------------------------------

These files store each user’s or group’s usage limits.
VxFS stores quota information in private file system metadata. If the file system is mounted with quotas enabled, and if the file system was previously mounted with quotas disabled and was modified, the quota information is rebuilt. This may take some time depending on the amount of information to rebuild. See the vxedquota(1M) manual page for details on managing quotas.
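For example, the following command (placeholder device and mount point names) mounts a file system read/write with only user quotas enabled:

# mount -t vxfs -o usrquota /dev/vx/dsk/dg_name/volname /mnt1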
remount
  Changes the mount options for a mounted file system. In particular, remount changes the logging and caching policies. The remount option also changes a file system from read-only to read/write.
The remount option cannot change a file system from read/write to read-only, nor can it set the snapof or snapsize attributes.
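For example, the following command (placeholder device and mount point names) changes an already mounted file system from read-only to read/write and switches it to log mode:

# mount -t vxfs -o remount,rw,log /dev/vx/dsk/dg_name/volname /mnt1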
rw | ro
  Read/write or read-only. The default is rw.
seconly
  See the CLUSTER FILE SYSTEM SPECIFIC OPTIONS section.
smartiomode=nocache|read|writeback|cfusion
  Specifies the mode for the SmartIO caching. If you specify nocache, the SmartIO caching is disabled for the mount point. If you specify read, SmartIO caching is enabled in read mode for the mount point. If you specify writeback, SmartIO caching is enabled in writeback mode for the mount point.
If you specify cfusion, SmartIO caching is enabled in distributed read cache mode for the mount point. cfusion is supported only with cluster mount option.
smartiocache=[cache_area] | [[rd_cache]:[wb_cache]]
  Specifies the cache to be used for SmartIO caching. If you specify cache_area, that cache area is used for read caching or for both read and writeback caching, depending on the smartiomode setting. If you specify rd_cache:wb_cache, rd_cache is used for read caching and wb_cache is used for writeback caching. If the smartiocache option is not specified, the online default cache area is used for caching.
For more information about SmartIO, see the Storage Foundation and High Availability Solutions SmartIO for Solid State Devices Solutions Guide.
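For example, the following command (placeholder device and mount point names; cachearea1 is an assumed cache area name) enables SmartIO read caching from a specific cache area:

# mount -t vxfs -o smartiomode=read,smartiocache=cachearea1 /dev/vx/dsk/dg_name/volname /mnt1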
snapof=filesystem
  Mounts a snapshot of the specified file system, where filesystem is either the directory on which a VxFS file system is mounted, or the block special file containing a mounted VxFS file system.
The filesystem argument cannot refer to a multi-volume file system unless the file system contains only one volume. The special argument cannot refer to a volume set.
snapsize=size
  Used in conjunction with snapof. size is the size in sectors of the snapshot file system being mounted. This option is required only when the device driver cannot determine the size of snapof_special, and defaults to the entire device if not specified.
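For example, the following command (placeholder names; the snapshot volume and size are illustrative) mounts a snapshot of the file system mounted at /fsdir onto /snapdir, specifying the snapshot size in sectors:

# mount -t vxfs -o snapof=/fsdir,snapsize=262144 /dev/vx/dsk/dg_name/snapvol /snapdir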
suid | nosuid
  setuid is honored or ignored on execution. The default is suid.
uniqueino
  Causes inode numbers of automounted Storage Checkpoints to be modified so that they are unique. Without this option, inode numbers may be duplicated across Storage Checkpoints. The unique inode numbers do not fit in 32 bits.

CLUSTER FILE SYSTEM SPECIFIC OPTIONS

The following options apply only to cluster file systems.
cluster Mounts a file system in shared mode. special must be a shared volume in a Cluster Volume Manager (CVM) environment. See the vxdg(1M) manual page, particularly the import command, for more information on shared volumes. Other cluster nodes can also mount special in shared mode. The remount option cannot be used to switch between the local and the shared modes.
The first node to mount special is called the primary node. Other nodes are called secondary nodes. Secondary nodes can be mounted read/write (rw) if the primary node is mounted read-only (ro).
crw The cluster read/write option allows asymmetric mounts, with some nodes read/write and others read-only. The crw option must be specified with the -o cluster option. If crw is not specified, the default is for the secondary nodes to match the primary node’s read/write setting.
You can use crw in conjunction with rw or ro as shown in the following mount compatibility matrix:

                            Secondary
           ------------------------------------------
Primary       ro        rw        ro,crw    rw,crw
           ------------------------------------------
ro            yes       no        no        no
rw            no        yes       yes       yes
ro,crw        no        yes       yes       yes
rw,crw        no        yes       yes       yes
           ------------------------------------------

If the primary is mounted with ro,crw or rw,crw as shown in the first column, the secondary read and write capabilities can still be set independently. For a cluster mount, rw on the primary enables cluster-wide read/write capability.
The read and write capabilities can be changed from the original setting to another using the -o remount option. The read and write capabilities can be changed according to the following matrix:

From/To       ro        rw        ro,crw    rw,crw
           ------------------------------------------
ro            no        yes       yes       yes
rw            no        yes       no        yes
ro,crw        no        yes       yes       yes
rw,crw        no        yes       no        yes
           ------------------------------------------

If a cluster file system is mounted read/write (rw), the underlying disk group must have the activation mode attribute set to shared-write (sw).
If a node mounts a cluster file system that is ro,crw and the disk group activation mode is shared-read (sr), that node can never become the primary node and must be mounted as a seconly mount. See the seconly option. See the Storage Foundation Cluster File System High Availability Administrator’s Guide and the vxdg(1M) manual page for more information on disk activation modes.
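For example, the following commands (placeholder device and mount point names) create an asymmetric cluster mount permitted by the matrices above, with the primary node read/write and a secondary node read-only.

On the primary node:

# mount -t vxfs -o cluster,rw /dev/vx/dsk/dg_name/sharedvol /cfs1

On a secondary node:

# mount -t vxfs -o cluster,ro,crw /dev/vx/dsk/dg_name/sharedvol /cfs1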
nomtime The nomtime option delays updating the file modification time in the specified cluster file system. This removes the need to synchronize file modification and change times, which improves performance. Use the nomtime option in high-performance computing environments where multiple nodes simultaneously write to the same file, and where the application does not need consistent, cluster-wide file modification times. See the Storage Foundation Cluster File System High Availability Administrator’s Guide for more information on parallel I/O.
seconly Mounts a shared file system as a secondary node, only. A secondary-only node cannot assume the primaryship for the specified shared file system. For a mount with the seconly option to succeed, the primary node must already be mounted. The seconly option also requires the -o cluster option. The seconly option overrides any policy that was set using the fsclustadm command. See the fsclustadm(1M) manual page. This option can be set or reset using the -o remount option. A remount with seconly fails if that node is already the primary node for the file system. A file system with a seconly mount will be disabled when the file system is unmounted on the primary node. No other node can become primary until all nodes have mounted the file system with a seconly mount.
Future releases will not support the seconly mount option.

CLUSTER FILE SYSTEM BEHAVIOR

Several options behave differently in cluster mounts. The following options are applied only to the node on which the command is run. To set these options on all cluster nodes, use the cfsmntadm command.
convosync=direct | dsync | unbuffered | closesync | delay
datainlog | nodatainlog
mincache=direct | dsync | unbuffered | closesync | tmpcache
log | delaylog | tmplog
  In a cluster mount, the file system must have disk layout Version 7 or later to use the log, delaylog, and tmplog options.

The following options behave differently or require specific usage to be used in a cluster:

ckpt=ckpt_name
  To mount a Storage Checkpoint in shared mode on a cluster file system, you must also specify the -o cluster option. See the EXAMPLES section.
ioerror=disable|nodisable|wdisable|mwdisable|mdisable
  Only disable and mdisable are allowed in cluster file systems; mdisable is the cluster mount default.
These settings behave differently on cluster file systems. With disable or mdisable, the file system is disabled only on the node where the I/O error occurs. All other nodes can still access the file system. If the I/O error is on the CFS primary, a new primary is chosen from the remaining nodes.
remount A cluster file system cannot be remounted as a local file system, and vice versa. Changes made to any per-node options by the remount option, such as caching and logging, apply only to the node on which the remount is run.
snapof=filesystem
  On cluster file systems, snapshots can be created on any cluster node and backup operations can be performed from that node. The snapshot of a cluster file system is accessible only on the node where it is created. That is, the snapshot file system itself cannot be cluster mounted. See the Storage Foundation Cluster File System High Availability Administrator’s Guide for information on creating snapshots on cluster file systems.
suid | nosuid
  The suid and nosuid options are applicable only on the node where these options are given to mount. They are not applied to every node on the cluster. This applies for remount as well.
nodev | noexec
  The nodev and noexec options are applicable only on the node where these options are given to mount. They are not applied to every node on the cluster. This applies for remount as well.

CONFLICTING OPTIONS

The following table lists mount options that conflict with other mount options. The mount command fails if the first option in a row on the table is combined with any following option in that row. For example, do not use the rw mount option with either the ro, snapof, or snapsize mount option.

Mount Option      Conflicting Mount Options
----------------  --------------------------------------------------
blkclear          logiosize, ro
ckpt              convosync, datainlog, logiosize, nodatainlog,
                  quota, remount, seconly, smartiomode,
                  smartiocache, fixinternalext
cluster           snapof, snapsize, tranflush
convosync         ro, snapof, snapsize
crw               snapof, snapsize
datainlog         nodatainlog, ro, snapof, snapsize
delaylog          log, tmplog, snapof, snapsize
fixinternalext    ro, ckpt
largefiles        nolargefiles
log               delaylog, ro, snapof, snapsize, tmplog
logiosize         ckpt, ro, snapof, snapsize
mincache          snapof, snapsize
nodatainlog       datainlog, ro, snapof, snapsize
nolargefiles      largefiles
nosuid            suid
quota             ckpt, ro
remount           snapof, snapsize
ro                convosync, datainlog, log, nodatainlog, rw,
                  tmplog, fixinternalext
rw                ro, snapof, snapsize
seconly           snapsize, snapof, ckpt
smartiomode       ckpt
smartiocache      ckpt
snapof            blkclear, cluster, delaylog, log, mincache,
                  rw, tmplog
snapsize          blkclear, cluster, delaylog, log, mincache,
                  rw, tmplog
suid              largefiles, logiosize, nolargefiles, nosuid
tmplog            delaylog, log, ro, snapof, snapsize
----------------  --------------------------------------------------

EXAMPLES

To mount a Storage Checkpoint of a file system, first mount the file system, then mount the Storage Checkpoint:

# mount -t vxfs /dev/vx/dsk/dg_name/volname /fsdir
# mount -t vxfs -o ckpt=myckpt /dev/vx/dsk/dg_name/volname:myckpt /ckptdir

To unmount a file system, unmount the Storage Checkpoint first, then unmount the file system:


# umount /ckptdir
# umount /fsdir

To unmount a file system:


# umount /fsdir

To mount a Storage Checkpoint of a cluster file system:


# mount -t vxfs -o cluster,ckpt=ckpt_name \
/dev/vx/dsk/dg_name/volname:ckpt_name /ckpt_mount_point

To have Storage Checkpoints mounted automatically at boot time, list the Storage Checkpoints in the /etc/fstab file as in the following example:


/dev/vx/dsk/fsvol/vol1          /fsvol          vxfs  defaults     1 1
/dev/vx/dsk/fsvol/vol1:may_19   /fsvol_may_19   vxfs  ckpt=may_19  1 1

FILES

/etc/mtab Table of mounted file systems.
/etc/fstab Table of file systems.
/mount_point/quotas User quotas file for 32-bit quota.
/mount_point/quotas.64 User quotas file for 64-bit quota.
/mount_point/quotas.grp Group quotas file for 32-bit quota.
/mount_point/quotas.grp.64 Group quotas file for 64-bit quota.

SEE ALSO

cfsmntadm(1M), fsadm_vxfs(1M), fsck_vxfs(1M), fsckptadm(1M), fsclustadm(1M), mkfs_vxfs(1M), qlogmk(1M), umount(1M), vxdg(1M), vxedquota(1M), stat(1U), fdatasync(2), fsync(2), setuid(2), stat(2), fs_vxfs(4), fstab(5), vxfsio(7), mount(8)

Storage Foundation and High Availability Solutions SmartIO for Solid State Devices Solutions Guide,
Storage Foundation Administrator’s Guide,
Storage Foundation Release Notes,
Storage Foundation Cluster File System High Availability Administrator’s Guide.


VxFS 7.4 mount_vxfs(1M)