read_pref_io
The preferred read request size. The file system uses this value in conjunction with the read_nstream value to determine how much data to read ahead. The default value is 64K.

write_pref_io
The preferred write request size. The file system uses this value in conjunction with the write_nstream value to determine how to perform flush-behind on writes. The default value is 64K.

read_nstream
The number of parallel read requests of size read_pref_io to have outstanding at one time. The file system uses the product of read_nstream and read_pref_io to determine its read-ahead size. The default value for read_nstream is 1.

write_nstream
The number of parallel write requests of size write_pref_io to have outstanding at one time. The file system uses the product of write_nstream and write_pref_io to determine when to perform flush-behind on writes. The default value for write_nstream is 1.

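Taken together, the two size/stream pairs are simple products. A minimal Python sketch of the arithmetic (the function names are illustrative, not part of VxFS):

    def read_ahead_size(read_pref_io: int, read_nstream: int) -> int:
        """Read-ahead size = read_nstream parallel requests of read_pref_io bytes."""
        return read_pref_io * read_nstream

    def flush_behind_size(write_pref_io: int, write_nstream: int) -> int:
        """Flush-behind amount = write_nstream requests of write_pref_io bytes."""
        return write_pref_io * write_nstream

    # Defaults: 64K preferred I/O size, 1 stream.
    assert read_ahead_size(64 * 1024, 1) == 64 * 1024      # 64K read-ahead
    assert flush_behind_size(64 * 1024, 4) == 256 * 1024   # 4 streams -> 256K
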
discovered_direct_iosz
Any file I/O request larger than discovered_direct_iosz is handled as discovered direct I/O. A discovered direct I/O is unbuffered, like direct I/O, but it does not require a synchronous commit of the inode when the file is extended or blocks are allocated. For larger I/O requests, the CPU time for copying the data into the page cache and the cost of using memory to buffer the I/O data become more expensive than the cost of doing the disk I/O, so discovered direct I/O is more efficient than regular I/O for these requests. The default value of this parameter is 256K.

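The selection rule reduces to a size comparison. A small Python illustration of the decision (the function name is invented for the sketch):

    DISCOVERED_DIRECT_IOSZ = 256 * 1024  # default: 256K

    def io_strategy(request_size: int) -> str:
        """Requests above the threshold bypass the page cache, like direct
        I/O, but without the synchronous inode commit on extending writes."""
        if request_size > DISCOVERED_DIRECT_IOSZ:
            return "discovered direct I/O (unbuffered)"
        return "buffered I/O (through the page cache)"

    print(io_strategy(128 * 1024))    # buffered
    print(io_strategy(1024 * 1024))   # discovered direct I/O
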
fcl_keeptime
Specifies the minimum amount of time, in seconds, that the VxFS File Change Log (FCL) keeps records in the log. When the oldest 8K block of FCL records has been kept longer than the value of fcl_keeptime, it is purged from the FCL and the extents nearest to the beginning of the FCL file are freed. This process is referred to as "punching a hole." Holes are punched in the FCL file in 8K chunks.
If the fcl_maxalloc parameter is set, records are purged from the FCL when the space allocated to the FCL exceeds fcl_maxalloc, even if the records have been in the log for less than fcl_keeptime seconds. If the file system runs out of space before fcl_keeptime is reached, the FCL is deactivated.
Either or both of the fcl_keeptime and fcl_maxalloc parameters must be set before the File Change Log can be activated.

fcl_maxalloc
Specifies the maximum amount of space that can be allocated to the VxFS File Change Log (FCL). The FCL file is a sparse file that grows as changes occur in the file system. When the space allocated to the FCL file reaches the fcl_maxalloc value, the oldest FCL records are purged and the extents nearest to the beginning of the FCL file are freed. This process is referred to as "punching a hole." Holes are punched in the FCL file in 8K chunks.
If the file system runs out of space before fcl_maxalloc is reached, the FCL is deactivated.
The minimum value of fcl_maxalloc is 4 MB. The default value is fs_size/33.
Either or both of the fcl_maxalloc and fcl_keeptime parameters must be set before the File Change Log can be activated. fcl_maxalloc does not apply to disk layout Versions 1 through 5.

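The purge policy shared by fcl_keeptime and fcl_maxalloc can be pictured as a queue of 8K chunks. The following Python sketch is an illustrative model only; the queue structure and function are invented and do not reflect the on-disk FCL format:

    from collections import deque

    CHUNK = 8 * 1024  # holes are punched in 8K chunks

    def purge_fcl(chunk_birth_times: deque, allocated: int,
                  fcl_keeptime: int, fcl_maxalloc: int, now: float) -> int:
        """Free the oldest 8K chunks that are past fcl_keeptime, or that
        must go because total allocation exceeds fcl_maxalloc."""
        while chunk_birth_times:
            too_old = now - chunk_birth_times[0] > fcl_keeptime
            over_alloc = fcl_maxalloc > 0 and allocated > fcl_maxalloc
            if too_old or over_alloc:
                chunk_birth_times.popleft()   # "punch a hole": free the extent
                allocated -= CHUNK
            else:
                break
        return allocated
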
fcl_winterval
Specifies the time, in seconds, that must elapse before the VxFS File Change Log (FCL) records another data overwrite, data extending write, or data truncate for the same file. Limiting the number of repetitive FCL records for continuous writes to the same file is important both for file system performance and for applications processing the FCL. fcl_winterval is best set to an interval shorter than the shortest interval at which any application reads the FCL; this way, every application using the FCL is assured of finding at least one FCL record for any file experiencing continuous data changes.
fcl_winterval is enforced for all files in the file system. Each file maintains its own time stamps, so the elapsed time between FCL records is tracked per file. This interval can be overridden using the VxFS FCL sync public API.
See the vxfs_fcl_sync(3) manual page.

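As an illustration, the per-file throttling can be modeled as a timestamp check. This Python sketch uses invented names and is not the VxFS implementation:

    # Per-file timestamps: a new overwrite/extend/truncate record is logged
    # only if fcl_winterval seconds have elapsed since the file's last one.
    last_record: dict[str, float] = {}

    def should_log(path: str, now: float, fcl_winterval: int) -> bool:
        prev = last_record.get(path)
        if prev is None or now - prev >= fcl_winterval:
            last_record[path] = now
            return True
        return False  # suppress repetitive records for continuous writes
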
hsm_write_prealloc
For a file managed by a hierarchical storage management (HSM) application, hsm_write_prealloc preallocates disk blocks before data is migrated back into the file system. An HSM application usually migrates the data back through a series of writes to the file, each of which allocates a few blocks. With hsm_write_prealloc enabled (hsm_write_prealloc=1), enough disk blocks are allocated on the first write to the empty file that no disk block allocation is required for subsequent writes, which improves write performance during migration.
The hsm_write_prealloc parameter is implemented outside of the DMAPI specification, and its usage has limitations depending on how the space within an HSM-controlled file is managed. It is advisable to use hsm_write_prealloc only when recommended by the HSM application controlling the file system.

initial_extent_size
Changes the default initial extent size. VxFS determines, based on the first write to a new file, the size of the first extent to allocate to the file. Normally, the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8K, the first extent allocated is 8K. After the initial extent, the file system increases the size of subsequent extents with each allocation.
See max_seqio_extent_size.
Because most applications write to files using a buffer size of 8K or less, the increasing extents start doubling from a small initial extent. initial_extent_size can raise the default initial extent size, so the doubling policy starts from a much larger initial size and the file system does not allocate a set of small extents at the start of the file. Use this parameter only on file systems that have a very large average file size; on such file systems, it results in fewer extents per file and less fragmentation. initial_extent_size is measured in file system blocks.

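For illustration, the initial-extent calculation described above can be sketched in Python (the helper and its defaults are hypothetical; VxFS performs this internally):

    def first_extent_bytes(first_write: int, initial_extent_size: int = 0,
                           block_size: int = 1024) -> int:
        if initial_extent_size:              # administrator override, in blocks
            return initial_extent_size * block_size
        ext = 8 * 1024                       # minimum initial extent: 8K
        while ext < first_write:
            ext *= 2                         # smallest power of 2 >= first write
        return ext

    assert first_extent_bytes(3000) == 8 * 1024         # small write -> 8K
    assert first_extent_bytes(100 * 1024) == 128 * 1024
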
inode_aging_count
Specifies the maximum number of inodes to place on an inode aging list. Inode aging is used in conjunction with file system Storage Checkpoints to allow quick restoration of large, recently deleted files. The aging list is maintained in first-in-first-out (FIFO) order, up to the maximum number of inodes specified by inode_aging_count. As newer inodes are placed on the list, older inodes are removed to complete their aging process. For best performance, it is advisable to age only a limited number of larger files before completion of the removal process. The default maximum number of inodes to age is 2048.

inode_aging_size
Specifies the minimum size a deleted inode must have to qualify for inode aging. Inode aging is used in conjunction with file system Storage Checkpoints to allow quick restoration of large, recently deleted files. For best performance, it is advisable to age only a limited number of larger files before completion of the removal process. Setting the size too low can push larger file inodes out of the aging queue to make room for newly removed smaller file inodes.

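The interaction of inode_aging_count and inode_aging_size can be pictured as a bounded FIFO queue. This Python sketch is a simplified model with invented names, not the VxFS implementation:

    from collections import deque

    class AgingList:
        """Bounded FIFO of deleted inodes awaiting final removal."""
        def __init__(self, inode_aging_count: int = 2048,
                     inode_aging_size: int = 0):
            self.queue: deque = deque()
            self.count = inode_aging_count
            self.min_size = inode_aging_size

        def on_delete(self, inode: int, size: int) -> None:
            if size < self.min_size:
                return                    # too small to age: remove immediately
            if len(self.queue) >= self.count:
                self.queue.popleft()      # oldest inode completes its removal
            self.queue.append(inode)      # newest deleted inode starts aging
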
max_direct_iosz
The maximum size of a direct I/O request that the file system issues. If a larger I/O request comes in, it is broken up into chunks of max_direct_iosz. This parameter defines how much memory an I/O request can lock at once, so it should not be set to more than 20 percent of memory.

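For illustration, the chunking behavior is straightforward to model (hypothetical Python; the generator name is invented):

    def split_direct_io(offset: int, length: int, max_direct_iosz: int):
        """Yield (offset, size) chunks no larger than max_direct_iosz."""
        while length > 0:
            chunk = min(length, max_direct_iosz)
            yield offset, chunk
            offset += chunk
            length -= chunk

    # A 5 MB request with a 2 MB limit becomes 2 MB + 2 MB + 1 MB.
    print(list(split_direct_io(0, 5 << 20, 2 << 20)))
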
max_diskq
Limits the maximum disk queue generated by a single file. When the file system is flushing data for a file and the number of pages being flushed exceeds max_diskq, processes are blocked until the amount of data being flushed decreases. Although this does not limit the actual disk queue, it prevents flushing processes from making the system unresponsive. The default value is 1 MB.

max_seqio_extent_size
Increases or decreases the maximum size of an extent. When the file system is following its default allocation policy for sequential writes to a file, it allocates an initial extent that is large enough for the first write to the file. As additional extents are allocated, they are progressively larger because the algorithm tries to double the size of the file with each new extent, so each extent can hold several writes' worth of data. This is done to reduce the total number of extents in anticipation of continued sequential writes. When the file stops being written, any unused space is freed for other files to use. Normally, this allocation stops increasing the size of extents at 262144 blocks, which prevents one file from holding too much unused space. max_seqio_extent_size is measured in file system blocks. The default and minimum value is 2048 blocks.

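A rough Python model of the doubling policy and its cap (illustrative only, with invented names; the real allocator also considers the state of free space):

    def next_extent_blocks(prev_extent: int, max_seqio_extent_size: int) -> int:
        """Each new extent doubles the previous one, up to the cap."""
        return min(prev_extent * 2, max_seqio_extent_size)

    ext, sizes = 2048, []
    for _ in range(8):
        sizes.append(ext)
        ext = next_extent_blocks(ext, 262144)
    print(sizes)  # 2048, 4096, 8192, ... capped at 262144 blocks
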
write_throttle
The write_throttle parameter is useful in special situations where a computer system has a combination of a large amount of memory and slow storage devices. In this configuration, sync operations, such as fsync(), may take so long to complete that the system appears to hang. This behavior occurs because the file system is creating dirty pages (in-memory updates) faster than they can be asynchronously flushed to disk without slowing system performance.
Lowering the value of write_throttle limits the number of dirty pages per file that the file system generates before flushing the pages to disk. After the number of dirty pages for a file reaches the write_throttle threshold, the file system starts flushing pages to disk even if free memory is still available.
The default value of write_throttle is zero, which puts no limit on the number of dirty pages per file. If non-zero, VxFS limits the number of dirty pages per file to write_throttle pages.
The default value typically generates a large number of dirty pages but maintains fast user writes. Depending on the speed of the storage device, lowering write_throttle may hurt user write performance, but because the number of dirty pages is limited, sync operations complete much faster.
Because lowering write_throttle may in some cases delay write requests (for example, it may increase the file disk queue to the max_diskq value, delaying user writes until the disk queue decreases), it is advisable not to change the value of write_throttle unless your system has a combination of large physical memory and slow storage devices.
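
As a sketch, the threshold behavior can be expressed as a simple predicate (hypothetical Python, not VxFS code):

    def should_flush(dirty_pages: int, write_throttle: int) -> bool:
        """Flush once a file hits write_throttle dirty pages, even with
        free memory available; zero (the default) disables the limit."""
        return write_throttle > 0 and dirty_pages >= write_throttle

    assert not should_flush(10_000, 0)    # default: no per-file limit
    assert should_flush(2_048, 1_024)     # over threshold: start flushing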