Tunable parameters for core VxVM

Table: Kernel tunable parameters for core VxVM lists the kernel tunable parameters for VxVM.

You can tune the parameters using the vxtune command or the operating system method, unless otherwise noted.
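
For example, the following shell sketch shows the general pattern of displaying and then setting a tunable with vxtune. The tunable name and value here are illustrative only, and the exact syntax and output format vary between releases; consult the vxtune(1M) manual page for your version.

    #!/bin/sh
    # Sketch: querying and setting a VxVM tunable with vxtune.
    # The value below is illustrative; verify syntax against vxtune(1M).

    vxtune vol_maxio          # display the current value of vol_maxio
    vxtune vol_maxio 4096     # set vol_maxio to 4096 sectors (2 MB)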

Table: Kernel tunable parameters for core VxVM

vol_checkpt_default

The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. If a system failure occurs during such an operation, a full recovery is not required; the operation can resume from the last checkpoint reached.

The default value is 20480 sectors (10 MB).

Increasing this size reduces the overhead that checkpoints impose on recovery operations, at the expense of more work to redo if a system failure occurs during a recovery.

vol_default_iodelay

The count in clock ticks for which utilities pause if they have been directed to reduce the frequency of issuing I/O requests, but have not been given a specific delay time. This tunable is used by utilities performing operations such as resynchronizing mirrors or rebuilding RAID-5 columns.

The default value is 50 ticks.

Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed.

vol_max_adminio_poolsz

The maximum size of the memory pool that is used for administrative I/O operations. VxVM uses this pool when throttling administrative I/O.

The default value is 64 MB. The maximum size must not be greater than the value of the voliomem_maxpool_sz parameter.

vol_max_vol

This parameter cannot be tuned with the vxtune command. The maximum number of volumes that can be created on the system. The minimum permitted value is 1. The maximum permitted value is the maximum number of minor numbers representable on the system.

The default value is 65534.

vol_maxio

The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit.

The default value is 2048 sectors (1 MB).

The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.

If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.

The maximum limit for vol_maxio is 20% of the smaller of physical memory or kernel virtual memory. It is inadvisable to go over this limit.
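
The relationships above reduce to simple arithmetic. The following shell sketch checks them for illustrative values, assuming 512-byte sectors; the voldrl_min_regionsz value shown is hypothetical.

    #!/bin/sh
    # Illustrative values; 1 sector = 512 bytes is assumed throughout.
    vol_maxio=2048                    # sectors (1 MB, the default)
    voliomem_maxpool_sz=134217728     # bytes (128 MB, the default)
    voldrl_min_regionsz=1024          # sectors (hypothetical)

    vol_maxio_bytes=$((vol_maxio * 512))

    # voliomem_maxpool_sz must be at least 10 times vol_maxio.
    [ "$voliomem_maxpool_sz" -ge $((vol_maxio_bytes * 10)) ] \
        && echo "10x rule satisfied" || echo "10x rule violated"

    # With DRL sequential logging, voldrl_min_regionsz must be at
    # least half of vol_maxio.
    [ "$voldrl_min_regionsz" -ge $((vol_maxio / 2)) ] \
        && echo "DRL rule satisfied" || echo "DRL rule violated"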

vol_maxioctl

The maximum size of data that can be passed into VxVM via an ioctl call. Increasing this limit allows larger operations to be performed. Decreasing the limit is not generally recommended, because some utilities depend upon performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests.

The default value is 32768 bytes (32 KB).

vol_maxparallelio

The number of I/O operations that the vxconfigd daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ or VOL_VOLDIO_WRITE ioctl call.

The default value is 256. This value should not be changed.

vol_maxspecialio

The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request that a large I/O request be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously.

The default value is 2048 sectors (1 MB).

Raising this limit can cause difficulties: if an I/O request requires more memory or kernel virtual mapping space than exists, the process can deadlock. The maximum limit for this tunable is 20% of the smaller of physical memory or kernel virtual memory. It is inadvisable to go over this limit, because deadlock is likely to occur.

If stripes are larger than the value of this tunable, full-stripe I/O requests are broken up, which prevents full-stripe reads and writes. This throttles volume I/O throughput for sequential or large I/O requests.

This tunable limits the size of an I/O request at a higher level in VxVM than the level of an individual disk. For example, for an 8-column by 64 KB stripe, a value of 256 KB allows I/O requests that use only half the disks in the stripe, which cuts potential throughput in half. With more columns or a larger interleave factor, relative performance is worse.

At a minimum, set this tunable to the size of your largest stripe (RAID-0 or RAID-5).
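
The arithmetic behind the example above is shown in the following sketch for a hypothetical 8-column stripe with a 64 KB stripe unit and an illustrative 256 KB limit.

    #!/bin/sh
    # Hypothetical layout: 8 columns, 64 KB stripe unit.
    ncols=8
    stripe_unit_kb=64
    limit_kb=256                                # illustrative tunable value

    full_stripe_kb=$((ncols * stripe_unit_kb))  # 512 KB full-stripe width
    cols_used=$((limit_kb / stripe_unit_kb))    # columns one request can span

    echo "full stripe:    ${full_stripe_kb} KB"
    echo "columns usable: ${cols_used} of ${ncols}"  # 4 of 8: half the throughput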

vol_stats_enable

Enables or disables I/O statistics collection for Veritas Volume Manager objects. The default value is 1, which enables the collection.

vol_subdisk_num

The maximum number of subdisks that can be attached to a single plex. The default value of this tunable is 4096.

voliomem_chunk_size

The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces CPU overhead by allowing VxVM to retain a larger amount of memory.

The value of this tunable parameter depends on the page size of the system. You cannot specify a value larger than the default value. If you change the value, VxVM aligns it to the page size when the system reboots.

The default value is 32 KB for a 512-byte page size.

voliomem_maxpool_sz

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system.

VxVM allocates two pools that can grow up to this size, one for RAID-5 and one for mirrored volumes. Additional pools are allocated if instant (Copy On Write) snapshots are present.

A write request to a RAID-5 volume that is greater than one fourth of the pool size is broken up and performed in chunks of one tenth of the pool size.

A write request to a mirrored volume that is greater than the pool size is broken up and performed in chunks of the pool size.

The default value is 134217728 bytes (128 MB).

The value of voliomem_maxpool_sz must be greater than the value of volraid_minpool_size.

The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.
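
The chunking rules above work out as follows for the default 128 MB pool; the sketch below simply restates them numerically.

    #!/bin/sh
    # Default pool size: 128 MB.
    pool_mb=128

    # RAID-5: writes larger than one fourth of the pool are broken
    # into chunks of one tenth of the pool.
    raid5_threshold_mb=$((pool_mb / 4))   # 32 MB
    raid5_chunk_mb=$((pool_mb / 10))      # 12 MB (integer arithmetic)

    echo "RAID-5: writes > ${raid5_threshold_mb} MB split into ${raid5_chunk_mb} MB chunks"

    # Mirrored volumes: writes larger than the pool are broken into
    # pool-sized chunks.
    echo "Mirror: writes > ${pool_mb} MB split into ${pool_mb} MB chunks"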

voliot_errbuf_dflt

The default size of the buffer maintained for error tracing events. This buffer is allocated at driver load time and is not adjustable for size while VxVM is running.

The default value is 16384 bytes (16 KB).

Increasing this buffer provides storage for more error events at the expense of system memory. Decreasing the size of the buffer can result in errors not being detected via the tracing device. Applications that depend on error tracing to take a responsive action rely on this buffer.

voliot_iobuf_default

The default size for a tracing buffer when no kernel buffer size is specified as part of the trace ioctl.

The default value is 8192 bytes (8 KB).

If trace data is often lost because this buffer is too small, increase this value.

voliot_iobuf_limit

The upper limit on the amount of kernel memory that can be used for storing tracing buffers. Tracing buffers are used by the VxVM kernel to store tracing event records. As trace buffers are requested, the memory for them is drawn from this pool.

Increasing this size allows additional tracing to be performed at the expense of system memory usage. Do not set this value larger than the system can readily accommodate.

The default value is 131072 bytes (128 KB).

voliot_iobuf_max

The maximum size that can be used for a single trace buffer. Requests for a buffer larger than this size are silently truncated to this size. A request for the maximum buffer size from the tracing interface results in a buffer of this size, subject to usage limits.

The default value is 65536 bytes (64 KB).

Increasing this value allows larger traces to be taken without loss on very heavily used volumes.

Do not increase this value above the value of the voliot_iobuf_limit tunable.

voliot_max_open

The maximum number of tracing channels that can be open simultaneously. Tracing channels are clone entry points into the tracing device driver. Each vxtrace process running on a system consumes a single trace channel.

The default number of channels is 32.

The allocation of each channel takes up approximately 20 bytes even when the channel is not in use.

volraid_minpool_size

This parameter cannot be tuned with the vxtune command. The initial amount of memory that is requested from the system by VxVM for RAID-5 operations. The maximum size of this memory pool is limited by the value of voliomem_maxpool_sz.

The default value is 8192 sectors (4 MB).

volraid_rsrtransmax

The maximum number of transient reconstruct operations that can be performed in parallel for RAID-5. A transient reconstruct operation is an unpredicted reconstruct that occurs on a non-degraded RAID-5 volume. Limiting the number of these operations that can occur simultaneously removes the possibility of flooding the system with reconstruct operations, and so reduces the risk of memory starvation.

The default value is 1.

Increasing this value improves initial system performance when a failure first occurs, before the failing object is detached, but can lead to memory starvation.

autostartvolumes

Turns automatic volume recovery on or off. When set to on, VxVM automatically recovers and starts disabled volumes when you import, join, move, or split a disk group. When set to off, VxVM does not automatically recover volumes. The default value is on.

fssmartmovethreshold

The threshold, as a percentage full, for an individual file system. After this threshold is reached, the SmartMove feature is not used for that file system. The default value is 100.

reclaim_on_delete_start_time

The time of day when reclamation begins on a thin LUN, after a volume using that LUN is deleted. Specified in 24-hour format (HH:MM). The default value is 22:10.

reclaim_on_delete_wait_period

The number of days to wait before starting to reclaim space on a thin LUN, after a volume using that LUN is deleted. Specified as an integer from −1 to 366, where −1 specifies immediately and 366 specifies never. The default value is 1.

usefssmartmove

The state of the SmartMove feature. Valid values are:

  • thinonly − use SmartMove for thin disks only.

  • all − use SmartMove for all disks.

  • none − turn off the SmartMove feature.

The default value is all.
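
Because these parameters are tunable with the vxtune command unless noted otherwise, the SmartMove and reclamation settings can be adjusted in the same way as the kernel tunables. The following sketch is illustrative only; the values are examples, and the exact syntax can vary by release, so verify against the vxtune(1M) manual page.

    #!/bin/sh
    # Illustrative settings; all values here are examples.

    vxtune usefssmartmove thinonly             # SmartMove for thin disks only
    vxtune fssmartmovethreshold 90             # stop using SmartMove above 90% full
    vxtune reclaim_on_delete_start_time 23:00  # begin reclamation at 23:00
    vxtune reclaim_on_delete_wait_period 7     # wait 7 days before reclaiming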