Tunable parameters

Except where noted, the values of the tunable parameters are changed by using the vxvoltune command. The exceptions are those DMP tunables that are adjustable by using the vxdmptune and vxdmpadm commands.
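
For example, a DMP tunable can be displayed and changed by using the vxdmpadm command, as shown in the following sketch. Here tunable and value are placeholders, and passing a single tunable name to the gettune keyword, as well as the tunable=value form for settune, are assumed:

# vxdmpadm gettune tunable
# vxdmpadm settune tunable=value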

The "Tunable parameters" table lists the tunable parameters.

Tunable parameters

Parameter

Description

dmp_cache_open 

If set to on, the first open of a device that is performed by an array support library (ASL) is cached. This enhances the performance of device discovery by minimizing the overhead caused by subsequent opens by ASLs. If set to off, caching is not performed. 

The default value is on.

The value of this tunable can only be changed by using the vxdmpadm settune command. 

dmp_enable_restore_daemon 

Set to 1 to enable the DMP path restoration thread; set to 0 to disable. 

The default value is 1. 

Use the vxdmptune or vxdmpadm utility to display or adjust the value of this tunable. 

See "Configuring DMP path restoration policies" on page 170. 

dmp_daemon_count 

The number of kernel threads that are available for servicing path error handling, path restoration and other DMP administrative tasks. 

The default number of threads is 10. 

dmp_delayq_interval 

How long DMP should wait before retrying I/O after an array fails over to a standby path. Some disk arrays are not capable of accepting I/O requests immediately after failover.

The default value is 15 seconds. 

dmp_failed_io_threshold 

The time limit that DMP waits for a failed I/O request to return before the device is marked as INSANE, I/O is avoided on the path, and any remaining failed I/O requests are returned to the application layer without performing any error analysis.

The default value is 57600 seconds (16 hours). 

The value of this tunable may be changed by using the vxdmpadm settune command. 

dmp_fast_recovery 

Whether DMP should attempt to obtain SCSI error information directly from the HBA interface. 

Setting the value to on can potentially provide faster error recovery, provided that the HBA interface supports the error enquiry feature. 

If set to off, the HBA interface is not used. 

The default setting is on.

dmp_health_time 

DMP detects intermittently failing paths, and prevents I/O requests from being sent on them. The value of dmp_health_time represents the time in seconds for which a path must stay healthy. If a path's state changes back from enabled to disabled within this time period, DMP marks the path as intermittently failing, and does not re-enable the path for I/O until dmp_path_age seconds elapse.  

The default value is 60 seconds. 

A value of 0 prevents DMP from detecting intermittently failing paths. The value of this tunable may be changed by using the vxdmpadm settune command. 

dmp_log_level 

The level of detail that is displayed for DMP console messages. The following level values are defined:

1 — Display all DMP log messages that existed in releases before 5.0. 

2 — Display level 1 messages plus messages that relate to I/O throttling, suspected paths, repeated path failures and DMP node migration. 

3 — Display level 1 and 2 messages plus messages that relate to I/O errors, I/O error analysis and path media errors. 

4 — Display level 1, 2 and 3 messages plus messages that relate to setting or changing attributes on a path. 

The default value is 1. 
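
For example, to display the additional throttling and suspect-path messages described for level 2, the log level might be raised as shown here (the tunable=value form for settune is assumed):

# vxdmpadm settune dmp_log_level=2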

dmp_path_age 

The time for which an intermittently failing path needs to be monitored as healthy before DMP once again attempts to schedule I/O requests on it. 

The default value is 300 seconds. 

A value of 0 prevents DMP from detecting intermittently failing paths. The value of this tunable may be changed by using the vxdmpadm settune command. 

dmp_pathswitch_blks_shift 

The default number of contiguous I/O blocks (expressed as the integer exponent of a power of 2; for example 11 represents 2048 blocks) that are sent along a DMP path to an Active/Active array before switching to the next available path. 

The default value is set to 11 so that 2048 blocks (1MB) of contiguous I/O are sent over a DMP path before switching. For intelligent disk arrays with internal data caches, better throughput may be obtained by increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 15 and 17 for an I/O activity pattern that consists mostly of sequential reads or writes. 

This parameter only affects the behavior of the balanced I/O policy. A value of 0 disables multipathing for the policy unless the vxdmpadm command is used to specify a different partition size for an array. 

See "Specifying the I/O policy" on page 159. 

The value of this tunable may be changed by using the vxdmpadm settune command. 
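
Because the value is the exponent of a power of 2, a setting of 15 corresponds to 2^15 = 32768 blocks (16MB) of contiguous I/O per path. Following the HDS 9960 example above, the tunable might be raised as shown here (the tunable=value form for settune is assumed):

# vxdmpadm settune dmp_pathswitch_blks_shift=15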

dmp_probe_idle_lun 

If DMP statistics gathering is enabled, set to 1 to have the DMP path restoration thread probe idle LUNs, or to 0 to turn off this feature. (Idle LUNs are VM disks on which no I/O requests are scheduled.) The value of this tunable is only interpreted when DMP statistics gathering is enabled. Turning off statistics gathering also disables idle LUN probing. 

The default value is 1. 

The value of this tunable may be changed by using the vxdmpadm settune command. 

dmp_queue_depth 

The maximum number of queued I/O requests on a path during I/O throttling. 

The default value is 20. 

The value of this tunable may be changed by using the vxdmpadm settune command. A value can also be set for paths to individual arrays by using the vxdmpadm command. 

See "Configuring the I/O throttling mechanism" on page 168. 

dmp_restore_daemon_cycles 

If the DMP restore policy is CHECK_PERIODIC, the number of cycles after which the CHECK_ALL policy is called. Use the vxdmptune or vxdmpadm utility to display or adjust the value of this tunable. 

dmp_restore_daemon_interval 

The time in seconds between two invocations of the DMP path restoration thread. Use the vxdmptune or vxdmpadm utility to display or adjust the value of this tunable. 

dmp_restore_daemon_policy 

The DMP restore policy, which can be set to 0 (CHECK_ALL), 1 (CHECK_DISABLED), 2 (CHECK_PERIODIC), or 3 (CHECK_ALTERNATE). Use the vxdmptune or vxdmpadm utility to display or adjust the value of this tunable. 
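
As a sketch of how these three tunables relate to the path restoration thread, the thread can be stopped and restarted with different attributes by using the vxdmpadm command. The policy and interval attribute names shown here are assumptions; see "Configuring DMP path restoration policies" for the exact syntax:

# vxdmpadm stop restore
# vxdmpadm start restore policy=check_disabled interval=300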

dmp_retry_count 

If an inquiry succeeds on a path, but there is an I/O error, the number of retries to attempt on the path. 

The default value is 5. 

The value of this tunable may be changed by using the vxdmpadm settune command. A value can also be set for paths to individual arrays by using the vxdmpadm command. 

See "Configuring the response to I/O failures" on page 166. 

dmp_retry_timeout 

The maximum time period for which DMP retries the SCSI-3 Persistent Reserve operation with A/P arrays. 

The default value is 120 seconds. 

This parameter has no direct effect on I/O processing by DMP. 

Disabling a switch port can trigger a fabric reconfiguration, which can take time to stabilize. 

During this period, attempting to register PGR keys through the secondary path to an array may fail with an error condition, such as unit attention or device reset, or the return of vendor-specific sense data. 

The retry period prevents a fabric reconfiguration (usually a transient condition) from being seen as an error by DMP.

Do not set the value of the retry period too high, because this can delay the failover process and result in I/O sluggishness or suppression of I/O activity during the retry period.

The value of this tunable may be changed by using the vxdmpadm settune command.  

dmp_scsi_timeout 

Determines the timeout value to be set for any SCSI command that is sent via DMP. If the HBA does not receive a response for a SCSI command that it has sent to the device within the timeout period, the SCSI command is returned with a failure error code. 

The default value is 30 seconds. 

The value of this tunable may be changed by using the vxdmpadm settune command.  

dmp_stat_interval 

The time interval between gathering DMP statistics. 

The default and minimum value is 1 second. 

vol_checkpt_default 

The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. If a system failure occurs during such an operation, a full recovery is not required; the operation can continue from the last checkpoint that was reached.

The default value is 20480 sectors (10MB).  

Increasing this size reduces the overhead of checkpointing on recovery operations, at the expense of additional recovery work being required after a system failure during a recovery operation.

vol_default_iodelay 

The count in clock ticks for which utilities pause if they have been directed to reduce the frequency of issuing I/O requests, but have not been given a specific delay time. This tunable is used by utilities performing operations such as resynchronizing mirrors or rebuilding RAID-5 columns. 

The default for this tunable is 50 ticks. 

Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed. 

vol_fmr_logsz 

The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed. For example, if the volume size is 1 gigabyte and the system block size is 512 bytes, a vol_fmr_logsz value of 4 yields a map that contains 32,768 bits, each bit representing one region of 64 blocks.
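
To see how the 64-block region size follows from these values, the example can be worked through as follows:

map_size_in_bits = 4KB * 8192 bits/KB = 32,768 bits
blocks_in_volume = 1GB / 512 bytes = 2,097,152 blocks
blocks_per_bit = 2,097,152 / 32,768 = 64 blocks per region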

The larger the bitmap size, the fewer the number of blocks that are mapped to each bit. This can reduce the amount of reading and writing required on resynchronization, at the expense of requiring more non-pageable kernel memory for the bitmap. Additionally, on clustered systems, a larger bitmap size increases the latency in I/O performance, and it also increases the load on the private network between the cluster members. This is because every other member of the cluster must be informed each time a bit in the map is marked. 

Since the region size must be the same on all nodes in a cluster for a shared volume, the value of the vol_fmr_logsz tunable on the master node overrides the tunable values on the slave nodes, if these values are different. Because the value of a shared volume can change, the value of vol_fmr_logsz is retained for the life of the volume. 

In configurations which have thousands of mirrors with attached snapshot plexes, the total memory overhead can represent a significantly higher overhead in memory consumption than is usual for VxVM. 

The default value of this tunable is 4KB. The minimum and maximum permitted values are 1KB and 8KB. 


Note: The value of this tunable does not have any effect on Persistent FastResync.


vol_max_volumes 

The maximum number of volumes that can be created on the system. The minimum and maximum permitted values are 1 and the maximum number of minor numbers representable on the system. 

The default value is 65534. 

vol_maxio 

The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit. 

The default value is 2048 sectors (1MB). 

The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.

If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.

vol_maxioctl 

The maximum size of data that can be passed into VxVM via an ioctl call. Increasing this limit allows larger operations to be performed. Decreasing the limit is not generally recommended, because some utilities depend upon performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests. 

The default value is 32768 bytes (32KB). 

vol_maxparallelio 

The number of I/O operations that the vxconfigd daemon is permitted to request from the kernel in a single ioctl call. 

The default value is 256. This value should not be changed. 

volcvm_smartsync 

If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. 

See "SmartSync recovery accelerator" on page 61. 

voldrl_max_drtregs 

The maximum number of dirty regions that can exist on the system for non-sequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worst-case recovery time for the system following a failure.

The default value is 2048. 

voldrl_max_seq_dirty 

The maximum number of dirty regions allowed for sequential DRL. This is useful for volumes that are usually written to sequentially, such as database logs. Limiting the number of dirty regions allows for faster recovery if a crash occurs. 

The default value is 3. 

voldrl_min_regionsz 

The minimum number of sectors for a dirty region logging (DRL) volume region. With DRL, VxVM logically divides a volume into a set of consecutive regions. Larger region sizes tend to cause the cache hit-ratio for regions to improve. This improves the write performance, but it also prolongs the recovery time.  

The default value is 1024 sectors. 

If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.

voliomem_chunk_size 

The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces CPU overhead due to memory allocation by allowing VxVM to retain a larger amount of memory.

The default value is 128KB on IA64 systems, and otherwise 32KB. 

voliomem_maxpool_sz 

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system.  

VxVM allocates two pools that can grow up to voliomem_maxpool_sz, one for RAID-5 and one for mirrored volumes.  

A write request to a RAID-5 volume that is greater than voliomem_maxpool_sz/10 is broken up and performed in chunks of size voliomem_maxpool_sz/10.

A write request to a mirrored volume that is greater than voliomem_maxpool_sz/2 is broken up and performed in chunks of size voliomem_maxpool_sz/2.

The default value is 4MB. 

The value of voliomem_maxpool_sz must be greater than volraid_minpool_size, and be at least 10 times greater than the value of vol_maxio.

voliot_errbuf_dflt 

The default size of the buffer maintained for error tracing events. This buffer is allocated at driver load time and is not adjustable for size while VxVM is running. 

The default size for this buffer is 16384 bytes (16KB).  

Increasing this buffer can provide storage for more error events at the expense of system memory. Decreasing the size of the buffer can result in an error not being detected via the tracing device. Applications that depend on error tracing to perform some responsive action are dependent on this buffer. 

voliot_iobuf_default 

The default size for the creation of a tracing buffer in the absence of any other specification of desired kernel buffer size as part of the trace ioctl.

The default value is 8192 bytes (8KB).  

If trace data is often being lost due to this buffer size being too small, then this value can be tuned to a more generous amount. 

voliot_iobuf_limit 

The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool. 

Increasing this size can allow additional tracing to be performed at the expense of system memory usage. Setting this value to a size greater than can readily be accommodated on the system is inadvisable. 

The default value is 131072 bytes (128KB). 

voliot_iobuf_max 

The maximum buffer size that can be used for a single trace buffer. Requests of a buffer larger than this size are silently truncated to this size. A request for a maximal buffer size from the tracing interface results (subject to limits of usage) in a buffer of this size. 

The default value is 65536 bytes (64KB).  

Increasing this buffer can provide for larger traces to be taken without loss for very heavily used volumes. Care should be taken not to increase this value above the value of the voliot_iobuf_limit tunable.

voliot_max_open 

The maximum number of tracing channels that can be open simultaneously. Tracing channels are clone entry points into the tracing device driver. Each vxtrace process running on a system consumes a single trace channel. 

The default number of channels is 32. The allocation of each channel takes up approximately 20 bytes even when not in use. 

volpagemod_max_memsz 

The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata. 

The default value is 1536 kilobytes. The valid range for this tunable is from 0 to 50% of physical memory.

You can use the vxtune command to increase the amount of memory that is available to the paging module as shown here: 

# vxtune volpagemod_max_memsz size 

The value that should be used for size is determined by the region size and the number of volumes for which space-optimized instant snapshots are taken: 

size_in_KB = 6 * (total_volume_size_in_GB) * (64/region_size_in_KB)

For example, a single 1TB volume requires around 6MB of paging memory if the region size is 64KB. If there were 10 such volumes, 60MB of paging memory would be required. 
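
Continuing this example, the value for ten such volumes could be set as shown here (10 x 6,144KB = 61,440KB, which is the 60MB mentioned above):

# vxtune volpagemod_max_memsz 61440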

The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications. 

Setting the value of volpagemod_max_memsz below 512 fails if cache objects or volumes that have been prepared for instant snapshot operations are present on the system. 

If you do not use the FastResync or DRL features that are implemented using a version 20 DCO volume, the value of volpagemod_max_memsz can be set to 0. However, if you subsequently decide to enable these features, you can use the vxtune command, as shown above, to change the value to a more appropriate one.

volraid_minpool_size 

The initial amount of memory that is requested from the system by VxVM for RAID-5 operations. The maximum size of this memory pool is limited by the value of voliomem_maxpool_sz.

The default value is 65536 sectors (32MB).