About tuning AIX Virtual Memory Manager

If you are using either Cached Quick I/O or buffered I/O (that is, plain VxFS files without Quick I/O or mount options specified), it is recommended that you monitor any paging activity to the swap device on your database servers. To monitor swap device paging, use the vmstat -I command. Swap device paging information appears in the vmstat -I output under the columns labeled pi and po (pages paged in from and paged out to the swap device, respectively). Any nonzero values in these columns indicate swap device paging activity.

For example:

# /usr/bin/vmstat -I
  kthr        memory                    page                    faults           cpu
-------- ----------------- ------------------------------- ----------------- ------------
 r  b  p      avm      fre    fi  fo   pi   po    fr    sr    in    sy    cs  us sy id wa
 5  1  0   443602  1566524   661  20    0    0     7    28  4760 37401  7580  11  7 43 38
 1  1  0   505780  1503791    18   6    0    0     0     0  1465  5176   848   1  1 97  1
 1  1  0   592093  1373498  1464   1    0    0     0     0  4261 10703  7154   5  5 27 62
 3  0  0   682693  1165463  3912   2    0    0     0     0  7984 19117 15672  16 13  1 70
 4  0  0   775730   937562  4650   0    0    0     0     0 10082 24634 20048  22 15  0 63
 6  0  0   864097   715214  4618   1    0    0     0     0  9762 26195 19666  23 16  1 61
 5  0  0   951657   489668  4756   0    0    0     0     0  9926 27601 20116  24 15  1 60
 4  1  0  1037864   266164  4733   5    0    0     0     0  9849 28748 20064  25 15  1 59
 4  0  0  1122539    47155  4476   0    0    0     0     0  9473 29191 19490  26 16  1 57
 5  4  0  1200050      247  4179   4   70  554  5300 27420 10793 31564 22500  30 18  1 52
 6 10  0  1252543       98  2745   0  138  694  4625 12406 16190 30373 31312  35 14  2 49
 7 14  0  1292402      220  2086   0  153  530  3559 17661 21343 32946 40525  43 12  1 44
 7 18  0  1319988      183  1510   2  130  564  2587 14648 21011 28808 39800  38  9  3 49

If there is evidence of swap device paging, proper AIX Virtual Memory Manager (VMM) tuning is required to improve database performance. VMM tuning limits the number of memory pages allocated to the file system cache. This prevents the file system cache from stealing memory pages from applications (which causes swap device page-outs) when the VMM is running low on free memory pages.

The command to tune the AIX VMM subsystem is:

# /usr/samples/kernel/vmtune

Changes made by vmtune last only until the next system reboot. The VMM kernel parameters to tune are maxperm, maxclient, and minperm. The maxperm and maxclient parameters specify the maximum amount of memory (as a percentage of total memory) that can be used for file system caching. This maximum should not exceed the amount of memory left unused by the AIX kernel and all active applications (including the Oracle SGA). Therefore, it can be calculated as:

100*(T-A)/T

where T is the total number of memory pages in the system and A is the maximum number of memory pages used by all active applications.
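
For example, using hypothetical values, consider a system with T = 2097152 memory pages (8 GB of memory with a 4 KB page size) where active applications, including the Oracle SGA, use at most A = 1572864 pages. Then:

100*(2097152-1572864)/2097152 = 25

so no more than 25 percent of memory should be used for file system caching, and maxperm and maxclient should be set to 25 or less.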

The minperm parameter should be set to a value that is less than or equal to maxperm, but greater than or equal to 5.
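
For example, the following command sets minperm to 5 percent and caps maxperm and maxclient at 25 percent, matching the calculation above. The option letters shown (-p for minperm, -P for maxperm, -t for maxclient) are those of the AIX 5.x vmtune sample program; verify them against the usage message printed by the vmtune binary on your AIX level before using them:

# /usr/samples/kernel/vmtune -p 5 -P 25 -t 25

Because the settings do not survive a reboot, you may want to arrange for the command to be rerun at boot time (for example, from an entry in /etc/inittab).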

For more information on AIX VMM tuning, see the vmtune(1) manual page and the performance management documentation provided with AIX.

The following is a tunable VxFS I/O parameter:

VMM Buffer Count

(-b <value> option)

Sets the virtual memory manager (VMM) buffer count. There are two values for the VMM buffer count: a default value based on the amount of memory, and a current value. You can display these two values using vxtunefs -b. Initially, the default value and the current value are the same. The -b <value> option specifies an increase, from 0 to 100 percent, in the VMM buffer count over its default. The specified value is saved in the file /etc/vx/vxfssystem to make it persistent across VxFS module loads and system reboots.
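
For example, to display the default and current VMM buffer counts, and then to raise the current count 50 percent above the default (50 is an arbitrary value chosen for illustration):

# vxtunefs -b
# vxtunefs -b 50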

In most instances, the default value gives good performance, but there are counters in the kernel that you can monitor to determine whether threads are being delayed waiting for VMM buffers. If there appears to be a performance issue related to the VMM, the buffer count can be increased. If system response time improves after the increase, it is a good indication that the VMM buffer count was a bottleneck.

The following fields displayed by the kdb vmker command can be useful in determining bottlenecks.
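
For example, to display these counters, start kdb as root and enter the vmker subcommand at the kdb prompt (the (0)> prompt shown is typical; the number varies with the CPU):

# kdb
(0)> vmker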

THRPGIO buf wait (_waitcnt) value

A nonzero value in this field indicates that at some point no VMM buffers were available for a pagein or pageout, and the thread was blocked waiting for a VMM buffer to become available. The count is the total number of such waits since cold load. This field, together with the pages "paged in" and pages "paged out" values displayed by the kdb vmstat command, can be used to determine whether there is an adequate number of VMM buffers. The ratio:

waitcnt / (pageins + pageouts)

is an indicator of waits for VMM buffers, but it cannot be exact because pageins + pageouts includes page I/Os to other file systems and to paging space. It is not possible to give a typical value for this ratio because it depends on the amount of memory and on the page I/Os to file systems other than VxFS; however, a ratio greater than 0.1 may indicate a VMM buffer count bottleneck (see the worked example after the field descriptions below). Other relevant fields displayed by kdb vmker are:

  • THRPGIO partial cnt (_partialcnt) value

    This field indicates that page I/O had to be done in two or more steps because fewer VMM buffers were available than the number of pages requiring I/O.

  • THRPGIO full cnt (_fullcnt) value

    This field indicates that VMM buffers were found for all the pages requiring I/O, so the page I/O completed in a single step.
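
As a worked example with hypothetical counter values: if kdb vmker reports a _waitcnt of 12000, and kdb vmstat reports 80000 pages paged in and 30000 pages paged out, the ratio is:

12000 / (80000 + 30000) = 0.109

Because 0.109 exceeds 0.1, increasing the VMM buffer count with vxtunefs -b would be worth testing on such a system.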