MySQL Memory Allocation: Allocating RAM for MySQL – The Short Answer, What is the key_buffer?, What is the buffer_pool?, Mutex bottleneck, HyperThreading and Multiple cores (CPUs), 32-bit OS and MySQL, 64-bit OS with 32-bit MySQL, 64-bit OS and MySQL, max_connections, thread_stack, table_cache (table_open_cache), Query Cache, thread_cache_size, swappiness, NUMA, huge pages, ENGINE=MEMORY, Web server, Tools

Allocating RAM for MySQL – The Short Answer

If using just MyISAM, set key_buffer_size to 20% of _available_ RAM. (Plus innodb_buffer_pool_size=0)

If using just InnoDB, set innodb_buffer_pool_size to 70% of _available_ RAM. (Plus key_buffer_size = 10M, small, but not zero.)

Rules of thumb for tuning MySQL:
⚈  Start with released copy of my.cnf / my.ini.
⚈  Set key_buffer_size and innodb_buffer_pool_size according to engine usage and RAM.
⚈  Slow queries can usually be ‘fixed’ via indexes, schema changes, or SELECT changes, not by tuning.
⚈  Don’t get carried away with the Query cache until you understand what it can and cannot do.
⚈  Don’t change anything else unless you run into trouble (e.g., max_connections).
⚈  Be sure the changes are under the [mysqld] section, not some other section.
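As a concrete sketch, an InnoDB-only server with 16GB of RAM might start with something like this (the numbers are illustrative, not prescriptive — apply the 70% rule to your own RAM):

```ini
[mysqld]
# InnoDB-only server with 16GB of RAM: ~70% goes to the buffer pool
innodb_buffer_pool_size = 11G
# MyISAM is unused here, so keep the key buffer small but non-zero
key_buffer_size = 10M
```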

Now for the gory details. (NDB Cluster is not discussed here.)

What is the key_buffer?

MyISAM does two different things for caching.
⚈  Index blocks (1KB each, BTree structured, from .MYI file) live in the “key buffer”.
⚈  Data block caching (from .MYD file) is left to the OS, so be sure to leave a bunch of free RAM for this.
Caveat: Some flavors of OS always claim to be using over 90%, even when there is really lots of free space.

Run SHOW GLOBAL STATUS LIKE 'Key%'; then calculate Key_read_requests / Key_reads. If that ratio is high (say, over 10), then the key_buffer is big enough.
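The two counters can be combined into one query; a sketch assuming MySQL 5.1–5.6, where status counters are exposed in information_schema.GLOBAL_STATUS (in 5.7 they moved to performance_schema):

```sql
-- Key-buffer effectiveness: index-read requests served per physical read.
-- Higher is better; over ~10 suggests key_buffer_size is big enough.
-- (Returns NULL if Key_reads is still zero.)
SELECT req.VARIABLE_VALUE / rd.VARIABLE_VALUE AS key_requests_per_read
FROM information_schema.GLOBAL_STATUS AS req,
     information_schema.GLOBAL_STATUS AS rd
WHERE req.VARIABLE_NAME = 'Key_read_requests'
  AND rd.VARIABLE_NAME  = 'Key_reads';
```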

What is the buffer_pool?

InnoDB does all its caching in the “buffer pool”, whose size is controlled by innodb_buffer_pool_size. It contains 16KB data and index blocks from the open tables, plus some maintenance overhead.

MySQL 5.5 (and 5.1 with the “Plugin”) lets you declare the block size to be 8KB or 4KB. MySQL 5.5 also allows multiple buffer pools; this can help because there is one mutex per pool, thereby relieving some of the Mutex bottleneck.
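In 5.5 the pool can be split with a single setting; a sketch assuming an 8GB pool (the numbers are illustrative):

```ini
[mysqld]
innodb_buffer_pool_size      = 8G
# One mutex per pool: 4 pools of ~2GB each relieves some contention
innodb_buffer_pool_instances = 4
```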

Mutex bottleneck

MySQL was designed in the days of single-CPU machines, and designed to be easily ported to many different architectures. Unfortunately, that led to some sloppiness in how actions are interlocked. A small number (too small) of “mutexes” is used to gain access to several critical structures. Of note:
⚈  MyISAM’s key_buffer
⚈  The Query Cache
⚈  InnoDB’s buffer_pool
With multi-core boxes, the mutex problem is causing performance problems. In general, past 4-8 cores, MySQL gets slower, not faster. MySQL 5.5 is making that somewhat better in InnoDB; the practical limit for cores is more like 32, and performance tends to plateau after that rather than declining.

One anecdote says the OpenSuse distribution is not optimal for multi-core CPUs: Forum #392502

HyperThreading and Multiple cores (CPUs)

Short answers:
⚈  Turn off HyperThreading
⚈  Turn off any cores beyond 8

HyperThreading is great for marketing, lousy for performance. It involves having two processing units sharing a single cache. If both units are doing the same thing, the cache will be reasonably useful. If the units are doing different things, they will be clobbering the cache entries that the other one wants.

Furthermore, MySQL is not great at using multiple cores. So, if you turn off HT, the remaining cores run a little faster.

32-bit OS and MySQL

First, the OS (and the hardware?) may conspire to not let you use all 4GB, if that is what you have. If you have more than 4GB of RAM, the excess beyond 4GB is _totally_ inaccessible and unusable.

Secondly, the OS probably has a limit on how much RAM it will allow any process to use.

Example: FreeBSD’s maxdsiz, which defaults to 512MB.

Example:

$ ulimit -a
...
max memory size (kbytes, -m) 524288

So, once you have determined how much RAM is available to mysqld, then apply the 20%/70%, but round down some.

If you get an error like [ERROR] /usr/libexec/mysqld: Out of memory (Needed xxx bytes), it probably means that MySQL exceeded what the OS is willing to give it. Decrease the cache settings.

64-bit OS with 32-bit MySQL

This combo relieves you of the OS limits described in “32-bit OS and MySQL”, and moves toward “64-bit OS and MySQL”, but the 32-bit mysqld process is still limited to about 4GB.

You should probably upgrade MySQL to 64-bit.

Anyway, if you have at least 4GB of RAM, then maybe these would be good:
⚈  key_buffer_size = 20% of _all_ of RAM, but not more than 3G
⚈  buffer_pool = 3G

64-bit OS and MySQL

MyISAM only: key_buffer_size (before 5.0.52 / 5.1.23) had a hard limit of 4G. See also 5.1 restrictions
Otherwise, use about 20% of RAM. Set (in my.cnf / my.ini) innodb_buffer_pool_size = 0.

InnoDB only: innodb_buffer_pool_size = 70% of RAM. If you have lots of RAM and are using 5.5, then consider having multiple pools. Set key_buffer_size = 20M (tiny, but non-zero).

If you have a mixture of engines, lower both numbers.
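For a mixed-engine server, a hedged starting point (illustrative numbers for 16GB of RAM; shift the balance toward whichever engine holds more data):

```ini
[mysqld]
# Mixture of MyISAM and InnoDB on 16GB: give each cache less
# than it would get on a single-engine server
key_buffer_size         = 1600M   # ~10% of RAM
innodb_buffer_pool_size = 8G      # ~50% of RAM
```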

max_connections, thread_stack

Each “thread” takes some amount of RAM. This used to be about 200KB; 100 threads would be 20MB, not a significant size. If you have max_connections = 1000, then you are talking about 200MB, maybe more. Having that many connections probably implies other issues that should be addressed.

Thread stack overrun rarely happens. If it does, set something like thread_stack = 256K.

table_cache (table_open_cache)

(The name changed in some version: table_cache became table_open_cache.)

The OS has some limit on the number of open files it will let a process have. Each table needs 1 to 3 open files.

In *nix, ulimit tells you what the file limit is. The maximum value is in the tens of thousands, but sometimes it is set to only 1024. This limits you to about 300 tables. More discussion on ulimit
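A quick check on the current limits (a sketch; these flags are for bash/sh):

```shell
# Soft limit on open file descriptors for the current process
ulimit -Sn
# Hard limit (the ceiling the soft limit can be raised to without root)
ulimit -Hn
```

If the soft limit is 1024, raising it (and open_files_limit in my.cnf) is the usual fix.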

On the other side, the table cache is (was) inefficiently implemented — lookups were done with a linear scan. Hence, setting table_cache in the thousands could actually slow down mysql. (Benchmarks have shown this.)

You can see how well your system is performing via SHOW GLOBAL STATUS; compute the table-opens per second via Opened_tables / Uptime. If this is more than, say, 5 per second, table_cache should be increased. If it is less than, say, 1 per second, you might get improvement by decreasing table_cache.
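As a single query, again assuming the counters live in information_schema.GLOBAL_STATUS (MySQL 5.1–5.6):

```sql
-- Table-opens per second since the server started.
-- Over ~5/sec: raise table_cache; under ~1/sec: it could be lowered.
SELECT opened.VARIABLE_VALUE / up.VARIABLE_VALUE AS table_opens_per_second
FROM information_schema.GLOBAL_STATUS AS opened,
     information_schema.GLOBAL_STATUS AS up
WHERE opened.VARIABLE_NAME = 'Opened_tables'
  AND up.VARIABLE_NAME     = 'Uptime';
```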

Query Cache

Short answer: query_cache_type = OFF

Long answer… There are many aspects of the “Query cache”; most of them are negative.
⚈  The QC is totally unrelated to the key_buffer and buffer_pool (this confuses novices)
⚈  When it is useful, the QC is blazingly fast. It would not be hard to create a benchmark that runs 1000x faster.
⚈  When _any_ write happens on a table, _all_ entries in the QC for _that_ table are removed. This can be costly. And it happens even on a readonly Slave.
⚈  There is a single mutex controlling the QC.
⚈  The QC, unless it is OFF (0), is consulted for _every_ SELECT.
⚈  Yes, the mutex is hit even if query_cache_type = DEMAND (2).
⚈  Yes, the mutex is hit even for SQL_NO_CACHE.
⚈  Purges are performed with a linear algorithm, so a large QC (even 200MB) can be noticeably slow.

To see how well your QC is performing, SHOW GLOBAL STATUS LIKE 'Qc%'; then compute the read hit rate: Qcache_hits / Qcache_inserts. If it is over, say, 5, the QC might be worth keeping.
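The same computation as a query (same information_schema.GLOBAL_STATUS assumption as above):

```sql
-- Query cache read hit rate: cache hits per insert.
-- Over ~5 suggests the QC is earning its keep on this workload.
SELECT hits.VARIABLE_VALUE / ins.VARIABLE_VALUE AS qc_hits_per_insert
FROM information_schema.GLOBAL_STATUS AS hits,
     information_schema.GLOBAL_STATUS AS ins
WHERE hits.VARIABLE_NAME = 'Qcache_hits'
  AND ins.VARIABLE_NAME  = 'Qcache_inserts';
```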

thread_cache_size

This is a minor tunable. Zero will slow down thread (connection) creation. A small (say, 10), non-zero number is good. The setting has essentially no impact on RAM usage.

It is the number of “extra” processes to hang onto. It does not restrict the number of threads.

swappiness

RHEL, in its infinite wisdom, decided to let you control how aggressively the OS will preemptively swap RAM. This is good in general, but lousy for MySQL.

MySQL would love for RAM allocations to be reasonably stable — the caches are (mostly) pre-allocated; the threads, etc, are (mostly) of limited scope. ANY swapping is likely to severely hurt performance of MySQL.

With a high value for swappiness, you lose some RAM because the OS is trying to keep a lot of space free for future allocations (that MySQL is not likely to need).

With swappiness = 0, the OS will probably crash rather than swap. I would rather have MySQL limping than dying.

Somewhere in between (say, 5?) might be a good value for a MySQL-only server.
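On Linux this is a sysctl; a sketch for a MySQL-only server (the value 5 is the guess from above, not a measured optimum):

```ini
# /etc/sysctl.conf  (apply immediately with `sysctl -p`)
# Low, but non-zero: discourage preemptive swapping
# without inviting out-of-memory failures
vm.swappiness = 5
```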

NUMA

OK, it’s time to complicate the architecture of how a CPU talks to RAM. NUMA (Non-Uniform Memory Access) enters the picture. Each CPU (or maybe socket with several cores) has a part of the RAM hanging off it. This leads to memory access being faster for local RAM, but slower (tens of cycles slower) for RAM hanging off other CPUs.

Then the OS enters the picture. In at least one case (RHEL?), two things seem to be done:
⚈  OS allocations are pinned to the ‘first’ CPU’s RAM.
⚈  Other allocations go by default to the first CPU until it is full.

Now for the problem.
⚈  The OS and MySQL have allocated all the ‘first’ RAM.
⚈  MySQL has allocated some of the second RAM.
⚈  The OS needs to allocate something.
Ouch — it is out of room in the one CPU where it is willing to allocate its stuff, so it swaps out some of MySQL. Bad.

Possible solution: Configure the BIOS to “interleave” the RAM allocations. This should prevent the premature swapping, at the cost of off-CPU RAM accesses half the time. Well, you have the costly accesses anyway, since you really want to use all of RAM.

Overall performance loss/gain: A few percent.

huge pages

This is another hardware performance trick.

For a CPU to access RAM, especially mapping a 64-bit address to somewhere in, say, 16GB of ‘real’ RAM, the TLB is used. (TLB = Translation Lookaside Buffer.) Think of the TLB as a hardware associative memory lookup table; given a 64-bit virtual address, what is the real address.

Because it is an associative memory of finite size, sometimes there will be “misses” that require reaching into real RAM to resolve the lookup. This is costly, so should be avoided.

Normally, RAM is ‘paged’ in 4KB pieces; the TLB actually maps the top 52 bits (64 minus 12) into a specific page. Then the bottom 12 bits of the virtual address are carried over intact.

For example, 128GB of RAM broken into 4KB pages means 32M page-table entries. This is a lot, and probably far exceeds the capacity of the TLB. So, enter the “Huge page” trick.

With the help of both the hardware and the OS, it is possible to have some of RAM in huge pages of, say, 4MB (instead of 4KB). This leads to far fewer TLB entries, but it means the unit of paging is 4MB for such parts of RAM. Hence, huge pages tend to be non-pageable.

Now RAM is broken into pageable and non-pageable parts; what parts can reasonably be non-pageable? In MySQL, the innodb_buffer_pool is a perfect candidate. So, by correctly configuring these, InnoDB can run a little faster:
⚈  Huge pages enabled
⚈  Tell the OS to allocate the right amount (namely to match the buffer_pool)
⚈  Tell MySQL to use huge pages
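A sketch of those three steps on Linux, assuming 2MB huge pages and a 4GB buffer pool (the page count and group id are illustrative, not prescriptive):

```ini
# /etc/sysctl.conf: reserve ~4.2GB as 2MB huge pages (2150 * 2MB),
# slightly more than the buffer pool to allow for overhead
vm.nr_hugepages = 2150
# Group allowed to use huge pages; put the mysql user's gid here
vm.hugetlb_shm_group = 27

# my.cnf, under [mysqld]:
#   large-pages
#   innodb_buffer_pool_size = 4G
```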

See the forum thread “innodb memory usage vs swap”; it has more details on what to look for and what to set.

Overall performance gain: A few percent.

ENGINE=MEMORY

This is a little-used alternative to MyISAM and InnoDB. The data is not persistent, so it has limited uses. The size of a MEMORY table is limited to max_heap_table_size, which defaults to 16MB. I mention it in case you have changed the value to something huge; this would interfere with other possible uses of RAM.
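For scale, the cap applies per table; a sketch (the table name is hypothetical):

```sql
-- Default cap is 16MB per MEMORY table; raising it globally reserves
-- nothing up front, but every MEMORY table may then grow this large.
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;

CREATE TABLE session_cache (
    id  INT PRIMARY KEY,
    val VARCHAR(255)
) ENGINE=MEMORY;
```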

Web server

A web server like Apache runs multiple threads. If each threads opens a connection to MySQL, you could run out of connections. Make sure MaxClients (or equivalent) is set to some civilized number (under 50).

Tools

⚈  MySQLTuner
⚈  TUNING-PRIMER

There are several tools that advise on memory. One misleading entry they come up with:

Maximum possible memory usage: 31.3G (266% of installed RAM)

Don’t let it scare you — the formulas used are excessively conservative. They assume all of max_connections are in use and active, and doing something memory-intensive.

Another entry:

Total fragmented tables: 23

This implies that OPTIMIZE TABLE _might_ help. I suggest it for tables with either a high percentage of “free space” (see SHOW TABLE STATUS) or where you know you do a lot of DELETEs and/or UPDATEs.
