Mythbusters: How to configure InnoDB buffer pool on large MySQL servers

Yesterday I wrote about the dangers of using top on systems with 100+ GB of RAM, not to mention future systems with 1+ TB. A related topic is: how should I configure MySQL on such a large system? There is a classic rule of thumb that on a dedicated MySQL server one should allocate 80% of memory to the InnoDB buffer pool. On a 128GB system that is 102.4 GB, which means I would leave 25.6 GB of RAM "unused". So surely on these large systems this old piece of advice cannot hold anymore? If the database was previously running on a server with less total RAM than that, it seems wrong to leave so much memory just sitting unused. Let's tentatively label the old rule of thumb a "myth" and ask the Mythbusters to figure out a new MySQL configuration for us...
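For illustration, this is roughly what the old rule looks like in a my.cnf on such a 128 GB box (a sketch only; the exact value is just my arithmetic, not a tested recommendation):

```ini
# my.cnf -- the classic 80% rule applied to a dedicated 128 GB MySQL server
[mysqld]
innodb_buffer_pool_size = 102G   # ~80% of 128 GB of RAM
```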
So the rationale for challenging the old advice is the assumption that the buffer pool is the main variable consumer of memory in MySQL. There are of course other data structures that consume memory too, but we suspect that the amount of memory consumed on top of the buffer pool is a more or less constant overhead. For instance, each connection and executing query will consume some amount of memory, but I don't have more connections here than I had on the much smaller server, so they should consume a constant amount of memory. This is the output of a system with innodb_buffer_pool_size set to 100GB:

```
mysql> show engine innodb status\G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
110530  8:10:10 INNODB MONITOR OUTPUT
=====================================
...
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 109890764800; in additional pool allocated 0
Dictionary memory allocated 851714
Buffer pool size   6553596
Free buffers       0
Database pages     5195872
Old database pages 1917929
Modified db pages  1763
...
```

109890764800 bytes is 102.3 GB, so InnoDB consumes 2.3 GB more than the buffer pool. And if you remember from the top output yesterday, MySQL itself consumes 109-110 GB in total, so the MySQL layer consumes an additional 6.7 to 7.7 GB. There are a lot of queries with many JOINs (like an 11-way JOIN), so there are many temporary tables created per second, but if I did the math correctly, those would consume at most 1-2 GB. (I know that the temporary tables are always smaller than tmp_table_size.) So we learn here that MySQL and InnoDB do in fact consume several GB on top of what you configure for the InnoDB buffer pool. We still have some 15 GB unused, but as a rough rule of thumb the old advice is still good even on large systems: configuring the buffer pool at 80% of your total available memory is still relatively good advice. "Confirmed" would be the official Mythbusters judgement here. In fact, the fine manual says: "InnoDB reserves additional memory for buffers and control structures, so that the total allocated space is approximately 10% greater than the specified size." In my observation this is not true for InnoDB itself, but MySQL as a whole did indeed consume 9-10% on top of the buffer pool size. When I get a TB-level system it will be interesting to see if the overhead is still the same; it would be 90 GB then.
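To make the arithmetic explicit, here is the same bookkeeping as a quick SQL sketch (the constants are copied from the status output above, and 16384 bytes is the default InnoDB page size):

```sql
-- Turn the SHOW ENGINE INNODB STATUS numbers into GB
SELECT
  109890764800 / POW(1024, 3)                     AS total_allocated_gb, -- ~102.3 GB
  6553596 * 16384 / POW(1024, 3)                  AS buffer_pool_gb,     -- ~100 GB (pages * 16 KB)
  (109890764800 - 6553596 * 16384) / POW(1024, 3) AS innodb_overhead_gb; -- ~2.3 GB
```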
If you have a lot of memory
If you have a lot of memory available, you should also consider increasing buffers that may speed up query execution, e.g. join_buffer_size and sort_buffer_size.
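For example, a my.cnf sketch (the values are illustrative, not a recommendation; keep in mind that both buffers are allocated per session, so large values multiply across concurrent connections):

```ini
[mysqld]
# Illustrative values only -- these buffers are allocated per session,
# so they multiply across concurrent connections.
join_buffer_size = 8M   # 5.5 default: 128K
sort_buffer_size = 8M   # 5.5 default: 2M
```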
Consider tmp_table_size & max_heap_table_size
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sys…
These are often-forgotten tuning variables. Their defaults are 16MB, which is not a lot if you're working with tables with many rows. These values should be factored into the standard 80/20% rule of thumb, imho.
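A sketch of raising the two together (the in-memory size of an internal temporary table is capped by the smaller of the two values, so they are usually changed in tandem; 64M is just an example figure):

```ini
[mysqld]
# The effective in-memory limit is MIN(tmp_table_size, max_heap_table_size).
tmp_table_size      = 64M
max_heap_table_size = 64M
```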
Yes. The server in question
Yes. The server in question actually does some crazy joins; I'm not sure about sorting. But it seems all the temporary tables fit into the default 16MB, so in this case I didn't change it. If the result sets of those queries were larger, it would surely be a huge benefit to make sure they fit in memory.
Large explosion missing
I expect you're still working on a Mythbusters-style explosion?
Haha! Actually, there was a
Haha!
Actually, there was a large explosion. The way I came to know the facts in the above blog post should be obvious:
- "Surely the 80% rule can't hold for a server this large anymore"
- Deploy server with "optimized" settings
- 5 days later production server goes down
- Go back to obeying the 80% rule
Please consider that any
Please consider that any buffer pool memory allocated beyond 1.6 times the combined size of your InnoDB data and indexes will simply be wasted. Also, large buffer pools should be spread across multiple buffer pool instances (innodb_buffer_pool_instances).
So, if you have 256 GB of RAM and 30 GB of InnoDB data + indexes, then you should consider setting the following:
```ini
innodb_buffer_pool_size=48G
innodb_buffer_pool_instances=48
```
That would give you 48 individual 1GB buffer pool instances, allowing for more concurrent access.

In this case, assigning more than 48 GB of buffer pool will not improve performance, as 48 GB is sufficient to keep all InnoDB data in RAM all the time without evicting pages from the buffer pool.
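To apply this sizing rule you first need the combined size of your InnoDB data and indexes; one common way to estimate it is via information_schema (a sketch; the statistics there are approximate):

```sql
-- Approximate InnoDB data + index footprint in GB; multiply by 1.6
-- for the suggested buffer pool ceiling.
SELECT SUM(data_length + index_length) / POW(1024, 3) AS innodb_gb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';
```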