Sunday, 26 April 2015

Just a little VM tuning: Memory and CPU saving with KVM + KSM

This topic will be somewhat unusual coming from a Java junkie like me, but hopefully interesting for those who care about cloud computing and virtualization. To make it easier to understand for everyone, I will start from far, far away; please just skip ahead if you feel like this is nothing new for you, there may be some interesting pieces of information later on.

The basics

This may not be anything new for you; feel free to skip ahead to the hypothesis if you know Linux and virtual memory handling.

Virtual Memory


Modern computers break memory up into pages. When your program reads or writes a memory address, that address belongs to a page, and the page table translates it to a physical address.

This is the so-called virtual memory, and it allows swapping: the OS can swap some pages out from memory to a larger and cheaper storage (typically a disk). When a page that is not in memory is referenced, the hardware generates an interrupt, the OS takes over, loads the page, and gives control back to the program. But that is not the only thing it makes possible...
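On Linux you can peek at this machinery from the shell; nothing here is specific to this post's setup, just standard tools:

# size of one memory page, 4096 bytes on typical x86 machines
getconf PAGESIZE

# system-wide counters: pgfault/pgmajfault count page faults (major ones had
# to hit the disk), pswpin/pswpout count pages swapped in and out
grep -E '^(pgfault|pgmajfault|pswpin|pswpout) ' /proc/vmstat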

Linux has a small built-in module called Kernel Samepage Merging, or KSM for short. This module was actually written by the same guy who wrote KVM, and very likely with KVM in mind, but any other program that marks its memory as mergeable (with madvise) can benefit.
I'd recommend reading the KSM page in the kernel documentation, but this is what it does in a nutshell:
  1. Periodically checks memory pages
  2. If two identical pages are found, they are merged and marked COW (copy on write). This is because KSM has no idea what the pages are used for, it just merges whatever it finds; COW guarantees that a later write to either page transparently un-merges them again.
So if you have two VMs, both running the same OS, then most pages of the kernel and programs can be shared between the two VMs and they will never know. This can save quite a lot of memory and allows big memory overcommit in virtualized environments, if you accept the price:
  1. KSM takes CPU time. If you have a lot of memory, then it will take a lot of CPU time.
  2. Basically it just does not know when to stop, it just keeps running, so additional software like ksmtuned is used to manage it (see the sketch below for the knobs it turns).
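For the curious, KSM is controlled through a handful of files under /sys/kernel/mm/ksm/ (these are what ksmtuned adjusts for you). A quick sketch; the numbers are made-up starting points, not recommendations:

# start the ksmd kernel thread (0 = stop, 1 = run, 2 = stop and unmerge all)
echo 1 | sudo tee /sys/kernel/mm/ksm/run

# how hard ksmd works: pages scanned per wake-up, and sleep between wake-ups
echo 100 | sudo tee /sys/kernel/mm/ksm/pages_to_scan
echo 200 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs

# how well it is doing: pages_sharing/pages_shared is the sharing ratio
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing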

Cache


While CPUs became faster and faster up until the second half of the 2000s, memory speed did not really keep up, so CPUs started to use ever-growing caches. The cache is in the CPU and it is very quick, but its size is still limited: even Xeon CPUs have around 10 MB of cache, and typical desktop CPUs have 1-2 MB.
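If you are curious what your own machine has, the kernel will tell you:

# cache sizes as seen by the kernel, one line per cache level
lscpu | grep -i cache

# or straight from sysfs
grep . /sys/devices/system/cpu/cpu0/cache/index*/size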

The hypothesis

Since the cache is small, switching to another VM in a virtualized environment should cause a small performance loss: the cached pages of the kernel in VM1 need to be replaced with the actually identical pages of VM2.
The second part of the idea is that KSM could help here by eliminating that performance loss. When pages are shared between the operating systems of the VMs, a cache miss is less likely after another VM takes over the CPU.
Therefore, once pages are merged with KSM and ksmd is turned off, switching between different VMs should be less expensive and response times should improve.

KSM could be not only a memory-saver, but also a CPU-saver.
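The run switch makes this convenient to test, because stopping ksmd with 0 leaves the already-merged pages merged (while 2 would unmerge them too). A minimal sketch:

# let ksmd work until merging settles down
echo 1 | sudo tee /sys/kernel/mm/ksm/run
watch -n 5 cat /sys/kernel/mm/ksm/pages_sharing    # wait until this stops growing

# stop the scanner but keep the merged pages (2 would unmerge everything)
echo 0 | sudo tee /sys/kernel/mm/ksm/run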

Test

To test the idea, I prepared 12 web server VMs and one load balancer, all running the Fedora 20 operating system. The web servers run Apache httpd, the load balancer runs HAProxy, with more or less default settings. Each VM has 256 MB of RAM and a single CPU.
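For reference, the HAProxy side needs nothing fancy; a minimal config looks something like this (the server names and addresses here are made up for illustration, the only real address in this post is the balancer's 192.168.1.104):

frontend http-in
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    # one line per web server VM, addresses are hypothetical
    server web01 192.168.1.111:80 check
    server web02 192.168.1.112:80 check
    # ... and so on up to web12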

The test host is an Intel NUC D34010WYK with a Core i3 CPU (important factors for the test: hyperthreading is enabled, cache size is 3 MB) and 2x 8 GB DDR3-1600 RAM.

Nice little box, they could have called it Intel NUKE :)

To generate load, I use the simple Apache benchmark (ab) command line utility from my laptop. The laptop itself is not really relevant; it is a wreck, which is perfect motivation to speed-optimize software.
Load command:
ab -n 100000 -c 8 http://192.168.1.104/icons/poweredby.png


(This is the small "Powered by Fedora" banner)

Results

[Chart: ab benchmark results vs. number of running VMs, with the no-ksm curve in blue and the ksm curve alongside]
Comments, conclusions

The results with VM counts > 4 seem to support the theory, but I was surprised to see the performance loss when the number of VMs was less than 4. I do not have an explanation for that yet.
I suspect that the fall of the no-ksm (blue) curve shows the increase of cache misses; it flattens out after 10 VMs, basically because by then cache misses have become so frequent that they cannot get much more frequent.

Of course, each CPU and memory configuration will give you different values, and different OSes and programs will too; the intersection point may be somewhere else, but the shape of the curves should be similar.

TODO

I think it would be interesting to repeat the test with hyperthreading turned off and see how the curve changes.
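Hyperthreading can also be switched off without a trip to the BIOS by offlining the sibling threads; assuming (as a guess for this i3) that logical CPUs 2 and 3 are the second threads of the two cores:

# which logical CPUs share a physical core, e.g. "0,2" pairs cpu0 with cpu2
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

# offline the sibling threads (the cpu numbers are just an assumption)
echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online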
