Comparing virtualization software performance: QEMU vs UML vs KVM

I have run some quick tests comparing the performance of several popular virtualization programs.

The test is far from perfect, but I think it gives at least a basic idea of how these virtualization techniques perform.

The test
The test was very simple: unpacking the source code of Linux 2.6.21.1 (from a .tar.bz2) and compiling it. I know it’s not an elaborate test, but at least it involves both computation and I/O, so the results should be fairly realistic.
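
To make this concrete, here’s a minimal sketch of the kind of commands each run involved. The post doesn’t record the exact command lines, and the configuration step is my assumption (any fixed .config would do, as long as every run uses the same one):

    # unpack the bzip2-compressed source tarball
    time tar xjf linux-2.6.21.1.tar.bz2
    cd linux-2.6.21.1

    # assumed configuration step; the post doesn’t say which config was used
    make defconfig

    # build the kernel
    time make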

All the tools used were the ones that come with Debian 4.0 etch x86 (not x86_64): gcc 4.1, bzip2 1.0.3, etc. The filesystem used was ext3.

The host and the guests used basically the same software.

The host
I first compiled the kernel on the host, which has the following configuration:

  • Intel Core 2 Duo E6300
  • 1 GB RAM
  • Debian GNU/Linux 4.0 (etch)
  • Linux 2.6.21.1

As the machine has a dual-core processor, I compiled the same kernel twice: first with a plain ‘make’ and then with two simultaneous jobs (‘make -j2’).

The guests
Besides compiling the kernel on the host, I tested the following virtualization software (example invocations are sketched after the list):

  • User-mode Linux 2.6.17.13 in skas0 mode
  • User-mode Linux 2.6.17.13 in skas3 (v9-pre9) mode
  • QEMU 0.8.2
  • QEMU 0.8.2 using KQEMU 1.3.0-pre11
  • KVM 28
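
The exact command lines used to start the guests aren’t recorded in the post; as a rough sketch, period-typical invocations looked something like this (the disk image name, memory size, and the name of the KVM wrapper binary are assumptions):

    # QEMU 0.8.2 in pure emulation mode; if the kqemu module is loaded
    # and /dev/kqemu is accessible, QEMU uses the accelerator automatically
    qemu -hda debian.img -m 512

    # KVM: early releases shipped QEMU with KVM support, commonly
    # installed as a wrapper called 'kvm'
    kvm -hda debian.img -m 512

    # User-mode Linux: the guest kernel is an ordinary host binary named
    # 'linux', pointed at a root filesystem image
    ./linux ubd0=debian.img mem=512M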

The results
Here are the results of the test ordered by total compilation time (best results are shown first).

Mode           Time
-------------  ----------
Native (-j2)   4m 49s
Native         8m 38s
KVM            11m 12s
UML skas3      12m 47s
UML skas0      14m 30s
QEMU + KQEMU   17m 55s
QEMU           2h 13m 04s

Here are a couple of charts showing the results. Both were constructed from the same data, but the second omits QEMU, as its performance is not comparable to the others.

[Chart 1: compilation times for all modes. Chart 2: same data with QEMU omitted.]

Conclusions
As expected, native compilation is the fastest, and making use of both CPU cores really pays off.

KVM performed a bit worse than I expected, but it’s still faster than any of the others and reasonably close to native compilation.

Regarding User-mode Linux, I was a bit surprised that skas0 mode did not perform much worse than skas3 mode. I also expected KQEMU to perform better: unlike KVM, it is even slower than plain User-mode Linux running in skas0 mode (which requires no kernel module or patch). Of course they’re very different programs, but you get the idea.

And last but not least, I was really amazed at how slow QEMU is without a helper module. Of course, in this mode (and unlike the other solutions tested) it emulates the CPU, so it is bound to be an order of magnitude slower than the rest; still, in everyday interactive usage it doesn’t feel that slow.

Some extra remarks that I’d like to point out:

  • Native mode is the only one that benefits from parallel compilation; all the virtualization solutions perform (as expected) a bit slower when compiling with -j2. QEMU and KVM have an -smp flag to emulate a machine with several CPUs, but neither boots properly with that flag enabled.
  • I ran some tests comparing QEMU performance with qcow and raw disk images; the results were more or less the same (see the qemu-img sketch after this list).
  • KQEMU didn’t even boot when I tried to use the -kernel-kqemu flag.
  • KVM requires hardware virtualization support to work. Only recent processors have this functionality (mine does), so it doesn’t make sense to try KVM on an older processor (see the check after this list).
  • Of course there are many other virtualization programs that I did not use in this test: OpenVZ, Xen, Linux-VServer, VirtualBox, VMware… I expect OpenVZ and Linux-VServer to perform more or less like native mode, but I’d really like to know about the others.
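
For the qcow vs. raw comparison mentioned above, the images would have been created with qemu-img, along these lines (the file names and size are made up for illustration):

    # raw: a flat disk image
    qemu-img create -f raw debian-raw.img 5G

    # qcow: QEMU’s copy-on-write format, which grows on demand
    qemu-img create -f qcow debian-qcow.img 5G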
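
And to check whether a processor has the hardware support that KVM needs, look for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo:

    # prints matching lines if the CPU supports hardware virtualization;
    # no output means KVM won’t work on this machine
    egrep 'vmx|svm' /proc/cpuinfo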

And that’s all! I hope this article is useful, and of course comments are welcome.

3 thoughts on “Comparing virtualization software performance: QEMU vs UML vs KVM”

  1. Iggy

    Have you thought of retrying kvm with guest SMP enabled? I saw somewhere that someone got a 40% boost on kernel compiling by enabling it.

    Just an idea.

  2. berto

    Yes, I tried. As I said in the conclusions, neither QEMU nor KVM seems to work with the -smp flag.

    Without that flag the performance of make -j 1 and make -j 2 is basically the same.

    However, the upcoming Linux 2.6.23 will add support for SMP guests under KVM, see here. I’ll try that as soon as I have the time. That article talks about a 40% performance boost, so I guess that’s just what you’re talking about 🙂

  3. Pingback: user mode linux + nested x server + gdm « keremin notları
