I/O limits for disk groups in QEMU 2.4

QEMU 2.4.0 has just been released, and among many other things it comes with some of the stuff I have been working on lately. In this blog post I am going to talk about disk I/O limits and the new feature to group several disks together.

Disk I/O limits

Disk I/O limits allow us to control the amount of I/O that a guest can perform. This is useful for example if we have several VMs in the same host and we want to reduce the impact they have on each other if the disk usage is very high.

The I/O limits can be set using the QMP command block_set_io_throttle, or on the command line using the throttling.* options of the -drive parameter (shown in parentheses in the lists below). Both the throughput and the number of I/O operations can be limited. For more fine-grained control, each of these limits can be set separately for read operations, write operations, or the combination of both:

  • bps (throttling.bps-total): Total throughput limit (in bytes/second).
  • bps_rd (throttling.bps-read): Read throughput limit.
  • bps_wr (throttling.bps-write): Write throughput limit.
  • iops (throttling.iops-total): Total I/O operations per second.
  • iops_rd (throttling.iops-read): Read I/O operations per second.
  • iops_wr (throttling.iops-write): Write I/O operations per second.

For example, to limit a drive to 50 MB/s of writes and 6000 total I/O operations per second:

-drive if=virtio,file=hd1.qcow2,throttling.bps-write=52428800,throttling.iops-total=6000
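The same limits can be set at runtime via QMP. Here is a sketch of a block_set_io_throttle call equivalent to that -drive line; the device name virtio0 is an assumption (use whatever your drive is actually called), and in this QMP command all six base limits must be supplied, with 0 meaning unlimited:

```json
{ "execute": "block_set_io_throttle",
  "arguments": { "device": "virtio0",
                 "bps": 0, "bps_rd": 0, "bps_wr": 52428800,
                 "iops": 6000, "iops_rd": 0, "iops_wr": 0 } }
```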

In addition to that, it is also possible to configure the maximum burst size, which defines a pool of I/O that the guest can perform without being limited:

  • bps_max (throttling.bps-total-max): Total maximum (in bytes).
  • bps_rd_max (throttling.bps-read-max): Read maximum.
  • bps_wr_max (throttling.bps-write-max): Write maximum.
  • iops_max (throttling.iops-total-max): Total maximum of I/O operations.
  • iops_rd_max (throttling.iops-read-max): Read maximum of I/O operations.
  • iops_wr_max (throttling.iops-write-max): Write maximum of I/O operations.
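As an illustration, here is a -drive line that combines a steady-state IOPS limit with a burst allowance (the figures are just illustrative, not a recommendation):

```
-drive if=virtio,file=hd1.qcow2,throttling.iops-total=2000,throttling.iops-total-max=4000
```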

One additional parameter named iops_size allows us to deal with the case where big I/O operations can be used to bypass the limits we have set. In this case, if a particular I/O operation is bigger than iops_size then it is counted several times when it comes to calculating the I/O limits. So a 128KB request will be counted as 4 requests if iops_size is 32KB.

  • iops_size (throttling.iops-size): Size of an I/O request (in bytes).
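The accounting rule described above can be sketched like this. Note that this is my reading of the behaviour, not QEMU's actual code, and the function name accounted_ops is made up for the example:

```python
import math

def accounted_ops(request_bytes, iops_size):
    """How many I/O operations a request counts as against the iops limits
    (sketch of the iops_size accounting described in the text)."""
    if iops_size <= 0:
        return 1  # no iops_size configured: every request counts once
    # A request larger than iops_size counts proportionally, rounded up.
    return max(1, math.ceil(request_bytes / iops_size))

# A 128KB request with iops_size set to 32KB counts as 4 operations.
print(accounted_ops(128 * 1024, 32 * 1024))  # 4
```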

Group throttling

All of these parameters I’ve just described operate on individual disk drives and have been available for a while. Since QEMU 2.4 however, it is also possible to have several drives share the same limits. This is configured using the new group parameter.

The way it works is that each disk with I/O limits is a member of a throttle group, and the limits apply to the combined I/O of all group members using a round-robin algorithm. To put several disks in the same group, just set the group parameter of each of them to the same group name. Once the group is set, there's no need to pass the parameter to block_set_io_throttle anymore unless we want to move the drive to a different group. Since the I/O limits apply to all group members, it is enough to use block_set_io_throttle on just one of them.

Here’s an example of how to set groups using the command line:

-drive if=virtio,file=hd1.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd2.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd3.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd4.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd5.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd6.qcow2,throttling.iops-total=5000

In this example, hd1, hd2 and hd4 are all members of a group named foo with a combined IOPS limit of 6000, and hd3 and hd5 are members of bar. hd6 is left alone (technically it is part of a 1-member group).
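The limits of a whole group can later be changed through any one of its members. A sketch of such a QMP call (the device name virtio0 is an assumption; the unused limits are set to 0, meaning unlimited):

```json
{ "execute": "block_set_io_throttle",
  "arguments": { "device": "virtio0",
                 "bps": 0, "bps_rd": 0, "bps_wr": 0,
                 "iops": 4000, "iops_rd": 0, "iops_wr": 0,
                 "group": "foo" } }
```

Passing a different group name here would instead move the drive out of its current group.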

Next steps

I am currently working on providing more I/O statistics for disk drives, including latencies and average queue depth on a user-defined interval. The code is almost ready. Next week I will be in Seattle for the KVM Forum where I will hopefully be able to finish the remaining bits.

I will also attend LinuxCon North America. Igalia is sponsoring the event and we have a booth there. Come if you want to talk to us or see our latest demos with WebKit for Wayland.

See you in Seattle!

14 thoughts on “I/O limits for disk groups in QEMU 2.4”

  1. Pingback: Links 16/8/2015: 18th Birthday for GNOME, Android M Name | Techrights

  2. berto Post author

    That allows you to establish I/O limits for a process in a particular block device.

    As your example says, you have /dev/drive1 and you want to limit the I/O that certain processes can do there.

    In QEMU you typically don’t use /dev/drive1 and /dev/drive2, but hd1.img and hd2.img. Those are regular files, and that’s where you want to set the I/O limits.

  3. Maik

    What happens when contradictory values are specified for the same group? Is it an error, or minimum, maximum, first value, last value wins?

    Any idea when libvirt will have config support for this?

  4. berto Post author

    Maik: I/O limits apply to all group members, so setting new values affects the whole group. In short: the last value wins.

    That’s also what happens if you set the limits using the QMP command ‘block_set_io_throttle’.

    About libvirt: I don’t know the roadmap for this, sorry.

  5. Christoph

    I am using disk io limits in kvm. Now I am looking for a way to get io statistics from the kvm hypervisor. How many iops do my disks make, is io limited and so on. How can I query kvm for these values? Do we use qemu monitor?

  6. berto Post author

    The query-blockstats command can give you that information (or ‘info blockstats’ if you use the monitor).

    Coincidentally I’m working on extending the I/O statistics in QEMU to add more information than what you get now.

  7. Pingback: The world won't listen » Blog Archive » I/O bursts with QEMU 2.6

  8. Shaun

    I want each vm to have maximum resources when available but fall back to limits when resource demand is high from other vm’s

    Let’s say the KVM server had 100% resources and 10 vm’s… I don’t want to limit each vm to 10% of the server… they should get 10% + an equal share of whatever resources are available…
    If 9 vm’s are more or less sleeping 1 vm can burst to 100% of the resources..
    But I don’t want any vm to get so many resources that other vm’s are paralysed.

    Is that possible ?

    1. berto Post author

      VMs don’t know what’s going on in the rest of the system, but a management tool can control that from the host, and adjust each VM’s I/O limits (using block_set_io_throttle) depending on the needs.

