4.7.1 Virtual machines (KVM/QEMU)

Introduction

Virtual machines with KVM and QEMU let you run multiple complete operating systems on a single Linux host. In this chapter the focus is on how KVM and QEMU work together, how to enable them on a typical Linux system, and how to use the basic tools to create and manage virtual machines. Containers and other virtualization technologies are covered elsewhere, so they are not repeated here.

KVM and QEMU: How They Fit Together

KVM, the Kernel-based Virtual Machine, is a Linux kernel feature that turns the kernel itself into a type 1 hypervisor. QEMU is a userspace emulator and virtualizer that creates and manages the virtual hardware a guest system sees.

On modern x86 hardware, KVM provides hardware-assisted virtualization using Intel VT-x or AMD-V. When KVM acceleration is available, QEMU hands off CPU virtualization to KVM, which lets guest code execute directly on the physical CPU instead of through emulated instructions. This combination makes virtual machines run at near-native speed.

Without KVM support, QEMU can still run virtual machines through full CPU emulation. In that mode, each guest instruction is translated, which is much slower but more flexible and can emulate different CPU architectures.

For efficient virtual machines on Linux you normally want QEMU with KVM acceleration enabled. Pure QEMU emulation is much slower and is usually reserved for testing other CPU architectures or special situations.

On most distributions you will see QEMU used through tools such as libvirt and virt-manager. KVM is then used automatically under the hood when available.

Checking Hardware and Kernel Support

Before you can use KVM, you must verify that your CPU and kernel support it and that the required kernel modules are loaded.

First, check that the CPU provides virtualization extensions. On x86, run:

grep -E 'vmx|svm' /proc/cpuinfo

If this prints lines containing vmx (Intel) or svm (AMD), the CPU has hardware virtualization support. If you see no output, the feature may be disabled in the firmware setup, often still called BIOS, or the CPU may not support it at all.
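
On Debian and Ubuntu, the optional cpu-checker package bundles this check together with a test that /dev/kvm is actually usable:

sudo apt install cpu-checker
sudo kvm-ok

If everything is in order, kvm-ok reports that KVM acceleration can be used.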

To confirm that the kernel can use KVM, check that the KVM modules are loaded:

lsmod | grep kvm

On Intel systems you usually see kvm and kvm_intel. On AMD systems you usually see kvm and kvm_amd. If they are not loaded, you can try:

sudo modprobe kvm
sudo modprobe kvm_intel    # or kvm_amd

If loading fails, read the kernel messages:

dmesg | grep -i kvm

Common problems include virtualization disabled in firmware, an unsupported CPU, or running inside another hypervisor that does not expose nested virtualization to the kernel.

Installing KVM and QEMU Tools

Most distributions package KVM and QEMU together with management libraries and tools. The exact packages differ between families, but the pattern is similar.

On Debian or Ubuntu you typically install:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager

On Fedora you would use:

sudo dnf install @virtualization virt-install virt-manager
sudo systemctl enable --now libvirtd

On Arch Linux you might use:

sudo pacman -S qemu-full virt-manager dnsmasq iptables-nft
sudo systemctl enable --now libvirtd
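
Regardless of distribution, managing VMs as a regular user usually requires membership in the libvirt group (some systems also use a kvm group; the exact names vary, so treat this as a typical example):

sudo usermod -aG libvirt $USER

Log out and back in afterwards so that the new group membership takes effect.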

The key components are the QEMU system emulator binaries, the libvirt daemon that manages virtual machines, and virt-manager or similar tools for a graphical interface. When libvirtd is active, it usually creates a default virtual network for your guests, using dnsmasq and NAT so that guests can reach the outside network.
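
You can check the state of this default network with virsh and enable it at boot if needed. The sudo prefix makes virsh talk to the system instance of libvirt:

sudo virsh net-list --all
sudo virsh net-start default
sudo virsh net-autostart default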

Understanding the Virtual Hardware

A KVM/QEMU virtual machine sees a collection of virtual devices that look like real hardware to the guest operating system. QEMU emulates or exposes the following kinds of devices:

CPU and chipset, which define how many vCPUs the guest has and what features they provide.

RAM, where you specify a fixed or sometimes ballooned amount of memory for the guest.

Disk controllers and disks, which may be SCSI, SATA, Virtio block devices, or others. Disks are usually backed by regular files on the host, LVM volumes, or raw partitions.

Network interfaces, which can be emulated e1000, rtl8139, or Virtio network devices connected to various host networking setups.

Graphics devices, including emulated video hardware and optional graphical console output.

For best performance on guests that support it, you generally choose Virtio devices instead of fully emulated ones, since Virtio is designed specifically for para-virtualized drivers and high efficiency under KVM.
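
On a Linux guest, you can confirm that Virtio drivers are active by looking for the corresponding kernel modules:

lsmod | grep virtio

Entries such as virtio_blk, virtio_net, and virtio_pci indicate para-virtualized devices. Note that the list may be empty if the guest kernel has the Virtio drivers built in rather than loaded as modules.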

Disk Image Formats and Storage Options

Each virtual machine normally uses one or more disk images. These are host files that QEMU presents to the guest as block devices. The most common formats are raw and qcow2.

A raw image is simply a file that contains the disk sectors in order. There is no metadata for snapshots or compression. Raw images are straightforward, can perform well, and can be used directly by some storage layers. Unless they are created as sparse files on a filesystem that supports holes, they occupy their full size on disk: a fully allocated 20 GB raw image consumes 20 GB of host storage.

A qcow2 image uses the QEMU Copy-On-Write format. It supports snapshots, compression, and sparse allocation, where the file grows as data is written. In many desktop scenarios qcow2 is the default. Performance can be slightly lower than raw on some workloads, but the flexibility is often worth it.

You can create a new disk image with qemu-img:

qemu-img create -f qcow2 ubuntu-vm.qcow2 20G

This command creates a 20 GB virtual disk in qcow2 format. The actual host disk usage starts small and grows as you write data inside the guest.

If you rely on thin-provisioned images such as qcow2, monitor host disk usage carefully. If the host filesystem fills completely, running guests can become corrupted because their virtual disks cannot grow any further.
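
To compare the virtual size of an image with the space it actually consumes on the host, use qemu-img:

qemu-img info ubuntu-vm.qcow2

The output shows both the virtual size, which is what the guest sees, and the disk size, which is what the file currently occupies on the host.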

Running a Simple Virtual Machine with QEMU

Although most people eventually use a management layer such as libvirt, you can start a virtual machine directly with the qemu-system-* binaries. For example, on x86_64, try:

qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -cpu host \
  -drive file=ubuntu-vm.qcow2,if=virtio \
  -cdrom ubuntu-24.04-desktop-amd64.iso \
  -boot d

This command starts a virtual machine with KVM acceleration, 2 GiB of RAM, one Virtio disk, and boots from an ISO image so that you can install an operating system. The -cpu host option tells QEMU to expose most of the host CPU features to the guest, which can improve performance and compatibility.

Once the guest is installed and you no longer need the installer ISO, you can drop the -cdrom and -boot d options and boot from the virtual disk instead.
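
A typical post-installation invocation then looks like this:

qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -cpu host \
  -drive file=ubuntu-vm.qcow2,if=virtio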

QEMU normally creates a graphical window for the guest using SDL or GTK. You can also run without a local window using options such as -nographic or -display none and then access the guest using a serial console or VNC, but this moves into more advanced usage.
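
As a minimal sketch, a headless guest reachable over VNC on the local machine could be started like this:

qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -cpu host \
  -drive file=ubuntu-vm.qcow2,if=virtio \
  -display none \
  -vnc 127.0.0.1:0

A VNC client can then connect to 127.0.0.1:5900 (display :0) to reach the guest console.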

Managing Virtual Machines with Libvirt and Virt-Manager

Running QEMU directly from the command line is flexible, but the invocation quickly becomes complex. libvirt provides a higher-level abstraction for managing virtual machines, storage pools, and networks, and it can use KVM/QEMU as its backend.

The virt-manager graphical tool uses libvirt to create and control VMs. A typical workflow for a new VM looks like this (a scriptable command-line equivalent using virt-install is sketched after the list):

Start virt-manager on the host. It connects to the local libvirt daemon.

Create a new virtual machine and choose an installation method, for example from a local ISO.

Assign memory and CPU counts and create a new disk image. By default a qcow2 image in a libvirt storage pool is used.

Select the network, usually the default NAT network, and review or adjust virtual hardware.

Start the installation and proceed as you would on a physical machine.
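
The same steps can be scripted with virt-install. The following is a sketch; the ISO path and the OS variant name are placeholders you would adapt to your system (osinfo-query os lists valid variant names):

virt-install \
  --name myvm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20,format=qcow2 \
  --cdrom ubuntu-24.04-desktop-amd64.iso \
  --os-variant ubuntu24.04 \
  --network network=default \
  --graphics spice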

Once installed, you can start, stop, pause, or delete the virtual machine from virt-manager. You can also configure features such as Virtio devices, SPICE or VNC consoles, and storage pools. All configuration is stored in XML definitions that libvirt manages under /etc/libvirt and /var/lib/libvirt.

For command line control, you can use virsh:

virsh list --all
virsh start myvm
virsh shutdown myvm
virsh destroy myvm

virsh destroy stops the guest immediately, similar to cutting the power, while virsh shutdown requests a clean shutdown from the guest operating system.

Networking Options for Guests

KVM/QEMU networking can range from very simple NAT to complex bridged setups. The default libvirt configuration usually creates a NAT network called default. Guests on this network can reach the outside using the host as a router, but outside hosts cannot initiate connections to guests without additional port forwarding.

Bridged networking attaches a guest virtual interface to a host bridge, which connects to a physical network interface. In that case, guests obtain IP addresses on the same network as the host and behave like separate physical machines. This is common for servers and services that must be directly reachable.
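
For example, assuming an existing host bridge named br0, you can attach an interface on that bridge to a defined guest with virsh:

virsh attach-interface myvm bridge br0 --model virtio --config

The --config flag writes the change into the persistent definition, so it takes effect from the next guest start.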

There are also internal and host-only networks that restrict traffic to guests and the host. These are useful for lab environments and testing.

Although the low-level implementation uses Linux networking tools and possibly iptables or nftables, libvirt simplifies most tasks so that you can create networks via XML definitions or graphical tools without manually editing many system files.

Performance and Resource Allocation

Since a KVM/QEMU host runs multiple operating systems, resource allocation is important. You must decide how many virtual CPUs, how much RAM, and how much storage each guest receives.

In many cases you can over-commit CPU cores, because not all guests are busy at the same time. Memory over-commitment is riskier. Techniques such as KSM and memory ballooning exist, but they are more specialized topics. When you start out, it is safer to allocate RAM so that the sum of guest allocations fits comfortably within host RAM, leaving room for the host itself.

Using Virtio devices for disks and network interfaces, and choosing -cpu host or equivalent in libvirt, helps to keep virtualization overhead low. On storage, raw images on fast disks can provide higher throughput, but qcow2 provides snapshots and easier management. Your choice depends on whether you value performance or flexibility more.
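
In libvirt, the equivalent of -cpu host is CPU host passthrough. One way to enable it on an existing guest, sketched here for a guest named myvm, is the virt-xml helper shipped with virt-install:

virt-xml myvm --edit --cpu host-passthrough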

Avoid assigning all host CPU cores and memory to guests. The host still needs resources. If the host becomes starved of RAM or CPU, all virtual machines can slow down or fail.

Snapshots and Backups

qcow2 disk images can store internal snapshots, which capture the state of a virtual disk at a point in time. Libvirt can manage snapshots for both disk and full VM state, including memory. Snapshots are convenient for experiments before risky changes. You can snapshot, test an upgrade in the guest, and revert if something goes wrong.
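
With virsh, a typical snapshot round trip for a guest named myvm looks like this:

virsh snapshot-create-as myvm pre-upgrade
virsh snapshot-list myvm
virsh snapshot-revert myvm pre-upgrade
virsh snapshot-delete myvm pre-upgrade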

However, snapshots are not replacements for proper backups. They live on the same storage as the active disk image; if the host disk fails, snapshots and active data are lost together. For backups, you typically copy disk images using tools such as rsync or create host level backups of the backing storage system. Some environments quiesce the guest or use guest coordination tools to keep filesystems consistent when backing up.
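
As a minimal cold-backup sketch, assuming the image lives in the default libvirt storage pool:

virsh shutdown myvm
# wait until "virsh list --all" shows the guest as shut off
sudo rsync -a --sparse /var/lib/libvirt/images/myvm.qcow2 /backup/
virsh start myvm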

Security Considerations

Virtual machines provide isolation, but they still run on the host's hardware, and every guest depends on the host kernel's KVM code and on the QEMU process that provides its devices. KVM/QEMU deployments therefore apply additional controls to limit what guests can access, for example SELinux or AppArmor profiles, user and group separation, and device cgroup rules.

You should treat guests as separate systems that need their own security updates, firewalls, and access controls. Keep QEMU and libvirt updated, especially on multi-tenant hosts, where untrusted workloads may run. Live migration, device passthrough, and direct hardware access introduce more complexity and risk and should be used carefully.

When to Use KVM/QEMU

KVM/QEMU is particularly useful when you need to run whole operating systems with strong isolation, different kernels, or different distributions, all on one machine. Common uses include test environments, development labs, server consolidation, and running legacy operating systems.

Compared to containers, virtual machines cost more in overhead but offer stronger isolation and more control over the guest environment. In many real deployments, both are used: KVM provides a base server platform, and containers run inside guests where appropriate.

By understanding how KVM and QEMU cooperate, how to enable hardware acceleration, and how to create and manage virtual machines with tools like virt-manager and virsh, you gain a flexible foundation for a wide range of virtualization tasks on Linux.
