Understanding KVM and QEMU Together
KVM and QEMU are usually used together to provide full virtualization on Linux:
- KVM (Kernel-based Virtual Machine)
  - A kernel module (`kvm`, plus a CPU-specific module like `kvm_intel` or `kvm_amd`).
  - Effectively turns the Linux kernel into a type-1 hypervisor with hardware acceleration.
  - Exposes `/dev/kvm` to userspace, where virtualization software can run guest code directly on the CPU.
- QEMU
  - A userspace emulator and virtualization tool.
  - Can emulate CPUs and devices purely in software (slow but very flexible).
  - When used with KVM, QEMU uses `/dev/kvm` to run guest code at near-native speed while still emulating devices.
Typical stack:
$$
\text{hardware} \rightarrow \text{Linux kernel + KVM} \rightarrow \text{QEMU + libvirt} \rightarrow \text{guest VMs}
$$
Key roles:
- KVM: fast execution of guest CPU instructions.
- QEMU: virtual hardware (disk, network, display, etc.) and VM lifecycle.
- libvirt (covered elsewhere): management layer and tooling (`virsh`, `virt-manager`, etc.).
Requirements and Basic Concepts
Hardware and Kernel Requirements
To use KVM acceleration:
- CPU virtualization support:
  - Intel: VT-x (sometimes VT-d for IOMMU/passthrough)
  - AMD: AMD-V
  - Enabled in firmware (BIOS/UEFI).
- Linux kernel with:
  - the `kvm` core module
  - the `kvm_intel` or `kvm_amd` module
Check if the CPU supports virtualization:
```
grep -Ec '(vmx|svm)' /proc/cpuinfo
```

Non-zero output suggests virtualization features exist.
Check if KVM devices are present:
```
lsmod | grep kvm
ls -l /dev/kvm
```

If `/dev/kvm` exists and its permissions allow access, QEMU can use KVM acceleration.
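On many distributions, access to `/dev/kvm` is granted through membership in a `kvm` group; a minimal sketch of granting a user access, assuming such a group exists:

```
# add the current user to the kvm group (group name can vary by distribution)
sudo usermod -aG kvm "$USER"

# log out and back in (or use newgrp), then verify access
ls -l /dev/kvm
```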
QEMU Modes: TCG vs KVM
QEMU operates in two major modes:
- TCG (Tiny Code Generator): pure software emulation.
  - Pros: can emulate different CPU architectures (e.g., an ARM VM on an x86 host).
  - Cons: slower, no hardware acceleration.
- KVM-accelerated:
  - Pros: near-native performance for guests on the same CPU architecture as the host.
  - Cons: requires KVM support and a host/guest architecture match.
QEMU switches to KVM mode using options like `-enable-kvm` and a CPU type like `-cpu host` (details below).
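Recent QEMU versions also accept an explicit `-accel` flag; a minimal sketch contrasting the two modes on the same image (`test.qcow2` is a placeholder):

```
# hardware-accelerated execution (requires /dev/kvm)
qemu-system-x86_64 -accel kvm -m 2048 -drive file=test.qcow2,if=virtio,format=qcow2

# pure software emulation via TCG (runs anywhere, but much slower)
qemu-system-x86_64 -accel tcg -m 2048 -drive file=test.qcow2,if=virtio,format=qcow2
```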
Installing KVM/QEMU Tools
Names differ by distribution, but typical components are:
- `qemu-system-*` (e.g., `qemu-system-x86_64`)
- `libvirt-daemon`, `libvirt-daemon-system`, `libvirt-clients`
- `virt-manager` (GUI management)
- `bridge-utils` / networking helpers
Example install (Debian/Ubuntu):
```
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager
```

Example (Fedora/RHEL-like):
```
sudo dnf install @virtualization virt-manager
sudo systemctl enable --now libvirtd
```

Confirm libvirt is running:
```
sudo systemctl status libvirtd
```

Running a Simple VM with QEMU Directly
While most production setups use libvirt and virt-manager, it’s useful to know how to start a VM with `qemu-system-*` directly.
Assume:
- Host architecture: x86_64
- Installation ISO: `debian-12.iso`
- VM disk image: `debian-12.qcow2` (to be created)
Creating a Disk Image
QEMU includes `qemu-img` to create and inspect disk images.
Create a 20G QCOW2 image:
```
qemu-img create -f qcow2 debian-12.qcow2 20G
```

Common formats:
- `raw`: simple, best performance, but larger on disk.
- `qcow2`: supports snapshots, compression, thin provisioning.
Inspect an image:
```
qemu-img info debian-12.qcow2
```

Launching a Basic VM (KVM-Accelerated)
Example command:
```
qemu-system-x86_64 \
  -enable-kvm \
  -m 4096 \
  -cpu host \
  -smp 2 \
  -drive file=debian-12.qcow2,if=virtio,format=qcow2 \
  -cdrom debian-12.iso \
  -boot order=d
```

Key options:
- `-enable-kvm`: use KVM acceleration if available.
- `-m 4096`: allocate 4 GiB RAM to the guest.
- `-cpu host`: expose host CPU features to the guest (for performance). Alternative: a specific model, e.g. `-cpu SandyBridge`.
- `-smp 2`: present 2 virtual CPUs to the guest.
- `-drive file=...,if=virtio,format=qcow2`: attach a virtual drive:
  - `file`: backing image file
  - `if=virtio`: use the VirtIO interface (paravirtualized, faster than emulated IDE/SATA/SCSI)
  - `format`: disk image format
- `-cdrom debian-12.iso`: attach the ISO as a virtual CD/DVD.
- `-boot order=d`: boot from CD first. Later, you might change to `-boot order=c` to boot from disk.
- `-display`: controls graphics; the default may open an SDL or GTK window. Alternatives include `-nographic` or SPICE/VNC (see below).
Console-Only VM
To run a headless server VM using a text console:
```
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -cpu host \
  -smp 2 \
  -drive file=server.qcow2,if=virtio,format=qcow2 \
  -nographic \
  -serial mon:stdio
```

Notes:
- `-nographic`: disable graphical output; use the serial console.
- `-serial mon:stdio`: share the QEMU monitor and the guest serial console on the current terminal.
Inside the guest, you must configure the OS to provide a login prompt on the serial console (e.g., the `console=ttyS0` kernel parameter and an appropriate getty).
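As an illustration, on a Debian-style guest with GRUB and systemd this typically means something like the following (a sketch, not distribution-agnostic):

```
# inside the guest: add a serial console to the kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&console=ttyS0,115200n8 /' /etc/default/grub
sudo update-grub

# systemd usually spawns a serial getty automatically when console= is set;
# to enable it explicitly:
sudo systemctl enable --now serial-getty@ttyS0.service
```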
Storage with QEMU/KVM
Disk Interfaces and Performance
Virtual disk performance depends on the interface:
- Emulated:
  - IDE (`-device ide-hd`, or the default on old configs)
  - SATA (`-device ahci`)
  - SCSI
- Paravirtualized:
  - VirtIO block (`if=virtio`)
VirtIO gives significantly better throughput and lower CPU overhead, but the guest OS must have VirtIO drivers (modern Linux distributions do).
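To confirm from inside a Linux guest that VirtIO drivers are active, something like the following works (a sketch; module names vary with the kernel configuration):

```
# inside the guest: look for loaded virtio modules and detected virtio devices
lsmod | grep virtio
ls /sys/bus/virtio/devices/
```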
Example with multiple drives:
```
qemu-system-x86_64 \
  -enable-kvm \
  -m 4096 \
  -drive file=os.qcow2,if=virtio,format=qcow2 \
  -drive file=data.raw,if=virtio,format=raw
```

Using `qemu-img` Features
Resize an image (grow only, shrinking is risky):
```
qemu-img resize debian-12.qcow2 +10G
```

Convert image formats:
```
qemu-img convert -f qcow2 -O raw debian-12.qcow2 debian-12.raw
```
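`qemu-img convert` can also compress qcow2 output while converting; a small sketch (the output filename is a placeholder):

```
# rewrite a qcow2 image with compression; -p prints progress
qemu-img convert -p -c -O qcow2 debian-12.qcow2 debian-12-small.qcow2
```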
Networking with QEMU/KVM

Networking configuration determines how the guest connects to the outside world.
User-Mode Networking (Simple, NAT)
Simplest option: user-mode networking (default if no NIC specified in some QEMU builds).
Example:
```
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0
```

- Guest gets NATed access to the host network.
- No incoming connections from LAN unless you set up port forwards:
```
-netdev user,id=net0,hostfwd=tcp::2222-:22
```

This forwards host port 2222 to guest port 22, allowing:

```
ssh -p 2222 user@localhost
```

Bridged Networking (VM as a Peer on LAN)
For production-like environments:
- Create/define a Linux bridge on the host (often via `nmcli`/NetworkManager or `/etc/network/interfaces`).
- Attach the host NIC and the VM's tap interface to the bridge.
Raw QEMU example (assuming an existing `br0` and a manually created tap):

```
sudo ip tuntap add tap0 mode tap user $USER
sudo ip link set tap0 up
sudo ip link set tap0 master br0

qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-pci,netdev=net0
```
In practice, libvirt automates tap and bridge configuration via its own networks; you rarely have to manage tap interfaces by hand outside of low-level setups or containers.
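For reference, a minimal sketch of building a bridge by hand with iproute2 (interface names are placeholders; doing this on a remote host will drop your SSH session):

```
sudo ip link add br0 type bridge
sudo ip link set br0 up

# enslave the physical NIC and move the host's address to the bridge
sudo ip link set eth0 master br0
sudo ip addr flush dev eth0
sudo dhclient br0    # re-acquire an address on br0 (assumes DHCP and dhclient)
```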
Graphics and Remote Access
Display Options
Common display options:
- Window on the host desktop: `-display gtk` (modern, often the default) or `-display sdl`
- No GUI: `-nographic`
- VNC server: `-vnc :0`

The `-vnc :0` option starts a VNC server on display `:0` (TCP port 5900). Connect with a VNC client to `host:5900`.
SPICE
SPICE provides better performance and integration for remote desktop access to VMs.
Basic usage:
```
qemu-system-x86_64 \
  -enable-kvm \
  -m 4096 \
  -device qxl-vga \
  -spice port=5930,disable-ticketing=on \
  -device virtio-serial-pci \
  -chardev spicevmc,id=vdagent,debug=0,name=vdagent \
  -device virtserialport,chardev=vdagent,name=com.redhat.spice.0
```
Connect with a SPICE client (`remote-viewer spice://host:5930`).
Typically, you let libvirt/virt-manager generate these options.
Managing VMs with libvirt and virt-manager (Quick Orientation)
While this chapter focuses on KVM/QEMU, a minimal orientation to the tools that wrap them is useful:
- The `libvirtd` daemon manages VMs, networks, and storage pools.
- `virsh` (CLI) and `virt-manager` (GUI) speak to libvirt.
- Libvirt defines domains (VMs) in XML, specifying:
- CPU, memory, devices
- Storage volumes and pools
- Networks and bridges
Basic checks:
```
virsh list --all
virsh dominfo vm-name
```
Creating a new VM through virt-manager will:
- Create the disk image.
- Define the domain in libvirt.
- Start QEMU with a long, auto-generated command line.
- Manage networking (NAT/bridge), storage pools, etc.
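For scripted setups, the same flow can be driven from the shell with `virt-install` (commonly packaged alongside virt-manager); a sketch, with all names and sizes as placeholders:

```
# define and start a new libvirt-managed VM from an installer ISO
virt-install \
  --name testvm \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom installer.iso \
  --os-variant debian12   # valid values depend on the installed osinfo database
```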
You can always inspect the XML for a VM to understand the underlying QEMU options:
```
virsh dumpxml vm-name
```

Tuning and Optimizing KVM/QEMU VMs
CPU and NUMA Considerations
- Use `-cpu host` for maximum performance.
- Limit vCPUs with `-smp N` and align with host capabilities (avoid oversubscription for latency-sensitive VMs).
- For NUMA-aware hosts, pin vCPUs and memory to specific NUMA nodes (usually handled by libvirt configuration).
Raw QEMU pinning is cumbersome; libvirt offers `cputune`, `numatune`, etc. in its XML.
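With libvirt, pinning can also be adjusted at runtime through `virsh`; a small sketch (domain name and CPU numbers are placeholders):

```
# pin vCPU 0 of the domain to host CPU 2, and vCPU 1 to host CPU 3
virsh vcpupin vm-name 0 2
virsh vcpupin vm-name 1 3

# show the current pinning
virsh vcpupin vm-name
```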
VirtIO for Network and Disk
Always prefer VirtIO devices for Linux guests unless you have special requirements:
- Network: `-device virtio-net-pci,netdev=...`
- Disk: `if=virtio` or `-device virtio-blk-pci,...`
For some non-Linux guests (e.g., Windows), you must install VirtIO drivers explicitly.
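The explicit `-device` form separates the backing file from the device model; a sketch equivalent to `if=virtio` (filenames are placeholders):

```
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -drive file=os.qcow2,if=none,id=disk0,format=qcow2 \
  -device virtio-blk-pci,drive=disk0
```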
Snapshotting and Live Migration (Conceptual Overview)
KVM/QEMU support advanced lifecycle features often driven via libvirt:
- Snapshots:
  - Internal QCOW2 snapshots.
  - External snapshots (a new overlay image while preserving the base).
  - Use `qemu-img` or libvirt (`virsh snapshot-create-as`).
- Live migration:
  - Moving a running VM from one host to another with minimal downtime.
  - Requires shared storage or disk migration, compatible CPU features, and compatible QEMU/libvirt versions on both hosts.
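For example, an external snapshot amounts to creating an overlay image on top of the current disk; a minimal `qemu-img` sketch (the VM should be shut off; filenames are placeholders):

```
# create an overlay whose backing file is the existing disk image
qemu-img create -f qcow2 -b vm.qcow2 -F qcow2 vm-overlay.qcow2

# boot the VM from vm-overlay.qcow2; vm.qcow2 remains untouched as the base
```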
Full configuration details and automation for these features are typically handled in higher-level management chapters.
Security Considerations Specific to KVM/QEMU
- Device passthrough:
  - PCI/GPU passthrough via VFIO and IOMMU (requires VT-d/AMD-Vi and careful isolation).
  - USB passthrough: `-device usb-host,...`.
  - Passthrough increases the attack surface; be cautious with untrusted guests.
- Isolation:
  - QEMU runs as a process; its privileges and SELinux/AppArmor profiles matter.
  - libvirt typically uses unprivileged users, cgroups, namespaces, and MAC policies to improve isolation.
- Networking:
  - NAT vs. bridge: bridging exposes VMs more directly to the network.
  - Use appropriate firewall rules and segmentation for production.
Minimal Hands-On Flow Summary
To cement the concepts, a concise end-to-end flow on a typical host:
- Install components: `qemu-kvm`, `libvirt`, `virt-manager` (or equivalents).
- Verify KVM:

  ```
  lsmod | grep kvm
  ls /dev/kvm
  ```

- Create a VM disk:

  ```
  qemu-img create -f qcow2 vm.qcow2 20G
  ```

- Start a VM with QEMU manually (for learning):

  ```
  qemu-system-x86_64 \
    -enable-kvm \
    -m 4096 \
    -cpu host \
    -smp 2 \
    -drive file=vm.qcow2,if=virtio,format=qcow2 \
    -cdrom installer.iso \
    -boot order=d
  ```

- Later, manage VMs via libvirt/virt-manager for easier, repeatable setups.
This chapter’s focus is on how KVM and QEMU work together and how to use them directly. Higher-level management, container-specific virtualization, and orchestration aspects are handled in other chapters.