Understanding LXC and LXD in Context
Linux containers (LXC/LXD) are system containers: they run a full Linux userspace (init, services, multiple processes) in an isolated environment sharing the host kernel. Compared to application containers (as in Docker/Podman), LXC/LXD feels more like lightweight virtual machines: you can ssh into them, run systemd, and manage them as full systems.
Key relationships:
- LXC (Linux Containers): low-level userspace tools and libraries around kernel features (namespaces, cgroups, capabilities).
- LXD: a management layer and REST daemon built on LXC, providing a powerful CLI (lxc), image management, clustering, networks, and storage. Think of LXD as “a container hypervisor”.
You will mostly interact with LXD in modern setups; LXC alone is typically used for more custom, low-level scenarios.
Core Concepts: System Containers with LXC/LXD
System containers vs application containers
What makes LXC/LXD “system containers”:
- Run an entire distribution root filesystem (Ubuntu, Alpine, CentOS, etc.).
- Use an init system (often systemd), so services like sshd, cron, etc., can be managed normally.
- You log in via a shell or SSH, as with a VM.
- They share the host kernel (no separate kernel per container).
This is well-suited for:
- Lightweight “VM-like” environments for testing and development.
- Multi-tenant hosting (each tenant gets a full distro environment).
- CI environments where you want full system control without VM overhead.
Installing and Initializing LXD
Exact installation commands vary by distribution, but the flow is similar.
On Ubuntu (typical example):
sudo apt update
sudo apt install lxd
Or via Snap (often the preferred method on Ubuntu):
sudo snap install lxd
After installation, run the initialization wizard:
sudo lxd init
The wizard will ask about:
- Storage backend: e.g. zfs, btrfs, lvm, or dir. ZFS/Btrfs are recommended for snapshots and copy-on-write.
- Network bridge: create a managed bridge (e.g. lxdbr0) for container networking.
- IPv4/IPv6: whether to enable NATed addressing.
- Clustering: whether to join/create a cluster (can be skipped initially).
You can reconfigure later with lxc network and lxc storage commands.
Basic LXD Workflow
LXD uses a single CLI: lxc. Some fundamental workflows:
Listing and using images
LXD provides image servers (remotes). The defaults include images: (a public image server for many distributions) and ubuntu: (official Ubuntu images).
List images from a remote:
lxc image list images:
lxc image list ubuntu:
Launch a container from an image:
# From the default 'ubuntu:' remote
lxc launch ubuntu:22.04 my-ubuntu
# From the generic 'images:' remote
lxc launch images:alpine/3.18 my-alpine
This creates and starts my-ubuntu or my-alpine as a running container.
Inspecting containers
List containers:
lxc list
Typical columns:
- NAME
- STATE (RUNNING, STOPPED)
- IPV4 / IPV6
- TYPE (container or VM, if VMs are enabled)
- SNAPSHOTS
Get detailed info:
lxc info my-ubuntu
Starting, stopping, and deleting
Basic lifecycle operations:
lxc start my-ubuntu
lxc stop my-ubuntu # Graceful (sends a signal)
lxc stop my-ubuntu --force # Hard stop (like pulling the power)
lxc restart my-ubuntu
# Rename
lxc rename my-ubuntu dev-ubuntu
# Delete (must be stopped)
lxc delete dev-ubuntu
Accessing Containers
Getting a shell
Execute a command or open an interactive shell inside a container:
# Run a single command
lxc exec my-ubuntu -- uname -a
# Open an interactive shell (root)
lxc exec my-ubuntu -- bash
To get a non-root shell, either:
- Run su - username inside the container, or
- Configure a user and SSH into the container.
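This setup can also be scripted. The sketch below assumes a Debian/Ubuntu container named my-ubuntu and a hypothetical user dev; the helper only prints each command unless APPLY=1 is set, since it needs a working LXD install to actually run:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set in the environment.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Create an unprivileged user "dev" inside the container (Debian/Ubuntu adduser).
run lxc exec my-ubuntu -- adduser --disabled-password dev

# Open a login shell as that user.
run lxc exec my-ubuntu -- su - dev
```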
File transfers
Copy files between host and container:
# Host -> container
lxc file push ./script.sh my-ubuntu/root/
# Container -> host
lxc file pull my-ubuntu/etc/hosts ./hosts-from-container
You can also push directories with -r.
Console access
Some containers/VMs provide a console:
lxc console my-ubuntu
For containers using systemd, the console often behaves like the system console of a VM.
Managing Container Configuration
LXD stores configuration per instance (container/VM). You can view and edit it.
View current config:
lxc config show my-ubuntu
Edit interactively (opens an editor):
lxc config edit my-ubuntu
This exposes YAML-like configuration for:
- Limits (CPU, memory, processes)
- Devices (network interfaces, disks, etc.)
- Security options (e.g. nesting, idmaps)
- Profiles (which this container uses)
You can set individual keys:
lxc config set my-ubuntu limits.cpu 2
lxc config set my-ubuntu limits.memory 2GB
Resources are controlled by cgroups under the hood, but LXD abstracts that away.
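One way to see the cgroup plumbing is to read the limit files from inside the container after setting the keys. The sketch below assumes a cgroup v2 host (memory.max, cpu.max; cgroup v1 uses different filenames such as memory.limit_in_bytes) and only prints the commands unless APPLY=1 is set:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Apply limits through LXD...
run lxc config set my-ubuntu limits.cpu 2
run lxc config set my-ubuntu limits.memory 2GB

# ...then inspect the resulting limits from inside the container.
# Paths assume cgroup v2; on v1 hosts the filenames differ.
run lxc exec my-ubuntu -- cat /sys/fs/cgroup/memory.max
run lxc exec my-ubuntu -- cat /sys/fs/cgroup/cpu.max
```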
Storage with LXD
LXD uses storage pools and storage volumes.
Storage pools
A storage pool is a backend (ZFS, Btrfs, LVM, directory) where container root filesystems and volumes are stored.
List pools:
lxc storage list
Create a new pool (example using ZFS):
lxc storage create mypool zfs size=50GB
Make new containers use this pool by pointing the default profile's root disk at it:
lxc profile device set default root pool=mypool
Root disks and extra volumes
Containers get a root disk device defined in a profile (usually the default profile):
lxc profile show default
You’ll see something like:
devices:
root:
path: /
pool: default
type: disk
To attach an extra storage volume:
- Create a custom volume:
lxc storage volume create default mydata
- Attach it to a container:
lxc config device add my-ubuntu data disk pool=default source=mydata path=/mnt/data
Now /mnt/data in the container is backed by the mydata volume.
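Scripted, with a verification step, this looks like the dry-run sketch below (pool default, volume mydata, and container my-ubuntu as in the example above; set APPLY=1 to actually run it against a real LXD install):

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Create the volume, attach it, and confirm the mount from inside.
run lxc storage volume create default mydata
run lxc config device add my-ubuntu data disk pool=default source=mydata path=/mnt/data
run lxc exec my-ubuntu -- df -h /mnt/data
```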
Networking with LXC/LXD
LXD manages networks, typically via bridges providing NAT.
Managed bridges
When you ran lxd init, it probably created lxdbr0. Containers connect to this bridge and receive private IPs (via dnsmasq):
- Containers can reach the internet (NAT).
- The host can reach containers.
- Containers are isolated from the LAN by default.
List networks:
lxc network list
Show a network:
lxc network show lxdbr0
Attaching networks to containers
Bridge interfaces are added as devices:
lxc config device add my-ubuntu eth0 nic network=lxdbr0 name=eth0
You can create additional networks (e.g. for internal-only communication) and add interfaces to containers as needed.
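As a sketch, an internal-only network might look like this. The network name internal0 and the subnet are assumptions; ipv4.nat=false disables NAT so traffic stays host-local. The helper prints the commands unless APPLY=1 is set:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# A bridge with no NAT and no IPv6: containers on it reach each other
# and the host, but not the internet.
run lxc network create internal0 ipv4.address=10.10.10.1/24 ipv4.nat=false ipv6.address=none

# Give two containers a second interface on the internal network.
run lxc config device add my-ubuntu eth1 nic network=internal0 name=eth1
run lxc config device add my-alpine eth1 nic network=internal0 name=eth1
```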
Exposing services (proxy devices)
To expose a service running in a container to the host or external network, use proxy devices:
# Forward host port 8080 to container port 80
lxc config device add my-ubuntu webport proxy \
listen=tcp:0.0.0.0:8080 \
connect=tcp:127.0.0.1:80
This is convenient for testing web apps inside containers.
Profiles: Reusable Configuration
Profiles are templates for configuration and devices applied to containers.
View existing profiles:
lxc profile list
lxc profile show default
Create a custom profile (example: limited resources and extra disk):
lxc profile create limited
lxc profile edit limited
Example limited profile contents:
config:
limits.cpu: "2"
limits.memory: 1GB
devices:
root:
path: /
pool: default
type: disk
data:
path: /data
pool: default
source: shared-data
type: disk
Apply to a container at launch:
lxc launch ubuntu:22.04 test1 -p default -p limited
Or add to an existing container:
lxc profile add my-ubuntu limited
Profiles make it easy to standardize environments (e.g., “web”, “db”, “ci-runner”).
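Profiles can also be built non-interactively, which is handy in provisioning scripts. A dry-run sketch equivalent to the limited profile above (assumes a pre-existing shared-data volume in the default pool; set APPLY=1 to run for real):

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Recreate the "limited" profile without opening an editor.
run lxc profile create limited
run lxc profile set limited limits.cpu 2
run lxc profile set limited limits.memory 1GB
run lxc profile device add limited data disk pool=default source=shared-data path=/data
```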
Snapshots, Clones, and Image Management
Snapshots
Snapshots capture the state of a container’s filesystem at a point in time.
Create snapshot:
lxc snapshot my-ubuntu before-upgrade
List snapshots:
lxc info my-ubuntu | grep -A5 Snapshots
Restore snapshot:
lxc restore my-ubuntu before-upgrade
Snapshots are fast and space-efficient when using copy-on-write storage backends (ZFS/Btrfs/LVM-thin).
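LXD can also take snapshots on a schedule via per-instance config keys (snapshots.schedule takes a cron expression or an alias like @daily; snapshots.expiry prunes old ones). A dry-run sketch:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Snapshot daily and keep snapshots for 7 days.
run lxc config set my-ubuntu snapshots.schedule @daily
run lxc config set my-ubuntu snapshots.expiry 7d
```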
Cloning containers
Create a new container as a copy of an existing one:
lxc copy my-ubuntu my-ubuntu-copy
Useful for:
- Creating templates: set up a base container, then copy it.
- Creating ephemeral test environments.
Creating and using images
Convert a container to an image:
lxc publish my-ubuntu --alias ubuntu-base
List local images:
lxc image list
Launch new containers from your image:
lxc launch ubuntu-base new-instance
You can also export/import images:
lxc image export ubuntu-base ./ubuntu-base.tar.gz
lxc image import ./ubuntu-base.tar.gz --alias ubuntu-base
This is handy for sharing standardized environments.
Security and Isolation Features
LXC/LXD rely on kernel isolation primitives (namespaces, cgroups, capabilities, seccomp, AppArmor) but LXD exposes user-friendly toggles.
Key concepts for this chapter’s scope:
Privileged vs unprivileged containers
- Unprivileged containers (default in many setups):
- Container root is not host root: UIDs/GIDs are mapped to high-numbered host IDs (user namespace).
- Safer for multi-tenant or untrusted workloads.
- Privileged containers:
- Container root is host root (no UID shifting).
- Easier for certain legacy workloads, but less secure.
You can toggle via a configuration key:
lxc config set my-ubuntu security.privileged true
Be cautious with privileged containers, especially on multi-user systems.
Nesting containers
To run containers inside LXD containers (for CI or testing):
lxc config set my-ubuntu security.nesting true
Nesting has security implications but is common for build/test pipelines.
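The key can also be set at launch time with -c, so a CI container comes up nesting-enabled from the start (the container name ci-runner is an assumption). A dry-run sketch:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Launch with nesting enabled in one step.
run lxc launch ubuntu:22.04 ci-runner -c security.nesting=true
```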
Lightweight VM-Like Use Cases
LXD supports both containers and full virtual machines, but this chapter focuses on containers. Still, some VM-like use cases with containers are:
- Development environments:
- Each project gets its own container with specific dependencies.
- Quick reset with snapshots.
- Service isolation:
- Run different services in separate containers on one host.
- Easier system upgrades or rollbacks by recreating containers.
- Lab environments:
- Simulate multiple nodes and networks for practice (e.g., for clustering or HA studies).
In practice, you can treat an LXD system container almost like a VM:
- Configure static IPs.
- Install and manage packages with the distro’s package manager.
- Run systemd services.
Example: Simple End-to-End Workflow
A minimal workflow to illustrate LXC/LXD in practice:
- Launch a container:
lxc launch ubuntu:22.04 web1
- Enter the container and install a web server:
lxc exec web1 -- bash
apt update
apt install -y nginx
exit
- Expose HTTP to the host on port 8080:
lxc config device add web1 http proxy \
listen=tcp:0.0.0.0:8080 \
connect=tcp:127.0.0.1:80
- Test from the host:
curl http://127.0.0.1:8080
- Snapshot before experimentation:
lxc snapshot web1 clean
- If configuration gets messy, restore:
lxc restore web1 clean
This demonstrates how LXC/LXD provides fast, reproducible, VM-like environments using container technology.
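The whole workflow can be collected into one script. The sketch below only prints the commands unless APPLY=1 is set, since it assumes a working LXD install:

```shell
# Dry-run helper: prints each command unless APPLY=1 is set.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run lxc launch ubuntu:22.04 web1
# Install nginx inside the container (give networking a moment on real runs).
run lxc exec web1 -- sh -c "apt update && apt install -y nginx"
# Expose container port 80 on host port 8080.
run lxc config device add web1 http proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
run curl http://127.0.0.1:8080
# Snapshot the known-good state; restore later with: lxc restore web1 clean
run lxc snapshot web1 clean
```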
When to Choose LXC/LXD
Given there is a separate chapter on Docker/Podman, it’s helpful to clarify when LXC/LXD is a better fit:
Use LXC/LXD when you want:
- Full OS environments per instance (like VMs) without heavy overhead.
- Persistent system containers with multiple services (like “light VMs”).
- Simple management of dozens or hundreds of such environments with built-in storage, networking, and clustering.
- To run traditional tools/services as if on separate machines.
Use application containers (Docker/Podman) when you want:
- Single-process, immutable images for deployment pipelines and microservices.
Both can coexist; many people develop services in LXD containers and then package them as application containers for production.