Introduction to Linux Containers with LXC and LXD
Linux containers provide a way to run applications and entire systems in isolated environments on a single Linux host. Among the earliest and most feature-rich container technologies on Linux are LXC and LXD. They are closely related but play different roles. LXC provides low-level container primitives that feel similar to lightweight virtual machines. LXD builds on top of LXC to offer a higher-level, user-friendly management layer with extra functionality such as image management, clustering, and integrated networking and storage.
This chapter focuses specifically on what is unique to LXC and LXD, how they relate to each other, and how you interact with them in practice, without re-explaining general container concepts covered elsewhere.
LXC vs LXD: Relationship and Roles
LXC stands for Linux Containers. It is essentially a userspace interface to the container features of the Linux kernel, such as namespaces and cgroups, plus some helper tools and configuration files. With LXC you can create, start, and manage system containers using configuration files and commands like lxc-create and lxc-start. It feels closer to traditional system administration, but with isolation.
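As a quick illustration, here is a minimal sketch of the low-level workflow using the common download template (the container name, distribution, release, and architecture below are illustrative):

# Create a system container from a prebuilt image via the download template.
sudo lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64
# Start it, then open a shell inside it.
sudo lxc-start -n demo
sudo lxc-attach -n demo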
LXD is a container and virtual machine manager that uses LXC under the hood for Linux system containers. It exposes a REST API, a modern command-line client, and advanced features such as remote management, images, projects, and clustering. With modern LXD you interact mostly with the lxc client, which talks to the LXD daemon. This can be confusing, because the low-level LXC tool suite historically uses the same lxc name, as a prefix for commands such as lxc-create. In modern usage, when people say “LXD” they usually mean the LXD daemon plus its lxc client.
A practical way to think about the separation is this: LXC is a library and a small set of tools that give you the building blocks to run containers, while LXD is a full management environment that wraps those building blocks into a complete solution including storage, networking, images, and remote access.
LXC provides low-level container primitives, while LXD is a higher-level container manager that uses LXC behind the scenes.
System Containers vs Application Containers
LXC and LXD focus on system containers. A system container behaves like a lightweight virtual machine. It runs a full init system inside the container, has its own process tree, its own users and services, and you can install multiple applications in it as you would in a normal Linux installation.
This contrasts with common application container platforms like Docker, which are usually designed around running a single process per container, built from layered images with a specific format, and integrated with an image registry model.
System containers are particularly useful for the following kinds of tasks. You can run multiple isolated, full Linux environments on a single host, which is very convenient for development, testing, and even production multi-tenant hosting. You can run mixed distributions on the same host, for example a Debian host running Ubuntu and CentOS containers. You can also treat containers much more like traditional virtual machines, with long-lived state, full OS upgrades, and standard package management, while still benefiting from container density and simplicity.
Architecture of LXD
LXD follows a daemon and client model. The core component is the lxd daemon, which runs as root and manages containers, virtual machines, networks, storage pools, and other resources. The user interacts with LXD through the lxc command-line client, which communicates with the daemon over a REST API. The same API can be used remotely, so you can manage remote hosts with the same tool.
Internally, for Linux system containers, LXD uses the LXC library and LXC container configuration format. When you create or start a container with LXD, LXD prepares configuration, networking, and storage, then asks LXC to actually launch the containerized processes.
LXD supports not only containers but also full virtual machines, using different backends such as QEMU. From the user perspective, both containers and VMs are managed through the same interface, although their capabilities differ. In this chapter, we focus on the Linux container side.
A key architectural feature of LXD is its concept of projects and clustering. Projects allow you to partition resources within a single LXD server, for example by team or environment. Clustering lets you join several servers into one logical LXD cluster, with containers scheduled and migrated between nodes.
Installing and Initializing LXD
On most modern Linux distributions, LXD is provided as a package or as a snap. For example, on Ubuntu it is commonly installed as a snap package named lxd. After installation, you usually run an interactive initialization command, lxd init, that configures the default storage pool, network bridge, and other basic settings.
During initialization you choose a storage backend, such as ZFS, Btrfs, LVM, or simple directory-backed storage. You may also choose to create an LXD-managed bridge, often lxdbr0, which provides container networking with NAT so containers can access the outside network while remaining on an internal subnet.
Although the choices you make in lxd init determine defaults for most operations, they can later be modified or extended. For example, you can create additional storage pools or networks, or reconfigure profiles.
You must run lxd init at least once after installing LXD to set up initial storage and networking before you can create containers.
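A minimal sketch of this setup on an Ubuntu host, assuming the snap packaging:

sudo snap install lxd
# Interactive setup of storage, networking, and other defaults:
sudo lxd init
# Or accept sensible defaults without prompts:
sudo lxd init --auto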
Images and Launching System Containers
LXD uses images to create containers. An image is a prebuilt root filesystem plus metadata, typically representing a minimal installation of a Linux distribution. LXD provides a remote image server named images: that hosts a variety of official and community-built images for different distributions and versions.
You interact with images and containers using the lxc client. For example, you list available remote images by querying a remote, and you launch a container from an image with a command of the form lxc launch <remote>:<image> <container-name>. This single command performs several actions: LXD checks whether the selected image exists locally, downloads it from the remote if it does not, creates the container storage, applies the default profile, then starts the container.
Since the images are often unmodified upstream base systems, after launch you have a system that behaves like you installed a minimal distribution from scratch. You then use the distribution’s normal package manager inside the container to install additional software. The container runs its own init process and services, but shares the kernel with the host.
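In practice, the flow might look like the following sketch (the image alias and container name are illustrative):

# Download (if needed) and start a Debian 12 container named web1.
lxc launch images:debian/12 web1
# Show running instances and their addresses.
lxc list
# Use the distribution's normal package manager inside the container.
lxc exec web1 -- apt update
lxc exec web1 -- apt install -y nginx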
LXD also lets you publish your own images. You can create a container, customize it, and then publish it as an image for reuse. This is useful if you need a standardized environment across many containers, for example with pre-installed tools or configuration.
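A sketch of that publish flow, continuing with the hypothetical web1 container from above:

# Publishing normally requires the container to be stopped.
lxc stop web1
lxc publish web1 --alias web-base
# New containers can now be launched from the local image alias.
lxc launch web-base web2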
Profiles, Devices, and Configuration
LXD uses a declarative configuration model to control how host resources are presented to containers. The core concepts are instances, profiles, and devices.
An instance is a single container or virtual machine. Each instance has its own configuration, such as limits on CPU and memory, and a list of profiles that apply to it. Profiles are reusable sets of configuration options and devices that can be applied to multiple instances. Devices are abstractions over actual host resources, such as network interfaces, disks, USB devices, GPUs, or character devices.
A common default profile includes a root disk device that points to the default storage pool and a network device that connects to the default bridge. When you create a new container without specifying anything special, this default profile is applied, so the container gets storage and network connectivity automatically.
You can create additional profiles, for example one that adds an extra disk, or one that passes through a GPU. Then, when launching containers, you specify which profiles to use. Profiles can be changed later, and changes propagate to all instances that use the profile, which makes configuration management easier.
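For example, a GPU passthrough profile could be sketched like this (profile, device, and instance names are illustrative):

# Create a profile and add a GPU device to it.
lxc profile create gpu
lxc profile device add gpu gpu0 gpu
# Launch an instance with the default profile plus the GPU profile.
lxc launch images:ubuntu/22.04 ml1 -p default -p gpu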
Constraints like CPU limits, memory limits, and other resource settings are expressed as configuration keys on instances or profiles. For example, you might set a memory limit or restrict the number of CPU cores visible to a container. These settings rely on cgroups behind the scenes, but LXD abstracts the complexity and presents simple key-value options.
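Using the limits.cpu and limits.memory keys, a sketch of both per-instance and per-profile limits (the instance name is illustrative):

# Limit a single instance to 2 CPU cores and 2 GiB of RAM.
lxc config set web1 limits.cpu 2
lxc config set web1 limits.memory 2GiB
# Or set a limit on a profile so it applies to every instance using it.
lxc profile set default limits.memory 1GiB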
Networking in LXC/LXD Containers
Networking for LXD containers centers around virtual network devices and bridges. The most common setup is to have LXD create a bridge on the host, usually named lxdbr0, which acts as a virtual switch. Each container gets a virtual network interface that is connected to this bridge. A small DHCP and DNS service managed by LXD assigns IP addresses to containers and can provide hostname resolution.
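You can inspect the managed network with the lxc client, for example:

# List networks LXD knows about, then show the managed bridge's settings.
lxc network list
lxc network show lxdbr0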
From there, LXD usually sets up NAT on the host, so containers can reach external networks by masquerading their traffic through the host interface. This works well for development, testing, and many production scenarios. It also isolates containers from the external network, unless you explicitly expose them.
If you need containers to be directly reachable on the external network, you can configure different network types. You might attach containers directly to a physical interface with a “macvlan” device, or bridge a host interface so containers get IP addresses from an external DHCP server. These configurations trade ease of setup for increased integration with the existing network.
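As a sketch, attaching a container directly to a physical interface with macvlan might look like this (the parent interface name enp5s0 is an assumption for your host; note that with macvlan the host itself typically cannot reach the container directly):

# Override the container's eth0 with a macvlan NIC on a physical interface.
lxc config device add web1 eth0 nic nictype=macvlan parent=enp5s0
# Restart so the container picks up an address from the external DHCP server.
lxc restart web1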
LXD expresses all these options through network devices and networks. A network device refers to the interface a container sees, while a network object describes how that interface connects to the host or external world. For example, a device might be type nic using a bridge, or a macvlan interface, or a physical interface.
By default, LXD containers use a managed bridge with NAT, which isolates them from the external network unless you explicitly configure port forwarding or alternate network types.
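One common way to expose a service selectively is LXD's proxy device, sketched here (ports and names are illustrative):

# Forward TCP port 8080 on the host to port 80 inside the container.
lxc config device add web1 http proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80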
Storage Pools and Container Filesystems
LXD introduces the concept of storage pools to manage where container data lives. A storage pool represents a logical storage backend that can be based on several technologies, such as ZFS, Btrfs, LVM, or a simple directory on an existing filesystem.
Each container’s root filesystem is a volume inside a storage pool. When you launch a container from an image, LXD creates a new volume, populates it with data from the image, and then mounts it into the container’s namespace as its root. If the storage backend supports advanced features like snapshots or thin provisioning, LXD can use them to create containers more efficiently.
For example, with ZFS or Btrfs, creating a container can be almost instantaneous, because LXD can clone from an image snapshot instead of copying everything. Snapshots of running containers can also be created quickly and cheaply. This is particularly useful for testing or rapid rollbacks.
You can create additional volumes in a storage pool and attach them as extra disks to containers. This is helpful if you want to separate application data from the system root, or if you need different storage performance or redundancy characteristics for different parts of your environment.
The configuration of storage pools and volumes is again done through LXD’s configuration model. You specify the storage backend, its location, and any backend specific options when creating the pool. Volumes can be created and attached to instances with additional device definitions.
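A sketch of creating an extra pool and attaching a volume from it (pool, volume, and mount path are illustrative; the zfs driver assumes ZFS tooling is available on the host):

# Create a second pool backed by ZFS, then a custom volume inside it.
lxc storage create fastpool zfs
lxc storage volume create fastpool appdata
# Attach the volume to a container at a chosen mount path.
lxc storage volume attach fastpool appdata web1 /srv/appdata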
Lifecycle Management and Snapshots
Managing container lifecycles with LXD is intentionally similar to managing virtual machines. You can create, start, stop, restart, pause, and delete containers with straightforward lxc commands. Containers are designed to be long lived and can persist state across reboots of the host. Because they behave like full systems, you perform administration tasks inside them as you would on any other Linux machine, such as apt or dnf based package management.
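The basic lifecycle commands look like this (the instance name is illustrative):

lxc stop web1
lxc start web1
lxc restart web1
lxc pause web1            # freeze all processes in place
lxc start web1            # resume a paused instance
lxc delete web1 --force   # delete even if running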
Snapshots are a key feature of LXD. A snapshot captures the state of a container at a specific point in time. Depending on the storage backend, snapshots can be created without shutting down the container, using copy-on-write to track changes after the snapshot. Later, you can either restore a snapshot, rolling the container back to that state, or publish a snapshot as a new image.
This ability is particularly useful for experimentation. For example, you can snapshot a container, attempt a risky upgrade, and if it fails, simply roll back to the snapshot. It is also helpful for development flows where you want to reset an environment frequently.
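A sketch of that snapshot-based workflow (snapshot and alias names are illustrative):

# Capture the current state before a risky change.
lxc snapshot web1 before-upgrade
lxc exec web1 -- apt full-upgrade -y
# If the upgrade misbehaves, roll back to the snapshot.
lxc restore web1 before-upgrade
# A snapshot can also be published as a reusable image.
lxc publish web1/before-upgrade --alias web-golden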
Snapshots capture the container filesystem state at a point in time, and restoring a snapshot replaces the current state with the snapshot’s content.
Security and Isolation Concepts in LXC/LXD
LXC and LXD rely on Linux namespaces, cgroups, and other kernel mechanisms to provide isolation. A special feature they use heavily is user namespaces, which allow mapping container root to an unprivileged user on the host. This lets containers run processes as root inside the container, while those processes are not root on the host. It provides an additional layer of safety if a container is compromised.
LXD supports both unprivileged and privileged containers. Unprivileged containers use user namespace mappings and are the default in many deployments, because they reduce the risk of host compromise from within a container. Privileged containers behave more like traditional chroot environments, with root inside mapped to root on the host, and are used only where necessary, for example when a workload requires kernel features that do not play well with user namespaces.
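These modes are controlled with security configuration keys, for example (the instance name is illustrative):

# Switch an instance to privileged mode (use sparingly).
lxc config set web1 security.privileged true
# Allow running nested containers inside this instance.
lxc config set web1 security.nesting true
lxc restart web1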
Additional security layers can be applied to containers using existing Linux security mechanisms. AppArmor or SELinux profiles can restrict container processes further. Seccomp filters can limit the system calls available inside containers. LXD automatically generates reasonable defaults but also allows administrators to customize these settings for stricter isolation.
It is important to remember that system containers share the host kernel. This provides efficiency, but means that kernel vulnerabilities are shared risks. Running containers with unprivileged mappings and tight security profiles helps mitigate this, but cannot completely remove kernel level issues.
Remote Management and Clustering
A distinctive capability of LXD compared to many low-level container tools is its built-in remote management and clustering. Since LXD exposes a network API, you can configure a server to listen on the network, set up authentication, and then use the same lxc client to interact with the remote server as if it were local. You can add remote servers by name, launch containers on them, and move instances between local and remote systems.
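A sketch of the remote workflow (hostnames are illustrative; the authentication step is omitted since the exact mechanism, trust tokens in recent releases, varies by version):

# On the server: listen for API connections on all addresses, port 8443.
lxc config set core.https_address "[::]:8443"
# On the client: register the remote, then manage it like a local host.
lxc remote add server2 https://server2.example.com:8443
lxc launch images:debian/12 server2:c3
lxc move c1 server2:c1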
Clustering extends this idea by joining multiple LXD servers into a cluster that appears as one logical system. The cluster stores configuration in a distributed database, and LXD can schedule instances across cluster members. You can migrate containers and even live migrate some workloads if storage and backend support it. Clustering is particularly useful for high availability, scaling, and centralized management across many physical hosts.
In a cluster, storage and networking must be carefully designed, for example through shared or replicated storage backends and consistent network configuration, so that containers behave correctly regardless of which node they run on. LXD’s abstractions over pools and networks help enforce consistency.
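Within a cluster, the same client is used to inspect members and place instances, for example (the member name is illustrative):

# List cluster members and their status.
lxc cluster list
# Launch an instance on a specific cluster member.
lxc launch images:debian/12 c4 --target node2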
Using LXC Directly Without LXD
Although LXD is the preferred interface for most users, LXC can be used directly through its own tools and configuration files. This is sometimes useful in specialized scenarios where you need very fine-grained control, integration with custom init systems, or minimal dependencies.
Using LXC directly involves editing configuration files under paths like /var/lib/lxc/<name>/config and running commands like lxc-start, lxc-stop, lxc-attach, and lxc-info. You define cgroup limits, namespaces, and device mappings manually or via templates. This can provide deeper insight into how container features are wired together, but requires more expertise and maintenance.
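As an abridged, illustrative sketch, such a configuration file might contain entries like these (paths and values are assumptions; the cgroup2 key applies on cgroup v2 hosts):

# /var/lib/lxc/demo/config (abridged sketch)
lxc.uts.name = demo
lxc.rootfs.path = dir:/var/lib/lxc/demo/rootfs
# Networking: a veth pair attached to the host bridge lxcbr0.
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
# Resource limits expressed directly as cgroup settings.
lxc.cgroup2.memory.max = 1G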
LXD abstracts most of this complexity. It autogenerates the underlying LXC configuration and translates high level settings into the correct low level options. If you are learning containers at a systems level, exploring both layers can be educational. In production, LXD’s simplified management, API, and ecosystem of tooling usually make it a better choice.
Typical Use Cases for LXC/LXD System Containers
LXC and LXD are particularly attractive in environments where you want virtual-machine-like semantics with container efficiency. For example, a development team can run multiple isolated test environments on a single powerful workstation, each environment in its own container with a full distribution. Continuous integration systems can create containers for each test run, perform builds and tests in them, then delete them.
Hosting providers can use system containers to provide users with full Linux environments that behave similarly to VPS instances, but with higher density and faster start times than traditional hypervisor based virtual machines. Enterprises might use LXD clusters as a lightweight private cloud solution for internal services, with remote management and API driven automation.
Because LXD supports both containers and full virtual machines, it can provide a unified layer for mixed workloads, although this chapter focuses on the container side. As you build experience with LXC and LXD, you will see that the same patterns apply in both personal and large scale deployments.