Understanding Docker’s Core Building Blocks
Docker is built around a small set of ideas that repeat everywhere you use it. Once you understand these ideas, almost every Docker command and workflow will start to make sense. In this chapter you will get a mental model of what Docker is actually doing when it works with images, containers, its engine, and registries, without going into the detailed commands that later chapters will cover.
The Mental Model: Shipping and Running Software
Docker solves a simple but painful problem: you want software to run the same way on every machine. To achieve this, Docker separates the world into two phases. First, you package software into something that can be shipped. Second, you run that package in a controlled environment.
The packaged form is the image. The running form is the container. The engine is the program that knows how to go from one to the other. Registries are the warehouses that store and distribute the images. Tags and digests are how you refer to specific versions of what is stored.
If you imagine shipping a physical product, the image is like a sealed box that contains everything needed. The container is that box opened and placed in a workspace. The engine is the logistics and machinery that move and operate these boxes. The registry is the storage facility where boxes are kept and cataloged. The naming and versioning system is the label on each box that tells you exactly what is inside.
Images as Templates for Containers
An image in Docker is a read-only blueprint that describes how a filesystem should look when a container starts. It contains the application files, the runtime, and all the dependencies that the application needs. When you start a container, Docker does not copy each file one by one. Instead it layers the container on top of this immutable image.
Images are not instructions in motion; they are static snapshots of a prepared environment. You can think of them as frozen states of a disk. Every time you run a container from an image, Docker uses the same base state and adds a thin writable layer above it. Containers can modify this writable layer while running, but the underlying image stays unchanged.
This immutability has important consequences. If an image works once, it should continue to work in the same way in the future, because its content will not change unexpectedly. If you need a change, you create a new image, often from a Dockerfile, instead of modifying the existing one in place.
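The layer model above can be sketched in plain Python. This is only an illustration of the idea, not how Docker actually stores layers: the image is a read-only mapping of paths to file content, and the container adds a thin writable layer that reads fall through.

```python
from types import MappingProxyType

# An "image": a snapshot of files (path -> content).
# MappingProxyType makes the view read-only, mirroring image immutability.
image = MappingProxyType({
    "/app/server.py": "print('hello')",
    "/etc/config": "mode=production",
})

# A "container" adds a thin writable layer on top of the image.
writable_layer = {}

def read(path):
    # Reads check the writable layer first, then fall through to the image.
    return writable_layer.get(path, image.get(path))

def write(path, content):
    # Writes only ever touch the container's own layer.
    writable_layer[path] = content

write("/etc/config", "mode=debug")
print(read("/etc/config"))        # the container sees its own change
print(image["/etc/config"])       # the image itself is untouched
```

Trying to assign into `image` directly would raise a `TypeError`, which is the point: changes live in the container's layer, never in the image.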
Containers as Isolated Running Instances
A container is a running process that uses the filesystem and configuration provided by an image. It is not a virtual machine and it does not include its own operating system kernel. Instead, it shares the host system’s kernel while keeping its processes, files, and network isolated from other containers and from the host.
When you start a container, Docker creates a sandboxed environment around a process. That environment includes its own view of the filesystem based on the image, its own network identity, and sometimes its own resource limits. From inside, the container feels like a small machine dedicated to running the application. From outside, it is just another process that Docker manages and tracks.
This separation between image and container explains why you can start many containers from the same image. Each container has its own lifecycle, its own runtime state, and its own temporary changes, but they all trace back to the same shared blueprint.
An image is immutable and shared. A container is mutable and specific to a single running instance.
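That distinction can also be sketched conceptually, again as an illustration rather than Docker's real mechanism: several containers share one blueprint, and each writes only to its own layer.

```python
from collections import ChainMap

# The shared, immutable blueprint (path -> content).
image = {"/app/version.txt": "1.0"}

# Each container gets its own writable layer in front of the shared image.
# ChainMap sends writes to the first mapping and lets reads fall through.
container_a = ChainMap({}, image)
container_b = ChainMap({}, image)

container_a["/app/version.txt"] = "1.0-patched"

print(container_a["/app/version.txt"])  # 1.0-patched
print(container_b["/app/version.txt"])  # 1.0 (unaffected)
print(image["/app/version.txt"])        # 1.0 (the blueprint never changed)
```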
The Docker Engine as the Orchestrator
The Docker Engine is the core service that coordinates everything Docker does. It runs on your machine as a background service and exposes an API that Docker commands and other tools use. When you type a Docker command, you are not directly changing containers or images. You are sending a request to the engine, and the engine performs the actual work.
The engine is responsible for several tasks. It pulls images from registries, stores them locally, creates containers from them, sets up networks, and manages volumes. It talks to the underlying operating system, applies isolation features like namespaces and control groups, and keeps track of the state of each container.
This separation lets you treat Docker as a remote controlled system. You could manage containers on your own computer or on a server in a data center using the same Docker commands, as long as both machines run a Docker Engine and expose its API. The logic always lives in the engine, not in the command line tool itself.
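A minimal sketch of this client/engine split: the command line tool only builds API requests, and the engine performs the work. The endpoint paths below follow the general shape of the Docker Engine HTTP API (such as POST /containers/create), but this is an illustration, not a real client.

```python
def cli_run(image_ref):
    """Translate a 'run' command into the requests a thin CLI client
    would send to the engine's API. Illustrative sketch only."""
    return [
        # Ask the engine to create a container from the image...
        ("POST", "/containers/create", {"Image": image_ref}),
        # ...then start it ({id} stands for the id the engine returns).
        ("POST", "/containers/{id}/start", None),
    ]

for method, path, body in cli_run("nginx:1.25"):
    print(method, path, body)
```

The same requests could go to an engine on your laptop or on a remote server; only the address changes, which is why the commands feel identical everywhere.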
Registries as Image Warehouses
A container registry is a service that stores and distributes Docker images. Docker Hub is the most popular public registry, but it is not the only one. Organizations often run private registries so they can store internal images securely and control access.
Registries organize images by name and namespace. When you see a name such as library/nginx or mycompany/backend, the first part identifies a user or organization and the second part identifies a specific repository of related images. Inside a repository, different versions of the image are stored and distinguished by their tags and digests.
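The anatomy of a reference can be shown with a small parser. This simplified sketch ignores registry hostnames and digests; it only covers the namespace, repository, and tag, using Docker Hub's defaults (the library namespace for official images and the latest tag when none is given).

```python
def parse_reference(ref):
    """Split a reference like 'mycompany/backend:1.0' into its parts.
    Simplified: ignores registry hostnames and digests."""
    if ":" in ref:
        name, tag = ref.rsplit(":", 1)
    else:
        name, tag = ref, "latest"                 # default tag
    if "/" in name:
        namespace, repository = name.split("/", 1)
    else:
        namespace, repository = "library", name   # Docker Hub's default namespace
    return namespace, repository, tag

print(parse_reference("mycompany/backend:1.0"))  # ('mycompany', 'backend', '1.0')
print(parse_reference("nginx"))                  # ('library', 'nginx', 'latest')
```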
When you build an image locally it exists only on your machine until you push it to a registry. Once it is in a registry, any Docker Engine that can reach that registry can pull the image and run containers from it. This is how Docker enables consistent environments across laptops, testing servers, and production systems.
Registries store images, not containers. You push and pull images, then create containers locally from those images.
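The push-then-pull flow can be sketched as a toy in-memory registry. Nothing here resembles Docker's actual protocol; it only shows the shape of the workflow: a shared store that one machine pushes an image into and another pulls it from.

```python
# A shared "registry": reference -> image content.
registry = {}

# Two "machines", each with its own local image store.
laptop = {"mycompany/backend:1.0": b"image built on the laptop"}
server = {}   # a production host with no local copy yet

def push(local_store, ref):
    # Publishing a locally built image makes it reachable by others.
    registry[ref] = local_store[ref]

def pull(local_store, ref):
    # Pulling copies the image locally; containers are then created
    # from this local copy, never from the registry directly.
    local_store[ref] = registry[ref]

push(laptop, "mycompany/backend:1.0")
pull(server, "mycompany/backend:1.0")
print(server["mycompany/backend:1.0"] == laptop["mycompany/backend:1.0"])  # True
```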
Naming, Tags, Versions, and Digests
Since many versions of the same image can exist, Docker needs a way to identify them precisely. This is where tags and digests come in. They are both labels for specific image content, but they behave differently.
A tag is a human-readable identifier that usually expresses intent, such as latest, 1.0, or 2025-01-15. Tags are part of an image reference like nginx:1.25. They are convenient and easy to remember, but they are movable. The owner of an image can retag a new image with the same tag name, which means that nginx:latest today and nginx:latest next month may not point to bit-for-bit identical content.
A digest in Docker is a cryptographic hash of the image content. It looks like a long string such as sha256:.... Once an image is created, its digest is fixed. If any byte inside the image changes, a new digest is produced. This provides content addressability, which means the digest uniquely identifies one particular, exact image.
From a conceptual point of view, a tag is like a shortcut or nickname, and a digest is like a fingerprint. A single digest can have many tags that point to it. Over time, tags may move to newer digests, but a digest always stays attached to the exact image it represents.
Tags can move and are not guaranteed to be immutable. Digests are tied to exact image content and do not change.
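The difference is easy to demonstrate with Python's hashlib, since Docker digests really are SHA-256 content hashes. In this sketch a tag is just an entry in a dictionary that the owner can move, while a digest is computed from the bytes themselves.

```python
import hashlib

def digest_of(content: bytes) -> str:
    # A content digest: same bytes in, same digest out, always.
    return "sha256:" + hashlib.sha256(content).hexdigest()

v1 = b"image contents, version 1"
v2 = b"image contents, version 2"

tags = {"nginx:latest": digest_of(v1)}   # a tag is a movable pointer
pinned = digest_of(v1)                   # a digest names exact content

tags["nginx:latest"] = digest_of(v2)     # the owner retags "latest"

print(tags["nginx:latest"] == pinned)    # False: the tag has moved
print(digest_of(v1) == pinned)           # True: same bytes, same digest
```

This is why deployments that must be reproducible pin images by digest rather than by tag.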
How the Concepts Work Together
When you use Docker in any real scenario, these concepts interact in predictable ways. On a developer machine, you might pull an image from a registry, then start several containers from that image while you test changes. In a continuous integration system, a pipeline might build a new image from your code, tag it with a version, and push it to a registry; a deployment system would then pull that specific tag or digest to run updated containers.
The engine is always the participant that makes these steps possible. It knows which images you have, which containers are running, and which registries you can reach. It maps high level commands into low level operations on the host system.
Although later chapters will go into the specific commands and configuration formats, the core ideas remain simple. Images are blueprints, containers are running instances, the engine is the orchestrator, registries are warehouses, and tags and digests are the way you point at exactly which blueprint you want. Understanding this model will make every other feature of Docker easier to place in context.