Overview of Docker Engine
Docker Engine is the core software that makes Docker work on your machine. It is responsible for building images, running containers, and handling the low-level details of isolation and resource control. When you run Docker commands in your terminal, those commands ultimately talk to Docker Engine, which then talks to the operating system.
At a high level, Docker Engine follows a client-server architecture. The client is what you interact with, the server is a background process that does the actual work, and there is a low-level component that connects containers to the host system resources. Understanding this split helps explain why Docker behaves the way it does on different platforms and why some commands require special permissions.
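You can see this split on any machine where Docker is installed by asking both halves to identify themselves. The exact fields and version numbers vary between releases; the abridged output in the comments is only roughly what to expect.

    # Ask both halves of Docker Engine to identify themselves.
    docker version

    # Typical (abridged) output: a Client section for the CLI you invoked,
    # and a Server section for the daemon it connected to.
    # Client:
    #  Version:    ...
    # Server:
    #  Engine:
    #   Version:   ...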
Client-Server Architecture
The most visible part of Docker is the command-line interface, usually invoked with the docker command. This command is the Docker client. It does not run containers by itself. Instead, it sends instructions to a background service called the Docker daemon. The daemon performs tasks such as pulling images, starting and stopping containers, and managing networks and storage.
Communication between the Docker client and the daemon happens over an API. On a typical local setup, the client and daemon run on the same machine and communicate over a local Unix socket on Linux or a named pipe on Windows. They can also be separated: for example, a client on your laptop can talk to a daemon running on a remote server, which is useful in development and in some production workflows.
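As a rough sketch, you can point the client at a different daemon with the DOCKER_HOST environment variable or with a named context. The user and hostname below are placeholders.

    # Default: talk to the local daemon over the Unix socket or named pipe.
    docker ps

    # Point this one command at a daemon on a remote server over SSH
    # (user and remote-server are placeholders).
    DOCKER_HOST=ssh://user@remote-server docker ps

    # Or save the remote endpoint as a named context and switch to it.
    docker context create remote --docker "host=ssh://user@remote-server"
    docker context use remote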
The Docker client does not run containers. It only sends requests to the Docker daemon, which performs all container operations.
This separation means multiple users or tools can talk to the same daemon. Graphical tools, CI systems, and command-line scripts all use the same Docker Engine API behind the scenes. Once you understand that every user-facing action is really an API request to the daemon, it becomes easier to reason about remote use and permissions.
The Docker Daemon
The Docker daemon is a long-running background process. On Linux it is typically called dockerd. Its job is to listen for API requests, manage local images, handle networks and volumes, and create, start, stop, and remove containers.
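On a Linux machine that uses systemd, you can see the daemon both as a managed service and as an ordinary process. Both commands below assume a standard installation.

    # Check whether the Docker daemon service is running (systemd systems).
    systemctl status docker

    # The daemon also shows up as a regular process named dockerd.
    ps aux | grep dockerd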
The daemon maintains internal state about what images are available, which containers exist, what networks and volumes are defined, and what is currently running. When you ask for a new container, it checks whether the required image is present, pulls it from a registry if necessary, then creates a container from it and connects it to the right networks and storage.
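You can watch this happen with a small image: if the image is not cached locally, the daemon pulls it from a registry before creating the container. The output shown in the comments is approximate.

    # Run a throwaway container from a small image.
    docker run --rm hello-world

    # If the image is not present locally, the daemon reports the pull first:
    # Unable to find image 'hello-world:latest' locally
    # latest: Pulling from library/hello-world
    # ...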
Because the daemon needs low-level access to the host system, it often runs with elevated privileges. On Linux this usually means it runs as root or has similar capabilities, and on desktop systems it is managed by a system service. This is part of why access to the Docker daemon is a sensitive permission: anyone who can control the daemon can effectively run arbitrary code on the host machine.
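On Linux, this sensitivity is visible in the permissions on the daemon's socket. A common convenience is to add your user to the docker group so the client works without sudo; treat that as granting root-equivalent access to the machine. A minimal sketch, assuming the default socket location:

    # The API socket is owned by root and the docker group.
    ls -l /var/run/docker.sock
    # srw-rw---- 1 root docker ... /var/run/docker.sock

    # Allow the current user to talk to the daemon without sudo.
    # Note: members of the docker group effectively control the host.
    sudo usermod -aG docker $USER
    # Log out and back in (or run `newgrp docker`) for the change to apply.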
Container Runtime
Underneath the daemon there is a container runtime that knows how to start and manage individual containers using operating system features. Docker Engine uses a standard interface called the Open Container Initiative (OCI) runtime specification. A common default runtime is runc.
The runtime takes a prepared bundle of configuration from the daemon, including information about the filesystem, process, and resource limits, and then sets up the container. It creates the container process in an isolated set of namespaces and control groups, connects the filesystem layers, and applies limits on CPU, memory, and other resources.
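The resource limits the runtime applies are the ones you pass when creating the container. A minimal sketch, using the nginx image purely as an example:

    # Ask for a container limited to half a CPU and 256 MB of memory.
    docker run -d --name limited --cpus 0.5 --memory 256m nginx

    # The daemon records those limits in the container's configuration,
    # which the runtime enforces through control groups.
    docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited

    # Clean up.
    docker rm -f limited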
The separation between the daemon and the runtime makes Docker Engine more modular. Different runtimes can be used for different needs, such as alternative runtimes that focus on stronger isolation or special hardware acceleration. For most beginners this detail remains invisible, but it explains why Docker can plug into a broader ecosystem that follows the OCI standards.
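You can see which runtimes your daemon knows about and select one per container with the --runtime flag. The runsc runtime (gVisor) below is only an example and has to be installed and registered with the daemon before this would work.

    # List the runtimes registered with the daemon (runc is the usual default).
    docker info | grep -i -A 3 runtime

    # Start a container with a specific runtime, assuming it is installed
    # and configured in the daemon (runsc is used here only as an example).
    docker run --rm --runtime runsc alpine echo "hello from an alternative runtime"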
Docker Engine on Linux
On Linux, Docker Engine integrates directly with the host kernel. Containers use the host kernel through mechanisms like namespaces and cgroups, which the runtime sets up under the control of the daemon. There is no need for a separate virtual machine layer; the engine runs natively.
The daemon runs as a system service and listens on a local communication channel. When you install Docker on a Linux distribution, you usually get both the client and the daemon, and they are configured to talk to each other locally. Because the engine is native, performance is close to running processes directly on the host, with some overhead for isolation and storage layers.
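Because containers on Linux are ordinary processes in namespaces on the host kernel, you can see them from the host with standard tools. A quick sketch:

    # Start a long-running container.
    docker run -d --name demo alpine sleep 300

    # Ask the daemon which host processes belong to the container...
    docker top demo

    # ...and confirm the same process is visible directly on the host.
    ps aux | grep "sleep 300"

    # Clean up.
    docker rm -f demo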
Docker Engine on macOS and Windows
On macOS and on most modern Windows setups, the host operating system does not support Linux containers natively. Since most Docker containers rely on Linux, Docker Desktop introduces an additional layer. Instead of running the daemon directly on the host, Docker Desktop starts a lightweight virtual machine that runs a minimal Linux system. The Docker daemon runs inside that virtual machine.
From your perspective, the docker client on macOS or Windows still behaves the same. When you run a command, the client contacts the daemon inside the virtual machine using a special integration provided by Docker Desktop. The containers themselves live in this virtual machine, not directly on the host system.
This design keeps the user experience consistent across platforms while still using Linux containers. It also explains certain behaviors, such as filesystem performance differences and the need for file sharing settings between the host and the internal Linux environment.
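One easy way to see this extra layer is to ask a container which kernel it runs on. With Docker Desktop on macOS or Windows, the answer is the kernel of the hidden Linux virtual machine, not the host's; on a native Linux installation it matches the host kernel.

    # Print the kernel a container actually sees.
    docker run --rm alpine uname -a
    # On Docker Desktop this typically reports a Linux kernel (often a
    # "linuxkit" build) belonging to the virtual machine, not macOS or Windows.

    # Docker Desktop also registers its own context for the client.
    docker context ls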
On Windows there is also support for Windows containers that use the Windows kernel instead of a Linux one. In that mode, Docker Engine uses Windows-specific technologies and can switch between Linux and Windows container modes. For beginners, it is enough to know that the architecture on non-Linux systems typically involves a virtual machine or a kernel mode switch.
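If you are unsure which mode a Windows installation is currently in, the daemon reports the operating system type it is serving. This sketch assumes a default Docker Desktop setup:

    # Reports "linux" in Linux container mode and "windows" in Windows
    # container mode.
    docker info --format '{{.OSType}}'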
Storage, Networking, and Plugins
Docker Engine also manages storage and networking features, but it does so through modular subsystems. For storage, the engine works with different storage drivers that implement how layered filesystems are organized on the host. For networking, it uses drivers to create various network types and connect containers.
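Both kinds of drivers are easy to observe. The storage driver in use is reported by docker info, and each network lists the driver it was created with; overlay2 and bridge are common defaults but not guaranteed on every system.

    # Which storage driver the daemon uses for image layers (often overlay2).
    docker info --format '{{.Driver}}'

    # Networks are created by named drivers such as bridge, host, and none.
    docker network ls

    # Create a new network with an explicit driver.
    docker network create --driver bridge app-net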
Plugins add more capabilities to the engine, such as alternative volume drivers or logging integrations. These plugins talk to the daemon through defined interfaces. This keeps the core Engine flexible while allowing extensions for specific platforms or enterprise needs. As you work through image and volume topics later, it is useful to remember that these features are orchestrated by Docker Engine using these modular components.
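Managed plugins can be listed and then used much like built-in drivers. The plugin name below is a placeholder; a real plugin would first be installed with docker plugin install.

    # List plugins the daemon currently knows about.
    docker plugin ls

    # Create a volume backed by a plugin's volume driver
    # ("some/volume-plugin" is a placeholder name).
    docker volume create --driver some/volume-plugin my-data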
The Docker Engine API
Everything the Docker client does can also be done by talking directly to the Docker Engine API. This is a REST-style HTTP API that exposes endpoints for actions such as listing containers, building images, and inspecting resources.
For normal command line use you do not need to call the API directly, but many tools and services integrate with Docker by using this API. Continuous integration systems, GUI tools, and custom scripts in larger projects often rely on it. The API version can change over time, and the client and daemon negotiate a compatible version when they connect.
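On Linux you can talk to the same API yourself with curl over the Unix socket, which is essentially what the docker client does internally. The API version in the path below is just an example; your daemon may expose a newer one, and unversioned paths also work.

    # A liveness check: the daemon answers "OK".
    curl --unix-socket /var/run/docker.sock http://localhost/_ping

    # List running containers as JSON, pinning an example API version.
    curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json

    # The version endpoint shows what the daemon supports, which is what the
    # client uses to negotiate a compatible API version.
    curl --unix-socket /var/run/docker.sock http://localhost/version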
All Docker operations go through the Docker Engine API, whether you use the command line, a graphical tool, or a custom integration.
Understanding that the API is the single control point helps when debugging. If the daemon is not reachable, no Docker command will work. If permissions are wrong for the API socket or pipe, the client will fail with connection errors. These are typically problems with access to the Engine itself, not with individual containers.
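In practice, a failure to reach the daemon produces a characteristic error, and the fix is almost always about the daemon or the endpoint rather than any container. A rough checklist on Linux:

    # A typical symptom when the daemon is unreachable looks something like:
    # Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
    # Is the docker daemon running?

    # Check which endpoint the client is trying to reach...
    docker context ls

    # ...and whether the daemon behind it is actually up (systemd systems).
    systemctl status docker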
Summary of Relationships
Docker Engine brings together several parts into a coherent system. The client provides your interface, the daemon manages resources and responds to API calls, the runtime creates isolated processes using the host kernel or a virtual machine, and plugins and drivers handle storage and networking.
When you move on to images, containers, volumes, and networking, you can think of them as resources that the daemon manages under the hood. The architecture stays the same whether you work on a local laptop or a remote server. The main difference is where the daemon runs and how the client connects to it.