14.1 Running Containers in Production

Thinking Differently About Production

Running containers on your laptop and running them in production are two very different activities. In development, you can restart, rebuild, and experiment freely. In production, containers are part of a larger system that must stay available, predictable, and observable. This chapter focuses on what changes when you move containers into a production environment and what this implies for how you run them.

Pets vs Cattle and Ephemeral Containers

Traditional servers are often treated like unique machines that are manually tuned and carefully preserved. Containers in production should instead be treated as disposable units that can be recreated at any time from an image and a configuration. A container might be stopped, replaced, rescheduled, or terminated without warning, and your system should tolerate this.

This mindset affects how you design your application. Any data that must survive restarts should not live inside the container filesystem. Configuration must not be hard-coded into images. Your code cannot assume that a specific container instance will always handle the same users or hold the same in-memory state. In a production setup it is normal to see many containers come and go over time, all acting as interchangeable instances of the same service.

Important: In production, always assume containers are ephemeral and replaceable. Never depend on container local state for critical data or configuration.
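
Any state worth keeping therefore belongs in a volume or an external service, not in the container's writable layer. A minimal sketch with the Docker CLI, using a hypothetical image myapp and a hypothetical volume appdata:

    # Create a named volume; it persists independently of any container.
    docker volume create appdata

    # Mount the volume at the application's data directory. If this
    # container is replaced, the new instance reattaches to the same
    # volume and finds the data intact.
    docker run -d --name web \
      -v appdata:/var/lib/myapp \
      myapp:1.4.2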

Immutable Images and Repeatable Deployments

In production you should avoid modifying containers after they start. Instead, you build an image once and run that exact image in each environment. This is often called an immutable image approach. If you need to change code or dependencies, you build a new image and deploy that, rather than patching a running container.

This approach makes deployments repeatable. A container that runs successfully in staging should be identical to the one that will run in production, apart from environment-specific settings such as hostnames or credentials. In practice this means you will tie every production container to a specific image reference, usually a version tag or digest, and avoid floating tags that can change unexpectedly.

Rule: Use immutable images for production. Build once, promote the same image across environments, and do not modify running containers.
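
For example, a build-once, promote-everywhere flow might look like the following sketch, where the registry address, image name, and version are hypothetical:

    # Build the image once and give it a unique, versioned tag.
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2

    # Staging and production each run this exact reference, with only
    # the injected configuration differing between environments.
    docker run -d --env-file staging.env registry.example.com/myapp:1.4.2
    docker run -d --env-file production.env registry.example.com/myapp:1.4.2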

Treating Configuration as External

A production container should not bake environment-specific values into the image. Instead, configuration lives outside and is injected when the container starts. Environment variables, mounted configuration files, and secrets managers are common approaches.

With this pattern, you can run the same image in multiple environments. The image represents your application code and dependencies, while configuration defines how it behaves in a particular context. For example, you can switch database endpoints or feature flags by changing only the environment configuration, without rebuilding the image.
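
As a sketch, the same hypothetical image can be pointed at a different backend purely through injected environment variables; the variable names and hostnames here are illustrative:

    # Development: local database, verbose logging.
    docker run -d \
      -e DATABASE_URL=postgres://db.dev.internal/app \
      -e LOG_LEVEL=debug \
      myapp:1.4.2

    # Production: different endpoint and settings, identical image.
    docker run -d \
      -e DATABASE_URL=postgres://db.prod.internal/app \
      -e LOG_LEVEL=warn \
      myapp:1.4.2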

In production you must also consider how configuration changes are rolled out. Updating a single environment variable might require you to restart or recreate containers. This is usually handled by your deployment system, but your application should be robust to configuration changes and restarts.

Scaling with Multiple Container Instances

Production workloads often require more capacity or redundancy than a single container instance can provide. The usual approach is to run multiple containers from the same image and spread them across hosts or nodes. This gives you horizontal scaling and reduces the impact of any single container failure.

Your application must be prepared for this. Stateless services are ideal, since any instance can handle any request. If your application uses in-memory caches or sessions, you might need external stores so that multiple containers can share the same data. Load balancing becomes central, since traffic must be distributed across containers and unhealthy ones must be avoided.

In practical terms, you will not rely on one long-lived container in production. Instead, you run many shorter-lived instances, with automated systems that create and destroy them as needed.
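
On a single host, Docker Compose already illustrates the pattern. Assuming a hypothetical compose service named web, scaling to three interchangeable instances is one command; orchestrators extend the same idea across many nodes:

    # Start three instances of the web service from the same image.
    # A load balancer or reverse proxy in front distributes traffic
    # across them and skips instances that are unhealthy.
    docker compose up -d --scale web=3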

Health Checks and Automatic Recovery

In production, a container that has started is not necessarily healthy. The application inside might be stuck, misconfigured, or degraded. Health checks allow the platform to distinguish a healthy container from a problematic one.

A health check is usually an endpoint or command that is run periodically to verify that the application is working. If the health check fails consistently, your platform can restart the container or remove it from service. This gives you self-healing behavior, where many common issues are resolved by automatic restarts instead of manual intervention.

Rule: Every production container should expose a reliable health check that reflects the real readiness and liveness of the application.
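
Docker supports this directly through health check options. A minimal sketch, assuming the application exposes a hypothetical /healthz endpoint on port 8080 and the image contains curl:

    # Run the container with a periodic health check.
    docker run -d --name web \
      --health-cmd='curl -f http://localhost:8080/healthz || exit 1' \
      --health-interval=30s \
      --health-timeout=5s \
      --health-retries=3 \
      --health-start-period=15s \
      myapp:1.4.2

    # The platform (or an operator) can then query the health state.
    docker inspect --format '{{.State.Health.Status}}' web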

Managing Image Versions in Production

In development, you can rely on generic tags such as "latest" or even rebuild images frequently with the same tag. In production, this pattern is risky, because the tag might start pointing to a different image without a clear record of the change. Instead, you give each production image a stable and unique identifier tied to a version or build.

A typical production setup uses semantic versions or build numbers along with immutable registry references. Rollouts are defined as a transition from one specific image to another, and rollback means returning to a previous known image. You should be able to answer exactly which image is currently running in production, when it was built, and from which source commit.
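
Tags can be repointed, but registry digests cannot. A sketch of pinning a deployment to an immutable reference, with hypothetical names:

    # After pushing, resolve the digest behind a version tag.
    docker image inspect --format '{{index .RepoDigests 0}}' \
      registry.example.com/myapp:1.4.2

    # Run by digest: this reference can never silently change.
    # (Substitute the actual digest printed by the command above.)
    docker run -d registry.example.com/myapp@sha256:<digest-from-above>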

This detailed tracking is crucial for audits, debugging, and safe deployment strategies. It is also tightly related to how you will handle environment configuration and promotion from testing environments into production.

Observability and Diagnostics Expectations

When a container misbehaves in production, you cannot attach a debugger as you might on your local machine. Instead, you depend on logs, metrics, and traces collected from running containers. Production containers must therefore be configured to emit useful diagnostic information in a predictable way.

Logs are usually written to standard output and standard error so that they can be captured by the platform and forwarded to centralized systems. Metrics are often exposed through HTTP endpoints or push gateways. Traces, if you use distributed tracing, must include identifiers that connect calls across services. This observability data will be your main tool for understanding behavior in production.
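
With Docker's default json-file logging driver, for example, you can cap log growth on the host and read the captured stream; the options below are a sketch, and real platforms typically forward these logs to a central system:

    # Write stdout/stderr to rotated JSON log files on the host.
    docker run -d --name web \
      --log-driver=json-file \
      --log-opt max-size=10m \
      --log-opt max-file=3 \
      myapp:1.4.2

    # Follow the captured output, as a log shipper or operator would.
    docker logs --tail 100 -f web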

A production-ready container image includes the right logging configuration and the minimal tools required for troubleshooting. You should aim for enough visibility to diagnose issues without needing to manually enter containers or modify them on the fly.

Running with Policies and Guardrails

In production, containers do not run as isolated experiments. They operate under policies that control how often they restart, how they use resources, and how they recover from failures. These policies are essential to maintain stability and fairness on shared infrastructure.

Restart behavior determines what happens if a container exits. Some services should always be restarted, while others should stop in order to signal a broader problem. Resource policies limit CPU and memory usage to prevent a single container from affecting others. Networking and access policies define which services can communicate and which data they can reach.
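
On a plain Docker host these policies map directly to runtime flags. A sketch with hypothetical limits:

    # Restart automatically on failure, but give up after five attempts
    # so a persistently crashing service surfaces instead of looping.
    docker run -d --name web \
      --restart=on-failure:5 \
      --memory=512m \
      --cpus=1.5 \
      myapp:1.4.2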

These guardrails mean that a container in production cannot assume unlimited resources or unrestricted access. Applications must cope with constrained environments and transient failures, and your operational setup must anticipate and define the appropriate policies.

From Single Hosts to Orchestrated Platforms

While a single Docker host may be acceptable for very small or experimental production setups, most real-world deployments use some orchestration or management layer. This can be a built-in orchestrator, a cluster manager, or a full container platform. Regardless of the specific technology, the idea is the same: production containers are managed declaratively, with the platform responsible for creating, monitoring, and rescheduling them.

From the perspective of running containers in production, this means you no longer start or stop individual containers manually. Instead, you describe the desired state, such as how many instances of a service should run, which image and configuration they should use, and how they should be exposed. The platform turns this description into containers running on multiple hosts.
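
Docker's built-in Swarm mode is one concrete example of this shift: you declare a service and a replica count, and the manager keeps reality matching that description. The names below are hypothetical:

    # Declare the desired state: three replicas of this exact image.
    docker service create --name web --replicas 3 \
      registry.example.com/myapp:1.4.2

    # If a container or node fails, the orchestrator reschedules
    # replacements until three replicas are running again.
    docker service ls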

Understanding that shift is important. Direct Docker commands against a single host are not the typical production interface. Your container images and runtime behavior must fit within a higher-level system that controls how containers are scheduled, updated, and retired.

Designing for Safe Change

Production environments must change over time. You need to deploy new versions, apply security fixes, and adjust settings, all while keeping your systems running. Running containers in production is therefore closely tied to how you roll out and roll back changes.

The key idea is that a production deployment is a controlled experiment using containers as units of change. You switch from one image version to another in small steps, monitor behavior using your observability tools, and fall back quickly if something goes wrong. This often involves multiple container instances, traffic shifting, and health checks, but the underlying requirement is the same. Your containers must be designed and configured to tolerate this dynamic lifecycle.
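
In Swarm terms, for example, a gradual rollout and its escape hatch look like the following sketch; other orchestrators differ in syntax but not in idea:

    # Roll out the new version one container at a time, pausing
    # between replacements so problems surface early.
    docker service update \
      --image registry.example.com/myapp:1.4.3 \
      --update-parallelism 1 \
      --update-delay 10s \
      web

    # If monitoring shows trouble, return to the previous image.
    docker service rollback web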

By treating images as immutable, configuration as external, containers as disposable, and observability as essential, you create a foundation where changing production is less risky and more predictable. The rest of the deployment tooling builds on these properties.
