From Traditional Applications to Containers
Before touching OpenShift, you need a mental model for what containers are and why they matter in cloud‑native environments.
Traditional application deployment typically looked like this:
- You had a server (physical or virtual).
- You installed an operating system.
- You manually installed libraries, runtimes, and application code.
- Each application on the same server could:
  - Interfere with others (library version conflicts).
  - Be hard to move elsewhere (because the exact environment was “snowflake‑like”).
  - Be slow to provision (waiting for new VMs or manual configuration).
Containers address these issues by packaging applications in a standardized, repeatable, and portable way that is well suited to cloud platforms such as OpenShift.
Key contrasts to keep in mind:
- Host‑centric vs application‑centric:
  - Traditional: focus is on configuring servers.
  - Containers: focus is on packaging and running applications.
- Static vs dynamic:
  - Traditional: long‑lived servers, manual updates.
  - Containers: short‑lived, frequently replaced, automated deployments.
This shift underpins the whole “cloud‑native” and Kubernetes/OpenShift ecosystem.
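To make the contrast concrete, here is a hedged sketch of the two workflows; the host name, package, and image reference are illustrative, and any OCI runtime (podman or docker) would do:

```shell
# Traditional, host-centric: configure a long-lived server by hand.
ssh admin@app-server-01
sudo dnf install -y java-17-openjdk     # runtime installed onto the host itself
sudo cp myapp.jar /opt/myapp/           # app dropped into a hand-built layout
sudo systemctl start myapp              # patched and restarted in place over the years

# Container-based, application-centric: run a prepackaged image anywhere.
podman run -d --name myapp -p 8080:8080 quay.io/example/myapp:1.0  # image reference is illustrative
```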
What Is a Container (Conceptual View)
A container is a lightweight, isolated runtime environment for an application and its direct dependencies, sharing the host operating system kernel.
You can think of it as:
- A process (or group of processes) on a host, with:
  - Its own filesystem view (coming from an image).
  - Its own network namespace and IP address (assigned by the container runtime or orchestration system).
  - Resource isolation (CPU, memory, etc.) enforced by the OS.
Important characteristics:
- Isolation:
  - Containers see only their own processes, files, and network view, even though they share the same kernel with other containers on the host.
- Immutability:
  - The container filesystem starts from a predefined, read‑only image; changes inside a running container are usually considered ephemeral.
- Ephemerality:
  - Containers are expected to be disposable: you delete and recreate them instead of patching in place.
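You can observe these characteristics directly with a container engine; a minimal sketch, assuming podman and network access to the public alpine image (docker accepts the same syntax):

```shell
# Isolation: ps inside the container sees only the container's own processes
# (here just ps itself, running as PID 1 in its own namespace).
podman run --rm docker.io/library/alpine ps

# Resource isolation: limits enforced by the kernel (cgroups), not by the app.
podman run --rm --memory=256m --cpus=0.5 docker.io/library/alpine echo "constrained"

# Ephemerality: with --rm, the container and anything it wrote are gone afterwards.
podman run --rm docker.io/library/alpine sh -c 'echo scratch > /tmp/f && cat /tmp/f'
```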
Containers vs Virtual Machines (At the Conceptual Level)
Both containers and VMs isolate workloads, but in fundamentally different ways.
Virtual Machines
- Each VM includes:
  - A virtual hardware layer (provided by a hypervisor).
  - A full guest operating system.
  - Applications and their dependencies.
- Isolation is at the hardware abstraction level.
- Heavier:
  - Larger images (GBs), more memory per instance.
  - Slower to boot.
Containers
- Multiple containers share the same host OS kernel.
- Isolation is at the process level, via OS features (namespaces, cgroups, etc.).
- Lighter:
  - Smaller images (often hundreds of MB or less).
  - Start in seconds or less.
Conceptual comparison:
- Boot time:
  - VM: boot OS + services ⇒ tens of seconds or more.
  - Container: start process in an existing OS ⇒ usually under a second.
- Density:
  - You can typically run more containers than VMs on the same hardware.
- Overhead vs isolation strength:
  - VMs: stronger isolation, more overhead.
  - Containers: lighter, but rely on the same kernel and OS security model.
In cloud‑native environments, containers are often preferred for application workloads, while VMs are still widely used for strong isolation, legacy apps, and infrastructure components.
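If you have podman and a locally cached image, the startup difference is easy to measure yourself (timings are indicative only and vary by host):

```shell
# Starting a container is typically sub-second once the image is local,
# versus tens of seconds to boot a full guest OS in a VM.
time podman run --rm docker.io/library/alpine true
```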
Container Images (High-Level Concept)
A container image is a packaged, versioned filesystem containing:
- Application binaries or scripts.
- Runtime or language environment (e.g., Java, Python, Node.js).
- System libraries and tools needed by the application.
- A default entrypoint or command to run.
Conceptually:
- An image is immutable: once built and pushed to a registry, you do not change it; you build a new one for updates.
- An image is often built in layers (see the Containerfile sketch after this list):
  - Base OS layer (e.g., minimal Linux).
  - Language/runtime layer.
  - Application dependencies layer.
  - Application code layer.
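As a hedged illustration, a Containerfile for a hypothetical Python service maps onto those layers; the base image, file names, and command are assumptions:

```dockerfile
# Base OS + language runtime layers (base image choice is illustrative).
FROM registry.access.redhat.com/ubi9/python-311

# Application dependencies layer: rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code layer: changes most often, so it comes last for cache reuse.
COPY app.py .

# Default command to run when a container starts from this image.
CMD ["python", "app.py"]
```

Ordering layers from least to most frequently changed keeps rebuilds and pulls fast, because unchanged layers are cached and shared between images.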
At runtime:
- A container is a running instance of an image, with:
  - A writable layer on top (for temporary changes).
  - Its own isolated environment.
For OpenShift users, the important point is:
- You deploy images, not raw application code.
- OpenShift schedules containers created from those images.
Container Registries (Conceptual View)
A container registry is to images what a code repository is to source code:
- Central place to store, version, and retrieve container images.
- Common operations:
  - `push` – upload a built image to a registry.
  - `pull` – download an image from a registry.
Registries typically support:
- Namespaces or repositories to group images, such as:
  - `quay.io/org/app:1.0`
  - `registry.example.com/team/service:latest`
- Tags to identify versions or variants:
  - `:v1.0`, `:prod`, `:dev`
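In day-to-day use, those operations look like the following (registry host, organization, and image names are the placeholders from above):

```shell
# Authenticate, tag a locally built image for the target registry, and upload it.
podman login quay.io
podman tag myapp:1.0 quay.io/org/app:1.0
podman push quay.io/org/app:1.0

# Any other machine (or cluster node) can now retrieve the same image.
podman pull quay.io/org/app:1.0
```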
In an OpenShift environment, images are pulled from:
- External registries (e.g., Quay, Docker Hub, vendor registries).
- An internal registry integrated with the cluster for project‑local images.
Managing where your images are stored, who can access them, and how they are updated is a core operational concern in container platforms.
Cloud-Native Application Characteristics
“Cloud‑native” is more about architecture and practices than technology alone. Containers and Kubernetes/OpenShift are tools that enable those practices.
Common characteristics:
1. Container-Oriented
- Applications are packaged and run as containers.
- All deployment logic is expressed in terms of:
  - Images.
  - Container configuration (environment variables, volumes, ports).
  - Orchestration definitions (e.g., Kubernetes manifests).
2. Declarative and Automated
- The system’s desired state is declared (e.g., “there should be 3 replicas of this service”) rather than scripted step‑by‑step (“run this script to create 3 servers”).
- Automation handles:
  - Scheduling containers on nodes.
  - Restarting failed containers.
  - Rolling out updates.
This is key to how Kubernetes/OpenShift operate.
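For instance, the “3 replicas” statement above is exactly what a Kubernetes Deployment manifest declares; the names and image reference below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                        # desired state: three copies at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: quay.io/org/app:1.0   # illustrative image reference
          ports:
            - containerPort: 8080
```

Applying this (for example with `oc apply -f deployment.yaml`) hands the “how” to the platform: OpenShift schedules the Pods, restarts failed ones, and continually reconciles toward three replicas.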
3. Designed for Failure and Change
Cloud‑native applications assume:
- Nodes can disappear.
- Containers can be killed and restarted at any time.
- The network can be unreliable.
So they are built to:
- Be stateless where possible:
  - Any replica can handle a request; data is stored in external services (databases, object storage, etc.).
- Handle restarts without manual intervention:
  - Configuration is externalized (see the ConfigMap sketch below).
  - Startup logic tolerates being run many times.
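One common form of externalized configuration is a ConfigMap whose keys are injected as environment variables; a minimal sketch, with the name and keys as assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config               # name and keys are illustrative
data:
  DATABASE_URL: "postgres://db.example.com:5432/app"
  LOG_LEVEL: "info"
```

Referencing this from the Deployment’s container spec via `envFrom` with a `configMapRef` surfaces each key as an environment variable, so any replica, restarted at any moment, comes up with the same configuration without a rebuilt image.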
4. Fine-Grained, Service-Oriented
In many cloud‑native setups:
- Large applications are broken into multiple services:
  - Each service can be scaled, versioned, and deployed independently.
- Communication is predominantly via network APIs (often HTTP/REST or gRPC).
OpenShift leverages this by:
- Exposing services via Kubernetes Services, Routes/Ingress, and related abstractions.
- Enabling independent scaling and deployment per component.
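As a hedged example, exposing one such service on OpenShift is typically two commands (reusing the illustrative `myapp` Deployment from earlier sketches):

```shell
# Create a Service that load-balances across the Deployment's replicas.
oc expose deployment/myapp --port=8080

# Create a Route so clients outside the cluster can reach that Service.
oc expose service/myapp
```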
5. Observability and Operational Readiness
Cloud‑native applications:
- Expose metrics, logs, and often health endpoints.
- Are designed to integrate with platform‑level monitoring, logging, and alerting:
  - For example, readiness and liveness probes used by Kubernetes/OpenShift to manage container lifecycle.
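Concretely, probes are declared on the container itself; the excerpt below would sit inside a Deployment’s container spec, with the endpoint paths and port as assumptions about the application:

```yaml
# Excerpt from a container spec (endpoint paths and port are illustrative).
readinessProbe:                 # gate traffic until the app reports ready
  httpGet:
    path: /healthz/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:                  # restart the container if this begins failing
  httpGet:
    path: /healthz/live
    port: 8080
  periodSeconds: 15
```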
This makes it possible to operate large numbers of services at scale.
Why Containers and Cloud-Native Matter for OpenShift
OpenShift is built around the assumption that:
- Your workloads are containerized.
- Your desired system behavior is expressed declaratively (e.g., via YAML manifests, pipelines, and Operators).
- Your architecture fits cloud‑native patterns to benefit from:
  - Automated scaling and self‑healing.
  - Fast, repeatable deployments.
  - Multi‑tenancy and resource sharing within a cluster.
Understanding these fundamentals helps you:
- Interpret OpenShift objects (Pods, Deployments, Routes, etc.) as tools for running containers.
- Design applications that behave well on a container platform:
  - Stateless where possible.
  - Externalized configuration and storage.
  - Health checks and clear runtime boundaries.
- Use OpenShift features effectively for building, deploying, and operating modern applications.