Core Ideas of Application Deployment on OpenShift
In OpenShift, “deployment” is not just starting containers. It is a controlled, repeatable process of turning a desired application state (what should run, how many replicas, with what configuration) into an actual running workload in the cluster. This chapter focuses on the core concepts that underpin how you deploy applications on OpenShift, independent of any specific mechanism like S2I, Deployments, or CI/CD pipelines.
Desired State and Declarative Deployment
OpenShift (built on Kubernetes) uses a declarative model:
- You describe what you want in YAML/JSON manifests.
- The platform continuously reconciles the actual state to match the desired state.
For application deployment, this usually includes:
- Which container image to run (image).
- How many instances (replicas).
- Which ports the app exposes.
- Basic runtime configuration (environment variables, config references).
- Basic runtime policy (restart policy, resource requests/limits).
You typically apply these definitions using the oc CLI or via the web console. Once created, controllers continuously enforce them, automatically redeploying or rescheduling pods as needed.
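As a minimal sketch (the app name, image reference, and values below are illustrative, not from any real registry), a Deployment covering these fields might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 3                  # how many instances
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: registry.example.com/my-api:v1   # which image to run
        ports:
        - containerPort: 8080                   # which port the app exposes
        env:
        - name: LOG_LEVEL                       # basic runtime configuration
          value: info
        resources:                              # basic runtime policy
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            memory: 512Mi
```

Saved as my-api-deployment.yaml, this could be applied with oc apply -f my-api-deployment.yaml; from then on, the Deployment controller keeps three replicas running.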
From Image to Running Application
At a conceptual level, deployment involves several stages:
1. Application artifact
   - Source code or pre-built container image.
2. Build (optional in deployment stage)
   - OpenShift can build images (e.g., S2I), but that is a separate phase from deployment itself.
3. Deployment definition
   - A Kubernetes/OpenShift resource describing how to run the image (e.g., Deployment or DeploymentConfig).
4. Scheduling and execution
   - The control plane schedules pods to nodes; kubelet starts containers.
5. Exposure and access
   - Services, routes/ingress, and DNS expose the running pods to other workloads and to users.
In OpenShift, application deployment concepts center around how you define and manage step 3 so that steps 4 and 5 happen reliably and repeatably.
Key Deployment Concepts and Objects
Several resource types participate in application deployment. Details of each are covered elsewhere; here we focus on their conceptual role in deployments.
Workload Controllers
Workload controllers manage pods over time:
- They ensure the desired number of replicas are running.
- They handle updates to the application image or configuration.
- They provide rollout and sometimes rollback logic.
Common controllers for stateless applications:
- Deployment: Standard Kubernetes controller for stateless apps.
- DeploymentConfig: OpenShift-specific controller with additional triggers and hooks.
For stateful applications, StatefulSet is used, but the conceptual ideas of desired state, rollout, and replica management are similar.
Pods and Replica Management
A pod is the smallest schedulable unit—one or more tightly coupled containers that share:
- Network namespace (same IP, ports).
- Storage volumes.
- Some lifecycle aspects.
Deployment controllers rarely manage individual pods directly. Instead, they manage a template:
- You define a pod template inside the controller spec.
- The controller creates pods based on that template and keeps the right number running.
Conceptually:
- You modify the template to change your application (e.g., new image tag).
- The controller performs a rollout to shift the running pods to match the new template.
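For example (names hypothetical, continuing the my-api sketch above), pointing the template at a new image tag is enough to start a rollout:

```bash
# Update the pod template's image; the controller notices the change and rolls out
oc set image deployment/my-api my-api=registry.example.com/my-api:v2
```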
Labels and Selectors
Labels are crucial to deployment concepts because they tie together controllers, services, and pods.
- Labels: Key-value pairs attached to objects (pods, services, deployments, etc.).
  - Example: app: my-api, version: v2.
- Selectors: Queries that match objects based on their labels.
In deployments:
- Controllers use selectors to know which pods they own.
- Services use selectors to know which pods receive traffic.
- Rollouts often use labels (e.g., version) to drive progressive traffic switching.
Consistent labeling is fundamental for clean, manageable deployments.
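A minimal sketch of this wiring, reusing the hypothetical app: my-api label from above; the Service selects any pod carrying that label, regardless of which revision created it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api        # matches the pod template labels in the Deployment
  ports:
  - port: 80
    targetPort: 8080
```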
Deployment Lifecycle
Conceptually, the deployment lifecycle goes through these stages:
- Initial deployment
- Define a deployment object and apply it.
- Controller creates the first set of pods.
- Service (if defined) routes traffic to these pods.
- Configuration or image change
- You update the deployment spec (e.g., new image, env var, resource setting).
- The controller detects the change and creates a new revision.
- Rollout
- The controller incrementally replaces old pods with new ones, according to its strategy.
- Rollout status indicates whether the deployment is progressing, complete, or failed.
- Steady state
- Desired number of updated pods are running.
- Controller maintains this state in the face of failures or cluster changes.
- Rollback (optional)
- If the new version is problematic, you can roll back to a previous revision.
- The controller reverts the pod template to an earlier revision and performs a new rollout with it.
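Assuming the hypothetical my-api Deployment, this lifecycle maps to a handful of oc rollout subcommands:

```bash
oc rollout status deployment/my-api                  # watch a rollout progress or fail
oc rollout history deployment/my-api                 # list recorded revisions
oc rollout undo deployment/my-api                    # roll back to the previous revision
oc rollout undo deployment/my-api --to-revision=2    # roll back to a specific revision
```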
Deployment Strategies (Conceptual)
How pods move from one version to another is often described as a strategy. OpenShift supports various strategies, but conceptually they fall into common patterns:
Recreate
- All existing pods are stopped.
- New version pods are started afterward.
- Implies downtime; simplest strategy.
- Useful when:
- The application cannot run multiple versions in parallel.
- The underlying storage or external systems require exclusive access.
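In a Deployment spec this is a one-line choice, sketched here:

```yaml
spec:
  strategy:
    type: Recreate   # stop all old pods before starting new ones; implies downtime
```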
Rolling / RollingUpdate
- Gradually replace old pods with new ones.
- Keep some old pods running while new ones start.
- Aims to minimize or avoid downtime.
- Key controls (conceptually):
- How many new pods can start before old ones are stopped.
- How many old pods can be unavailable during the transition.
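These two controls correspond to maxSurge and maxUnavailable in a Deployment's strategy; a sketch with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra (new) pod beyond the desired count
      maxUnavailable: 0    # never drop below the desired count of ready pods
```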
Blue-Green (Conceptual)
Often implemented via services/routes rather than a built-in strategy:
- Maintain two complete environments:
- Blue: current production.
- Green: new version.
- Deploy and verify the green environment without impacting users.
- Switch traffic from blue to green (e.g., by changing service selector or route target).
- Rollback is simply switching traffic back if needed.
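One common sketch, assuming the blue and green pods share app: my-api but differ in a version label: repoint the Service selector to flip all traffic at once.

```bash
# Flip the Service from the blue pods to the green pods (names hypothetical)
oc patch service my-api -p '{"spec":{"selector":{"app":"my-api","version":"green"}}}'
```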
Canary (Conceptual)
A more granular, gradual rollout:
- Deploy new version to a small percentage of users or traffic (the “canary”).
- Monitor behavior, metrics, and errors.
- Gradually increase share of traffic if healthy; roll back if issues appear.
In OpenShift, canary is usually implemented by adjusting replica counts and routes/services, often assisted by CI/CD tools or service mesh.
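With OpenShift routes, weighted backends give a simple canary; the service names and the 90/10 split below are illustrative:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-api
spec:
  to:
    kind: Service
    name: my-api-stable
    weight: 90               # most traffic stays on the stable version
  alternateBackends:
  - kind: Service
    name: my-api-canary
    weight: 10               # a small slice goes to the canary
```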
Triggers and Automation
Deployments often respond automatically to changes:
- Image change triggers (conceptual):
- Automatically start a new deployment when a new image tag is pushed to a registry.
- Config change triggers:
- Start rollouts when configuration or secrets used by the deployment change.
- Webhook/CI triggers:
- Start deployments when a CI pipeline completes or when a Git repository changes.
OpenShift’s DeploymentConfig offers built-in image and config change triggers; a plain Kubernetes Deployment can instead be integrated with CI/CD tools that act as “external triggers.”
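A sketch of both built-in triggers on a DeploymentConfig (container and image stream names hypothetical; replicas, selector, and pod template omitted for brevity):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-api
spec:
  triggers:
  - type: ConfigChange                 # redeploy when the pod template changes
  - type: ImageChange                  # redeploy when a new image lands in the stream
    imageChangeParams:
      automatic: true
      containerNames:
      - my-api
      from:
        kind: ImageStreamTag
        name: my-api:latest
```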
Versioning and Revisions
Each significant change to a deployment’s pod template creates a revision:
- Revisions represent different versions of your deployment spec and pods.
- Rollouts move the active version from one revision to the next.
- Rollbacks switch the active spec back to an earlier revision.
Conceptually, think of revisions as a deployment’s internal history, separate from your source code or Git history, but related to them.
Readiness, Liveness, and Progressive Traffic
To ensure safe deployments, OpenShift relies on health information from your containers:
- Readiness:
- Indicates when a pod is ready to serve traffic.
- Deployment controllers and services use this to avoid sending traffic to pods that are still starting, migrating data, or warming caches.
- Liveness:
- Indicates whether a pod is healthy or stuck.
- The platform can restart failing containers based on liveness checks.
In deployment terms:
- New pods become part of the rollout only after they are marked ready.
- If readiness never becomes true, rollouts can stall.
- Proper probes are essential to safe rolling and canary deployments.
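A sketch of both probes in a container spec; the /healthz/* endpoints are hypothetical and must actually exist in your application:

```yaml
containers:
- name: my-api
  image: registry.example.com/my-api:v2
  readinessProbe:
    httpGet:
      path: /healthz/ready     # gate traffic until the app reports ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz/live      # restart the container if this starts failing
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
```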
Resource Management and Scheduling Considerations
Deployment concepts also include where and how your pods run:
- Resource requests/limits:
- Influence scheduling and stability.
- Under-requesting resources can cause performance issues; over-requesting can prevent scheduling.
- Node selection and placement (conceptual):
- Labels and affinity rules can steer workloads to particular node types (e.g., GPU nodes, large-memory nodes).
- Taints and tolerations:
- Control which pods can schedule on “special” nodes.
When designing deployment specs, you define not only what to run but also constraints on where and with what resources it should run.
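A sketch combining these constraints in a pod spec (the node label and taint key/value are illustrative, not standard names):

```yaml
spec:
  nodeSelector:
    workload-type: gpu           # steer pods to nodes carrying this label
  tolerations:
  - key: dedicated               # allow scheduling onto matching tainted nodes
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: my-api
    image: registry.example.com/my-api:v1
    resources:
      requests:                  # what the scheduler reserves for the pod
        cpu: 250m
        memory: 256Mi
      limits:                    # hard ceiling enforced at runtime
        cpu: "1"
        memory: 512Mi
```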
Configuration as Part of Deployment
While configuration and secrets management are covered elsewhere, from a deployment perspective:
- Deployments usually reference:
  - ConfigMaps for non-sensitive configuration.
  - Secrets for sensitive configuration.
- Rolling out a configuration change is conceptually similar to rolling out a new image:
- The pod template changes (e.g., different env var values or mounted config files).
- A new revision is created and rolled out.
Good practice is to keep configuration external to the image so you can reuse the same image across environments (dev, test, prod) with different deployment specs.
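A sketch of both kinds of reference in a container spec, with hypothetical ConfigMap and Secret names:

```yaml
env:
- name: LOG_LEVEL
  valueFrom:
    configMapKeyRef:
      name: my-api-config      # non-sensitive configuration
      key: log-level
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-api-db          # sensitive configuration
      key: password
```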
Deploying Different Types of Applications
Conceptually, deployments can differ by application type:
- Stateless services:
- Ideal for rolling and canary strategies.
- Easy horizontal scaling by increasing replicas.
- Stateful applications:
- Use different controllers (e.g., StatefulSet) and storage.
- Tend to require more careful rolling strategies (ordered updates, minimal parallelism).
- Batch/one-shot jobs:
- Often run as Job or CronJob rather than a long-running deployment.
- Deployment concepts still apply (desired count, retry behavior), but the lifecycle is finite.
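A minimal Job sketch (image name hypothetical) showing the finite-lifecycle equivalents of desired count and retry behavior:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  completions: 1          # run to completion exactly once
  backoffLimit: 3         # retry behavior: up to three retries on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: registry.example.com/my-api-migrations:v1
```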
When designing your deployment, you choose the appropriate controller and strategy for the workload’s characteristics.
Operational Observability of Deployments
From a conceptual standpoint, successful deployment processes are observable and inspectable:
- Status and history:
- View rollout status, current revision, and previous revisions.
- Events:
- Warnings and info about why pods were restarted, failed, or could not be scheduled.
- Metrics and logs:
- Used to verify whether a deployment is healthy post-rollout.
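In practice these views map to a few commands (Deployment name hypothetical):

```bash
oc describe deployment my-api              # status, conditions, and related events
oc get events --sort-by=.lastTimestamp     # why pods restarted or failed to schedule
oc logs deployment/my-api                  # verify application health post-rollout
```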
Modern deployment practices treat rollouts as experiments that must be monitored; deployment concepts inherently include this feedback loop.
Policy, Governance, and Guard Rails
In a shared OpenShift cluster, deployments are constrained by policies:
- Quotas and limits influence:
- How many replicas you can run.
- How much CPU/memory each pod may request.
- Security policies (e.g., Security Context Constraints):
- Restrict what containers can do (run as root, mount privileged volumes, etc.).
- Admission controllers and Operators:
- Can enforce organizational deployment rules (required labels, mandatory probes, approved images, etc.).
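For example, a ResourceQuota (values illustrative) caps what all deployments in a namespace may collectively consume:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    pods: "20"               # caps total replicas in the namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```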
From a conceptual point of view, deployment is not purely a developer concern; it is shaped by platform governance.
Putting It All Together
When you “deploy an application” on OpenShift, conceptually you are:
- Defining desired state via deployment resources (what runs, how many, where, and with what configuration).
- Choosing or inheriting a strategy for moving from the current version to the next (recreate, rolling, blue-green, canary).
- Integrating triggers so new versions or config changes lead to controlled rollouts.
- Using labels, selectors, and services to connect deployments to traffic and other services.
- Observing and iterating using status, logs, and metrics to validate each rollout.
- Operating within cluster policies and constraints that shape what is allowed and how resources are used.
Subsequent chapters will show how these concepts are realized concretely via Source-to-Image, container-based deployments, and specific OpenShift controllers like DeploymentConfig and Deployment, as well as how to manage rolling updates and rollbacks in practice.