Learning goals for this chapter
After this chapter you should be able to:
- Understand how applications are typically built and deployed on OpenShift.
- Navigate the main OpenShift objects involved in application deployment.
- Compare build and deployment approaches (S2I vs pre-built images, etc.).
- Recognize how OpenShift’s opinionated workflow differs from “raw” Kubernetes.
This chapter provides the foundation; the following subchapters will dive deeper into each major mechanism (S2I, container-based, DeploymentConfigs, rollouts, etc.).
The OpenShift view of “an application”
On OpenShift, “an application” is usually not a single object but a collection of resources:
- A container image (from S2I or a registry).
- One or more `Deployment` or `DeploymentConfig` objects that define how pods are run.
- Services and Routes/Ingress for network access (covered elsewhere).
- ConfigMaps, Secrets, PVCs, etc. for configuration and data (covered elsewhere).
- Optionally, build resources (`BuildConfig`, `ImageStream`) if you build in-cluster.
Conceptually, OpenShift standardizes three phases:
- Build – how code becomes a container image.
- Deploy – how that image is run as pods.
- Release / expose – how users or other systems access it.
This chapter focuses on the first two: building and deploying.
Build models in OpenShift
OpenShift supports multiple ways to produce container images. The choice has a big impact on workflows, permissions, and automation.
1. In-cluster builds (OpenShift-native)
These are builds that run inside the cluster as Kubernetes pods and are managed by OpenShift.
Key native components:
- BuildConfig – defines how to build (strategy, source, output).
- Build – a specific build run created from a `BuildConfig`.
- ImageStream – a named, versioned view of images, often used as build inputs/outputs.
Core strategies (explained in detail in later subchapters):
- Source-to-Image (S2I) builds – take source code + builder image → application image.
- Docker builds – build from a Dockerfile in source repo.
- Custom builds – advanced, user-defined builder images.
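As a minimal sketch, an S2I `BuildConfig` that builds from a Git repository and pushes the result into an ImageStream might look like the following (the repository URL, builder image, and names are placeholders, not from this chapter):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:18-ubi8        # placeholder builder image
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest            # built image is tracked here
  triggers:
    - type: GitHub
      github:
        secret: example-webhook-secret   # placeholder webhook secret
    - type: ImageChange               # rebuild when the builder image updates
      imageChange: {}
```

The `source`, `strategy`, and `output` fields correspond directly to the "how to build" definition described above; the `triggers` list enables the webhook and image-change behaviors discussed next.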
When you trigger a build (manually, via webhook, or automatically):
- OpenShift creates a `Build` object.
- A build pod runs the chosen build strategy.
- The resulting image is pushed to an internal registry and tracked via an `ImageStream`.
Advantages:
- No extra external CI infrastructure required.
- Works well with internal registries and security policies.
- Integrates tightly with deployment triggers.
Trade-offs:
- Build workloads consume cluster resources.
- Typically best suited to small and medium apps, or to organizations where a central platform team manages builds.
2. External CI/CD builds
In this model, building the image happens outside OpenShift:
- External CI systems (Jenkins, GitLab CI, GitHub Actions, etc.) build images.
- Images are pushed to an internal or external registry.
- OpenShift is mainly responsible for deployment, not building.
Typical pattern:
- Developer pushes code.
- External CI builds and tests the image.
- CI pushes image to registry (e.g. Quay, Docker Hub, internal registry).
- CI calls `oc apply`/`oc rollout`, or updates image tags, to deploy on OpenShift.
Advantages:
- Offloads build workloads from the cluster.
- Reuses existing DevOps tooling.
- Easier to enforce complex pipelines, tests, and quality gates.
Trade-offs:
- More moving parts to manage and secure.
- You must integrate image update signals with OpenShift deployments.
3. Hybrid models
Organizations frequently mix approaches:
- Critical services: built externally, deployed carefully with strong approvals.
- Simpler line-of-business apps: built using OpenShift-native S2I and BuildConfigs.
- Experimental or training projects: built via web console “Add to Project” workflows.
Understanding that OpenShift supports all these patterns is central to designing a workable application delivery process.
Deployment models in OpenShift
Once you have an image, OpenShift runs it as pods using workload controllers. On OpenShift, you will commonly see:
- Deployments (Kubernetes-native, recommended for new apps).
- DeploymentConfigs (OpenShift-specific, legacy but still widely used).
You will explore these in depth in the dedicated subchapter. For now, it’s important to understand:
- Both represent a desired state of running pods (replicas, template, update strategy).
- They manage rollouts (updating from one version to another).
- They integrate with Services and Routes to keep traffic flowing during changes.
Basic deployment workflow
Conceptually, a typical deployment goes through:
- Define the pod template
  - Container image (e.g. `quay.io/org/app:1.0.3`).
  - Environment variables, ports, resource requests/limits.
  - Configuration via ConfigMaps/Secrets/Volumes.
- Wrap the template in a controller
  - Use a `Deployment` or `DeploymentConfig` with:
    - `replicas`: how many pods.
    - `strategy`: how to roll out changes (rolling, recreate, etc.).
- Connect to the network
  - A `Service` provides stable DNS and load balancing across pods.
  - A `Route` or Ingress exposes the service externally if needed.
- Roll out changes
  - Update image tags, environment variables, or other fields.
  - The controller orchestrates creating new pods and scaling down old ones.
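The steps above can be sketched as a pair of manifests; the names, image tag, ports, and resource values are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2                     # how many pods
  strategy:
    type: RollingUpdate           # how to roll out changes
  selector:
    matchLabels:
      app: myapp
  template:                       # the pod template
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: quay.io/org/app:1.0.3   # placeholder image from the example above
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                    # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 8080
```

A `Route` (or Ingress) would then reference the `myapp` Service to expose it externally; updating the `image:` field triggers the rolling update described in the last step.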
Application deployment approaches
From a developer’s point of view, there are two main ways to “get an app running”:
1. “Bring your own image” (container-based deployments)
You use a pre-built image, often produced by an external CI/CD system:
- Minimal OpenShift-specific configuration: a `Deployment` pointing at `image: your-registry/your-app:tag`.
- Optional `HorizontalPodAutoscaler`, ConfigMaps, Secrets, PVCs, etc.
This is close to vanilla Kubernetes and is covered in detail later in “Container-based deployments”.
2. “Bring your source code” (S2I-based workflows)
You give OpenShift access to your source code repository and a builder image; OpenShift:
- Builds the image inside the cluster with S2I.
- Stores it in an internal registry.
- Triggers a deployment automatically.
OpenShift’s web console and `oc new-app` strongly encourage this pattern:
- Developers can start from source repos quickly.
- Platform teams can standardize on supported builder images.
This is the focus of the upcoming “Source-to-Image (S2I)” subchapter.
Integrating builds and deployments
A key benefit of OpenShift’s opinionated model is automated promotion from build to running application.
Typical integration mechanisms:
- Image change triggers on `Deployment`/`DeploymentConfig`:
  - Watch an `ImageStreamTag` (e.g. `myapp:latest`).
  - When the image changes (after a new build), trigger a rollout.
- Build triggers on `BuildConfig`:
  - Webhooks from Git, or image-change triggers from builder images.
  - A build starts automatically when the source or base image changes.
By combining these:
- Code is pushed → BuildConfig starts a build.
- Build produces a new image in an ImageStream.
- Image change trigger fires on the Deployment/DeploymentConfig.
- A rollout to the new version starts automatically.
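The deployment side of this chain can be sketched with a `DeploymentConfig` carrying an image change trigger (all names are placeholders):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # Placeholder; the ImageChange trigger rewrites this reference
          # whenever the watched ImageStreamTag is updated by a build.
          image: myapp:latest
  triggers:
    - type: ConfigChange          # redeploy when the pod template changes
    - type: ImageChange           # redeploy when the image changes
      imageChangeParams:
        automatic: true
        containerNames:
          - myapp
        from:
          kind: ImageStreamTag
          name: myapp:latest
```

With this in place, a successful build that pushes to `myapp:latest` automatically starts a new rollout, completing the in-cluster pipeline described above.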
This creates a basic in-cluster CI-like pipeline without external tooling. The CI/CD-focused chapter and subchapters will cover more advanced pipelines and GitOps patterns.
Environment separation and promotion
Most real deployments have multiple environments:
- `dev`/`test`/`staging`/`prod` projects or namespaces.
- Different levels of access control and resource limits.
Common patterns for managing this in OpenShift:
- Same image, different configuration:
  - Promote the same container image through environments.
  - Only change configuration (ConfigMaps/Secrets, replicas, limits, routes).
- Image promotion via tags or ImageStreams:
  - Use distinct image tags (e.g. `:dev`, `:staging`, `:prod`) or ImageStreams.
  - Promote by retagging or copying images between registries/namespaces.
- Separate build and deploy responsibilities:
  - A `dev` project builds images.
  - `test`/`prod` projects deploy only approved images from controlled registries.
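Tag-based promotion can be expressed declaratively by pointing one ImageStream tag at another; a sketch, with placeholder names:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
    # The prod tag tracks whatever image is currently tagged staging,
    # so "promotion" is updating this reference (or retagging).
    - name: prod
      from:
        kind: ImageStreamTag
        name: myapp:staging
```

The imperative equivalent is `oc tag myapp:staging myapp:prod`; either way, deployments watching `myapp:prod` only roll out images that were explicitly promoted.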
This separation is essential for compliance and governance and is built into how many OpenShift clusters are organized.
Developer workflows on OpenShift
While the details differ by organization, a typical OpenShift-based developer workflow looks like:
- Initial onboarding
  - Get access to a project/namespace.
  - Choose a deployment model: S2I or container-based.
- First deployment
  - Use `oc new-app` or the web console to create a basic application.
  - Confirm pods, services, and routes are created and healthy.
- Iterative development
  - Modify code, push to Git.
  - Build and deploy automatically via:
    - OpenShift BuildConfigs, or
    - external CI/CD triggering `oc` commands or GitOps flows.
- Observability and tuning
  - Inspect logs, events, and metrics.
  - Adjust resource requests/limits and scaling.
  - Modify the deployment strategy for faster or safer rollouts.
- Promotion to higher environments
  - Use pipelines, manual approvals, or GitOps to move changes toward production.
Later chapters (CI/CD, monitoring, security, scaling) will expand on each of these aspects; this chapter’s role is to show how the core building and deployment mechanisms underpin the whole workflow.
Design considerations for building and deploying on OpenShift
When deciding how to structure your builds and deployments, you should think about:
- Where builds run
  - In-cluster (OpenShift BuildConfigs/S2I) vs external CI runners.
  - Security and resource consumption in the cluster.
- How images are managed
  - Use of the internal registry vs external registries.
  - Use of ImageStreams for tracking and triggers.
- How changes are rolled out
  - Choice of `Deployment` vs `DeploymentConfig`.
  - Rollout strategies (rolling vs recreate) and their tuning.
  - Use of rolling updates and rollbacks (covered later).
- How configuration is separated from code
  - Use of ConfigMaps, Secrets, and environment variables.
  - Keeping images generic and environment-agnostic.
- How environments and teams are isolated
  - Projects/namespaces per team or application.
  - Access control and quotas influencing deployment patterns.
Each of the following subchapters will focus on one part of this overall picture—S2I, container-based deployments, DeploymentConfigs/Deployments, and rollout strategies—to give you practical tools for implementing robust application delivery on OpenShift.