Overview of the Dev-to-Prod Lifecycle on OpenShift
In OpenShift, the development-to-production (dev-to-prod) lifecycle is less about individual features of the platform and more about how they are combined into a repeatable, automated flow. The emphasis is on:
- Treating everything as code (manifests, pipelines, configs, policies).
- Using projects/namespaces to model environments.
- Using Git, CI/CD, and automation to move changes through environments.
- Making the cluster enforce standards (security, quotas, policies) instead of relying on manual checks.
This chapter focuses on how that end‑to‑end lifecycle typically looks in practice on OpenShift, not on the details of any single tool.
Environment Design and Promotion Strategy
Mapping environments to OpenShift projects
A common pattern is to map each environment to one or more OpenShift projects:
- `myapp-dev`
- `myapp-test`
- `myapp-staging`
- `myapp-prod`
Sometimes a single cluster hosts multiple environments; in other cases, each environment is a separate cluster. OpenShift’s multi-project design supports both.
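As a minimal sketch (the `myapp-*` names are the hypothetical examples from above), each environment can be declared as a labeled namespace; on OpenShift each namespace surfaces as a project, though `oc new-project` is the more common interactive route:

```yaml
# Hypothetical sketch: one namespace per environment, labeled so that
# quotas, network policies, and RBAC can target environments uniformly.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-dev
  labels:
    app.kubernetes.io/part-of: myapp
    environment: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-prod
  labels:
    app.kubernetes.io/part-of: myapp
    environment: prod
```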
Key environment differences:
- Permissions
  - Dev: developers can create/edit most resources.
  - Test/staging: more restricted; controlled by CI/CD or release engineers.
  - Prod: highly restricted; usually only automation and operators.
- Policies and limits
  - Increasingly strict from dev → prod (quotas, network policies, security); see the quota sketch after this list.
- Connectivity
  - Dev/test may be internal-only; prod is exposed via public Routes/Ingress.
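For example, a hedged sketch of a stricter production quota (all values are illustrative):

```yaml
# Illustrative only: a tighter compute quota bound to the prod project.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: myapp-prod
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```

A dev project would carry a looser quota (or none), while prod typically adds NetworkPolicies and stricter RBAC on top.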
Promotion models
Typical promotion models on OpenShift:
- Image promotion
Build once in dev (or a dedicated build project), push to a shared registry, then "promote" by referencing the same image in test/stage/prod deployments. This avoids rebuilding for each environment.
- GitOps promotion
Each environment has its own Git directory or repo with manifests (or higher-level definitions); see the overlay sketch below. Promotion means:
  - Create a PR that updates image tags, replicas, config, etc. in the target environment repo.
  - Merge triggers a reconciliation tool (e.g., Argo CD) to apply the changes.
- Branch-based promotion (CI‑centric)
Different Git branches map to different environments:
  - `feature/*` → ephemeral/preview environments.
  - `develop` → dev.
  - `release/*` → staging.
  - `main`/`master` → prod.
Pipelines deploy based on the branch.
OpenShift doesn’t enforce a particular model; it provides building blocks to implement these patterns.
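As a hedged sketch of the GitOps-promotion model (the directory layout and image references are hypothetical), a Kustomize overlay per environment pins what that environment runs:

```yaml
# overlays/prod/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp-prod
resources:
  - ../../base
images:
  - name: myapp                      # image name used in the base Deployment
    newName: quay.io/example/myapp   # registry path is illustrative
    newTag: "1.3.0"                  # promotion = a PR that bumps this line
```

Promoting to prod then becomes a reviewable one-line Git diff, which the reconciliation tool applies.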
Typical Workflow: From Commit to Production
Step 1: Local development and containerization
Developers typically:
- Write and test code locally or on a shared dev cluster.
- Ensure the repository contains:
  - A `Dockerfile` or S2I configuration.
  - Kubernetes/OpenShift manifests, Helm charts, or Kustomize overlays (see the manifest sketch at the end of this step).
- Push code to a Git repository that the CI/CD process watches.
At this stage, developers rely on:
- A shared dev project (e.g., `myapp-dev`) for experiments.
- Local clusters that approximate the target platform (e.g., CRC/OpenShift Local for OpenShift, or kind for plain Kubernetes).
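A minimal sketch of the kind of manifest kept alongside the code (all names are hypothetical; many teams use Helm or Kustomize instead of raw manifests):

```yaml
# deploy/deployment.yaml: the artifact CI/CD later retags per environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # replaced with a tag/digest per environment
          ports:
            - containerPort: 8080
```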
Step 2: Triggering CI and building images
A commit or PR triggers a CI pipeline that:
- Checks out the code.
- Runs unit and static tests.
- Builds a container image using:
  - A Dockerfile-based build, or
  - S2I for supported languages.
- Pushes the built image to a container registry accessible by OpenShift:
  - The OpenShift internal registry, or
  - An external registry (Quay, Docker Hub, etc.); see the BuildConfig sketch after this list.
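On OpenShift itself, the build-and-push step can be expressed as an S2I BuildConfig. This is a hedged sketch with hypothetical names (an external CI system or a Tekton pipeline is an equally common choice):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
  namespace: myapp-dev
spec:
  source:
    git:
      uri: https://git.example.com/org/myapp.git   # hypothetical repository
      ref: main
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:18-ubi8        # builder image is illustrative
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      # CI typically re-tags this with a version or commit SHA afterwards.
      name: myapp:latest
  triggers:
    - type: GitHub
      github:
        secretReference:
          name: myapp-webhook-secret
```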
Typical outcomes:
- Image tag: e.g., `myapp:1.3.0-rc1`, or an immutable digest (`@sha256:...`).
- Metadata: build number, commit SHA, build date, labels.
In more advanced setups, the CI pipeline may update a Git repo used for deployment (GitOps pattern) instead of directly calling `oc` to apply changes.
Step 3: Automated deployment to development environment
Once an image is built, a deployment step usually:
- Updates the deployment definition for the dev environment:
  - Sets `spec.template.spec.containers[].image` to the new image reference.
- Applies the change to the `myapp-dev` project using:
  - A CI/CD tool invoking `oc apply`, or
  - A GitOps controller reacting to a manifest change.
- OpenShift performs:
  - A rollout via Deployment or DeploymentConfig.
  - Health checking and pod scheduling (see the probe sketch after this list).
  - Traffic switching if using rolling updates.
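The health checking above is driven by probes defined on the container. A minimal sketch (endpoints and timings are illustrative) of the fields that sit under `spec.template.spec.containers[]` in the Deployment shown earlier:

```yaml
# Probes gate rollout progress (readiness) and trigger restarts (liveness).
readinessProbe:
  httpGet:
    path: /healthz/ready   # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz/live    # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```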
The dev environment is the first place the full containerized application runs inside OpenShift, often with:
- Development-level configuration (debug logging, mock integrations).
- Less strict security and resource settings.
Step 4: Integration and system testing in non-prod environments
After dev validation, promotion to a test or QA environment (`myapp-test`) is triggered by:
- A manual approval step in CI/CD.
- A PR merge into a test/staging branch or GitOps repo path.
- A tag or release action.
In `myapp-test` or `myapp-staging`, automated tests can include:
- Integration tests across microservices.
- Contract tests between teams.
- Performance tests with scaled workloads.
- Smoke tests verifying basic functionality after deployment.
Key aspects at this stage:
- Configuration convergence: test and staging are configured much closer to prod (e.g., same database engine, same feature-flag defaults) but might still use reduced capacity or masked data.
- More realistic traffic: synthetic load, or shadow traffic in advanced setups.
Step 5: Pre-production validations
Before changes reach prod, many organizations have a staging or pre-prod project/cluster that mirrors production as closely as feasible:
- Same topology: replicas, storage classes, network policies.
- Same security constraints and RBAC model.
- Same external integrations (payment gateways, identity providers), often pointed at test accounts.
Typical activities:
- Final user acceptance tests.
- Security scans and compliance checks.
- Performance and capacity validation.
- Disaster recovery / failover drill tests (in larger environments).
In a GitOps setup, staging and prod often have separate repos or directories, and the main difference between them is config and image tags.
Step 6: Production deployment
Promotion to production normally includes:
- Approval in a change management tool or in the pipeline itself.
- Deployment into the `myapp-prod` project (or prod cluster).
The deployment pattern in prod is chosen based on risk and requirements:
- Blue-green, canary, or rolling strategies, often controlled by:
  - OpenShift constructs (e.g., separate Routes for blue/green), as sketched below.
  - Service mesh or traffic management tools for finer-grained canaries.
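A hedged sketch using plain OpenShift Routes (the service names are hypothetical): a weighted Route shifts a small share of traffic to the new version.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  namespace: myapp-prod
spec:
  to:
    kind: Service
    name: myapp-stable      # current version keeps most traffic
    weight: 90
  alternateBackends:
    - kind: Service
      name: myapp-canary    # new version receives a small share
      weight: 10
  port:
    targetPort: 8080
```

Swapping the weights (or pointing `to` at the other Service) gives blue-green switching with the same objects.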
Success criteria:
- Health checks and readiness/liveness probes passing.
- Key service-level indicators (latency, error rate, throughput) within thresholds.
- Logs and alerts show no regressions.
Rollback plans are part of the lifecycle, not an afterthought.
Configuration, Secrets, and Environment Differences
Environment-specific configuration handling
Instead of building different images per environment, the lifecycle favors:
- Single image, multiple configs
The same container image is used across dev/test/stage/prod. Differences are managed by:
  - ConfigMaps for non-sensitive config (URLs, feature flags).
  - Secrets for sensitive values (API keys, passwords).
  - Environment variables to wire config into applications.
Typical patterns:
- Separate ConfigMaps/Secrets per environment (e.g., `myapp-config-dev`, `myapp-config-prod`) with environment-specific values; see the sketch below.
- Overlays (e.g., Kustomize or Helm values) defining which resources are created in each environment.
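A hedged sketch of the per-environment ConfigMap pattern (keys and values are illustrative). This variant reuses one name, `myapp-config`, in each environment's namespace so the Deployment can reference it unchanged; suffixed names like `myapp-config-dev` are common when environments share a namespace:

```yaml
# Development values: permissive logging, in-cluster dev backend.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-dev
data:
  LOG_LEVEL: debug
  BACKEND_URL: http://backend.myapp-dev.svc:8080
---
# Production values: same keys, stricter settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-prod
data:
  LOG_LEVEL: warn
  BACKEND_URL: http://backend.myapp-prod.svc:8080
```

The Deployment then wires these in with `envFrom: [{configMapRef: {name: myapp-config}}]`, identical across environments.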
Secret handling across environments
In the dev-to-prod flow, secrets must be:
- Generated or provisioned through secure processes.
- Synchronized between environments according to policies:
  - Dev might use dummy credentials.
  - Staging uses real integration credentials, but often non-production accounts.
  - Prod uses strictly controlled production secrets.
Secrets are rarely updated manually; instead they are:
- Managed via dedicated secret management tools.
- Injected by pipelines or Operators.
- Versioned carefully, without storing raw values in plain Git.
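A minimal sketch of the shape of such a Secret (names are hypothetical; the dummy values shown are dev-only, and real values would be provisioned by a secret management tool, never committed to Git):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-db-credentials
  namespace: myapp-dev
type: Opaque
stringData:
  DB_USER: dev-user                 # dummy dev value only; prod values are
  DB_PASSWORD: not-a-real-password  # injected out of band, not stored in Git
```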
Deployments, Rollouts, and Rollbacks in the Lifecycle
Rollout patterns across environments
Progression from dev → prod typically tightens:
- Dev: frequent rollouts, looser policies, manual or automated restarts.
- Test/stage: controlled rollouts with basic canary checks or smoke tests.
- Prod: stricter deployment strategies:
  - Rolling: new pods gradually replace old pods.
  - Blue-green: run two full stacks and switch traffic using Routes.
  - Canary: route a small percentage of traffic to the new version first.
The specific objects (Deployments, DeploymentConfigs, etc.) are defined elsewhere; here, they are combined into a controlled progression.
Versioning and traceability
To maintain traceability across the lifecycle:
- Tag builds and images with:
  - Git commit SHA.
  - Version numbers.
  - Build IDs.
- Ensure manifests or GitOps definitions reference immutable image digests (`@sha256:...`), so you can reliably identify which exact build is running; see the sketch below.
This allows answering lifecycle questions:
- “Which commit is running in production?”
- “Which tests ran before this version was promoted?”
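A hedged sketch of traceability metadata on a Deployment (the annotation keys and the digest placeholder are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app.kubernetes.io/version: "1.3.0"  # human-readable version
  annotations:
    example.com/git-commit: 9f2c1ab      # hypothetical annotation keys that
    example.com/ci-build: "4711"         # CI stamps onto every deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # Digest-pinned: identifies exactly one build. Replace the
          # placeholder with the digest emitted by the CI build.
          image: quay.io/example/myapp@sha256:<digest>
```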
Rollback strategies
When a deployment causes regressions, common rollback approaches include:
- Roll back to the previous deployment via deployment history (e.g., `oc rollout undo`).
- Switch Route back to the previous (blue/green) environment.
- Revert a Git change in a GitOps repo:
  - Revert the commit that updated the image or config for prod.
  - Let the GitOps controller sync back to the previous state.
Embedding rollback as a standard pipeline stage (or documented procedure) is a crucial part of the dev‑to‑prod lifecycle.
Using CI/CD and GitOps in the Lifecycle
CI for build, test, and early deployment
CI pipelines in the lifecycle focus on:
- Building images and performing unit/lint tests.
- Running fast integration tests on dev/test environments.
- Updating manifests / repos that drive deployments.
OpenShift’s own pipeline mechanisms and external CI systems can:
- Orchestrate environment promotions.
- Enforce quality gates (code coverage, static analysis).
GitOps for environment state management
A common modern pattern in OpenShift lifecycles is:
- Treat the cluster as a target of reconciliation from Git.
- Store the source of truth for each environment’s state in Git.
The dev-to-prod promotion then becomes:
- A Git operation (PR/merge) that:
  - Changes image tags/digests.
  - Adjusts config or replica counts.
- A reconciliation tool applies the changes to the cluster automatically, as sketched below.
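A hedged sketch of the reconciliation side using an Argo CD Application (the repo URL and path are hypothetical; `openshift-gitops` is the namespace typically used by the OpenShift GitOps operator):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-prod
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/myapp-deploy.git
    targetRevision: main
    path: overlays/prod              # the environment's directory in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-prod
  syncPolicy:
    automated:
      prune: true                    # remove resources deleted from Git
      selfHeal: true                 # revert manual drift in the cluster
```

The `selfHeal` option is what provides the automatic drift correction listed below.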
Benefits for the lifecycle:
- Auditable change history per environment.
- Easy to see differences between dev, test, and prod.
- Automatic drift correction if something is changed manually in the cluster.
Observability and Feedback Loops
Embedding monitoring into the lifecycle
Monitoring and logging are used in every environment but interpreted differently:
- Dev: diagnose functional issues and performance problems early.
- Test/staging: validate new code under realistic load.
- Prod: detect regressions and drive automatic rollbacks or mitigations.
In a typical lifecycle:
- A new version is deployed in dev; basic metrics confirm that:
  - The application starts correctly.
  - Error rates are low for simple scenarios.
- The same version is deployed to test/staging:
  - Synthetic tests or load tests run.
  - Metrics are compared to previous baselines.
- Promotion to prod is allowed only if defined metrics and logs meet required conditions; a sketch of such a condition follows.
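A hedged sketch of such a condition expressed as a Prometheus alert rule (the metric names are illustrative and depend on what the application actually exports):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-slo
  namespace: myapp-prod
spec:
  groups:
    - name: myapp.rules
      rules:
        - alert: MyAppHighErrorRate
          # Fires when more than 5% of requests fail over five minutes,
          # assuming the app exports an http_requests_total counter.
          expr: |
            sum(rate(http_requests_total{job="myapp",code=~"5.."}[5m]))
              / sum(rate(http_requests_total{job="myapp"}[5m])) > 0.05
          for: 10m
          labels:
            severity: critical
```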
Continuous improvement from production feedback
The lifecycle is not linear; production insights drive new development:
- Incidents or performance problems in prod result in:
  - New monitoring alerts.
  - Adjusted resource requests/limits.
  - Code optimizations or bug fixes.
- These changes start again in dev:
  - Implement and test in dev.
  - Validate via the same promotion chain.
Over time, the lifecycle becomes a feedback loop where:
- Observability and operational data influence design and coding.
- Policies and automation are updated to avoid repeated issues.
Handling Special Lifecycle Scenarios
Feature branches and ephemeral environments
For larger teams or microservice architectures, the lifecycle often includes:
- Ephemeral environments per feature branch or PR:
  - Pipelines create a temporary project or namespace (e.g., `myapp-pr-123`); see the sketch at the end of this subsection.
  - Deploy the new version there for isolated testing and review.
  - Destroy the environment after merge or closure.
This pattern integrates well with OpenShift’s project model and helps:
- Test changes without impacting shared dev.
- Enable UI/UX stakeholders to validate features before merge.
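One way to automate this (a sketch, assuming Argo CD ApplicationSets and a hypothetical GitHub repository) is a pull-request generator that stamps out one preview Application per open PR and removes it when the PR closes:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp-previews
  namespace: openshift-gitops
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org          # hypothetical repository coordinates
          repo: myapp
          tokenRef:
            secretName: github-token
            key: token
        requeueAfterSeconds: 300
  template:
    metadata:
      name: 'myapp-pr-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/myapp.git
        targetRevision: '{{head_sha}}'
        path: deploy
      destination:
        server: https://kubernetes.default.svc
        namespace: 'myapp-pr-{{number}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true      # creates the ephemeral namespace
```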
Coordinating changes across multiple services
In microservice ecosystems on OpenShift, a full lifecycle must handle:
- Multiple services evolving independently.
- Shared contracts (APIs, event schemas).
Common techniques:
- Consumer-driven contracts tested in CI before promotion.
- Versioned APIs to allow gradual rollout and backward compatibility.
- Carefully coordinated staging environments where new versions of multiple services are tested together before prod.
Governance, Compliance, and Approvals
Gates and approvals in the pipeline
From a dev-to-prod perspective, OpenShift is often integrated into existing governance processes:
- Automated checks:
  - Image signing and verification.
  - Policy checks for resource limits, security contexts, network rules.
- Manual steps:
  - QA lead approval to move from test to staging.
  - Change advisory board (CAB) approval for production releases in regulated environments.
These steps are codified in the CI/CD workflow:
- Pipelines fail if mandatory checks are not met.
- Promotion jobs are blocked until approvals are recorded.
Auditability across the lifecycle
To support audits and compliance:
- All changes to manifests, pipelines, and configs are tracked in Git or similar VCS.
- CI/CD logs record:
  - Who triggered the deployment.
  - Which commit and image were deployed.
  - When promotion events happened.
OpenShift resources themselves also contribute logs and events, but the core accountability lies in the lifecycle tools integrated around the cluster.
Putting It All Together
The development-to-production lifecycle on OpenShift ties together:
- Project and environment design.
- CI for building and testing.
- GitOps or other deployment automation.
- Configuration and secret management per environment.
- Deployment strategies and rollback mechanisms.
- Observability and governance.
The unique value of OpenShift in this lifecycle is not any single feature, but the way it provides a consistent, policy-aware platform for running the same containerized application across dev, test, staging, and production with predictable behavior and strong automation support.