What a CI/CD Pipeline Is (In Practice)
In OpenShift-based workflows, a CI/CD pipeline is a defined, automated sequence of steps that takes changes from source control to a running, updated application in the cluster.
Conceptually, a pipeline usually:
- Starts on a trigger
  - Code push or pull request
  - Merge to a main branch
  - Tag or release creation
  - Manual trigger from a UI/CLI
- Runs continuous integration (CI) stages
  - Fetches the source code
  - Builds the software (compile, package)
  - Runs tests (unit, integration, basic security checks)
  - Produces build artifacts (binaries, images, manifests)
- Runs continuous delivery/deployment (CD) stages
  - Builds or updates a container image
  - Pushes the image to a registry
  - Updates manifests or a Git repository representing the desired state
  - Deploys to one or more environments (dev, test, staging, production)
  - Performs automated checks after deployment (smoke tests, health checks)
- Reports outcomes
  - Success/failure status
  - Logs and test reports
  - Notifications (chat, email, dashboards)
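The conceptual flow above can be sketched as a Tekton-style Pipeline, the technology behind OpenShift Pipelines. This is a minimal sketch: the task names, parameters, and image reference are illustrative, and the `git-clone` and `buildah` tasks are assumed to come from a task catalog such as Tekton Hub.

```yaml
# Illustrative Tekton Pipeline; task names and params are placeholders.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: app-ci-cd
spec:
  params:
    - name: git-url
    - name: image
  workspaces:
    - name: source
  tasks:
    - name: fetch-source              # CI: get the code
      taskRef:
        name: git-clone               # e.g. the git-clone task from Tekton Hub
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: run-tests                 # CI: build and test
      runAfter: ["fetch-source"]
      taskRef:
        name: unit-tests              # hypothetical test task
      workspaces:
        - name: source
          workspace: source
    - name: build-image               # CD: build and push the image
      runAfter: ["run-tests"]
      taskRef:
        name: buildah
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
    - name: deploy                    # CD: roll out to an environment
      runAfter: ["build-image"]
      taskRef:
        name: deploy-to-dev           # hypothetical deploy task (e.g. oc apply)
```

Each change triggers a PipelineRun of this same definition, which is what gives the consistency and traceability described above.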
The key idea is that the same pipeline definition is reused for each change, giving consistency, repeatability, and traceability.
Typical Stages in an OpenShift-Oriented Pipeline
While details vary by team and tooling, most OpenShift-focused pipelines follow a recognizable structure.
1. Source and Trigger Stage
Purpose: Start the pipeline based on an event.
Typical tasks:
- Detect a new commit, pull request, or tag in Git.
- Capture metadata like branch, commit ID, author.
- Optionally run lightweight checks (linting, formatting) before heavier work.
Common considerations:
- Different triggers for feature branches vs. production branches.
- Validating pull requests without merging them.
- Enforcing policies (e.g., required reviews) before deploy stages run.
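Capturing trigger metadata is typically done by mapping fields from the webhook payload into pipeline parameters. A minimal sketch using a Tekton TriggerBinding, assuming a GitHub push event (the field paths follow GitHub's webhook payload schema):

```yaml
# Illustrative TriggerBinding: extracts branch, commit, and author
# from a GitHub push payload into pipeline parameters.
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)       # commit SHA
    - name: git-branch
      value: $(body.ref)                  # e.g. refs/heads/main
    - name: git-author
      value: $(body.head_commit.author.name)
```

An EventListener would pair this binding with a TriggerTemplate to start a PipelineRun, and interceptor filters can implement the branch-specific trigger rules mentioned above.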
2. Build and Test (CI) Stage
Purpose: Prove that the change builds and basic tests pass.
Typical tasks:
- Run build and test commands such as mvn package, npm test, or go test.
- Execute unit tests, sometimes integration tests with ephemeral dependencies.
- Generate test and coverage reports.
- Fail fast if tests or build fail.
Characteristics in an OpenShift context:
- Build steps run in containers, so the environment is consistent.
- Tests may run against ephemeral services (databases, message queues) spun up per run.
- Parallelization (e.g., test matrices) to keep feedback quick.
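Per-run ephemeral dependencies map naturally onto Tekton's sidecar mechanism: a sidecar container starts with the task and is torn down when the run ends. A sketch, assuming a Go project and a PostgreSQL dependency (images and the test command are placeholders):

```yaml
# Illustrative Tekton Task: containerized tests with an ephemeral
# PostgreSQL sidecar that lives only for this run.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: unit-tests
spec:
  workspaces:
    - name: source
  steps:
    - name: test
      image: golang:1.22              # consistent, containerized build environment
      workingDir: $(workspaces.source.path)
      script: |
        go test ./...
  sidecars:
    - name: postgres                  # ephemeral dependency, per run
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: test
```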
3. Image Build and Packaging Stage
Purpose: Turn tested code into a deployable container image and related artifacts.
Typical tasks:
- Build a container image (e.g., with Dockerfile, S2I, or build tool integrations).
- Tag the image in a meaningful way (commit SHA, branch, version).
- Push the image to a container registry.
- Generate or update deployment manifests, Helm charts, or Kustomize overlays if applicable.
Key ideas:
- Immutability: Each image is uniquely identified (e.g., app:1.2.3 or app:sha-abc123).
- Reproducibility: Same code + same Dockerfile + same base image should produce the same artifact.
- Scanning hooks (if used): images may be scanned as part of the pipeline, with pass/fail gates.
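On OpenShift specifically, the S2I path can be expressed as a BuildConfig. A sketch, in which the repository URL, builder image, and output tag are placeholders:

```yaml
# Illustrative OpenShift BuildConfig using Source-to-Image (S2I).
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: app
spec:
  source:
    type: Git
    git:
      uri: https://example.com/org/app.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:18-ubi8          # builder image (placeholder)
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: app:sha-abc123            # tag by commit SHA for immutability
```

Tagging the output with the commit SHA, as here, is one way to get the unique identification described above.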
4. Environment-Specific Deployment Stages
Purpose: Progress a tested image through multiple environments, often with increasing strictness.
Common flows:
- Dev: automatic deployment on every push; tolerant to frequent changes.
- Test / QA: deployment on successful dev build; more thorough tests.
- Staging / Pre-prod: near-prod-like environment for final validation.
- Production: usually gated by approvals, policies, or other checks.
Typical deployment tasks:
- Update deployment definitions (e.g., Deployment or DeploymentConfig resources).
- Apply environment-specific configuration (via ConfigMaps, Secrets, etc.).
- Use a chosen rollout strategy (e.g., rolling, blue-green, canary).
- Run smoke tests and health checks after deployment.
Pipeline behavior can differ by environment:
- Automatic vs. manual promotion between stages.
- More strict gates (approvals, security checks, performance tests) as you approach production.
- Different resource settings, scaling, and network policies per environment.
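Environment-specific overrides on top of shared manifests are commonly handled with Kustomize overlays. A sketch of a production overlay, with all paths and names as placeholders:

```yaml
# Illustrative overlays/prod/kustomization.yaml: reuses base manifests
# but overrides the image tag and replica count for production.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                        # shared deployment patterns
images:
  - name: app
    newTag: "1.2.3"                   # promote a specific, already-tested tag
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5                      # production-only scaling
    target:
      kind: Deployment
      name: app
```

The same base is reused in every environment; only the overlay differs, which keeps the deployment pattern consistent while allowing the per-environment differences listed above.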
5. Verification and Quality Gates
Purpose: Ensure that each step in the pipeline meets defined quality criteria before proceeding.
Examples of gates:
- CI build and tests must pass.
- Code quality metrics above a threshold.
- Security scanning issues below a severity threshold.
- Performance tests within SLOs (latency, error rate).
- Manual approval for production promotion.
Outcomes:
- Pass: pipeline advances to next stage (e.g., deploy to staging, then prod).
- Fail: pipeline stops, often with notifications and logs to facilitate debugging.
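In Tekton terms, a gate can be expressed with a `when` expression that reads a result from an earlier task. A sketch, assuming a hypothetical scanning task that emits a `verdict` result:

```yaml
# Illustrative quality gate: deploy runs only if the scan verdict is "pass".
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: gated-deploy
spec:
  tasks:
    - name: image-scan
      taskRef:
        name: scan-image              # hypothetical task emitting a "verdict" result
    - name: deploy-to-staging
      runAfter: ["image-scan"]
      when:
        - input: $(tasks.image-scan.results.verdict)
          operator: in
          values: ["pass"]            # gate: skip deploy on any other verdict
      taskRef:
        name: deploy                  # hypothetical deploy task
```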
6. Post-Deployment Checks and Rollback Hooks
Purpose: Confirm that the new version is healthy and provide an automated fallback.
Typical checks:
- Application readiness and liveness probes are successful.
- No unusual errors or high latency immediately after release.
- Key business endpoints respond correctly to basic tests.
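Readiness and liveness checks are declared on the workload itself, so the rollout only completes once the new version reports healthy. A minimal Deployment sketch, with ports and paths as placeholders:

```yaml
# Illustrative Deployment with health probes; image, port, and
# endpoint paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/team/app:1.2.3
          readinessProbe:             # gate traffic until the pod is ready
            httpGet:
              path: /healthz/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:              # restart the container if it hangs
            httpGet:
              path: /healthz/live
              port: 8080
            failureThreshold: 3
```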
Rollback considerations:
- The pipeline may:
  - Trigger a rollout to a previous, known-good image.
  - Re-apply a previous set of manifests.
- Rollback policies can be automated (e.g., failure thresholds) or manual (operator-driven).
Pipeline Structure and Concepts
Pipeline as Code
Pipelines are generally expressed as code or configuration stored in version control.
Common characteristics:
- Declarative definitions of stages, tasks, and dependencies.
- Reusable task abstractions (build, test, scan, deploy).
- Versioning and change history of pipeline logic itself.
- Code review for pipeline changes, just like application code.
Benefits:
- Reproducibility: the same code and pipeline definition yield the same result.
- Collaboration: teams can propose improvements via pull requests.
- Auditing: you can see exactly how code reached production at any point in time.
Stages, Tasks, and Artifacts
Pipelines are usually built from:
- Stages (or phases): logical groupings like “Build”, “Test”, “Deploy to Dev”, “Deploy to Prod”.
- Tasks / steps: atomic units of work (run tests, build image, apply manifests).
- Artifacts: outputs passed between stages.
Typical artifacts:
- Compiled binaries, packages, or libraries.
- Container images (referenced by tags or digests).
- Test reports and logs.
- Manifests or configuration used in later stages.
Managing artifacts:
- Promoting the same artifact through multiple environments avoids “works in dev, fails in prod” due to rebuild differences.
- Using digests (e.g., image@sha256:...) removes ambiguity.
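Pinning by digest looks like this in a pod spec (the digest value is a placeholder):

```yaml
# Illustrative digest pin: identifies exactly one build, whereas a tag
# like app:latest can be re-pointed at a different image over time.
spec:
  containers:
    - name: app
      image: registry.example.com/team/app@sha256:<digest>   # placeholder digest
```

Because a digest names the image content itself, promoting the same digest through dev, staging, and production guarantees every environment runs the identical artifact.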
Pipelines Across Multiple Environments and Clusters
CI/CD for OpenShift often spans:
- Multiple OpenShift clusters (dev, staging, prod).
- Possibly different regions or cloud providers.
Patterns:
- A single central pipeline orchestrating deployments to multiple clusters.
- Environment-specific credentials and endpoints managed securely.
- Consistent deployment patterns, but environment-specific overrides.
Key challenge:
- Keeping pipeline logic reusable while allowing necessary environment differences (resource sizes, domains, secrets).
Integrating Pipelines with OpenShift Workflows
Even without implementation details of specific tools, there are recurring integration points when a pipeline drives OpenShift workloads.
Integrating with Code and Image Flow
Typical relationships:
- Pipeline monitors a Git repository.
- Pipeline produces a container image and pushes it to a registry accessible by OpenShift.
- OpenShift workloads reference that image tag or digest.
Options for how changes are picked up:
- Pipeline explicitly updates deployment configuration when a new image is available.
- OpenShift components watch for image changes and trigger rollouts when a new tag appears (if configured to do so).
- Git-based flows where the pipeline updates manifest repositories that define desired state.
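The second option — OpenShift watching for image changes — is commonly wired up with an image trigger annotation on the workload. A sketch, where the ImageStreamTag and container name are placeholders:

```yaml
# Illustrative OpenShift image trigger: the annotation tells OpenShift to
# update the named container's image (and thus roll out) whenever the
# ImageStreamTag app:latest changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  annotations:
    image.openshift.io/triggers: |-
      [{"from": {"kind": "ImageStreamTag", "name": "app:latest"},
        "fieldPath": "spec.template.spec.containers[?(@.name==\"app\")].image"}]
```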
Promotion and Environments
Pipeline-driven promotion models often use:
- Branch-based workflows: different branches map to environments.
- Tag-based workflows: tags like v1.0.0 trigger production deployments.
- GitOps-style workflows: pipeline updates manifests, and a separate process syncs them into clusters.
For each environment, the pipeline orchestrates:
- Which image tag/digest is used.
- Which configuration is applied.
- What gating and approvals are required.
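In a GitOps-style flow, the "separate process" that syncs manifests is often a controller such as Argo CD (the basis of OpenShift GitOps). A sketch of an Application that watches a manifest repository, with the repo URL and paths as placeholders:

```yaml
# Illustrative Argo CD Application: the pipeline commits manifest changes
# to the repo; this controller syncs them into the target cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/app-manifests.git
    targetRevision: main
    path: overlays/prod                 # environment-specific overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: app-prod
  syncPolicy:
    automated:
      prune: true                       # delete resources removed from Git
      selfHeal: true                    # revert drift back to the Git state
```

Here the pipeline never touches the cluster directly; Git is the single source of truth for which image and configuration each environment runs.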
Non-Technical Aspects of Pipelines
Beyond tools and definitions, CI/CD pipelines encode team processes.
Key points:
- Collaboration: Pipelines standardize how developers, testers, and operations teams interact.
- Governance and compliance: Pipelines can enforce policies (tests, reviews, security checks) consistently.
- Observability and feedback:
  - Clear visibility into where a given change is in the promotion flow.
  - Dashboards with builds, deployments, and health statuses.
- Reliability:
  - Automated and repeatable processes reduce human error.
  - Defined rollback behaviors improve recovery from bad releases.
Over time, organizations refine pipelines based on:
- Lead time for changes
- Deployment frequency
- Change failure rate
- Mean time to recovery (MTTR)
These metrics help adjust where to add or relax gates, and how to balance speed with safety.
Summary
In the context of OpenShift and modern DevOps practices, CI/CD pipelines are:
- Automated, codified workflows that move changes from source control to running applications.
- Structured into stages like build, test, image creation, deploy, verify, and promote.
- Built around artifacts (especially container images) that are promoted through environments without rebuilding.
- Integrated with OpenShift clusters to manage deployments, configuration, and rollouts.
- A key mechanism for embedding quality, security, and operational practices into everyday development flow.