Core Concepts of OpenShift Pipelines
OpenShift Pipelines is Red Hat’s Kubernetes‑native CI/CD solution based on Tekton. It lets you define build and delivery workflows as Kubernetes resources that run in the cluster, without relying on a centralized external CI server.
At a high level, OpenShift Pipelines provides:
- A set of Custom Resource Definitions (CRDs) that describe CI/CD building blocks.
- A controller (`tekton-pipelines-controller`) that watches these resources and orchestrates execution.
- Tight integration with OpenShift authentication, security, and the web console.
You work with OpenShift Pipelines entirely through Kubernetes/OpenShift objects, not through a separate CI application UI.
Key differences from traditional CI tools:
- Pipelines run as pods in the cluster.
- Build steps run in containers, often one container per step.
- Pipelines are declared as YAML, versionable with your application code.
- Scaling happens naturally with the cluster: more workers → more parallel pipelines.
Tekton Building Blocks
OpenShift Pipelines exposes Tekton concepts as Kubernetes resources. The most important ones:
Tasks and TaskRuns
A Task is a reusable unit of work. Each task defines one or more steps, and each step runs in a separate container.
Example use cases for tasks:
- Building a container image
- Running unit tests
- Scanning an image for vulnerabilities
- Deploying to a test namespace
Conceptually:
- `Task` = the template: what to do, which parameters and workspaces it needs.
- `TaskRun` = the execution: one run of that template with concrete parameter values.
A minimal Task might look like:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        echo "Hello from Tekton"
```
A corresponding TaskRun:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello
```
You rarely need to create TaskRuns manually; pipelines create them for you.
Pipelines and PipelineRuns
A Pipeline is an ordered composition of tasks, including dependencies between them.
Core concepts:
- `tasks`: the main tasks in the pipeline.
- `runAfter`: declares that one task runs after another.
- `params` and `workspaces`: pass configuration and data across tasks.
- Optional `finally`: tasks that always run (e.g., cleanup, notifications).
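For instance, a `finally` section at the end of a pipeline spec might run a cleanup task regardless of how the main tasks ended (the `cleanup-workspace` task name is hypothetical):

```yaml
spec:
  # ...params and tasks omitted...
  finally:
    - name: cleanup
      taskRef:
        name: cleanup-workspace   # hypothetical cleanup task; runs even if earlier tasks fail
```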
Example structure:
```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-test
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
    - name: run-tests
      runAfter: [fetch-source]
      taskRef:
        name: maven-test
```
A `PipelineRun` triggers an instance of that pipeline:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-and-test-run
spec:
  pipelineRef:
    name: build-and-test
  params:
    - name: git-url
      value: https://github.com/example/repo.git
```
Each PipelineRun creates the necessary TaskRuns and pods to execute the workflow.
Workspaces, Params, and Results
These features connect tasks together:
- Parameters (`params`): small pieces of configuration (strings, arrays), used for things like Git URLs, image tags, or environment names.
- Workspaces: shared storage volumes mounted into tasks so they can share files (e.g., source code checkout, compiled artifacts).
- Results: values that tasks produce and pipelines can read (e.g., computed image tag, version number).
Example snippets:
- Parameter usage: `$(params.git-url)`
- Reading a task result: `$(tasks.build-image.results.image-url)`
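As a minimal sketch of how a result is produced (the task and result names are illustrative): a task declares the result, then writes its value to the file path Tekton provides.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: compute-tag
spec:
  results:
    - name: image-tag
      description: Tag derived from the current date
  steps:
    - name: generate
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        # Tekton injects a file path for each declared result.
        date +%Y%m%d > $(results.image-tag.path)
```

A downstream pipeline task could then consume it as `$(tasks.compute-tag.results.image-tag)`.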
Triggers (High Level)
Tekton Triggers let you respond to events (mostly webhooks):
- Convert a webhook payload into Tekton parameters.
- Instantiate `PipelineRun`s automatically.
In OpenShift Pipelines, triggers are usually set up using resources like TriggerTemplate, TriggerBinding, and EventListener. These are typically configured by cluster admins or DevOps engineers to integrate with Git providers or external systems.
OpenShift-Specific Integration
OpenShift Pipelines adds convenience and security integration on top of base Tekton.
Installation and Namespaces
Typically:
- Installed cluster‑wide by an administrator via OperatorHub.
- Tekton system components live in a dedicated namespace (often `openshift-pipelines`).
- You define your own pipelines in your project namespaces, controlled by your usual OpenShift permissions.
You use the same `oc` CLI and web console you already use for other OpenShift workloads.
Security, ServiceAccounts, and SCCs
Pipelines run using Kubernetes ServiceAccounts. This controls:
- Which namespaces and resources the pipeline can access.
- Which secrets it can use (e.g., to access Git or container registries).
- Which Security Context Constraints (SCCs) apply to the pods running the steps.
Typical patterns:
- Each application/project has a dedicated `ServiceAccount` for pipelines, with:
  - Pull/push secrets for your container registry.
  - Permissions to deploy to its own namespace (and sometimes to additional environments).
- Pipelines run under OpenShift's default restricted SCC (non-privileged, random UID), which affects how you build images and what tools you can use in steps.
You often see configuration like:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
  - name: git-credentials
  - name: registry-credentials
```
Then reference it from the run: in the v1 API this is `PipelineRun.spec.taskRunTemplate.serviceAccountName` (in v1beta1 it was `PipelineRun.spec.serviceAccountName`).
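A sketch of a run that uses this ServiceAccount (the run and pipeline names are illustrative; in the v1 API the field lives under `taskRunTemplate`):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-with-credentials
spec:
  pipelineRef:
    name: build-and-test
  taskRunTemplate:
    # All TaskRun pods inherit this ServiceAccount and its linked secrets.
    serviceAccountName: pipeline
```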
Integration with Image Streams and Builds
In OpenShift, application builds and images are often managed using ImageStreams. OpenShift Pipelines can:
- Build images using Tekton tasks (for example, `buildah`, `s2i-java`, `kaniko`).
- Push resulting images to:
  - The OpenShift internal registry (referenced by an `ImageStream`).
  - External registries (e.g., Quay.io, Docker Hub).
Typical pattern:
- A `Task` checks out source code.
- Another `Task` builds and pushes an image to the internal registry.
- The pipeline updates a `Deployment` or a `DeploymentConfig` to use the new image, or simply relies on `ImageStream` triggers.
Defining a Simple Pipeline in OpenShift
Below is a conceptual walkthrough of creating a minimal OpenShift Pipeline that:
- Clones code from Git.
- Builds an image.
- Deploys to the current namespace.
Details like actual build tools, languages, and deployment manifests will vary, but the structure is representative.
Example Tasks (High Level)
- Git clone task
Uses a standard Tekton task (often from the Tekton catalog) to fetch source.
Key inputs:
- `url` and `revision` params.
- A workspace to store the source code.
- Build & push image task
Uses tools such as `buildah` or `s2i` to build the image inside a container.
Key inputs:
- Source workspace from previous step.
- Image repository URL and tag.
- Registry credentials via bound secret.
Key outputs:
- A task `result` with the final image URL.
- Deploy task
Applies Kubernetes/OpenShift manifests or patches an existing deployment.
Key inputs:
- Image reference from the build task result.
- Access via a `ServiceAccount` with permission to `patch` deployments.
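A minimal deploy task along these lines might simply wrap the `oc` CLI. This is a sketch: the container image, deployment name, and script are assumptions, and catalog tasks such as `openshift-client` cover the same ground.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: deploy-using-oc
spec:
  params:
    - name: image
      type: string
  steps:
    - name: patch-deployment
      # Any image containing the oc binary works; this path is illustrative.
      image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
      script: |
        # Point the application Deployment at the freshly built image.
        # "app" is a hypothetical deployment/container name.
        oc set image deployment/app app=$(params.image)
```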
Example Pipeline Skeleton
```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
    - name: image-url
      type: string
  workspaces:
    - name: source
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-image
      runAfter: [fetch-source]
      taskRef:
        name: buildah-build
      params:
        - name: IMAGE
          value: $(params.image-url)
      workspaces:
        - name: source
          workspace: source
    - name: deploy
      runAfter: [build-image]
      taskRef:
        name: deploy-using-oc
      params:
        - name: image
          value: $(params.image-url)
```
Then:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-and-deploy-run
spec:
  pipelineRef:
    name: build-and-deploy
  taskRunTemplate:
    serviceAccountName: pipeline
  params:
    - name: git-url
      value: https://github.com/example/app.git
    - name: image-url
      value: image-registry.openshift-image-registry.svc:5000/myproject/app:latest
  workspaces:
    - name: source
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```
This example assumes the relevant tasks (`git-clone`, `buildah-build`, `deploy-using-oc`) already exist in the namespace.
Working with OpenShift Pipelines in the Web Console
OpenShift adds pipeline‑focused views to the Developer Perspective in the web console:
- Pipelines list: shows pipelines in your namespace.
- Pipeline details: visualize tasks and their dependencies as a graph.
- PipelineRuns:
- Trigger runs manually from the UI (“Start Pipeline”).
- Watch real‑time status by task (pending, running, succeeded, failed).
- Inspect logs for each task and step.
You can:
- Start a pipeline with specific parameter values using a form.
- Re-run a previous `PipelineRun` with the same parameters.
- See the history of runs, durations, and success/failure status.
This UI helps newcomers understand the flow of tasks without reading YAML.
Integrating with Git and External Systems
OpenShift Pipelines can be integrated with source control and other tools as part of a broader CI/CD setup:
- Git webhooks: configure your Git host to call an OpenShift Pipelines EventListener, which then creates `PipelineRun`s automatically on pushes, pull requests, or tag creation.
- Secrets for Git:
  - SSH keys stored in `Secret` objects.
  - Personal Access Tokens for HTTPS access.
  - Attached to the `ServiceAccount` used by pipelines.
Typical integration flow:
- A `TriggerBinding` extracts values (branch, repo URL, commit SHA) from the webhook payload.
- A `TriggerTemplate` maps those values into `PipelineRun` parameters.
- An `EventListener` exposes an HTTP endpoint; OpenShift networking exposes it externally.
- Git is configured to send webhooks to this endpoint.
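The flow above maps onto resources along these lines. This is a sketch for a GitHub push event; the resource names are illustrative, and the payload fields assume GitHub's webhook format.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    # Pull fields out of the GitHub push payload.
    - name: git-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.head_commit.id)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-and-test-template
spec:
  params:
    - name: git-url
    - name: git-revision
  resourcetemplates:
    # One PipelineRun is created per incoming event.
    - apiVersion: tekton.dev/v1
      kind: PipelineRun
      metadata:
        generateName: build-and-test-run-
      spec:
        pipelineRef:
          name: build-and-test
        params:
          - name: git-url
            value: $(tt.params.git-url)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: build-and-test-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - ref: github-push-binding
      template:
        ref: build-and-test-template
```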
Using the Tekton CLI (tkn)
In addition to `oc`, you can use the Tekton CLI (`tkn`) for pipeline operations (if installed):
- `tkn pipeline list`
- `tkn pipeline start <name>`
- `tkn pipelinerun logs <run-name> -f`
- `tkn task list`
- `tkn pipelinerun describe <run-name>`
This can be more convenient for developers working directly from the terminal.
Patterns and Best Practices Specific to OpenShift Pipelines
The following practices are particularly relevant when running Tekton on OpenShift:
Use the Developer Catalog and Sample Pipelines
OpenShift often provides prebuilt tasks and pipelines via:
- The Tekton Hub / catalog tasks (for Git, Maven, NPM, buildah, S2I, etc.).
- Sample “import from Git” flows in the Developer Perspective, which can:
- Detect your application type.
- Generate build and deployment resources.
- Optionally generate a starter pipeline.
These are useful starting points you can then customize.
Align Pipelines with OpenShift Environments
Typical patterns in OpenShift:
- One namespace per environment (e.g., `dev`, `test`, `prod`) or per team.
- Pipelines placed in a "build" or "dev" namespace, deploying across namespaces.
On OpenShift, this often leads to:
- A dedicated `ServiceAccount` in the build namespace that can:
  - Build images in the internal registry.
  - `get`, `apply`, or `patch` resources in target namespaces.
- Clearly separated pipelines per environment (or one pipeline with environment parameterization and environment-specific promotion steps).
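Granting the build namespace's ServiceAccount rights in a target namespace is typically done with a RoleBinding; a sketch, where the namespace names are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-deployer
  namespace: myapp-test          # target namespace the pipeline deploys into
subjects:
  - kind: ServiceAccount
    name: pipeline
    namespace: myapp-build       # namespace where the pipeline runs
roleRef:
  kind: ClusterRole
  name: edit                     # built-in role allowing apply/patch of workloads
  apiGroup: rbac.authorization.k8s.io
```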
Manage Secrets Carefully
OpenShift Pipelines interact heavily with secrets:
- Registry credentials for pushing images.
- Git credentials for private repositories.
- Credentials for external services (scanners, artifact repositories, etc.).
Typical best practices:
- Store all secrets as Kubernetes `Secret` objects, not in pipeline YAML.
- Bind secrets to the pipeline `ServiceAccount`, not directly to tasks.
- Use different secrets for different environments if needed.
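For Git over HTTPS, Tekton recognizes an annotation that tells it which host the credentials apply to. A sketch (the secret name matches the ServiceAccount example earlier; the username and token are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  annotations:
    # Tekton's credential helper matches these credentials to this Git host.
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: ci-bot                    # illustrative account name
  password: <personal-access-token>   # placeholder; never commit real tokens
```

Listing this secret under the pipeline ServiceAccount's `secrets:` makes it available to every run using that account, without any per-task wiring.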
Use Workspaces with OpenShift Storage
When using workspaces, OpenShift can back them with:
- Dynamic PVCs using `StorageClass` definitions.
- `emptyDir` volumes for ephemeral data.
Choose based on:
- Size and persistence needs.
- Performance requirements.
- Need to share data between tasks or across runs.
On OpenShift, dynamic provisioning and storage classes are usually already configured by the platform team; pipelines can then request storage via PVC templates.
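For data that only needs to live for the duration of a single task, a workspace can be backed by an `emptyDir` instead of the PVC template shown earlier (the workspace name is illustrative):

```yaml
# In PipelineRun.spec:
workspaces:
  - name: scratch
    emptyDir: {}
```

Note that an `emptyDir` workspace is local to each TaskRun's pod, so it cannot carry files between tasks; use a PVC-backed workspace when tasks must share data.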
Where OpenShift Pipelines Fit in the Overall CI/CD Picture
Within a broader CI/CD practice on OpenShift:
- OpenShift Pipelines provide the cluster‑native orchestration of build and deployment steps.
- They work alongside:
- External CI tools (which may still be used for certain checks).
- GitOps tools (which may manage promotion to higher environments).
- The Tekton resources become part of your application’s configuration, versioned in Git and deployed like any other Kubernetes manifest.
In this context, OpenShift Pipelines (Tekton) give you:
- Declarative, version-controlled pipelines.
- Consistent execution in the same cluster where your applications run.
- Close integration with OpenShift’s security, storage, and networking models.