From Idea to Running Application: A Day-in-the-Life View
This chapter focuses on how OpenShift is typically used in practice: the concrete sequences of steps teams follow day-to-day. Think of these as “recipes” or “patterns” that appear again and again, regardless of the specific application or organization.
We’ll look at common workflows from several perspectives:
- Developer inner loop
- Application onboarding into OpenShift
- Configuration and secret changes
- Deploying across environments (dev → test → prod)
- Handling incidents and rollbacks
- Scaling and maintenance
- HPC-/batch-oriented workflows on OpenShift
Developer Inner Loop on OpenShift
The “inner loop” is what a developer does repeatedly during daily work: write code, run it, test it, and iterate. In OpenShift environments, this usually follows one of a few patterns.
Pattern 1: Source-to-Image (S2I) Driven Loop
Typical when the platform team encourages S2I and Git-based builds:
- Initial setup
- Developer gets access to a project (namespace).
- A BuildConfig and Deployment (or DeploymentConfig) are prepared, often via templates or GitOps.
- Code changes
- Developer edits code locally.
- Changes are pushed to a Git repository branch.
- Automatic build and image creation
- A Git webhook triggers an OpenShift build (BuildConfig).
- S2I pulls the source, combines it with a builder image, and produces a new runtime image.
- The image is pushed into the internal registry.
- Automatic deployment
- The associated Deployment/DeploymentConfig detects the new image.
- A rollout of the new application version occurs (usually rolling).
- Verification
- Developer accesses the application via route/URL.
- Uses the OpenShift web console or oc to:
- Inspect logs: oc logs
- Check pod health: oc get pods
- View events: oc describe pod ...
- Iterate
- If the change is good: commit and push.
- If there’s an issue: fix code, push again, repeat the build-rollout cycle.
Where CI/CD pipelines are used, some of these steps shift into Tekton or external CI tools, but the observable workflow (push → build → deploy → test) stays similar.
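As a concrete illustration, a minimal BuildConfig wiring this loop together might look like the sketch below; the application name, Git URL, webhook secret, and builder image are assumptions, not taken from a real project:

```yaml
# Minimal S2I BuildConfig sketch. Names, the Git URL, and the builder image
# are illustrative; the webhook secret must already exist in the project.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git
      ref: main
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: nodejs:18-ubi8        # pick a builder matching your runtime
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest            # lands in the internal registry via an ImageStream
  triggers:
    - type: GitHub                  # webhook: a push to the repo starts a build
      github:
        secretReference:
          name: myapp-webhook-secret
    - type: ImageChange             # rebuild when the builder image is updated
      imageChange: {}
```

The Deployment (or DeploymentConfig) can then be wired to the resulting ImageStreamTag—for example via the image.openshift.io/triggers annotation on a Deployment—so the rollout step happens without manual intervention.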
Pattern 2: Container-First Loop (Local Build, Cluster Run)
Common when teams prefer Docker/podman builds or need custom images:
- Developer builds and tests container images locally (docker build or podman build).
- Image is pushed to a registry that OpenShift can access.
- OpenShift Deployment/DeploymentConfig is updated to use the new image tag (manually or via automation).
- Rollout is triggered and verified.
- Developer iterates on both Dockerfile and code.
This pattern is also common in HPC workloads where custom MPI/GPU images are curated by specialized teams.
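A minimal sketch of the cluster side of this loop is shown below; the registry path, names, and tag are hypothetical, and pulling from an external registry may also require a pull secret:

```yaml
# Container-first loop, cluster side. Each local iteration ends with
# something like:
#   podman build -t registry.example.com/team/myapp:2025.06.01 .
#   podman push registry.example.com/team/myapp:2025.06.01
# followed by bumping the tag below (by hand, script, or pipeline).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/team/myapp:2025.06.01   # the only field that changes per iteration
          ports:
            - containerPort: 8080
```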
Onboarding an Application to OpenShift
When an existing (non-containerized or external) application is moved onto OpenShift, teams usually follow a structured onboarding workflow.
Step 1: Assess and Containerize
Typical tasks (often outside the cluster at first):
- Identify app dependencies (language runtime, database, external services).
- Define Dockerfile or choose an S2I builder.
- Establish how configuration, secrets, and logging will be injected in-cluster.
Outcome: a runnable container image and a basic understanding of runtime requirements (ports, resources, storage).
Step 2: Define OpenShift Resources
In practice, this usually becomes a set of YAML definitions, often version-controlled:
- Deployment/DeploymentConfig for pods and replicas.
- Service for internal access.
- Route/Ingress for external access.
- ConfigMap and Secret stubs for configuration.
- Optional PersistentVolumeClaim and StorageClass references for stateful needs.
Teams often create a “starter” YAML for new apps, or use templates/Helm/Kustomize or GitOps repositories.
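A hypothetical "starter" for the networking pieces might look like this; the app name and port 8080 are assumptions, and the Deployment itself is analogous to the sketch in the previous section:

```yaml
# Starter Service and Route for a newly onboarded app.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                # must match the pod labels from the Deployment
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
  tls:
    termination: edge         # optional: let the router terminate TLS
```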
Step 3: Create a Project and Apply Manifests
Typical sequence:
- Create or select the project (namespace) where the app will live.
- Apply manifests: oc apply -f app.yaml
- Check object status: oc get deployment, oc get pods
- Verify routing: confirm the app is reachable via its Route URL.
Step 4: Integrate with CI/CD
Most organizations standardize on:
- A pipeline that:
- Builds or triggers image build.
- Runs tests.
- Updates manifests or tags.
- Deploys to a pre-production project.
- Optionally, GitOps for promotion to higher environments.
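On OpenShift this pipeline is often expressed with Tekton (OpenShift Pipelines). A rough sketch of its shape is shown below; the task names and parameters are placeholders standing in for whatever tasks are available in the cluster (for example the git-clone and buildah catalog tasks), not a ready-to-run pipeline:

```yaml
# Pipeline skeleton matching the steps above.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: myapp-ci
spec:
  params:
    - name: git-url
      type: string
    - name: image
      type: string
  workspaces:
    - name: source
  tasks:
    - name: fetch
      taskRef:
        name: git-clone                 # clones the repo into the shared workspace
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-and-push
      runAfter: [fetch]
      taskRef:
        name: buildah                   # builds the image and pushes it to the registry
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
    - name: run-tests
      runAfter: [build-and-push]
      taskRef:
        name: run-tests                 # hypothetical task: unit/integration tests
    - name: deploy-to-dev
      runAfter: [run-tests]
      taskRef:
        name: apply-manifests           # hypothetical task: applies manifests or bumps tags
```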
The onboarding workflow typically ends when:
- The app’s full lifecycle (build, test, deploy, update, rollback) is automated and repeatable on OpenShift.
Configuration and Secrets Change Workflow
Changes to configuration and secrets are frequent and must be low-risk. Typical patterns:
Config Change (Non-Secret)
- Edit configuration values
- Edit the ConfigMap manifest in Git.
- Or use oc edit configmap ... (less favored in GitOps setups).
- Apply the updated ConfigMap: oc apply -f configmap.yaml
- Trigger config reload
- If the app reads config on start: restart pods (e.g., oc rollout restart deployment/myapp).
- If the app supports live reload: it may watch mounted config files directly.
- Validate
- Check pods status and logs.
- Run basic functional checks.
In GitOps, the change is normally only made in Git, and an Operator reconciles it onto the cluster.
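As a small sketch (key names and values are illustrative), a non-secret change is usually just an edit to a manifest like this, committed to Git and re-applied:

```yaml
# Application ConfigMap; consumed either as environment variables or as a
# mounted file, depending on how the Deployment references it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"
  feature-flags.yaml: |
    newCheckout: false
    betaSearch: true
```

If the values are injected as environment variables, a restart (oc rollout restart deployment/myapp) is needed; files mounted from a ConfigMap are typically refreshed in the running pod after a short delay, but the application still has to re-read them.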
Secret Change (Credentials, Tokens, Keys)
- Change secrets in a secure system (vault, secret manager, or encrypted Git).
- Secret is synchronized or updated in OpenShift: oc apply -f secret.yaml
- Restart or roll out the application so it picks up new values (if not hot-reload capable).
- Confirm:
- Connections still succeed (databases, APIs).
- No authentication failures in logs.
A common rule is: never patch secrets directly from local files for production; use approved, audited workflows.
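For reference, the object that ends up in the cluster is a plain Secret like the sketch below; in practice the values are generated or synchronized by the vault/secret-management tooling rather than typed into a file by hand:

```yaml
# Secret sketch; the values shown are placeholders only.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-db-credentials
type: Opaque
stringData:                 # stringData is base64-encoded into data on admission
  DB_USER: myapp
  DB_PASSWORD: change-me
```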
Multi-Environment Promotion Workflow
Applications rarely exist only in one project. A typical path is:
myapp-dev → myapp-test → myapp-prod
Different teams customize this, but the common elements are:
Separation of Concerns
- Each environment lives in its own OpenShift project(s).
- Shared patterns:
- Config and secrets differ per environment.
- Resource requests/limits and scaling policies are higher in test/prod.
Promotion Approaches
1. Image Tag Promotion
- Build the image once (e.g., in the dev pipeline).
- Test in dev.
- Promote by retagging the image in a registry:
- e.g., myapp:1.2.3-dev → myapp:1.2.3-test → myapp:1.2.3-prod.
- Env-specific deployments are wired to use different tags.
Benefits:
- Single build artifact.
- Clear traceability between environments.
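The retag itself is often done with oc tag or a registry copy tool such as skopeo, while the "wired to different tags" part can be as simple as a per-environment Kustomize overlay. A sketch, with hypothetical paths, registry, and tags:

```yaml
# overlays/test/kustomization.yaml -- the only change during promotion is newTag.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.example.com/team/myapp   # image name as written in the base manifests
    newTag: 1.2.3-test                       # bump to promote a tested build into this environment
```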
2. GitOps-Based Promotion
- Separate Git repos or branches per environment.
- A successful dev/test result triggers:
- A pull request to update manifests (or image tags) in the next environment’s repo/branch.
- A GitOps Operator watches the repo and applies changes to the respective cluster/namespace.
- Auditing and approvals happen via Git workflows.
This is increasingly common for OpenShift deployments.
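Assuming the Argo CD-based OpenShift GitOps operator (other GitOps tools use similar concepts), the "watching" side is typically an Application resource per environment, sketched below with hypothetical repo URLs and namespaces:

```yaml
# One Application per environment; promotion = merging a PR that changes the
# manifests (or image tags) under the path this Application tracks.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-test
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/myapp-deploy.git
    targetRevision: main
    path: overlays/test
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-test
  syncPolicy:
    automated:              # optional: apply changes without a manual sync
      prune: true
      selfHeal: true
```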
Incident, Debug, and Rollback Workflows
When something goes wrong, teams follow repeatable operational workflows.
Observed Problem
- Alert fires (from monitoring).
- User reports high latency or errors.
- A deployment triggers failing health checks.
Triage Workflow
Common initial steps:
- Identify failing components
- oc get pods and filter by STATUS.
- Check events: oc describe pod ...
- Gather logs and metrics
- Application logs: oc logs (with -p for the previous container).
- Metrics and dashboards (from the built-in monitoring stack).
- Logging stack (e.g., Loki/Elastic) for cluster-wide logs.
- Reproduce issue
- Use the app’s route URL.
- For backend services, use oc rsh into a pod to run curl or other test clients.
Common Debug Actions
- Check environment-specific configuration via ConfigMap and Secret.
- Confirm the correct image tag and version in the Deployment.
- Inspect recent deployments: oc rollout history deployment/myapp
- Compare working vs. broken environment or version.
Rollback Workflow
Depending on the deployment mechanism:
Rolling Back a Deployment/DeploymentConfig
- Check rollout history.
- Roll back to a previous version:
- For Deployment: update the image tag, or use oc rollout undo if using native Kubernetes semantics.
- For DeploymentConfig: oc rollout undo dc/myapp.
- Monitor the new rollout and validate.
GitOps Rollback
- Revert or restore a previous commit in Git.
- Wait for Operator to reconcile, or force sync.
- Confirm application behavior.
Rollbacks are often coupled with incident reports and post-incident reviews to improve future workflows.
Scaling and Resource Adjustment Workflows
Scaling is a core operational workflow for production and HPC-style workloads.
Manual Scale Up/Down
- Change the number of replicas: oc scale deployment myapp --replicas=5
- Confirm: oc get pods to ensure all new pods are running.
- Monitor metrics (CPU, memory, latency) for impact.
This pattern is widely used during:
- Flash traffic events.
- Stress tests.
- Batch window peaks (e.g., large MPI or GPU jobs).
Autoscaling Tuning
While separate chapters cover autoscaling details, the workflow around it typically includes:
- Analyze baseline consumption from metrics.
- Adjust the HorizontalPodAutoscaler or other scaling policies.
- Validate that scale-out/in events happen as expected.
- Set guardrails (max replicas, scheduling constraints).
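The HorizontalPodAutoscaler adjustment in step 2 usually amounts to editing a manifest along these lines; the target utilization and replica bounds are illustrative and should be derived from the baseline metrics:

```yaml
# CPU-based HPA sketch with a max-replica guardrail.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10                     # guardrail: upper bound for scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70      # scale out when average CPU exceeds ~70% of requests
```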
In HPC-like scenarios with GPU or large-memory pods, scaling also involves confirming resource availability and queueing behavior (possibly through custom controllers or Operators).
Maintenance and Change Windows
Routine maintenance (cluster-level or node-level) intersects application workflows.
Planned Changes
Typical steps for app teams:
- Receive notice of cluster maintenance window from platform team.
- Confirm apps are using:
- Multiple replicas.
- Proper readiness/liveness probes.
- Optionally:
- Temporarily increase replicas to absorb disruptions.
- Disable or adjust batch submissions.
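The "multiple replicas" and "proper probes" items above boil down to a few fields on the workload; a minimal sketch, in which the paths, port, and thresholds are assumptions:

```yaml
# Settings that let an app ride out node drains during maintenance.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                          # more than one replica so a drained node doesn't take the app down
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/team/myapp:1.2.3
          readinessProbe:              # keeps traffic away until the pod can serve
            httpGet:
              path: /healthz/ready
              port: 8080
            periodSeconds: 5
          livenessProbe:               # restarts a pod that has wedged
            httpGet:
              path: /healthz/live
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
```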
After maintenance:
- Perform health checks.
- Validate critical workflows.
- Review logs for anomalies.
OpenShift-Specific Maintenance for Apps
Common repetitive actions:
- Rotating TLS certificates used in Routes.
- Refreshing service account tokens or keys.
- Reapplying quotas and limits when scaling environments.
Typical Workflows for Batch and HPC-Style Jobs
On OpenShift clusters supporting HPC and specialized workloads, additional patterns emerge.
Batch Job Submission Workflow
- Prepare container image with necessary tools/libraries (e.g., MPI, scientific stacks).
- Define a Job or a custom CR (via an Operator for batch/HPC).
- Submit the job: oc apply -f job.yaml
- Monitor: oc get jobs and oc logs for the job pods.
- Collect results from:
- Persistent storage.
- Object storage or external systems.
Repeat as needed, potentially via automation or workflow engines (e.g., Airflow, Argo Workflows).
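A minimal Job manifest for this workflow might look like the sketch below; the image, command, resource numbers, and PVC name are illustrative:

```yaml
# Batch job sketch; submitted with oc apply -f job.yaml and monitored with
# oc get jobs / oc logs.
apiVersion: batch/v1
kind: Job
metadata:
  name: analysis-run-001
spec:
  completions: 1
  backoffLimit: 2                      # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example.com/hpc/analysis:2.4
          command: ["python", "run_analysis.py", "--input", "/data/input"]
          resources:
            requests:
              cpu: "4"
              memory: 16Gi
            limits:
              cpu: "4"
              memory: 16Gi
          volumeMounts:
            - name: data
              mountPath: /data          # results are written to persistent storage
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: analysis-data
```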
Parallel/MPI Job Workflow
- Use specialized MPI container image.
- Define a job manifest that:
- Requests appropriate resources (CPU, memory, GPUs).
- Ensures pods are co-scheduled as required (via Operators or custom job controllers).
- Submit job and monitor its pods and logs.
- Validate intra-pod communication and performance metrics.
- Iterate on:
- MPI configuration.
- Node placement constraints.
- Resource requests/limits.
These workflows are usually integrated with standard OpenShift practices (CI/CD for images, GitOps for manifests, metrics-based tuning), but tailored to HPC patterns like job queues and tightly coupled parallel runs.
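As one concrete shape this can take, the sketch below assumes the Kubeflow MPI Operator is installed and uses its MPIJob CR; other batch/HPC Operators expose different CRs, and the image, replica counts, and GPU numbers are placeholders:

```yaml
# Tightly coupled MPI run: one launcher plus co-scheduled workers.
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: cfd-solver
spec:
  slotsPerWorker: 4                    # ranks per worker pod
  runPolicy:
    cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
            - name: launcher
              image: registry.example.com/hpc/cfd-mpi:1.0
              command: ["mpirun", "-np", "8", "/opt/solver/run"]   # 2 workers x 4 slots = 8 ranks
    Worker:
      replicas: 2
      template:
        spec:
          containers:
            - name: worker
              image: registry.example.com/hpc/cfd-mpi:1.0
              resources:
                limits:
                  cpu: "4"
                  memory: 32Gi
                  nvidia.com/gpu: 2    # per-worker GPUs; requires the GPU device plugin/operator
```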
Putting It Together: Composite Workflow Example
A realistic OpenShift usage often combines several of these workflows:
- Developer updates code and pushes to Git.
- CI/CD pipeline:
- Builds container image.
- Runs tests.
- Updates GitOps manifests or image tags.
- GitOps promotion moves changes to dev → test.
- Configuration tweaks are applied via ConfigMaps/Secrets.
- Autoscaling policies are adjusted based on observed metrics.
- A run of batch/HPC jobs is triggered after deployment to generate data or perform heavy analysis.
- If an issue is observed:
- Incident workflow kicks in (logs, metrics, debug).
- Rollback occurs via deployment undo or Git revert.
- After stabilization:
- The version is promoted to production, following the same systematic flows.
These patterns—rather than individual commands or YAML fields—are what define “typical OpenShift workflows” in day-to-day practice.