
Typical OpenShift workflows

From Idea to Running Application: A Day-in-the-Life View

This chapter focuses on how OpenShift is typically used in practice: the concrete sequences of steps teams follow day-to-day. Think of these as “recipes” or “patterns” that appear again and again, regardless of the specific application or organization.

We’ll look at common workflows from several perspectives: the developer inner loop, onboarding an application, configuration and secret changes, multi-environment promotion, incident handling and rollback, scaling, maintenance, and batch/HPC-style jobs.

Developer Inner Loop on OpenShift

The “inner loop” is what a developer does repeatedly during daily work: write code, run it, test it, and iterate. In OpenShift environments, this usually follows one of a few patterns.

Pattern 1: Source-to-Image (S2I) Driven Loop

Typical when the platform team encourages S2I and Git-based builds:

  1. Initial setup
    • Developer gets access to a project (namespace).
    • A BuildConfig and Deployment (or DeploymentConfig) are prepared, often via templates or GitOps.
  2. Code changes
    • Developer edits code locally.
    • Changes are pushed to a Git repository branch.
  3. Automatic build and image creation
    • A Git webhook triggers an OpenShift Build (BuildConfig).
    • S2I pulls the source, combines it with a builder image, and produces a new runtime image.
    • The image is pushed into the internal registry.
  4. Automatic deployment
    • The associated Deployment/DeploymentConfig detects the new image.
    • A rollout of the new application version occurs (usually rolling).
  5. Verification
    • Developer accesses the application via route/URL.
    • Uses OpenShift web console or oc to:
      • Inspect logs: oc logs
      • Check pod health: oc get pods
      • View events: oc describe pod ...
  6. Iterate
    • If the change is good: commit and push.
    • If there’s an issue: fix code, push again, repeat the build-rollout cycle.

Where CI/CD pipelines are used, some of these steps shift into Tekton or external CI tools, but the observable workflow (push → build → deploy → test) stays similar.
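As a rough command-line sketch of this loop (the application name myapp, the builder image, and the Git URL are placeholders, not from any specific project):

  # One-time setup: create an S2I-backed app from a Git repository
  oc new-app nodejs~https://git.example.com/team/myapp.git --name=myapp
  oc expose svc/myapp                      # create a Route for external access

  # Inner loop: trigger a build after pushing code (webhooks normally do this automatically)
  oc start-build myapp --follow            # watch the S2I build
  oc rollout status deployment/myapp       # newer clusters create a Deployment; older ones a DeploymentConfig

  # Verify
  oc get pods
  oc logs deployment/myapp
  oc get route myapp                       # URL to test in a browser or with curl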

Pattern 2: Container-First Loop (Local Build, Cluster Run)

Common when teams prefer Docker/podman builds or need custom images:

  1. Developer builds and tests container images locally (docker build or podman build).
  2. Image is pushed to a registry that OpenShift can access.
  3. OpenShift Deployment/DeploymentConfig is updated to use the new image tag (manually or via automation).
  4. Rollout is triggered and verified.
  5. Developer iterates on both Dockerfile and code.

This pattern is also common in HPC workloads where custom MPI/GPU images are curated by specialized teams.
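A minimal sketch of the container-first loop, assuming podman, an external registry the cluster can pull from, and a container named myapp inside the Deployment (all names are placeholders):

  # Build and test locally
  podman build -t registry.example.com/team/myapp:1.2.4 .
  podman run --rm -p 8080:8080 registry.example.com/team/myapp:1.2.4

  # Push to a registry OpenShift can access
  podman push registry.example.com/team/myapp:1.2.4

  # Point the Deployment at the new tag and watch the rollout
  oc set image deployment/myapp myapp=registry.example.com/team/myapp:1.2.4
  oc rollout status deployment/myapp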

Onboarding an Application to OpenShift

When an existing (non-containerized or external) application is moved onto OpenShift, teams usually follow a structured onboarding workflow.

Step 1: Assess and Containerize

Typical tasks (often outside the cluster at first):

  • Inventory the application’s runtime requirements: ports, resources, storage, external dependencies.
  • Write a Dockerfile (or choose an S2I builder image) and build a container image.
  • Run and test the image locally with docker or podman.

Outcome: a runnable container image and a basic understanding of runtime requirements (ports, resources, storage).

Step 2: Define OpenShift Resources

In practice, this usually becomes a set of YAML definitions, often version-controlled:

  • Deployment (or DeploymentConfig) describing pods, images, and resource requests/limits.
  • Service and Route exposing the application.
  • ConfigMaps and Secrets for configuration and credentials.
  • PersistentVolumeClaims where storage is required.

Teams often create a “starter” YAML for new apps, or use templates, Helm, Kustomize, or GitOps repositories.

Step 3: Create a Project and Apply Manifests

Typical sequence:

  1. Create or select the project (namespace) where the app will live.
  2. Apply manifests:
    • oc apply -f app.yaml
  3. Check object status:
    • oc get deployment
    • oc get pods
  4. Verify routing:
    • Confirm the app is reachable via its Route URL.
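In oc terms, the sequence above often looks like this (project, file, and route names are examples):

  oc new-project my-team-app                 # or: oc project my-team-app
  oc apply -f app.yaml                       # Deployment, Service, Route, ConfigMaps, ...
  oc get deployment,pods,svc,route           # check object status
  # confirm the app answers on its Route
  curl -k "https://$(oc get route myapp -o jsonpath='{.spec.host}')/"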

Step 4: Integrate with CI/CD

Most organizations standardize on:

  • Pipelines (OpenShift Pipelines/Tekton or external CI tools) that build, test, and push images.
  • GitOps repositories that hold the deployment manifests for each environment.

The onboarding workflow typically ends when:

  • Builds and deployments run without manual steps.
  • The application is reachable via its Route and passes basic checks.
  • Monitoring, logging, and alerting are wired up for the new app.

Configuration and Secrets Change Workflow

Configuration and secret management changes are frequent and must be low-risk. Typical patterns:

Config Change (Non-Secret)

  1. Edit configuration values
    • Edit ConfigMap manifest in Git.
    • Or use oc edit configmap ... (less favored in GitOps setups).
  2. Apply updated ConfigMap
    • oc apply -f configmap.yaml
  3. Trigger config reload
    • If app reads config on start:
      • Restart pods (e.g., oc rollout restart deployment/myapp).
    • If app supports live reload:
      • It may watch mounted config files directly.
  4. Validate
    • Check pods status and logs.
    • Run basic functional checks.

In GitOps, the change is normally only made in Git, and an Operator reconciles it onto the cluster.
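A sketch of the non-GitOps variant, assuming the app reads its configuration at startup and runs as deployment/myapp (placeholder names):

  oc apply -f configmap.yaml                 # apply the edited ConfigMap
  oc rollout restart deployment/myapp        # restart pods so they pick up the new values
  oc rollout status deployment/myapp
  oc logs deployment/myapp --tail=50         # spot-check that the app started cleanly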

Secret Change (Credentials, Tokens, Keys)

  1. Change secrets in a secure system (vault, secret manager, or encrypted Git).
  2. Secret is synchronized or updated in OpenShift:
    • oc apply -f secret.yaml
  3. Restart or rollout application so it picks up new values (if not hot-reload capable).
  4. Confirm:
    • Connections still succeed (databases, APIs).
    • No authentication failures in logs.

A common rule is: never patch secrets directly from local files for production; use approved, audited workflows.
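The cluster-side steps then look like the sketch below; in line with the rule above, secret.yaml is assumed to come from the approved secret-management workflow (vault sync or encrypted Git), not a hand-edited local file, and all names are placeholders:

  oc apply -f secret.yaml                    # apply the synchronized Secret
  oc rollout restart deployment/myapp        # pick up the new credentials
  oc rollout status deployment/myapp
  oc logs deployment/myapp --tail=50         # look for authentication errors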

Multi-Environment Promotion Workflow

Applications rarely exist only in one project. A typical path is development → test/staging → production, with each stage in its own project (namespace) or even its own cluster.

Different teams customize this, but the common elements are:

  • A separate project (or cluster) per environment.
  • The same image promoted through the stages rather than rebuilt for each one.
  • Environment-specific configuration kept in ConfigMaps, Secrets, or GitOps overlays.

Separation of Concerns

Builds and image promotion are typically owned by pipelines, while environment-specific configuration and approvals live with each environment’s manifests (overlays, templates, or GitOps repositories).

Promotion Approaches

1. Image Tag Promotion

  1. Build image once (e.g., in dev pipeline).
  2. Test in dev.
  3. Promote by retagging image in a registry:
    • e.g., myapp:1.2.3-dev → myapp:1.2.3-test → myapp:1.2.3-prod.
  4. Env-specific deployments are wired to use different tags.
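Two common ways to retag for promotion, shown as a sketch with placeholder registry, project, and tag names — copying the tag in an external registry, or retagging an ImageStream between projects in the internal registry:

  # External registry: copy the tested tag to the next environment's tag
  skopeo copy \
    docker://registry.example.com/team/myapp:1.2.3-dev \
    docker://registry.example.com/team/myapp:1.2.3-test

  # Internal registry: retag an ImageStream tag into another project
  oc tag myapp-dev/myapp:1.2.3 myapp-test/myapp:1.2.3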

Benefits:

  • The exact image artifact tested in dev is the one that runs in production.
  • No rebuilds between environments, so fewer sources of drift.
  • Tags make it easy to trace which version is deployed where.

2. GitOps-Based Promotion

  1. Separate Git repos or branches per environment.
  2. A successful dev/test result triggers:
    • A pull request to update manifests (or image tags) in the next environment’s repo/branch.
  3. A GitOps Operator watches the repo and applies changes to the respective cluster/namespace.
  4. Auditing and approvals happen via Git workflows.

This is increasingly common for OpenShift deployments.
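As a sketch of the Git side, assuming a Kustomize-based layout with one overlay per environment (the repository structure, branch, and names are illustrative, not prescribed):

  # Update the image tag in the test environment's overlay
  cd overlays/test
  kustomize edit set image myapp=registry.example.com/team/myapp:1.2.3

  # Propose the promotion via a pull request; the GitOps Operator applies it once merged
  git checkout -b promote-myapp-1.2.3
  git commit -am "Promote myapp 1.2.3 to test"
  git push origin promote-myapp-1.2.3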

Incident, Debug, and Rollback Workflows

When something goes wrong, teams follow repeatable operational workflows.

Observed Problem

Incidents usually surface as crash-looping or pending pods, failed rollouts, error responses on a Route, or alerts from the monitoring stack.

Triage Workflow

Common initial steps:

  1. Identify failing components
    • oc get pods and filter by STATUS.
    • Check events: oc describe pod ....
  2. Gather logs and metrics
    • Application logs: oc logs (with -p for previous container).
    • Metrics and dashboards (from built-in monitoring stack).
    • Logs stack (e.g., Loki/Elastic) for cluster-wide logs.
  3. Reproduce issue
    • Use the app’s route URL.
    • For backend services, use oc rsh into a pod to run curl or other test clients.
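Typical triage commands, assuming the problem is scoped to one project (pod names below are placeholders):

  oc get pods                                        # look for CrashLoopBackOff, Pending, Error
  oc get events --sort-by=.lastTimestamp             # recent scheduling, image, and probe events
  oc describe pod <failing-pod>                      # conditions, probe failures, OOM kills
  oc logs <failing-pod> -p                           # logs from the previously crashed container
  oc rsh <some-pod>                                  # run curl or other test clients from inside the cluster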

Common Debug Actions

  • oc describe on pods and related objects to inspect events and conditions.
  • oc logs (with -p for the previous container) to inspect application output.
  • oc rsh into a running pod to test connectivity or inspect files.
  • oc debug to start a troubleshooting copy of a pod or node.
  • Temporarily adjusting log levels via ConfigMaps and restarting the deployment.

Rollback Workflow

Depending on the deployment mechanism, rollback takes one of the following forms:

Rolling Back a Deployment/DeploymentConfig

  1. Check rollout history.
  2. Roll back to a previous version:
    • For Deployment: update image tag, or use oc rollout undo if using native Kubernetes semantics.
    • For DeploymentConfig: oc rollout undo dc/myapp.
  3. Monitor the new rollout and validate.
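For a native Deployment named myapp, the rollback commands look like this (the revision number is an example):

  oc rollout history deployment/myapp               # list revisions
  oc rollout undo deployment/myapp                  # go back to the previous revision
  # or target a specific revision:
  oc rollout undo deployment/myapp --to-revision=3
  oc rollout status deployment/myapp                # watch and validate the rollback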

GitOps Rollback

  1. Revert or restore a previous commit in Git.
  2. Wait for Operator to reconcile, or force sync.
  3. Confirm application behavior.
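A minimal sketch of the Git side (the commit hash and branch are placeholders); the GitOps Operator then reconciles the reverted state onto the cluster:

  git log --oneline -5                     # find the commit that introduced the bad change
  git revert <bad-commit-sha>              # create a commit restoring the previous manifests
  git push origin main
  # then wait for reconciliation, or trigger a sync from the GitOps tool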

Rollbacks are often coupled with incident reports and post-incident reviews to improve future workflows.

Scaling and Resource Adjustment Workflows

Scaling is a core operational workflow for production and HPC-style workloads.

Manual Scale Up/Down

  1. Change number of replicas:
    • oc scale deployment myapp --replicas=5
  2. Confirm:
    • oc get pods to ensure all new pods are running.
  3. Monitor metrics (CPU, memory, latency) for impact.

This pattern is widely used during:

  • Load tests and planned traffic peaks.
  • Maintenance windows, to absorb node disruptions.
  • Incident mitigation, when a quick capacity bump buys time.

Autoscaling Tuning

While separate chapters cover autoscaling details, the workflow around it typically includes:

  1. Analyze baseline consumption from metrics.
  2. Adjust HorizontalPodAutoscaler or other scaling policies.
  3. Validate that scale-out/in events happen as expected.
  4. Set guardrails (max replicas, scheduling constraints).
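A common starting point is a CPU-based HorizontalPodAutoscaler; the thresholds below are examples to be tuned against the observed baseline:

  # Create or adjust an HPA for the deployment (example thresholds)
  oc autoscale deployment/myapp --min=2 --max=10 --cpu-percent=75

  # Validate that scale-out/in happens as expected
  oc get hpa
  oc describe hpa myapp                    # current metrics, events, scaling decisions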

In HPC-like scenarios with GPU or large-memory pods, scaling also involves confirming resource availability and queueing behavior (possibly through custom controllers or Operators).

Maintenance and Change Windows

Routine maintenance (cluster-level or node-level) intersects application workflows.

Planned Changes

Typical steps for app teams:

  1. Receive notice of cluster maintenance window from platform team.
  2. Confirm apps are using:
    • Multiple replicas.
    • Proper readiness/liveness probes.
  3. Optionally:
    • Temporarily increase replicas to absorb disruptions.
    • Disable or adjust batch submissions.

After maintenance:

  • Confirm all pods have been rescheduled and replica counts match expectations.
  • Check Routes, metrics, and logs for any lingering impact.
  • Scale back any temporary replica increases.

OpenShift-Specific Maintenance for Apps

Common repetitive actions:

  • Rebuilding images when base or builder images are updated.
  • Rotating Secrets and renewing Route certificates.
  • Pruning old builds, images, and completed jobs.
  • Reviewing resource requests/limits and quotas as usage changes.

Typical Workflows for Batch and HPC-Style Jobs

On OpenShift clusters supporting HPC and specialized workloads, additional patterns emerge.

Batch Job Submission Workflow

  1. Prepare container image with necessary tools/libraries (e.g., MPI, scientific stacks).
  2. Define Job or custom CR (via an Operator for batch/HPC).
  3. Submit job:
    • oc apply -f job.yaml
  4. Monitor:
    • oc get jobs
    • oc logs for job pods.
  5. Collect results from:
    • Persistent storage.
    • Object storage or external systems.

Repeat as needed, potentially via automation or workflow engines (e.g., Airflow, Argo Workflows).
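For a plain Kubernetes Job (rather than an Operator-managed custom resource), submission and monitoring might look like the sketch below; the job name and timeout are placeholders:

  oc apply -f job.yaml
  oc get jobs
  oc get pods -l job-name=myjob                             # pods created by the Job controller
  oc logs -f job/myjob                                      # follow output from one of the job's pods
  oc wait --for=condition=complete job/myjob --timeout=2h   # block until the job finishes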

Parallel/MPI Job Workflow

  1. Use specialized MPI container image.
  2. Define a job manifest that:
    • Requests appropriate resources (CPU, memory, GPUs).
    • Ensures pods are co-scheduled as required (via Operators or custom job controllers).
  3. Submit job and monitor its pods and logs.
  4. Validate intra-pod communication and performance metrics.
  5. Iterate on:
    • MPI configuration.
    • Node placement constraints.
    • Resource requests/limits.

These workflows are usually integrated with standard OpenShift practices (CI/CD for images, GitOps for manifests, metrics-based tuning), but tailored to HPC patterns like job queues and tightly coupled parallel runs.

Putting It Together: Composite Workflow Example

A realistic OpenShift usage often combines several of these workflows:

  1. Developer updates code and pushes to Git.
  2. CI/CD pipeline:
    • Builds container image.
    • Runs tests.
    • Updates GitOps manifests or image tags.
  3. GitOps promotion moves changes from dev to test.
  4. Configuration tweaks are applied via ConfigMaps/Secrets.
  5. Autoscaling policies are adjusted based on observed metrics.
  6. A run of batch/HPC jobs is triggered after deployment to generate data or perform heavy analysis.
  7. If an issue is observed:
    • Incident workflow kicks in (logs, metrics, debug).
    • Rollback occurs via deployment undo or Git revert.
  8. After stabilization:
    • The version is promoted to production, following the same systematic flows.

These patterns—rather than individual commands or YAML fields—are what define “typical OpenShift workflows” in day-to-day practice.
