
Deploying a containerized application

Project overview and objectives

In this final project component, you will deploy a simple containerized application to OpenShift and expose it for external access. The goal is to demonstrate an end‑to‑end workflow using the OpenShift CLI and/or web console: creating a project, deploying the application, exposing it with a Service and Route, validating it, and documenting the results.

You are not expected to design complex architectures here; the focus is on correctly using OpenShift primitives and understanding how they fit together for a typical deployment.

Prerequisites and environment setup

Before starting, ensure you have access to an OpenShift cluster, a working oc CLI login, and permission to create (or use) a project.

Make sure you know your cluster's web console URL and which credentials to use.

Choosing an application for deployment

Your application should be simple, stateless, and available as a prebuilt container image.

Examples include small web servers such as quay.io/redhattraining/hello-world-nginx.

Decide upfront which image you will use, which port the container listens on, and how many replicas you want to run.

Deployment workflow overview

For the project, you should follow a clear sequence:

  1. Prepare project/namespace: create or select a project.
  2. Create application deployment:
    • Using Deployments or DeploymentConfigs (as applicable in your environment).
    • Specify container image, ports, environment, and resource requests/limits.
  3. Create a Service:
    • To provide stable networking to your Pods.
  4. Expose the Service externally:
    • Using a Route (OpenShift) or Ingress (if required).
  5. Verify deployment:
    • Check Pod status, logs, events, and resource usage.
  6. Test access:
    • From inside the cluster (service) and from outside (route/URL).
  7. Document results:
    • Commands used, screenshots, issues encountered, and fixes.
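
For a CLI-only run, this sequence maps onto a short command sketch. The names and image here match the examples used later in this chapter; adjust them to your application:

oc new-project <your-name>-final
oc create deployment hello-final \
  --image=quay.io/redhattraining/hello-world-nginx
oc expose deployment/hello-final \
  --port=80 --target-port=8080 --name=hello-final-svc
oc expose service hello-final-svc --name=hello-final-route
oc get pods
curl http://$(oc get route hello-final-route -o jsonpath='{.spec.host}')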

The rest of this chapter walks you through each step in a way you can adapt to your chosen application.

Step 1: Create and prepare your project

If you have not already been assigned a project, create one:

oc new-project <your-name>-final \
  --display-name="<Your Name> Final Project"

Verify the active project:

oc project

Optionally, label your project to help identify final project work:

oc label namespace <your-name>-final project=final

In the web console:

  1. Go to Home → Projects → Create Project.
  2. Enter a name and display name.
  3. Confirm your new project is selected in the project dropdown.

Step 2: Create the application deployment

You can create the deployment from the web console or the CLI. Both approaches are valid; for the final project, be prepared to describe which method you used and why.

2.1 Using the web console

  1. Ensure your project is selected.
  2. Navigate to Developer perspective.
  3. Choose one of the following:
    • From Container Image:
      • Click +Add → Container Image.
      • Enter the image name, e.g. quay.io/redhattraining/hello-world-nginx.
      • Select the target Resource type:
        • Typically Deployment for most workloads.
      • Set the Application name (logical group) and Name (resource name).
      • Optionally set:
        • Container port (if not auto‑detected).
        • Environment variables.
        • Resource requests/limits (CPU, memory).
      • Enable Create a route to the application if you want OpenShift to create a Service and Route automatically.
    • From Source (optional advanced variant):
      • Click +Add → From Git (or similar).
      • Fill in Git URL and builder image or Dockerfile strategy.
      • OpenShift may create a BuildConfig, ImageStream, and Deployment automatically.
  4. Click Create.
  5. After creation:
    • Go to Topology view.
    • Confirm your application appears as a circle (or card) with Pods created.

2.2 Using the CLI

A minimal deployment using oc create deployment:

oc create deployment hello-final \
  --image=quay.io/redhattraining/hello-world-nginx

If your application listens on a non‑standard port, specify it later in the Service (next step). To set environment variables or resource requests, you can patch or edit:

oc set env deployment/hello-final APP_MESSAGE="Hello from OpenShift"
oc set resources deployment/hello-final --requests=cpu=50m,memory=128Mi

Alternatively, create a YAML manifest (recommended for reproducibility):

cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-final
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-final
  template:
    metadata:
      labels:
        app: hello-final
    spec:
      containers:
      - name: web
        image: quay.io/redhattraining/hello-world-nginx
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
EOF

Adjust image, containerPort, and resources to fit your application.

Step 3: Create a Service for internal access

If you did not let the web console auto‑create a Service, or if you created your Deployment from the CLI, you need to define a Service manually.

3.1 Using the CLI

Expose a TCP port from your deployment:

oc expose deployment/hello-final \
  --port=80 \
  --target-port=8080 \
  --name=hello-final-svc

Verify:

oc get svc hello-final-svc

Check that the selector labels match the Deployment’s Pod labels:

oc get svc hello-final-svc -o yaml | grep -A3 selector
oc get pods --show-labels

If needed, adjust labels on the Deployment Pod template to align with the Service selector.
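
As with the Deployment earlier, you can instead declare the Service as a manifest for reproducibility. This sketch assumes your Pods carry the app=hello-final label (which both oc create deployment and the earlier YAML example set):

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-final-svc
spec:
  selector:
    app: hello-final
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF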

3.2 Using the web console

  1. In Developer → Topology, click your application.
  2. In the side panel:
    • If no Service exists, click Create Service (or Add → Service).
    • Specify:
      • Port name (e.g. http).
      • Service port (e.g. 80).
      • Target port (your container port, such as 8080).
  3. Save and verify that the Service appears under Resources for your application.

Step 4: Expose the application externally

OpenShift typically uses Routes to provide external HTTP/HTTPS access.

4.1 Create a Route from the CLI

Assuming you have hello-final-svc:

oc expose service hello-final-svc \
  --name=hello-final-route

Check the route:

oc get route hello-final-route

Note the HOST/PORT column; combine it with http:// or https:// as appropriate to get your test URL.
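
The same Route can also be declared as a manifest. This sketch assumes the hello-final-svc Service from the previous step; because that Service has a single port, the Route's target port can be omitted, and OpenShift generates a hostname automatically:

cat <<'EOF' | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-final-route
spec:
  to:
    kind: Service
    name: hello-final-svc
EOF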

4.2 Create a Route from the web console

  1. In Developer → Topology, click your application.
  2. In the side panel, under Routes, click Create Route (if not already present).
  3. Configure:
    • Name (e.g. hello-final-route).
    • Hostname:
      • Leave blank for auto‑generated, or
      • Specify a custom host if your environment supports it.
    • Target port: your Service’s HTTP port.
    • TLS options if HTTPS is configured in your cluster.
  4. Create the route and note the URL shown.

Step 5: Validate your deployment

Perform a series of basic checks and validations. Capture output or screenshots for your final report.

5.1 Verify Pod health and rollout

Check Pods:

oc get pods

Describe the Deployment and Pods for more detail:

oc describe deployment hello-final
oc describe pod <pod-name>

Look for Pods in the Running state with all containers ready, READY counts matching the desired replica count, and recent events free of errors such as image pull or scheduling failures.

If you changed the Deployment (for example, to adjust the image tag), watch the rollout:

oc rollout status deployment/hello-final

5.2 Check application logs

View container logs:

oc logs deployment/hello-final
# or for a specific Pod:
oc logs <pod-name>

Confirm that the application starts successfully and is listening on the expected port. If you observe repeated restarts, note the error messages for your final report.

5.3 Test connectivity

Internal test using the Service:

  # Start a temporary interactive Pod, then run curl from inside its shell:
  oc run tester --image=registry.access.redhat.com/ubi9/ubi-minimal -it --rm -- bash
  curl http://hello-final-svc:80

External test using the Route:

  curl http://$(oc get route hello-final-route -o jsonpath='{.spec.host}')

Capture the HTTP response and status code; mention them in your project documentation.

Step 6: Basic configuration and scaling tasks

To demonstrate that your application is manageable on OpenShift, perform at least one configuration change and one scaling action.

6.1 Configuration change

Choose one or more small changes, such as setting an environment variable or adjusting resource requests and limits:

  oc set env deployment/hello-final APP_GREETING="Hello from final project"
  oc rollout status deployment/hello-final
  oc set resources deployment/hello-final \
    --requests=cpu=100m,memory=128Mi \
    --limits=cpu=300m,memory=256Mi

Verify that a new rollout occurs and that Pods become Running again.

6.2 Scale the application

Increase or decrease the number of replicas:

oc scale deployment/hello-final --replicas=3
oc get pods -l app=hello-final

Optionally, use the web console:

  1. In Topology, click the application.
  2. Use the scale controls (slider or number) to change replicas.

Confirm that load is spread across Pods by hitting the Route multiple times (if your app can show which Pod handled a request).
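
A simple way to send repeated requests from a terminal (this assumes the hello-final-route Route created in Step 4):

ROUTE_HOST=$(oc get route hello-final-route -o jsonpath='{.spec.host}')
for i in 1 2 3 4 5; do
  curl -s "http://${ROUTE_HOST}"
done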

Step 7: Collect artifacts and document your work

As part of the final project, you should produce a concise record of what you did. At minimum, capture the output of commands such as:

    oc get all
    oc get route
    oc get deployment hello-final -o yaml > deployment-final.yaml
    oc get svc hello-final-svc -o yaml > service-final.yaml
    oc get route hello-final-route -o yaml > route-final.yaml

In your write‑up (or presentation), briefly address what you deployed, how you exposed it, how you verified that it worked, and any issues you encountered and how you resolved them.

Optional extensions for advanced variants

If time and environment allow, you can enhance your project with optional extensions beyond the basic workflow.

These are not required for a basic pass but will deepen your understanding and can strengthen your final project evaluation.
