Project overview and objectives
In this final project component, you will deploy a simple containerized application to OpenShift and expose it for external access. The goal is to demonstrate an end‑to‑end workflow using the OpenShift CLI and/or web console:
- Start from a container image in a registry (or from source, if you choose).
- Create and configure the necessary OpenShift resources.
- Deploy the application and verify it runs correctly.
- Expose the application and validate external access.
- Capture key information for later reporting and discussion.
You are not expected to design complex architectures here; the focus is on correctly using OpenShift primitives and understanding how they fit together for a typical deployment.
Prerequisites and environment setup
Before starting, ensure you have:
- Access to a running OpenShift cluster (shared lab, local, or managed service).
- A project/namespace you can use:
- Either a pre‑created project assigned to you.
- Or the ability to create one, e.g.
oc new-project <your-name>-final
- oc CLI installed and logged in: oc whoami should return your username.
- A container image to deploy. For the base project you can use:
- A public example, e.g. quay.io/redhattraining/hello-world-nginx or docker.io/library/nginx:alpine.
- Or your own image pushed to a registry you can access.
Make sure you know:
- Your project name.
- The cluster console URL.
- The registry URL if you are using a private image (credentials, if needed).
Choosing an application for deployment
Your application should be:
- Simple, stateless, and HTTP‑based for easy validation.
- Listening on at least one TCP port (typically 80 or 8080).
- Healthy when started without extra configuration, or with minimal environment variables.
Examples:
- A static site served by Nginx or Apache.
- A basic REST API written in Python/Node.js/Go.
- A small app used in previous course labs, rebuilt into a container.
Decide upfront:
- Which image and tag you will use.
- What port the container listens on.
- Whether it needs configuration via environment variables or ConfigMaps/Secrets.
- Whether you want to deploy from:
- Existing image (simplest final project variant), or
- Source repository (using S2I or a Dockerfile, if your environment supports it).
Deployment workflow overview
For the project, you should follow a clear sequence:
- Prepare project/namespace: create or select a project.
- Create application deployment:
- Using Deployments or DeploymentConfigs (as applicable in your environment).
- Specify container image, ports, environment, and resource requests/limits.
- Create a Service:
- To provide stable networking to your Pods.
- Expose the Service externally:
- Using a Route (OpenShift) or Ingress (if required).
- Verify deployment:
- Check Pod status, logs, events, and resource usage.
- Test access:
- From inside the cluster (service) and from outside (route/URL).
- Document results:
- Commands used, screenshots, issues encountered, and fixes.
The rest of this chapter walks you through each step in a way you can adapt to your chosen application.
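Using the example names adopted in the rest of this chapter, the whole sequence condenses to a few commands. This is a sketch, not the only valid path; adjust the project name, image, ports, and resource names to your own choices:

```shell
oc new-project <your-name>-final                      # 1. project
oc create deployment hello-final \
  --image=quay.io/redhattraining/hello-world-nginx    # 2. deployment
oc expose deployment/hello-final --port=80 \
  --target-port=8080 --name=hello-final-svc           # 3. service
oc expose service hello-final-svc \
  --name=hello-final-route                            # 4. route
oc get pods                                           # 5. verify
curl "http://$(oc get route hello-final-route -o jsonpath='{.spec.host}')"  # 6. test
```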
Step 1: Create and prepare your project
If not already given a project, create one:
oc new-project <your-name>-final \
  --display-name="<Your Name> Final Project"
Verify the active project:
oc project
Optionally, label your project to help identify final project work:
oc label namespace <your-name>-final project=final
In the web console:
- Go to Home → Projects → Create Project.
- Enter a name and display name.
- Confirm your new project is selected in the project dropdown.
Step 2: Create the application deployment
You can create the deployment from the web console or the CLI. Both approaches are valid; for the final project, be prepared to describe which method you used and why.
2.1 Using the web console
- Ensure your project is selected.
- Navigate to Developer perspective.
- Choose one of the following:
- From Container Image:
- Click +Add → Container Image.
- Enter the image name, e.g. quay.io/redhattraining/hello-world-nginx.
- Select the target Resource type:
- Typically Deployment for most workloads.
- Set the Application name (logical group) and Name (resource name).
- Optionally set:
- Container port (if not auto‑detected).
- Environment variables.
- Resource requests/limits (CPU, memory).
- Enable Create a route to the application if you want OpenShift to create a Service and Route automatically.
- From Source (optional advanced variant):
- Click +Add → From Git (or similar).
- Fill in Git URL and builder image or Dockerfile strategy.
- OpenShift may create a BuildConfig, ImageStream, and Deployment automatically.
- Click Create.
- After creation:
- Go to Topology view.
- Confirm your application appears as a circle (or card) with Pods created.
2.2 Using the CLI
A minimal deployment using oc create deployment:
oc create deployment hello-final \
  --image=quay.io/redhattraining/hello-world-nginx
If your application listens on a non‑standard port, specify it later in the Service (next step). To set environment variables or resource requests, you can patch or edit:
oc set env deployment/hello-final APP_MESSAGE="Hello from OpenShift"
oc set resources deployment/hello-final --requests=cpu=50m,memory=128Mi
Alternatively, create a YAML manifest (recommended for reproducibility):
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-final
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-final
  template:
    metadata:
      labels:
        app: hello-final
    spec:
      containers:
      - name: web
        image: quay.io/redhattraining/hello-world-nginx
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
EOF
Adjust image, containerPort, and resources to fit your application.
Step 3: Create a Service for internal access
If you did not let the web console auto‑create a Service, or if you created your Deployment from the CLI, you need to define a Service manually.
3.1 Using the CLI
Expose a TCP port from your deployment:
oc expose deployment/hello-final \
  --port=80 \
  --target-port=8080 \
  --name=hello-final-svc
- --port: the port that the Service will expose inside the cluster.
- --target-port: the container port (from your Deployment).
Verify:
oc get svc hello-final-svc
Check that the selector labels match the Deployment’s Pod labels:
oc get svc hello-final-svc -o yaml | grep -A3 selector
oc get pods --show-labels
If needed, adjust labels on the Deployment Pod template to align with the Service selector.
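For reproducibility, the same Service can also be written as a manifest. This is a sketch matching the names used above (hello-final-svc, Service port 80 mapped to container port 8080); adjust the selector and ports to your application:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-final-svc
spec:
  selector:
    app: hello-final    # must match the Pod template labels
  ports:
  - name: http
    port: 80            # port exposed inside the cluster
    targetPort: 8080    # containerPort from the Deployment
```

Apply it with oc apply -f, the same way as the Deployment manifest in Step 2.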
3.2 Using the web console
- In Developer → Topology, click your application.
- In the side panel:
- If no Service exists, click Create Service (or Add → Service).
- Specify:
- Port name (e.g. http).
- Service port (e.g. 80).
- Target port (your container port, such as 8080).
- Save and verify that the Service appears under Resources for your application.
Step 4: Expose the application externally
OpenShift typically uses Routes to provide external HTTP/HTTPS access.
4.1 Create a Route from the CLI
Assuming you have hello-final-svc:
oc expose service hello-final-svc \
  --name=hello-final-route
Check the route:
oc get route hello-final-route
Note the HOST/PORT column; combine it with http:// or https:// as appropriate to get your test URL.
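Equivalently, a Route can be declared as a manifest. A minimal sketch, assuming the Service name used above; the host field is omitted so OpenShift generates one automatically:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-final-route
spec:
  to:
    kind: Service
    name: hello-final-svc
  port:
    targetPort: http    # Service port name, or a number such as 80
```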
4.2 Create a Route from the web console
- In Developer → Topology, click your application.
- In the side panel, under Routes, click Create Route (if not already present).
- Configure:
- Name (e.g. hello-final-route).
- Hostname:
- Leave blank for auto‑generated, or
- Specify a custom host if your environment supports it.
- Target port: your Service’s HTTP port.
- TLS options if HTTPS is configured in your cluster.
- Create the route and note the URL shown.
Step 5: Validate your deployment
Perform a series of basic checks and validations. Capture output or screenshots for your final report.
5.1 Verify Pod health and rollout
Check Pods:
oc get pods
Describe the Deployment and Pods for more detail:
oc describe deployment hello-final
oc describe pod <pod-name>
Look for:
- Desired vs current replicas.
- Container image and restarts.
- Events indicating scheduling or image pull issues.
If you changed the Deployment (for example, to adjust the image tag), watch the rollout:
oc rollout status deployment/hello-final
5.2 Check application logs
View container logs:
oc logs deployment/hello-final
# or for a specific Pod:
oc logs <pod-name>
Confirm that the application starts successfully and is listening on the expected port. If you observe repeated restarts, note the error messages for your final report.
5.3 Test connectivity
Internal test using the Service:
- Option 1: Run a temporary debug Pod:
oc run tester --image=registry.access.redhat.com/ubi9/ubi-minimal -it --rm -- bash
curl http://hello-final-svc:80
- Option 2: Use oc debug on an existing Pod (if familiar and allowed).
External test using the Route:
- From your workstation or browser:
- Navigate to the Route URL, e.g. http://hello-final-route-<project>.<cluster-domain>.
- Confirm you see your application’s landing page or API response.
- Or use curl from the CLI:
curl http://$(oc get route hello-final-route -o jsonpath='{.spec.host}')
Capture the HTTP response and status code; mention them in your project documentation.
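To record just the status code for your report, one sketch using the route name from above:

```shell
# Print only the HTTP status code returned by the route.
curl -s -o /dev/null -w '%{http_code}\n' \
  "http://$(oc get route hello-final-route -o jsonpath='{.spec.host}')"
```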
Step 6: Basic configuration and scaling tasks
To demonstrate that your application is manageable on OpenShift, perform at least one configuration change and one scaling action.
6.1 Configuration change
Choose one or more small changes:
- Add or modify an environment variable:
oc set env deployment/hello-final APP_GREETING="Hello from final project"
oc rollout status deployment/hello-final
- Adjust resource requests or limits:
oc set resources deployment/hello-final \
--requests=cpu=100m,memory=128Mi \
--limits=cpu=300m,memory=256Mi
- Connect a ConfigMap or Secret (if applicable to your app).
Verify that a new rollout occurs and that Pods become Running again.
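If you choose the ConfigMap option, one minimal way to sketch it from the CLI (the ConfigMap name and key here are illustrative, not prescribed by the project):

```shell
# Create a ConfigMap and inject its keys into the Deployment as
# environment variables; this triggers a new rollout.
oc create configmap hello-final-config \
  --from-literal=APP_GREETING="Hello from a ConfigMap"
oc set env deployment/hello-final --from=configmap/hello-final-config
oc rollout status deployment/hello-final
```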
6.2 Scale the application
Increase or decrease the number of replicas:
oc scale deployment/hello-final --replicas=3
oc get pods -l app=hello-final
Optionally, use the web console:
- In Topology, click the application.
- Use the scale controls (slider or number) to change replicas.
Confirm that load is spread across Pods by hitting the Route multiple times (if your app can show which Pod handled a request).
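If your application reports which Pod served a request (for example by echoing its hostname), a quick way to sample the Route is to hit it repeatedly and count distinct responses. A sketch, assuming the route name used earlier and an app whose response identifies the Pod:

```shell
# Resolve the route host, then sample it 10 times and tally responses.
ROUTE_HOST=$(oc get route hello-final-route -o jsonpath='{.spec.host}')
for i in $(seq 1 10); do
  curl -s "http://${ROUTE_HOST}/"
  echo
done | sort | uniq -c | sort -rn
```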
Step 7: Collect artifacts and document your work
As part of the final project, you should produce a concise record of what you did. At minimum, capture:
- Resources created:
- Output of:
oc get all
oc get route
- Configuration:
- YAML for key resources:
oc get deployment hello-final -o yaml > deployment-final.yaml
oc get svc hello-final-svc -o yaml > service-final.yaml
oc get route hello-final-route -o yaml > route-final.yaml
- Evidence of successful access:
- HTTP responses from curl or browser screenshots.
In your write‑up (or presentation), briefly address:
- What image or source you deployed and why.
- Any configuration or resource tuning you applied.
- How you exposed the application to external users.
- Problems you encountered and how you solved them.
- How you would improve or extend this deployment (e.g., add CI/CD, logging, TLS).
Optional extensions for advanced variants
If time and environment allow, you can enhance your project by:
- Using a BuildConfig to build from source within OpenShift.
- Integrating with ConfigMaps and Secrets for configuration and credentials.
- Adding probes (readiness/liveness) to your Deployment spec.
- Defining a ResourceQuota or LimitRange in your project and showing its effect.
- Configuring TLS for your Route, using edge termination or re‑encrypt.
These are not required for a basic pass but will deepen your understanding and can strengthen your final project evaluation.
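As one example of the probes extension, here is a sketch of readiness and liveness probes that could be merged into the Deployment container spec from Step 2. The path / on port 8080 is an assumption based on the example image; use a dedicated health endpoint if your application provides one:

```yaml
# Merge under spec.template.spec.containers[0] in the Deployment.
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```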