Goals of this Final Exercise
In this part of the final project, you will:
- Apply OpenShift security features to a real application.
- Configure and verify basic compliance with security best practices.
- Configure scaling (horizontal and, optionally, vertical).
- Observe behavior under load and during failures.
- Produce evidence (YAML, commands, screenshots) that your application is both secured and scalable.
This chapter assumes you already have a working application deployed for the final project and basic CI/CD in place from previous chapters.
Scenario Overview
You are responsible for a simple web application running on OpenShift:
- Stateless HTTP API or web frontend (container image chosen earlier in the project).
- Backed by a database or persistent store (optional but common).
- Deployed in a dedicated project/namespace.
- Accessible via a Route.
Your tasks:
- Harden the runtime security of the application.
- Protect configuration and secrets.
- Apply network-level protections.
- Implement autoscaling and resilience.
- Demonstrate that your configuration works (tests, logs, metrics).
1. Preparing the Baseline Deployment
Before adding security and scaling, make sure you have:
- A Deployment or DeploymentConfig for your app.
- A Service and Route exposing it.
- At least one ConfigMap or Secret (for configuration and credentials).
- Resource requests/limits defined for the application containers.
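A quick way to confirm these objects already exist before you continue (a sketch; substitute your own project name):

oc get deployment,service,route,configmap,secret -n <your-project>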
Typical starting point (simplified example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: quay.io/example/demo-app:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "50m"
            memory: "128Mi"
          limits:
            cpu: "200m"
            memory: "256Mi"

You will progressively refine this with security and scaling configuration.
2. Hardening Runtime Security
In this project phase you apply OpenShift security features to your workload. You do not need to design a full security model for the cluster, but you must improve the pod-level configuration.
2.1 Run as Unprivileged User
Your pods should not run as root unless absolutely required.
Tasks:
- If your image supports it, configure the container to run as a non-root UID:
- Prefer: set an unprivileged user in the image.
- Alternatively: use securityContext in the pod spec.
Example (pod-level):
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
  containers:
  - name: demo-app
    image: quay.io/example/demo-app:latest
    securityContext:
      allowPrivilegeEscalation: false

Verification:
- oc describe pod <pod-name> and check the User/Security Context fields.
- Confirm that the pod is not rejected by Security Context Constraints (SCC).
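You can also confirm the effective user inside a running container, assuming the image provides the id command:

oc exec <pod-name> -- id

The reported UID should be non-zero.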
2.2 Minimize Capabilities and Privileges
Avoid privileged containers and unnecessary Linux capabilities.
Tasks:
- Ensure the pod does not run as privileged: true.
- Drop capabilities if your container does not need them.
Example:
containers:
- name: demo-app
  image: quay.io/example/demo-app:latest
  securityContext:
    privileged: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]

Verification:
- oc get pod <pod> -o yaml to verify these flags.
- If something breaks, identify which capability (if any) is needed and add only that.
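If you prefer to inspect only the security-related fields instead of the full manifest, a jsonpath query narrows the output (a sketch; adjust the container index if you run sidecars):

oc get pod <pod> -o jsonpath='{.spec.containers[0].securityContext}'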
2.3 Limit File System Access
Enforce read-only root filesystem if your app can tolerate it.
Tasks:
- Configure container filesystem as read-only.
- Use emptyDir or other writable volumes only where needed (e.g., temp files).
Example:
spec:
  volumes:
  - name: tmp
    emptyDir: {}
  containers:
  - name: demo-app
    image: quay.io/example/demo-app:latest
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    securityContext:
      readOnlyRootFilesystem: true

Verification:
- Application still runs correctly (test write paths).
- Attempts to write outside the writable volumes should fail.
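A concrete way to test both expectations from inside the running container (assuming the image provides a shell and touch; the paths are illustrative):

oc exec <pod> -- touch /app/should-fail   # expected to fail with "Read-only file system"
oc exec <pod> -- touch /tmp/should-work   # expected to succeed: /tmp is a writable emptyDir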
3. Protecting Configuration and Secrets
In the final project you must demonstrate correct handling of configuration and sensitive data. Here you integrate earlier concepts into a concrete application.
3.1 Move Sensitive Data to Secrets
Tasks:
- Identify secrets: database credentials, API keys, tokens.
- Create Secret objects instead of hardcoding them.
Example secret:
oc create secret generic demo-db-secret \
  --from-literal=username=demo \
  --from-literal=password=changeMe

Mounting via environment variables:
containers:
- name: demo-app
  env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: demo-db-secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: demo-db-secret
        key: password

Verification:
- oc describe secret demo-db-secret (values are not shown in clear text).
- oc exec <pod> -- env | grep DB_ to confirm injection.
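Keep in mind that Secret values are only base64-encoded in the API object, so anyone with read access to the Secret can decode them. Decoding one value is also a quick way to confirm what was actually stored (do not paste decoded output into your report):

oc get secret demo-db-secret -o jsonpath='{.data.username}' | base64 -d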
3.2 Separate Non-Sensitive Configuration into ConfigMaps
Tasks:
- Store app configuration like feature flags, URLs, or log levels in a ConfigMap.
- Ensure that changing a ConfigMap does not expose secrets.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-config
data:
  LOG_LEVEL: "info"
  FEATURE_X_ENABLED: "true"

Mount via env:
containers:
- name: demo-app
  envFrom:
  - configMapRef:
      name: demo-app-config

Verification:
- Update the config with oc apply -f demo-app-config.yaml.
- Trigger a rollout to pick up new environment variables if required.
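Environment variables taken from a ConfigMap are read only when the container starts, so a changed value usually requires a new rollout. For a Deployment, one way to trigger it is:

oc rollout restart deployment/demo-app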
3.3 Avoid Leaking Secrets in Logs and CI
Tasks for the project report:
- Check application logs for accidental printing of credentials.
- Ensure CI/CD pipelines do not log secret values.
- If you use build-time variables, ensure secrets are injected only at runtime.
Evidence:
- Provide a short excerpt of logs (redacted if needed) proving that log messages do not include credentials.
- Summarize how secrets are passed from CI to OpenShift (e.g., sealed secrets, cluster-managed secrets, or manual).
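One way to produce the log excerpt mentioned above and spot-check for leaked credentials at the same time (a sketch; adjust the patterns to your own secret names and keys):

oc logs deployment/demo-app --tail=200 | grep -iE 'password|secret|token' || echo 'no obvious credential strings found'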
4. Network-Level Security
You will implement at least basic network controls around your application.
4.1 Restrict Pod-to-Pod Traffic with NetworkPolicies
Tasks:
- Identify which Pods or namespaces should talk to your application.
- Create a NetworkPolicy that:
- Denies all inbound traffic by default, then
- Allows only your front-end or ingress controller namespaces, as appropriate.
Example: allow only traffic from Pods with label role=frontend in the same namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-app-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: demo-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080

Verification:
- Confirm connectivity from allowed Pods.
- Confirm that a test Pod without the role=frontend label cannot reach the app (e.g., use curl from a debug pod).
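A throwaway client Pod is a convenient way to run both checks. The sketch below assumes an image with curl available (e.g., a UBI base image) and that the Service is named demo-app:

# Without the role=frontend label: the request should time out
oc run netpol-test --rm -it --restart=Never \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  curl -s --max-time 5 http://demo-app:8080/health

# With the label: the request should succeed
oc run netpol-test --rm -it --restart=Never \
  --labels=role=frontend \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  curl -s --max-time 5 http://demo-app:8080/health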
4.2 Secure External Access
Tasks:
- Ensure the Route uses TLS (edge or passthrough/reencrypt) as appropriate.
- If your application supports it, enforce HTTPS-only redirects.
Example: edge-terminated Route (conceptual snippet):
spec:
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

Verification:
- Access the app over HTTP and confirm redirection to HTTPS.
- Validate the certificate from a browser or with curl -v https://....
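If you create or recreate the Route from the CLI, the same TLS behavior can be requested directly (a sketch, assuming your Service is named demo-app):

oc create route edge demo-app \
  --service=demo-app \
  --insecure-policy=Redirect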
5. Enforcing Resource Controls and Quotas
Securing and scaling also involves controlling resource consumption.
5.1 Set Sensible Requests and Limits
Tasks:
- Tune CPU and memory requests/limits based on minimal working configuration and test load.
- Avoid setting extremely high limits that would allow a single Pod to starve others.
Example (per-container):
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"

Verification:
- Observe pod resource usage via the console or oc adm top pod.
- Adjust values where pods are frequently throttled or OOMKilled.
5.2 Use ResourceQuotas and LimitRanges (Project-Level)
For the final project you don’t have to design a full multi-team policy, but you should:
- Show a simple LimitRange or ResourceQuota applied to your project.
- Confirm that your workload fits within it and that misconfigured pods are rejected.
Example LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "256Mi"

Verification:
- Try to create a container without requests/limits and see defaults applied.
- Show that pods violating quotas are rejected with clear error messages.
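If you choose to demonstrate a ResourceQuota instead of (or in addition to) the LimitRange above, a minimal example could look like this (the values are illustrative and should fit your tuned requests/limits):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi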
6. Implementing Horizontal Scaling
The core of this section is configuring and validating horizontal pod autoscaling (HPA) for your app.
6.1 Create a Horizontal Pod Autoscaler
Prerequisites:
- Your container exposes metrics compatible with the OpenShift monitoring stack, or you will use CPU utilization as a signal.
- requests.cpu is set (HPA uses it to compute utilization).
Task:
- Configure an HPA for your Deployment or DeploymentConfig.
Example:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
Alternatively, use oc autoscale:
oc autoscale deployment/demo-app \
  --min=2 --max=6 \
  --cpu-percent=60

Verification:
- oc get hpa to confirm status and metrics.
- Initially, the HPA should show current replicas = 2 and utilization under the target.
6.2 Generate Load and Observe Scaling
Tasks:
- Use a simple load-generating tool (e.g., ab, hey, or a curl loop) from:
- A separate Pod in the same cluster, or
- An external client through the Route.
Example (inside a debug pod):
for i in {1..10000}; do curl -s http://demo-app:8080/health > /dev/null; done

Observation:
- Watch pods scale: watch -n5 oc get pods and oc get hpa.
- Document how long it takes for:
- HPA to detect high utilization.
- New pods to be created and become Ready.
- After stopping the load, note how long until the number of pods scales back down.
Evidence for your report:
- Screenshots or command output showing HPA status and pods count over time.
- Short explanation of why you chose the min/max replicas and target utilization.
7. Optional: Vertical Scaling and Right-Sizing
If time permits, explore vertical adjustments based on observed usage.
Tasks:
- Increase or decrease requests/limits to better match real usage.
- Observe whether HPA behavior changes.
- Optionally, experiment with a vertical pod autoscaler (if available in your environment), but this is not required.
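For the first task, one way to adjust requests and limits without editing the manifest by hand is oc set resources (a sketch; the values are illustrative):

oc set resources deployment/demo-app \
  --requests=cpu=100m,memory=192Mi \
  --limits=cpu=500m,memory=384Mi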
Report:
- Describe one adjustment you made (e.g., decreasing memory limit) and the observed effect on stability or performance.
8. Making the Application Resilient
Security and scaling are incomplete without resilience. You will use probes and replica strategies to improve availability.
8.1 Health Probes
Tasks:
- Configure a livenessProbe and readinessProbe (or a startupProbe if needed).
- Ensure that bad deployments are not considered Ready and that unhealthy pods are restarted.
Example:
containers:
- name: demo-app
  ports:
  - containerPort: 8080
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10

Verification:
- Break the app intentionally (e.g., simulate a deadlock or crash) and watch oc describe pod events for probe failures.
- Confirm that traffic is only routed to Ready pods.
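To confirm the second check, compare the Service endpoints with the pod list; only pods whose readiness probe passes appear as endpoints and therefore receive traffic (assuming the Service is named demo-app):

oc get endpoints demo-app
oc get pods -l app=demo-app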
8.2 Deployment Strategy and Rollback
Tasks:
- Use a rolling update strategy so that scaling and updates do not cause downtime.
- Demonstrate a rollback when a faulty, insecure, or misconfigured version is deployed.
Example (snippet):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

Experiment:
- Deploy a new version that intentionally fails health checks.
- Observe the rollout being paused or failing.
- Roll back to the previous version using oc rollout undo deployment/demo-app.
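While the experiment runs, these commands help you follow the rollout and confirm which revision is active after the rollback:

oc rollout status deployment/demo-app    # waits until the rollout completes or fails
oc rollout history deployment/demo-app   # lists revisions before and after the undo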
Evidence:
- Output of oc rollout history and oc get pods during the failed rollout.
- Short write-up of what went wrong and how the rollback restored service.
9. Validation and Reporting Requirements
For this final project component, you will submit a short report or walkthrough describing:
- Security changes:
- Non-root user and privilege settings.
- Use of Secrets and ConfigMaps.
- Network policies and Route TLS settings.
- Scaling behavior:
- HPA configuration (YAML snippet or command).
- Measurements: time to scale out/in under load.
- Final chosen min/max replicas and rationale.
- Resilience:
- Description of probes and deployment strategy.
- Evidence of successful recovery from simulated failures.
- Trade-offs and Lessons Learned:
- Where you balanced strict security vs. practicality (e.g., why you did not set readOnlyRootFilesystem in some cases).
- How scaling and security affected each other (e.g., HPA and resource limits, network policies and health checks).
Where possible, attach:
- Key YAML manifests (trimmed to relevant sections).
- Command outputs (oc get, oc describe, oc logs, oc get hpa).
- Optional screenshots from the OpenShift web console demonstrating security and scaling behavior.
By the end of this exercise, you should have a concrete, working example of an application that is both reasonably hardened and capable of scaling automatically under load, with clear evidence that your configuration behaves as intended.