Core concepts of network security in OpenShift
In OpenShift, “network security” is mostly about controlling which workloads can talk to which other workloads, and how traffic enters and leaves the cluster, while integrating with Kubernetes’ native networking and OpenShift-specific features.
At a high level, you will work with:
- Kubernetes NetworkPolicies and the OpenShift SDN / OVN-Kubernetes implementation
- Ingress, Routes, and TLS for secure external access
- Service-level controls (service types, ports, and protocols)
- Internal DNS and name-based access control
- Integration with external firewalls and load balancers
This chapter focuses on these mechanisms specifically from a security perspective, not general networking behavior.
Default network behavior and security implications
By default (unless changed by cluster configuration):
- Pods in the same cluster network can usually talk to each other on any port and protocol.
- Services provide stable virtual IPs and DNS names, but do not restrict which pods can connect to them.
- External access is controlled by Routes/Ingress and cluster edge components (ingress controllers, load balancers, firewalls).
From a security standpoint, you should think about:
- Least privilege connectivity: only allow the traffic that is truly needed.
- Segmentation: separate different applications, environments (dev/test/prod), or tenants using namespaces and network policies.
- Zero trust mindset: do not rely on “everything inside the cluster is trusted.”
Most of this is enforced through NetworkPolicies and how you configure Routes/Ingress and external access.
NetworkPolicies: controlling pod-to-pod traffic
NetworkPolicies are the primary mechanism to secure traffic within the cluster.
What a NetworkPolicy does (security perspective)
- Lets you express which pods are allowed to receive traffic (ingress) from which sources.
- Lets you express which pods are allowed to send traffic (egress) to which destinations.
- Uses labels and selectors (not IP addresses alone) to define policies that follow your workloads as they move or scale.
- Is namespace-scoped: each policy applies to pods in its own namespace.
The effect of a NetworkPolicy depends on whether policies exist for the targeted pods:
- If no NetworkPolicy selects a pod, it typically remains fully open (cluster-wide connectivity).
- Once one or more NetworkPolicies select a pod, its traffic is restricted to only what those policies allow.
Whether egress is enforced depends on the network plugin: the legacy OpenShift SDN plugin does not enforce egress rules in NetworkPolicies, while OVN-Kubernetes (the default on modern OpenShift) supports both ingress and egress.
Basic structure of a NetworkPolicy
A NetworkPolicy generally defines:
- `podSelector`: which pods in this namespace the policy applies to.
- `policyTypes`: `Ingress`, `Egress`, or both.
- `ingress`: list of allowed inbound rules, each with `from` and optional `ports`.
- `egress`: list of allowed outbound rules, each with `to` and optional `ports`.
Example (conceptual):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - port: 8080
          protocol: TCP

Security interpretation:
- Only pods labeled `role=frontend` in the same namespace can connect to `role=backend` pods on TCP port 8080.
- All other ingress traffic to backend pods is denied (assuming this is the only policy selecting them).
Scoped isolation and multi-tenant security
NetworkPolicies are critical for:
- Multi-tenant clusters: prevent tenants from scanning or accessing other tenants’ pods/services.
- Environment separation (dev/test/prod): limit cross-env communication even when they share the same cluster.
- Service isolation: e.g., only allow API pods to talk to databases, not every pod in the namespace.
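The last pattern above can be sketched as an ingress policy on the database pods. The namespace, labels, and port here are illustrative, not prescribed names:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: myapp        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      tier: db            # selects the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: api   # only API pods in this namespace may connect
      ports:
        - protocol: TCP
          port: 5432      # e.g., PostgreSQL
```

Any pod in the namespace without the `tier=api` label is denied, even though the database Service remains discoverable via DNS.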
Common patterns:
- Default deny ingress:
  - Create a NetworkPolicy that selects all pods and has an empty `ingress` list.
  - Then add specific allow policies for needed traffic.
- Default deny egress (where supported):
  - Same idea, but cover `egress` and then explicitly allow traffic such as DNS, logging, or specific external endpoints.
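A minimal default-deny-ingress policy looks like this; the empty `podSelector` selects every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Adding `Egress` to `policyTypes` (with no `egress` rules) gives the corresponding default deny egress, on plugins that enforce egress.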
Namespaces, labels, and identity-based policies
Network security in OpenShift relies heavily on namespaces and labels as an identity layer.
Namespace-based isolation
You can reference whole namespaces in NetworkPolicies using namespaceSelector. This enables patterns like:
- Allow traffic only from a specific team namespace.
- Allow only pods in a “trusted” namespace to talk to a sensitive namespace.
Example:
spec:
  podSelector:
    matchLabels:
      app: payments
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              security-tier: trusted

Security implication: only pods in namespaces labeled `security-tier=trusted` may reach `app=payments` pods.
Label hygiene as a security tool
Because policies use labels for selection:
- Mislabeling a pod (e.g., giving it a `frontend` label) might grant it unintended access.
- Label schemes should be centrally defined and controlled to avoid accidental privilege escalation via labels.
Combining labels such as team, env (dev/test/prod), tier (frontend/backend/db) creates a rich “identity” for policy rules without relying on IPs.
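Such a combined label scheme might appear in a rule like the following (label keys and values are illustrative):

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            env: prod
        podSelector:
          matchLabels:
            team: payments
            tier: frontend
```

Note the semantics: placing `namespaceSelector` and `podSelector` in the same `from` entry means both must match (prod namespaces AND payments frontend pods); listing them as separate entries would allow traffic matching either one.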
Controlling egress traffic from pods
While ingress protects who can reach your pod, egress control protects what your pod can reach.
Key reasons to control egress:
- Prevent compromised pods from exfiltrating data or scanning the Internet.
- Enforce that services only call approved external endpoints (e.g., specific APIs).
- Ensure compliance by fixing where data can flow.
Egress patterns in OpenShift
Typical policies you may implement:
- Allow DNS (to the cluster’s DNS service).
- Allow logging/metrics traffic to observability backends.
- Allow application calls only to specific internal services or whitelisted external hosts/IPs.
Example (conceptual):
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443

Security interpretation: `app=backend` pods can only make TCP/443 connections to private IPs in 10.0.0.0/8. All other egress is denied.
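With default deny egress in place, you typically also need an explicit rule for DNS. On OpenShift, cluster DNS pods run in the `openshift-dns` namespace and listen on port 5353; this sketch assumes that namespace carries the standard `kubernetes.io/metadata.name` label:

```yaml
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-dns
    ports:
      - protocol: UDP
        port: 5353
      - protocol: TCP
        port: 5353
```

Without such a rule, pods under default deny egress cannot resolve any service names, which usually breaks the application before any other policy problem becomes visible.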
In more advanced setups, OpenShift clusters might integrate with:
- Egress IPs: mapping particular workloads to specific external IP addresses.
- External firewalls/proxies that enforce domain-based egress control.
Securing external access: Routes, Ingress, and TLS
Internal pod-to-pod traffic is one side of network security; the other is how you expose applications to the outside world.
In OpenShift:
- Routes (and increasingly Ingress) define external HTTP(S) entry points.
- Ingress controllers terminate or pass through TLS and forward traffic to Services.
- External load balancers / firewalls may front the ingress controllers at the network edge.
Security considerations for Routes / Ingress
From a security viewpoint, you need to consider:
- TLS configuration:
- Prefer HTTPS Routes with TLS termination.
- Use strong TLS versions and ciphers, as supported by the cluster.
- Keep certificates up to date and use trusted certificate authorities.
- Termination modes:
  - `edge`: TLS is terminated at the router; traffic to backend pods may be plain HTTP.
  - `passthrough`: TLS is passed through to the backend; the router does not inspect HTTP.
  - `reencrypt`: TLS is terminated at the router and re-established with the backend using a separate certificate.
- Security trade-offs:
  - `reencrypt` keeps traffic encrypted end-to-end within the cluster, improving confidentiality.
  - `edge` allows easier inspection and enforcement at the router but leaves internal hops in plaintext.
  - `passthrough` is useful when the application must control TLS, but the router cannot apply HTTP-layer policies.
- Hostnames and wildcard routes:
- Limit wildcard routes to trusted namespaces/teams.
- Ensure that multiple applications sharing a domain are properly isolated and not vulnerable to host header confusion.
- Exposure scope:
- Only create Routes/Ingress for services that must be accessible externally.
- Prefer exposing gateways or APIs, not internal services, to minimize attack surface.
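Putting these considerations together, a Route with edge termination and a forced HTTP-to-HTTPS redirect might look like this (hostname and service name are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: frontend.apps.example.com   # placeholder hostname
  to:
    kind: Service
    name: frontend                  # placeholder Service name
  port:
    targetPort: 8080
  tls:
    termination: edge               # TLS ends at the router
    insecureEdgeTerminationPolicy: Redirect   # force HTTP clients to HTTPS
```

Switching `termination` to `reencrypt` (and supplying a `destinationCACertificate`) keeps the router-to-pod hop encrypted as well.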
Securing Services and internal endpoints
Even though a Service by itself does not enforce access control, its configuration impacts security:
- Ports and protocols:
- Only expose necessary ports in Service definitions.
- Avoid unnecessary NodePort or LoadBalancer services, as they expose the service directly to external networks.
- Service types (security perspective):
  - `ClusterIP`: internal-only, safest baseline.
  - `NodePort`: exposes a port on every node's IP; often bypasses some ingress controls and should be used cautiously.
  - `LoadBalancer`: exposes the service via an external load balancer; ensure external firewall rules and network ACLs are set appropriately.
- Headless services:
- Still subject to NetworkPolicies.
- Be careful when exposing databases or stateful services via headless services; pair them with appropriate policies.
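A minimal internal-only Service exposing a single port illustrates the safest baseline (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP        # internal-only; also the default if type is omitted
  selector:
    app: backend
  ports:
    - name: https
      protocol: TCP
      port: 443          # port clients connect to
      targetPort: 8443   # port the pods actually listen on
```

Listing only the ports the application genuinely serves keeps the Service from advertising endpoints you never intended to expose.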
DNS, service discovery, and name-based isolation
OpenShift provides internal DNS for Services and Pods. From a security angle:
- DNS names are human-readable identities for workloads, but policies are enforced on IP/label level, not DNS.
- Applications often reference other services by their DNS names; ensure that:
- Your NetworkPolicies align with those dependencies.
- You do not accidentally allow broad connectivity just because a Service is easy to discover.
Some security best practices tied to DNS:
- Avoid depending solely on obscurity (non-obvious service names) for security.
- Use NetworkPolicies to ensure that even if a service name is known, only authorized pods can reach it.
Encryption in transit inside the cluster
Network security is not only about connectivity but also about confidentiality and integrity of traffic.
In OpenShift:
- Service mesh or application-level TLS can encrypt pod-to-pod traffic.
- Routes and Ingress can enforce TLS on external entry points.
- Depending on network plugin and configuration, some node-to-node traffic may be encrypted at the overlay level or rely on underlying network security.
Typical patterns:
- Use TLS between services that handle sensitive data, even when they communicate over the cluster network.
- Combine NetworkPolicies (who can talk) with TLS (how they talk) for defense in depth.
Integrating cluster network security with external infrastructure
OpenShift network security rarely lives in isolation; it integrates with:
- Corporate firewalls and network ACLs: control which external networks or IP ranges can talk to the cluster ingress, and where cluster egress can go.
- VPNs or private network peering: secure connectivity between the cluster and on-prem systems or other clouds.
- WAF (Web Application Firewall) in front of ingress: adds HTTP-layer protections like rate limiting, SQL injection detection, etc.
Important integration points:
- Coordinate firewall rules with which Services, Routes, and LoadBalancers you create.
- Avoid overlapping or inconsistent IP ranges and CIDRs between the cluster networks and external networks, so that firewall rules match only the traffic you intend to expose.
- Ensure logging and monitoring of edge devices and OpenShift ingress are aligned for incident response.
Common network security patterns in OpenShift
Some typical, reusable patterns you’ll see in real-world clusters:
- Namespace-per-team with isolation
- Each team gets a namespace (or set of namespaces).
- Default deny NetworkPolicies are applied to all namespaces.
- Teams define explicit allow rules between their own components.
- Cross-team access goes through well-defined, policy-protected APIs.
- “DMZ” namespaces
- A dedicated namespace for externally exposed frontends or APIs.
- Ingress/Routes only send traffic into this DMZ namespace.
- Strict policies control which internal services the DMZ pods can talk to.
- Locked-down data services
- Databases, message queues, and caches in separate namespaces or with strong labels.
- Only specific application namespaces are allowed to reach them on the right ports.
- No direct external Routes or LoadBalancers are created for these services.
- Restricted egress to the Internet
- Default deny egress for application workloads.
- Allow only DNS, logging, and explicit external APIs (e.g., payment gateways).
- Optionally, force traffic through a corporate proxy.
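For the DMZ pattern, a common companion policy admits traffic into the exposed namespace only from the router. This sketch assumes the ingress controller's namespace carries the default `network.openshift.io/policy-group: ingress` label; clusters where the router uses host networking need a different selector:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}          # all pods in the DMZ namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
```

Combined with a default-deny-ingress policy, this ensures DMZ pods are reachable only through the Routes you publish, not directly from other workloads in the cluster.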
Practical guidance and pitfalls
When designing network security in OpenShift, be aware of:
- Order of deployment:
- If you introduce default deny policies after applications are running, you may cause outages.
- Plan and test policies in staging before enforcing them in production.
- Policy visibility:
- Debugging which policy is allowing or denying traffic can be challenging.
- Use tools (or built-in observability) to inspect effective policies and flows.
- Granularity trade-offs:
- Very fine-grained policies (many small rules per microservice) increase safety but also complexity.
- Start with broader, high-impact boundaries (namespace isolation, DMZs), then refine where risk is highest.
- Cross-feature interactions:
- Remember that NetworkPolicies interact with:
- Service types (ClusterIP vs NodePort vs LoadBalancer)
- Routes/Ingress and TLS termination modes
- Security Context Constraints and pod identities (for non-network aspects)
Design your network security as part of an overall security architecture, not as an afterthought around individual services.
By understanding and applying these OpenShift-specific network security concepts—NetworkPolicies, secure external exposure, careful service configuration, and integration with external controls—you can build clusters where network paths are explicit, minimal, and auditable instead of implicit and wide open.