Big Picture: What Networking Means in OpenShift
In OpenShift, networking connects:
- Pods to other pods
- Pods to services inside the cluster
- Applications to users outside the cluster
OpenShift extends the standard Kubernetes networking model but adds:
- A specific cluster network implementation (typically OVN-Kubernetes in modern OpenShift)
- An integrated Ingress/Route model for external access
- Extra security, multi-tenancy, and policy layers suitable for enterprises
This chapter introduces the core concepts and moving pieces of networking in OpenShift. Later chapters in this section will focus on particular building blocks like Services, Routes, and NetworkPolicies.
Core Networking Goals in OpenShift
OpenShift’s networking stack is designed to achieve a set of guarantees:
- Flat, cluster-wide pod network
Every pod gets its own IP, and pods can (by default) reach each other across nodes without NAT.
- Stable access to applications
Applications are accessed via Services inside the cluster and Routes/Ingress from outside, even as pods change.
- Network isolation and multi-tenancy
Separation between different projects/namespaces through network policies and network segmentation.
- Support for diverse traffic flows
- North–south: external clients ↔ cluster
- East–west: pod ↔ pod and service ↔ service inside the cluster
- Load balancing and resilience
Distribute traffic across multiple pod replicas and handle pod/node failures transparently to users.
Main Networking Layers in OpenShift
OpenShift’s networking can be thought of in layers, from low-level data-plane to user-level access.
1. Cluster Network (Pod and Node Networking)
This is the underlay for all pod-to-pod traffic.
Key aspects:
- Each pod gets an IP address from a configured pod CIDR.
- Each node has a node IP and hosts one or more pod IP ranges.
- The CNI (Container Network Interface) plugin implements routing and network rules. For modern OpenShift, this is usually OVN-Kubernetes.
OpenShift enforces the standard Kubernetes networking rules:
- No NAT for pod-to-pod traffic within the cluster (by default).
- Pods can directly use each other’s IP addresses.
Internally this layer is responsible for:
- Routing packets between pods across nodes
- Implementing basic connectivity and isolation
- Supporting encapsulation or routing mechanisms used by the chosen network plugin
You generally don’t need to manage this directly as an application developer, but it sets the foundation for everything above.
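As a point of reference, the cluster-wide pod and service address ranges are visible in the cluster's Network configuration resource. The values below are the common OpenShift defaults and are shown purely as an illustration; your cluster's CIDRs and network plugin may differ.

```yaml
# Illustrative view of the cluster network configuration (common defaults,
# not a prescription). Viewable with: oc get network.config/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  networkType: OVNKubernetes      # the CNI plugin in use
  clusterNetwork:
  - cidr: 10.128.0.0/14           # pod IPs are allocated from this range
    hostPrefix: 23                # each node receives a /23 slice of the pod CIDR
  serviceNetwork:
  - 172.30.0.0/16                 # Service (clusterIP) addresses come from here
```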
2. Service Networking
On top of pod networking, OpenShift provides a virtual IP layer via Kubernetes Services:
- Each Service gets:
- A stable cluster-internal IP (clusterIP)
- A DNS name (e.g. myapp.myproject.svc.cluster.local)
- Traffic to the Service IP is load-balanced across matching pods (based on selector labels).
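A minimal Service manifest ties these pieces together: the selector picks the backing pods, and the cluster assigns the stable clusterIP and DNS name. The names and ports below are hypothetical.

```yaml
# Minimal Service sketch (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myproject
spec:
  selector:
    app: myapp          # pods with this label become endpoints
  ports:
  - port: 8080          # the Service port (reachable at the clusterIP and DNS name)
    targetPort: 8080    # the container port traffic is forwarded to
```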
OpenShift implements this Service IP-based load balancing through the cluster network plugin: OVN-Kubernetes programs Service load balancing itself, while clusters using other plugins may rely on kube-proxy.
This layer enables:
- Stable endpoints for workloads even as pods change
- Abstracting away direct pod IPs
- Internal service discovery (via DNS)
Details of specific Service types and usage are covered in the “Services and service discovery” chapter.
3. Ingress and External Access
To reach applications from outside the cluster, OpenShift uses:
- Ingress Controllers (implemented via OpenShift’s HAProxy-based routers)
- OpenShift Routes (OpenShift-specific resource)
- Kubernetes Ingress resources (also supported, but Routes are the native abstraction)
The typical path:
- External client connects to a router endpoint (e.g. a node’s public IP and port 443, often behind a load balancer).
- The Ingress Controller / Router inspects the HTTP/TLS request (host, path, SNI).
- It forwards traffic to an internal Service, which then sends it to pods.
This layer handles:
- HTTP(S) routing by hostname and path
- TLS termination or passthrough
- External load balancing integration
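As an illustration of the Route half of this path, the sketch below (hypothetical names) exposes the myapp Service at a router-managed hostname; the router then forwards matching requests to that Service.

```yaml
# Minimal Route sketch (names are illustrative); if spec.host is omitted,
# OpenShift generates a hostname under the cluster's apps domain.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  namespace: myproject
spec:
  to:
    kind: Service
    name: myapp        # the Service that receives the routed traffic
  port:
    targetPort: 8080   # which Service port to use
```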
Details of Routes and Ingress are explored in the dedicated “Routes and ingress” chapter; here we only place them in the overall picture.
4. Network Security and Isolation
OpenShift’s networking model is designed for multi-tenant clusters, where many teams and applications share infrastructure.
Key building blocks:
- Namespaces/Projects: logical separation of applications and their resources.
- NetworkPolicies: rules that control:
- Which pods can talk to which other pods
- What traffic is allowed into and out of pods
- Security Context Constraints (SCCs) and related security features (covered in the Security chapters) complement networking controls.
Depending on how the cluster is configured, the default posture may be relatively open (all pods can reach each other) or restrictive. Network policies provide fine-grained control over inter-service access, as sketched below.
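As a small example of what such a rule looks like (labels and port are hypothetical), the policy below admits traffic to backend pods only from frontend pods in the same namespace:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP 8080; other ingress to backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: myproject
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # the only allowed client pods
    ports:
    - protocol: TCP
      port: 8080
```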
These are covered in detail in the “Network policies” and “Network security” chapters; here it is enough to know they are part of the networking stack and are enforced by the cluster network plugin.
5. Load Balancing Layers
OpenShift uses load balancing at multiple levels:
- Service-level load balancing:
The cluster network plugin (or kube-proxy) distributes traffic among pod endpoints.
- Router/Ingress-level load balancing:
Ingress Controllers balance external HTTP(S) traffic across Service endpoints.
- Infrastructure-level load balancers (cloud or on-prem appliances):
Front the router pods or node ports, providing high availability and a single external entry point.
Together, these:
- Spread traffic across multiple pod replicas
- Help absorb node/pod failures
- Allow scaling out applications horizontally
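To make the router layer itself highly available, the default Ingress Controller is typically scaled to several replicas. A minimal sketch, assuming the default controller name and namespace:

```yaml
# Illustrative IngressController tuning: run three router replicas so that
# external traffic survives the loss of a node hosting a router pod.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3
```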
Details of load balancing strategies and configurations are covered in the dedicated “Load balancing” chapter.
OpenShift Networking Components and Their Roles
While you normally interact with Kubernetes resources (Services, Routes, Ingress, NetworkPolicies), several system components make networking work under the hood.
Key components:
- CNI Plugin (e.g. OVN-Kubernetes)
- Attaches network interfaces to pods
- Programs routes, tunnels, and access rules
- Enforces NetworkPolicies
- Kube-proxy (or equivalent Service load balancing built into the network plugin)
- Programs Service IP-based load balancing rules on nodes
- Works with iptables, IPVS, or equivalent mechanisms
- Ingress Controllers / Routers
- Run as pods in the cluster
- Expose HTTP(S) and some TCP/UDP routes
- Integrate with Routes and Ingress resources
- DNS (CoreDNS)
- Resolves Service DNS names to Service IPs
- Provides consistent service discovery inside the cluster
- Cluster DNS and DNS Operator
- Manage DNS configuration for the cluster
- Integrate with the underlying infrastructure
Developers and operators usually manage these components through high-level resources (YAML manifests, oc CLI, or the web console), not by configuring them all directly.
Networking from Different Perspectives
OpenShift networking looks different depending on your role.
For Application Developers
Common concerns:
- How do my pods discover and call other services?
- Use Service names and DNS; do not rely on pod IPs.
- How do I expose my application to users?
- Typically by creating a Service and a Route.
- How do I secure internal and external traffic?
- Work with NetworkPolicies and TLS termination on Routes/Ingress.
Developers mostly interact with:
- Services
- Routes/Ingress
- NetworkPolicies
- Environment variables and configuration to store endpoint URLs
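Putting the developer view together, a workload typically receives its dependencies as Service DNS names rather than pod IPs. The sketch below uses hypothetical names (frontend, backend, the image URL) purely for illustration:

```yaml
# Illustrative Deployment: the frontend calls the backend via its Service
# DNS name, so pod restarts and rescheduling do not break connectivity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: myproject
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: quay.io/example/frontend:latest   # placeholder image
        env:
        - name: BACKEND_URL                      # consumed by the application code
          value: "http://backend.myproject.svc.cluster.local:8080"
```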
For Cluster Administrators
Common concerns:
- Choosing and configuring the cluster network (CNI, CIDRs)
- Integrating with physical or cloud networking (VPCs, VLANs, firewalls)
- Managing Ingress Controllers, TLS certificates, and external load balancers
- Setting default network policies and security posture
- Monitoring and troubleshooting connectivity
Admins interact more with:
- Cluster-wide network configuration
- Ingress Controller configuration
- Node networking and routing
- DNS infrastructure
- Monitoring/logging tools for network-level visibility
Network Flows: How Traffic Moves in OpenShift
To understand OpenShift networking, it helps to follow a few common traffic paths.
Pod-to-Pod (East–West Inside the Cluster)
- Pod A makes a request to Pod B.
- If Pod A uses Pod B’s IP directly:
- Traffic is routed via the cluster network (CNI plugin).
- If Pod A uses a Service name:
- DNS resolves the name to a Service IP.
- The Service load-balancing layer (kube-proxy or the network plugin) forwards the request to one of the pods backing the Service.
NetworkPolicies, if present, can allow or deny this connection.
External Client to Application (North–South)
- Client connects to a public endpoint:
- Typically an IP of a load balancer or node, on port 80/443 (or others).
- Traffic reaches an Ingress Controller / Router pod.
- The router:
- Uses host/path/TLS information to select a Route/Ingress.
- Forwards traffic to the corresponding Service.
- The Service load-balances traffic to application pods.
Along this path, TLS might be:
- Terminated at the router (edge termination)
- Passed through to pods (passthrough)
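The difference between the two is visible directly on the Route. Below is a sketch with edge termination (hostname and names are hypothetical); switching termination to passthrough would instead forward the encrypted stream to the pods untouched.

```yaml
# Illustrative Route with edge TLS termination: the router decrypts HTTPS
# and forwards plain HTTP to the Service; plain-HTTP clients are redirected.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-secure
  namespace: myproject
spec:
  host: myapp.apps.example.com
  to:
    kind: Service
    name: myapp
  tls:
    termination: edge                        # decrypt at the router
    insecureEdgeTerminationPolicy: Redirect  # send HTTP clients to HTTPS
```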
Firewalls, external load balancers, and NAT may also be involved at the infrastructure level.
Multi-Tenancy and Network Segmentation
OpenShift is often used as a shared platform for multiple teams or customers. Networking must support isolation while still allowing controlled communication.
Mechanisms used:
- Namespaces / Projects:
- Logical boundaries for grouping resources
- Often map to “tenants” or teams
- Default isolation behavior:
- Depending on cluster configuration, namespaces may be more or less isolated by default.
- NetworkPolicies:
- Explicit allow/deny rules between:
- Pods by label
- Namespaces by label
- Ingress/egress based on IP blocks and ports
- Ingress Controllers and Routes:
- Can be scoped or limited, for example:
- Shared global routers
- Project-specific or team-specific routers
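A common baseline for this kind of segmentation is a policy that only admits traffic originating in the same namespace; traffic not explicitly allowed by some policy is then dropped. A minimal sketch (the namespace name is hypothetical):

```yaml
# Illustrative namespace-scoped isolation: pods in this namespace accept
# ingress only from other pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a         # hypothetical tenant namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}       # any pod in the same namespace may connect
```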
The configuration of these elements defines whether a cluster behaves as:
- A shared but mostly open environment, or
- A strictly segmented multi-tenant platform.
Integration with Underlying Infrastructure
OpenShift clusters run on various platforms (bare metal, virtualization, public clouds). Networking must fit into those environments:
- IP address management: assigning node and pod CIDRs that do not conflict with existing networks.
- Physical/virtual networks:
- VLANs, VPCs, subnets
- Routing between the cluster and external systems (databases, legacy apps, etc.)
- Firewalls and security appliances:
- Allowing necessary ports for:
- Node-to-node communication
- API access
- Ingress/egress traffic for applications
- External load balancers:
- Commonly fronting the API server and the routers
- Often managed by cloud providers or on-prem appliances
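At install time, these address ranges are declared up front so they do not collide with existing networks. A minimal sketch of the networking stanza in install-config.yaml, using common default values purely as an illustration:

```yaml
# Illustrative install-config.yaml networking stanza: the three ranges must
# not overlap with each other or with existing networks reachable from the cluster.
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 10.0.0.0/16        # the subnet the nodes themselves live on
  clusterNetwork:
  - cidr: 10.128.0.0/14      # pod IPs
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16            # Service (clusterIP) addresses
```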
These aspects are typically handled by cluster administrators and platform engineers, but they define what connectivity is available to workloads.
Networking Observability and Troubleshooting Basics
Networking is often a source of failures in distributed systems. OpenShift provides multiple ways to observe and debug network behavior:
- oc and kubectl commands:
- Inspect pods, Services, Routes, NetworkPolicies.
- Execute commands inside pods to test connectivity (curl, ping, etc.).
- Events and logs:
- Router logs for ingress issues
- Pod and Service events for configuration or health problems
- Monitoring and tracing tools:
- Metrics on network traffic, error rates, and latencies
- Distributed tracing for request paths across services
Deeper troubleshooting techniques and tools are covered in the monitoring and troubleshooting chapters; here the key point is that OpenShift networking is observable and debuggable using standard and platform-specific tools.
How This Chapter Connects to the Rest of the Course
This chapter has laid out the overall structure and goals of OpenShift networking. The rest of this section will dive into the main building blocks:
- Services and service discovery: internal service exposure and DNS.
- Routes and ingress: HTTP(S) and external access patterns.
- Network policies: fine-grained traffic control between pods and namespaces.
- Load balancing: distribution and high availability for services.
Together, these chapters form a complete view of how to design, expose, secure, and operate networked applications on OpenShift.