Core Principles of the OpenShift Networking Model
OpenShift builds on top of Kubernetes networking, but adds a set of strong, opinionated defaults and platform services so that applications can communicate securely and predictably with minimal manual wiring.
At a high level, the OpenShift networking model is based on these core principles:
- Every pod gets its own IP address.
- Pods can communicate with each other by IP (and usually DNS) without NAT inside the cluster.
- Services provide stable virtual IPs and names to reach groups of pods.
- The platform manages internal and external routing, load balancing, and DNS.
- Network isolation and policy are enforced at the platform level, not inside individual applications.
This chapter focuses on how OpenShift realizes these principles in practice.
Cluster Network and Pod Networking
Cluster Network vs. Node Network
OpenShift distinguishes between:
- Node network: How cluster nodes themselves are connected (the underlying physical or virtual network, VLANs, etc.).
- Cluster (pod) network: The overlay or SDN that enables pod-to-pod connectivity across nodes.
OpenShift’s networking stack connects these two layers so that:
- Pods get IPs from an internal cluster network CIDR.
- Nodes have IPs on the underlay network (what your data center or cloud provides).
- The cluster network is routed/encapsulated through the node network so pods on different nodes can reach each other.
Cluster networking is handled by a network plugin (called the “cluster network provider” in OpenShift).
Pod Addressing and Connectivity
OpenShift assigns each pod an IP from the configured cluster network. Typical properties:
- Pods on the same node and across nodes can reach each other directly via pod IPs.
- Pod IPs are ephemeral: they change when pods are recreated.
- Applications generally should not rely on pod IPs; they should use Services or DNS names.
Depending on the cluster network provider and configuration, traffic may be:
- Routed: Using native IP routing between nodes.
- Encapsulated: Using VXLAN, GENEVE, or similar tunnels between nodes.
- Or a combination of both, depending on the environment.
The details of encapsulation and routing are handled by the OpenShift networking components; developers usually only see stable connectivity and DNS.
Cluster Network Providers and IP Addressing
OpenShift SDN and OVN-Kubernetes
OpenShift supports different cluster network providers, the main ones being:
- OVN-Kubernetes (default in modern OpenShift versions)
- OpenShift SDN (legacy, still supported in some deployments)
Both provide:
- Pod networking
- Service networking
- Network policy enforcement
- Support for multiple network topologies (flat, multitenant-like via policies, etc.)
The main differences lie in implementation details (OVN-Kubernetes builds on Open Virtual Network and Open vSwitch, while OpenShift SDN has its own OVS-based mechanisms); from an application developer's perspective, they expose similar behavior.
Network CIDRs
Typical CIDRs configured in OpenShift:
- Cluster Network (Pod Network): Where pod IPs live, e.g. `10.128.0.0/14`.
- Service Network: Virtual IP range for Kubernetes Services, e.g. `172.30.0.0/16`.
These are defined during cluster installation and should not be changed afterward without a full reinstallation.
Key implications:
- You must ensure these CIDRs do not overlap with your existing data center or VPN networks.
- When connecting external networks (e.g., via VPN, Direct Connect/ExpressRoute), routing for these ranges must be accounted for if you need cross-connectivity.
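These ranges are set in the installation configuration. A representative `install-config.yaml` networking stanza looks roughly like this (the values shown are the common defaults, not requirements):

```yaml
# Excerpt from install-config.yaml (values are illustrative defaults)
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14    # pod IPs for the whole cluster
    hostPrefix: 23         # each node gets a /23 slice for its pods
  serviceNetwork:
  - 172.30.0.0/16          # virtual IPs for Services
```

The `hostPrefix` determines how many pods each node can address, so the cluster network CIDR also bounds the maximum node count.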
Service Networking Model
ClusterIP, NodePort, LoadBalancer
The underlying model is Kubernetes Services; OpenShift does not change their semantics, but integrates them with OpenShift’s routing and platform services.
Main Service types:
- ClusterIP:
- Default type.
- Allocates a virtual IP from the Service network.
- Accessible only within the cluster (pods, nodes).
- Backed by internal load balancing (e.g., iptables, IPVS).
- NodePort:
- Exposes the Service on a specific port on every node.
- Mostly used by infrastructure components; external exposure is more commonly done with Routes/Ingress in OpenShift.
- LoadBalancer:
- Integrates with cloud provider load balancers where available.
- In many OpenShift environments, external access is instead managed with Routes and DNS.
OpenShift’s application-level external access is usually provided via Routes and Ingress (covered in other chapters), which sit “above” Services.
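As a concrete illustration, a minimal ClusterIP Service might look like the following sketch (the `shop`/`backend` names and ports are hypothetical):

```yaml
# A minimal ClusterIP Service (names and ports are illustrative)
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: shop
spec:
  selector:
    app: backend        # pods carrying this label become endpoints
  ports:
  - port: 8080          # port exposed on the virtual ClusterIP
    targetPort: 8080    # container port on the backing pods
```

Because `type` is omitted, this defaults to ClusterIP: a stable virtual IP from the Service network, reachable only inside the cluster.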
Service Discovery via DNS
Every Service gets a DNS record from the built-in cluster DNS:
- Format: `service-name.namespace.svc.cluster.local`.
- Short forms: `service-name` (within the same namespace), or `service-name.namespace`.
OpenShift relies heavily on this DNS-based discovery model:
- Pods reference other services by name, not IP.
- When pods scale up/down or are rescheduled, their IPs change, but the Service and DNS name remain stable.
This DNS-based indirection is a key part of the networking model for microservices and cloud-native applications.
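In practice this means clients are configured with Service names rather than addresses. A hedged sketch, assuming a hypothetical `frontend` Deployment in a `shop` namespace that consumes the `backend` Service purely by DNS:

```yaml
# Hypothetical Deployment consuming another Service by DNS name only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example.com/frontend:1.0
        env:
        - name: BACKEND_URL
          # the short form "backend" would also resolve within the same namespace
          value: http://backend.shop.svc.cluster.local:8080
```

No pod IP appears anywhere in the configuration; rescheduling and scaling the backend leaves the frontend untouched.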
OpenShift DNS and Name Resolution
CoreDNS and Cluster DNS
OpenShift uses a DNS service (CoreDNS in modern versions) integrated with Kubernetes:
- Automatically creates DNS entries for:
- Services
- Pods (optional, not usually relied on by apps)
- Forwards requests for external domains to upstream DNS resolvers (configured in the cluster).
All pods are configured to use the cluster DNS server:
- `/etc/resolv.conf` inside pods points to the cluster DNS.
- Apps use normal DNS resolution and are automatically integrated into the cluster’s name resolution.
DNS for OpenShift Routes
While Services have internal cluster DNS names, Routes use external DNS:
- You (or your organization) manage DNS records like `app.example.com`.
- Those records point to OpenShift’s ingress controllers/load balancers.
- Within the cluster, Routes map hostnames and paths to Services.
The networking model thus has two DNS planes:
- Internal DNS for Services and pods.
- External DNS for user-facing hostnames, pointing into the cluster through ingress.
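The bridge between the two planes is the Route object. A minimal sketch, assuming hypothetical `shop`/`frontend` names and an externally managed `app.example.com` record:

```yaml
# Hypothetical Route mapping an external hostname to an internal Service
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app
  namespace: shop
spec:
  host: app.example.com   # external DNS record, managed outside the cluster
  to:
    kind: Service
    name: frontend        # internal Service, resolved via cluster DNS
  tls:
    termination: edge     # TLS terminated at the ingress controller
```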
Ingress, Egress, and Edge Connectivity
Ingress: Traffic Entering the Cluster
OpenShift networking includes a standardized ingress model through:
- Ingress Controllers (router pods): typically using HAProxy.
- One or more external load balancers or reverse proxies (in IPI installations, created automatically in the cloud).
Traffic flow (simplified):
- Client hits `https://app.example.com`.
- DNS resolves `app.example.com` to an external IP (load balancer or node IP).
- Traffic goes to the OpenShift ingress controller.
- The ingress controller terminates TLS (if configured) and routes the request to the appropriate Service, which then load-balances to backend pods.
Key properties of this model:
- Centralized policy points for TLS, authentication gateways, and WAFs.
- Path- and host-based routing to multiple applications over a small set of IPs.
- Integration with OpenShift’s concept of Routes and Kubernetes Ingress.
Detailed configuration of Routes and Ingress is handled in other chapters; here the main point is that they are part of the overall networking model for external access.
Egress: Traffic Leaving the Cluster
Traffic from pods to external destinations (e.g., internet, corporate networks) goes out through the nodes:
- By default, pod traffic is masqueraded (source-NATed) behind node IPs when accessing the outside world.
- Network administrators may configure:
- Egress firewalls or ACLs to restrict destinations.
- Egress IPs or gateways (in some plugins) so that specific namespaces or applications use dedicated IP addresses for outbound traffic.
This allows:
- Auditing and control of outbound connectivity.
- Consistent IP-based identity for certain workloads (for example, to access external APIs that whitelist specific IPs).
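With OVN-Kubernetes, one way to restrict egress is an EgressFirewall object. A minimal sketch under assumed values (the `shop` namespace and the `203.0.113.0/24` partner range are hypothetical):

```yaml
# Hypothetical EgressFirewall (OVN-Kubernetes) restricting outbound traffic
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default           # must be named "default"; one per namespace
  namespace: shop
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 203.0.113.0/24   # e.g. a partner API range
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0        # deny all other external destinations
```

Rules are evaluated in order, so the trailing deny-all turns the namespace into an allow-list.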
Network Isolation, Tenancy, and Policy
Default Connectivity and Isolation
In its default configuration:
- All pods can communicate with all other pods in the cluster (flat network).
- Namespaces are primarily a logical separation, not a hard network boundary.
OpenShift then uses NetworkPolicies (and, historically, SDN modes) to introduce isolation and tenancy:
- You can limit which pods/namespaces can talk to each other.
- Isolation is enforced by the cluster network provider.
NetworkPolicies in the OpenShift Model
NetworkPolicies are Kubernetes objects that declare allowed traffic flows:
- Based on:
- Pod labels
- Namespace selectors
- Ports and protocols
- Implement “default deny” patterns:
- Without a policy, everything can talk to everything.
- With a baseline deny-all policy, you explicitly opt in to each allowed communication path.
In the OpenShift networking model, NetworkPolicies become a core part of:
- Namespace isolation between teams or tenants.
- Microsegmentation of services (e.g., frontends can talk to APIs, but not directly to databases).
- Defense-in-depth for security.
Specific syntax, examples, and workflows for NetworkPolicies are covered in the dedicated networking chapters, but their existence and purpose are central to how networking is structured in OpenShift.
Multus and Multiple Network Attachments
Primary vs. Secondary Networks
By default, every pod connects to the primary cluster network (for normal service-to-service traffic). Some workloads, especially in specialized or HPC-like environments, need:
- Direct access to high-speed fabrics (e.g., SR-IOV, InfiniBand-like NICs).
- Access to legacy VLANs or storage networks.
- Additional network interfaces with different address spaces.
OpenShift supports this via Multus CNI:
- Allows pods to have multiple network interfaces.
- The primary interface uses the standard cluster network.
- Additional interfaces connect to additional networks defined at the cluster level.
NetworkAttachmentDefinition
Additional networks are described using NetworkAttachmentDefinition objects:
- Specify the CNI plugin (e.g., SR-IOV, macvlan), IPAM, and other options.
- Pods request attachment to these networks via annotations or higher-level abstractions.
This fits into the OpenShift networking model by:
- Keeping a consistent primary network for cluster services and control-plane communication.
- Allowing specialized networks for performance, isolation, or compliance needs, without breaking the core cluster model.
Node Networking, MTU, and Performance Considerations
MTU and Encapsulation Overhead
The network model may use encapsulation (e.g., VXLAN, GENEVE), which:
- Adds extra bytes to each packet.
- Requires careful MTU (Maximum Transmission Unit) tuning.
Key points for the OpenShift model:
- The cluster network MTU is typically set lower than the physical network MTU to account for encapsulation.
- Mismatched MTUs can lead to:
- Path MTU discovery issues.
- Fragmentation or dropped packets.
- Hard-to-diagnose intermittent connectivity problems.
Cluster installers and network plugins usually calculate MTU automatically, but in custom environments (on-premises, specialized fabrics), administrators must ensure the underlay network MTU is compatible.
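The arithmetic is simple: cluster MTU = underlay MTU minus encapsulation overhead (roughly 100 bytes for GENEVE, 50 for VXLAN; e.g. a 1500-byte underlay typically yields a 1400-byte cluster MTU under OVN-Kubernetes). A hedged sketch of an explicit override in the cluster Network operator configuration, assuming a jumbo-frame underlay:

```yaml
# Excerpt from the Network operator config (network.operator.openshift.io, name "cluster")
# Assumption: OVN-Kubernetes on an underlay with a 9000-byte MTU
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8900   # underlay MTU minus ~100 bytes of GENEVE overhead
```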
Node Roles and Networking
Control plane and worker nodes participate in the same cluster network, but:
- Control plane nodes typically host API servers, etcd, and vital services.
- Worker nodes host application pods and networking data plane components.
From a networking-model perspective:
- All nodes must be able to:
- Reach each other on control-plane ports (API server, etcd, OVN/SDN).
- Exchange cluster network traffic for pods.
- Firewalls, security groups, and routing must be configured to allow the expected ports and protocols for OpenShift networking to function.
OpenShift Networking and Platform Services
Service Mesh and Higher-Level Networking
OpenShift’s networking model provides the foundational layer on top of which you can run:
- Service Meshes (e.g., OpenShift Service Mesh/Istio):
- For traffic shaping, mTLS, advanced routing, and observability.
- API gateways and ingress controllers beyond the default ones.
- Cloud-native storage and databases that assume robust pod/service networking.
These higher-level tools depend on:
- Reliable Service DNS.
- Stable pod and service connectivity.
- Consistent policy enforcement (NetworkPolicies, etc.).
The networking model is therefore designed to be generic and robust enough that additional services can layer on top without deep customization.
Integration with Platform Monitoring and Logging
Networking in OpenShift is tightly integrated with:
- Monitoring:
- Metrics for network throughput, packet drops, errors.
- Health of network plugins, ingress controllers, and DNS.
- Logging:
- Router logs for HTTP(S) traffic.
- Network plugin logs for troubleshooting connectivity.
From an architectural perspective, this means:
- Networking components are first-class, observable platform services.
- You can diagnose application-level connectivity issues using platform tools, not just application logs.
Summary of the OpenShift Networking Model
The OpenShift networking model:
- Provides a flat, routable pod network with per-pod IPs.
- Uses Services and internal DNS for service discovery and load balancing.
- Exposes applications externally via Ingress/Routes integrated with external DNS and load balancers.
- Implements multi-tenant isolation via namespaces and NetworkPolicies.
- Supports advanced scenarios with multiple networks using Multus.
- Aligns with Kubernetes networking concepts while adding opinionated defaults and strong platform integration.
Understanding this model is essential for designing, securing, and troubleshooting applications on OpenShift, and it underpins the more specific networking mechanisms explored in later chapters.