
4.5 OpenShift networking model

Core Principles of the OpenShift Networking Model

OpenShift builds on top of Kubernetes networking, but adds a set of strong, opinionated defaults and platform services so that applications can communicate securely and predictably with minimal manual wiring.

At a high level, the OpenShift networking model is based on these core principles:

- Every pod gets its own IP address on a flat, cluster-wide pod network, and pods reach each other without NAT.
- Services provide stable virtual IPs and DNS names in front of ephemeral pods.
- External access is standardized through Routes and Ingress, handled by platform-managed ingress controllers.
- Allowed traffic is defined declaratively with NetworkPolicies rather than by network topology.

This chapter focuses on how OpenShift realizes these principles in practice.

Cluster Network and Pod Networking

Cluster Network vs. Node Network

OpenShift distinguishes between:

- the node (machine) network: the real, physical or cloud network that connects the cluster nodes themselves, and
- the cluster (pod) network: a virtual network spanning all nodes, from which pod IPs are allocated.

OpenShift’s networking stack connects these two layers so that:

- pods on different nodes can reach each other directly via their pod IPs, and
- traffic can enter and leave the cluster through the nodes’ interfaces.

Cluster networking is handled by a network plugin (called the “cluster network provider” in OpenShift).

Pod Addressing and Connectivity

OpenShift assigns each pod an IP from the configured cluster network. Typical properties:

- Each pod receives a unique IP, shared by all containers in the pod.
- Pods communicate across nodes without NAT.
- Pod IPs are ephemeral: they change when a pod is rescheduled, which is why Services and DNS exist.

Depending on the cluster network provider and configuration, traffic may be:

- encapsulated between nodes (e.g., in GENEVE or VXLAN tunnels), or
- routed natively over the underlying network.

The details of encapsulation and routing are handled by the OpenShift networking components; developers usually only see stable connectivity and DNS.

Cluster Network Providers and IP Addressing

OpenShift SDN and OVN-Kubernetes

OpenShift supports different cluster network providers, the main ones being:

- OVN-Kubernetes: the default provider in current OpenShift versions, built on Open Virtual Network (OVN) and Open vSwitch (OVS), and
- OpenShift SDN: the older, OpenShift-specific provider, now deprecated in favor of OVN-Kubernetes.

Both provide:

- the flat pod network with per-pod IPs,
- Service virtual IPs, and
- NetworkPolicy enforcement.

The main differences are in implementation details (OVN-Kubernetes uses OVN/OVS, OpenShift SDN has its own mechanisms), but from an application developer’s perspective they expose similar behavior.

Network CIDRs

Typical CIDRs configured in OpenShift:

- Cluster (pod) network: e.g., 10.128.0.0/14, with a host prefix (e.g., /23) carving out a per-node subnet for pod IPs.
- Service network: e.g., 172.30.0.0/16 for Service virtual IPs.
- Machine network: the CIDR of the node network itself.

These are defined during cluster installation and, with few exceptions, cannot be changed afterward without a full reinstallation.

Key implications:

- The pod and Service CIDRs must not overlap with each other or with any network the cluster needs to reach (nodes, VPNs, corporate ranges).
- The chosen sizes cap the number of pods per node and Services per cluster, so they should be planned with growth in mind.
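As an illustration, these CIDRs are set in the `networking` section of `install-config.yaml` at install time. The values below are the common defaults; the machine network CIDR is an assumed example:

```yaml
networking:
  networkType: OVNKubernetes   # cluster network provider
  clusterNetwork:
    - cidr: 10.128.0.0/14      # pool for pod IPs, cluster-wide
      hostPrefix: 23           # each node gets a /23 (~510 pod IPs)
  serviceNetwork:
    - 172.30.0.0/16            # Service virtual IPs (ClusterIPs)
  machineNetwork:
    - cidr: 10.0.0.0/16        # example node network; environment-specific
```

Note that none of these three ranges may overlap with one another or with networks the cluster must reach.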

Service Networking Model

ClusterIP, NodePort, LoadBalancer

The underlying model is Kubernetes Services; OpenShift does not change their semantics, but integrates them with OpenShift’s routing and platform services.

Main Service types:

- ClusterIP: a stable virtual IP reachable only inside the cluster (the default).
- NodePort: additionally exposes the Service on a fixed port of every node.
- LoadBalancer: provisions an external load balancer (where the infrastructure supports it) that forwards to the Service.

OpenShift’s application-level external access is usually provided via Routes and Ingress (covered in other chapters), which sit “above” Services.
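As a sketch, a minimal ClusterIP Service looks like this (the name, namespace, labels, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # hypothetical Service name
  namespace: demo        # hypothetical namespace
spec:
  type: ClusterIP        # the default: internal virtual IP only
  selector:
    app: backend         # forwards to pods carrying this label
  ports:
    - port: 8080         # port exposed on the ClusterIP
      targetPort: 8080   # container port on the backend pods
```

Pods matching the selector can come and go; the ClusterIP and its DNS name stay stable.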

Service Discovery via DNS

Every Service gets a DNS record from the built-in cluster DNS:

- `<service>.<namespace>.svc.cluster.local` resolves to the Service’s ClusterIP, and
- within the same namespace, the short name `<service>` suffices thanks to DNS search domains.

OpenShift relies heavily on this DNS-based discovery model:

- applications reference each other by stable names instead of IPs, and
- pods can come and go without clients needing reconfiguration.

This DNS-based indirection is a key part of the networking model for microservices and cloud-native applications.
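To illustrate the pattern, a client workload can simply carry the target Service’s DNS name in its configuration. Everything here (names, namespace, image) is a hypothetical example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:latest  # placeholder image
          env:
            - name: BACKEND_URL
              # fully qualified Service DNS name: <service>.<namespace>.svc.cluster.local
              value: http://backend.demo.svc.cluster.local:8080
```

The frontend never learns pod IPs; the Service name resolves to a stable ClusterIP regardless of pod restarts.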

OpenShift DNS and Name Resolution

CoreDNS and Cluster DNS

OpenShift uses a DNS service (CoreDNS in modern versions) integrated with Kubernetes:

- a DNS operator manages CoreDNS and keeps its records in sync with Services and endpoints.

All pods are configured to use the cluster DNS server:

- the kubelet injects the cluster DNS IP and search domains (e.g., `<namespace>.svc.cluster.local`) into each pod’s `/etc/resolv.conf`.

DNS for OpenShift Routes

While Services have internal cluster DNS names, Routes use external DNS:

- Route hostnames (e.g., `app.apps.<cluster>.<base-domain>`) must be resolvable by clients outside the cluster, typically via a wildcard DNS record pointing at the ingress load balancer.

The networking model thus has two DNS planes:

- internal cluster DNS for service-to-service traffic, and
- external DNS for traffic entering the cluster through Routes.

Ingress, Egress, and Edge Connectivity

Ingress: Traffic Entering the Cluster

OpenShift networking includes a standardized ingress model through:

- Routes (the OpenShift-native resource for exposing Services externally),
- Kubernetes Ingress resources, and
- the IngressController: a platform-managed, HAProxy-based router.

Traffic flow (simplified):

  1. Client hits https://app.example.com.
  2. DNS resolves app.example.com to an external IP (load balancer or node IP).
  3. Traffic goes to the OpenShift ingress controller.
  4. The ingress controller terminates TLS (if configured) and routes the request to the appropriate Service, which then load-balances to backend pods.

Key properties of this model:

- TLS can be terminated at the edge, passed through to the pod, or re-encrypted toward the backend.
- The router load-balances across the Service’s backend pods.
- Ingress is centralized: external traffic enters through well-defined router endpoints rather than arbitrary nodes.

Detailed configuration of Routes and Ingress is handled in other chapters; here the main point is that they are part of the overall networking model for external access.
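To make the flow above concrete, here is a minimal edge-terminated Route (hostname, namespace, and Service name are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app
  namespace: demo
spec:
  host: app.example.com      # external hostname; must resolve to the router
  to:
    kind: Service
    name: backend            # Service receiving the routed traffic
  port:
    targetPort: 8080         # Service port to forward to
  tls:
    termination: edge        # router terminates TLS
    insecureEdgeTerminationPolicy: Redirect   # HTTP requests get redirected to HTTPS
```

With `termination: edge`, steps 3 and 4 of the traffic flow happen entirely in the router: it terminates TLS and forwards plain HTTP to the Service’s pods.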

Egress: Traffic Leaving the Cluster

Traffic from pods to external destinations (e.g., internet, corporate networks) goes out through the nodes:

- by default, pod traffic is source-NATed to the node’s IP as it leaves the cluster, and
- optionally, egress IPs can pin a namespace’s outbound traffic to specific, predictable source addresses.

This allows:

- external firewalls to identify and filter cluster traffic, and
- legacy systems that filter by source IP to work with dynamic pod workloads.
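With OVN-Kubernetes, predictable egress addresses can be sketched with an `EgressIP` object; the IP, labels, and names below are illustrative assumptions:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egress-demo
spec:
  egressIPs:
    - 10.0.0.100             # example address from the node subnet
  namespaceSelector:
    matchLabels:
      env: demo              # pods in matching namespaces use this source IP
```

The address is hosted by a node labeled as egress-assignable (`k8s.ovn.org/egress-assignable`), so external systems see a stable source IP instead of whichever node IP happened to carry the traffic.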

Network Isolation, Tenancy, and Policy

Default Connectivity and Isolation

In its default configuration:

- any pod can reach any other pod and any Service, across all namespaces, and
- no traffic is blocked inside the cluster network.

OpenShift then uses NetworkPolicies (and, historically, SDN multitenant modes) to introduce isolation and tenancy:

- namespaces can be closed off by default and opened selectively, and
- tenants sharing a cluster can be prevented from reaching each other’s workloads.

NetworkPolicies in the OpenShift Model

NetworkPolicies are Kubernetes objects that declare allowed traffic flows:

- they select pods by label and define which ingress and/or egress traffic those pods may receive or send, based on pod selectors, namespace selectors, and IP blocks.

In the OpenShift networking model, NetworkPolicies become a core part of:

- multi-tenant isolation between namespaces,
- defense in depth (default-deny plus explicit allows), and
- compliance requirements around network segmentation.

Specific syntax, examples, and workflows for NetworkPolicies are covered in the dedicated networking chapters, but their existence and purpose are central to how networking is structured in OpenShift.
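As a small sketch of the idea (the namespace is a hypothetical example), this policy restricts ingress for all pods in a namespace to traffic from that same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: demo
spec:
  podSelector: {}            # empty selector: applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # allow only pods from this namespace as sources
```

Because NetworkPolicies are additive, this single policy flips the namespace from “allow everything” to “allow only same-namespace traffic”; further policies can then open specific cross-namespace paths.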

Multus and Multiple Network Attachments

Primary vs. Secondary Networks

By default, every pod connects to the primary cluster network (for normal service-to-service traffic). Some workloads, especially in specialized or HPC-like environments, need:

- additional network interfaces beyond the primary one,
- direct attachment to specific VLANs or storage networks, or
- high-performance options such as SR-IOV.

OpenShift supports this via Multus CNI:

- Multus acts as a meta-plugin that sets up the primary network first and then attaches any requested secondary networks to the pod.

NetworkAttachmentDefinition

Additional networks are described using NetworkAttachmentDefinition objects:

- namespaced custom resources that embed a CNI configuration (plugin type, host interface, IPAM) for a secondary network.

This fits into the OpenShift networking model by:

- leaving the primary cluster network untouched (Services, DNS, and NetworkPolicies keep working as usual), while
- giving selected pods extra interfaces for specialized traffic.
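A sketch of such an object, assuming a macvlan attachment on a hypothetical host interface `ens3` with an example static address:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
  namespace: demo
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "192.168.10.20/24" } ]
      }
    }
```

A pod then requests the extra interface via the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`; its primary cluster-network interface remains untouched.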

Node Networking, MTU, and Performance Considerations

MTU and Encapsulation Overhead

The network model may use encapsulation (e.g., VXLAN, GENEVE), which:

- adds extra headers to every packet, reducing the payload size that fits in a single frame.

Key points for the OpenShift model:

- the pod network MTU must be the underlay MTU minus the encapsulation overhead (e.g., 100 bytes for GENEVE with OVN-Kubernetes, 50 bytes for VXLAN with OpenShift SDN), and
- a mismatched MTU shows up as fragmentation, stalled connections, or poor throughput.

Cluster installers and network plugins usually calculate MTU automatically, but in custom environments (on-premises, specialized fabrics), administrators must ensure the underlay network MTU is compatible.
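The effective MTU lives in the cluster Network operator configuration; the snippet below is an illustrative fragment (the value 1400 is an assumed example for a 1500-byte underlay with GENEVE overhead), not something to edit casually, since MTU changes on a running cluster require a dedicated migration procedure:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400   # pod MTU: underlay MTU minus GENEVE overhead (100 bytes)
```

In practice the installer derives this value automatically; the fragment only shows where it surfaces in the model.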

Node Roles and Networking

Control plane and worker nodes participate in the same cluster network, but:

- control plane nodes host the API server, etcd, and other platform components, while
- application pods (and usually the ingress routers) run on worker or infra nodes.

From a networking-model perspective:

- every node runs the same network plugin and can route pod traffic, and
- node roles mainly determine where workloads are scheduled, not how the network behaves.

OpenShift Networking and Platform Services

Service Mesh and Higher-Level Networking

OpenShift’s networking model provides the foundational layer on top of which you can run:

- OpenShift Service Mesh (Istio-based) for mutual TLS, traffic shaping, and observability, or
- serverless platforms such as OpenShift Serverless (Knative).

These higher-level tools depend on:

- per-pod IPs on a flat pod network,
- Services and stable DNS names, and
- the ingress model for external traffic.

The networking model is therefore designed to be generic and robust enough that additional services can layer on top without deep customization.

Integration with Platform Monitoring and Logging

Networking in OpenShift is tightly integrated with:

- the platform monitoring stack, which collects metrics from the networking components (cluster DNS, ingress routers, the network plugin), and
- platform logging, which can capture router access logs and component logs.

From an architectural perspective, this means:

- network health is observable with the same tools as the rest of the platform, and
- connectivity problems can be correlated with alerts, metrics, and logs.

Summary of the OpenShift Networking Model

The OpenShift networking model:

- gives every pod a routable IP on a flat, cluster-wide network,
- uses Services and cluster DNS for stable, discoverable endpoints,
- channels external traffic through Routes, Ingress, and platform-managed routers,
- controls traffic with NetworkPolicies rather than network topology, and
- extends to multiple networks (Multus) and higher layers such as service mesh.

Understanding this model is essential for designing, securing, and troubleshooting applications on OpenShift, and it underpins the more specific networking mechanisms explored in later chapters.
