Big-Picture View of OpenShift Architecture
OpenShift is often described as “Kubernetes plus batteries included.” Architecturally, it is a Kubernetes distribution with a set of integrated components and conventions that turn raw Kubernetes into a more complete application platform.
At a high level, an OpenShift cluster consists of:
- A control plane (API and cluster brains)
- A set of nodes that run your workloads
- A rich set of cluster services implemented largely via Operators
- An opinionated configuration around security, networking, and multi-tenancy
Later chapters will dive into individual architectural aspects (control plane, nodes, Operators, networking, storage). Here, the focus is on how these pieces fit together and what is distinctively “OpenShift” in the way the architecture is organized.
Core Architectural Principles
Several design principles drive OpenShift’s architecture:
- Kubernetes at the core
  All workload management is based on standard Kubernetes APIs and objects. OpenShift adds capabilities on top rather than replacing Kubernetes concepts.
- API-driven everything
  Almost every aspect of the platform (cluster config, networking, storage, upgrades) is controlled via Kubernetes-style APIs and custom resources, not ad hoc scripts or one-off tools.
- Operators as the control plane extension model
  Platform features (ingress, storage, monitoring, registry, etc.) are managed by Operators that reconcile the desired state of the cluster.
- Secure by default
  Architectural defaults emphasize multi-tenancy, restricted container permissions, and built-in authentication/authorization.
- Opinionated, but configurable
  OpenShift ships with a set of curated, integrated components (ingress, registry, monitoring, logging, etc.) that work out of the box, but can be replaced or extended.
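To make the "API-driven everything" principle concrete, here is a minimal sketch of a cluster-scoped configuration resource, edited with the same tooling (for example, oc apply or GitOps) as any application object. The kind and API group are real OpenShift APIs; the domain value is a placeholder.

```yaml
# Sketch: cluster-scoped ingress configuration.
# The kind and API group exist in OpenShift; the domain is illustrative.
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster              # cluster-wide config singletons conventionally use the name "cluster"
spec:
  domain: apps.example.com   # base wildcard domain used for generated Route hostnames
```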
High-Level Component View
From an architectural perspective, you can think of OpenShift as a set of layers:
- Infrastructure layer
  - Physical or virtual machines (on-prem or cloud)
  - Networking and storage provided by the underlying environment
- Kubernetes layer
  - Kubernetes control plane (API server, scheduler, controllers)
  - Worker nodes (kubelet, container runtime)
  - Core Kubernetes objects and behavior
- OpenShift platform layer
  - OpenShift-specific APIs (via Custom Resource Definitions, CRDs)
  - Platform Operators (cluster configuration, ingress, registry, etc.)
  - Integrated platform services (registry, monitoring, logging, etc.)
  - Web console and CLI (oc)
- Application layer
  - User workloads: Pods, Deployments/DeploymentConfigs, Jobs, etc.
  - Application services: Routes, ConfigMaps, Secrets, PVCs, etc.
  - CI/CD, GitOps, and higher-level workflows
The key architectural idea is that OpenShift extends Kubernetes with additional APIs and controllers (Operators) to manage not only user workloads, but also the platform itself.
Cluster Topology
An OpenShift cluster is composed of multiple machines with well-defined roles:
- Control plane nodes
  Run the Kubernetes API and OpenShift-specific control components. They manage the desired state of the cluster, but generally do not run user application workloads (unless configured to do so in smaller setups).
- Worker nodes
  Run user workload Pods and some platform components that are implemented as standard Kubernetes workloads (for example, ingress controllers, logging agents).
- Optional infrastructure/specialized nodes
  Some deployments use dedicated node groups for:
  - Ingress and routing
  - Storage-heavy components
  - GPU workloads
  - System/infra workloads
How these roles are implemented is covered in detail in later architecture subsections, but the architectural pattern is always a separation between control logic and workload execution.
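As a sketch of how dedicated node groups are consumed, the Deployment below pins a workload to infra-labeled nodes. The node-role.kubernetes.io/infra label is a common convention; the workload name, image, and the assumption that the nodes carry a matching taint are hypothetical.

```yaml
# Sketch: pinning a workload to dedicated infrastructure nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-forwarder                   # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metrics-forwarder
  template:
    metadata:
      labels:
        app: metrics-forwarder
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""   # schedule only onto infra-labeled nodes
      tolerations:
      - key: node-role.kubernetes.io/infra  # tolerate a matching taint, if the nodes are tainted
        operator: Exists
        effect: NoSchedule
      containers:
      - name: forwarder
        image: registry.example.com/metrics-forwarder:latest   # placeholder image
```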
OpenShift-Specific APIs and Abstractions
Architecturally, OpenShift extends Kubernetes by defining its own API groups and custom resources. These become first-class citizens in the cluster API and drive platform behavior.
Examples of OpenShift-specific abstractions include:
- Projects
  An opinionated way to manage Kubernetes namespaces together with quotas and access controls. Architecturally, Projects are the primary multi-tenant boundary.
- Routes
  A higher-level abstraction on top of Kubernetes networking for exposing HTTP/HTTPS services outside the cluster.
- Cluster configuration resources
  Such as ClusterVersion, ClusterOperator, Ingress, Image, and others that represent cluster-wide settings and state.
- Security constructs
  Additional APIs around Security Context Constraints (SCCs) and enhanced RBAC views are baked into the architecture.
These resources are consumed and reconciled by various Operators and controllers, ensuring that the desired global configuration is continually enforced.
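For example, a Route is an ordinary API object in the cluster. A minimal sketch might look like the following, where the namespace, host, and Service name are placeholders:

```yaml
# Sketch: exposing a Service over HTTPS via a Route.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: shop                 # hypothetical Project/namespace
spec:
  host: shop.apps.example.com     # omit to let the router derive a host from the cluster domain
  to:
    kind: Service
    name: frontend                # existing Service that receives the traffic
  port:
    targetPort: 8080              # Service port to target
  tls:
    termination: edge             # TLS is terminated at the router
```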
Operators as Architectural Building Blocks
A defining architectural trait of OpenShift is its extensive use of Operators to manage:
- Core platform services (ingress, DNS, registry, monitoring)
- Cluster configuration (authentication, networking, proxies)
- Add-on capabilities (databases, message queues, specialized platforms)
Conceptually, a typical OpenShift platform component is structured as:
- One or more CustomResourceDefinitions (CRDs) that define the API
- A controller/Operator that watches these resources and applies changes to:
- Deployments/DaemonSets/StatefulSets
- ConfigMaps, Secrets, Services, Routes
- Underlying cloud resources (in some cases)
This Operator-based architecture offers:
- Declarative configuration for platform services, similar to how applications themselves are configured.
- Automatic lifecycle management, including installation, upgrades, and recovery for platform components.
- Extensibility, allowing vendors and users to add new platform services using the same pattern.
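As a small illustration of the pattern, the default ingress controller is itself configured through a custom resource that its Operator reconciles. The kind, group, and namespace below are real; the replica count is illustrative.

```yaml
# Sketch: a custom resource reconciled by a platform Operator.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3    # the ingress Operator scales the underlying router Deployment to match
```

An administrator edits this resource; the Operator, not the administrator, then adjusts the router Deployment, Services, and related objects.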
Integrated Platform Services
OpenShift’s architecture includes a set of integrated services that are part of the core platform, rather than optional add-ons. These typically run as Kubernetes workloads managed by Operators.
Common built-in platform services include:
- Ingress and routing
  A cluster-wide routing layer, by default implemented with HAProxy-based router Pods that serve Routes.
- Internal image registry
  A registry service for storing container images within the cluster. Architecturally, it is a standard application stack managed by an Operator and integrated with cluster authentication and storage.
- Monitoring stack
  A Prometheus-based monitoring system with alerting, managed by the Cluster Monitoring Operator.
- Logging (optional/variant-dependent)
  A centralized logging stack that aggregates logs from nodes and applications (implementation may vary by version and add-ons).
These services are part of the platform layer and are observed and controlled through the same Kubernetes APIs and Operators that manage other cluster resources.
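For instance, the internal image registry is driven by a single Operator-managed configuration resource. The kind and group below are real; the replica count is illustrative, and environment-specific storage settings are omitted.

```yaml
# Sketch: Operator-managed configuration for the internal registry.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed   # the Operator installs and maintains the registry
  replicas: 2                # illustrative; storage configuration omitted
```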
Configuration and State Management
From an architectural point of view, OpenShift stores nearly all configuration as Kubernetes resources:
- Cluster-scoped configuration
  Resources like APIServer, Ingress, Network, OAuth, and Proxy represent cluster-wide settings. They live in specific API groups and are reconciled by their respective Operators.
- Namespace-scoped configuration
  Application-specific ConfigMaps, Secrets, ResourceQuotas, and RoleBindings live in Projects/namespaces.
The global configuration model is:
- Administrators define desired cluster configuration by creating or editing config resources.
- Platform Operators watch these resources and:
- Adjust running platform components (Deployments, DaemonSets, etc.).
- Validate and enforce the configuration.
- Surface status and conditions in status fields.
This leads to a self-healing architecture, where changes to configuration are reflected automatically and platform components attempt to return to the desired state when something drifts.
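A minimal sketch of this spec/status contract, using the cluster OAuth configuration as an example; the identity provider name and the referenced Secret are placeholders:

```yaml
# Sketch: administrators declare desired state in spec; the authentication
# Operator reconciles it and reports conditions through status fields.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local-users            # hypothetical provider name
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret    # Secret holding the htpasswd file (placeholder)
```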
Security and Multi-Tenancy as Architectural Concerns
OpenShift’s architecture is intentionally multi-tenant and secure by default. Several aspects are baked into the design (and explained in detail in separate security chapters):
- Authentication and Authorization
  Centralized, pluggable identity providers integrated into the control plane, with RBAC policies applied at all API layers.
- Security Context Constraints (SCCs)
  A cluster-scoped concept that controls what Pods are allowed to do (e.g., run as root, use host networking). SCCs are enforced by the control plane admission stack.
- Network isolation
  Defaults and policies that determine how Projects/namespaces can communicate, managed by the networking layer.
Because these elements are part of the core API and enforcement path, they influence how you design and deploy workloads on OpenShift from the start.
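Project-level network isolation is commonly expressed with standard NetworkPolicy objects. Here is a minimal sketch that restricts a namespace to same-namespace traffic; the namespace name is a placeholder.

```yaml
# Sketch: allow ingress only from Pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: shop          # hypothetical Project/namespace
spec:
  podSelector: {}          # applies to every Pod in the namespace
  ingress:
  - from:
    - podSelector: {}      # admit traffic only from Pods in this namespace
```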
Cluster Lifecycle and Self-Management
OpenShift is designed to manage its own lifecycle as much as possible. Architecturally, this is achieved through:
- Cluster Version Operator (CVO)
  Maintains the overall cluster version and coordinates upgrades by reconciling the set of cluster Operators and manifests to the desired version.
- Cluster Operators
  Represent distinct subsystems (e.g., authentication, network, ingress). Each Operator:
  - Exposes status (Available, Degraded, Progressing)
  - Manages its own upgrade and reconciliation logic
This architecture means that a cluster:
- Tracks its own health at a subsystem level
- Performs controlled, rolling updates to core components
- Surfaces upgrade and configuration issues via standard APIs
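The CVO's desired state lives in a ClusterVersion resource. A heavily abridged sketch follows; the channel and version values are illustrative, and in practice upgrades are usually requested with oc adm upgrade rather than by hand-editing this object.

```yaml
# Sketch: the ClusterVersion singleton that the CVO reconciles.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version             # the canonical instance is named "version"
spec:
  channel: stable-4.14      # upgrade channel (illustrative)
  desiredUpdate:
    version: 4.14.10        # declaring a new version here drives a rolling upgrade
```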
Relationship to the Underlying Infrastructure
Although OpenShift is infrastructure-agnostic at the API level, its architecture is designed to integrate deeply with the underlying environment where appropriate:
- On clouds (AWS, Azure, GCP, etc.), certain Operators manage:
- Machine provisioning (Machine API Operator)
- Load balancers and networking integrations
- Storage provisioning via cloud-specific CSI drivers
- On bare metal or virtualization platforms, other integrations are used instead, or administrators manage some aspects manually.
Architecturally, this leads to a consistent cluster experience (same APIs and behaviors) even when the implementation details for nodes, storage, or load balancing differ across environments.
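As a sketch of this integration on clouds, worker capacity is described declaratively by MachineSet resources that the Machine API components reconcile into actual machines. The names below are hypothetical, and the cloud-specific providerSpec is elided.

```yaml
# Sketch: declarative worker capacity via the Machine API (heavily abridged).
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: worker-us-east-1a            # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 3                        # reconciled into actual cloud instances
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: worker-us-east-1a
    spec:
      providerSpec:
        value: {}                    # cloud-specific machine details omitted
```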
Architectural View Across Deployment Models
Different OpenShift deployment models (installer-provisioned, user-provisioned, managed services, single-node) share the same core architecture:
- The control plane and platform services are functionally similar.
- The API surface and Operators are largely identical.
- The main differences lie in:
- Who manages the underlying infrastructure
- Which components are visible or customizable by the user
- How upgrades and maintenance are orchestrated
This consistency allows you to apply the same architectural understanding whether you are dealing with a small single-node lab or a large multi-zone production cluster.
How This Architecture Shapes Day-to-Day Use
Understanding OpenShift’s architecture influences how you:
- Interact with the cluster
  Through APIs and custom resources rather than manual configuration of system services.
- Design applications
  To work within the multi-tenant, secure defaults and the provided networking and storage models.
- Operate the platform
  By reading Operator and ClusterOperator status, inspecting cluster-scoped configuration resources, and coordinating with the underlying infrastructure where necessary.
Subsequent chapters on control plane, node roles, Operators, networking, and storage will explore specific architectural components in more depth, building on this overall structural view.