High-Level View of OpenShift as a Platform
OpenShift is a Kubernetes-based application platform that bundles Kubernetes with additional components for security, automation, developer productivity, and operations. Architecturally, you can think of it as several integrated layers:
- Infrastructure layer – physical or virtual machines, networking, and storage in data centers or clouds.
- OpenShift cluster layer – control plane, worker nodes, Operators, and platform services running on top of Kubernetes.
- Application services layer – routing, service discovery, storage, observability, CI/CD, and security features.
- User interaction layer – web console, CLI, APIs, and automation interfaces used by developers and cluster administrators.
This chapter focuses on how these layers fit together and what distinguishes OpenShift’s architecture from “plain” Kubernetes distributions.
Core Architectural Principles
Several design principles shape OpenShift’s architecture:
- Kubernetes at the core: The Kubernetes API and control plane are the foundation. Almost everything in OpenShift is defined and managed as Kubernetes resources.
- Opinionated defaults: Many architectural decisions are pre-chosen (e.g., networking model, ingress, registry), reducing the amount of custom assembly needed.
- Operator-driven management: Platform capabilities (monitoring, networking, authentication, etc.) are managed by Operators that continually reconcile the actual state with the desired state.
- Multi-tenancy and security by default: The architecture assumes multiple teams and workloads share a cluster, so isolation, quotas, and policies are fundamental.
- Consistency across environments: The same architectural concepts apply whether OpenShift runs on-premises, in public clouds, or as a managed service.
Logical Components and Layers
From a logical perspective, OpenShift’s architecture can be broken down into several key component groups.
Kubernetes Foundation
At the base is a standard Kubernetes cluster:
- A control plane responsible for cluster-wide decisions such as scheduling, scaling, and serving the API.
- Worker nodes that run application workloads in containers.
- Kubernetes resources such as Pods, Services, Deployments, and Namespaces, used to describe desired state.
OpenShift does not replace this foundation; instead, it extends it with additional APIs and controllers.
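Because the foundation is standard Kubernetes, any Kubernetes-compatible client can work with these resources unchanged. The sketch below uses the official Python client (the `kubernetes` package) and assumes a kubeconfig context with read access to the cluster, such as one produced by `oc login`.

```python
# Minimal sketch: standard Kubernetes resources on an OpenShift cluster.
# Assumes the `kubernetes` Python client and a kubeconfig with read access.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Namespaces and Deployments are plain Kubernetes resources; OpenShift
# serves them through the same API as any other distribution.
for ns in core.list_namespace().items:
    print("namespace:", ns.metadata.name)

for deploy in apps.list_deployment_for_all_namespaces(limit=10).items:
    print("deployment:", f"{deploy.metadata.namespace}/{deploy.metadata.name}")
```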
OpenShift Platform Services
On top of Kubernetes, OpenShift adds a curated set of integrated services that are considered part of the platform itself. Typical examples include:
- Ingress and routing layer – built-in components to expose HTTP/HTTPS applications externally.
- Integrated container registry – an internal image registry for storing and pulling container images within the cluster.
- Authentication and authorization integration – pluggable identity providers and RBAC tied into cluster APIs and the web console.
- Monitoring and logging – pre-integrated stacks for cluster and application metrics, and log aggregation.
- Cluster configuration management – centralized management of global configurations such as networking, identity providers, and cluster-wide policies.
Each of these services is managed declaratively via Kubernetes-style APIs, and their lifecycle is automated using Operators.
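One way to see this in practice is through the ClusterOperator API, which each core platform service uses to report its state. The sketch below is illustrative and assumes the `kubernetes` Python client plus permission to read cluster-scoped configuration resources.

```python
# Sketch: platform services as API objects. Each core OpenShift service
# (ingress, image registry, monitoring, authentication, ...) publishes a
# cluster-scoped ClusterOperator resource describing its health.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

operators = custom.list_cluster_custom_object(
    group="config.openshift.io", version="v1", plural="clusteroperators"
)

for co in operators["items"]:
    conditions = {c["type"]: c["status"] for c in co["status"]["conditions"]}
    print(co["metadata"]["name"],
          "Available=" + conditions.get("Available", "Unknown"),
          "Degraded=" + conditions.get("Degraded", "Unknown"))
```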
Operators as Architectural Building Blocks
Operators are a defining architectural element of OpenShift. Conceptually, they:
- Extend the Kubernetes API with Custom Resource Definitions (CRDs) to model higher-level systems (e.g., a monitoring stack, ingress controller, or storage subsystem).
- Run specialized controllers (Operator pods) that continuously reconcile the cluster toward the desired state expressed by those custom resources.
- Encapsulate operational knowledge such as installation, upgrade, configuration, and health checks.
A typical pattern in OpenShift is:
- You define a custom resource (for example, a ClusterVersion, IngressController, or StorageCluster).
- The relevant Operator reads that resource and applies the necessary Kubernetes changes.
- The Operator continuously watches for drift or failures and corrects them.
This makes Operators central to how OpenShift is installed, configured, and upgraded.
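The sketch below illustrates the reconcile loop at the heart of this pattern. The custom resource group and plural (`example.com`, `appconfigs`) are hypothetical stand-ins rather than a real OpenShift API, and a production Operator would also handle status updates, retries, and finalizers.

```python
# Conceptual sketch of an Operator's reconcile loop over a hypothetical
# custom resource (group "example.com", plural "appconfigs").
from kubernetes import client, config, watch

config.load_kube_config()
custom = client.CustomObjectsApi()

def reconcile(resource):
    """Compare desired state (spec) with actual state and converge."""
    name = resource["metadata"]["name"]
    desired_replicas = resource.get("spec", {}).get("replicas", 1)
    # A real controller would create or patch Deployments, Services,
    # ConfigMaps, etc. here until the observed state matches the spec.
    print(f"reconciling {name}: desired replicas = {desired_replicas}")

# Watch for create/update/delete events and reconcile on each one.
w = watch.Watch()
for event in w.stream(custom.list_cluster_custom_object,
                      group="example.com", version="v1", plural="appconfigs"):
    reconcile(event["object"])
```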
Multi-Tenancy and Project Isolation
OpenShift is designed for multi-tenant use: many teams and applications share the same cluster. Architectural choices to support this include:
- Projects as the primary unit of isolation, mapping to Kubernetes Namespaces plus additional OpenShift-specific metadata and policies.
- Resource constraints via configurable quotas and limits to avoid noisy neighbors and overconsumption of shared resources.
- Security boundaries enforced by admission controls and policies that govern which actions users and workloads can perform.
From an architecture standpoint, this means OpenShift must keep strong separation between system components (platform services) and user workloads (applications), even though they all run on the same physical or virtual nodes.
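As a concrete example of per-project constraints, the sketch below creates a ResourceQuota in a project’s namespace using the Kubernetes Python client. The namespace name `team-a` and the limit values are illustrative, not OpenShift defaults.

```python
# Sketch: a ResourceQuota limiting what one project can consume.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "pods": "50",              # illustrative values, not defaults
            "requests.cpu": "10",
            "requests.memory": "32Gi",
            "limits.memory": "64Gi",
        }
    ),
)

core.create_namespaced_resource_quota(namespace="team-a", body=quota)
```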
Physical and Deployment View
While the logical view explains relationships between components, the physical view explains where they run and how they are deployed.
Cluster Nodes and Roles (High-Level)
An OpenShift cluster typically consists of:
- Control plane nodes – running the API server, cluster data store, controllers, and core platform Operators.
- Worker nodes – running application workloads and many application-facing platform components.
OpenShift may further refine node roles (for example, nodes dedicated to infrastructure workloads vs. user applications, or specialized nodes for GPUs or storage). The architecture allows for dedicated machine pools with tailored configurations per role.
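Node roles are exposed as labels of the form `node-role.kubernetes.io/<role>`, so they can be inspected through the API. A small sketch, assuming read access to Node objects:

```python
# Sketch: grouping cluster nodes by their role labels.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

ROLE_PREFIX = "node-role.kubernetes.io/"

for node in core.list_node().items:
    labels = node.metadata.labels or {}
    roles = sorted(
        label[len(ROLE_PREFIX):]
        for label in labels
        if label.startswith(ROLE_PREFIX)
    )
    print(node.metadata.name, "->", ", ".join(roles) or "no role label")
```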
Network and Storage Integration
OpenShift is designed to plug into diverse networking and storage backends:
- Networking – cluster networking is provided by a Container Network Interface (CNI) implementation integrated with the OpenShift APIs and Operators. Higher-level routing and service discovery build on this base.
- Storage – persistent storage is abstracted using Kubernetes storage primitives, with OpenShift layering on automation and policies for provisioning and lifecycle management.
Architecturally, these integrations are modular but managed as part of the platform, not as ad hoc add-ons.
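Both integrations can be inspected through their API representations. The sketch below reads the cluster-scoped Network configuration object (which records the CNI implementation in use) and lists the available StorageClasses; it assumes cluster-wide read permissions.

```python
# Sketch: inspecting the networking and storage integrations via the API.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()
storage = client.StorageV1Api()

# The Network config object reports which CNI plugin backs the cluster.
network = custom.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="networks", name="cluster",
)
print("cluster network type:", network["spec"]["networkType"])

# StorageClasses describe the storage backends available for provisioning.
for sc in storage.list_storage_class().items:
    print("storage class:", sc.metadata.name, "->", sc.provisioner)
```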
Cluster Lifecycle as a First-Class Concern
An important architectural distinction is that OpenShift treats cluster lifecycle as part of the platform:
- Installation and configuration – automated installation flows that can optionally provision underlying infrastructure (compute, networking, storage) in supported environments.
- Upgrades – a cluster-wide upgrade mechanism that coordinates version changes for the control plane, nodes, and platform services.
- Day-2 operations – built-in capabilities for scaling the number of nodes, applying configuration changes, and managing maintenance activities.
These capabilities rely heavily on Operators and custom resources that represent the desired cluster version, machine pools, and configuration profiles.
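For example, the desired cluster version lives in a cluster-scoped ClusterVersion resource named `version`; reading it shows the release the cluster is running and any updates it could move to. A minimal sketch, assuming permission to read cluster configuration:

```python
# Sketch: the cluster lifecycle modelled as an API object (ClusterVersion).
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

cv = custom.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="clusterversions", name="version",
)

print("current version:", cv["status"]["desired"]["version"])
for update in cv["status"].get("availableUpdates") or []:
    print("available update:", update["version"])
```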
Control Plane vs. Platform vs. Workloads
Architecturally, it helps to distinguish between three categories of components, all running on the same cluster:
- Kubernetes control plane – the core API server, controllers, and the cluster data store (etcd), responsible for fundamental scheduling and state management.
- OpenShift platform components – Operators and system services providing features such as ingress, registry, monitoring, authentication, and configuration.
- User workloads – applications and services deployed by development and operations teams, using the APIs and capabilities provided by the first two layers.
OpenShift’s architecture ensures:
- Clear boundaries between system namespaces (where platform components live) and user namespaces (projects).
- Coordinated platform changes, with upgrades to platform components tested and versioned alongside the core OpenShift release.
- Loose coupling between applications and the implementation details of the underlying platform, with interaction happening mainly through the stable API surface.
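This separation is visible in the namespace layout itself. The sketch below partitions namespaces by the `openshift-` and `kube-` prefixes used for system components; the prefix check is a simple heuristic for illustration, not an authoritative rule.

```python
# Sketch: distinguishing system namespaces from user projects by prefix.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

system, user = [], []
for ns in core.list_namespace().items:
    name = ns.metadata.name
    (system if name.startswith(("openshift-", "kube-")) else user).append(name)

print(f"{len(system)} system namespaces, e.g. {system[:3]}")
print(f"{len(user)} other namespaces (projects), e.g. {user[:3]}")
```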
Extensibility and Ecosystem Integration
A key architectural goal is to make OpenShift extensible, so additional capabilities can be layered on without disrupting the base platform.
API-Driven Extensibility
- New features are typically exposed as new APIs (Custom Resource Definitions) rather than bespoke scripts or one-off tools; a sketch of enumerating these extension APIs follows this list.
- Existing Kubernetes APIs are reused where possible, with OpenShift-specific behaviors implemented through admission plugins, webhooks, and controllers.
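Listing the installed CustomResourceDefinitions gives a quick picture of which extension APIs a cluster currently exposes. A short sketch, assuming permission to read CRDs (usually a cluster-admin capability):

```python
# Sketch: enumerating the extension APIs (CRDs) registered on the cluster.
from kubernetes import client, config

config.load_kube_config()
ext = client.ApiextensionsV1Api()

for crd in ext.list_custom_resource_definition().items:
    print(f"{crd.spec.names.kind:40s} {crd.spec.group}")
```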
Operators and Catalogs
- Platform extensions are usually shipped as Operators, distributed via catalogs that the cluster can subscribe to.
- This pattern allows new capabilities (databases, storage systems, observability tools, CI/CD engines, etc.) to be integrated in a manner consistent with core platform components.
Architecturally, this makes OpenShift not just a cluster, but a platform for running and managing other platforms and services on top of Kubernetes.
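Subscribing to an Operator from a catalog is itself expressed as an API object. The Subscription API group (`operators.coreos.com/v1alpha1`) is real, but the operator name, channel, catalog source, and namespaces in the sketch below are illustrative placeholders to be replaced with values from an actual catalog.

```python
# Sketch: subscribing the cluster to an Operator from a catalog (OLM).
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "example-operator",          # illustrative name
                 "namespace": "openshift-operators"},
    "spec": {
        "name": "example-operator",        # package name in the catalog
        "channel": "stable",               # update channel to follow
        "source": "example-catalog",       # CatalogSource providing the package
        "sourceNamespace": "openshift-marketplace",
    },
}

custom.create_namespaced_custom_object(
    group="operators.coreos.com", version="v1alpha1",
    namespace="openshift-operators", plural="subscriptions",
    body=subscription,
)
```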
Security and Compliance as Architectural Concerns
Security is embedded into the architecture rather than added later:
- Identity and access control – integrated authentication providers and RBAC are foundational, affecting how all platform APIs are used.
- Policy enforcement – admission controls, security constraints, and network policies shape where and how workloads run.
- Isolation of system components – core platform namespaces and resources are protected from modification by regular users.
- Auditability – many operations are logged and traceable to support governance and compliance requirements.
Because these aspects are built into how the platform is structured and how components interact, they influence the design of nearly every other architectural element.
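As one small example of policy enforcement, the sketch below applies a default-deny ingress NetworkPolicy to a project. The namespace `team-a` is illustrative, and OpenShift layers additional controls (RBAC, security context constraints, admission policies) on top of this standard Kubernetes primitive.

```python
# Sketch: a default-deny ingress policy for one project's namespace.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

deny_all_ingress = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod
        policy_types=["Ingress"],               # no ingress rules -> deny all ingress
    ),
)

net.create_namespaced_network_policy(namespace="team-a", body=deny_all_ingress)
```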
Architectural Summary
In summary, OpenShift’s architecture can be viewed as:
- Kubernetes at the core, with an opinionated, integrated stack of essential services around it.
- Managed and evolved through a declarative, Operator-driven model that treats both workloads and the platform itself as resources described via APIs.
- Designed for multi-tenant, secure, and consistent operation across diverse infrastructures.
- Extensible through a coherent pattern of custom APIs and Operators, allowing the platform to grow without sacrificing manageability.
This architectural foundation is what enables OpenShift to function as a full application platform, rather than just a raw Kubernetes cluster, for teams building and operating containerized applications.