Role of the Control Plane in OpenShift
The control plane in OpenShift is the “brains” of the cluster. It makes global decisions (scheduling, scaling, admission), maintains cluster state, and exposes the Kubernetes and OpenShift APIs that all tools and users interact with.
Where a generic Kubernetes control plane focuses only on core Kubernetes behavior, OpenShift extends it with additional components and APIs to provide features like:
- Built‑in authentication and authorization
- Integrated routing, builds, and image management
- Operator-based lifecycle management
- Cluster configuration and policy controls
In OpenShift, the control plane is typically hosted on dedicated control plane nodes (formerly called masters) that are separated from worker nodes that run user workloads.
Core Control Plane Components
OpenShift includes the standard Kubernetes control plane plus OpenShift‑specific control components. At a high level:
- Kubernetes API server (kube-apiserver)
- etcd (cluster data store)
- Kubernetes controller manager (kube-controller-manager)
- Kubernetes scheduler (kube-scheduler)
- OpenShift API server (openshift-apiserver)
- OpenShift controllers (openshift-controller-manager)
- Cluster version and configuration controllers (e.g., Cluster Version Operator)
- Authentication and OAuth services
- Web console backend components
Most of these run as static pods on control plane nodes and are themselves managed declaratively.
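As a rough orientation, the following commands (a sketch assuming an OpenShift 4.x cluster and cluster-admin access; the namespace names reflect a typical 4.x layout) list the pods that back these components:

```
# Kubernetes control plane components (run as static pods on control plane nodes)
oc get pods -n openshift-kube-apiserver
oc get pods -n openshift-etcd
oc get pods -n openshift-kube-scheduler
oc get pods -n openshift-kube-controller-manager

# OpenShift-specific control plane components
oc get pods -n openshift-apiserver
oc get pods -n openshift-controller-manager
oc get pods -n openshift-authentication
oc get pods -n openshift-console
```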
Kubernetes API Server (kube-apiserver)
The Kubernetes API server is the primary front door for nearly all control plane operations:
- Accepts and validates REST requests from oc, the web console, Operators, and other components.
- Persists cluster state in etcd.
- Enforces admission control and authentication/authorization (in coordination with OpenShift auth components).
- Acts as the central “source of truth” for all Kubernetes objects.
In OpenShift, kube-apiserver is made highly available by running multiple replicas behind the external API load balancer configured during installation.
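To see this in practice, you can ask oc which endpoint it talks to and probe the health endpoints the API server exposes (a quick sketch; it assumes a logged-in session with sufficient permissions):

```
# The API endpoint oc is currently using (typically the external API load balancer)
oc whoami --show-server

# Health and version endpoints served by kube-apiserver
oc get --raw /readyz
oc get --raw /version
```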
etcd
etcd is the key‑value store that holds all persistent cluster state:
- Definitions of resources (Pods, Deployments, Routes, etc.).
- Cluster configuration and status.
- Custom resources for Operators and OpenShift APIs.
In OpenShift:
- etcd typically runs on control plane nodes only.
- It is configured for high availability (3 or 5 etcd members in production clusters).
- Backup and restore procedures revolve largely around protecting etcd data.
- The Cluster etcd Operator manages configuration, certificates, and upgrades of etcd.
Because everything in OpenShift ultimately flows through the API server into etcd, cluster consistency depends on etcd’s health and performance.
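A common way to check on etcd, sketched below, is to look at the Cluster etcd Operator's cluster resource and then query the members from inside one of the etcd pods (the pod name is a placeholder; etcdctl is available inside the etcd pods on OpenShift 4.x):

```
# Configuration and status managed by the Cluster etcd Operator
oc get etcd cluster -o yaml

# List members and check endpoint health from inside an etcd pod
oc rsh -n openshift-etcd <etcd-pod-name>
# (inside the rsh session)
etcdctl member list -w table
etcdctl endpoint health
```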
Kubernetes Controller Manager (kube-controller-manager)
The Kubernetes controller manager runs core controllers that reconcile desired state with actual state. Relevant examples in an OpenShift context include:
- Node controller: tracks node health and readiness.
- Replication/ReplicaSet controller: ensures the requested number of pod replicas.
- Endpoints/EndpointSlice controller: maintains service-to-pod mappings.
- Service account and token controllers: help manage credentials used by workloads.
In OpenShift, these controllers operate on both standard Kubernetes resources and OpenShift resources that reference Kubernetes objects (for example, controllers that ensure routes correctly reference services).
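The reconciliation loop is easy to observe with an ordinary Deployment. The sketch below uses hypothetical names and an example UBI image; after scaling, the ReplicaSet controller creates or deletes pods until the actual count matches the requested replicas:

```
# Create a Deployment, then change the desired replica count
oc create deployment hello --image=registry.access.redhat.com/ubi9/httpd-24
oc scale deployment/hello --replicas=3

# The ReplicaSet controller reconciles pods to match spec.replicas
oc get replicaset,pods -l app=hello
```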
Kubernetes Scheduler (kube-scheduler)
The scheduler decides which node each pod should run on, based on:
- Resource requests (CPU/memory) and node capacity.
- Node labels, taints, tolerations, and affinity/anti‑affinity rules.
- Topology constraints (for multi‑zone/multi‑region clusters).
OpenShift uses the upstream scheduler with configuration adapted to OpenShift policies. Other layers (like Operators and cluster‑level configurations) may influence scheduling by setting labels, taints, or default constraints, but the final placement decision remains with the scheduler.
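Taints and labels are the main levers visible to administrators. For example (node names are placeholders), control plane nodes carry a NoSchedule taint that keeps ordinary workloads off them, and additional taints or labels can be added to steer the scheduler:

```
# Control plane nodes are tainted so regular pods are not scheduled there
oc describe node <control-plane-node> | grep -A 2 Taints

# Labels and custom taints that feed into scheduling decisions
oc get nodes --show-labels
oc adm taint nodes <worker-node> dedicated=experimental:NoSchedule
```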
OpenShift-Specific Control Plane Components
Beyond the Kubernetes core, OpenShift introduces its own API server, controllers, and cluster‑level management components.
OpenShift API Server (openshift-apiserver)
The OpenShift API server extends the API surface with OpenShift‑specific resources. Examples include:
- Route for external HTTP(S) access to services.
- ImageStream and related image APIs.
- Legacy OpenShift deployment types (e.g., DeploymentConfig).
- Project management APIs (Project resources as an abstraction over namespaces).
Characteristics:
- Runs alongside the Kubernetes API server on control plane nodes.
- Uses the same underlying authentication and RBAC model.
- Presents a unified API surface to clients; from the user’s perspective, Kubernetes and OpenShift resources are accessed through the same endpoint.
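For example, exposing a Service creates a Route, an OpenShift-specific resource served by openshift-apiserver, yet it is managed with the same oc client and endpoint as core Kubernetes objects (the service name here is hypothetical):

```
# Create a Route for an existing Service named "hello"
oc expose service/hello
oc get route hello -o yaml

# Route is one of the API groups added by the OpenShift API server
oc api-resources --api-group=route.openshift.io
```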
OpenShift Controller Manager (openshift-controller-manager)
The OpenShift controller manager contains controllers specific to OpenShift concepts. Typical responsibilities:
- Managing Routes (ensuring routes map correctly to services and that router configuration is updated).
- Handling ImageStream updates and triggers (e.g., automatically triggering a deployment when a new image tag appears).
- Project lifecycle operations (initialization of project defaults, quotas, and policies).
- Some build‑related logic (where not delegated to Operators).
These controllers watch OpenShift resources via the API server and continuously reconcile their actual state to the desired specifications in those resources.
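As a small illustration of the image-trigger behavior (names and image references are hypothetical), importing a new tag into an ImageStream and wiring a Deployment to that tag lets the controller roll out the workload when the tag changes:

```
# Import (or refresh) a tag in an ImageStream
oc import-image myapp:latest --from=quay.io/example/myapp:latest --confirm

# Trigger a rollout of the Deployment whenever the ImageStream tag updates
oc set triggers deployment/myapp --from-image=myapp:latest -c myapp
```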
OAuth and Authentication Services
OpenShift integrates an OAuth server into the control plane, enabling secure user authentication through:
- Built‑in htpasswd or Kubernetes Secret backends.
- External identity providers (LDAP, GitHub, GitLab, OIDC, etc.).
Key aspects:
- The OAuth server issues bearer tokens used with the API server and web console.
- Authentication flows for the web console redirect through the OAuth server.
- Configuration is controlled by cluster‑level OAuth and Authentication resources (managed by Operators).
This makes identity management a first‑class control plane function rather than a separate add‑on.
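A minimal sketch of configuring the built-in htpasswd identity provider is shown below; the user name, password, and Secret name are examples, and the Secret must live in the openshift-config namespace:

```
# Create an htpasswd file and store it as a Secret
htpasswd -c -B -b users.htpasswd alice changeme
oc create secret generic htpasswd-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth resource at the Secret
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local-users
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
EOF
```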
Web Console Backend
The OpenShift web console is a UI served from the control plane side:
- A server component runs on the control plane (or as cluster workloads managed by Operators).
- It communicates with the same API endpoint used by oc.
- It uses the built‑in OAuth flow to authenticate users.
While the UI itself is not a control loop, it is tightly integrated with the APIs and auth components in the control plane.
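The console and API endpoints can both be discovered from the command line, which also shows how closely the two are related (assumes a logged-in oc session):

```
# API endpoint used by oc and by the console backend
oc whoami --show-server

# URL of the web console for the current cluster
oc whoami --show-console
```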
Operators and Control Plane Management
OpenShift uses Operators extensively to manage the control plane itself. These Operators run as controllers in the cluster and manage cluster‑level resources and configuration.
Cluster Version Operator (CVO)
The Cluster Version Operator is central to OpenShift’s lifecycle management:
- Tracks the desired OpenShift version (set in the ClusterVersion resource).
- Drives upgrades by applying new manifests and coordinating updates to cluster Operators and core components.
- Ensures that all control plane components are at compatible versions.
Rather than manually updating each component, administrators set a target version and the CVO orchestrates the rest.
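In practice this looks roughly like the following; the target version is a made-up example, and oc adm upgrade only offers versions available on the cluster's update channel:

```
# Current version, update channel, and progress tracked in ClusterVersion
oc get clusterversion
oc adm upgrade

# Request an upgrade; the CVO then rolls out manifests and cluster Operators
oc adm upgrade --to=4.14.10
```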
Cluster Operators
Cluster Operators are specialized Operators that manage distinct subsystems, many of which are part of or closely tied to the control plane. Examples:
- kube-apiserver Operator
- kube-controller-manager Operator
- kube-scheduler Operator
- etcd Operator
- authentication Operator
- ingress Operator
- console Operator
- network Operator
Each Cluster Operator:
- Deploys and configures its subsystem (often as static pods or DaemonSets on control plane nodes).
- Monitors health and reports status to the ClusterOperator resource.
- Applies configuration defined in cluster‑level custom resources (for example, APIServer, Network, Ingress, Authentication).
This Operator‑driven approach means many aspects of the control plane are configured declaratively, and reconciliation is automatic.
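The overall picture is visible from the ClusterOperator resources themselves, for example:

```
# Available / Progressing / Degraded status for every Cluster Operator
oc get clusteroperators

# Detailed conditions and related objects for a single Operator
oc describe clusteroperator kube-apiserver
```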
Cluster Configuration Resources
Several custom resources define cluster‑wide behavior handled by Operators and control plane components, for example:
- APIServer: API server configuration (TLS, auditing, certain admission plugins).
- Authentication: identity providers and token configuration.
- Network: top‑level cluster network settings.
- Ingress: cluster‑default ingress behavior.
These are not workloads themselves but control plane configuration objects that Operators watch and apply.
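These singleton resources are all named cluster and can be inspected directly; fully qualified resource names are used here to avoid clashing with similarly named Kubernetes resources:

```
oc get apiservers.config.openshift.io cluster -o yaml
oc get authentications.config.openshift.io cluster -o yaml
oc get networks.config.openshift.io cluster -o yaml
oc get ingresses.config.openshift.io cluster -o yaml
```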
High Availability and Topology of the Control Plane
The control plane in OpenShift is designed for high availability and resilience.
Control Plane Node Layout
Common patterns:
- 3 control plane nodes: the standard production topology.
- 1 control plane node: for development or single‑node OpenShift, with reduced fault tolerance.
- 5 control plane nodes: in some larger deployments for extra etcd quorum resilience.
Characteristics:
- Core components (kube‑apiserver, openshift‑apiserver, controllers, scheduler, etcd) run as static pods pinned to control plane nodes.
- An external load balancer usually fronts the control plane API endpoints.
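Node roles and pod placement can be checked as follows (the node name is a placeholder; depending on the release, control plane nodes may carry the master label, the control-plane label, or both):

```
# Nodes by role
oc get nodes -l node-role.kubernetes.io/master
oc get nodes -l node-role.kubernetes.io/worker

# Control plane static pods pinned to one specific node
oc get pods -A --field-selector spec.nodeName=<control-plane-node> | grep -E 'kube-apiserver|etcd'
```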
Highly Available etcd and API Servers
HA for critical components typically works as follows:
- etcd:
- Runs as an odd number of members (3 or 5) to maintain quorum.
- Data is replicated among members; the loss of a minority of members can be tolerated without losing quorum or data.
- API servers:
- Multiple instances of kube‑apiserver and openshift‑apiserver run on all control plane nodes.
- The external API load balancer routes client connections across available API servers.
If a single control plane node fails, the remaining nodes continue serving the API, and etcd quorum is preserved (in a correctly sized deployment).
Control Plane vs Worker Responsibilities
The control plane:
- Hosts only cluster management components, not user application workloads (in standard multi‑node clusters).
- Runs critical APIs, controllers, and data stores.
Workers:
- Run application pods and most Operators that do not require direct control plane node placement.
- Are supervised by the control plane but do not host control plane components.
This separation enhances stability and simplifies performance tuning.
Control Plane Interaction Patterns
From a user or application perspective, interactions with the control plane follow a consistent pattern:
- A client (e.g., oc, the web console, or an Operator) sends a request to the API server.
- The API server authenticates the client and authorizes the action using OpenShift RBAC and security policies.
- The API server validates the object and writes it to etcd.
- One or more controllers (Kubernetes or OpenShift or Operators) see the change via watches on the API.
- Those controllers take actions (create pods, update routes, adjust deployments, configure underlying services) to bring the actual state into alignment with the new desired state.
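The same flow can be traced with a trivial example (project, workload name, and image are hypothetical): authorization is checked first, the desired state is written through the API server, and the resulting controller activity shows up as events:

```
# 1. Authentication/authorization for the requested action
oc new-project demo
oc auth can-i create deployments -n demo

# 2. Desired state written via the API server and persisted in etcd
oc create deployment web --image=registry.access.redhat.com/ubi9/httpd-24 -n demo

# 3. Controllers and the scheduler react; the chain is visible in events
oc get events -n demo --sort-by=.lastTimestamp
```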
In OpenShift, many of these steps are enriched by:
- Additional resources (Routes, ImageStreams, custom Operator CRDs).
- Additional controllers that provide OpenShift‑specific behaviors (e.g., automatic deployment triggers from new images).
- A more opinionated default configuration enforced by Operator‑managed control plane components.
Summary
The OpenShift control plane combines:
- The standard Kubernetes control plane (API server, scheduler, controllers, etcd),
- OpenShift‑specific APIs and controllers,
- And a rich set of Operators and cluster configuration resources,
to provide a self‑managing, highly available cluster “brain.” It owns cluster‑wide decisions, state, and policies, while worker nodes focus on running application workloads.