
Control plane in OpenShift

Role of the Control Plane in OpenShift

The control plane in OpenShift is the “brains” of the cluster. It makes global decisions (scheduling, scaling, admission), maintains cluster state, and exposes the Kubernetes and OpenShift APIs that all tools and users interact with.

Where a generic Kubernetes control plane focuses only on core Kubernetes behavior, OpenShift extends it with additional components and APIs to provide features like integrated authentication and OAuth, projects, routes, builds and image streams, the web console, and Operator-driven cluster management.

In OpenShift, the control plane is typically hosted on dedicated control plane nodes (formerly called masters) that are separated from worker nodes that run user workloads.

Core Control Plane Components

OpenShift includes the standard Kubernetes control plane plus OpenShift‑specific control components. At a high level:

  - Kubernetes API server (kube-apiserver)
  - etcd
  - Kubernetes controller manager (kube-controller-manager)
  - Kubernetes scheduler (kube-scheduler)
  - OpenShift API server (openshift-apiserver)
  - OpenShift controller manager (openshift-controller-manager)
  - OAuth and authentication services
  - Web console backend

Most of these run as static pods on control plane nodes and are themselves managed declaratively.

Kubernetes API Server (kube-apiserver)

The Kubernetes API server is the primary front door for nearly all control plane operations: it authenticates and authorizes every request, validates and admits objects, persists the resulting state to etcd, and serves the REST API that oc, kubectl, the web console, controllers, and Operators all use.

In OpenShift, kube-apiserver is highly available by running multiple replicas behind a load balancer (usually fronted by the API load balancer configured during installation).
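
Every client takes the same path through kube-apiserver. As a minimal sketch, assuming the official kubernetes Python client is installed and a kubeconfig with sufficient permissions is available, listing pods goes through the API server exactly as oc get pods does:

    from kubernetes import client, config

    # Load credentials and the API endpoint from the local kubeconfig;
    # inside a pod, config.load_incluster_config() would be used instead.
    config.load_kube_config()

    core_v1 = client.CoreV1Api()

    # This request is authenticated, authorized, and served by kube-apiserver,
    # which reads the current state from etcd.
    for pod in core_v1.list_namespaced_pod(namespace="openshift-kube-apiserver").items:
        print(pod.metadata.name, pod.status.phase)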

etcd

etcd is the distributed key‑value store that holds all persistent cluster state: every API object (nodes, pods, deployments, routes, secrets, configuration resources) is ultimately stored there, and the cluster relies on etcd's Raft-based quorum for consistency.

In OpenShift, etcd typically runs as one member per control plane node (three members in a standard cluster), is managed by a dedicated etcd cluster Operator, and is protected through the documented cluster backup and restore procedures.

Because everything in OpenShift ultimately flows through the API server into etcd, cluster consistency depends on etcd’s health and performance.
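
That write path is easy to see with a simple object round trip. In the sketch below (same assumptions: official kubernetes Python client, kubeconfig with write access to the chosen namespace), the API server admits the ConfigMap, persists it to etcd, and then serves it back on read:

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    cm = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="demo-settings"),
        data={"log_level": "info"},
    )

    # The API server validates the object and writes it to etcd.
    core_v1.create_namespaced_config_map(namespace="default", body=cm)

    # Reading it back goes through the API server again; clients never
    # talk to etcd directly.
    stored = core_v1.read_namespaced_config_map(name="demo-settings", namespace="default")
    print(stored.data)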

Kubernetes Controller Manager (kube-controller-manager)

The Kubernetes controller manager runs core controllers that reconcile desired state with actual state. Relevant examples in an OpenShift context include the Deployment and ReplicaSet controllers, the node lifecycle controller, the endpoints controller, the namespace controller, and the service account and token controllers.

In OpenShift, these controllers operate on both standard Kubernetes resources and OpenShift resources that reference Kubernetes objects (for example, controllers that ensure routes correctly reference services).
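
The reconcile behavior is straightforward to observe. In the sketch below (official kubernetes Python client, a namespace the caller can write to), the client only creates a Deployment; the Deployment and ReplicaSet controllers in kube-controller-manager then create the ReplicaSet and pods on their own:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    apps_v1 = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="web", image="httpd:2.4")  # placeholder image
                    ]
                ),
            ),
        ),
    )
    apps_v1.create_namespaced_deployment(namespace="default", body=deployment)

    # Give the controllers a moment, then look at what they created.
    time.sleep(5)
    for rs in apps_v1.list_namespaced_replica_set(
        namespace="default", label_selector="app=demo-web"
    ).items:
        print(rs.metadata.name, rs.status.ready_replicas)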

Kubernetes Scheduler (kube-scheduler)

The scheduler decides which node each pod should run on, based on resource requests and available node capacity, node selectors and affinity/anti‑affinity rules, taints and tolerations, and topology spread constraints.

OpenShift uses the upstream scheduler with configuration adapted to OpenShift policies. Other layers (like Operators and cluster‑level configurations) may influence scheduling by setting labels, taints, or default constraints, but the final placement decision remains with the scheduler.
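
As a rough illustration, with the same Python client assumptions, the pod below declares resource requests and a node selector; the scheduler evaluates those constraints and records its decision in the pod's spec.nodeName once the pod is bound:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="sched-demo"),
        spec=client.V1PodSpec(
            # Scheduling constraints the kube-scheduler evaluates:
            node_selector={"node-role.kubernetes.io/worker": ""},
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.access.redhat.com/ubi9/ubi-minimal",  # placeholder image
                    command=["sleep", "3600"],
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "100m", "memory": "128Mi"}
                    ),
                )
            ],
        ),
    )
    core_v1.create_namespaced_pod(namespace="default", body=pod)

    time.sleep(5)
    bound = core_v1.read_namespaced_pod(name="sched-demo", namespace="default")
    print("scheduled to:", bound.spec.node_name)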

OpenShift-Specific Control Plane Components

Beyond the Kubernetes core, OpenShift introduces its own API server, controllers, and cluster‑level management components.

OpenShift API Server (openshift-apiserver)

The OpenShift API server extends the API surface with OpenShift‑specific resources. Examples include routes, projects, builds and build configs, image streams, templates, and deployment configs.

Characteristics: it runs as its own set of pods on the control plane, registers its API groups with kube-apiserver through the aggregation layer, and stores its resources in the same etcd as the core Kubernetes objects.
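
Because these resources live in their own API groups, generic tooling usually reaches them through a dynamic or custom‑object client. A minimal sketch with the kubernetes Python client, assuming a project (here the placeholder name my-project) that already contains routes:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    # Routes are served from the route.openshift.io API group rather than
    # the core Kubernetes API.
    routes = custom.list_namespaced_custom_object(
        group="route.openshift.io",
        version="v1",
        namespace="my-project",   # placeholder project name
        plural="routes",
    )
    for route in routes["items"]:
        print(route["metadata"]["name"], route["spec"]["host"])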

OpenShift Controller Manager (openshift-controller-manager)

The OpenShift controller manager contains controllers specific to OpenShift concepts. Typical responsibilities include running builds from build configs, importing and tracking image streams, rolling out deployment configs, and applying project and service account related defaults such as pull secrets for the internal registry.

These controllers watch OpenShift resources via the API server and continuously reconcile their actual state to the desired specifications in those resources.
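
The underlying pattern is the same control loop used throughout Kubernetes: observe the current state of a resource and drive it toward its spec. The sketch below only illustrates that idea (a simple poll instead of the watch-based machinery the real controllers use), reading builds in a hypothetical project through the build.openshift.io API group:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    # Poll the current state of Build objects; the real controllers react to
    # watch events instead of polling, but the observe-and-act shape is the same.
    while True:
        builds = custom.list_namespaced_custom_object(
            group="build.openshift.io",
            version="v1",
            namespace="my-project",   # placeholder project name
            plural="builds",
        )
        for build in builds["items"]:
            phase = build.get("status", {}).get("phase")
            print(build["metadata"]["name"], phase)
        time.sleep(10)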

OAuth and Authentication Services

OpenShift integrates an OAuth server into the control plane, enabling secure user authentication through configurable identity providers such as htpasswd, LDAP, OpenID Connect, and external providers like GitHub or Google.

Key aspects: the OAuth server issues the access tokens used by oc and the web console, identity providers are configured declaratively through the cluster OAuth configuration resource, and users and identities are themselves stored as API resources.

This makes identity management a first‑class control plane function rather than a separate add‑on.
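
Identity provider configuration is itself just another API object. Assuming cluster‑admin level read access and the same kubernetes Python client, the cluster‑wide OAuth configuration can be inspected like this:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    # The cluster-scoped OAuth configuration resource is named "cluster".
    oauth_cfg = custom.get_cluster_custom_object(
        group="config.openshift.io",
        version="v1",
        plural="oauths",
        name="cluster",
    )
    for idp in oauth_cfg.get("spec", {}).get("identityProviders", []):
        print(idp["name"], idp["type"])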

Web Console Backend

The OpenShift web console is a UI served from the control plane side: it runs as pods managed by the console Operator, authenticates users through the integrated OAuth server, and talks to the Kubernetes and OpenShift APIs on the user's behalf.

While the UI itself is not a control loop, it is tightly integrated with the APIs and auth components in the control plane.

Operators and Control Plane Management

OpenShift uses Operators extensively to manage the control plane itself. These Operators run as controllers in the cluster and manage cluster‑level resources and configuration.

Cluster Version Operator (CVO)

The Cluster Version Operator is central to OpenShift's lifecycle management: it reads the desired release from the ClusterVersion resource, applies the manifests in that release payload, installs and updates the individual Cluster Operators, and continuously verifies that they report healthy status.

Rather than manually updating each component, administrators set a target version and the CVO orchestrates the rest.
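
The desired and current versions are visible in the ClusterVersion resource (named "version"). A small read‑only sketch, under the same client assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    cv = custom.get_cluster_custom_object(
        group="config.openshift.io",
        version="v1",
        plural="clusterversions",
        name="version",
    )
    print("desired update:", cv["spec"].get("desiredUpdate"))
    print("latest history entry:", cv["status"]["history"][0]["version"])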

Cluster Operators

Cluster Operators are specialized Operators that manage distinct subsystems, many of which are part of or closely tied to the control plane. Examples include the kube-apiserver, etcd, authentication, ingress, network, image registry, and console Operators.

Each Cluster Operator watches its own configuration resources, deploys and maintains the components it owns, and reports health through a ClusterOperator resource with Available, Progressing, and Degraded conditions.

This Operator‑driven approach means many aspects of the control plane are configured declaratively, and reconciliation is automatic.
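
Per‑subsystem health is reported in the ClusterOperator resources, which is what oc get clusteroperators summarizes. A read‑only sketch, same client assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    operators = custom.list_cluster_custom_object(
        group="config.openshift.io",
        version="v1",
        plural="clusteroperators",
    )
    for op in operators["items"]:
        conditions = {c["type"]: c["status"] for c in op["status"]["conditions"]}
        print(
            op["metadata"]["name"],
            "Available=" + conditions.get("Available", "Unknown"),
            "Degraded=" + conditions.get("Degraded", "Unknown"),
        )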

Cluster Configuration Resources

Several custom resources define cluster‑wide behavior handled by Operators and control plane components, for example the cluster‑scoped configuration resources in the config.openshift.io API group such as APIServer, OAuth, Ingress, Network, Proxy, and Scheduler.

These are not workloads themselves but control plane configuration objects that Operators watch and apply.

High Availability and Topology of the Control Plane

The control plane in OpenShift is designed for high availability and resilience.

Control Plane Node Layout

Common patterns include three dedicated control plane nodes (the standard highly available layout), compact three‑node clusters where the control plane nodes also run workloads, and single‑node OpenShift for edge deployments.

Characteristics: control plane nodes carry a taint that keeps ordinary workloads off them by default, they are sized for etcd and API server load, and in cloud deployments they are usually spread across availability zones.

Highly Available etcd and API Servers

HA for critical components typically works as follows: etcd runs as a three‑member cluster that needs a quorum of two members, an API server instance runs on each control plane node behind the external API load balancer, and the controller managers and schedulers run on every control plane node but use leader election so only one instance is active at a time.

If a single control plane node fails, the remaining nodes continue serving the API, and etcd quorum is preserved (in a correctly sized deployment).

Control Plane vs Worker Responsibilities

The control plane runs the API servers, etcd, controllers, schedulers, and the Operators that manage the cluster; it makes cluster‑wide decisions and holds cluster state.

Workers run the kubelet, the container runtime, and the application pods (plus infrastructure pods such as routers and the registry when scheduled there); they carry out the decisions made by the control plane.

This separation enhances stability and simplifies performance tuning.
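
The split is visible directly on the Node objects through their role labels. A quick sketch, same client assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Control plane nodes carry the master/control-plane role label;
    # everything else is a worker (or an infra node, if defined).
    control_plane = core_v1.list_node(label_selector="node-role.kubernetes.io/master")
    workers = core_v1.list_node(label_selector="node-role.kubernetes.io/worker")

    print("control plane:", [n.metadata.name for n in control_plane.items])
    print("workers:", [n.metadata.name for n in workers.items])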

Control Plane Interaction Patterns

From a user or application perspective, interactions with the control plane follow a consistent pattern:

  1. A client (e.g., oc, web console, Operator) sends a request to the API server.
  2. The API server authenticates the client and authorizes the action using OpenShift RBAC and security policies.
  3. The API server validates the object and writes it to etcd.
  4. One or more controllers (Kubernetes or OpenShift or Operators) see the change via watches on the API.
  5. Those controllers take actions (create pods, update routes, adjust deployments, configure underlying services) to bring the actual state into alignment with the new desired state.
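
Steps 4 and 5 rely on the watch mechanism of the API server. The sketch below (same kubernetes Python client assumptions) consumes the same kind of event stream a controller would, printing pod changes in one namespace as they happen:

    from kubernetes import client, config, watch

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Controllers register watches like this one and react to each event by
    # reconciling the affected object toward its desired state.
    w = watch.Watch()
    for event in w.stream(core_v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
        pod = event["object"]
        print(event["type"], pod.metadata.name, pod.status.phase)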

In OpenShift, many of these steps are enriched by OpenShift‑specific admission plugins and security context constraints, project‑level RBAC, OAuth‑based authentication, and Operators that reconcile higher‑level OpenShift resources into core Kubernetes objects.

Summary

The OpenShift control plane combines the standard Kubernetes control plane (kube-apiserver, etcd, kube-controller-manager, kube-scheduler), OpenShift‑specific services (openshift-apiserver, openshift-controller-manager, the OAuth server, and the web console), and an Operator‑driven management layer (the Cluster Version Operator and Cluster Operators) to provide a self‑managing, highly available cluster “brain.” It owns cluster‑wide decisions, state, and policies, while worker nodes focus on running application workloads.
