
3.3 Control plane components

Overview of the Kubernetes Control Plane

In Kubernetes, the control plane is the “brain” of the cluster. It makes global decisions (scheduling, scaling, reacting to failures) and provides a consistent view of the cluster state. Worker nodes run your application containers; the control plane decides what should run where and when.

This chapter focuses on the standard, upstream Kubernetes control plane components and how they work together:

  • kube-apiserver – the API front end and single gateway to cluster state.
  • etcd – the persistent store backing all cluster data.
  • kube-scheduler – assigns Pods to Nodes.
  • kube-controller-manager – runs the built-in reconciliation controllers.
  • cloud-controller-manager – runs cloud-provider-specific controllers.

You will later see how OpenShift builds on these ideas, but here we focus on the generic Kubernetes concepts.

kube-apiserver

kube-apiserver is the front door to the Kubernetes control plane.

Role

Conceptually, the API server is the only component that talks directly to the cluster’s persistent store (etcd).

Responsibilities

  • Serve the Kubernetes REST API.
  • Authenticate and authorize every request, and run admission control.
  • Validate submitted objects before accepting them.
  • Persist cluster state to etcd and serve reads from it.
  • Provide watch streams so clients are notified of changes as they happen.

How other components use the API server

Other control plane components never write directly to etcd. Instead, they:

  1. Read desired and current state via the API.
  2. Compute what needs to change.
  3. Write their changes back via the API.

This pattern (read–reconcile–write via the API server) is central to how Kubernetes works.
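
The read–reconcile–write pattern can be sketched as follows. This is a minimal, illustrative model: the `FakeAPIServer` class and the replica-count object are hypothetical stand-ins, and real controllers use watch-based informers rather than one-shot reads.

```python
# Minimal sketch of the read-reconcile-write pattern. FakeAPIServer
# stands in for kube-apiserver: all reads and writes go through it,
# and only it touches the backing store (etcd's role).

class FakeAPIServer:
    def __init__(self):
        self.objects = {}          # backing store (etcd's role)

    def get(self, key):
        return self.objects.get(key)

    def put(self, key, obj):
        self.objects[key] = obj

def reconcile_once(api, key):
    """One pass of read -> compute -> write for a replica-count object."""
    obj = api.get(key)
    desired = obj["spec"]["replicas"]       # 1. read desired state
    current = obj["status"]["replicas"]     #    ...and current state
    if current != desired:                  # 2. compute what must change
        obj["status"]["replicas"] = desired
        api.put(key, obj)                   # 3. write back via the API

api = FakeAPIServer()
api.put("web", {"spec": {"replicas": 3}, "status": {"replicas": 1}})
reconcile_once(api, "web")
print(api.get("web")["status"]["replicas"])  # -> 3
```

Note that the controller never touches `api.objects` directly; it only calls `get` and `put`, mirroring how real components only ever go through the API server.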

etcd

etcd is the strongly consistent, distributed key–value store backing the Kubernetes API.

Role

In production, etcd is usually run as a highly available cluster (odd number of members) to tolerate failures.

Characteristics Relevant to Kubernetes

  • Strong consistency: all members agree on the order of writes (via the Raft consensus algorithm).
  • Key–value model: objects are stored under hierarchical keys (e.g., under a /registry/ prefix).
  • Watch support: clients can subscribe to changes on keys, which powers the watch behavior of the Kubernetes API.

What gets stored in etcd

Examples of data types:

  • Object specifications (Deployments, Pods, Services, and so on).
  • Object status written back by controllers and kubelets.
  • Configuration data such as ConfigMaps and Secrets.

While you rarely interact with etcd directly in managed environments, its performance and reliability directly affect API responsiveness and cluster stability.
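
The watch characteristic is worth a closer look, since it is what lets controllers react to changes without polling. The toy store below is an illustrative sketch, not the etcd protocol: writers update keys, and registered watchers are pushed each change event.

```python
# Toy key-value store with watch semantics, loosely modeled on how the
# API server uses etcd: every write is pushed to subscribed watchers,
# so readers never need to poll for changes.

class WatchableStore:
    def __init__(self):
        self.data = {}
        self.watchers = []      # callbacks invoked on every write

    def watch(self, callback):
        self.watchers.append(callback)

    def put(self, key, value):
        self.data[key] = value
        for cb in self.watchers:
            cb(key, value)      # push the change event to each watcher

events = []
store = WatchableStore()
store.watch(lambda k, v: events.append((k, v)))
store.put("/registry/pods/default/web-1", {"phase": "Pending"})
store.put("/registry/pods/default/web-1", {"phase": "Running"})
print(events[-1])  # -> ('/registry/pods/default/web-1', {'phase': 'Running'})
```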

kube-scheduler

kube-scheduler assigns Pods to Nodes.

Role

It does not start containers itself; that is the job of the node’s kubelet. The scheduler only decides placement.

Scheduling process (high level)

For each pending Pod:

  1. Filter (Predicates)
    • Eliminate Nodes that cannot run the Pod.
    • Examples:
      • Insufficient CPU / memory.
      • Node doesn’t match the Pod’s node selector or affinities.
      • Node is tainted in a way the Pod cannot tolerate.
      • Required volumes or resources aren’t available.
  2. Score (Priorities)
    • Rank the remaining Nodes.
    • Examples:
      • Prefer Nodes with more free resources.
      • Spread Pods across failure domains (zones, nodes).
      • Honor Pod affinity/anti-affinity.
  3. Bind
    • Select the highest-scoring Node.
    • Write a binding decision (or update the Pod) through the API server.

If no suitable Node exists, the Pod remains pending until conditions change (e.g., a new Node joins or resources free up).
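
The filter → score → bind flow above can be sketched in a few lines. The node data and the scoring rule (prefer the most free CPU) are illustrative assumptions; the real scheduler evaluates many plugins through its scheduling framework.

```python
# Sketch of the scheduler's filter -> score -> bind flow.
# Node and Pod shapes here are simplified, hypothetical dicts.

nodes = [
    {"name": "node-a", "free_cpu": 2.0, "taints": []},
    {"name": "node-b", "free_cpu": 0.5, "taints": []},
    {"name": "node-c", "free_cpu": 4.0, "taints": ["dedicated"]},
]
pod = {"name": "web-1", "cpu_request": 1.0, "tolerations": []}

def filter_nodes(pod, nodes):
    """Predicates: eliminate Nodes that cannot run the Pod at all."""
    feasible = []
    for node in nodes:
        if node["free_cpu"] < pod["cpu_request"]:
            continue                      # insufficient CPU
        if any(t not in pod["tolerations"] for t in node["taints"]):
            continue                      # untolerated taint
        feasible.append(node)
    return feasible

def score(node):
    """Priorities: here, simply prefer the Node with most free CPU."""
    return node["free_cpu"]

feasible = filter_nodes(pod, nodes)       # node-b (CPU) and node-c (taint) drop out
best = max(feasible, key=score)           # the bind decision
print(best["name"])  # -> node-a
```

If `feasible` came back empty, the Pod would stay pending, matching the behavior described above.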

Extensibility

The scheduler is pluggable:

  • The scheduling framework exposes extension points (filter, score, bind, and others) where plugins can alter decisions.
  • You can run additional custom schedulers alongside the default one and select one per Pod via spec.schedulerName.
  • Scheduler profiles let a single scheduler binary apply different plugin configurations to different workloads.

kube-controller-manager

kube-controller-manager runs a collection of built-in controllers that continuously reconcile different aspects of cluster state.

Reconciliation concept

Each controller implements a loop:

  1. Observe desired state from the API server (e.g., a Deployment object).
  2. Observe current state of related resources (e.g., existing Pods, ReplicaSets).
  3. Compare desired vs. current.
  4. Act: create, update, or delete Kubernetes objects so that actual state moves toward desired state.

This is often described as:

$$
\text{Reconcile loop:}\ \text{desired state} - \text{current state} \rightarrow \text{actions}
$$
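
Specialized to a ReplicaSet-style controller, the "actions" produced by this loop are Pod creations or deletions sized by the difference between desired and current state. The function below is an illustrative sketch with hypothetical Pod names:

```python
# ReplicaSet-style reconcile: desired - current -> actions.
# Pod names are illustrative placeholders.

def reconcile(desired_replicas, current_pods):
    """Return the actions needed to move current state toward desired."""
    delta = desired_replicas - len(current_pods)
    if delta > 0:
        return [("create", f"pod-{i}") for i in range(delta)]   # scale up
    if delta < 0:
        return [("delete", name) for name in current_pods[:-delta]]  # scale down
    return []  # already converged: nothing to do

print(reconcile(3, ["pod-a"]))           # -> [('create', 'pod-0'), ('create', 'pod-1')]
print(reconcile(1, ["pod-a", "pod-b"]))  # -> [('delete', 'pod-a')]
print(reconcile(2, ["pod-a", "pod-b"]))  # -> []
```

The empty-list case matters: when actual state already matches desired state, a correct reconcile loop does nothing, which is what makes the loop safe to run repeatedly.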

Examples of built-in controllers

A few important controllers that typically run inside kube-controller-manager:

  • Deployment controller – manages ReplicaSets on behalf of Deployments (including rollouts).
  • ReplicaSet controller – keeps the desired number of Pod replicas running.
  • Node controller – monitors Node health and reacts when Nodes become unreachable.
  • Job controller – runs Pods to completion for batch workloads.
  • EndpointSlice controller – keeps Service endpoints in sync with ready Pods.
  • ServiceAccount controller – creates default ServiceAccounts in new namespaces.

All of these controllers use the same pattern: watch → compare → reconcile via the API server.

cloud-controller-manager

cloud-controller-manager runs controllers that interact with the underlying cloud provider.

This component only appears in clusters integrated with cloud infrastructures (e.g., AWS, GCP, Azure, OpenStack). In on-prem or bare-metal setups, it may be absent or replaced with different integrations.

Why it exists

Originally, cloud-specific logic was deeply embedded into core Kubernetes binaries. cloud-controller-manager was introduced to:

  • Decouple cloud-provider code from the core Kubernetes release cycle.
  • Let providers develop and ship their integrations independently (out-of-tree).
  • Keep the core components free of vendor-specific dependencies.

Typical cloud-related controllers

Examples of controllers that may run in cloud-controller-manager:

  • Node controller – queries the cloud API to check whether an unresponsive Node has been deleted, and annotates Nodes with cloud-specific information (region, zone, instance type).
  • Route controller – configures routes in the cloud network so Pods on different Nodes can reach each other.
  • Service controller – creates, updates, and deletes cloud load balancers for Services of type LoadBalancer.

How the Control Plane Components Work Together

To understand the interaction, consider a high-level example: creating a new Deployment.

  1. User submits a Deployment
    • A user (or CI/CD system) sends a request to kube-apiserver to create a Deployment.
    • API server:
      • Authenticates and authorizes the request.
      • Validates the object.
      • Stores it in etcd.
  2. Controllers react
    • The Deployment controller (in kube-controller-manager) watches Deployments.
    • It sees the new Deployment and creates a ReplicaSet with the desired number of replicas.
    • The ReplicaSet controller then creates the required number of Pods via the API server.
  3. Scheduler assigns Pods
    • kube-scheduler watches for Pods with no nodeName.
    • It evaluates Nodes, picks suitable ones, and writes binding decisions via the API server.
  4. Nodes run the Pods
    • Node-local components (like kubelet, described elsewhere) see the assigned Pods.
    • They pull images, start containers, and report status back to the API server.
  5. Ongoing reconciliation
    • If a Pod fails, the ReplicaSet controller notices fewer running replicas than desired and creates a replacement.
    • If a Node disappears, the Node controller marks it unhealthy; its Pods are eventually recreated by their controllers and scheduled onto healthy Nodes.
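
Steps 1–3 of the walkthrough above can be simulated end to end: a Deployment object fans out to a ReplicaSet, which fans out to Pods, which the scheduler then binds to Nodes. Everything here is a simplified sketch with plain dicts and hypothetical names; real components do all of this through the API server and watches.

```python
# End-to-end sketch of the Deployment -> ReplicaSet -> Pod -> binding
# cascade. The dict `store` stands in for cluster state kept in etcd.

store = {}

def create_deployment(name, replicas):
    store[f"deployment/{name}"] = {"replicas": replicas}

def deployment_controller():
    """Creates a ReplicaSet for each Deployment."""
    for key, dep in list(store.items()):
        if key.startswith("deployment/"):
            name = key.split("/")[1]
            store[f"replicaset/{name}-rs"] = {"replicas": dep["replicas"]}

def replicaset_controller():
    """Creates unscheduled Pods to satisfy each ReplicaSet."""
    for key, rs in list(store.items()):
        if key.startswith("replicaset/"):
            name = key.split("/")[1]
            for i in range(rs["replicas"]):
                store[f"pod/{name}-{i}"] = {"nodeName": None}

def scheduler(node_names):
    """Binds Pods with no nodeName, here by naive round-robin."""
    unscheduled = [k for k, v in store.items()
                   if k.startswith("pod/") and v["nodeName"] is None]
    for i, key in enumerate(unscheduled):
        store[key]["nodeName"] = node_names[i % len(node_names)]

create_deployment("web", 2)
deployment_controller()
replicaset_controller()
scheduler(["node-a", "node-b"])
print(sorted(k for k in store if k.startswith("pod/")))
# -> ['pod/web-rs-0', 'pod/web-rs-1']
```

Each function is a separate loop acting only on shared state, which mirrors the real design: no component calls another directly, and the cascade emerges from independent reconcilers.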

Throughout this process:

  • Every component reads and writes state only through kube-apiserver; components never talk to each other directly.
  • etcd is touched only by the API server.
  • Components use watches rather than polling, so they react to changes almost immediately.
  • Each step is an independent reconciliation loop; there is no central orchestrator driving the sequence.

High Availability and Scalability Considerations

In production environments, control plane components are usually run in a highly available and scalable configuration:

  • kube-apiserver is largely stateless and can run as multiple replicas behind a load balancer.
  • etcd runs as a cluster with an odd number of members (typically 3 or 5) so it can maintain quorum through failures.
  • kube-scheduler and kube-controller-manager run multiple replicas with leader election: only one instance is active at a time, and a standby takes over if the leader fails.

This design aims to keep the cluster functional even if individual control plane nodes fail.

Summary

The Kubernetes control plane is composed of cooperative components, each with a focused responsibility:

  • kube-apiserver – the single gateway for all reads and writes of cluster state.
  • etcd – the consistent, durable store behind the API.
  • kube-scheduler – decides which Node each Pod should run on.
  • kube-controller-manager – reconciles actual state toward desired state.
  • cloud-controller-manager – bridges the cluster to the underlying cloud provider.

Together, they implement Kubernetes’s declarative model: you describe the desired state, and the control plane continuously works to make the cluster match that description.
