Worker node components

Role of Worker Nodes in Kubernetes

Worker nodes are the machines (virtual or physical) that actually run your application containers. While control plane components decide what should happen in the cluster, worker node components are responsible for doing the work: running Pods, pulling images, attaching storage, and handling network traffic.

This chapter focuses on the key software components that must be present on every Kubernetes worker node and how they interact.

Core Components on a Worker Node

A typical worker node runs at least these core components:

  • kubelet – the node agent that manages Pods on the machine
  • a container runtime (such as containerd or CRI-O) – creates and runs the containers
  • kube-proxy – implements Service networking on the node

Control plane components never run on worker nodes by default in a standard production cluster; instead, they communicate with these worker components through the Kubernetes API.

Kubelet

kubelet is the primary Kubernetes agent running on each node. It acts as a bridge between:

  • the control plane, which publishes the desired state of Pods through the API server, and
  • the node’s container runtime, which actually creates, starts, and stops containers.

Responsibilities of kubelet

At a high level, kubelet:

  • registers the node with the API server,
  • watches for Pods scheduled to its node,
  • ensures the containers described in those Pod specs are running and healthy, and
  • reports node and Pod status back to the control plane.

Concretely, kubelet:

  • retrieves Pod specs from the API server (and, optionally, from static Pod manifests on disk),
  • asks the container runtime, via CRI, to pull images and to create and start sandboxes and containers,
  • mounts requested volumes into Pods,
  • runs liveness, readiness, and startup probes, and
  • restarts containers according to each Pod’s restart policy.

Kubelet does not make scheduling decisions; it simply enforces the desired state expressed by the control plane for the Pods assigned to its node.
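
The enforcement idea above can be sketched as a reconciliation loop: compare what should run on the node against what actually runs, then converge the two. The function and names below are purely illustrative, not the real kubelet code.

```python
# Toy reconciliation: kubelet-style "desired vs. actual" comparison.
# desired_pods: Pod names the API server has assigned to this node.
# running_pods: Pod names currently running on the node.

def reconcile(desired_pods, running_pods):
    """Return the actions needed to converge the node to desired state."""
    actions = []
    for name in desired_pods - running_pods:
        actions.append(("start", name))   # assigned but not running yet
    for name in running_pods - desired_pods:
        actions.append(("stop", name))    # running but no longer assigned
    return sorted(actions)

# Example: "web" must be started; "old-job" must be stopped.
print(reconcile({"web", "db"}, {"db", "old-job"}))
# [('start', 'web'), ('stop', 'old-job')]
```

The real kubelet performs this continuously, driven by watch events from the API server rather than polling.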

Container Runtime

The container runtime is the software responsible for actually running containers on the node. Kubernetes interacts with it via the Container Runtime Interface (CRI).

Common runtimes:

  • containerd – the default in most current distributions
  • CRI-O – a lightweight runtime built specifically for Kubernetes
  • Docker Engine – usable through the cri-dockerd adapter, since built-in dockershim support was removed in Kubernetes 1.24

Responsibilities of the container runtime

The runtime handles all low-level container operations, such as:

  • pulling and managing container images,
  • creating, starting, stopping, and deleting containers,
  • setting up the Linux namespaces and cgroups each container runs in, and
  • streaming container logs and exec sessions back to kubelet.

Kubelet communicates with the runtime over a local gRPC API defined by CRI:

  • the RuntimeService manages Pod sandboxes and container lifecycle, and
  • the ImageService pulls, lists, and removes images.

Both are typically served over a local Unix domain socket (for example, /run/containerd/containerd.sock).

Pods and runtime sandboxes

From the runtime’s perspective, a Pod is often represented as:

  • a Pod sandbox – an infrastructure (“pause”) container holding the Pod’s shared Linux namespaces, such as the network namespace, and
  • one application container per entry in the Pod spec, each joining the sandbox’s namespaces.

Kubelet orchestrates both sandbox and container lifecycle via CRI.
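
RunPodSandbox, PullImage, CreateContainer, and StartContainer are real CRI RPC names, but the helper below is only a toy model of the call order, not actual gRPC code.

```python
# Illustrative sketch of the order of CRI calls kubelet issues when
# starting a Pod: first the shared sandbox, then each container.

def cri_call_sequence(pod_name, containers):
    """Return the (simplified) ordered CRI calls to start a Pod."""
    calls = [("RunPodSandbox", pod_name)]      # shared namespaces first
    for c in containers:
        calls.append(("PullImage", c))         # ensure the image is local
        calls.append(("CreateContainer", c))   # create inside the sandbox
        calls.append(("StartContainer", c))    # then start it
    return calls

for call in cri_call_sequence("web", ["nginx", "sidecar"]):
    print(call)
```

Teardown runs the same way in reverse: containers are stopped and removed before the sandbox itself is destroyed.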

Kube-proxy

kube-proxy is the network proxy that runs on each worker node. It implements Kubernetes Service networking on the node, making cluster-internal service discovery and basic load balancing work.

Responsibilities of kube-proxy

Depending on the mode, kube-proxy may:

  • program iptables rules that rewrite (DNAT) Service traffic to Pod endpoints (iptables mode),
  • configure the kernel’s IPVS load balancer, which scales better with many Services (ipvs mode), or
  • program nftables rules (nftables mode, available in newer Kubernetes releases).

How kube-proxy handles Service traffic

For a ClusterIP Service:

  • the Service receives a stable virtual IP that exists only in the nodes’ packet-processing rules,
  • kube-proxy rewrites traffic sent to that IP and port so it reaches one of the Service’s Pod endpoints, and
  • the endpoint list is kept in sync with the API server’s EndpointSlice data.

For a NodePort Service:

  • kube-proxy additionally exposes a port (by default from the 30000–32767 range) on every node, and
  • traffic arriving on that node port is forwarded to the Service’s endpoints, which may run on other nodes.

Kube-proxy operates at the node level; it does not inspect application protocols. It simply forwards packets based on IP and port, using the Service and Endpoint data from the API server.
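
The effect of those rules can be modeled in a few lines: traffic to the virtual Service IP is rewritten to one of the backing Pod endpoints. Real kube-proxy programs this into iptables/IPVS/nftables in the kernel; the function below, with invented names and addresses, only mimics the selection step.

```python
import random

def pick_backend(service_ip, endpoints, rng=random):
    """Pick one Pod endpoint for a new connection to a Service IP."""
    if not endpoints:
        # With no ready endpoints, the real rules would reject the packet.
        raise RuntimeError(f"no endpoints for {service_ip}")
    return rng.choice(endpoints)

endpoints = ["10.1.0.5:8080", "10.1.0.9:8080"]
print(pick_backend("10.96.0.10:80", endpoints) in endpoints)  # True
```

Note that the kernel tracks each connection, so all packets of one TCP connection keep going to the same chosen backend.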

CNI Plugins and Node Networking

While kube-proxy implements Service-level load balancing, CNI (Container Network Interface) plugins implement Pod networking on the node.

CNI details are typically configured by the cluster distribution and belong to the networking model chapter, so here we only highlight the worker-node aspect:

  • each Pod gets its own network namespace and IP address,
  • CNI plugin binaries (conventionally under /opt/cni/bin) create and wire up the Pod’s network interface, and
  • their configuration lives in files under /etc/cni/net.d on the node.

From the worker node’s perspective, CNI plugins are local binaries and configuration that kubelet calls to wire up Pod networking.
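
As a concrete illustration, here is the shape of a minimal CNI network config of the kind found under /etc/cni/net.d. The field names follow the CNI spec, and bridge and host-local are real reference plugins, but the network name and subnet are invented for this example.

```python
import json

# Hypothetical minimal CNI network configuration (bridge plugin with
# node-local IP allocation); values are examples, not real cluster data.
conf = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",   # invented network name
    "type": "bridge",                # plugin binary invoked on the node
    "ipam": {
        "type": "host-local",        # node-local IP address management
        "subnet": "10.244.1.0/24",   # example Pod subnet for this node
    },
}
print(json.dumps(conf, indent=2))
```

The runtime executes the named plugin binary with this JSON on stdin whenever a Pod sandbox is created or deleted.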

Node as a Kubernetes Object vs. Machine

In Kubernetes, a “Node” is also an API object representing the machine. Worker nodes host:

  • the physical reality – an operating system, kernel, CPU, memory, disks, and network interfaces running the actual workloads, and
  • the cluster’s view of it – the Node API object, which records the machine’s capacity, addresses, conditions, and labels.

Kubelet bridges these two views:

  • it registers the machine as a Node object when it starts, and
  • it keeps that object’s status – capacity, allocatable resources, conditions, and addresses – in sync with the real machine.

Labels on the node (e.g. node-role.kubernetes.io/worker, topology.kubernetes.io/zone) are critical for scheduling and higher-level features, but they’re stored in the Kubernetes node object, not in kubelet itself.

Resource Enforcement and Isolation

Worker nodes enforce container and Pod resource limits through:

  • Linux cgroups, which cap and account CPU, memory, and other resources per container and per Pod, and
  • Linux namespaces, which isolate each container’s view of processes, networking, and filesystems.

Kubelet configures these via the container runtime based on Pod specs:

  • CPU requests become cgroup CPU weights, giving containers proportional shares under contention,
  • CPU limits become CFS quotas that hard-throttle a container’s CPU time, and
  • memory limits become cgroup memory caps; a container exceeding its cap is OOM-killed.

While the control plane decides “this Pod should get 500m CPU and 512Mi memory,” it is the worker node (via kubelet + runtime + kernel) that enforces these limits.
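
The arithmetic behind that enforcement is straightforward. This sketch (with illustrative helper names) shows how “500m CPU” and “512Mi memory” map to CFS-quota and byte values of the kind written into cgroup settings:

```python
# Converting Kubernetes resource quantities into kernel-facing numbers.

def cpu_millicores_to_cfs(millicores, period_us=100_000):
    """CPU limit as a CFS quota: 500m -> 50000us of each 100000us period."""
    return millicores * period_us // 1000

def mebibytes_to_bytes(mi):
    """Memory limit in bytes: 1Mi = 1024 * 1024 bytes."""
    return mi * 1024 * 1024

print(cpu_millicores_to_cfs(500))  # 50000  (i.e. half a CPU per period)
print(mebibytes_to_bytes(512))     # 536870912
```

A container that tries to use more CPU than its quota is throttled; one that exceeds its memory cap is OOM-killed by the kernel.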

Node-Level Health and Reporting

Worker nodes continuously report their health to the cluster:

  • node conditions such as Ready, MemoryPressure, DiskPressure, and PIDPressure,
  • periodic heartbeats, sent as Lease object updates in the kube-node-lease namespace, and
  • the node’s capacity and allocatable resource figures.

Kubelet sends these updates to the API server. Higher-level components (like the scheduler or cluster autoscaler) rely on this information, but the collection and reporting are done on the worker node.
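
The freshness check on the control plane side can be modeled as a simple staleness test. The 40-second grace period below mirrors a common kube-controller-manager default (node-monitor-grace-period), but treat both the value and the function as an illustrative assumption, not the real implementation.

```python
def node_ready(last_heartbeat_age_s, grace_period_s=40):
    """A node counts as Ready only while its heartbeats are fresh.

    grace_period_s is an assumed default; real clusters may configure
    a different node-monitor-grace-period.
    """
    return last_heartbeat_age_s <= grace_period_s

print(node_ready(10))   # True  - recent heartbeat, node stays Ready
print(node_ready(120))  # False - stale heartbeat, node marked NotReady
```

Once a node is marked NotReady, controllers can begin rescheduling its Pods elsewhere.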

Additionally, worker nodes:

  • garbage-collect unused images and dead containers to reclaim disk space, and
  • evict Pods when the node comes under memory or disk pressure (node-pressure eviction).

Interaction Summary

Putting it all together on a worker node:

  1. Scheduler (control plane) decides a Pod should run on Node A.
  2. API server records the Pod’s assignment to Node A.
  3. Kubelet on Node A:
    • Sees the new Pod assigned to its node.
    • Requests the container runtime to create a Pod sandbox and containers.
    • Invokes CNI plugins to configure the Pod’s network.
  4. Kube-proxy on Node A:
    • Updates its rules if the new Pod is an endpoint for a Service.
  5. Kubelet:
    • Monitors the Pod and containers, runs health probes, enforces limits.
    • Reports status back to the API server.
  6. If something fails (container crash, health probe failure), kubelet:
    • Restarts containers as needed based on the Pod’s restart policy.
    • Updates Pod status so higher-level controllers can react.

All of this happens on every worker node, making them the execution backbone of the Kubernetes cluster.
