
Nodes and machine roles

Node Types in OpenShift

OpenShift clusters are composed of several types of nodes, each with specific responsibilities. While the architecture overview already introduced the idea of control plane vs worker, here we focus on how OpenShift refines these into concrete machine roles.

At a high level, an OpenShift node is just a machine (virtual or physical) that runs the OpenShift software stack and participates in the cluster. How that node is labeled, configured, and scheduled defines its role.

Typical node types in OpenShift:

  • Control plane nodes – run the Kubernetes and OpenShift control plane.
  • Worker (compute) nodes – run application workloads.
  • Infrastructure (infra) nodes – workers dedicated to platform services.
  • Specialized worker nodes – for example GPU, storage-heavy, or edge nodes.

These are mostly conventions implemented via labels, taints, and machine configuration, not different binaries.
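
To illustrate that point, a node's role is visible purely as metadata on the Node object. The following excerpt is a sketch with a hypothetical node name:

# Excerpt of a worker Node object: the "role" is nothing more than a label
apiVersion: v1
kind: Node
metadata:
  name: worker-0                          # hypothetical node name
  labels:
    kubernetes.io/hostname: worker-0
    node-role.kubernetes.io/worker: ""    # this label is what makes it a "worker"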

Control Plane Nodes

Control plane nodes (often called “masters”) run the core Kubernetes and OpenShift control plane components.

Characteristics specific to control plane nodes:

  • They run etcd, the Kubernetes API server, controller manager, and scheduler, plus the OpenShift API server and controllers.
  • Production clusters typically use three of them so that etcd and the API remain highly available.
  • They are normally tainted so that ordinary application pods are not scheduled onto them.
  • They belong to the master MachineConfigPool and receive control-plane-specific machine configuration.

In most OpenShift deployments you do not run your applications on control plane nodes; their role is to maintain cluster state and respond to API requests.
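
That restriction is implemented as a taint on the node object. A minimal sketch of what this looks like (the exact key may be node-role.kubernetes.io/master or node-role.kubernetes.io/control-plane depending on the OpenShift version):

# Excerpt of a control plane Node object: the taint keeps ordinary pods away
apiVersion: v1
kind: Node
metadata:
  name: master-0                          # hypothetical node name
  labels:
    node-role.kubernetes.io/master: ""
spec:
  taints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule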

Worker Nodes

Worker nodes (often called “compute” nodes) run your application workloads and most platform services that are not part of the core control plane.

Key characteristics:

  • They run the standard node components: kubelet, CRI-O, and the cluster networking agents.
  • Application pods and most add-on platform services are scheduled onto them.
  • They carry the node-role.kubernetes.io/worker label and are not tainted by default.
  • Cluster capacity is scaled by adding or removing worker nodes.

OpenShift manages worker nodes through the worker MachineConfigPool and, in many cases, through Machine API resources that allow automatic provisioning and scaling (details of provisioning are covered in the installation and deployment chapters).
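
As a hedged sketch of such a Machine API resource, a MachineSet declares how many worker machines should exist. The name, cluster ID, and the platform-specific providerSpec below are placeholders; real values come from your cluster and cloud platform:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-abc12-worker-a          # hypothetical name: <cluster-id>-worker-<zone>
  namespace: openshift-machine-api
spec:
  replicas: 3                             # desired number of worker machines
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster-abc12
        machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-a
    spec:
      providerSpec: {}                    # platform-specific machine details omitted in this sketch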

Infrastructure Nodes

Infrastructure (infra) nodes are worker nodes dedicated to cluster infrastructure workloads instead of user applications.

They still run the node components (kubelet, CRI-O) like any worker, but scheduling on them is restricted to specific infrastructure workloads, for example:

  • The default router (Ingress Controller)
  • The integrated image registry
  • The monitoring stack (for example Prometheus and Alertmanager)
  • Logging components and similar cluster services

Why use infra nodes:

  • Platform services get predictable capacity and do not compete with application workloads.
  • Application nodes stay free for user workloads, which simplifies capacity planning.
  • Depending on your subscription terms, nodes that run only infrastructure workloads may be licensed differently than regular workers.

How infra nodes are typically implemented:

  • Label the nodes with node-role.kubernetes.io/infra="".
  • Optionally taint them (for example node-role.kubernetes.io/infra:NoSchedule) so regular pods stay away.
  • Configure the infrastructure workloads with matching node selectors and tolerations.
  • Optionally create a dedicated infra MachineConfigPool for role-specific machine configuration.

The exact label/taint patterns can vary by OpenShift version and organizational standards, but the principle remains: use labels+taints to dedicate a class of worker nodes to infrastructure services.
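
As one concrete illustration of moving a platform service onto infra nodes, the default Ingress Controller accepts a node placement. This sketch assumes the infra label and taint described above:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""   # only run router pods on infra-labeled nodes
    tolerations:
    - key: node-role.kubernetes.io/infra    # tolerate the infra taint, if one is set
      operator: Exists
      effect: NoSchedule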

Specialized Worker Roles

Beyond basic worker and infra roles, OpenShift encourages defining specialized worker groups for specific types of workloads or hardware. These are implemented with the same mechanisms: labels, taints, and custom MachineConfigPools.

Common examples:

  • GPU or accelerator nodes
  • Storage-heavy or I/O-optimized nodes
  • Edge or remote site nodes

GPU / Accelerator Nodes

Nodes with GPUs or other accelerators are dedicated to ML/AI, visualization, or HPC-style workloads.

Typical characteristics:

  • A dedicated label such as node-role.kubernetes.io/gpu="" and often a matching taint so that only GPU workloads land there.
  • Additional software such as GPU drivers and device plugins, usually installed via an operator on OpenShift.
  • Workloads request the accelerator through extended resources (for example nvidia.com/gpu) and carry the required tolerations.
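
A minimal sketch of a workload targeting such nodes, assuming the node-role.kubernetes.io/gpu label and taint mentioned above and the nvidia.com/gpu extended resource exposed by the NVIDIA device plugin; the image name is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    node-role.kubernetes.io/gpu: ""          # assumed GPU role label
  tolerations:
  - key: "node-role.kubernetes.io/gpu"       # assumed taint on GPU nodes
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: app
    image: myorg/gpu-workload:latest         # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1                    # request one GPU via the device plugin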

Storage-Heavy or I/O Nodes

Some clusters use nodes optimized for I/O-heavy workloads (databases, caches, file-serving workloads).

Examples:

  • Nodes with local NVMe or SSD disks used by software-defined storage such as OpenShift Data Foundation.
  • Nodes sized for databases, caches, or other stateful services that benefit from fast local disks.
  • Nodes with extra network bandwidth for file-serving or streaming workloads.

Edge or Remote Site Nodes

For edge deployments or remote sites:

  • Worker nodes can run at remote locations while the control plane stays in a central cluster (remote worker nodes).
  • Such nodes are usually smaller and may have limited or intermittent connectivity.
  • Labels (for example a site or region label) are used to pin workloads to the right location.

Single-Node OpenShift (SNO) as a Special Case

Single-Node OpenShift (SNO) is a special deployment model where control plane and worker roles run on a single machine.

Role characteristics in SNO:

  • The single node carries both the control plane (master) and worker role labels.
  • The control plane is not tainted against workloads, so application pods run alongside the control plane components.
  • There is no high availability: if the node fails, both the control plane and the workloads go down.

Management concepts like MachineConfigPool still exist, but they apply to a single node.

Node Roles, Labels, and MachineConfigPools

Machine roles in OpenShift are mainly implemented using three mechanisms:

  1. Node roles and labels
    • OpenShift automatically creates some canonical labels, such as:
      • node-role.kubernetes.io/master="" (or control-plane)
      • node-role.kubernetes.io/worker=""
    • Admins can add additional role labels, for example:
      • node-role.kubernetes.io/infra=""
      • node-role.kubernetes.io/gpu=""
    • Workloads then use:
      • nodeSelector
      • affinity / antiAffinity
        to control where pods run.
  2. Taints and tolerations
    • Used to repel workloads from certain node types.
    • Example:
      • Infra nodes tainted with node-role.kubernetes.io/infra:NoSchedule.
      • Only infra workloads with corresponding tolerations are allowed to schedule there.
  3. MachineConfigPools (MCPs)
    • MCPs group nodes with similar configuration needs.
    • Common pools:
      • master
      • worker
      • Custom pools, for example: infra, gpu, edge.
    • MachineConfigs target specific MCPs, allowing:
      • Different OS configuration
      • Different kubelet settings
      • Different kernel parameters
        per role (see the MachineConfigPool sketch after this list).
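
To make the third mechanism concrete, here is a minimal sketch of a custom infra MachineConfigPool. The selectors follow the commonly documented pattern, but the pool name and labels are assumptions that should match your own labeling scheme:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, infra]              # inherit worker MachineConfigs plus infra-specific ones
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""    # nodes labeled as infra join this pool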

Understanding this mapping is key:

  • Labels and node selectors decide where a workload is allowed to run.
  • Taints and tolerations decide which workloads are kept away from a node.
  • MachineConfigPools decide how the machines in a role are configured at the OS and kubelet level.

Assigning Workloads to Node Roles

From an application or platform perspective, using node roles correctly is about:

  • Selecting nodes by role label with nodeSelector or affinity rules instead of hardcoding node names.
  • Adding the tolerations required by any taints on the target nodes.
  • Leaving workloads unconstrained when they have no special placement needs.

Example of a Deployment that targets infra nodes only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: infra-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: infra-service
  template:
    metadata:
      labels:
        app: infra-service
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: "node-role.kubernetes.io/infra"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: app
        image: myorg/infra-service:latest

This Deployment will:

  • Schedule its pods only on nodes labeled node-role.kubernetes.io/infra="".
  • Tolerate the NoSchedule taint used to keep ordinary workloads off those nodes.
  • Run two replicas of the service on the infra nodes.

Design Considerations for Node Role Layout

When planning node roles in an OpenShift cluster, you typically decide:

  • How many control plane nodes you need (usually three, or one for SNO).
  • Whether to dedicate infra nodes to platform services.
  • Which specialized pools (GPU, storage-heavy, edge) the workload mix requires.
  • How large each pool should be and how it will scale.

Key factors that influence design:

  • Cluster size and expected growth.
  • The workload mix: stateless applications, databases, ML/AI, platform services.
  • Availability requirements for the control plane and for infrastructure services.
  • Hardware availability and cost, including subscription considerations.
  • Operational complexity: every additional pool is another thing to configure and maintain.

By carefully defining node roles and using labels, taints, and MachineConfigPools, you can tightly control where workloads run, how cluster resources are used, and how different classes of machines are managed.
