Concept and Use Cases
Single-node OpenShift (SNO) is a deployment model where all OpenShift control plane and worker node components run on a single physical or virtual machine. It is a fully conformant OpenShift cluster, just with a cluster size of one.
Typical use cases include:
- Edge and far-edge deployments
- Industrial or manufacturing sites
- Retail stores and branch offices
- Telco cell sites and base stations
- Resource-constrained environments
- Small labs, test beds, and PoCs
- On-prem appliances with limited hardware
- Special-purpose systems
- Data acquisition gateways
- Network function virtualization at the edge
- Hardened or isolated environments with no local cluster admins
Compared to multi-node clusters, SNO trades redundancy and scale-out capacity for simplicity, smaller footprint, and easier placement in constrained locations.
Architectural Characteristics of SNO
In SNO, the logical cluster roles are the same as in multi-node OpenShift, but they are co-located:
- Control plane and worker on the same host
- The `kube-apiserver`, `etcd`, `kube-scheduler`, `kube-controller-manager`, OpenShift API, authentication stack, and all worker node kubelet workloads run on a single node.
- There is only one `etcd` instance; there is no multi-member `etcd` cluster.
- High availability trade-offs
- No replication of control plane components.
- No failover node for workloads.
- A host or OS reboot means a full cluster outage for the duration.
- Persistence assumptions
- The host must provide reliable storage for `etcd` and platform components.
- Disk performance and reliability directly affect cluster stability, since all state is local.
- Scheduling model
- All user workloads and infrastructure workloads (Ingress, monitoring, local registry if used, etc.) share CPU, memory, and storage on the same machine.
- Taints and tolerations are usually configured so that control-plane-level components and user workloads can co-exist without starving critical services.
SNO remains API-compatible with multi-node OpenShift, which means most tools, manifests, and automation can be reused with minor adjustments.
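To make the co-location concrete, the single node in an SNO cluster typically carries both control-plane and worker role labels, so the scheduler places regular workloads alongside platform components. The excerpt below is a minimal sketch with a placeholder node name, not output from a real cluster:

```yaml
# Sketch only: on SNO, one node holds every role.
apiVersion: v1
kind: Node
metadata:
  name: sno-node-01                         # placeholder node name
  labels:
    node-role.kubernetes.io/master: ""
    node-role.kubernetes.io/control-plane: ""
    node-role.kubernetes.io/worker: ""
```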
When to Choose Single-node vs Multi-node
Single-node OpenShift is particularly appropriate when:
- You cannot deploy multiple machines
- Physical space, power, or network limitations.
- Remote or rugged environments with only one reliable server.
- Local autonomy is required
- Applications must keep running even if WAN connectivity to central sites is lost.
- Local processing of data is required for latency, bandwidth, or compliance reasons.
- Operational simplicity is paramount
- Minimal on-site administration.
- Appliance-like behavior is desired.
In contrast, you should not use SNO when:
- You require high availability at the cluster level
- No tolerance for single-node failure.
- Strict SLAs that need node-level redundancy.
- You need large-scale compute
- Many nodes, GPUs, or high aggregate memory requirements.
- Large multi-tenant environments with many teams sharing a cluster.
- You expect heavy, noisy workloads
- Very resource-intensive applications that would interfere with control-plane stability.
A common pattern is to combine a central multi-node OpenShift cluster with many SNOs deployed at the edge, managed via central tooling.
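As a sketch of that hub-and-spoke pattern, assuming the central tooling is Red Hat Advanced Cluster Management (built on open-cluster-management), each edge SNO can be registered on the hub as a ManagedCluster; the name and labels below are hypothetical:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: sno-retail-042                  # hypothetical name for one edge site
  labels:
    site-type: retail                   # hypothetical labels used to slice the fleet
    region: emea
spec:
  hubAcceptsClient: true                # the hub accepts the agent running on this spoke
```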
Resource Sizing and Hardware Considerations
Because all roles share the same host, sizing is critical for stability:
- CPU
- Reserve cores for control plane components; do not plan for 100% CPU usage by workloads (a reservation sketch follows at the end of this section).
- Hyperthreading can help but does not replace proper core allocation.
- Memory
- The control plane, `etcd`, and platform services have a baseline memory footprint.
- Plan headroom for spikes (e.g., deployment bursts, logging, monitoring scrapes).
- Storage
- Low-latency, high-IOPS storage is important for `etcd` and core components.
- Prefer SSD or NVMe; avoid slow or heavily contended disks.
- Plan separate logical volumes for:
- OS and platform components
- Container storage (e.g., `overlay2`, CSI-backed volumes)
- Optional application data, especially for stateful workloads
- Network
- Ensure reliable local network for any attached devices, sensors, or peer services.
- Bandwidth may be low to WAN, but local network performance should be predictable.
- Time synchronization (NTP or similar) remains essential.
For constrained deployments, carefully match:
- Number of expected pods
- Size of container images and logs
- Any local persistent state
to the machine’s CPU, memory, disk, and network capacity.
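One way to make the CPU and memory reservations above explicit is a KubeletConfig that sets system-reserved resources for the node. The sketch below assumes the single node belongs to the default master machine config pool, and the reserved values are illustrative; size them for your hardware (recent OpenShift releases can also size these reservations automatically):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: sno-system-reserved             # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      # On SNO, the single node is part of the master pool.
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    systemReserved:
      cpu: "2"                          # illustrative: keep cores free for platform services
      memory: 8Gi                       # illustrative: headroom for control plane and etcd
```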
Installation Approaches Specific to SNO
While SNO uses the same underlying installation technologies as other OpenShift deployments, some patterns are common and specific to single-node use:
- Single-node IPI-like flows
- Automated provisioning of a single bare-metal node (e.g., via Red Hat’s Assisted Installer, or vendor-specific automation).
- Useful when deploying many SNOs with a standardized image.
- Uplift from bare OS
- Install a supported OS on the hardware (often RHCOS or a supported RHEL base).
- Bootstrap OpenShift components locally using ignition and configuration manifests created centrally.
- Often integrated into factory or appliance build pipelines.
- Air-gapped or partially connected installs
- Required for edge locations with no or limited internet access.
- A local or central mirror of container images and Operators is used.
- Bootstrap artifacts (ignition files, manifests, ISO) are prepared in a connected environment and transported physically or over intermittent WAN.
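As a concrete sketch of the configuration typically prepared centrally for these flows, a single-node install-config.yaml sets one control-plane replica and zero workers and, for bootstrap-in-place installs, names the target disk. Exact fields vary by version and install method, and every value below (domain, names, network, disk, secrets) is a placeholder:

```yaml
apiVersion: v1
baseDomain: example.com                 # placeholder
metadata:
  name: sno-site-01                     # placeholder cluster name
controlPlane:
  name: master
  replicas: 1                           # the single node hosts the control plane
compute:
- name: worker
  replicas: 0                           # no separate workers; the control-plane node is schedulable
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.111.0/24              # placeholder site network
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/vda            # placeholder target disk for bootstrap-in-place
pullSecret: '<pull secret>'             # placeholder
sshKey: '<ssh public key>'              # placeholder
```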
Key considerations:
- Reproducibility
- You may need to produce identical SNO instances for many sites.
- Golden images, PXE boot infrastructure, or preconfigured ISO workflows help achieve consistent deployments.
- Zero-touch or low-touch provisioning
- Remote factories or sites may have minimal hands-on expertise.
- Automated installation steps, preconfigured network, and remote console access reduce the need for on-site intervention.
Operational Characteristics and Limitations
Single-node OpenShift behaves like a regular OpenShift cluster from an API and tooling perspective, but its operational envelope is different:
- Availability
- Any maintenance or reboot affects the entire cluster.
- Hardware failures result in total service loss; there is no standby node.
- Maintenance windows
- Upgrades, OS patching, and hardware changes require explicit planning to minimize downtime.
- Applications may require local buffering or graceful shutdown to avoid data loss.
- Resource contention
- Heavy workloads can starve control-plane components, causing instability.
- Use requests and limits, quotas, and scheduling policies to protect critical services (a minimal example follows this list).
- Log and metric retention
- Local disks can be overwhelmed by logs and metrics if not controlled.
- Implement log rotation and retention policies tailored to small-footprint deployments.
- Consider forwarding critical logs and metrics to central systems, bandwidth permitting.
- Security and isolation
- The node is a single point from both functional and security perspectives.
- Hardening of the base OS and physical security of the host are especially important.
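To make the resource-contention point concrete, workloads on SNO should always declare requests and limits so the scheduler and kubelet can protect platform services. The Deployment fragment below is a generic sketch with placeholder names, image, and sizes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app                        # placeholder workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      containers:
      - name: edge-app
        image: registry.example.com/edge-app:1.0    # placeholder image
        resources:
          requests:
            cpu: 250m                   # scheduling guarantee
            memory: 256Mi
          limits:
            cpu: "1"                    # cap to avoid starving platform components
            memory: 512Mi
```

Local metric growth can be bounded in a similar spirit. Assuming the default platform monitoring stack, one commonly used knob is the cluster-monitoring-config ConfigMap; the retention values below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h                    # illustrative: keep only one day of metrics locally
      retentionSize: 10GiB              # illustrative: cap the local TSDB size
```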
Management Strategies for Multiple SNOs
In real-world environments, SNO is often deployed in fleets rather than as a single instance. Managing many SNOs introduces its own patterns:
- Central control with decentralized execution
- GitOps or similar approaches can manage configuration for many SNOs from a central repo (see the sketch after this list).
- Operators and configuration policies are rolled out consistently.
- Consistent baseline configuration
- Common cluster-wide settings:
- Authentication configuration (e.g., OAuth providers)
- Network and DNS settings patterns
- Common platform Operators and storage drivers
- Variants per site:
- Local storage configuration
- Site-specific routes and external endpoints
- Local integration with sensors or data sources
- Monitoring and fleet observability
- Each SNO can run local monitoring stacks, but central aggregation helps.
- Lightweight remote health checks and alarms are crucial, particularly over unreliable links.
- Lifecycle and decommissioning
- Edge sites may be added or retired frequently.
- Automated procedures to:
- Bring up a new SNO with a predefined profile.
- Gracefully drain and archive workloads before decommissioning a node.
- Wipe or repurpose hardware securely.
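As a sketch of central control with decentralized execution, assume the fleet's configuration lives in Git and each SNO runs a GitOps agent such as Argo CD (OpenShift GitOps). The desired state for one site can then be expressed as an Application pointing at a site-specific path; the repository URL, paths, and names are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: site-baseline                   # hypothetical: the baseline config for this site
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/edge/fleet-config.git   # hypothetical repository
    targetRevision: main
    path: sites/sno-site-01             # hypothetical site-specific overlay
  destination:
    server: https://kubernetes.default.svc    # apply to the local (spoke) cluster
    namespace: default
  syncPolicy:
    automated:
      prune: true                       # remove resources deleted from Git
      selfHeal: true                    # revert local drift back to the Git state
```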
Workload Design Considerations on SNO
Designing applications for SNO requires attention to the constraints and strengths of single-node deployments:
- Resilience patterns
- Application-level resilience is more important because infrastructure-level HA is limited.
- Use:
- Local buffering and retry mechanisms.
- Safe restart behavior for stateful services.
- Resource-aware design
- Avoid large numbers of small, chatty microservices that add overhead without clear benefit.
- Minimize container footprint and memory usage.
- Carefully tune concurrency, caching, and background tasks.
- Data handling
- Decide what data is processed and stored locally vs sent upstream.
- Protect local data against corruption (filesystem choices, journaling, backup strategies).
- Where required, support delayed synchronization when WAN connectivity is restored.
- Update strategy
- Plan application updates around cluster-level maintenance windows.
- Use canary or staged deployment patterns across multiple SNOs (e.g., update a subset of sites first before fleet-wide rollout).
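For the data-handling points above, a minimal sketch of a local store-and-forward service follows. It assumes a local CSI storage class exists (named lvms-vg1 here purely as an example) and uses placeholder names, image, and sizes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: edge-buffer                     # placeholder: buffers data until WAN connectivity returns
spec:
  serviceName: edge-buffer
  replicas: 1                           # a single node, so a single replica
  selector:
    matchLabels:
      app: edge-buffer
  template:
    metadata:
      labels:
        app: edge-buffer
    spec:
      containers:
      - name: buffer
        image: registry.example.com/edge-buffer:1.0   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/buffer    # durable local buffer for delayed synchronization
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            memory: 256Mi
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: lvms-vg1        # assumption: a local storage class is available
      resources:
        requests:
          storage: 10Gi                 # illustrative buffer size
```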
Typical SNO Deployment Patterns
Common patterns seen in practice include:
- Branch-office “micro-datacenter”
- One robust SNO with local storage running:
- A small set of business applications.
- Local ingress and perhaps VPN endpoints.
- Optional local database or caching tier.
- Industrial edge gateway
- SNO directly connected to industrial networks and sensors.
- Runs:
- Data collection and pre-processing services.
- Local dashboards and control loops.
- Intermittent synchronization to central analytics platforms.
- Telco edge or MEC node
- SNO hosts virtual network functions or 5G-related services.
- Performance, latency, and deterministic behavior are prioritized.
- Automated remote installation and upgrades are essential.
Each pattern balances local autonomy, central management, and reliability according to business requirements.
Pros and Cons Summary
Advantages:
- Small footprint (single machine, reduced hardware and power needs).
- Full OpenShift feature set, but scaled down.
- Easier deployment into remote or constrained environments.
- Good fit for edge, appliance-like, or specialized workloads.
Limitations:
- No cluster-level high availability.
- Resource contention can more easily destabilize the cluster if not managed.
- Maintenance operations imply full-service downtime on that site.
- Scaling out requires adding more SNOs or moving to a multi-node model.
Understanding these trade-offs helps you decide where Single-node OpenShift fits best in your overall OpenShift deployment strategy.