Understanding OpenShift Deployment Choices
OpenShift can run on many kinds of infrastructure and in different operational models. Before touching installers or cloud consoles, you need to understand what kind of OpenShift deployment you’re aiming for and what trade‑offs each model brings.
At a high level, there are three main questions:
- Where will the cluster run?
- On‑premises (bare metal, VMware, etc.)
- Public cloud (AWS, Azure, GCP, others)
- Edge / single node / remote sites
- Who manages the infrastructure?
- You (self‑managed, full control and responsibility)
- Red Hat or a cloud provider (managed service)
- How is the cluster created and evolved?
- With the official installer doing most of the infrastructure work
- With infrastructure you prepare and manage yourself
- As a fully managed subscription service
This chapter provides a map of these options and explains how they relate to cluster lifecycle management (creation, scaling, upgrades, and decommissioning), which is covered in more depth later.
Key Concepts: Installation vs Deployment Model vs Lifecycle
It helps to keep three terms distinct:
- Installation method
The concrete way a cluster is initially created:
- Using the openshift-install installer
- Using a managed‑service control plane (e.g., “Create cluster” in a cloud console)
- Using automation you wrote yourself (e.g., Ansible, Terraform, pipeline)
- Deployment model
The overall pattern of how OpenShift is hosted and operated:
- Self‑managed vs managed
- On‑prem vs cloud vs hybrid
- Single cluster vs many clusters
- Cluster lifecycle management
How clusters are changed over time:
- Upgrades, scaling, node replacement, configuration drift, long‑term operations
This chapter focuses on deployment models and the high‑level installation approaches they imply. Details of day‑2 lifecycle tools and operations are discussed separately.
Major OpenShift Deployment Options
Self-Managed vs Managed OpenShift
OpenShift deployment options broadly fall into two families:
- Self-managed OpenShift
- You install, configure, and operate the cluster.
- You own both the control plane and worker nodes, whether on-premises or in the cloud.
- You are responsible for:
- Infrastructure provisioning (unless automated by the installer)
- Availability, performance, backups, and upgrades
- Integrations with your environment (identity, storage, networking)
- Managed OpenShift
- A provider (Red Hat and/or cloud vendor) operates the cluster for you.
- You consume OpenShift “as a service”.
- You focus more on applications, less on cluster plumbing.
Most organizations use both models in different places:
- Managed clusters for fast adoption and typical workloads.
- Self-managed clusters where deep customization or strict on‑prem requirements apply.
Platform Targets: On-Prem, Public Cloud, Edge
OpenShift supports several infrastructure targets:
- On-premises
- Bare metal servers
- Virtualization platforms (e.g., VMware vSphere or Red Hat Virtualization)
- Often chosen for data residency, low latency, or hardware control.
- Public cloud
- Major providers: AWS, Azure, GCP (and others via specialized paths).
- Offers elasticity and global reach, plus provider-native managed offerings.
- Edge and constrained environments
- Single-node clusters and small footprints.
- Remote locations (factories, retail, telco sites) often with limited resources or connectivity.
Each infrastructure type supports a subset of the deployment and installation models described below; not every model is available everywhere.
Installer-Provisioned vs User-Provisioned Infrastructure
When you deploy self-managed OpenShift, you choose between two primary patterns for how infrastructure is created:
- Installer-Provisioned Infrastructure (IPI)
- User-Provisioned Infrastructure (UPI)
These are high-level models that shape the entire install and lifecycle story.
Installer-Provisioned Infrastructure (IPI) – Conceptual Model
With IPI, the official OpenShift installer (openshift-install) is responsible for both:
- Bootstrapping the OpenShift cluster, and
- Creating the underlying infrastructure in your chosen environment.
You usually:
- Provide a minimal configuration (e.g., via an install-config.yaml; see the sketch after this list):
- Platform and region (e.g., AWS region)
- Base domain
- Cluster name
- Initial node sizing or counts
- Run the installer.
- Let it:
- Create networking components (VPCs, subnets, load balancers, etc.)
- Launch and configure compute instances for control plane and workers
- Bootstrap the cluster and configure it to be self‑hosting
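A minimal sketch of what such a configuration can look like on AWS (field names reflect common installer versions; the domain, cluster name, sizing, and quoted placeholders below are illustrative, not prescriptive):

```yaml
apiVersion: v1
baseDomain: example.com        # base DNS domain the cluster lives under
metadata:
  name: demo-cluster           # cluster name (API at api.demo-cluster.example.com)
platform:
  aws:
    region: us-east-1          # target platform and region
controlPlane:
  name: master
  replicas: 3                  # control plane node count
compute:
  - name: worker
    replicas: 3                # initial worker node count
pullSecret: '<pull-secret>'    # registry credentials from Red Hat
sshKey: '<public-ssh-key>'     # optional key for node debugging access
```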
When IPI Fits Well
IPI is typically preferred when:
- You can allow the installer to own the infrastructure for the cluster.
- You are on a supported cloud or virtualization platform with full integration.
- You want the simplest, opinionated experience:
- Faster initial setup
- Tighter integration with day-2 operations (e.g., the Machine API for node scaling)
- Reduced amount of custom automation to maintain
This model suits:
- Greenfield environments where you can accept the installer’s networking/storage patterns.
- Teams without heavy infra-as-code requirements or custom network constraints.
- Training, POCs, and many production environments where the standard patterns work.
Characteristics of IPI
- Strong conventions:
- Standardized cluster layout per platform.
- Infrastructure automation embedded in the installer:
- Creates and wires resources in the platform API (cloud or virtualization).
- Smooth integration with node lifecycle:
- Nodes are typically managed via the Machine API, easing scale‑out/in (see the sketch after this list).
- Less initial flexibility but fewer decisions to make up front.
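For instance, on an IPI cluster, adding or removing workers is usually a MachineSet scale operation rather than hand-provisioned servers (the MachineSet name below is hypothetical; actual names vary per cluster and platform):

```sh
# List the MachineSets the installer created for this cluster
oc get machinesets -n openshift-machine-api

# Scale a worker MachineSet; the Machine API provisions or removes instances
oc scale machineset demo-cluster-abc12-worker-us-east-1a \
  -n openshift-machine-api --replicas=3
```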
If you later need to integrate with complex, preexisting infra patterns, IPI might feel too rigid; that’s where UPI comes in.
User-Provisioned Infrastructure (UPI) – Conceptual Model
With UPI, you:
- Design and create the underlying infrastructure yourself (usually via your own automation), and
- Use the OpenShift installer in a more limited role (to generate configuration and bootstrap artifacts, not to create all resources end‑to‑end).
You:
- Decide and implement:
- Network topology (subnets, routing, firewalls)
- Load balancers and DNS setup
- Storage platforms and node provisioning
- Use the OpenShift installer to:
- Generate manifests and ignition configs (see the sketch after this list)
- Assist with bootstrap and cluster formation
- Coordinate the actual bring‑up of control plane and worker nodes with your infrastructure.
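In practice, the installer’s UPI role often reduces to a few artifact-generation steps, roughly like this (the directory name is a placeholder):

```sh
# Turn your install-config.yaml into Kubernetes manifests you can inspect/edit
openshift-install create manifests --dir=upi-cluster

# Produce ignition configs that your own automation feeds to the
# bootstrap, control plane, and worker machines at first boot
openshift-install create ignition-configs --dir=upi-cluster
ls upi-cluster/*.ign   # bootstrap.ign, master.ign, worker.ign
```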
When UPI Fits Well
UPI is typically chosen when:
- You have strict infrastructure constraints or standardized cloud/on‑prem templates.
- You must integrate with existing networking, security zones, or corporate standards that differ from IPI defaults.
- Your provisioning chain is already automated in:
- Terraform, Ansible, vendor tools, or custom software.
- You’re deploying on infra targets that lack full IPI support, or where you need exotic topologies.
UPI is common in:
- Large enterprises with centralized infrastructure teams.
- Highly regulated environments with tight control on every resource.
- Complex hybrid networks (multi‑VPC, legacy connectivity, multiple firewalls).
Characteristics of UPI
- Maximum flexibility, more responsibility:
- You own every cloud/on‑prem resource.
- Better fit for heavy infra‑as‑code:
- You replicate your standard patterns instead of adopting IPI defaults.
- More up‑front design work:
- Networking, security, DNS, and storage need careful planning.
- May require extra integration effort:
- Node lifecycle and scaling might rely more on your tooling than on the Machine API.
In many organizations, the cluster platform team provides a reusable UPI pattern, then application teams just consume the resulting clusters.
Managed OpenShift Services
Instead of installing and operating OpenShift yourself, you can consume a managed OpenShift service. The core idea:
- You get a fully functional OpenShift cluster endpoint.
- The provider operates the control plane and much of the worker infrastructure.
- You focus on workloads, not cluster plumbing.
Common patterns:
- A cloud-native “OpenShift as a Service” where:
- Control plane runs in the provider’s account/subscription.
- You may or may not manage worker nodes directly, depending on the service.
- The provider automates:
- Cluster creation
- Control plane upgrades
- Platform observability, SLAs, and incident response for control plane issues
When Managed OpenShift Fits Well
- You want to adopt OpenShift quickly without building a platform team first.
- Your workload is mostly cloud-native and suited to the provider’s environment.
- You accept the provider's:
- Operational practices
- Upgrade windows/constraints
- Integrations and limits related to networking, storage, and add-ons
Typical uses:
- New projects or teams starting cloud-native work.
- Organizations prioritizing speed and reduced operational burden.
- Environments where core infrastructure operations are intentionally outsourced.
Characteristics of Managed OpenShift
- Reduced operational responsibility for the cluster:
- You still handle applications and some node-level aspects, but not full control plane ops.
- Tighter integration with provider services:
- Identity, load balancers, storage, logging, metrics, and more.
- Standardization and opinionated defaults:
- Less freedom to change low-level components.
- Different cost model:
- Subscription and cloud usage are often bundled or aligned with provider billing.
From an installation perspective, “installation” becomes:
- A UI or CLI workflow on the provider’s portal, or
- An API/Terraform resource that creates a cluster via the managed service backend,
rather than running the openshift-install binary directly.
Single-Node OpenShift (SNO) as a Deployment Model
Single-node OpenShift (SNO) is a special deployment form where:
- Control plane and worker roles run on a single machine.
- You still get a full OpenShift stack (including Operators), but:
- No high availability across multiple nodes.
- Resources are limited compared to a multi-node cluster.
While SNO itself is covered in detail elsewhere, from a deployment-model perspective it matters because:
- It targets edge and small-footprint use cases.
- It often runs on:
- Minimal on‑prem hardware at a remote site
- Specialized or rugged edge devices
- It can be:
- Self-managed (installed and operated by you)
- Part of a broader fleet that’s centrally managed (e.g., via GitOps, RHACM, or similar tools)
SNO changes the shape of your OpenShift estate:
- Instead of a few large clusters, you may manage many small SNO clusters.
- Installation and lifecycle strategies must support mass deployment and fleet-level management, not just one-off setups.
Choosing a Deployment Model
Selecting the right deployment model is a balance across multiple dimensions:
1. Operational Responsibility
- If you have a strong platform/SRE team and want maximum control:
- Self-managed IPI or UPI may be best.
- If you want to reduce platform operations load:
- Managed OpenShift services are often preferable.
2. Infrastructure Constraints
- If you run primarily on on-prem hardware:
- Self-managed (IPI/UPI) is typical.
- SNO might be used for remote or edge sites.
- If you are cloud-first:
- Managed OpenShift plus possibly some self-managed clusters (for special needs).
3. Network and Security Requirements
- Strict, custom network and security patterns:
- Often push you towards UPI or particular on-prem configurations.
- More standard cloud VPC/VNet environments:
- Fit nicely with IPI or managed services.
4. Scale and Footprint
- A few large, central clusters:
- Classic self-managed or managed OpenShift deployments.
- Many small/edge deployments:
- Single-node OpenShift or small-form-factor clusters, with fleet management strategies.
5. Automation and IaC Maturity
- Strong existing Infra‑as‑Code, strict resource control:
- UPI (or a heavily customized IPI wrapper) is common.
- New teams without extensive automation:
- IPI and/or managed services accelerate initial adoption.
A common pattern:
- Start with a managed service or IPI for fast learning and early workloads.
- Over time, introduce UPI or specialized models for complex, regulated, or edge scenarios.
Installation Approaches in Practice
While specific commands and workflows are covered elsewhere, it’s useful here to see how the deployment models shape your day‑zero practice.
Typical IPI Workflow (Conceptual)
- Prepare prerequisites (DNS, base domain, credentials).
- Generate install-config.yaml with the installer.
- Run the installer (see the sketch after this list), which:
- Creates infra, bootstrap, control plane, and workers.
- Retrieve cluster credentials and kubeconfig.
- Optionally integrate with more advanced lifecycle tooling later.
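As a rough sketch, the whole flow can be as short as this (directory and prompt answers are placeholders):

```sh
# Answer a few prompts (platform, region, base domain, cluster name, ...)
openshift-install create install-config --dir=ipi-cluster

# Create infrastructure, bootstrap the control plane, and finish the install
openshift-install create cluster --dir=ipi-cluster

# Use the generated credentials to reach the new cluster
export KUBECONFIG=ipi-cluster/auth/kubeconfig
oc get nodes
```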
Typical UPI Workflow (Conceptual)
- Design infra (network, security, storage, compute instances).
- Implement infra creation in your tools (Terraform, etc.).
- Use installer to generate configs/ignition files.
- Manually (or via your tooling) boot nodes and pass ignition.
- Perform additional manual or automated steps to complete cluster bring‑up (see the sketch after this list).
- Integrate cluster lifecycle into your existing IaC pipelines.
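The final bring‑up steps in that flow typically look something like this sketch (exact steps depend on your automation and platform):

```sh
# After your tooling boots the bootstrap and control plane machines:
openshift-install wait-for bootstrap-complete --dir=upi-cluster

# In UPI, newly joined workers usually need their CSRs approved
oc get csr
oc adm certificate approve <csr-name>   # repeat for each pending request

# Wait for installation to finish and print console/credential details
openshift-install wait-for install-complete --dir=upi-cluster
```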
Typical Managed Service Workflow (Conceptual)
- Decide cluster parameters (region, size, version, node pools).
- Use the cloud or Red Hat console, CLI, or API to request a cluster (see the sketch after this list).
- Wait for the managed service to provision and signal “ready”.
- Download kubeconfig and start deploying workloads.
- Use provider-specified mechanisms to handle upgrades and scaling.
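As one concrete illustration, here is roughly what that looks like with the rosa CLI for Red Hat OpenShift Service on AWS (flags shown are representative, not exhaustive; other managed offerings expose similar console, CLI, or API flows):

```sh
# Ask the managed service to create a cluster; its backend provisions everything
rosa create cluster --cluster-name my-cluster --region us-east-1

# Poll until the service reports the cluster as ready
rosa describe cluster --cluster my-cluster

# Create initial admin credentials, then log in and deploy workloads
rosa create admin --cluster my-cluster
```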
Deployment Models Across the Cluster Lifecycle
Even though lifecycle is covered in detail elsewhere, the initial deployment model has long-term implications:
- Upgrades
- Managed OpenShift: the provider orchestrates many aspects of control plane upgrades; you must align with their procedures.
- IPI: upgrades are often more straightforward, thanks to well-understood platform integrations (see the sketch after this list).
- UPI: upgrades must respect your custom infrastructure model and sometimes involve more manual steps.
- Scaling
- Managed OpenShift: scaling is often a simple API call, but subject to provider patterns.
- IPI: the Machine API and cluster autoscaling integrate with installer-created infra.
- UPI: scaling typically depends heavily on your provisioning automation.
- DR, backup, and multi-cluster
- Deployment model and infra choices shape how you implement backup, disaster recovery, and multi-cluster topologies (e.g., multiple cloud regions, on-prem plus cloud, edge fleets).
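For self-managed clusters (IPI or UPI alike), upgrades are driven from the cluster itself; a minimal sketch of checking for and starting one:

```sh
# Show the current version and the updates the cluster considers available
oc adm upgrade

# Start an upgrade to the latest available version in the current channel
oc adm upgrade --to-latest=true
```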
Understanding these relationships early helps you avoid expensive re-architecture later.
Summary
- OpenShift installation and deployment models define where and how your clusters run, and who operates them.
- The main axes of choice are:
- Self-managed vs managed services
- Installer-Provisioned Infrastructure (IPI) vs User-Provisioned Infrastructure (UPI) for self-managed clusters
- Full multi-node clusters vs Single-node OpenShift for edge and constrained environments
- Each model carries trade-offs in:
- Operational responsibility
- Flexibility and complexity
- Integration with existing infrastructure and automation
- Your first decision is not which installer flag to use, but which deployment model fits your organization’s constraints and goals.
Subsequent chapters dive deeper into each specific option—IPI, UPI, managed services, single-node deployments, and cluster lifecycle management—building on this high-level map.