Overview of OpenShift Deployment Options
OpenShift can be deployed in several ways, depending on who manages the infrastructure, who manages the platform, and where the cluster runs (on‑premises, private cloud, public cloud, edge, etc.). This chapter introduces these deployment options at a high level and explains how to choose between them.
Later chapters in this section go into detail about specific models such as Installer‑Provisioned Infrastructure (IPI), User‑Provisioned Infrastructure (UPI), managed services, and single‑node deployments. Here, the focus is on the big picture and on comparing the options.
Key Axes of Deployment Choice
Most OpenShift deployment options can be described along a few main axes:
- Location
  - On‑premises datacenter
  - Private cloud (e.g. OpenStack, VMware-based clouds)
  - Public cloud (AWS, Azure, GCP, others)
  - Edge / remote sites
- Responsibility
  - Self‑managed (you operate the cluster)
  - Fully managed (cloud provider / Red Hat operates it)
  - Shared responsibility models
- Infrastructure provisioning style
  - Installer‑Provisioned Infrastructure (IPI)
  - User‑Provisioned Infrastructure (UPI)
- Cluster shape
  - Multi‑node production clusters
  - Single‑node or small footprint clusters
  - Specialized clusters (e.g. for GPUs, edge, or test/dev)
Understanding where your requirements fall on these axes helps narrow down the appropriate deployment option.
Broad Categories of OpenShift Deployments
1. Self‑Managed OpenShift on Your Own Infrastructure
You install and operate the cluster yourself. This includes:
- Bare metal installations in your datacenter
- Virtualized environments (e.g. VMware vSphere or Red Hat Virtualization)
- Private cloud platforms (e.g. OpenStack)
Typical characteristics:
- You own and manage the underlying hardware or virtualization layer.
- You are responsible for OpenShift installation, lifecycle, and day‑to‑day operations.
- Common for:
  - Enterprises with strict compliance or data residency requirements
  - Workloads requiring low‑latency or specialized hardware
  - Environments where public cloud is not allowed or not cost‑effective
Within this category, you can choose between IPI and UPI. Those details are covered in the dedicated chapters, but at a high level:
- IPI: The OpenShift installer provisions infrastructure components for you (supported platforms only).
- UPI: You provision the infrastructure yourself and then layer OpenShift on top.
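To make the IPI flow a little more concrete: the installer is driven by a file called `install-config.yaml`. The sketch below builds a minimal, illustrative version of that file, assuming an AWS target; the domain, cluster name, and region are placeholders, and a real file also needs a pull secret and usually an SSH key. Python with PyYAML is used here only to keep this chapter's examples in one language.

```python
# Minimal, illustrative install-config.yaml for an IPI install, assuming an AWS target.
# All values are placeholders; requires PyYAML (pip install pyyaml).
import yaml

install_config = {
    "apiVersion": "v1",
    "baseDomain": "example.com",            # base DNS domain the cluster will live under
    "metadata": {"name": "demo-cluster"},   # cluster name
    "controlPlane": {"name": "master", "replicas": 3},
    "compute": [{"name": "worker", "replicas": 3}],
    "platform": {"aws": {"region": "us-east-1"}},
    # A real file also includes "pullSecret", usually "sshKey", and often a
    # "networking" section; the platform block differs per provider.
}

with open("install-config.yaml", "w") as f:
    yaml.safe_dump(install_config, f, sort_keys=False)
```

With UPI, the same kind of configuration still exists, but you create the underlying machines, load balancers, and DNS records yourself before and during the installation.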
2. Self‑Managed OpenShift on Public Cloud
You still operate OpenShift yourself, but the underlying infrastructure runs in a public cloud. Examples include OpenShift clusters on:
- AWS (EC2, EBS, networking)
- Azure (VMs, managed disks, VNets)
- Google Cloud Platform (Compute Engine, disks, VPC)
- Other supported cloud providers
Typical characteristics:
- You benefit from cloud elasticity and managed infrastructure services.
- You remain responsible for:
  - Installing OpenShift (using IPI or UPI)
  - Upgrades, patching, and configuration
  - Monitoring and troubleshooting the cluster
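To give a sense of what that responsibility looks like in practice, the sketch below drives an IPI installation with the `openshift-install` CLI, wrapped in Python only for consistency with the other examples in this chapter. It assumes the binary is on your PATH, cloud credentials are configured, and the assets directory (a placeholder name) contains an `install-config.yaml` like the one shown earlier.

```python
# Sketch: drive an IPI installation in a public cloud account.
# Assumes openshift-install is on PATH and cloud credentials are already configured.
import subprocess

ASSETS_DIR = "./cluster-assets"  # placeholder; holds install-config.yaml, state, and logs

# Provisions the cloud infrastructure and installs the cluster; a standard
# multi-node install typically takes tens of minutes.
subprocess.run(
    ["openshift-install", "create", "cluster", "--dir", ASSETS_DIR, "--log-level", "info"],
    check=True,
)

# The same assets directory is used later to tear the cluster down:
# subprocess.run(["openshift-install", "destroy", "cluster", "--dir", ASSETS_DIR], check=True)
```

In the UPI flow, `create cluster` is roughly replaced by generating manifests and ignition configs and then provisioning the machines yourself; with a managed service (covered next), none of this is your responsibility.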
Common reasons to choose this model:
- Need for full control over cluster configuration and lifecycle
- Hybrid cloud strategies where on‑prem and cloud clusters are operated similarly
- Integration with existing cloud‑native tools and services while maintaining platform control
3. Managed OpenShift Services (OpenShift as a Service)
In this model, the platform is delivered as a managed service by a cloud provider or by Red Hat. Examples (names may evolve over time, but the patterns are similar):
- Managed services jointly run by Red Hat and cloud providers:
  - Red Hat OpenShift Service on AWS (ROSA)
  - Azure Red Hat OpenShift (ARO)
  - Red Hat OpenShift on IBM Cloud
- Other hosted OpenShift services from partners or cloud vendors
Typical characteristics:
- The provider is responsible for:
  - Installing the clusters
  - Managing and upgrading the control plane and worker nodes (within agreed boundaries)
  - Ensuring cluster availability (within the agreed SLA)
- You focus primarily on:
  - Creating projects/namespaces
  - Deploying and operating applications
  - Configuring application‑level services and policies
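To illustrate that division of responsibility, the sketch below shows the kind of application-level interaction a team typically has with a managed cluster; everything underneath is the provider's concern. The project name and image reference are placeholders, and it assumes the `oc` CLI is already logged in to the cluster.

```python
# Sketch: day-to-day application work on a managed OpenShift cluster.
# Assumes the oc CLI is installed and already logged in; names and the image are placeholders.
import subprocess

def oc(*args: str) -> None:
    subprocess.run(["oc", *args], check=True)

oc("new-project", "team-a-demo")                        # create a project (namespace)
oc("new-app", "quay.io/example/hello:latest",           # deploy an app from a container image
   "--name", "hello")                                   #   (placeholder image reference)
oc("get", "pods", "-n", "team-a-demo")                  # verify the workload is running
```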
Typical use cases:
- Organizations that want OpenShift features without running the platform themselves
- Teams without strong in‑house platform operations expertise
- Faster time‑to‑value and standardized deployments
This category is covered more deeply in the “Managed OpenShift services” chapter; here you should remember it as “OpenShift, but someone else runs it for you.”
4. Edge, Small Footprint, and Specialized Deployments
Not all environments look like a large datacenter or cloud region. OpenShift also offers options for:
- Edge and remote sites
  - Retail stores, 5G sites, industrial locations
  - Limited compute, power, or network connectivity
- Single-node OpenShift
  - Control plane and workloads on the same node
  - Suitable for highly constrained or embedded environments
- Compact clusters
  - A small number of nodes (for example, three nodes that combine the control plane and worker roles)
  - Often used in smaller sites or special-purpose clusters
These deployment options trade cluster size and redundancy for footprint and simplicity. They are explored in detail in the “Single-node OpenShift” and other related chapters.
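For orientation, the main installation-time difference for single-node OpenShift shows up in the same `install-config.yaml` fields used earlier in this chapter. The sketch below lists the relevant settings with placeholder values; the full procedure, including the bootstrap-in-place details, belongs to that later chapter.

```python
# Sketch: install-config settings that characterize a single-node cluster, relative to
# the multi-node example earlier in this chapter. Values are placeholders.
single_node_settings = {
    "controlPlane": {"name": "master", "replicas": 1},  # one node hosts the control plane
    "compute": [{"name": "worker", "replicas": 0}],     # no separate workers; workloads share the node
    # Single-node installs also typically set the installation disk for the
    # bootstrap-in-place flow (a "bootstrapInPlace" section in the install config).
}
print(single_node_settings)
```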
Comparing Deployment Options
Control vs Operational Burden
Roughly, deployment options fall along a spectrum:
- Highest control, highest operational responsibility
  - Self‑managed OpenShift on bare metal or private cloud (UPI)
  - Self‑managed OpenShift on public cloud (UPI)
- Medium control, medium operational responsibility
  - Self‑managed OpenShift using IPI on supported platforms
  - Compact clusters in controlled environments
- Lowest control, lowest operational responsibility
  - Managed OpenShift services (ROSA, ARO, etc.)
  - Hosted OpenShift platforms run by providers
The more control you need (over OS versions, network topology, security hardening, etc.), the more likely you are to choose a self‑managed model. The more you want to offload platform operations, the more a managed service makes sense.
Typical Decision Drivers
Compliance and Data Residency
- Strict regulatory environments often prefer:
  - On‑premises or private cloud self‑managed deployments
  - Clusters in specific regions or isolated networks
Skills and Team Structure
- Strong in‑house SRE/platform teams:
  - Comfortable with self‑managed clusters and custom integrations
- Primarily application development teams:
  - Benefit from managed OpenShift services, where the platform is “just there”
Cost Model and Procurement
- CapEx‑focused organizations:
  - Might invest in on‑prem hardware and self‑managed OpenShift
- OpEx‑focused or cloud‑first organizations:
  - Often choose managed OpenShift services or self‑managed in public cloud
Scale and Growth
- Rapidly growing or bursty workloads:
  - Fit well with public cloud (self‑managed or managed)
- Stable, predictable workloads:
  - May justify dedicated on‑prem clusters
Examples of Deployment Scenarios
Enterprise with Existing Datacenter
- Requirements:
  - Use existing hardware and network
  - Integrate with corporate identity and security tools
- Likely choice:
  - Self‑managed OpenShift on bare metal or virtualization
  - UPI or IPI depending on platform support and automation maturity
Cloud‑First Startup
- Requirements:
  - Move fast, minimal operations overhead
  - Use cloud‑native services (managed databases, storage, etc.)
- Likely choice:
  - Managed OpenShift service on a preferred cloud provider
  - Or self‑managed OpenShift in the cloud with IPI if more customization is needed
Telecom or Edge Use Case
- Requirements:
  - Distributed edge locations
  - Small footprint, possibly harsh or constrained environments
- Likely choice:
  - Single‑node OpenShift or compact clusters at edge sites
  - Central management from one or more larger clusters
Hybrid Organization
- Requirements:
  - Some workloads must stay on‑prem (compliance, latency)
  - Other workloads run in public cloud
- Likely choice:
  - Self‑managed OpenShift on‑prem
  - Plus managed or self‑managed OpenShift in public cloud
  - Federation or GitOps to coordinate cluster configuration and application deployment
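In the hybrid case, “GitOps to coordinate cluster configuration” usually means a tool such as OpenShift GitOps (based on Argo CD) pulling shared manifests from a Git repository into each cluster. The sketch below builds an illustrative Argo CD `Application` for that pattern; the repository URL, path, and destination namespace are placeholders, and the details depend on the GitOps tool you actually use.

```python
# Sketch: an illustrative Argo CD Application that syncs shared configuration from Git
# into a cluster. Repository URL, path, and namespaces are placeholders; requires PyYAML.
import yaml

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "shared-config", "namespace": "openshift-gitops"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/platform/cluster-config.git",  # placeholder repo
            "path": "overlays/on-prem",      # per-cluster overlay, e.g. on-prem vs. cloud
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",  # the local cluster
            "namespace": "cluster-config",               # placeholder target namespace
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(application, sort_keys=False))
```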
How Deployment Options Affect Operations
Your deployment choice influences many operational aspects, including:
- Installation complexity
  - Managed services: minimal; you mostly configure parameters
  - Self‑managed: full install workflows (IPI/UPI) and integration with your infra
- Upgrade process
  - Managed services: largely handled by the provider
  - Self‑managed: you plan and run OpenShift upgrades yourself and coordinate them with infrastructure changes (see the sketch after this list)
- Integration with existing tools
  - On‑prem: easier to integrate with legacy systems and networks
  - Cloud: easier to integrate with cloud‑native services (managed databases, queues, etc.)
- Support model
  - Managed services: unified support from the cloud provider and Red Hat for platform issues
  - Self‑managed: you own first‑line troubleshooting, then escalate to vendor support if needed
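As a small illustration of that upgrade responsibility on a self-managed cluster, the sketch below checks the current version and the available updates with `oc adm upgrade`; it assumes the `oc` CLI is logged in with cluster-admin rights, and actually applying an update would add an explicit target.

```python
# Sketch: check the cluster version and available updates on a self-managed cluster.
# Assumes the oc CLI is installed and logged in with cluster-admin privileges.
import subprocess

# Shows the current version, the update channel, and any available updates.
subprocess.run(["oc", "adm", "upgrade"], check=True)

# The ClusterVersion resource exposes the same information for scripting.
subprocess.run(["oc", "get", "clusterversion"], check=True)

# Actually applying an update adds an explicit target, for example:
# subprocess.run(["oc", "adm", "upgrade", "--to-latest=true"], check=True)
```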
These aspects are explored more deeply in the “Cluster lifecycle management,” “Upgrades, Maintenance, and Operations,” and related chapters.
Choosing a Deployment Option: Key Questions
When deciding among deployment options, it helps to ask:
- Where must my data and workloads live?
  - Are there regulatory or latency constraints?
- Who will operate the platform?
  - Do we have (or want to build) a platform/SRE team?
- How much customization do we require?
  - Do we need specific OS, network topologies, or hardware?
- What scale and growth do we expect?
  - How quickly will we need to add capacity or new clusters?
- How do we prefer to pay for infrastructure and platform?
  - CapEx vs OpEx, contracts with cloud providers, support models
Your answers typically lead toward one of the main categories:
- Self‑managed on‑prem
- Self‑managed on public cloud
- Managed OpenShift service
- Edge/small footprint variants
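One way to make that mapping concrete is a small decision helper. The sketch below encodes only the rules of thumb from this chapter in deliberately simplified form; real decisions weigh many more factors, and the function and parameter names are purely illustrative.

```python
# Sketch: a deliberately simplified decision helper that maps the questions above to
# the deployment categories used in this chapter. Names and rules are illustrative only.
def suggest_deployment(
    data_must_stay_on_prem: bool,
    has_platform_team: bool,
    needs_deep_customization: bool,
    constrained_edge_sites: bool,
) -> str:
    if constrained_edge_sites:
        return "Edge/small footprint (single-node or compact clusters)"
    if data_must_stay_on_prem:
        return "Self-managed on-prem"
    if not has_platform_team and not needs_deep_customization:
        return "Managed OpenShift service"
    return "Self-managed on public cloud"

print(suggest_deployment(
    data_must_stay_on_prem=False,
    has_platform_team=False,
    needs_deep_customization=False,
    constrained_edge_sites=False,
))  # -> Managed OpenShift service
```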
Later chapters in this section will connect these choices to the concrete technical models: IPI, UPI, managed services, and single‑node OpenShift.