
5 OpenShift Installation and Deployment Models

Understanding OpenShift Deployment Choices

OpenShift can run on many kinds of infrastructure and in different operational models. Before touching installers or cloud consoles, you need to understand what kind of OpenShift deployment you’re aiming for and what trade‑offs each model brings.

At a high level, there are three main questions:

  1. Where will the cluster run?
    • On‑premises (bare metal, VMware, etc.)
    • Public cloud (AWS, Azure, GCP, others)
    • Edge / single node / remote sites
  2. Who manages the infrastructure?
    • You (self‑managed, full control and responsibility)
    • Red Hat or a cloud provider (managed service)
  3. How is the cluster created and evolved?
    • With the official installer doing most of the infrastructure work
    • With infrastructure you prepare and manage yourself
    • As a fully managed subscription service

This chapter provides a map of these options and explains how they relate to cluster lifecycle management (creation, scaling, upgrades, and decommissioning), which is covered in more depth later.

Key Concepts: Installation vs Deployment Model vs Lifecycle

It helps to keep three terms distinct:

  • Installation – the day‑zero act of creating a cluster on some infrastructure.
  • Deployment model – where the cluster runs and who manages it (self‑managed vs managed; on‑premises, cloud, or edge).
  • Lifecycle – everything after creation: scaling, upgrades, and eventual decommissioning.

This chapter focuses on deployment models and the high‑level installation approaches they imply. Details of day‑2 lifecycle tools and operations are discussed separately.

Major OpenShift Deployment Options

Self-Managed vs Managed OpenShift

OpenShift deployment options broadly fall into two families:

  1. Self-managed OpenShift
    • You install, configure, and operate the cluster.
    • You own both the control plane and worker nodes, whether on-premises or in the cloud.
    • You are responsible for:
      • Infrastructure provisioning (unless automated by the installer)
      • Availability, performance, backups, and upgrades
      • Integrations with your environment (identity, storage, networking)
  2. Managed OpenShift
    • A provider (Red Hat and/or cloud vendor) operates the cluster for you.
    • You consume OpenShift “as a service”.
    • You focus more on applications, less on cluster plumbing.

Most organizations use both models in different places; for example, self‑managed clusters on‑premises where control and integration matter, and managed clusters in the public cloud where speed of delivery matters more.

Platform Targets: On-Prem, Public Cloud, Edge

OpenShift supports several infrastructure targets:

  • On‑premises – bare metal, VMware, and other virtualization platforms
  • Public cloud – AWS, Azure, GCP, and other supported providers
  • Edge – single‑node and small‑footprint clusters at remote sites

Each infrastructure type supports a subset of the deployment and installation models described below; not every model is available everywhere.

Installer-Provisioned vs User-Provisioned Infrastructure

When you deploy self-managed OpenShift, you choose between two primary patterns for how infrastructure is created:

  1. Installer-Provisioned Infrastructure (IPI)
  2. User-Provisioned Infrastructure (UPI)

These are high-level models that shape the entire install and lifecycle story.

Installer-Provisioned Infrastructure (IPI) – Conceptual Model

With IPI, the official OpenShift installer (openshift-install) is responsible for both:

  • Provisioning the underlying infrastructure (networks, load balancers, machines)
  • Installing and configuring OpenShift on that infrastructure

You usually:

  1. Provide a minimal configuration (e.g., via an install-config.yaml):
    • Platform and region (e.g., AWS region)
    • Base domain
    • Cluster name
    • Initial node sizing or counts
  2. Run the installer.
  3. Let it:
    • Create networking components (VPCs, subnets, load balancers, etc.)
    • Launch and configure compute instances for control plane and workers
    • Bootstrap the cluster and configure it to be self‑hosting

When IPI Fits Well

IPI is typically preferred when:

  • The target platform is one the installer can fully automate (e.g., a supported public cloud).
  • You want a fast, repeatable path to a working cluster with minimal infrastructure work.
  • You have no hard requirement to reuse preexisting networks, load balancers, or DNS layouts.

This model suits teams starting fresh on supported platforms, proof‑of‑concept and development clusters, and organizations that prefer a standardized, opinionated setup.

Characteristics of IPI

Because the installer creates and owns much of the infrastructure, IPI clusters are highly automated, consistent, and reproducible, but their layout is opinionated. If you later need to integrate with complex, preexisting infra patterns, IPI might feel too rigid; that’s where UPI comes in.

User-Provisioned Infrastructure (UPI) – Conceptual Model

With UPI, you provide and manage the underlying infrastructure yourself, and the installer handles only the OpenShift‑specific pieces. You:

  1. Decide and implement:
    • Network topology (subnets, routing, firewalls)
    • Load balancers and DNS setup
    • Storage platforms and node provisioning
  2. Use the OpenShift installer to:
    • Generate manifests and ignition configs
    • Assist with bootstrap and cluster formation
  3. Coordinate the actual bring‑up of control plane and worker nodes with your infrastructure.
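Step 2 above typically looks like the following command sequence; the directory name is an example, and the directory must already contain your install-config.yaml:

```shell
# Generate Kubernetes manifests from the install-config.yaml
# in the given directory (the file is consumed in the process).
openshift-install create manifests --dir=./my-cluster

# Optionally customize the manifests, then render the ignition
# configs that bootstrap, control plane, and worker machines consume.
openshift-install create ignition-configs --dir=./my-cluster
```

The resulting ignition files are what your own provisioning tooling passes to nodes as they boot in step 3.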

When UPI Fits Well

UPI is typically chosen when:

  • Infrastructure must follow existing organizational standards (networks, load balancers, security zones).
  • The environment is restricted or air‑gapped and the installer cannot create resources on its own.
  • The target platform (e.g., bare metal) requires provisioning the installer does not automate.

UPI is common in regulated industries, on‑premises data centers, and bare‑metal deployments.

Characteristics of UPI

You gain maximum control over infrastructure choices, at the cost of more up‑front design work and ongoing responsibility for keeping your infrastructure and the cluster in sync.

In many organizations, the cluster platform team provides a reusable UPI pattern, then application teams just consume the resulting clusters.

Managed OpenShift Services

Instead of installing and operating OpenShift yourself, you can consume a managed OpenShift service. The core idea: the provider runs and maintains the cluster (especially the control plane), while you consume it as a service and concentrate on your applications.

Common offerings include Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), and OpenShift Dedicated.

When Managed OpenShift Fits Well

Typical uses include teams that want to focus on applications rather than platform operations, organizations without deep Kubernetes operations expertise, and fast‑moving projects where time to a working cluster matters most.

Characteristics of Managed OpenShift

You trade some control and customization for a reduced operational burden; the provider handles availability, upgrades, and much of the underlying infrastructure.

From an installation perspective, “installation” becomes a request: you specify cluster parameters through a console, CLI, or API and wait for the service to provision the cluster.

Single-Node OpenShift (SNO) as a Deployment Model

Single-node OpenShift (SNO) is a special deployment form where the control plane and worker roles run together on a single node, giving you a complete cluster with a minimal footprint.

While SNO itself is covered in detail elsewhere, from a deployment-model perspective it matters because it makes OpenShift viable at the edge and at remote sites where running three or more nodes is impractical.

SNO changes the shape of your OpenShift estate: instead of a few large clusters, you may operate many small ones, which shifts the emphasis toward fleet‑level automation and management.
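As an illustration, a single‑node install is expressed in install-config.yaml by collapsing the topology to one machine. The fragment below is a sketch; exact fields depend on platform and OpenShift version, and all values are examples:

```yaml
# Illustrative install-config.yaml fragment for single-node OpenShift
# on bare metal (platform "none" with bootstrap-in-place).
apiVersion: v1
baseDomain: example.com
metadata:
  name: edge-site-01
controlPlane:
  name: master
  replicas: 1          # the single node carries the control plane
compute:
- name: worker
  replicas: 0          # no separate workers; the node also runs workloads
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda   # disk targeted by the bootstrap-in-place flow
pullSecret: '...'
```

The single‑replica control plane and zero‑replica compute pool are what distinguish this from a standard multi‑node configuration.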

Choosing a Deployment Model

Selecting the right deployment model is a balance across multiple dimensions:

1. Operational Responsibility

How much of the platform do you want to operate yourself, and how much should a provider take on?

2. Infrastructure Constraints

Where must clusters run: existing data centers, particular clouds, or remote edge sites?

3. Network and Security Requirements

Do restricted or air‑gapped networks, compliance rules, or existing security controls constrain how infrastructure can be created?

4. Scale and Footprint

Will you run a few large clusters, many small ones, or single‑node clusters at the edge?

5. Automation and IaC Maturity

Can your team own and maintain a UPI‑style infrastructure codebase, or is IPI or a managed service a better fit?

A common pattern: managed services or IPI for standard cloud workloads, UPI where infrastructure constraints demand it, and single‑node clusters at the edge.

Installation Approaches in Practice

While specific commands and workflows are covered elsewhere, it’s useful here to see how the deployment models shape your day‑zero practice.

Typical IPI Workflow (Conceptual)

  1. Prepare prerequisites (DNS, base domain, credentials).
  2. Generate install-config.yaml with the installer.
  3. Run installer, which:
    • Creates infra, bootstrap, control plane, and workers.
  4. Retrieve cluster credentials and kubeconfig.
  5. Optionally integrate with more advanced lifecycle tooling later.
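Conceptually, the steps above map to a short command sequence (the asset directory is an example name):

```shell
# Interactively generate install-config.yaml for the chosen platform
openshift-install create install-config --dir=./my-cluster

# Create infrastructure, bootstrap the cluster, and wait for completion
openshift-install create cluster --dir=./my-cluster

# Use the generated admin credentials to talk to the new cluster
export KUBECONFIG=./my-cluster/auth/kubeconfig
oc get nodes
```

The same directory holds the installer's state and the generated credentials, so it should be kept until the cluster is decommissioned.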

Typical UPI Workflow (Conceptual)

  1. Design infra (network, security, storage, compute instances).
  2. Implement infra creation in your tools (Terraform, etc.).
  3. Use installer to generate configs/ignition files.
  4. Manually (or via your tooling) boot nodes and pass ignition.
  5. Perform additional manual or automated steps to complete cluster bring‑up.
  6. Integrate cluster lifecycle into your existing IaC pipelines.
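As one concrete piece of the infrastructure design in steps 1–2, a UPI cluster needs DNS records for the API and ingress endpoints. The zone‑file style entries below use illustrative names and addresses:

```
; Required DNS records for a UPI cluster named "demo-cluster"
; in base domain "example.com" (illustrative addresses).
api.demo-cluster.example.com.      IN A 192.0.2.10   ; external API load balancer
api-int.demo-cluster.example.com.  IN A 192.0.2.10   ; internal API load balancer
*.apps.demo-cluster.example.com.   IN A 192.0.2.20   ; ingress/router load balancer
```

With IPI these records are created for you; with UPI, getting them (and the load balancers behind them) right is your responsibility before bootstrap begins.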

Typical Managed Service Workflow (Conceptual)

  1. Decide cluster parameters (region, size, version, node pools).
  2. Use cloud/Red Hat console, CLI, or API to request a cluster.
  3. Wait for the managed service to provision and signal “ready”.
  4. Download kubeconfig and start deploying workloads.
  5. Use provider-specified mechanisms to handle upgrades and scaling.
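For example, with ROSA the request in steps 1–2 can be a single CLI call. The cluster name and region are illustrative, and exact flags vary by CLI version:

```shell
# Authenticate against the managed service
rosa login

# Request a cluster; the service provisions it asynchronously
rosa create cluster --cluster-name demo-cluster --region eu-central-1

# Check provisioning state until the cluster reports ready
rosa describe cluster --cluster demo-cluster
```

Note that there is no openshift-install invocation here at all; provisioning, bootstrap, and day‑2 operations are the provider's job.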

Deployment Models Across the Cluster Lifecycle

Even though lifecycle is covered in detail elsewhere, the initial deployment model has long-term implications: it determines how you scale, upgrade, and eventually decommission clusters, how much of that work is automated for you, and how tightly the cluster is coupled to your own infrastructure tooling.

Understanding these relationships early helps you avoid expensive re-architecture later.

Summary

OpenShift can be deployed self‑managed or as a managed service, on‑premises, in the public cloud, or at the edge. Self‑managed clusters are created either through installer‑provisioned infrastructure (IPI), where the installer automates most of the work, or user‑provisioned infrastructure (UPI), where you bring your own infrastructure. Single‑node OpenShift extends these models to small‑footprint and edge scenarios.

Subsequent chapters dive deeper into each specific option—IPI, UPI, managed services, single-node deployments, and cluster lifecycle management—building on this high-level map.
