What “Installer-Provisioned Infrastructure” Means in OpenShift
In the Installer-Provisioned Infrastructure (IPI) model, the OpenShift installer not only deploys the cluster components, but also creates and manages most of the underlying infrastructure on a supported cloud or virtualization platform.
In practice, this means:
- You provide high-level inputs (platform, region, base domain, credentials).
- The installer creates:
- Network components (VPC/VNet, subnets, security groups, load balancers).
- Compute (control plane and worker VMs/instances).
- Storage integrations for default storage classes.
- DNS records for cluster endpoints (where supported).
- You get a fully functioning, production-grade cluster with minimal manual infra work.
This is in contrast to User-Provisioned Infrastructure (UPI), where you build and wire most of the infrastructure yourself and the installer only lays down OpenShift on top of it.
When to Choose IPI
IPI is best suited for situations where:
- You are deploying on a supported public cloud or virtualization platform:
- Common examples: AWS, Azure, GCP, vSphere, IBM Cloud, bare metal with specific integrations.
- You want a fast, opinionated, automated deployment.
- You accept the default reference architecture provided by OpenShift for that platform (network layout, LB type, etc.).
- You prefer less manual work and more day-2 automation via the Machine API and MachineSets.
You might not choose IPI if:
- Your organization has strict infrastructure standards (custom networking, security zoning, IPAM, etc.) that deviate from the installer’s defaults.
- The platform is not supported by IPI.
- You need fine-grained control over every infrastructure component (then UPI or a managed service may fit better).
Core Concepts in IPI
Cluster Manifest Generation
IPI begins with creating an install-config.yaml that describes how the installer should build the cluster. The installer uses it to generate full cluster manifests and a set of infrastructure creation templates.
Typical fields in install-config.yaml:
- Cluster identity:
  - metadata.name
  - baseDomain
- Platform-specific configuration:
  - e.g., platform.aws.region, platform.azure.region, etc.
- Network configuration:
  - networking.machineNetwork
  - networking.clusterNetwork
  - networking.serviceNetwork
  - The network type (e.g., OpenShiftSDN in older versions, OVNKubernetes in current ones).
- Machine pools:
  - Control plane: the controlPlane section (replicas, instance type, platform settings).
  - Workers: the compute sections (MachinePools).
The installer consumes this file and, in IPI, does not require you to define low-level infrastructure details (subnets, LB names, etc.)—it derives and creates them.
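To make the fields above concrete, here is a minimal AWS-flavored install-config.yaml sketch (all values — domain, cluster name, region, instance types — are illustrative, and the pull secret and SSH key are placeholders):

```yaml
apiVersion: v1
baseDomain: example.com          # cluster endpoints live under <name>.<baseDomain>
metadata:
  name: demo
platform:
  aws:
    region: us-east-1
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m6i.xlarge
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m6i.xlarge
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 10.0.0.0/16
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
pullSecret: '<pull secret JSON from Red Hat>'
sshKey: '<public SSH key>'
```

Note that nothing here names subnets, load balancers, or DNS zones: in IPI those are derived and created by the installer.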
Machine API, MachineSets, and Scaling
In IPI clusters, the Machine API is central:
- Machines represent VMs/instances managed by the cluster itself.
- MachineSets define groups of Machines with a shared template:
  - Similar in spirit to a Kubernetes ReplicaSet, but for nodes.
  - Specify instance types, images, labels, and availability-zone details.
- OpenShift’s autoscaling and day-2 scaling can act directly on MachineSets:
  - You scale workers by changing spec.replicas on a MachineSet (or by using an autoscaler).
  - The Machine API then creates or deletes the underlying instances.
With IPI, MachineSets are created automatically for the initial workers (and, on some platforms and recent versions, the control plane is managed by a ControlPlaneMachineSet), giving you immediate cloud-native scaling capabilities.
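A trimmed MachineSet shows the shape of these objects (a sketch for an AWS cluster; the name, labels, and providerSpec fields below are illustrative, and real MachineSets carry considerably more providerSpec detail):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-abc12-worker-us-east-1a     # installer-generated name (illustrative)
  namespace: openshift-machine-api
spec:
  replicas: 3                            # scale workers by changing this field
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-abc12-worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-abc12-worker-us-east-1a
    spec:
      providerSpec:
        value:                           # cloud-specific machine template
          instanceType: m6i.xlarge
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
```

Day-2 scaling is then a one-liner such as `oc scale machineset demo-abc12-worker-us-east-1a -n openshift-machine-api --replicas=5`.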
Integration With Cloud Provider APIs
IPI clusters are wired to the cloud provider from day 1:
- Cloud-specific controllers are configured to:
- Attach volumes and expose cloud storage as PersistentVolumes.
- Create and manage load balancers, health checks, and node membership.
- Manage DNS entries for cluster endpoints (where supported).
- Services of type LoadBalancer leverage the underlying platform’s native load balancer.
Because the installer created the infra, these controllers are configured in a known-good, supported way without manual wiring.
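For instance, a plain Kubernetes Service of type LoadBalancer is all it takes to get a cloud load balancer on an IPI cluster (the app name and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer    # the cloud provider integration provisions a native LB
  selector:
    app: demo-app
  ports:
  - port: 80            # LB-facing port
    targetPort: 8080    # container port
```

No manual wiring of target groups, health checks, or firewall rules is needed; the cloud controllers configured by the installer handle it.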
Typical IPI Workflow
Although details vary by platform, an IPI deployment usually follows a similar pattern.
1. Prepare Prerequisites
Typical prerequisites:
- Access to a supported platform account (e.g., AWS, Azure).
- Adequate IAM/role permissions allowing:
- Creating VMs/instances.
- Managing networking (VPC/VNet, subnets, security groups, LBs).
- Managing storage and DNS (as applicable).
- A pull secret from Red Hat.
- SSH key for node access (optional but recommended).
- Sizing decisions for control plane and workers:
- Instance types.
- Number of nodes and zones.
These prerequisites are usually validated by the installer early in the process.
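As a quick local sanity check before involving the installer, you can confirm the pull secret file is well-formed JSON (a sketch; the file name is an example, and the contents below are a placeholder, not a real Red Hat pull secret):

```shell
# Write a placeholder pull secret (a real one comes from the Red Hat
# Hybrid Cloud Console) and confirm it parses as JSON.
cat > pull-secret.json <<'EOF'
{"auths": {"registry.example.com": {"auth": "c2VjcmV0"}}}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool pull-secret.json > /dev/null && echo "pull secret OK"
```

Catching a truncated or mangled pull secret here is much cheaper than watching the installer fail minutes into provisioning.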
2. Create the Install Configuration
You run openshift-install create install-config and answer prompts (or supply a pre-written install-config.yaml) specifying:
- Platform (e.g., aws, azure, gcp, vsphere).
- Region, base domain, and cluster name.
- Pull secret and SSH key.
- Basic networking information if needed.
The installer stores install-config.yaml in the working directory. In IPI, this file is the primary input; you rarely need to edit low-level infra definitions.
3. Generate Manifests (Optional Customization)
Optionally, you can:
- Run openshift-install create manifests to generate cluster manifests.
- Make limited adjustments to cluster-level resources before install:
  - Additional MachinePools.
  - Tweaks to some operator configurations.
For most beginner IPI deployments, you skip detailed manifest edits and go directly to cluster creation.
4. Create the Cluster and Infrastructure
Running openshift-install create cluster starts the full IPI process:
- The installer:
- Creates platform-specific infra components (VPC/VNet, subnets, security groups, LBs, DNS records, etc.).
- Boots bootstrap, control plane, and worker nodes.
- The bootstrap node:
  - Temporarily drives the initial configuration of the control plane.
  - Once the control plane is healthy, the bootstrap node is destroyed (except in some bare-metal or specialized flows).
You monitor progress:
- Via console output from the installer.
- Using oc and the kubeconfig the installer writes once the API is up.
- Sometimes via the cloud console, to inspect created resources.
Upon completion, you receive:
- A kubeconfig for cluster access.
- The web console URL and credentials (or instructions to set up an identity provider).
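Put together, the happy path looks like the session below (a sketch: the openshift-install and oc invocations need the binaries and valid platform credentials, so they are shown as comments; the directory layout is the dependable part):

```shell
# Keep one working directory per cluster: the installer stores its state
# (manifests, ignition configs, auth/) alongside install-config.yaml there.
mkdir -p mycluster

# openshift-install create install-config --dir mycluster
# openshift-install create cluster --dir mycluster
#
# After the install completes, credentials land under auth/:
# export KUBECONFIG=mycluster/auth/kubeconfig
# oc get nodes

ls -d mycluster   # the directory that holds all state for this cluster
```

Reusing one directory for multiple clusters is a common beginner mistake; the installer's state files will collide.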
5. Day-2 Operations on IPI
For daily operation, IPI clusters emphasize:
- Machine API–driven node management:
- Add more worker capacity by cloning or editing MachineSets.
- Configure cluster autoscaler to scale MachineSets automatically.
- Platform integration:
- Use cloud-native constructs (e.g., cloud load balancers, cloud storage).
- Cluster lifecycle:
- Keep in mind that the installer “owns” much of the initial infra shape. Major topology changes (e.g., redesigning VPC) may require redeployment or moving toward a more customized model.
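Autoscaling on an IPI cluster is configured with two resources: one cluster-wide ClusterAutoscaler and one MachineAutoscaler per MachineSet. A minimal sketch (the MachineSet name and the limits are illustrative):

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                # singleton; must be named "default"
spec:
  resourceLimits:
    maxNodesTotal: 10          # hard ceiling across all autoscaled pools
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:              # points at an existing MachineSet
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: demo-abc12-worker-us-east-1a
```

With these in place, the autoscaler adjusts spec.replicas on the targeted MachineSet, and the Machine API creates or deletes the instances.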
Platform-Specific Characteristics (High-Level)
Details belong in platform-specific material, but a few key differences are worth understanding conceptually.
Public Clouds (AWS, Azure, GCP)
Typical IPI behavior:
- Creates:
- VPC/VNet (or uses an existing one if configured that way).
- Public/private subnets.
- Security groups / network rules.
- External and internal load balancers for API and Ingress.
  - DNS records for api.<cluster>.<domain> and the wildcard apps domain.
- Sets up:
- Platform-specific images (AMIs, images, templates).
- MachineSets across multiple availability zones.
This often results in:
- Highly integrated cluster with native cloud services.
- Good defaults for availability and resiliency (multi-AZ deployments when requested).
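Spreading the default worker pool across zones, for example, is just a matter of listing them in the compute machine pool of install-config.yaml (zone and instance-type values below are illustrative for AWS):

```yaml
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m6i.xlarge
      zones:              # one MachineSet is created per listed zone
      - us-east-1a
      - us-east-1b
      - us-east-1c
```

The installer then distributes the replicas across the zones, giving you the multi-AZ resiliency mentioned above without any manual infrastructure work.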
Virtualization Platforms (e.g., vSphere)
In vSphere IPI:
- You typically prepare:
- vSphere cluster/datastores/networks.
- A base template or image.
- The installer:
- Clones VMs for bootstrap, control plane, and workers.
- Creates MachineSets tied to vSphere templates.
- Networking and load balancing:
- May use NSX-T or external load balancers depending on configuration.
- Some components (like DNS) might still require external setup.
The principle remains: the installer orchestrates VM creation and wiring, but exact capabilities depend on the virtualization platform’s integration.
Bare Metal IPI (and Similar)
Bare-metal IPI is more constrained and specialized:
- Uses components like the Bare Metal Operator and Ironic.
- Requires detailed host inventory (BMC addresses, credentials).
- The installer:
- Automates provisioning of physical nodes.
- Manages them as Machines.
Although still “installer-provisioned,” bare metal usually demands more initial prep and familiarity with hardware provisioning compared to cloud-based IPI.
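To give a flavor of the extra inventory bare-metal IPI needs, here is a fragment of the platform section of install-config.yaml (a sketch; field names vary somewhat across versions, and all addresses and credentials are placeholders):

```yaml
platform:
  baremetal:
    apiVIPs:
    - 192.168.10.5            # virtual IP for the API endpoint
    ingressVIPs:
    - 192.168.10.6            # virtual IP for Ingress
    hosts:                    # per-host inventory the cloud platforms don't need
    - name: control-plane-0
      role: master
      bootMACAddress: 52:54:00:aa:bb:cc
      bmc:                    # out-of-band management, used by Ironic
        address: ipmi://192.168.10.20
        username: admin
        password: changeme
```

Every physical node must be enumerated this way, which is the main reason bare-metal IPI demands more preparation than its cloud counterparts.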
Advantages and Trade-Offs of IPI
Advantages
- Speed to value: Fast path to a functioning cluster with minimal manual infra work.
- Reduced complexity for beginners:
- No need to handcraft every network or instance.
- Supported design:
- Uses reference architectures validated by Red Hat on that platform.
- Tight integration with Machine API and cloud provider:
- Easier scaling.
- Seamless load balancers and storage classes.
Trade-Offs
- Less flexibility:
- Network layout, security groups, and certain infra choices follow Red Hat’s reference design.
- Infra ownership:
- The cluster lifecycle is closely tied to how the installer created infrastructure.
- Deep changes to underlying infra may be harder than in a fully custom UPI model.
- Platform dependency:
- Only works on supported platforms with required APIs.
Common Beginner Scenarios with IPI
Quick Lab or Test Cluster
- Use IPI on a public cloud sandbox subscription.
- Provide a simple install-config.yaml with minimal customization.
- Get a working cluster quickly for:
  - Learning oc and the web console.
  - Trying deployment patterns and Operators.
Small Production Cluster in Cloud
- Use IPI to create a small multi-AZ cluster on a supported cloud.
- Accept default network layout.
- Focus operational effort on:
- Application deployment.
- Security, monitoring, and CI/CD.
- Scale workers via MachineSets as workloads grow.
Transition to More Advanced Models
- Start with IPI to learn:
- Cluster behavior.
- Machine API concepts.
- Platform integration.
- Over time, if requirements outgrow defaults:
- Consider:
- More advanced IPI tuning (custom MachinePools, pre-existing VPCs).
- Or switching to UPI / managed OpenShift offerings for stricter control.
Practical Tips for Working With IPI
- Keep your install-config.yaml versioned:
  - This helps reproduce or adjust clusters consistently.
- Understand what the installer creates:
- Explore cloud console to see VPCs, subnets, LBs, and instances.
- This is valuable for troubleshooting and cost estimation.
- Monitor resource limits and quotas:
- IPI may fail if the account runs out of:
- VM quotas.
- IP addresses.
- Load balancer or disk quotas.
- Use labels and MachineSets wisely:
- Group workers by role (e.g., “infra”, “app”, “batch”) via separate MachineSets.
- This improves placement and scaling policies.
- Plan for deprovisioning:
  - openshift-install destroy cluster can remove the infrastructure created by IPI.
  - Use it carefully, especially in shared accounts.
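Because destroy cluster is irreversible, a small wrapper that demands explicit confirmation is a cheap safety net (a sketch; the function name and prompt are illustrative, and the real command is only echoed here rather than executed):

```shell
# Ask before tearing anything down; refuse on anything but an exact "yes".
destroy_cluster() {
  printf 'Really destroy the cluster in %s? (yes/no) ' "$1"
  read -r answer
  if [ "$answer" = "yes" ]; then
    echo "would run: openshift-install destroy cluster --dir $1"
  else
    echo "aborted"
  fi
}

echo no | destroy_cluster mycluster   # the output ends with "aborted"
```

In a shared cloud account, pairing a guard like this with distinct per-cluster directories makes it much harder to destroy the wrong cluster.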
How IPI Fits into the Broader Deployment Landscape
Within the overall OpenShift deployment options:
- IPI sits between:
- Managed services (where the provider manages infra and cluster for you).
- UPI (where you fully manage infra and then install OpenShift).
- It is particularly attractive for:
- New teams getting started with OpenShift.
- Environments where cloud-native automation aligns with organizational practices.
- For HPC and specialized workloads:
- IPI gives a straightforward way to stand up clusters that can then be tailored for compute-heavy or GPU-based workloads, as long as the underlying platform and instance types support them.
Understanding IPI prepares you to:
- Quickly stand up clusters for experimentation and early production.
- Grasp how OpenShift interacts with underlying platforms.
- Compare it to other deployment models and pick the right one for each use case.