Ways to install Operators in OpenShift
OpenShift supports several ways to install Operators, all orchestrated by the Operator Lifecycle Manager (OLM). At a high level you can:
- Install from the default OperatorHub catalogs.
- Install from custom or private catalogs.
- Install manually from YAML manifests (advanced/edge cases).
- Use GitOps or automation (e.g., Argo CD, pipelines, Ansible) to manage Operator configuration as code.
This chapter focuses on how to actually perform installations and day‑2 management, assuming you already understand the basic idea of Operators and OLM.
OperatorHub in the web console
For most users, Operator installation starts in the OpenShift web console via OperatorHub.
Browsing and discovering Operators
In the console:
- Navigate to Operators → OperatorHub.
- Use:
- Search (by name/keyword).
- Filters for:
- Provider type (Red Hat, certified, community).
- Capability level (Basic Install, Seamless Upgrades, etc.).
- Infrastructure (e.g., OpenShift Container Platform, OpenShift Dedicated).
- Click an Operator tile to see:
- Description and primary use cases.
- Supported platforms / versions.
- Install modes and namespace support.
- Required permissions (ClusterServiceVersions show RBAC needs).
This discovery step is where you validate that:
- The Operator is suitable for your cluster version and environment.
- The scope (cluster‑wide or namespace‑scoped) matches your needs.
- The support model (Red Hat/partner/community) aligns with your risk profile.
Installing an Operator from OperatorHub
When you choose Install on an Operator tile, OLM guides you through key decisions:
- Installation mode
- All namespaces on the cluster (cluster‑wide)
- The Operator is created in a specific namespace (often `openshift-operators`).
- The Operator can watch and manage resources across all namespaces.
- Typical for infrastructure/platform Operators (logging, monitoring, storage).
- A specific namespace on the cluster (namespace‑scoped)
- Operator is installed into and limited to a single namespace.
- Suitable for team‑ or project‑specific Operators and reduced blast radius.
Choose the most restrictive scope that still satisfies your use case.
- Installed Namespace
- For cluster‑wide mode, usually a dedicated namespace like `openshift-operators` or one recommended by the Operator's documentation.
- For namespace‑scoped mode, you typically:
- Select an existing project, or
- Create a new project to isolate the Operator and its managed apps.
- Update channel
- Examples: `stable-4.15`, `stable`, `fast`, `preview`.
- Channels represent curated upgrade streams for that Operator.
- In production, prefer a `stable` or version‑locked channel over experimental ones.
- Approval strategy
- Automatic:
- New Operator versions in the selected channel are auto‑approved and rolled out.
- Minimal admin overhead; risk of unintended upgrades.
- Manual:
- New versions appear as pending; an administrator must explicitly approve them.
- Allows change windows, testing, and coordinated upgrades.
- Additional install options (if exposed)
- Some Operators provide configuration at install time, for example:
- Resource limits for the Operator pods.
- Customizations for watch namespaces or features.
- Many of these can later be changed by editing the `Subscription` or the Operator's own config CRs.
After you confirm:
- OLM creates:
- A `Subscription` (what to install, from where, and how to upgrade).
- A `ClusterServiceVersion` (CSV; describes the installed version).
- An `InstallPlan` (the concrete steps for installing/upgrading).
- The Operator deployment(s) and required RBAC are created automatically.
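Behind the scenes, the console choices above map onto an OLM `Subscription`. A minimal sketch of what gets created for a cluster‑wide install with manual approval (the package name `my-operator` is illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator              # illustrative package name
  namespace: openshift-operators # the Installed Namespace chosen in the console
spec:
  channel: stable                # the update channel chosen in the console
  name: my-operator              # package name from the catalog
  source: redhat-operators       # CatalogSource to pull from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual    # or Automatic
```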
Installing Operators with the CLI (`oc`)
The console is convenient, but CLI gives you repeatable and scriptable installs.
Discovering available Operators via CLI
The catalogs exposed to the cluster appear as CatalogSource objects in the openshift-marketplace namespace:
```shell
$ oc get catalogsources -n openshift-marketplace
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
redhat-operators      Red Hat Operators     grpc   Red Hat     10d
certified-operators   Certified Operators   grpc   Red Hat     10d
community-operators   Community Operators   grpc   Red Hat     10d
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     10d
```
To list all available ClusterServiceVersion entries in a catalog:
```shell
$ oc get packagemanifests -n openshift-marketplace
```

Filter by name:

```shell
$ oc get packagemanifests -n openshift-marketplace | grep my-operator
```
The `PackageManifest` API (or, in newer OpenShift releases, the file‑based catalog APIs) shows which channels and versions are available.
Installing by creating a Subscription
A basic installation with OLM uses:
- A `Namespace` (for the Operator).
- An optional `OperatorGroup` (defines the target namespaces the Operator watches).
- A `Subscription` (tells OLM what to install and how to keep it updated).
Example: namespace‑scoped Operator for a single project.
- Create a project:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-operator-namespace
```

```shell
$ oc apply -f namespace.yaml
```

- Create an `OperatorGroup`:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operator-group
  namespace: my-operator-namespace
spec:
  targetNamespaces:
  - my-operator-namespace
```

```shell
$ oc apply -f operatorgroup.yaml
```

- Create a `Subscription`:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

```shell
$ oc apply -f subscription.yaml
```

OLM will then:

- Resolve the package in `redhat-operators`.
- Create an `InstallPlan`.
- Create the CSV and deploy the Operator.
Verifying installation
Check the `Subscription`:

```shell
$ oc get subscription my-operator -n my-operator-namespace -o yaml
```

Check the CSV status:

```shell
$ oc get csv -n my-operator-namespace
NAME                 DISPLAY       VERSION   PHASE
my-operator.v1.2.3   My Operator   1.2.3     Succeeded
```

If PHASE is `Succeeded`, the Operator is installed and ready to manage its custom resources.
Custom and private catalog sources
In many organizations, Operators are not pulled directly from public catalogs. Instead, you might:
- Mirror content to an internal registry.
- Curate a reduced set of Operators.
- Package in‑house or custom Operators.
This is done by defining your own CatalogSource.
Creating a custom CatalogSource
A basic gRPC catalog source:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-internal-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-operator-catalog:latest
  displayName: My Internal Operators
  publisher: My Org
```

After applying this:

```shell
$ oc apply -f my-catalogsource.yaml
$ oc get catalogsources -n openshift-marketplace
```

OLM will add `my-internal-operators` to the list of sources. Operators from this catalog can then be installed via:

- The console (they will appear in OperatorHub), or
- A `Subscription` referencing `source: my-internal-operators`.
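For the CLI path, such a `Subscription` might look like the following sketch (the package and namespace names are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: my-operator               # package name inside the internal catalog
  source: my-internal-operators   # the custom CatalogSource
  sourceNamespace: openshift-marketplace
```

Only the `source` field differs from a Subscription against a default catalog; OLM resolves everything else the same way.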
Using catalog priority and scoping
If multiple catalogs export the same Operator, OpenShift uses priority and configuration from CatalogSource. Administrators can:
- Prefer internal catalogs over community ones.
- Block or remove catalogs you do not want users to consume.
- Limit access to specific catalogs via RBAC (e.g., controlling who can create `Subscription` objects pointing at them).
Managing Operator lifecycle
Once installed, Operators must be kept healthy, up to date, and aligned with policy.
Understanding key OLM objects
- Subscription
- Defines:
- Operator package name.
- Channel.
- Catalog source.
- Approval strategy (automatic/manual).
- Controls upgrade behavior over time.
- ClusterServiceVersion (CSV)
- Represents a specific version of an Operator.
- Contains metadata (version, descriptors, required permissions, owned CRDs).
- Status reflects installation/upgrade progress (`Pending`, `Installing`, `Succeeded`, `Failed`).
- InstallPlan
- Concrete steps OLM will execute to move from one CSV to another.
- In automatic mode, OLM creates and approves it.
- In manual mode, OLM creates it, but an admin must approve.
These resources are what you monitor and manipulate when managing Operators.
Upgrading Operators
With automatic approval:
- OLM continuously checks the selected channel for new CSVs.
- When a new version is available, it creates and auto‑approves an `InstallPlan`.
- The Operator pods are rolled out with minimal disruption, respecting the strategies defined in the CSV.
With manual approval:
- A new `InstallPlan` appears in the `RequiresApproval` phase.
- Review the plan:

```shell
$ oc get installplans -n my-operator-namespace
$ oc describe installplan <name> -n my-operator-namespace
```

- Approve:

```shell
$ oc patch installplan <name> -n my-operator-namespace \
    --type merge -p '{"spec":{"approved":true}}'
```

This integrates well with change‑control processes.
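To script the review step, you can parse the JSON that `oc get installplans -o json` returns and pick out plans that are not yet approved. A minimal sketch, fed a hypothetical sample document here instead of live cluster output:

```shell
# Hypothetical sample of `oc get installplans -n my-operator-namespace -o json`.
sample='{"items":[
  {"metadata":{"name":"install-abc12"},"spec":{"approved":false}},
  {"metadata":{"name":"install-def34"},"spec":{"approved":true}}
]}'

# Print the names of InstallPlans still awaiting approval.
pending=$(echo "$sample" | python3 -c '
import json, sys
for ip in json.load(sys.stdin)["items"]:
    if not ip["spec"]["approved"]:
        print(ip["metadata"]["name"])
')
echo "$pending"   # → install-abc12
```

On a real cluster you would replace the sample with the output of `oc get installplans -n <namespace> -o json` and feed each pending name to the `oc patch` approval command.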
Changing channels or approval strategies
You can safely adjust channel or approval on the Subscription:
```shell
$ oc patch subscription my-operator -n my-operator-namespace \
    --type merge -p '{"spec":{"channel":"stable-2.x","installPlanApproval":"Manual"}}'
```

OLM will then honor the new channel for subsequent upgrades.
Pausing upgrades
To temporarily freeze Operator upgrades:
- Set `installPlanApproval` to `Manual` on the `Subscription`.
- Do not approve upcoming `InstallPlan` objects.
This allows you to:
- Test new versions in a staging cluster.
- Wait for maintenance windows.
Namespace scoping and OperatorGroups
OperatorGroup objects determine which namespaces an Operator watches and where it can create resources.
Common patterns
- Global OperatorGroup (cluster‑wide):
- In a namespace like `openshift-operators`:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: openshift-operators
spec: {}
```

- An empty `spec` (no `targetNamespaces`) instructs OLM to treat the group as global; exact semantics can vary by version, so check the documentation for your release.
- Single‑namespace OperatorGroup:
- For a project‑scoped Operator:
```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: app-team-operator-group
  namespace: team-a
spec:
  targetNamespaces:
  - team-a
```

- Multi‑namespace OperatorGroup:
- One Operator managing resources in multiple selected namespaces:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: shared-operator-group
  namespace: ops-operators
spec:
  targetNamespaces:
  - project1
  - project2
```
Avoid overlapping OperatorGroup scopes that compete for the same Operator; overlaps can lead to failed or conflicting installations and hard‑to‑diagnose errors.
Removing and cleaning up Operators
Removing an Operator requires attention to both the Operator itself and the resources it manages.
Uninstalling via web console
- Go to Operators → Installed Operators.
- Select the Operator.
- Choose Uninstall Operator.
This typically:
- Deletes the `Subscription` and CSV.
- Deletes the Operator deployment and related resources in its namespace.
- Leaves behind:
- Custom Resource Definitions (CRDs).
- Any existing Custom Resources (CRs) and the workloads they created.
Many platform Operators are not meant to be casually uninstalled; removing them can disrupt cluster services.
Uninstalling via CLI
Delete the Subscription and CSV:
```shell
$ oc delete subscription my-operator -n my-operator-namespace
$ oc delete csv my-operator.v1.2.3 -n my-operator-namespace
```

OLM cleans up the Operator pods and associated OLM artifacts.
Cleaning up CRDs and managed resources
To completely remove the API surface and data:
- List CRDs owned by the Operator:

```shell
$ oc get crd | grep my-operator
```

- For each CRD:
- List and delete remaining CRs (ensure you understand the implications):

```shell
$ oc get <kind> --all-namespaces
$ oc delete <kind> <name> -n <namespace>
```

- Delete the CRD itself:

```shell
$ oc delete crd <crd-name>
```

Be aware that some Operators implement finalizers to perform cleanup; if the Operator has already been removed, you may need to clear stuck finalizers manually in rare edge cases.
Day‑2 operations and best practices
Watching Operator health
Use the console or CLI to monitor:
- Operator pod status in its namespace.
- CSV phases (`Succeeded`, `Failed`, `Replacing`).
- Events related to OLM resources.
Example:
```shell
$ oc get csv -A
$ oc describe csv my-operator.v1.2.3 -n my-operator-namespace
```

Alerts from the built‑in monitoring stack often include Operator health signals for core platform Operators.
Logging and troubleshooting
When an Operator misbehaves:
- Check the Operator Deployment / pods:

```shell
$ oc get pods -n my-operator-namespace
$ oc logs deploy/my-operator-controller-manager -n my-operator-namespace
```

- Check:
- CSV events.
- `Subscription` status for errors from the catalog or image pulls.
- RBAC or SCC errors in events or logs.
- Confirm catalog reachability if images or bundles cannot be resolved.
Version pinning and drift control
For tightly controlled environments:
- Use a custom catalog containing only approved versions.
- Use a locked channel or a channel that is not updated frequently.
- Set `installPlanApproval: Manual` and gate approvals via a change process.
- Sync OLM objects using GitOps tools to keep desired state in version control.
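To pin the initially installed version, a `Subscription` can combine manual approval with `startingCSV` (a sketch; package and version names are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual        # upgrades wait for explicit approval
  startingCSV: my-operator.v1.2.3    # pin the first installed version
```

With `Manual` approval, even the initial InstallPlan must be approved before the pinned version is installed; every later upgrade is gated the same way.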
Separating platform and application Operators
A common pattern:
- Cluster administrators manage:
- Cluster‑critical Operators in system namespaces (`openshift-*`).
- CatalogSources and cluster‑wide Subscriptions.
- Application teams can:
- Install allowed Operators in their own namespaces (subject to RBAC and policies).
- Manage their own Subscriptions responsibly.
This separation reduces risk while still giving teams flexibility.
Automating Operator management
Operator management aligns well with infrastructure‑as‑code and GitOps practices.
GitOps with Argo CD
- Store `Namespace`, `OperatorGroup`, `Subscription`, and optionally `CatalogSource` YAML in a Git repository.
- Argo CD watches the repo and applies changes to the cluster.
- Benefits:
- Auditable change history (Git logs).
- Easy rollback to previous Operator versions (revert a commit).
- Consistent Operator configuration across environments (dev/stage/prod).
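As a sketch, an Argo CD `Application` that syncs a directory of OLM manifests might look like this (the repository URL and path are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: operators-config
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/operators.git  # hypothetical repo
    targetRevision: main
    path: clusters/prod/operators  # Namespaces, OperatorGroups, Subscriptions
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-operators
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Reverting a commit in the repo then rolls the Operator configuration back, and Argo CD reconciles the cluster to match.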
Pipelines and Ansible
- CI/CD pipelines (e.g., OpenShift Pipelines / Tekton) can:
- Provision Operators as part of environment setup.
- Promote `Subscription` changes through environments.
- Ansible modules and roles can:
- Manage OLM resources declaratively.
- Handle cross‑cluster operations (installing Operators across many clusters).
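For example, an Ansible task using the `kubernetes.core.k8s` module could apply a `Subscription` declaratively (a sketch; the package name is illustrative):

```yaml
- name: Ensure the my-operator Subscription exists
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: my-operator
        namespace: my-operator-namespace
      spec:
        channel: stable
        name: my-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Manual
```

Running the same task against many clusters (e.g., via an inventory of kubeconfigs) gives consistent Operator rollout without console clicks.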
The key pattern is to treat all Operator configuration (not just app CRs) as declarative resources in code, not manual click‑driven actions.
Summary
Installing and managing Operators in OpenShift centers around:
- Using OperatorHub (console or CLI) with OLM.
- Creating and maintaining CatalogSources, OperatorGroups, and Subscriptions.
- Choosing appropriate installation scopes, channels, and approval strategies.
- Monitoring CSV, InstallPlan, and Operator pod health.
- Cleaning up Operators and their APIs carefully.
- Applying GitOps and automation to keep Operator lifecycle consistent, auditable, and repeatable.
With these tools and practices, Operators become a manageable, reliable way to extend OpenShift with platform and application services at scale.