
Installing and managing Operators

Ways to install Operators in OpenShift

OpenShift supports several ways to install Operators, all orchestrated by the Operator Lifecycle Manager (OLM). At a high level you can:

  • Install from OperatorHub in the web console.
  • Install from the CLI by creating OLM resources (OperatorGroup, Subscription) with `oc`.
  • Install from custom or private catalog sources that you define yourself.

This chapter focuses on how to actually perform installations and day‑2 management, assuming you already understand the basic idea of Operators and OLM.

OperatorHub in the web console

For most users, Operator installation starts in the OpenShift web console via OperatorHub.

Browsing and discovering Operators

In the console:

  1. Navigate to Operators → OperatorHub.
  2. Use:
    • Search (by name/keyword).
    • Filters for:
      • Provider type (Red Hat, certified, community).
      • Capability level (Basic Install, Seamless Upgrades, etc.).
      • Infrastructure (e.g., OpenShift Container Platform, OpenShift Dedicated).
  3. Click an Operator tile to see:
    • Description and primary use cases.
    • Supported platforms / versions.
    • Install modes and namespace support.
    • Required permissions (ClusterServiceVersions show RBAC needs).

This discovery step is where you validate that:

  • The Operator supports your OpenShift version and infrastructure.
  • The available install modes match the scope you need.
  • The required permissions are acceptable under your cluster's policies.

Installing an Operator from OperatorHub

When you choose Install on an Operator tile, OLM guides you through key decisions:

  1. Installation mode
    • All namespaces on the cluster (cluster‑wide)
      • Installs the Operator into a specific namespace (often openshift-operators).
      • Operator can watch and manage resources across all namespaces.
      • Typical for infrastructure/platform Operators (logging, monitoring, storage).
    • A specific namespace on the cluster (namespace‑scoped)
      • Operator is installed into and limited to a single namespace.
      • Suitable for team‑ or project‑specific Operators and reduced blast radius.

Choose the most restrictive scope that still satisfies your use case.

  2. Installed Namespace
    • For cluster‑wide mode, usually a dedicated namespace like openshift-operators or one recommended by documentation.
    • For namespace‑scoped mode, you typically:
      • Select an existing project, or
      • Create a new project to isolate the Operator and its managed apps.
  3. Update channel
    • Example: stable-4.15, stable, fast, preview.
    • Channels represent curated upgrade streams for that Operator.
    • In production, prefer a stable or version‑locked channel over experimental ones.
  4. Approval strategy
    • Automatic:
      • New Operator versions in the selected channel are auto‑approved and rolled out.
      • Minimal admin overhead; risk of unintended upgrades.
    • Manual:
      • New versions appear as pending; an administrator must explicitly approve them.
      • Allows change windows, testing, and coordinated upgrades.
  5. Additional install options (if exposed)
    • Some Operators provide configuration at install time, for example:
      • Resource limits for the Operator pods.
      • Customizations for watch namespaces or features.
    • Many of these can later be changed by editing the Subscription or Operator config CRs.
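Even when an Operator exposes no install-time options, the Subscription itself accepts a config section that OLM applies to the Operator's Deployment. A hedged sketch, assuming a placeholder package name and the default cluster-wide namespace:

```yaml
# Hypothetical Subscription that overrides the Operator pod's resources.
# The package name "my-operator" is a placeholder; spec.config is applied
# by OLM to the Deployment it creates for the Operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```

Editing this section later causes OLM to roll out the Operator Deployment with the new settings.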

After you confirm:

  • OLM creates a Subscription (and, if needed, an OperatorGroup) on your behalf.
  • An InstallPlan is generated and, depending on the approval strategy, executed.
  • The Operator's ClusterServiceVersion (CSV) is installed and its Deployment starts.

Installing Operators with the CLI (`oc`)

The console is convenient, but the CLI gives you repeatable, scriptable installs.

Discovering available Operators via CLI

The catalogs exposed to the cluster appear as CatalogSource objects in the openshift-marketplace namespace:

$ oc get catalogsources -n openshift-marketplace
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
redhat-operators      Red Hat Operators     grpc   Red Hat     10d
certified-operators   Certified Operators   grpc   Red Hat     10d
community-operators   Community Operators   grpc   Red Hat     10d
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     10d

To list all Operator packages available from the cluster's catalogs:

$ oc get packagemanifests -n openshift-marketplace

Filter by name:

$ oc get packagemanifests -n openshift-marketplace | grep my-operator

The PackageManifest API (or, in newer OpenShift versions, the file-based catalog APIs) shows which channels and versions are available for each package.

Installing by creating a Subscription

A basic installation with OLM uses:

  • A Namespace (project) to install into.
  • An OperatorGroup defining which namespaces the Operator watches.
  • A Subscription selecting the package, channel, and catalog source.

Example: namespace‑scoped Operator for a single project.

  1. Create a project:
apiVersion: v1
kind: Namespace
metadata:
  name: my-operator-namespace
$ oc apply -f namespace.yaml
  2. Create an OperatorGroup:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operator-group
  namespace: my-operator-namespace
spec:
  targetNamespaces:
    - my-operator-namespace
$ oc apply -f operatorgroup.yaml
  3. Create a Subscription:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
$ oc apply -f subscription.yaml
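To pin the initial version, a Subscription can also name a specific CSV via startingCSV and require manual approval for anything beyond it. A sketch, with the version string assumed for illustration:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual      # initial install and upgrades wait for approval
  startingCSV: my-operator.v1.2.3  # resolve exactly this version first
```

This combination is common where every Operator version must pass change control before it reaches the cluster.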

OLM will then:

  • Resolve the requested package and channel against the catalog.
  • Generate an InstallPlan and, with Automatic approval, execute it immediately.
  • Install the CSV and start the Operator's Deployment in the target namespace.

Verifying installation

Check the Subscription:

$ oc get subscription my-operator -n my-operator-namespace -o yaml

Check the CSV status:

$ oc get csv -n my-operator-namespace
NAME                     DISPLAY         VERSION   PHASE
my-operator.v1.2.3       My Operator     1.2.3     Succeeded

If PHASE is Succeeded, the Operator is installed and ready to manage its custom resources.

Custom and private catalog sources

In many organizations, Operators are not pulled directly from public catalogs. Instead, you might:

  • Mirror catalogs into a private registry for disconnected (air‑gapped) clusters.
  • Build custom catalogs containing only approved Operators.
  • Host internally developed Operators in your own catalog.

This is done by defining your own CatalogSource.

Creating a custom CatalogSource

A basic gRPC catalog source:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-internal-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-operator-catalog:latest
  displayName: My Internal Operators
  publisher: My Org

After applying this:

$ oc apply -f my-catalogsource.yaml
$ oc get catalogsources -n openshift-marketplace

OLM will add my-internal-operators to the list of sources. Operators from this catalog can then be installed via:

  • The web console, where the catalog's Operators appear in OperatorHub, or
  • A Subscription whose source field references my-internal-operators.
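For example, a Subscription pulling from this custom catalog might look like the following sketch; the package name internal-operator is an assumption standing in for whatever your catalog actually publishes:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: internal-operator
  namespace: my-operator-namespace
spec:
  channel: stable
  name: internal-operator          # package name as published in the catalog
  source: my-internal-operators    # matches the CatalogSource metadata.name
  sourceNamespace: openshift-marketplace
```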

Using catalog priority and scoping

If multiple catalogs export the same Operator, OpenShift uses priority and configuration from CatalogSource. Administrators can:

  • Set spec.priority on a CatalogSource so it is preferred during dependency resolution.
  • Disable the default catalog sources via the OperatorHub cluster configuration resource.
  • Restrict which catalogs are available in restricted or disconnected environments.
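As a sketch, the internal catalog from earlier could be given an explicit priority so it wins over the default catalogs when both provide the same package (higher values are preferred):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-internal-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-operator-catalog:latest
  displayName: My Internal Operators
  publisher: My Org
  priority: 100   # preferred over default catalogs (priority 0) during resolution
```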

Managing Operator lifecycle

Once installed, Operators must be kept healthy, up to date, and aligned with policy.

Understanding key OLM objects

  • CatalogSource – where Operator content comes from.
  • Subscription – your intent: which package, channel, and approval strategy.
  • InstallPlan – the concrete steps to install or upgrade to a specific version.
  • ClusterServiceVersion (CSV) – the installed Operator version and its status.
  • OperatorGroup – the namespaces the Operator watches.

These resources are what you monitor and manipulate when managing Operators.

Upgrading Operators

With automatic approval:

  • OLM detects new versions published to the subscribed channel.
  • It creates and immediately executes the corresponding InstallPlan.
  • The Operator upgrades without administrator intervention.

With manual approval:

  1. A new InstallPlan appears as RequiresApproval.
  2. Review the plan:
   $ oc get installplans -n my-operator-namespace
   $ oc describe installplan <name> -n my-operator-namespace
  3. Approve:
   $ oc patch installplan <name> -n my-operator-namespace \
     --type merge -p '{"spec":{"approved":true}}'

This integrates well with change control processes.

Changing channels or approval strategies

You can safely adjust channel or approval on the Subscription:

$ oc patch subscription my-operator -n my-operator-namespace \
  --type merge -p '{"spec":{"channel":"stable-2.x","installPlanApproval":"Manual"}}'

OLM will then honor the new channel for subsequent upgrades.

Pausing upgrades

To temporarily freeze Operator upgrades:

  • Switch the Subscription's installPlanApproval to Manual.
  • Leave any new InstallPlans unapproved until you are ready to proceed.

This allows you to:

  • Hold a known‑good version through a change freeze.
  • Test new versions in non‑production clusters first.
  • Coordinate Operator upgrades with maintenance windows.

Namespace scoping and OperatorGroups

OperatorGroup objects determine which namespaces an Operator watches and where it can create resources.

Common patterns

  1. Global OperatorGroup (cluster‑wide):
    • In a namespace like openshift-operators, omit targetNamespaces entirely; an empty spec selects all namespaces:
     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: global-operators
       namespace: openshift-operators
     spec: {}
  2. Single‑namespace OperatorGroup:
    • For a project‑scoped Operator:
     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: app-team-operator-group
       namespace: team-a
     spec:
       targetNamespaces:
         - team-a
  3. Multi‑namespace OperatorGroup:
    • One Operator managing resources in multiple selected namespaces:
     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: shared-operator-group
       namespace: ops-operators
     spec:
       targetNamespaces:
         - project1
         - project2

Avoid overlapping OperatorGroup scopes that could claim the same Operator: OLM treats intersecting OperatorGroups as an error, and affected Operators can fail to install or reconcile.

Removing and cleaning up Operators

Removing an Operator requires attention to both the Operator itself and the resources it manages.

Uninstalling via web console

  1. Go to Operators → Installed Operators.
  2. Select the Operator.
  3. Choose Uninstall Operator.

This typically:

  • Deletes the Subscription and the installed CSV.
  • Removes the Operator's Deployment and pods.
  • Leaves CRDs and existing custom resources in place unless you remove them separately.

Many platform Operators are not meant to be casually uninstalled; removing them can disrupt cluster services.

Uninstalling via CLI

Delete the Subscription and the corresponding CSV (the installed CSV name is recorded in the Subscription's status.installedCSV field):

$ oc delete subscription my-operator -n my-operator-namespace
$ oc delete csv my-operator.v1.2.3 -n my-operator-namespace

OLM cleans up the Operator pods and associated OLM artifacts.

Cleaning up CRDs and managed resources

To completely remove the API surface and data:

  1. List CRDs owned by the Operator:
   $ oc get crd | grep my-operator
  2. For each CRD:
    • List and delete remaining CRs (ensure you understand implications):
     $ oc get <kind> --all-namespaces
     $ oc delete <kind> <name> -n <namespace>
  3. Delete the CRD itself:
   $ oc delete crd <crd-name>

Be aware that some Operators implement finalizers to perform cleanup; if the Operator has already been removed, you may need to clear stuck finalizers manually in rare edge cases.
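As a last resort for a resource stuck in Terminating, the finalizers list can be emptied with a merge patch. The patch body is simply:

```json
{"metadata": {"finalizers": []}}
```

Applied, for example, as:

$ oc patch <kind> <name> -n <namespace> --type merge -p '{"metadata":{"finalizers":[]}}'

Only do this when you are certain the cleanup the finalizer would have performed is no longer possible or no longer needed.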

Day‑2 operations and best practices

Watching Operator health

Use the console or CLI to monitor:

  • CSV phase (for example Succeeded or Failed) and conditions.
  • Subscription status, including current and pending versions.
  • The Operator's Deployment and pod health.

Example:

$ oc get csv -A
$ oc describe csv my-operator.v1.2.3 -n my-operator-namespace

Alerts from the built‑in monitoring stack often include Operator health signals for core platform Operators.

Logging and troubleshooting

When an Operator misbehaves:

  1. Check the Operator Deployment / pods:
   $ oc get pods -n my-operator-namespace
   $ oc logs deploy/my-operator-controller-manager -n my-operator-namespace
  2. Check:
    • CSV events.
    • Subscription status for errors from the catalog or image pulls.
    • RBAC or SCC errors in events or logs.
  3. Confirm catalog reachability if images or bundles cannot be resolved.

Version pinning and drift control

For tightly controlled environments:

  • Use Manual approval so every upgrade passes through review.
  • Subscribe to version‑locked channels (for example, stable-2.x) rather than fast or preview.
  • Mirror a curated catalog so only vetted versions are visible to the cluster at all.

Separating platform and application Operators

A common pattern:

  • Platform Operators (logging, monitoring, storage) are installed cluster‑wide by cluster administrators, typically in openshift-operators.
  • Application Operators are installed namespace‑scoped and owned by the teams that use them.

This separation reduces risk while still giving teams flexibility.

Automating Operator management

Operator management aligns well with infrastructure‑as‑code and GitOps practices.

GitOps with Argo CD

  • Store Namespace, OperatorGroup, Subscription, and CatalogSource manifests in Git.
  • Let Argo CD sync them to the cluster, so Operator installs and channel changes are reviewed as pull requests.
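One hedged sketch of this approach: an Argo CD Application that syncs a Git directory of Operator manifests. The repository URL and path are assumptions; the directory would hold the kind of Namespace, OperatorGroup, and Subscription manifests shown earlier in this chapter:

```yaml
# Hypothetical Argo CD Application managing Operator installs declaratively.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-operators
  namespace: openshift-gitops        # default namespace for OpenShift GitOps
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-operators.git
    targetRevision: main
    path: operators/                 # directory of Subscription/OperatorGroup YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-operators
  syncPolicy:
    automated:
      prune: true                    # remove Operators deleted from Git
      selfHeal: true                 # revert manual drift on the cluster
```

With this in place, changing an Operator's channel is a Git commit rather than a console click.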

Pipelines and Ansible

  • CI/CD pipelines can apply the same manifests with oc apply as part of environment provisioning.
  • Ansible (for example, via the kubernetes.core collection) can template and apply OLM resources across many clusters.

The key pattern is to treat all Operator configuration (not just app CRs) as declarative resources in code, not manual click‑driven actions.

Summary

Installing and managing Operators in OpenShift centers around:

  • OperatorHub and the web console for guided installs.
  • OLM resources (CatalogSource, OperatorGroup, Subscription, InstallPlan, CSV) for scripted, repeatable installs.
  • Channels and approval strategies for controlled upgrades.
  • Careful scoping via OperatorGroups and deliberate cleanup on removal.

With these tools and practices, Operators become a manageable, reliable way to extend OpenShift with platform and application services at scale.
