
Dynamic provisioning

Understanding Dynamic Provisioning

Dynamic provisioning in OpenShift automates the creation of persistent storage when a pod (through a PersistentVolumeClaim) asks for it. Instead of pre-creating PersistentVolumes (PVs), the cluster allocates and binds storage on demand using a StorageClass and a provisioner (often a CSI driver).

You should already be familiar with PVs, PVCs, and StorageClasses from previous chapters; here we focus on what is unique about dynamic provisioning: how it works, how it is configured in OpenShift, and what to watch out for.

How Dynamic Provisioning Works

At a high level, the flow is:

  1. A user (or application) creates a PersistentVolumeClaim that:
    • Requests a certain size (resources.requests.storage)
    • Optionally selects a StorageClass (via storageClassName)
  2. The control plane sees the PVC is unbound and has a storageClassName.
  3. The associated storage provisioner (defined by the StorageClass) is called.
  4. The provisioner:
    • Talks to the underlying storage system (CSI driver, cloud API, etc.)
    • Creates a new volume that satisfies the claim
    • Creates a PersistentVolume object referencing that backend volume
  5. Kubernetes binds the new PV to the PVC.
  6. Pods referencing the PVC can now mount and use the dynamically created volume.

If no appropriate StorageClass and provisioner can satisfy the request, the PVC will stay in Pending state.
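To complete the flow, a pod mounts the dynamically provisioned volume simply by referencing the claim by name (the pod name, image, and mount path here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data   # provisioning is triggered by the claim, not the pod
```

Note that the pod never references a PV or StorageClass directly; the binding between claim and volume is handled entirely by the control plane.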

Role of StorageClasses in Dynamic Provisioning

Dynamic provisioning is driven entirely by StorageClass definitions. Each StorageClass describes:

  • Which provisioner (typically a CSI driver) creates the volumes
  • Backend-specific parameters (disk type, replication, encryption, etc.)
  • The reclaimPolicy applied to dynamically created PVs
  • Whether volumes may later be expanded (allowVolumeExpansion)
  • When binding and provisioning happen (volumeBindingMode)

A minimal example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: my.csi.example.com  # hypothetical name; real classes reference an installed CSI driver
parameters:
  type: ssd                      # parameters are driver-specific
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

In OpenShift, storage vendors or the platform itself usually install and configure production-ready StorageClasses via Operators (e.g., for OpenShift Data Foundation, AWS EBS, Azure Disk, etc.).

Default StorageClass

If exactly one StorageClass carries the annotation:

metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"

then PVCs without an explicit storageClassName will use that class automatically. This is how most dynamic provisioning happens in typical OpenShift clusters: application developers only create PVCs; they do not manage PVs or StorageClasses directly.
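With such a default class in place, a developer's claim can omit storageClassName entirely; the control plane fills it in from the default:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # no storageClassName: the default StorageClass is applied automatically
```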

Creating PVCs for Dynamic Provisioning

A PVC that leverages dynamic provisioning looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: fast-ssd

Key points specific to dynamic provisioning:

  • storageClassName selects which provisioner and parameters are used; omit it to fall back to the default StorageClass.
  • The requested size and access modes must be supported by the chosen class's driver.
  • No matching PV needs to exist in advance; one is created to satisfy the claim.

Once the PVC is created:

  • The provisioner creates a backend volume and a matching PV object.
  • The PVC transitions from Pending to Bound (immediately, or once a consuming pod is scheduled, depending on the binding mode).
  • Pods can then reference the PVC by name and mount the volume.

Provisioners in OpenShift

OpenShift uses the standard Kubernetes dynamic provisioning framework, with provisioners usually implemented as CSI (Container Storage Interface) drivers. Typical categories:

  • Cloud block storage drivers (AWS EBS, Azure Disk, GCP Persistent Disk)
  • Software-defined storage such as OpenShift Data Foundation (Ceph RBD, CephFS)
  • File-based drivers (NFS and similar) for shared ReadWriteMany volumes

In many setups, these are managed by Operators that:

  • Deploy and upgrade the CSI driver components
  • Create and maintain the corresponding StorageClasses
  • Monitor driver health and surface status to administrators

As a user of dynamic provisioning, you usually only need to know:

  • Which StorageClass names are available (oc get storageclass)
  • Which access modes and volume modes each class supports
  • Whether a default StorageClass exists

Volume Binding Modes

Dynamic provisioning also governs when and where volumes actually get created. This is controlled by the volumeBindingMode on a StorageClass:

  • Immediate (the default): the volume is provisioned and bound as soon as the PVC is created.
  • WaitForFirstConsumer: provisioning is delayed until a pod using the PVC is scheduled, so the volume can be created in the correct zone or on the correct node.

Example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal
provisioner: my.csi.example.com
volumeBindingMode: WaitForFirstConsumer

In OpenShift, cloud-integrated and ODF classes often use WaitForFirstConsumer to ensure efficient placement and avoid scheduling conflicts.

Reclaim Policy and Dynamic Provisioning

The reclaimPolicy on a StorageClass (copied to each dynamically provisioned PV) determines what happens to the volume after the PVC is deleted:

  • Delete: the PV and the backing volume are removed automatically.
  • Retain: the PV is kept (in Released state) along with its data, and must be cleaned up or reused manually.

With dynamic provisioning, Delete is typical for many workloads, but for critical or compliance-sensitive data, storage administrators may create special StorageClasses with Retain.
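A sketch of such a retaining class, reusing the hypothetical CSI driver name from the earlier examples:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: critical-retain
provisioner: my.csi.example.com   # hypothetical driver name
reclaimPolicy: Retain             # PV and backend volume survive PVC deletion
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```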

Allowing Volume Expansion

Dynamic provisioning also intersects with volume expansion. A StorageClass can enable online or offline resizing through:

allowVolumeExpansion: true

If supported by the driver and storage backend, you can then increase the size of a PVC:

  1. Edit the PVC spec.resources.requests.storage to a larger value.
  2. The CSI driver expands the underlying volume.
  3. The filesystem is resized (often automatically if supported).
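For example, growing the earlier my-data claim is just a spec change to the same PVC object (the new size must be larger than the old one; shrinking is not supported):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi   # raised from 20Gi; triggers CSI volume expansion
  storageClassName: fast-ssd
```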

Dynamic provisioning plus expansion makes storage management largely self-service:

  • Developers request storage by creating PVCs, without filing tickets for PVs.
  • Growing a volume is a PVC edit, not a manual backend operation.
  • Administrators manage capacity and policy through StorageClasses rather than individual volumes.

Dynamic Provisioning in Multi-Tenant OpenShift Clusters

In OpenShift’s multi-tenant model, dynamic provisioning is particularly valuable because:

  • PVCs are namespaced, so each team requests storage independently within its own project.
  • Administrators expose storage policy (performance tiers, reclaim behavior) through a curated set of StorageClasses instead of handing out PVs.
  • Tenants never need cluster-level permissions to obtain storage.

Dynamic provisioning considerations in such environments:

  • Cap storage consumption per namespace with ResourceQuota objects.
  • Control which StorageClasses tenants may consume, including per-class quotas.
  • Use Retain sparingly; retained volumes from many tenants accumulate quickly and consume backend capacity.
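In practice, namespace storage consumption is commonly capped with a ResourceQuota. A sketch that limits total capacity, the number of claims, and usage of one specific class (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a        # hypothetical tenant namespace
spec:
  hard:
    requests.storage: 500Gi          # total requested capacity across all PVCs
    persistentvolumeclaims: "20"     # total number of PVCs in the namespace
    fast-ssd.storageclass.storage.k8s.io/requests.storage: 100Gi  # cap for one class
```

With such a quota in place, a PVC that would exceed the limits is rejected at creation time, before any provisioning happens.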

Observing and Troubleshooting Dynamic Provisioning

When dynamic provisioning does not behave as expected, focus on:

  1. PVC status:
    • kubectl get pvc / oc get pvc
    • Pending indicates no PV can be bound or created.
  2. Events on the PVC:
    • Check with oc describe pvc <name>.
    • Look for messages like “no persistent volumes available”, “failed to provision volume”, “storageclass not found”.
  3. StorageClass and provisioner health:
    • Ensure the PVC's storageClassName exactly matches the name of an existing StorageClass.
    • Check CSI driver pods / Operator status in openshift-storage or vendor-specific namespaces.
  4. Topology / access mode issues:
    • For WaitForFirstConsumer, a PVC may stay unbound until a pod is actually created.
    • Some drivers do not support requested access modes or volume modes (Filesystem vs Block).

Typical failure patterns:

  • storageClassName references a class that does not exist (typo or missing driver install).
  • No default StorageClass exists and the PVC omits storageClassName.
  • A namespace storage quota is exhausted, so provisioning is rejected.
  • The driver does not support the requested access mode (e.g., ReadWriteMany on plain block storage).
  • With WaitForFirstConsumer, the PVC stays Pending simply because no pod consumes it yet.

Best Practices for Using Dynamic Provisioning in OpenShift

  • Rely on the default StorageClass for ordinary workloads; reference specific classes only when you need their characteristics.
  • Prefer WaitForFirstConsumer for zonal or topology-aware storage to avoid scheduling conflicts.
  • Enable allowVolumeExpansion so claims can grow without data migration.
  • Reserve Retain for data that must survive PVC deletion, and document the manual cleanup process.
  • Enforce storage quotas per namespace in multi-tenant clusters.

Dynamic provisioning is the core mechanism that turns OpenShift storage into a self-service platform: developers and workloads request persistent storage declaratively, and the cluster, via its provisioners and StorageClasses, allocates and manages the backing volumes automatically.
