Understanding Dynamic Provisioning
Dynamic provisioning in OpenShift automates the creation of persistent storage when a pod (through a PersistentVolumeClaim) asks for it. Instead of pre-creating PersistentVolumes (PVs), the cluster allocates and binds storage on demand using a StorageClass and a provisioner (often a CSI driver).
You should already be familiar with PVs, PVCs, and StorageClasses from previous chapters; here we focus on what is unique about dynamic provisioning: how it works, how it is configured in OpenShift, and what to watch out for.
How Dynamic Provisioning Works
At a high level, the flow is:
- A user (or application) creates a PersistentVolumeClaim that:
  - Requests a certain size (resources.requests.storage)
  - Optionally selects a StorageClass (via storageClassName)
- The control plane sees the PVC is unbound and has a storageClassName.
- The associated storage provisioner (defined by the StorageClass) is called.
- The provisioner:
  - Talks to the underlying storage system (CSI driver, cloud API, etc.)
  - Creates a new volume that satisfies the claim
  - Creates a PersistentVolume object referencing that backend volume
- Kubernetes binds the new PV to the PVC.
- Pods referencing the PVC can now mount and use the dynamically created volume.
If no appropriate StorageClass and provisioner can satisfy the request, the PVC will stay in Pending state.
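You can observe this state directly; for example, with an illustrative claim name:

oc get pvc my-data

A claim that cannot be satisfied shows STATUS as Pending; once a provisioner succeeds, it becomes Bound and the generated volume name appears in the VOLUME column.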
Role of StorageClasses in Dynamic Provisioning
Dynamic provisioning is driven entirely by StorageClass definitions. Each StorageClass describes:
- The provisioner (provisioner / CSI driver name)
- Parameters specific to the storage backend
- Reclaim policy (reclaimPolicy)
- Volume binding behavior (volumeBindingMode)
- Whether it is the default StorageClass for the cluster
A minimal example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: my.csi.example.com   # illustrative placeholder; real classes name an installed CSI driver
parameters:
  type: ssd                       # parameters are driver-specific
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

In OpenShift, storage vendors or the platform itself usually install and configure production-ready StorageClasses via Operators (e.g., for OpenShift Data Foundation, AWS EBS, Azure Disk, etc.).
Default StorageClass
If exactly one StorageClass carries the annotation:
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
then PVCs without an explicit storageClassName will use that class automatically. This is how most dynamic provisioning happens in typical OpenShift clusters: application developers only create PVCs; they do not manage PVs or StorageClasses directly.
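For example, an administrator can inspect the available classes and mark one as the default (the class name here is illustrative):

oc get storageclass
oc patch storageclass fast-ssd \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'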
Creating PVCs for Dynamic Provisioning
A PVC that leverages dynamic provisioning looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: fast-ssd

Key points specific to dynamic provisioning:
- storageClassName must match an existing StorageClass with a valid provisioner.
- If storageClassName is omitted:
  - The cluster may use the default StorageClass if one exists.
  - If no default exists, the PVC will stay Pending.
- accessModes must be supported by the underlying class (e.g., some only support ReadWriteOnce).
Once the PVC is created:
- A PV is automatically created and bound.
- The PV’s spec.storageClassName matches the PVC’s storageClassName.
- The PV’s spec.claimRef points to the PVC.
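For illustration, the automatically created PV might look roughly like this (the name, namespace, and backend details are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-3f9c1a7e-0b2d-4c11-9e55-example   # auto-generated from the PVC's UID
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    namespace: my-project                     # placeholder project
    name: my-data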
Provisioners in OpenShift
OpenShift uses the standard Kubernetes dynamic provisioning framework, with provisioners usually implemented as CSI (Container Storage Interface) drivers. Typical categories:
- Cloud provider drivers (when OpenShift runs on a public cloud):
- AWS EBS, Azure Disk/File, GCP Persistent Disk, etc.
- On-premise and enterprise storage drivers:
- OpenShift Data Foundation (ODF) / Ceph-based
- NetApp, Dell, Pure Storage, etc.
- Local or special-purpose drivers:
- Local storage (for special cases, not generic shared storage)
- File-based drivers (NFS, SMB) via appropriate CSI implementations
In many setups, these are managed by Operators that:
- Deploy the CSI driver
- Create or suggest recommended StorageClasses (e.g., ocs-storagecluster-ceph-rbd, or gp2-csi on AWS)
- Handle updates and health checks
As a user of dynamic provisioning, you usually only need to know:
- What StorageClasses are available.
- Which one you should use for your workload (e.g., “standard”, “fast”, “encrypted”, “object storage gateway”, etc.).
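Listing the available classes is a single command:

oc get storageclass

The output shows each class’s provisioner, reclaim policy, binding mode, and whether expansion is allowed; the default class is marked with (default) after its name.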
Volume Binding Modes
Dynamic provisioning includes when and where volumes actually get created. This is controlled by the volumeBindingMode on a StorageClass:
- Immediate (default Kubernetes behavior):
  - The volume is provisioned as soon as the PVC is created.
  - Suitable in environments where topology (zones, nodes) is not a concern.
- WaitForFirstConsumer:
  - Volume provisioning is delayed until a pod using the PVC is scheduled.
  - The scheduler knows which node or zone the pod will run in, and can request the volume in a matching topology zone.
  - Common for cloud and distributed storage to avoid cross-zone attachment issues.
Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal
provisioner: my.csi.example.com
volumeBindingMode: WaitForFirstConsumer
In OpenShift, cloud-integrated and ODF classes often use WaitForFirstConsumer to ensure efficient placement and avoid scheduling conflicts.
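A minimal pod consuming the earlier my-data claim (image and names are illustrative) is enough to trigger provisioning for a WaitForFirstConsumer class:

apiVersion: v1
kind: Pod
metadata:
  name: data-writer
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi   # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data

Until such a pod is scheduled, the PVC remains Pending by design, not by error.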
Reclaim Policy and Dynamic Provisioning
The reclaimPolicy on a StorageClass (or PV) determines what happens to dynamically provisioned volumes after the PVC is deleted:
- Delete:
  - The underlying storage object is deleted along with the PV.
  - Common for ephemeral or per-app volumes.
- Retain:
  - The PV and underlying storage are preserved.
  - Manual cleanup or re-attachment is required.
  - Used when data must be preserved beyond the application lifecycle.
With dynamic provisioning, Delete is typical for many workloads, but for critical or compliance-sensitive data, storage administrators may create special StorageClasses with Retain.
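If a specific dynamically provisioned volume later needs to outlive its claim, an administrator can also change that PV’s policy individually (the PV name is a placeholder):

oc patch pv pvc-3f9c1a7e-0b2d-4c11-9e55-example \
  -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'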
Allowing Volume Expansion
Dynamic provisioning also intersects with volume expansion. A StorageClass can enable online or offline resizing through:
allowVolumeExpansion: true

If supported by the driver and storage backend, you can then increase the size of a PVC:
- Edit the PVC spec.resources.requests.storage to a larger value (see the example below).
- The CSI driver expands the underlying volume.
- The filesystem is resized (often automatically, if supported).
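For example, growing the earlier claim from 20Gi to 40Gi (values are illustrative):

oc patch pvc my-data \
  -p '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'

Note that Kubernetes only supports growing a PVC, not shrinking it.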
Dynamic provisioning plus expansion makes storage management largely self-service:
- Developers request storage.
- Later, they can grow it without creating a new volume and migrating data (within driver limits).
Dynamic Provisioning in Multi-Tenant OpenShift Clusters
In OpenShift’s multi-tenant model, dynamic provisioning is particularly valuable because:
- Project (namespace) owners can request storage without cluster-admin access.
- Administrators expose appropriate StorageClasses and quota policies.
- Per-project resource quotas can limit the total storage requested.
Dynamic provisioning considerations in such environments:
- Quotas: PVC creation will fail (or stay Pending) if it violates requests.storage or object-count quotas.
- Access controls: Some clusters may restrict which StorageClasses a given project can use (for example, via admission controllers).
- Cost and performance: Different StorageClasses can represent different cost tiers or performance levels; teams may be guided or restricted to particular classes.
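As a sketch, a per-project quota can cap both total requested storage and consumption of a specific class (names and sizes are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    persistentvolumeclaims: "10"                                   # max number of PVC objects
    requests.storage: 100Gi                                        # total requested storage
    fast-ssd.storageclass.storage.k8s.io/requests.storage: 50Gi    # cap for one class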
Observing and Troubleshooting Dynamic Provisioning
When dynamic provisioning does not behave as expected, focus on:
- PVC status:
  - Check with kubectl get pvc / oc get pvc.
  - Pending indicates no PV can be bound or created.
- Events on the PVC:
  - Check with oc describe pvc <name>.
  - Look for messages like “no persistent volumes available”, “failed to provision volume”, or “storageclass not found”.
- StorageClass and provisioner health:
  - Ensure the StorageClass exists and the PVC’s storageClassName matches it exactly.
  - Check CSI driver pods / Operator status in openshift-storage or vendor-specific namespaces.
- Topology / access mode issues:
  - For WaitForFirstConsumer, a PVC may stay unbound until a pod is actually created.
  - Some drivers do not support the requested access modes or volume modes (Filesystem vs. Block).
Typical failure patterns:
- Missing or mistyped storageClassName → PVC Pending, no events about PV creation.
- Provisioner not installed or failing → events indicating failure to provision the volume.
- Quota exceeded → events referencing resource quota violations.
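A quick triage sequence (claim and namespace names are placeholders) might look like:

oc get pvc my-data                  # Bound or Pending?
oc describe pvc my-data             # read the Events section
oc get storageclass                 # does the class exist? which one is default?
oc get pods -n openshift-storage    # CSI driver / Operator health (namespace varies by vendor)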
Best Practices for Using Dynamic Provisioning in OpenShift
- Prefer dynamic provisioning over static PV management for most workloads.
- Use clearly named StorageClasses that indicate intent: standard, fast, archive, encrypted, etc.
- Align volumeBindingMode with your environment:
  - Use WaitForFirstConsumer in zoned or multi-availability-zone setups.
- Choose reclaimPolicy based on data sensitivity:
  - Delete for scratch / rebuildable data.
  - Retain for important or compliance-regulated data.
- Enable allowVolumeExpansion where your backend supports online or offline resizing.
- Document which StorageClasses are recommended for which application profiles (databases, logs, object storage gateways, etc.).
- Regularly review dynamically provisioned PV usage to manage capacity planning and costs.
Dynamic provisioning is the core mechanism that turns OpenShift storage into a self-service platform: developers and workloads request persistent storage declaratively, and the cluster, via its provisioners and StorageClasses, allocates and manages the backing volumes automatically.