Role of StorageClasses in OpenShift
In OpenShift, a StorageClass describes how storage should be provisioned for PersistentVolumeClaims (PVCs). Where a PVC describes what an application needs (size, access mode), a StorageClass describes the backend, performance, and policy used to satisfy that request.
When dynamic provisioning is enabled, a StorageClass acts as the “template” for creating new PersistentVolumes (PVs) on demand.
Key points:
- Each StorageClass usually maps to a specific storage backend or profile (e.g., fast SSD, standard HDD, NFS, cloud block storage).
- A cluster can have multiple StorageClasses for different performance, availability, or cost characteristics.
- One StorageClass can be marked as the default — PVCs that don’t specify a storageClassName will use it.
StorageClass Anatomy
A StorageClass is a Kubernetes resource with some OpenShift specifics. Typical fields:
- provisioner: the CSI or in-tree plugin that creates volumes.
- parameters: backend-specific settings (e.g., volume type, IOPS, replication).
- reclaimPolicy: what happens to a PV after its PVC is deleted.
- allowVolumeExpansion: whether PVCs using this class can be expanded.
- volumeBindingMode: when volume provisioning and binding occur.
Minimal example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Provisioner
The provisioner field determines which storage plugin manages underlying volumes. Common types in OpenShift:
- CSI drivers (preferred in modern clusters), e.g.:
  - ebs.csi.aws.com (AWS EBS)
  - pd.csi.storage.gke.io (GCE Persistent Disk)
  - disk.csi.azure.com (Azure Disk)
  - Vendor CSI drivers (NetApp, Dell, Ceph, etc.)
- Legacy in-tree provisioners (deprecated in newer Kubernetes versions, but still seen in older clusters).
The provisioner is usually installed and configured by an Operator; as an application developer you mostly just reference existing StorageClasses.
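To see which CSI drivers are registered in a given cluster, and which provisioner each existing class uses, you can query the API directly; a quick sketch:
oc get csidriver
oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner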
Parameters
parameters are key–value pairs specific to each provisioner. Examples (these vary by backend):
- Cloud block storage:
  - type: performance class (e.g., gp3, io1, premium-lrs)
  - iopsPerGB, throughput: performance settings
  - encrypted, kmsKeyId: encryption settings
- File storage:
  - nfsServer, nfsPath for NFS-based drivers
- Ceph / distributed storage:
  - pool: storage pool name
  - replication: replication factor
  - csi.storage.k8s.io/fstype: filesystem type (ext4, xfs, etc.)
These parameters let cluster admins expose different storage “flavors” to users without exposing full backend detail.
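For illustration only, a file-storage class might look like the sketch below; the driver name and parameter keys (nfsServer, nfsPath) are placeholders taken from the list above, since every provisioner defines its own keys:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-nfs
provisioner: example.nfs.csi.vendor.com   # placeholder driver name
parameters:
  nfsServer: nfs.example.com              # placeholder keys; check your driver's documentation
  nfsPath: /exports/apps
reclaimPolicy: Retain
volumeBindingMode: Immediate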
Reclaim Policy
reclaimPolicy controls what happens to PVs created from this StorageClass when the PVC is deleted:
- Delete:
  - Underlying storage is deleted when the PVC is deleted.
  - Common in cloud / ephemeral environments.
- Retain:
  - PV and underlying storage are kept; data remains.
  - Admin intervention is required to reuse or manually remove it.
  - Useful when you must avoid accidental data loss.
Example:
reclaimPolicy: Retain
Choosing the right reclaim policy is critical for data retention and cost management.
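The policy set on the class is copied to each PV at provisioning time; an admin can still adjust it on an individual PV afterwards, for example (hypothetical PV name):
oc get pv                                  # the RECLAIM POLICY column shows the current setting
oc patch pv pvc-1a2b3c4d -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'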
Allow Volume Expansion
If allowVolumeExpansion: true, PVCs bound to this StorageClass can be resized (subject to backend support and cluster configuration):
allowVolumeExpansion: true
This enables workflows like:
- Create a 10Gi PVC.
- Later, edit the PVC (or use oc patch) to request 20Gi.
- Storage is expanded, then the filesystem is grown accordingly.
Without this setting (or if backend doesn’t support it), expansion is not allowed.
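As a sketch of the resize step, assuming a PVC named data-default bound to an expandable class:
oc patch pvc data-default -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
oc get pvc data-default   # CAPACITY updates once the volume and filesystem have been expanded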
Volume Binding Mode
volumeBindingMode affects when PVs are provisioned and bound:
- Immediate (default behavior in classic Kubernetes):
  - Volume is created and bound as soon as the PVC is created.
  - Can cause suboptimal or impossible scheduling in multi-zone environments (the volume may be created in a zone with no suitable nodes).
- WaitForFirstConsumer:
  - Volume is not created until a Pod using the PVC is scheduled.
  - The scheduler considers node location (zone, region, topology, etc.) before provisioning the volume.
  - Strongly recommended for multi-zone / multi-region clusters.
Example:
volumeBindingMode: WaitForFirstConsumer
On OpenShift in cloud environments, platform-provided StorageClasses usually default to WaitForFirstConsumer for better scheduling and fault-tolerance behavior.
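With WaitForFirstConsumer, a newly created PVC simply stays Pending until a Pod that uses it is scheduled; the PVC events make this visible. An illustrative check (hypothetical PVC name):
oc get pvc data-fast        # STATUS remains Pending until a consuming Pod is scheduled
oc describe pvc data-fast   # Events typically say 'waiting for first consumer to be created before binding'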
Default StorageClass
One StorageClass can be flagged as the cluster-wide default. PVCs that omit storageClassName will use it automatically.
The default is annotated:
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
Key points:
- Only one StorageClass should be default; if multiple are annotated, behavior is implementation-specific and should be avoided.
- Cluster admins should pick a sensible default that is:
- Generally available across nodes
- Reasonable in performance and cost
- Appropriate for generic workloads
Users can override by specifying storageClassName explicitly in their PVCs.
Example PVC using the default:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # storageClassName omitted -> default StorageClass used
Example PVC using a specific StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-fast
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Working with StorageClasses in OpenShift
From a cluster user perspective, you typically:
- List available StorageClasses to understand your options.
- Choose a StorageClass based on performance, access, and cost requirements.
- Reference the chosen StorageClass in your PVC.
From a cluster admin perspective, you:
- Create and manage StorageClasses to expose appropriate storage profiles.
- Integrate with storage backends and CSI drivers via Operators.
- Control policies like retention, expansion, and binding behavior.
Inspecting StorageClasses
Use oc to inspect StorageClasses:
List all StorageClasses:
oc get storageclass
Output example:
NAME                PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp3-csi (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   30d
standard-nfs        nfs.csi.k8s.io    Retain          Immediate              true                   10d
fast-ssd            ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   5d
View details:
oc describe storageclass gp3-csi
This shows parameters, reclaim policy, volume binding mode, and which Operator / driver owns the class.
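To see the complete definition, including annotations such as the default-class marker, dump the object as YAML:
oc get storageclass gp3-csi -o yaml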
Creating and Modifying StorageClasses
In many OpenShift environments, StorageClasses are managed by platform or storage administrators, often via Operators. Still, the basic workflow looks like:
Create a StorageClass:
oc apply -f fast-ssd-sc.yaml
Example manifest:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Set a StorageClass as default (admin action):
oc annotate storageclass gp3-csi storageclass.kubernetes.io/is-default-class="true" --overwrite
Remove default annotation:
oc annotate storageclass gp3-csi storageclass.kubernetes.io/is-default-class-
Modifying an existing StorageClass is possible, but you must consider:
- Changes affect new PVs created after the change; existing PVs are not retroactively adjusted.
- Some fields (such as provisioner and parameters) cannot be changed on an existing class; adjusting them means deleting and recreating the StorageClass (see the sketch below).
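One field that can be toggled in place is the expansion flag; a sketch, assuming a class named fast-ssd:
oc patch storageclass fast-ssd -p '{"allowVolumeExpansion": true}'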
StorageClasses and Dynamic Provisioning
StorageClasses enable dynamic provisioning: when a PVC references a StorageClass and no matching PV exists, the provisioner:
- Receives a request from Kubernetes for a new volume.
- Creates the volume in the backend according to class parameters.
- Registers a new PV object with a reference back to this StorageClass.
- Binds the PV to the PVC.
This behavior depends on:
- StorageClass existence and correctness.
- The CSI driver or provisioner being installed and healthy.
- Cluster permissions and infrastructure access.
If dynamic provisioning is configured, you do not manually create PVs; you define PVCs that point to appropriate StorageClasses.
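Even though you do not create PVs yourself, it helps to recognize what the provisioner produces. An abridged, illustrative PV with hypothetical values:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1a2b3c4d                       # name generated by the provisioner
spec:
  storageClassName: fast-ssd
  persistentVolumeReclaimPolicy: Delete    # copied from the class at creation time
  claimRef:
    namespace: my-project
    name: data-fast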
Troubleshooting dynamic provisioning often starts with checking:
- Events on the PVC (oc describe pvc ...).
- Status and logs of the CSI driver Pods.
- The configuration of the StorageClass (typos in parameters, incorrect provisioner name, etc.).
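A minimal starting point for these checks (namespace and resource names vary by driver; openshift-cluster-csi-drivers is where OpenShift's platform CSI drivers usually run):
oc describe pvc data-fast                      # Events usually explain provisioning failures
oc get pods -n openshift-cluster-csi-drivers   # health of the CSI controller and node Pods
oc get storageclass fast-ssd -o yaml           # verify provisioner name and parameters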
StorageClass Design Considerations
When designing or choosing StorageClasses, it helps to think in terms of “storage profiles” that map to application needs.
Common profiles:
- General-purpose:
- Balanced cost/performance.
- Often the default StorageClass.
- High-performance:
- SSD-based, higher IOPS and throughput, possibly with higher cost.
- For databases, latency-sensitive workloads.
- Capacity/archival:
- Larger, cheaper, slower.
- For logs, backups, or cold data.
- File vs block:
  - Block storage for single-node attachment (ReadWriteOnce).
  - Shared file storage for multi-node access (ReadWriteMany), e.g., NFS or distributed file systems.
Questions an admin typically considers:
- Do different teams/apps need different performance tiers?
- How should data be retained: Delete or Retain?
- Do we require volume expansion by developers?
- How does topology (zones, regions) impact volumeBindingMode?
- Which StorageClasses should be visible to which users (via RBAC and project policies)?
Using StorageClasses Effectively as a Developer
As an application developer on OpenShift, you generally:
- Discover available StorageClasses in your cluster:
  - Use oc get storageclass.
  - Read documentation from your platform team, which usually describes when to use each class.
- Select a StorageClass based on:
  - Required access mode (ReadWriteOnce vs ReadWriteMany).
  - Performance needs (latency, IOPS).
  - Data criticality (do you want volumes retained on PVC deletion?).
- Reference it in your PVC:
  - Example for a high-performance DB volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
- Plan for resizing:
  - Check whether the StorageClass allows expansion (allowVolumeExpansion: true).
  - If yes, you can increase the requested size later; if not, you may need a migration strategy.
- Avoid hard-coding cluster-specific names:
  - For Helm charts or templates you plan to reuse across environments, parameterize the storageClassName so it can be overridden per cluster, as sketched below.
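A sketch of what that parameterization might look like in a Helm chart (names are illustrative):
# values.yaml
persistence:
  storageClassName: ""   # empty -> fall back to the cluster default
# templates/pvc.yaml (excerpt)
spec:
  {{- with .Values.persistence.storageClassName }}
  storageClassName: {{ . }}
  {{- end }}
  accessModes:
    - ReadWriteOnce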
OpenShift-Specific Aspects
While StorageClasses are a Kubernetes concept, OpenShift adds some practical aspects:
- Many StorageClasses are created and managed by Operators:
- For example, an Operator for a storage product may create multiple StorageClasses (gold, silver, bronze).
- OpenShift’s UI integrates StorageClasses:
- The web console shows available classes when you create a PVC.
- It often includes documentation links or labels that indicate intended use.
- Security and multi-tenancy:
- Not all StorageClasses may be intended for all projects; platform teams may use RBAC or admission controls to limit who can use certain classes (for cost or compliance reasons).
Understanding how your OpenShift platform team has set up StorageClasses is essential: the same OpenShift version can look very different depending on which storage backends and Operators are installed.
Summary
- StorageClasses define how storage is provisioned: which backend, policy, and performance profile.
- They are central to dynamic provisioning of PVs from PVCs.
- Key fields include provisioner, parameters, reclaimPolicy, allowVolumeExpansion, and volumeBindingMode.
- One StorageClass can be marked as default; PVCs without storageClassName will use it.
- Developers choose StorageClasses based on workload requirements; admins design and manage classes to align with infrastructure capabilities and organizational policies.
In practice, effective use of StorageClasses lets OpenShift users request storage in a simple, declarative way, while keeping backend details and complexity encapsulated in the platform.