Role of Persistent Volume Claims in OpenShift
Persistent Volume Claims (PVCs) are the way applications request storage in OpenShift. While Persistent Volumes (PVs) represent actual storage resources in the cluster, PVCs represent an application’s claim on that storage, using an abstract description (size, access mode, sometimes a class of storage) instead of tying the application to a specific backend implementation.
PVCs are namespaced user-facing resources, while PVs are cluster-scoped. Most application developers interact only with PVCs, not directly with PVs.
PVC Lifecycle and Binding
The lifecycle of a PVC in OpenShift (and Kubernetes) usually follows these steps:
- Create a PVC
  A developer (or automation) creates a PVC in a project/namespace, specifying:
  - Requested storage capacity
  - Access mode(s)
  - Optionally, a `storageClassName` and other attributes
- Binding to a PV
  The control plane attempts to satisfy the PVC:
  - With dynamic provisioning: the StorageClass provisions a new PV, which is then bound to the PVC.
  - With pre-provisioned PVs: the system searches for a matching existing PV and binds it.
- Use by Pods
  Pods in the same namespace reference the PVC in their volume definitions. When scheduled, the Pod receives the bound storage.
- Release and Reclaim
  When the PVC is deleted:
  - The binding between PV and PVC is removed.
  - What happens to the underlying PV/data is governed by the PV's `reclaimPolicy` (covered elsewhere).
PVCs are bound one-to-one with PVs. Multiple Pods can mount the same PVC, but multiple PVCs cannot bind to the same PV.
Key PVC Attributes
The most important fields in a PVC spec are:
- `accessModes`
- `resources.requests.storage`
- `storageClassName`
- `volumeMode` (optional)
- `selector` / `dataSource` (advanced use cases)
These shape how the storage is provisioned and used.
Access Modes
Access modes describe how a volume can be mounted:
- `ReadWriteOnce` (RWO):
  Mounted as read-write by a single node. Multiple Pods on the same node can use it, but not across nodes.
- `ReadOnlyMany` (ROX):
  Mounted read-only by multiple nodes.
- `ReadWriteMany` (RWX):
  Mounted read-write by multiple nodes simultaneously.
Depending on the underlying storage plugin, not all access modes are supported. For example:

- Many block/device-based storage backends support only `ReadWriteOnce`.
- Shared filesystems (e.g., NFS, some CSI drivers) may support `ReadWriteMany`.
Choosing the wrong access mode can prevent a PVC from binding or make Pods unschedulable.
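For example, a claim intended to be written concurrently from Pods on different nodes must request RWX. The following is a minimal sketch; the claim name and the `shared-nfs` StorageClass are assumptions (use whatever RWX-capable class your cluster offers):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany               # binds only if the backend/driver supports RWX
  resources:
    requests:
      storage: 2Gi
  storageClassName: shared-nfs    # assumed RWX-capable StorageClass
```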
Requested Capacity
The lines:

```yaml
resources:
  requests:
    storage: 10Gi
```

tell the cluster how much storage you need. Some important aspects:
- A PVC cannot request less than the minimum size enforced by the storage backend or StorageClass.
- The storage quota/limit in the namespace (if applied) may restrict the total capacity available across all PVCs.
- `requests.storage` is the only required resource for a PVC; `limits` are not typically used for storage resources in the same way as CPU/memory.
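The namespace quota mentioned above is an ordinary resource that a cluster administrator creates. A minimal sketch, with the quota name and limits chosen purely for illustration:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota              # hypothetical name
spec:
  hard:
    persistentvolumeclaims: "5"    # at most 5 PVCs in this namespace
    requests.storage: 100Gi        # total requested capacity across all PVCs
```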
Increasing requested size after creation is possible with many (but not all) storage providers (volume expansion), assuming the underlying StorageClass and driver support it.
StorageClass Selection
`storageClassName` ties your PVC to a class of storage with certain characteristics (performance, availability, backend type, reclaim policy, etc.).
Typical patterns in OpenShift:
- Explicit StorageClass:
  `storageClassName: fast-ssd`
- Default StorageClass:
  If a default StorageClass exists in the cluster and you specify no `storageClassName`, the PVC will use the default.
- No StorageClass / Manual binding:
  `storageClassName: ""`
  This disables dynamic provisioning. The PVC will only bind if an appropriate pre-created PV is available.
Choosing the right StorageClass is often a policy and performance decision and is typically defined by cluster administrators.
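To see which classes exist in your cluster (and which one is marked as the default), list them with `oc`:

```bash
# The default StorageClass is flagged "(default)" next to its name
oc get storageclass
```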
Volume Mode: Filesystem vs Block
`volumeMode` controls how the volume appears in the Pod:
- `volumeMode: Filesystem` (default)
  The cluster formats the volume with a filesystem and mounts it as a directory into the container.
- `volumeMode: Block`
  The Pod receives a raw block device (no filesystem), useful for applications that manage their own storage layout (e.g., some databases).
If omitted, Filesystem is assumed. Not all storage drivers support block mode.
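As a sketch of what block mode looks like on both sides (assuming a StorageClass and CSI driver that support raw block volumes; all names here are illustrative), the container consumes the volume through `volumeDevices` rather than `volumeMounts`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-claim            # hypothetical claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer             # hypothetical Pod
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-claim
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # placeholder image
      volumeDevices:               # block volumes use volumeDevices, not volumeMounts
        - name: data
          devicePath: /dev/xvda    # device node exposed inside the container
```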
Basic PVC Manifest Structure
A minimal PVC might look like:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
```

Key points:

- `metadata.name` must be unique within the namespace.
- `spec.accessModes`, `spec.resources.requests.storage`, and (usually) `spec.storageClassName` must be set correctly for binding and provisioning to succeed.
- The PVC is created in the same namespace as the Pods that will use it.
How Pods Use PVCs
PVCs are not directly mounted; they are referenced through the volumes section of a Pod (or a higher-level controller like a Deployment). A typical Pod snippet:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-using-pvc
spec:
  volumes:
    - name: app-storage
      persistentVolumeClaim:
        claimName: my-data-claim
  containers:
    - name: app
      image: registry.example.com/my-app:latest
      volumeMounts:
        - name: app-storage
          mountPath: /var/lib/app-data
```

Important constraints:

- The `claimName` must match an existing PVC in the same namespace.
- The Pod will not start until the PVC is Bound and the underlying volume can be attached and mounted.
- Access mode and node affinity of the volume can influence Pod scheduling (for example, RWO volumes may pin Pods to a specific node).
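In practice you will usually place the same reference in the Pod template of a Deployment or similar controller. A minimal sketch (all names are illustrative), keeping in mind that an RWO claim effectively limits where and how many replicas can run:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-pvc               # hypothetical Deployment name
spec:
  replicas: 1                      # with an RWO claim, a single replica is the safe choice
  selector:
    matchLabels:
      app: app-with-pvc
  template:
    metadata:
      labels:
        app: app-with-pvc
    spec:
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: my-data-claim
      containers:
        - name: app
          image: registry.example.com/my-app:latest
          volumeMounts:
            - name: app-storage
              mountPath: /var/lib/app-data
```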
PVC Binding Behavior and Matching
When a PVC is created, the control plane attempts to match it with a suitable PV:
- Requirements considered:
  - Capacity (the PV's capacity must be ≥ the PVC's request).
  - Access modes (the PV must support all access modes requested by the PVC).
  - StorageClass (must match, or both must be unset).
  - `selector` labels (if specified on the PVC).
- With dynamic provisioning:
  - The StorageClass provisions a new PV with the requested attributes.
  - The PVC is bound to that new PV.
- With pre-provisioned PVs:
  - The binding process selects from existing PVs.

Once bound:

- The PVC stores a reference to the PV name in `spec.volumeName`.
- The PV's `claimRef` points back to the PVC, preventing it from being bound to a different claim.
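Both sides of the binding can be confirmed from the command line:

```bash
# PV that the claim is bound to
oc get pvc my-data-claim -o jsonpath='{.spec.volumeName}{"\n"}'

# Claim recorded in the PV's claimRef (namespace/name)
oc get pv <pv-name> -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'
```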
Working with PVCs via the OpenShift CLI
Using `oc`, you can manage PVCs similarly to other resources.
Listing and Inspecting PVCs
```bash
# List PVCs in the current project
oc get pvc

# Show details of a specific PVC
oc describe pvc my-data-claim
```
`oc describe` is particularly useful to:

- Check the PVC's `Status` (`Pending`, `Bound`, etc.).
- Inspect `Events` related to binding or provisioning failures.
- See which PV it is bound to.
Creating and Deleting PVCs
Create from a manifest:
```bash
oc apply -f pvc.yaml
```

Delete a claim:

```bash
oc delete pvc my-data-claim
```

Deleting a PVC does not always delete the underlying storage; what happens is tied to the PV's reclaim policy (covered elsewhere). From the application's perspective, deleting the PVC generally means the volume is no longer mountable.
PVC Status and Common States
PVCs have a `.status.phase` that indicates their current state:

- `Pending`:
  The cluster has not yet bound the claim to a PV. Reasons include:
  - No matching PV available (pre-provisioned mode).
  - Dynamic provisioning failed (StorageClass/CSI problems).
  - Quota exceeded or configuration errors.
- `Bound`:
  The claim is successfully bound to a PV and is ready for use by Pods.
- `Lost`:
  Rare in normal operation; indicates a problem where the bound PV may no longer be available.
For troubleshooting, always inspect:
- `oc describe pvc <name>` for:
  - Events at the bottom.
  - Bound PV name.
- `oc describe pv <pv-name>` for more details on the backing volume.
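For quick checks, the phase and the events that mention a claim can also be queried directly:

```bash
# Current phase of the claim (Pending, Bound, Lost)
oc get pvc my-data-claim -o jsonpath='{.status.phase}{"\n"}'

# Recent events involving the claim; provisioning failures usually show up here
oc get events --field-selector involvedObject.name=my-data-claim
```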
PVCs and Namespaces
PVCs are namespaced:
- A PVC can only be used by Pods in the same namespace.
- If multiple teams/projects need similar storage, each creates its own PVC, possibly using the same StorageClass or PV template.
- Cluster administrators can control which StorageClasses are available to which namespaces (e.g., via policies and quotas).
This aligns with multi-tenant isolation in OpenShift and supports resource governance via quotas and limits at the namespace level.
Using PVCs with Stateful Workloads
Many stateful workloads (databases, message queues, data processing systems) rely on PVCs to persist data beyond Pod lifetimes:
- Each instance of a stateful component often has its own PVC.
- Higher-level controllers (e.g., StatefulSets) can automatically manage per-Pod PVCs; their detailed behavior is covered elsewhere.
- PVCs allow Pods to be deleted, rescheduled, or upgraded without losing the underlying data (as long as the PVC remains and the reclaim policy preserves data).
For simple workloads, a single PVC may be shared by multiple Pods (subject to access mode and backend limitations).
Resizing and Modifying PVCs
PVC modification capabilities depend on the StorageClass and storage driver:
- Resizing:
  If enabled, you can increase `resources.requests.storage`:

  ```bash
  oc patch pvc my-data-claim \
    -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
  ```

  Considerations:
  - Expansion from, say, `10Gi` to `20Gi` is common; shrinking is typically not supported.
  - Some backends may require Pod restarts or file system expansion inside the container.
- Changing `storageClassName` or `accessModes`:
  Usually not supported on a bound PVC; a new PVC is typically required, then data migration is performed at the application or operator level.
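Before attempting an expansion, it is worth checking whether the claim's StorageClass permits it:

```bash
# "true" means the class allows increasing requests.storage on existing claims
oc get storageclass <class-name> -o jsonpath='{.allowVolumeExpansion}{"\n"}'
```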
PVCs and Security Considerations
Although detailed security topics are treated elsewhere, a few PVC-specific points matter:
- File system permissions and ownership inside the mounted volume must align with container user IDs and Security Context Constraints (SCCs).
- Some storage plugins support features like encryption at rest or per-claim encryption policies, exposed via StorageClass parameters or annotations.
- In multi-tenant clusters, PVC visibility is limited to the namespace, but the physical storage backend may be shared; ACLs and backend isolation are configured at the storage level by cluster administrators.
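A common way to align volume ownership with the container's group ID is an `fsGroup` in the Pod's security context; whether you may set it explicitly depends on the SCC the Pod runs under, so treat this as a sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fsgroup           # hypothetical Pod
spec:
  securityContext:
    fsGroup: 1000860000            # example group ID; OpenShift often assigns one from the namespace's range
  volumes:
    - name: app-storage
      persistentVolumeClaim:
        claimName: my-data-claim
  containers:
    - name: app
      image: registry.example.com/my-app:latest
      volumeMounts:
        - name: app-storage
          mountPath: /var/lib/app-data
```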
Patterns and Best Practices for PVC Usage
When designing applications for OpenShift with PVCs, common patterns include:
- One claim per component that needs durable state
  For example, each database instance or each broker node has its own PVC.
- Use appropriate access modes
  Do not request `ReadWriteMany` unless truly needed; many performant backends only support `ReadWriteOnce`.
- Rely on dynamic provisioning where possible
  Using StorageClasses simplifies management and avoids manual PV creation.
- Avoid hardcoding backend assumptions in the application
  Applications should treat PVC-backed storage as generic POSIX or block storage, allowing cluster admins to change backends transparently through StorageClasses.
- Plan ahead for backup and restore
  PVCs alone are not a backup strategy; integrate them with your backup tooling and data management processes.
By understanding PVCs as the bridge between applications and the cluster’s persistent storage infrastructure, you can design workloads on OpenShift that are both portable and resilient, without coupling them tightly to specific storage technologies.