Node-bound persistence
The underlying data lives on one node filesystem, so portability is limited by design.
Learn how HostPath-backed persistent volumes work in development and lab clusters.
- k8s/labs/storage/hostpath-pv-pvc.yaml — HostPath PV, PVC, and Pod manifest
- k8s/labs/storage/hostpath.yaml — Standalone HostPath volume Pod
Even with HostPath, Kubernetes uses the same provision-and-claim model (PersistentVolume plus PersistentVolumeClaim) as any other persistent backend.
HostPath is good for labs, local content, and demos, but not a general production storage strategy.
Point the volume at a real directory on the host, such as /mnt/data.
The claim requests a size and access mode so Kubernetes can bind it to a matching volume.
The Pod references the claim and exposes it inside the container filesystem.
Writes land on the chosen node path, and that location becomes part of the workload design.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: hostpath-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: hostpath-volume
      persistentVolumeClaim:
        claimName: hostpath-pvc
```
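To exercise the manifests, a typical flow looks like the sketch below. It assumes the manifests are saved at the path listed above and that /mnt/data already exists on the node, since `type: Directory` requires the directory to be present.

```shell
# Apply the PV, PVC, and Pod in one command
kubectl apply -f k8s/labs/storage/hostpath-pv-pvc.yaml

# Both the PV and PVC should report a Bound status once matched
kubectl get pv hostpath-pv
kubectl get pvc hostpath-pvc

# Write a file through the Pod, then read it back to confirm
# it landed on the node path behind the volume
kubectl exec hostpath-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl exec hostpath-pod -- cat /usr/share/nginx/html/index.html
```

Deleting the Pod and PVC leaves the data in /mnt/data because the reclaim policy is Retain.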
| Aspect | HostPath | NFS |
|---|---|---|
| Scope | Single node path | Shared network path |
| Access | Limited to Pods scheduled on that node | Can support shared access across nodes |
| Setup effort | Very low for labs | Moderate because a server is required |
| Typical use | Local experiments and node-specific data | Shared files and multi-node access |
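The comparison above assumes the PV/PVC route, but a Pod can also mount a HostPath volume directly, with no claim in between, as the standalone manifest listed at the top does. A minimal sketch of that pattern (the Pod name here is illustrative and may differ from the repo's hostpath.yaml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-direct-pod      # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: host-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: host-volume
      hostPath:
        path: /mnt/data
        type: DirectoryOrCreate  # creates the directory on the node if missing
```

The direct form is shorter, but it skips the claim layer, so the workload is hard-wired to a host path instead of a portable storage request.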
Demonstrate PV, PVC, and Pod mounting without standing up extra storage infrastructure.
Use it when the data is intentionally local to one machine and portability is not required.
Pair the demo with scheduling, failover, and rescheduling discussions so the tradeoffs stay clear.
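One concrete way to anchor the scheduling discussion: if the Pod is rescheduled onto a different node, it silently starts against that node's (likely empty) /mnt/data. Pinning the Pod to the node that holds the data makes the tradeoff visible. The node name below is an assumption; substitute the actual node that owns the directory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pinned-pod          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # assumed node name; use the node that owns /mnt/data
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: hostpath-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: hostpath-volume
      persistentVolumeClaim:
        claimName: hostpath-pvc
```

If that node goes down, the Pod stays Pending rather than coming up elsewhere with missing data, which is exactly the failover tradeoff the demo is meant to surface.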