NFS Storage Integration
NFS (Network File System) is one of the most commonly used storage protocols for Kubernetes integration. NFS provides file-based storage that can be shared across multiple pods, making it ideal for applications that need ReadWriteMany access mode. It’s simple to set up, widely supported, and works well for shared content, web servers, and content management systems.
What is NFS?
NFS is a distributed file system protocol that allows files on a remote server to be accessed as if they were local files. In Kubernetes, NFS volumes can be mounted into pods, allowing multiple pods to access the same files simultaneously.
Key advantages:
- Shared access - Multiple pods can read and write simultaneously (ReadWriteMany)
- Simple setup - Easy to configure and use
- Widely supported - Available on most storage systems
- Cost-effective - Uses standard network infrastructure
NFS in Kubernetes
Kubernetes supports NFS in two ways:
- In-tree NFS volume plugin - Built-in support (still functional)
- CSI NFS drivers - Modern CSI-based drivers (recommended for new deployments)
In-Tree NFS Plugin
The in-tree plugin allows direct NFS mounts in volume specifications:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany   # NFS supports multiple readers/writers
  nfs:
    server: nfs-server.example.com
    path: /exports/data
```
Characteristics:
- Simple configuration
- Direct NFS mount
- No dynamic provisioning (manual PV creation)
- Limited features
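For quick tests, the in-tree plugin can also mount an export directly in a Pod spec, with no PV/PVC pair at all. A minimal sketch (the server and path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      nfs:                              # In-tree NFS volume, mounted directly
        server: nfs-server.example.com
        path: /exports/data
```

This bypasses PV/PVC lifecycle management entirely, so it is best reserved for testing; production workloads should go through a PersistentVolumeClaim.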
CSI NFS Drivers
Modern CSI NFS drivers provide dynamic provisioning and additional features:
Popular CSI NFS drivers:
- csi-driver-nfs - Kubernetes community NFS CSI driver
- NFS Ganesha external provisioner - Community provisioner that runs its own (Ganesha) NFS server in-cluster
- NFS subdir external provisioner - Dynamic provisioning of subdirectories on an existing NFS export
- Vendor-specific NFS CSI drivers
Static Provisioning Example
Here’s a complete example using static provisioning:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/web-content
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # Empty string: bind to a statically created PV, not the default StorageClass
  resources:
    requests:
      storage: 500Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: nfs-pvc
```
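After applying the manifests, you can confirm that the claim bound to the static PV and that the pods actually mounted it. A sketch, using the resource names from the example above:

```shell
# Verify the PV and PVC bound to each other
kubectl get pv nfs-pv          # STATUS should be Bound
kubectl get pvc nfs-pvc        # STATUS should be Bound, VOLUME nfs-pv

# Confirm the NFS mount inside a running pod
kubectl exec deploy/web-server -- mount | grep nfs
```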
Dynamic Provisioning with NFS
For dynamic provisioning, use an NFS CSI driver. Here’s an example using the NFS subdir external provisioner:
1. Install the NFS provisioner and create a StorageClass (example - check the driver documentation):
```yaml
# Example StorageClass using the NFS subdir external provisioner.
# The NFS server address and export path are configured on the provisioner
# deployment itself, not in the StorageClass parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}-${.PVC.name}"   # Subdirectory naming pattern
```
2. Create a PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-dynamic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-dynamic
  resources:
    requests:
      storage: 100Gi
```
The provisioner creates a subdirectory on the NFS server for each PVC.
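The dynamic flow can be verified end to end. A sketch, using the names from the example above:

```shell
# The claim binds once the provisioner has created a backing PV
kubectl get pvc nfs-pvc-dynamic   # STATUS becomes Bound
kubectl get pv                    # A PV was created automatically for the claim

# On the NFS server: a subdirectory now exists for the claim,
# named according to pathPattern (namespace-name)
ls /exports/
```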
NFS Server Setup
Basic NFS Server (Linux)
1. Install the NFS server:
```bash
# Ubuntu/Debian
apt-get install -y nfs-kernel-server

# RHEL/CentOS
yum install -y nfs-utils
```
2. Create the export directory:
```bash
mkdir -p /exports/data
chown nobody:nogroup /exports/data
```
3. Configure exports by editing /etc/exports:
```
/exports/data *(rw,sync,no_subtree_check,no_root_squash)
```
Note: the * wildcard and no_root_squash are convenient for testing but insecure; for production, restrict the client range and prefer root_squash (see Security Considerations below).
4. Start the NFS services:
```bash
systemctl enable --now nfs-server
exportfs -ra   # Reload exports
```
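Before pointing Kubernetes at the server, it is worth confirming that the export is actually visible:

```shell
exportfs -v               # Active exports and their effective options
showmount -e localhost    # Exports as seen by clients
rpcinfo -p | grep nfs     # NFS service registered with the portmapper
```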
NFS Version
NFS has multiple versions (v3, v4, v4.1, v4.2). Specify the version in the PV:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v4
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1   # Use NFSv4.1
  nfs:
    server: nfs.example.com
    path: /exports/data
```
NFS version considerations:
- NFSv3 - Widely supported, simpler
- NFSv4 - Better security, stateful protocol
- NFSv4.1 - Parallel NFS (pNFS) support, better performance
- NFSv4.2 - Latest features, server-side copy
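To see which version a client actually negotiated, inspect the mount on the node where the pod is scheduled (nfsstat ships with nfs-utils/nfs-common):

```shell
# Per-mount flags, including the negotiated vers=
nfsstat -m

# Alternatively: "type nfs4" entries indicate NFSv4.x mounts
mount | grep 'type nfs'
```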
Security Considerations
NFS Security Options
1. Network restrictions:
Configure /etc/exports to restrict access:
```
/exports/data 10.0.0.0/8(rw,sync,no_subtree_check)   # Only allow a specific network
```
2. Kerberos authentication (NFSv4):
NFSv4 supports Kerberos for secure authentication:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-secure
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
    - sec=krb5   # Kerberos authentication
  nfs:
    server: nfs.example.com
    path: /exports/data
```
3. Root squash:
The root_squash option maps root user to nobody for security (default in most configurations).
Use Cases
NFS is ideal for:
- Shared web content - Multiple web server pods sharing HTML files
- Content management systems - Shared media and content files
- Shared configuration - Configuration files accessed by multiple pods
- Log aggregation - Centralized log storage
- File sharing - Applications that need shared file access
Performance Considerations
NFS performance depends on several factors:
- Network latency - Lower latency improves performance
- Network bandwidth - Higher bandwidth supports more throughput
- NFS version - NFSv4.1+ generally performs better
- Concurrent access - Multiple pods accessing simultaneously can impact performance
- File size - Small files have more overhead than large files
Optimization tips:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-optimized
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
    - rsize=1048576   # Read buffer size 1MB
    - wsize=1048576   # Write buffer size 1MB
    - hard            # Hard mount (retry indefinitely on failure)
    - timeo=600       # Timeout 60 seconds (timeo is in deciseconds)
    - retrans=2       # Retransmit 2 times before a major timeout
  nfs:
    server: nfs.example.com
    path: /exports/data
```
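To check whether the tuned options are taking effect and how the mount is performing, the standard NFS client tools help (nfsiostat is also part of nfs-utils/nfs-common):

```shell
# Confirm the effective mount options; rsize/wsize may be
# negotiated down by the server from the values requested
nfsstat -m

# Per-mount throughput and latency statistics, refreshed every 5 seconds
nfsiostat 5
```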
Complete Example: Shared Web Content
Here’s a complete example for shared web content:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-content-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/web-content
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-content-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # Bind to the statically created PV, not the default StorageClass
  resources:
    requests:
      storage: 1Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: web-content-pvc
```
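Because the volume is ReadWriteMany, a file written through one replica is immediately visible to all the others, which can be demonstrated directly. A sketch, using the names from the example above:

```shell
# Write a file through one replica (kubectl exec deploy/... picks a single pod)
kubectl exec deploy/web-server -- sh -c 'echo hello > /usr/share/nginx/html/index.html'

# List the pods, then read the file back from a *different* replica;
# every replica serves the same NFS-backed content
kubectl get pods
kubectl exec <another-web-server-pod> -- cat /usr/share/nginx/html/index.html
```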
Advantages
- ReadWriteMany support - Multiple pods can read and write simultaneously
- Simple setup - Easy to configure and use
- Widely supported - Available on most systems
- Cost-effective - Uses standard network infrastructure
- Flexible - Works with various storage systems
Limitations
- Network dependency - Requires stable network connection
- Performance - Slower than local storage or block storage
- Consistency - File locking and consistency can be complex
- Single point of failure - NFS server is a SPOF (mitigate with HA NFS)
- Security - Should use secure configurations (Kerberos, network restrictions)
Best Practices
- Use NFSv4.1+ - Better performance and features
- Configure network restrictions - Limit NFS access to authorized networks
- Use dedicated network - Isolate NFS traffic on dedicated network/VLAN
- Consider HA NFS - Use highly available NFS solutions for production
- Monitor performance - Monitor NFS performance and network utilization
- Optimize mount options - Tune rsize, wsize, and other options for your workload
- Use CSI drivers - Prefer CSI drivers for dynamic provisioning
- Plan for capacity - Monitor NFS server storage capacity
Troubleshooting
Mount Fails
If NFS mounts fail:
- Verify the NFS server is accessible: `showmount -e <nfs-server>`
- Check that firewall rules allow the NFS ports (2049 for NFSv4; 111 plus the mountd ports for NFSv3)
- Test the mount manually from a node: `mount -t nfs <server>:<path> /mnt/test`
- Check pod events: `kubectl describe pod <pod-name>`
- Review the NFS server logs
Performance Issues
If experiencing performance issues:
- Check network latency: `ping <nfs-server>`
- Monitor network bandwidth utilization
- Verify the NFS version in use (prefer v4.1+ if possible)
- Tune mount options (`rsize`, `wsize`)
- Consider a dedicated storage network
- Review NFS server performance metrics
Permission Issues
If experiencing permission issues:
- Check the NFS export permissions in `/etc/exports`
- Verify file permissions on the NFS server
- Check `root_squash` vs `no_root_squash` settings
- Review the pod's `securityContext` settings
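Most NFS permission problems come down to how numeric UIDs/GIDs map across the client/server boundary. A quick way to compare the two sides (a sketch):

```shell
# Effective UID/GID inside the pod
kubectl exec <pod-name> -- id

# Ownership as stored on the NFS server; -n shows the numeric IDs,
# which are what the server actually compares against
ls -ln /exports/data
```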
See Also
- Storage Integration - Storage integration overview
- Access Modes - ReadWriteMany access mode
- CSI Persistent Volumes - Container Storage Interface
- iSCSI - iSCSI block storage integration