Reclaim Policy

The reclaim policy determines what happens to a PersistentVolume and its data when the bound PersistentVolumeClaim is deleted. This is a critical setting that affects data persistence and storage cleanup. Understanding reclaim policies helps you protect important data and manage storage resources effectively.

What is a Reclaim Policy?

When a PVC is deleted, Kubernetes needs to decide what to do with the PersistentVolume that was bound to it. The reclaim policy controls this behavior—whether the volume is retained (keeping the data), deleted (destroying the data), or recycled (deprecated, wiped for reuse).

graph TB
    A[PVC Deleted] --> B{Reclaim Policy?}
    B -->|Retain| C[PV Retained<br/>Status: Released<br/>Data Preserved]
    B -->|Delete| D[PV Deleted<br/>Storage Destroyed<br/>Data Lost]
    B -->|Recycle| E[Deprecated<br/>Should Not Use]
    style A fill:#e1f5ff
    style C fill:#ffe1e1
    style D fill:#ffe1e1
    style E fill:#f3e5f5

Reclaim Policy Types

Retain

When a PVC is deleted, the PV is kept rather than deleted or made available for reuse. The PV’s status becomes “Released” and it still contains all the original data. Manual intervention is required to clean up or reuse the volume.

Key characteristics:

  • PV status changes to “Released”
  • Data remains intact on the underlying storage
  • PV cannot be reused until manually cleaned up
  • Administrator must manually delete the PV and storage
  • Useful for data recovery and backups

sequenceDiagram
    participant User
    participant K8s as Kubernetes
    participant PV as PersistentVolume
    participant Storage as Storage System
    User->>K8s: Delete PVC
    K8s->>PV: Set status to Released
    K8s->>PV: Retain PV (not deleted)
    PV->>Storage: Data preserved
    Note over PV,Storage: Manual cleanup required
    K8s->>User: PVC deleted, PV retained

Example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Retain  # Volumes retained when PVC deleted

When to use Retain:

  • Production databases and critical data
  • Data that must not be accidentally deleted
  • Compliance requirements for data retention
  • When you need time to backup data before deletion
  • Development environments where you want to inspect data after pod deletion

Manual cleanup process:

After a PVC is deleted and the PV is in Released state:

  1. Verify the PV is Released: kubectl get pv
  2. Delete the PV: kubectl delete pv <pv-name>
  3. Delete the underlying storage (cloud console, CLI, etc.)
  4. If reusing the PV, you may need to remove the claimRef: kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'
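The claimRef removal in step 4 is a JSON merge patch. A minimal Python sketch (the helper names are made up for illustration) that builds the patch body and the matching kubectl argument vector:

```python
import json

def claimref_patch() -> str:
    # JSON merge patch that clears spec.claimRef, so a Released PV
    # can return to Available and be bound by a new PVC
    return json.dumps({"spec": {"claimRef": None}})

def patch_command(pv_name: str) -> list:
    # Argument vector for the equivalent kubectl call (not executed here)
    return ["kubectl", "patch", "pv", pv_name, "-p", claimref_patch()]

print(patch_command("pv-manual"))
```

Building the patch with json.dumps rather than hand-writing the string avoids quoting mistakes when the command is assembled in a script.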

Delete

When a PVC is deleted, the PV is automatically deleted, and the underlying storage is destroyed. All data is permanently lost. This is the default for dynamically provisioned volumes.

Key characteristics:

  • PV is automatically deleted
  • Underlying storage is destroyed
  • All data is permanently lost
  • No manual cleanup required
  • Fast and automatic

sequenceDiagram
    participant User
    participant K8s as Kubernetes
    participant PV as PersistentVolume
    participant Storage as Storage System
    User->>K8s: Delete PVC
    K8s->>PV: Delete PV
    K8s->>Storage: Delete storage volume
    Storage->>K8s: Storage deleted
    K8s->>User: PVC and PV deleted

Example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delete-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
reclaimPolicy: Delete  # Volumes deleted when PVC deleted (default)

When to use Delete:

  • Development and testing environments
  • Temporary data and caches
  • Ephemeral workloads
  • When automatic cleanup is desired
  • Non-critical data that can be regenerated

Important considerations:

  • Data loss is permanent—ensure this is acceptable
  • Use with caution in production
  • Consider backups if data might be needed later
  • Faster cleanup than Retain (no manual steps)

Recycle

Deprecated - The Recycle policy is deprecated and should not be used. It was used to wipe the volume and make it available for reuse, but it has been replaced by dynamic provisioning with StorageClasses.

Why deprecated:

  • Unreliable (no guarantees data is fully wiped)
  • Security concerns (data might be recoverable)
  • Replaced by dynamic provisioning (better approach)
  • Only worked with a few volume types (NFS and HostPath)

Modern alternative:

Use dynamic provisioning with Delete policy instead:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete  # Use Delete, not Recycle

Setting Reclaim Policy

Reclaim policy can be set in two places:

In StorageClass (Dynamic Provisioning)

For dynamically provisioned volumes, set the reclaim policy in the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: production-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Retain  # All volumes from this StorageClass use Retain

All PVs created from this StorageClass will use the Retain policy.

In PersistentVolume (Static Provisioning)

For manually created PVs, set the reclaim policy directly:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # Set in PV spec
  storageClassName: manual
  hostPath:
    path: /mnt/data

Reclaim Policy Selection Guide

Use this decision tree to choose the right reclaim policy:

graph TD
    A[Choosing Reclaim Policy] --> B{Environment Type?}
    B -->|Production| C{Critical Data?}
    B -->|Development/Testing| D[Use Delete<br/>Automatic cleanup]
    C -->|Yes| E[Use Retain<br/>Manual cleanup]
    C -->|No| F{Need automatic cleanup?}
    F -->|Yes| D
    F -->|No| E
    style A fill:#e1f5ff
    style D fill:#e8f5e9
    style E fill:#fff4e1
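The decision tree above can be written as a small helper function. This is only a sketch; the function name and parameters are invented for illustration:

```python
def choose_reclaim_policy(environment: str, critical_data: bool = False,
                          auto_cleanup: bool = True) -> str:
    """Mirror the decision tree: dev/test gets Delete; production
    critical data gets Retain; otherwise Delete only when automatic
    cleanup is wanted."""
    if environment != "production":
        return "Delete"   # Development/testing: automatic cleanup
    if critical_data:
        return "Retain"   # Critical data: preserved, manual cleanup
    return "Delete" if auto_cleanup else "Retain"

print(choose_reclaim_policy("production", critical_data=True))  # Retain
```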

Changing Reclaim Policy

You can change the reclaim policy of an existing PV at any time; it does not need to be in the Released state. A common use is switching a Bound PV from Delete to Retain before deleting its PVC:

# Change reclaim policy
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Note: For dynamically provisioned volumes, it’s better to set the policy correctly in the StorageClass from the start.
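When scripting this change, it helps to generate the patch body and reject the deprecated Recycle value up front. A sketch (the helper name is made up):

```python
import json

VALID_POLICIES = {"Retain", "Delete"}  # Recycle is deprecated, so reject it

def reclaim_policy_patch(policy: str) -> str:
    # Build the -p argument for `kubectl patch pv`
    if policy not in VALID_POLICIES:
        raise ValueError(f"unsupported reclaim policy: {policy}")
    return json.dumps({"spec": {"persistentVolumeReclaimPolicy": policy}})

print(reclaim_policy_patch("Retain"))
```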

Common Patterns

Production Database (Retain)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: database-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Retain  # Protect database data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: database-storage
  resources:
    requests:
      storage: 200Gi

Development Environment (Delete)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dev-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
reclaimPolicy: Delete  # Auto-cleanup for dev
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dev-storage
  resources:
    requests:
      storage: 10Gi

Temporary Cache (Delete)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cache-storage
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
reclaimPolicy: Delete  # Cache can be regenerated
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cache-storage
  resources:
    requests:
      storage: 50Gi

Released PV Cleanup

When a PV is in Released state (Retain policy), you have several options:

Option 1: Delete the PV (Lose Data)

# Delete the PV object (this alone does not remove the underlying storage)
kubectl delete pv <pv-name>
# Then delete storage via cloud console/CLI

Option 2: Reuse the PV (Keep Data)

To reuse a Released PV, you need to remove the claimRef:

# Remove claimRef to make PV Available again
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'

# Verify PV is now Available
kubectl get pv <pv-name>

# Create new PVC that can bind to this PV

Warning: Reusing a PV with existing data may cause issues if the new application expects empty storage.

Option 3: Backup Then Delete

  1. Backup the data from the Released PV
  2. Delete the PV
  3. Create new storage as needed

Best Practices

  1. Use Retain for production - Protect critical data from accidental deletion
  2. Use Delete for development - Simplify cleanup in non-production environments
  3. Set policy in StorageClass - For dynamic provisioning, set the policy in StorageClass
  4. Document policies - Document which StorageClasses use which reclaim policies
  5. Plan for cleanup - If using Retain, have a process for cleaning up Released PVs
  6. Consider backups - Even with Retain, have backups for critical data
  7. Test cleanup procedures - Know how to clean up Released PVs before you need to
  8. Never use Recycle - It’s deprecated; use Delete with dynamic provisioning instead

Troubleshooting

Released PVs Accumulating

If you have many Released PVs (Retain policy):

  1. Review which StorageClasses use Retain
  2. Create a cleanup process for Released PVs
  3. Consider using Delete for non-critical data
  4. Automate cleanup where possible
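One way to start automating that cleanup is to filter the JSON that kubectl get pv -o json returns. The sketch below runs against a hand-written sample shaped like that output (the field paths match the real PV schema; the function name is invented):

```python
import json

# Sample shaped like `kubectl get pv -o json` output (hand-written here)
pv_list = json.loads("""
{"items": [
  {"metadata": {"name": "pv-a"},
   "spec": {"persistentVolumeReclaimPolicy": "Retain"},
   "status": {"phase": "Released"}},
  {"metadata": {"name": "pv-b"},
   "spec": {"persistentVolumeReclaimPolicy": "Delete"},
   "status": {"phase": "Bound"}}
]}
""")

def released_retained(pvs: dict) -> list:
    # Names of Retain-policy PVs now in Released state (cleanup candidates)
    return [item["metadata"]["name"] for item in pvs["items"]
            if item["status"]["phase"] == "Released"
            and item["spec"]["persistentVolumeReclaimPolicy"] == "Retain"]

print(released_retained(pv_list))  # ['pv-a']
```

The resulting names can be reviewed before any PV or underlying disk is deleted.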

Accidentally Deleted Data

If data was deleted (Delete policy) and you need it:

  1. Check if backups exist
  2. Review why Delete policy was used
  3. Consider changing to Retain for critical data
  4. Implement better backup strategies

PV Stuck in Released

If a PV is stuck in Released and you want to reuse it:

  1. Remove claimRef: kubectl patch pv <name> -p '{"spec":{"claimRef":null}}'
  2. Verify PV is Available: kubectl get pv <name>
  3. Create a new PVC that matches the PV

See Also