CronJobs
CronJobs create Jobs on a time-based schedule, similar to the Unix cron utility. They’re perfect for periodic tasks like backups, reports, data cleanup, or any recurring work that needs to run at specific times: at each scheduled time the CronJob creates a new Job object, and that Job runs pods until they complete.
What Are CronJobs?
A CronJob is a Kubernetes resource that creates Jobs on a schedule. It uses cron syntax to define when Jobs should be created. Each scheduled execution creates a new Job, which runs pods until they complete successfully.
Why Use CronJobs?
CronJobs are ideal for:
✅ Scheduled backups - Regular database or filesystem backups
✅ Periodic reports - Generate reports daily, weekly, or monthly
✅ Data cleanup - Remove old logs, temporary files, or expired data
✅ Health checks - Periodic system checks and maintenance
✅ Data synchronization - Sync data between systems on a schedule
✅ Batch processing - Process data at specific intervals
CronJob vs Job
CronJobs create Jobs on a schedule:
Use CronJobs when:
- Task needs to run on a schedule
- Recurring work
- Periodic maintenance
Use Jobs when:
- One-time task
- Manual execution
- No schedule needed
Cron Syntax
CronJobs use standard cron syntax with 5 fields:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
Common Examples
0 * * * *      - Every hour, at minute 0
0 0 * * *      - Every day at midnight
0 0 * * 0      - Every Sunday at midnight
0 0 1 * *      - First day of every month at midnight
*/5 * * * *    - Every 5 minutes
0 9-17 * * 1-5 - Every hour from 9 AM to 5 PM, Monday to Friday
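To make the field syntax concrete, here is a minimal sketch of a cron-field matcher. It is not a full cron parser: it handles only `*`, steps, ranges, and lists, and it ANDs all five fields, ignoring cron's special OR semantics between day-of-month and day-of-week when both are restricted.

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/5', '9-17', '1,3,5', or '7') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):          # step values, e.g. */5
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:                  # ranges, e.g. 9-17
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:           # a plain number
            return True
    return False

def cron_matches(expr: str, t: datetime) -> bool:
    """True if datetime t satisfies the 5-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    cron_dow = (t.weekday() + 1) % 7       # cron: 0 = Sunday; Python: 0 = Monday
    return (field_matches(minute, t.minute) and field_matches(hour, t.hour)
            and field_matches(dom, t.day) and field_matches(month, t.month)
            and field_matches(dow, cron_dow))

# "2 AM daily" matches 02:00; a weekday-only schedule rejects a Saturday.
print(cron_matches("0 2 * * *", datetime(2024, 1, 1, 2, 0)))        # True
print(cron_matches("0 9-17 * * 1-5", datetime(2024, 1, 6, 10, 0)))  # False (a Saturday)
```

Running the weekday business-hours expression against a Saturday timestamp shows why the day-of-week field matters even when the hour matches.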
Basic CronJob Example
Here’s a CronJob that runs a backup every day at 2 AM:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "0 2 * * *"  # Every day at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:latest
            command:
            - /bin/sh
            - -c
            - |
              backup-database
              upload-to-s3 s3://backups/$(date +%Y%m%d).sql
          restartPolicy: OnFailure
      backoffLimit: 2
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
Key fields:
- schedule - Cron expression defining when to run (required)
- jobTemplate - Template for the Jobs that get created (required)
- successfulJobsHistoryLimit - Number of successful Jobs to keep (default: 3)
- failedJobsHistoryLimit - Number of failed Jobs to keep (default: 1)
CronJob Lifecycle
At each scheduled time, the CronJob controller creates a Job from jobTemplate; the Job creates pods, which run to completion; finished Jobs are then retained or cleaned up according to the history limits:

CronJob --(schedule fires)--> Job --(creates)--> Pod(s) --(run to)--> Completion
Time Zones
By default, CronJobs use the timezone of the kube-controller-manager. You can specify a timezone using the timeZone field (Kubernetes 1.24+):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report
spec:
  schedule: "0 9 * * *"  # 9 AM in the specified timezone
  timeZone: "America/New_York"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: report-generator:latest
          restartPolicy: OnFailure
Concurrency Policy
Control what happens when a scheduled time arrives but a previous Job is still running:
Allow (Default)
Allow multiple Jobs to run concurrently:
spec:
  concurrencyPolicy: Allow
  schedule: "*/5 * * * *"  # Every 5 minutes
Forbid
Skip creating a new Job if the previous one is still running:
spec:
  concurrencyPolicy: Forbid
  schedule: "0 * * * *"  # Every hour
Replace
Replace the currently running Job with a new one:
spec:
  concurrencyPolicy: Replace
  schedule: "0 * * * *"
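The three policies can be summarized as a small decision function. This is a sketch of the controller's observable behavior, not its actual code:

```python
def on_schedule(policy: str, previous_still_running: bool) -> str:
    """What happens at a scheduled time under each concurrencyPolicy."""
    if not previous_still_running:
        return "start new Job"                          # no conflict: always start
    if policy == "Allow":
        return "start new Job (old one keeps running)"  # Jobs may overlap
    if policy == "Forbid":
        return "skip this run"                          # wait for the old Job
    if policy == "Replace":
        return "delete running Job, then start new Job"
    raise ValueError(f"unknown concurrencyPolicy: {policy}")

print(on_schedule("Forbid", previous_still_running=True))   # skip this run
print(on_schedule("Replace", previous_still_running=True))  # delete running Job, then start new Job
```

Note that the policy only matters when the previous Job is still running; otherwise all three behave identically.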
Starting Deadline
If a CronJob misses its scheduled time (e.g., cluster was down), it can still start within a deadline:
spec:
  schedule: "0 * * * *"
  startingDeadlineSeconds: 300  # Start within 5 minutes of the scheduled time
  jobTemplate:
    spec:
      # ...
If the deadline passes, the execution is skipped and counted as a missed execution.
Job History
CronJobs keep a history of completed Jobs. Control how many are retained:
spec:
  successfulJobsHistoryLimit: 5  # Keep 5 successful Jobs
  failedJobsHistoryLimit: 3  # Keep 3 failed Jobs
  jobTemplate:
    spec:
      # ...
Setting a limit to 0 means no Jobs of that kind are retained after they finish.
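The retention semantics can be sketched like this, using plain dicts as stand-ins for Job objects (a simplified model of the controller's cleanup, with history ordered oldest first):

```python
def prune_history(jobs, successful_limit=3, failed_limit=1):
    """Return the Jobs that would be retained, given jobs ordered oldest-first."""
    successes = [j for j in jobs if j["succeeded"]]
    failures = [j for j in jobs if not j["succeeded"]]
    kept = set()
    if successful_limit > 0:
        kept.update(j["name"] for j in successes[-successful_limit:])  # newest N successes
    if failed_limit > 0:
        kept.update(j["name"] for j in failures[-failed_limit:])       # newest M failures
    return [j for j in jobs if j["name"] in kept]

history = [
    {"name": "backup-1", "succeeded": True},
    {"name": "backup-2", "succeeded": False},
    {"name": "backup-3", "succeeded": True},
    {"name": "backup-4", "succeeded": True},
]
print([j["name"] for j in prune_history(history, successful_limit=2, failed_limit=1)])
# ['backup-2', 'backup-3', 'backup-4']
print(prune_history(history, successful_limit=0, failed_limit=0))
# [] - a limit of 0 keeps nothing
```

The oldest successful Job (backup-1) falls outside the limit of 2 and is dropped, while the single failed Job is kept because failedJobsHistoryLimit is 1.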
Suspending CronJobs
Temporarily disable a CronJob without deleting it:
spec:
  suspend: true  # CronJob is suspended
  schedule: "0 * * * *"
  # ...
Or use kubectl:
kubectl patch cronjob my-cronjob -p '{"spec":{"suspend":true}}'
Suspended CronJobs don’t create new Jobs, but existing Jobs continue running.
Common Use Cases
1. Daily Database Backup
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"  # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: postgres:15
            env:
            - name: PGHOST
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: host
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
            command:
            - /bin/sh
            - -c
            - |
              pg_dump -h $PGHOST -U postgres mydb > /backup/backup-$(date +%Y%m%d).sql
              aws s3 cp /backup/backup-$(date +%Y%m%d).sql s3://backups/
            volumeMounts:
            - name: backup-dir
              mountPath: /backup
          volumes:
          - name: backup-dir
            emptyDir: {}
          restartPolicy: OnFailure
      backoffLimit: 2
  successfulJobsHistoryLimit: 7  # Keep a week of backups
  failedJobsHistoryLimit: 3
2. Hourly Health Check
apiVersion: batch/v1
kind: CronJob
metadata:
  name: health-check
spec:
  schedule: "0 * * * *"  # Every hour
  concurrencyPolicy: Forbid  # Don't overlap
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: health-check
            image: health-checker:latest
            command: ["check-all-services"]
          restartPolicy: OnFailure
      backoffLimit: 1
  successfulJobsHistoryLimit: 24  # Keep a day
  failedJobsHistoryLimit: 12
3. Weekly Report Generation
apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-report
spec:
  schedule: "0 9 * * 1"  # 9 AM every Monday
  timeZone: "America/New_York"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: report-generator:latest
            # Run through a shell so $(date ...) is expanded;
            # plain command arrays are not shell-interpreted
            command:
            - /bin/sh
            - -c
            - generate-report --period=weekly --output=/reports/weekly-$(date +%Y%m%d).pdf
            volumeMounts:
            - name: reports
              mountPath: /reports
          volumes:
          - name: reports
            persistentVolumeClaim:
              claimName: reports-pvc
          restartPolicy: OnFailure
      backoffLimit: 3
  successfulJobsHistoryLimit: 4  # Keep a month
  failedJobsHistoryLimit: 2
4. Data Cleanup
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-old-data
spec:
  schedule: "0 3 * * *"  # 3 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: cleanup-tool:latest
            command:
            - cleanup
            - --older-than=30d
            - --type=logs,temp-files
          restartPolicy: OnFailure
      backoffLimit: 1
  successfulJobsHistoryLimit: 7
  failedJobsHistoryLimit: 7
Best Practices
- Use appropriate schedules - Avoid very frequent schedules (e.g., every minute) that could overload the cluster
- Set a concurrency policy - Choose Forbid if jobs shouldn’t overlap, Allow if they can run concurrently
- Manage job history - Set appropriate limits to avoid accumulating too many Jobs:

  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1

- Use a starting deadline - Handle cases where the cluster might be down at the scheduled time:

  startingDeadlineSeconds: 300

- Set resource limits - Jobs created by CronJobs should have resource constraints
- Handle failures - Set an appropriate backoffLimit in the Job template
- Use time zones - Specify timeZone if the schedule must follow a specific local time rather than the controller’s timezone
- Test schedules - Verify your cron expressions are correct before deploying
- Monitor job execution - Check that Jobs are being created and completing successfully
- Use suspend for maintenance - Suspend CronJobs during cluster maintenance
- Document schedules - Add annotations describing what the CronJob does and why:

  metadata:
    annotations:
      description: "Daily database backup at 2 AM"
      owner: "platform-team"
Common Operations
Create a CronJob
# Create from YAML
kubectl create -f cronjob.yaml
# Create from command
kubectl create cronjob my-cronjob --image=busybox --schedule="0 * * * *" -- echo "Hello"
View CronJob Status
# List CronJobs
kubectl get cronjobs
kubectl get cj
# Detailed information
kubectl describe cronjob my-cronjob
# View Jobs created by the CronJob (their names are prefixed with the CronJob's name)
kubectl get jobs
Suspend/Resume
# Suspend
kubectl patch cronjob my-cronjob -p '{"spec":{"suspend":true}}'
# Resume
kubectl patch cronjob my-cronjob -p '{"spec":{"suspend":false}}'
Trigger Immediately
Manually create a Job from the CronJob template:
kubectl create job --from=cronjob/my-cronjob manual-job-$(date +%s)
Delete a CronJob
# Delete CronJob (Jobs are also deleted)
kubectl delete cronjob my-cronjob
# Orphan Jobs
kubectl delete cronjob my-cronjob --cascade=orphan
Troubleshooting
Jobs Not Being Created
# Check CronJob status
kubectl describe cronjob my-cronjob
# Check if suspended
kubectl get cronjob my-cronjob -o jsonpath='{.spec.suspend}'
# Check schedule syntax
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'
# Check controller logs (if you have access)
kubectl logs -n kube-system -l component=kube-controller-manager | grep cronjob
Jobs Failing
# View Jobs created by the CronJob (their names are prefixed with the CronJob's name)
kubectl get jobs
# Check Job status
kubectl describe job <job-name>
# View pod logs
kubectl logs -l job-name=<job-name>
Too Many Jobs
# Check history limits
kubectl get cronjob my-cronjob -o jsonpath='{.spec.successfulJobsHistoryLimit}'
# List all Jobs
kubectl get jobs
# Delete old Jobs manually if needed
kubectl delete job <old-job-name>
See Also
- Jobs - Understanding the Job resource that CronJobs create
- Deployments - For continuously running applications
- ConfigMaps - For CronJob configuration
- Secrets - For sensitive CronJob data