Events

Kubernetes Events provide a chronological record of what has happened to resources in your cluster. They’re essential for understanding why pods fail, why services aren’t working, or why resources aren’t being created. This guide covers viewing, filtering, and using events for effective debugging.

What are Events?

Events are Kubernetes objects that record important lifecycle information:

  • Resource changes - Pods created, deleted, scheduled
  • State transitions - Pod started, stopped, restarted
  • Errors - Failed operations and warning conditions
  • Normal operations - Successful actions such as image pulls and scaling events
graph TB
  A[Kubernetes Components] --> B[Events]
  B --> C[API Server]
  C --> D[etcd]
  E[Controller Manager] --> B
  F[Scheduler] --> B
  G[Kubelet] --> B
  H[Other Components] --> B
  C --> I[kubectl get events]
  C --> J[kubectl describe]
  style A fill:#e1f5ff
  style B fill:#e8f5e9
  style C fill:#fff4e1
  style I fill:#f3e5f5
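
Each event is an ordinary API object, so you can inspect its fields directly. For example, to dump the raw events for a pod (the name my-pod is a placeholder):

# Show the raw Event objects for one pod as YAML
# (fields include reason, message, involvedObject, count, firstTimestamp, lastTimestamp)
kubectl get events --field-selector involvedObject.name=my-pod -o yaml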

Event Lifecycle

Events are created by Kubernetes components and stored in etcd:

sequenceDiagram
  participant Component as Kubernetes Component
  participant API as API Server
  participant etcd as etcd
  participant User as User
  Component->>API: Create Event
  API->>etcd: Store Event
  User->>API: Query Events
  API->>etcd: Retrieve Events
  etcd->>API: Return Events
  API->>User: Display Events

Event Retention

Events have limited retention:

  • Default: 1 hour
  • Scope: Events are namespaced objects
  • Automatic cleanup: Expired events are deleted automatically
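
The retention window is controlled by the API server's --event-ttl flag (default 1h). On clusters where the API server runs as a static pod (for example kubeadm), you can check the configured value as shown below; managed clusters typically do not expose it:

# Look for --event-ttl in the API server's command line (kubeadm-style clusters)
kubectl -n kube-system get pods -l component=kube-apiserver \
  -o jsonpath='{.items[*].spec.containers[*].command}' | tr ',' '\n' | grep event-ttl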

Event Storage

Events are stored in etcd but have limited retention. For long-term event history, use event routers or external monitoring.

Viewing Events

Basic Commands

# View events in current namespace
kubectl get events

# View events in specific namespace
kubectl get events -n <namespace>

# View events in all namespaces
kubectl get events -A

# Watch events in real-time
kubectl get events -w

# Watch events in namespace
kubectl get events -n <namespace> -w

Event Output Format

LAST SEEN   TYPE      REASON              OBJECT                MESSAGE
30s         Normal    Scheduled           pod/my-pod            Successfully assigned default/my-pod to node-1
25s         Normal    Pulling             pod/my-pod            Pulling image "nginx:latest"
20s         Normal    Pulled              pod/my-pod            Successfully pulled image "nginx:latest"
15s         Normal    Created             pod/my-pod            Created container nginx
10s         Normal    Started             pod/my-pod            Started container nginx
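
If the default columns don’t show what you need, custom columns let you build your own view, for example adding the count and the component that reported each event:

# Custom view: last-seen time, type, reason, count, reporting component, message
# (count and source may be empty, depending on which component created the event)
kubectl get events --sort-by='.lastTimestamp' \
  -o custom-columns=LAST-SEEN:.lastTimestamp,TYPE:.type,REASON:.reason,COUNT:.count,SOURCE:.source.component,MESSAGE:.message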

Sorting Events

# Sort by last-seen time (ascending; most recent events at the bottom)
kubectl get events --sort-by='.lastTimestamp'

# Sort by first-seen time
kubectl get events --sort-by='.firstTimestamp'

# Sort by occurrence count
kubectl get events --sort-by='.count'

Event Details

# Detailed event information
kubectl describe events

# Events for specific resource
kubectl describe pod <pod-name>

# The Events section at the end of the describe output lists recent events for the resource

Filtering Events

By Resource

# Events for specific pod
kubectl get events --field-selector involvedObject.name=<pod-name>

# Events for specific service
kubectl get events --field-selector involvedObject.name=<service-name>

# Events for specific node
kubectl get events --field-selector involvedObject.name=<node-name>

By Type

# Warning events only
kubectl get events --field-selector type=Warning

# Normal events only
kubectl get events --field-selector type=Normal

By Reason

# Failed events
kubectl get events --field-selector reason=Failed

# Scheduled events
kubectl get events --field-selector reason=Scheduled

Combined Filters

# Warning events for specific pod
kubectl get events \
  --field-selector involvedObject.name=<pod-name>,type=Warning

# Failed events in namespace
kubectl get events -n <namespace> \
  --field-selector type=Warning,reason=Failed

Event Types

Normal Events

Normal events indicate successful operations:

  • Scheduled - Pod successfully scheduled to node
  • Pulled - Image successfully pulled
  • Created - Container successfully created
  • Started - Container successfully started
  • SuccessfulCreate - Resource successfully created

Warning Events

Warning events indicate problems or failures:

  • Failed - Operation failed (the kubelet also uses this reason for image pull failures)
  • FailedScheduling - Pod could not be scheduled
  • FailedMount - Volume mount failed
  • BackOff - Container restarting (CrashLoopBackOff) or image pull backing off
  • Unhealthy - Health check failed
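
When you’re not sure which of these reasons applies, a quick tally of warning reasons shows where to look first:

# Count Warning events per reason, most frequent first
kubectl get events --field-selector type=Warning \
  -o custom-columns=REASON:.reason --no-headers | sort | uniq -c | sort -rn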

Common Events and Their Meanings

Pod Events

Scheduled

Normal  Scheduled  pod/my-pod  Successfully assigned default/my-pod to node-1

Meaning: Pod was successfully scheduled to a node.

Pulling/Pulled

Normal  Pulling  pod/my-pod  Pulling image "nginx:latest"
Normal  Pulled   pod/my-pod  Successfully pulled image "nginx:latest"

Meaning: Image is being pulled or was successfully pulled.

Created/Started

Normal  Created  pod/my-pod  Created container nginx
Normal  Started  pod/my-pod  Started container nginx

Meaning: Container was created and started successfully.

FailedScheduling

Warning  FailedScheduling  pod/my-pod  0/3 nodes are available: 3 Insufficient cpu

Meaning: Pod cannot be scheduled due to resource constraints or other issues.

Common causes:

  • Insufficient resources (CPU, memory)
  • No nodes match affinity/anti-affinity rules
  • Taints without tolerations
  • PVC pending
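
To see which cause applies, compare the scheduler’s message with node capacity, taints, and any pending claims:

# Resource requests vs. allocatable capacity per node
kubectl describe nodes | grep -A 8 'Allocated resources'

# Taints per node (pods need matching tolerations)
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# A Pending PVC can also block scheduling
kubectl get pvc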

FailedMount

Warning  FailedMount  pod/my-pod  MountVolume.SetUp failed for volume "pvc-xxx" : mount failed: exit status 32

Meaning: Volume mount failed.

Common causes:

  • PVC not bound
  • Storage provider issues
  • Permission problems
  • Network issues
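
A reasonable first pass is to confirm the claim is bound and the backing volume exists (resource names are placeholders):

# Is the claim bound?
kubectl get pvc <pvc-name>

# Provisioning or attach errors appear in the PVC's own events
kubectl describe pvc <pvc-name>

# Check the backing PersistentVolumes and StorageClasses
kubectl get pv
kubectl get storageclass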

Failed (image pull)

Warning  Failed  pod/my-pod  Failed to pull image "my-app:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied

Meaning: Failed to pull container image.

Common causes:

  • Image doesn’t exist
  • Wrong image name or tag
  • Private registry without credentials
  • Network issues
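
Before digging into the registry, confirm the exact image reference the pod uses and whether pull credentials are attached (the pod name is a placeholder):

# Exact image references the pod is trying to pull
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}{"\n"}'

# imagePullSecrets referenced by the pod (empty output means none)
kubectl get pod <pod-name> -o jsonpath='{.spec.imagePullSecrets}{"\n"}'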

BackOff

Warning  BackOff  pod/my-pod  Back-off restarting failed container

Meaning: Container is restarting due to failures (CrashLoopBackOff).

Common causes:

  • Application errors
  • Configuration issues
  • Missing dependencies
  • Health check failures
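
The event only says the container keeps restarting; the actual reason is usually in the previous attempt’s logs and exit code:

# Logs from the last failed run of the container
kubectl logs <pod-name> --previous

# Exit code of the last termination
kubectl get pod <pod-name> \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.exitCode}{"\n"}'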

Unhealthy

Warning  Unhealthy  pod/my-pod  Readiness probe failed: Get http://10.244.1.5:8080/health: dial tcp 10.244.1.5:8080: connect: connection refused

Meaning: Health check (readiness/liveness) failed.

Common causes:

  • Application not ready
  • Wrong health check configuration
  • Port mismatch
  • Application not responding
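
Comparing the probe configuration with what the application actually serves usually isolates the mismatch. The port and path below are illustrative, and the in-pod check assumes the image ships wget:

# How the probes are configured (path, port, delays, thresholds)
kubectl get pod <pod-name> -o yaml | grep -E -B 2 -A 10 'readinessProbe|livenessProbe'

# Try the endpoint from inside the pod (only if the image includes wget or curl)
kubectl exec <pod-name> -- wget -qO- http://localhost:8080/health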

Node Events

NodeNotReady

Warning  NodeNotReady  node/worker-1  Node is not ready

Meaning: The node has stopped reporting Ready; pods on it may be evicted if it stays unready.

Common causes:

  • Kubelet not running
  • Network issues
  • Disk pressure
  • Memory pressure
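
The node’s conditions show which check is failing, and the kubelet’s own logs (viewed on the node) usually explain why:

# Conditions include Ready, MemoryPressure, DiskPressure, PIDPressure
kubectl describe node <node-name> | grep -A 10 'Conditions:'

# On the node itself (requires SSH access)
systemctl status kubelet
journalctl -u kubelet --since "10 minutes ago"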

NodeReady

Normal  NodeReady  node/worker-1  Node worker-1 is now ready

Meaning: Node became ready.

Event-Based Troubleshooting Workflows

Debugging Pod Failures

# Step 1: Get pod events
kubectl get events --field-selector involvedObject.name=<pod-name>

# Step 2: Filter warnings
kubectl get events --field-selector involvedObject.name=<pod-name>,type=Warning

# Step 3: Check recent events
kubectl describe pod <pod-name> | grep -A 10 Events

# Step 4: Watch events in real-time
kubectl get events --field-selector involvedObject.name=<pod-name> -w

Debugging Scheduling Issues

# Get scheduling events
kubectl get events --field-selector reason=FailedScheduling

# Get scheduling events for specific pod
kubectl get events --field-selector \
  involvedObject.name=<pod-name>,reason=FailedScheduling

# Check node events
kubectl describe node <node-name>

Debugging Image Pull Issues

# Get image pull failure events (the kubelet reports them with reason Failed)
kubectl get events --field-selector reason=Failed

# Get image pull failure events for a specific pod
kubectl get events --field-selector \
  involvedObject.name=<pod-name>,reason=Failed

# Check pod details
kubectl describe pod <pod-name>

Watching Events

Real-Time Monitoring

# Watch all events
kubectl get events -w

# Watch events in namespace
kubectl get events -n <namespace> -w

# Watch events for specific resource
kubectl get events --field-selector involvedObject.name=<pod-name> -w

Filtered Watching

# Watch only warnings
kubectl get events --field-selector type=Warning -w

# Watch events for deployment
kubectl get events --field-selector involvedObject.kind=Deployment -w

Integration with Monitoring

Event Router

Route events to external systems:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-router
  template:
    metadata:
      labels:
        app: event-router
    spec:
      containers:
      - name: event-router
        image: eventrouter:latest   # replace with your event router image
        env:
        - name: SINK                # sink configuration depends on the router you use
          value: "gcp"

Prometheus Event Exporter

Export events as Prometheus metrics:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-exporter
  template:
    metadata:
      labels:
        app: event-exporter
    spec:
      containers:
      - name: event-exporter
        image: event-exporter:latest   # replace with your exporter image

Best Practices

1. Check Events First

When troubleshooting, always check events first:

kubectl describe <resource> | grep -A 10 Events

2. Filter by Type

Focus on warnings and errors:

kubectl get events --field-selector type=Warning

3. Sort by Time

Sort events chronologically:

kubectl get events --sort-by='.lastTimestamp'

4. Watch Events

Monitor events in real-time during debugging:

kubectl get events -w

5. Correlate with Logs

Combine events with logs for full picture:

kubectl describe pod <pod-name>
kubectl logs <pod-name>

6. Use Event Routers

For production, route events to monitoring systems for long-term storage.

7. Document Common Events

Document common events and their resolutions for your team.

Troubleshooting

No Events Showing

# Check if events exist
kubectl get events

# Check namespace
kubectl get events -n <namespace>

# Verify resource exists
kubectl get <resource> <name>

Events Not Updating

Events have limited retention (default 1 hour). For long-term tracking, use event routers or monitoring systems.

Too Many Events

# Filter by type
kubectl get events --field-selector type=Warning

# Filter by resource
kubectl get events --field-selector involvedObject.name=<resource-name>

# Show only the most recent entries
kubectl get events --sort-by='.lastTimestamp' | tail -n 50

See Also