Pods 101

Pods are the smallest deployable units in Kubernetes. Understanding pods is essential because everything else in Kubernetes—deployments, services, jobs—ultimately creates and manages pods. If you understand pods, you understand the foundation of how Kubernetes runs your applications.

Think of a pod as a logical host: a group of one or more containers that share storage and network resources and are scheduled together. Just as a physical host can run multiple processes that share the host’s resources, a pod can run multiple containers that share the pod’s resources.

What is a Pod?

A pod is a Kubernetes object that represents a running process (or processes) in your cluster. It contains:

  • One or more containers - The actual application containers
  • Shared storage - Volumes that all containers in the pod can access
  • Shared network - All containers share the same IP address and port space
  • Shared namespaces - Network, IPC, and UTS namespaces are shared, so containers can communicate via localhost

Pods are ephemeral—they can be created, destroyed, and recreated. Kubernetes doesn’t guarantee that a pod will survive node failures, scheduling decisions, or other cluster events. This is why you typically don’t create pods directly; instead, you use higher-level controllers like Deployments that manage pods for you.
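
For example, instead of creating an nginx pod yourself, you would typically define a Deployment whose pod template describes the container. A minimal sketch (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21

The Deployment creates pods from this template and replaces them as needed, so a pod lost to a node failure comes back automatically.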

Why Pods Exist

You might wonder why Kubernetes uses pods instead of just running containers directly. The answer lies in Kubernetes’ design philosophy:

Container Grouping

Pods allow you to group tightly coupled containers that need to work together. For example, a web server container and a log shipping sidecar container might run in the same pod because they:

  • Share the same lifecycle (start and stop together)
  • Need to communicate efficiently (via localhost)
  • Share the same storage (log files)

Atomic Scheduling

Kubernetes schedules pods, not individual containers. All containers in a pod are guaranteed to run on the same node. This ensures that containers that need to share resources or communicate efficiently are always co-located.

Shared Context

Containers in a pod share:

  • Network namespace - Same IP address, can communicate via localhost
  • Storage volumes - Can mount the same volumes
  • IPC namespace - Can use inter-process communication
  • UTS namespace - Share the same hostname

This shared context makes it easy for containers to work together as a cohesive unit.

Pod Structure

graph TB
  Pod[Pod] --> Container1[Container 1<br/>Main App]
  Pod --> Container2[Container 2<br/>Sidecar]
  Pod --> Volume1[Volume 1<br/>Shared Storage]
  Pod --> Volume2[Volume 2<br/>Config]
  Pod --> Network[Network<br/>Shared IP]
  Container1 --> Volume1
  Container2 --> Volume1
  Container1 --> Network
  Container2 --> Network
  style Pod fill:#e1f5ff
  style Container1 fill:#fff4e1
  style Container2 fill:#fff4e1
  style Volume1 fill:#e8f5e9
  style Volume2 fill:#e8f5e9
  style Network fill:#f3e5f5

A pod specification includes:

  • Metadata - Name, labels, annotations, namespace
  • Spec - Containers, volumes, restart policy, and other configuration
  • Status - Current state, pod IP, node assignment, container statuses

Single Container Pods

Most pods contain a single container. This is the simplest and most common case:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80

This pod runs a single nginx container. Even though it’s just one container, it’s still a pod—Kubernetes always works with pods, never containers directly.

Multi-Container Pods

Pods can contain multiple containers that work together. Common patterns include:

Sidecar Pattern

A sidecar container extends or enhances the main container:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: fluentd:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log
  volumes:
  - name: shared-logs
    emptyDir: {}

The nginx container writes logs, and the fluentd sidecar reads and ships them. They share the same volume and network.

Ambassador Pattern

An ambassador container proxies network traffic:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: my-app:1.0
  - name: proxy
    image: envoy:latest
    # Proxy configuration

The proxy container handles all network communication, simplifying the main application.

Adapter Pattern

An adapter container transforms output to a standard format:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
  - name: app
    image: my-app:1.0
  - name: adapter
    image: adapter:latest
    # Transforms app output

Pod Lifecycle

Pods go through several phases during their lifetime:

graph LR
  A[Pending] --> B[Running]
  B --> C[Succeeded]
  B --> D[Failed]
  B --> E[Unknown]
  A --> F[Terminating]
  F --> G[Terminated]
  style A fill:#fff4e1
  style B fill:#e8f5e9
  style C fill:#e1f5ff
  style D fill:#ffe1e1
  style E fill:#f3e5f5
  style F fill:#fff4e1
  style G fill:#f3e5f5

Pending

The pod has been accepted by Kubernetes, but one or more containers haven’t been created yet. This could be because:

  • The image is being pulled
  • The node doesn’t have enough resources
  • The scheduler hasn’t found a suitable node yet

Running

The pod has been bound to a node, and all containers have been created. At least one container is running, starting, or restarting.

Succeeded

All containers have terminated successfully and won’t restart. This is typical for Jobs and CronJobs.
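
For instance, a pod that runs a finite command with restartPolicy: Never ends up in Succeeded once the command exits with status 0 (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never
  containers:
  - name: task
    image: busybox
    command: ['sh', '-c', 'echo done']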

Failed

All containers have terminated, and at least one container failed (exited with non-zero status or was terminated by the system).

Unknown

The pod state can’t be determined, usually due to communication issues with the node.

Container States

Within a pod, each container has its own state:

  • Waiting - Container is waiting to start (pulling image, applying security context)
  • Running - Container is executing
  • Terminated - Container has stopped (either successfully or with an error)
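
These states appear in the pod’s status. A trimmed status block, roughly as kubectl would report it (field values are illustrative):

status:
  phase: Running
  podIP: 10.244.1.5
  containerStatuses:
  - name: main
    state:
      running:
        startedAt: "2024-01-01T12:00:00Z"
    ready: true
    restartCount: 0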

Pod Specifications

Here’s a more complete pod example:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: default
  labels:
    app: my-app
    version: "1.0"
  annotations:
    description: "My application pod"
spec:
  # Restart policy
  restartPolicy: Always
  
  # Containers
  containers:
  - name: main
    image: my-app:1.0
    imagePullPolicy: IfNotPresent
    
    # Resource requests and limits
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    
    # Ports
    ports:
    - name: http
      containerPort: 8080
      protocol: TCP
    
    # Environment variables
    env:
    - name: ENV_VAR
      value: "value"
    - name: CONFIG_MAP_VAR
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: config-key
    
    # Volume mounts
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
    - name: data-volume
      mountPath: /var/data
  
  # Volumes
  volumes:
  - name: config-volume
    configMap:
      name: my-config
  - name: data-volume
    emptyDir: {}
  
  # Node selector (optional)
  nodeSelector:
    disktype: ssd
  
  # Tolerations (optional)
  tolerations:
  - key: "special"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

Key Pod Concepts

Restart Policy

Controls what happens when a container exits:

  • Always - Restart the container whenever it exits, even on success (the default)
  • OnFailure - Restart only when the container exits with a non-zero status
  • Never - Never restart
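
For a one-off task such as a migration, you would typically override the default. A sketch of a pod that retries only on failure (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: db-migration
spec:
  restartPolicy: OnFailure
  containers:
  - name: migrate
    image: my-migrator:1.0

Note that restartPolicy applies to all containers in the pod, not to individual containers.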

Init Containers

Init containers run before the main containers and must complete successfully:

spec:
  initContainers:
  - name: init-db
    image: busybox
    command: ['sh', '-c', 'until nslookup mydb; do sleep 2; done']
  containers:
  - name: app
    image: my-app:1.0

Use init containers for setup tasks like waiting for dependencies or initializing data.

Probes

Health checks for containers:

  • livenessProbe - Determines if container is alive (restarts if fails)
  • readinessProbe - Determines if container is ready to serve traffic
  • startupProbe - Determines if container has started (useful for slow-starting apps)
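
A sketch of how the three probes might be configured on a container (paths, port, and timings are illustrative):

containers:
- name: app
  image: my-app:1.0
  startupProbe:
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5

The startup probe disables the other two until it succeeds, which keeps a slow-starting app from being killed by its liveness probe.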

Resource Requests and Limits

  • requests - Minimum resources guaranteed to the container
  • limits - Maximum resources the container can use

Pod Networking

All containers in a pod share:

  • The same IP address
  • The same port space (containers can’t use the same port)
  • Ability to communicate via localhost

Containers can reach each other using localhost:

containers:
- name: app
  image: my-app:1.0
- name: sidecar
  image: sidecar:1.0
  env:
  - name: APP_HOST
    value: "localhost"  # Can reach app container

Pod Storage

Containers in a pod can share volumes:

containers:
- name: writer
  volumeMounts:
  - name: shared-data
    mountPath: /data
- name: reader
  volumeMounts:
  - name: shared-data
    mountPath: /data
volumes:
- name: shared-data
  emptyDir: {}

Both containers can read and write to the same volume.

Best Practices

  1. Don’t Create Pods Directly - Use Deployments, StatefulSets, or other controllers that manage pods for you

  2. Use Labels - Label pods for easy selection and organization

  3. Set Resource Limits - Always specify resource requests and limits to help the scheduler

  4. Use Health Probes - Configure liveness and readiness probes for reliable operation

  5. Keep Pods Simple - If containers don’t need to share resources, use separate pods

  6. Use Init Containers - For setup tasks that must complete before the main container starts

  7. Handle Termination Gracefully - Containers should handle SIGTERM signals for clean shutdowns
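
For graceful termination (practice 7), the pod spec can set a grace period and a preStop hook. A sketch (values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: my-app:1.0
    lifecycle:
      preStop:
        exec:
          command: ['sh', '-c', 'sleep 5']

On deletion, Kubernetes runs the preStop hook and sends SIGTERM to the container; SIGKILL follows only if the container is still running when the grace period expires.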

Key Takeaways

  • Pods are the smallest deployable units in Kubernetes
  • Pods can contain one or more containers that share resources
  • All containers in a pod share network, storage, and IPC namespaces
  • Pods are ephemeral—they can be created and destroyed
  • Use higher-level controllers (Deployments) instead of creating pods directly
  • Pods provide atomic scheduling—all containers run on the same node

See Also