DaemonSets

DaemonSets ensure that a copy of a pod runs on every node (or a specific set of nodes) in the cluster. Unlike Deployments, which manage a fixed number of pod replicas, a DaemonSet runs one pod per node and automatically schedules a pod on each node that joins the cluster. This makes DaemonSets a natural fit for cluster-wide utilities such as logging agents, monitoring tools, and networking components.

What Are DaemonSets?

A DaemonSet is a Kubernetes workload that ensures all (or some) nodes run a copy of a pod. When you add a node to the cluster, the DaemonSet automatically schedules the pod on that node. When you remove a node, the pod is garbage collected.

graph TB
    A[DaemonSet] --> B[Node 1]
    A --> C[Node 2]
    A --> D[Node 3]
    B --> E[Pod on Node 1]
    C --> F[Pod on Node 2]
    D --> G[Pod on Node 3]
    H[New Node Added] --> I[DaemonSet Creates Pod on New Node]
    style A fill:#e1f5ff
    style E fill:#fff4e1
    style F fill:#fff4e1
    style G fill:#fff4e1
    style I fill:#e8f5e9

Why Use DaemonSets?

DaemonSets are ideal for:

  • Node-level services - One pod per node for cluster-wide utilities
  • Logging agents - Collect logs from every node
  • Monitoring agents - Monitor node-level metrics
  • Networking - Network policy agents, CNI plugins
  • Storage - Storage daemons that run on every node
  • Automatic node management - Pods automatically added to new nodes

DaemonSet vs Deployment

The key difference is how pods are distributed:

graph TB
    subgraph deployment[Deployment: 3 Replicas]
        A[Deployment] --> B[3 Pods Total]
        B --> C[Pod on Node 1]
        B --> D[Pod on Node 2]
        B --> E[Pod on Node 3]
        F[Node 4 Added] -.->|No Pod| G[Deployment doesn't add pod]
    end
    subgraph daemonset[DaemonSet]
        H[DaemonSet] --> I[1 Pod Per Node]
        I --> J[Pod on Node 1]
        I --> K[Pod on Node 2]
        I --> L[Pod on Node 3]
        M[Node 4 Added] --> N[Pod Automatically Created]
    end
    style A fill:#fff4e1
    style H fill:#e8f5e9
    style N fill:#e1f5ff

Use Deployments when:

  • You need a specific number of pods
  • Pods don’t need to run on every node
  • You want control over pod placement

Use DaemonSets when:

  • You need one pod per node
  • Node-level utilities are required
  • Automatic node management is needed

How DaemonSets Work

DaemonSets continuously watch the cluster and ensure every node (matching the selector) has a pod running. When a new node is added, the DaemonSet controller creates a pod on that node.

graph TD
    A[DaemonSet Created] --> B[Find All Nodes]
    B --> C{Node Matches Selector?}
    C -->|Yes| D[Check if Pod Exists]
    C -->|No| E[Skip Node]
    D -->|No Pod| F[Create Pod on Node]
    D -->|Pod Exists| G[Monitor Pod Health]
    F --> G
    G --> H{Pod Failed?}
    H -->|Yes| I[Replace Pod]
    H -->|No| J[Continue Monitoring]
    K[New Node Added] --> B
    L[Node Removed] --> M[Pod Garbage Collected]
    style A fill:#e1f5ff
    style F fill:#fff4e1
    style G fill:#e8f5e9
    style I fill:#ffe1e1
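
You can see this "one pod per matching node" behavior in most existing clusters, since components such as kube-proxy or the CNI agent are usually shipped as DaemonSets in the kube-system namespace (this assumes your cluster runs one of them). Compare the node count with the DESIRED and CURRENT columns:

# Count the nodes in the cluster
kubectl get nodes

# DaemonSets shipped with the cluster should show matching DESIRED/CURRENT counts
kubectl get daemonsets -n kube-system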

Basic DaemonSet Example

Here’s a DaemonSet that runs a logging agent on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-logging
  template:
    metadata:
      labels:
        name: fluentd-logging
    spec:
      tolerations:
      # Allow DaemonSet pods to run on control plane nodes
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Key fields:

  • selector - Labels to identify pods managed by this DaemonSet
  • template - Pod template used to create pods on nodes
  • tolerations - Allow pods to run on tainted nodes (often needed for control plane nodes)
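
To try it out, apply the manifest and confirm that one pod is scheduled per eligible node (the file name fluentd-daemonset.yaml below is just an example):

# Apply the manifest (file name is illustrative)
kubectl apply -f fluentd-daemonset.yaml

# DESIRED and CURRENT should match the number of eligible nodes
kubectl get daemonset fluentd-logging
kubectl get pods -l name=fluentd-logging -o wide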

Node Selection

By default, DaemonSets run on all nodes. You can limit which nodes run DaemonSet pods using:

Node Selectors

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd  # Only run on nodes with this label
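
For the selector above to match anything, the target nodes need the corresponding label. The disktype=ssd key/value mirrors the example; substitute your own node name for the placeholder:

# Label a node so the DaemonSet schedules a pod on it
kubectl label nodes <node-name> disktype=ssd

# Verify which nodes carry the label
kubectl get nodes -l disktype=ssd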

Node Affinity

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd

Taints and Tolerations

Control plane nodes often carry taints that prevent regular pods from being scheduled on them. DaemonSets that need to run on all nodes, including the control plane, must include matching tolerations:

spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
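
To check whether your control plane nodes actually carry these taints (the exact taint keys vary by distribution and Kubernetes version), inspect the nodes directly:

# Show the taints configured on every node
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints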

DaemonSet Lifecycle

graph TD
    A[DaemonSet Created] --> B[Find Matching Nodes]
    B --> C[Create Pods on Nodes]
    C --> D[All Nodes Have Pods]
    E[Update DaemonSet] --> F{Update Strategy}
    F -->|RollingUpdate| G[Update Pods Gradually]
    F -->|OnDelete| H[Manual Pod Deletion]
    G --> I[Delete Pod on Node 1]
    I --> J[Create New Pod on Node 1]
    J --> K[Delete Pod on Node 2]
    K --> L[Continue for All Nodes]
    M[New Node Added] --> N[DaemonSet Creates Pod]
    O[Node Removed] --> P[Pod Garbage Collected]
    style A fill:#e1f5ff
    style D fill:#e8f5e9
    style G fill:#fff4e1
    style N fill:#e8f5e9

Update Strategies

DaemonSets support two update strategies:

RollingUpdate (Default)

Updates pods one node at a time, with configurable parameters:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # Max pods unavailable during update
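
Newer Kubernetes versions (the maxSurge field was added for DaemonSets around 1.22 and became stable in 1.25) can also surge during the rollout instead of briefly removing the pod from a node. This is a sketch; check the API reference for your cluster version before relying on it:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Start the new pod on a node before removing the old one
      maxUnavailable: 0  # Must be 0 when maxSurge is non-zero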

OnDelete

Pods are updated only when manually deleted:

spec:
  updateStrategy:
    type: OnDelete

This gives you full control over when updates happen.
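
With OnDelete, after you update the DaemonSet spec you roll nodes one at a time by deleting their pods yourself; the pod name below is a placeholder:

# After editing the DaemonSet, delete a pod to get the new version on that node
kubectl delete pod <fluentd-logging-pod-on-node-1>

# The controller recreates the pod from the updated template
kubectl get pods -l name=fluentd-logging -o wide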

Common Use Cases

1. Logging Agents

Collect logs from every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.0.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
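
In practice Filebeat also needs its configuration, typically a ConfigMap mounted over the image's default config path (/usr/share/filebeat/filebeat.yml). This is a minimal sketch and the config content is illustrative only:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    # Read container logs from the host paths mounted above
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
    # Illustrative output; point this at Elasticsearch or Logstash in real setups
    output.console:
      pretty: true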

2. Monitoring Agents

Monitor node-level metrics:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1
        ports:
        - containerPort: 9100
          name: metrics
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
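
One common way to let Prometheus scrape these pods is a headless Service that selects them, giving one endpoint per node. This Service is a sketch added alongside the example above, and the name node-exporter is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  labels:
    app: node-exporter
spec:
  clusterIP: None          # Headless: one endpoint per node-exporter pod
  selector:
    app: node-exporter
  ports:
  - name: metrics
    port: 9100
    targetPort: 9100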

3. Network Policy Agents

Network plugins that need to run on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true
      containers:
      - name: calico-node
        image: calico/node:v3.24.0
        env:
        - name: CALICO_NODENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

Best Practices

  1. Use hostPath volumes carefully - Mount only what’s necessary for security

  2. Set resource limits - DaemonSets run on every node, so resource usage multiplies

resources:
  limits:
    memory: 200Mi
    cpu: 200m
  requests:
    memory: 100Mi
    cpu: 100m

  3. Include tolerations for control plane - If you need pods on control plane nodes

  4. Use node selectors or affinity - Limit which nodes run DaemonSet pods if needed

  5. Consider update strategy - RollingUpdate for zero-downtime, OnDelete for controlled updates

  6. Use health probes - Ensure DaemonSet pods are healthy

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

  7. Monitor resource usage - DaemonSets consume resources on every node

  8. Use appropriate image pull policy - IfNotPresent to avoid unnecessary pulls (see the snippet after this list)

  9. Test on a subset first - Use node selectors to test on specific nodes before rolling out

  10. Document hostPath usage - Clearly document why host filesystem access is needed
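
For example, setting the pull policy explicitly on the container (shown here for the fluentd example above) avoids re-pulling an image that is already present on the node:

containers:
- name: fluentd
  image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch
  imagePullPolicy: IfNotPresent  # Reuse the local image if it already exists on the node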

Common Operations

View DaemonSet Status

# List DaemonSets
kubectl get daemonsets
kubectl get ds

# Detailed information
kubectl describe daemonset fluentd-logging

# View pods (should see one per node)
kubectl get pods -l name=fluentd-logging -o wide

Updating DaemonSets

# Update container image
kubectl set image daemonset fluentd-logging fluentd=fluent/fluentd-kubernetes-daemonset:v2-debian-cloudwatch

# Check rollout status
kubectl rollout status daemonset fluentd-logging

# View rollout history
kubectl rollout history daemonset fluentd-logging

# Rollback
kubectl rollout undo daemonset fluentd-logging

Scaling

DaemonSets don’t use replica counts like Deployments. Instead, the number of pods equals the number of matching nodes. You can effectively “scale” by:

  • Adding/removing nodes
  • Using node selectors to target specific nodes
  • Using node affinity to control placement
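
A common workaround to temporarily "scale a DaemonSet to zero" is to patch in a node selector that matches no nodes, then remove it to restore pods everywhere; the label key used here is arbitrary:

# Suspend: select a label that no node has, so all pods are removed
kubectl patch daemonset fluentd-logging -p '{"spec":{"template":{"spec":{"nodeSelector":{"suspend-daemonset":"true"}}}}}'

# Resume: setting the field to null removes it in a strategic merge patch
kubectl patch daemonset fluentd-logging -p '{"spec":{"template":{"spec":{"nodeSelector":null}}}}'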

Deleting DaemonSets

# Delete DaemonSet (pods are also deleted)
kubectl delete daemonset fluentd-logging

# Delete without cascading (orphans pods)
kubectl delete daemonset fluentd-logging --cascade=orphan

Troubleshooting

Pods Not Creating on Nodes

# Check DaemonSet status
kubectl describe ds fluentd-logging

# Check node labels
kubectl get nodes --show-labels

# Check node selectors in DaemonSet
kubectl get ds fluentd-logging -o yaml | grep nodeSelector

# Check for taints
kubectl describe node <node-name> | grep Taints

# Verify tolerations
kubectl get ds fluentd-logging -o yaml | grep -A 10 tolerations

Pods Failing

# Check pod events
kubectl describe pod -l name=fluentd-logging

# Check pod logs
kubectl logs -l name=fluentd-logging

# Check if hostPath volumes are accessible
kubectl exec -it <pod-name> -- ls /var/log

Resource Issues

# Check resource usage
kubectl top pods -l name=fluentd-logging

# Verify resource limits
kubectl get ds fluentd-logging -o yaml | grep -A 5 resources

See Also