Node & Sidecar Logging

Beyond container logs, Kubernetes also generates logs at the node level (system components) and supports advanced patterns like sidecar containers for log aggregation. This guide covers node-level logging, sidecar patterns, and best practices for production log collection.

Node-Level Logging

Node-level logs include system components like kubelet, kube-proxy, and container runtime logs.

System Component Logs

graph TB
    A[Node Components] --> B[Kubelet]
    A --> C[Kube-proxy]
    A --> D[Container Runtime]
    A --> E[System Services]
    B --> B1[kubelet.log]
    C --> C1[kube-proxy.log]
    D --> D1[containerd.log]
    E --> E1[systemd logs]
    B1 --> F[Node Filesystem]
    C1 --> F
    D1 --> F
    E1 --> F
    F --> G[Log Collection Agent]
    style A fill:#e1f5ff
    style F fill:#e8f5e9
    style G fill:#fff4e1

Accessing Node Logs

Kubelet Logs

# On the node (if you have SSH access)
journalctl -u kubelet -n 100

# Kubelet logs from the last hour
journalctl -u kubelet --since "1 hour ago"

# Follow kubelet logs
journalctl -u kubelet -f

# Check kubelet service status
systemctl status kubelet

Container Runtime Logs

# containerd logs
journalctl -u containerd -n 100

# Docker logs (if using Docker)
journalctl -u docker -n 100

# Follow container runtime logs
journalctl -u containerd -f

System Logs

# All system logs
journalctl -n 100

# Logs from specific service
journalctl -u <service-name>

# Logs since specific time
journalctl --since "2024-01-15 10:00:00"

# Follow all logs
journalctl -f

Sidecar Pattern for Log Aggregation

The sidecar pattern uses a separate container in the same pod to collect and forward logs from the application container.

How Sidecar Logging Works

graph TB
    A[Application Container] --> B[Writes Logs]
    B --> C[Shared Volume]
    C --> D[Sidecar Container]
    D --> E[Log Processor]
    E --> F[Centralized Storage]
    G[stdout/stderr] --> H[Node Log Files]
    H --> I[Also Collected]
    style A fill:#e1f5ff
    style D fill:#e8f5e9
    style E fill:#fff4e1
    style F fill:#f3e5f5

Sidecar Pattern Example

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
    # Application writes logs to /var/log/app/app.log
  - name: log-collector
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
    - name: config
      mountPath: /fluent-bit/etc
  volumes:
  - name: logs
    emptyDir: {}
  - name: config
    configMap:
      name: fluent-bit-config

Benefits of Sidecar Pattern

  1. Separation of concerns - Application and log collection are decoupled
  2. Flexible processing - Sidecar can filter, transform, or enrich logs
  3. Independent scaling - Can scale log processing separately
  4. Multiple outputs - Can send logs to multiple destinations
  5. Application-agnostic - Works with any application
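The flexible-processing point can be illustrated with a Fluent Bit `[FILTER]` section in the sidecar's configuration. A minimal sketch using the standard `modify` filter (the field name and value are illustrative):

```
[FILTER]
    # Attach a static field to every record passing through the sidecar
    Name   modify
    Match  *
    Add    environment production
```

The same mechanism supports renaming or removing fields (`Rename`, `Remove`), so each application's sidecar can shape its own records before they leave the pod.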

Fluent Bit Sidecar Example

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf

    [INPUT]
        Name              tail
        Path              /var/log/app/*.log
        Parser            json
        Refresh_Interval  5

    [OUTPUT]
        # The native loki output speaks Loki's push API; the generic
        # http output would send a payload Loki does not accept
        Name   loki
        Match  *
        Host   loki.monitoring.svc.cluster.local
        Port   3100
        Labels job=app-sidecar
  parsers.conf: |
    # Mounting the ConfigMap over /fluent-bit/etc hides the image's
    # bundled parsers.conf, so the json parser is redefined here
    [PARSER]
        Name   json
        Format json
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fluent-bit
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: fluent-bit
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
    - name: config
      mountPath: /fluent-bit/etc
  volumes:
  - name: logs
    emptyDir: {}
  - name: config
    configMap:
      name: fluent-bit-config

DaemonSet Pattern for Log Collection

A DaemonSet runs a log collector on every node, collecting logs from all pods on that node.

How DaemonSet Logging Works

graph TB
    A[Node 1] --> B[DaemonSet Pod]
    A --> C[Application Pods]
    C --> D[/var/log/pods/]
    D --> B
    B --> E[Log Collector]
    F[Node 2] --> G[DaemonSet Pod]
    F --> H[Application Pods]
    H --> I[/var/log/pods/]
    I --> G
    G --> E
    E --> J[Centralized Storage]
    style B fill:#e1f5ff
    style G fill:#e1f5ff
    style E fill:#e8f5e9
    style J fill:#fff4e1

Fluent Bit DaemonSet Example

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      name: fluent-bit
  template:
    metadata:
      labels:
        name: fluent-bit
    spec:
      # Note: the kubernetes filter in the ConfigMap queries the API server;
      # in a real deployment, attach a ServiceAccount with RBAC to read pods
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf

    [INPUT]
        Name              tail
        # Symlinks in /var/log/containers line up with Kube_Tag_Prefix below
        Path              /var/log/containers/*.log
        Parser            cri
        Tag               kubernetes.*
        Refresh_Interval  5
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On

    [FILTER]
        Name                kubernetes
        Match               kubernetes.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kubernetes.var.log.containers.
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [OUTPUT]
        # The native loki output speaks Loki's push API; the generic
        # http output would send a payload Loki does not accept
        Name   loki
        Match  *
        Host   loki.monitoring.svc.cluster.local
        Port   3100
        Labels job=fluent-bit
  parsers.conf: |
    # Redefined here because mounting the ConfigMap over /fluent-bit/etc
    # hides the image's bundled parsers.conf
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) ?(?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

Comparing Patterns

Sidecar vs DaemonSet

graph TB
    A[Log Collection Pattern] --> B[Sidecar]
    A --> C[DaemonSet]
    B --> B1[Per Pod]
    B --> B2[Application-Specific]
    B --> B3[Higher Resource Usage]
    B --> B4[More Control]
    C --> C1[Per Node]
    C --> C2[All Pods]
    C --> C3[Lower Resource Usage]
    C --> C4[Less Control]
    style B fill:#e1f5ff
    style C fill:#e8f5e9

Sidecar Pattern:

  • ✅ Application-specific processing
  • ✅ More control over log collection
  • ✅ Can use shared volumes
  • ❌ Higher resource usage
  • ❌ More complex configuration

DaemonSet Pattern:

  • ✅ Lower resource usage
  • ✅ Simpler deployment
  • ✅ Automatic scaling with nodes
  • ❌ Less application-specific control
  • ❌ All pods use same configuration

Fluent Bit vs Fluentd

Fluent Bit

Lightweight log processor and forwarder:

  • Low resource usage - Designed for edge computing
  • Fast - High performance
  • Simple configuration - Easier to configure
  • Limited plugins - Smaller plugin ecosystem

Fluentd

Full-featured log collector:

  • Rich features - More processing capabilities
  • Large plugin ecosystem - Many plugins available
  • Higher resource usage - More memory/CPU
  • Complex configuration - More configuration options

Best Practices

1. Choose the Right Pattern

  • Use DaemonSet for standard log collection across all pods
  • Use Sidecar for application-specific processing or when you need more control

2. Resource Considerations

# Set resource limits for log collectors
resources:
  requests:
    memory: "64Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "200m"

3. Log Rotation

Configure log rotation to prevent disk space issues:

  • Container logs: Rotated automatically by the kubelet (defaults: 10Mi per file, 5 files per container)
  • Node logs: Configure journald limits
  • Application logs: Implement rotation in application
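The container-log defaults above are kubelet settings and can be tuned in the KubeletConfiguration file (the path varies by distribution; /var/lib/kubelet/config.yaml is typical):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate each container log file at 10 MiB
containerLogMaxFiles: 5     # keep at most 5 files per container
```

For node logs, journald growth can be capped with settings such as `SystemMaxUse=` and `MaxRetentionSec=` in /etc/systemd/journald.conf.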

4. Structured Logging

Use structured formats for easier parsing:

{"timestamp":"2024-01-15T10:30:45Z","level":"INFO","message":"..."}
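In practice this means the application emits one JSON object per line on stdout. A minimal Python sketch (the field names beyond timestamp/level/message are illustrative):

```python
import json
from datetime import datetime, timezone

def log_event(level, message, **fields):
    """Emit one structured log record as a single JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(record)
    print(line)  # stdout/stderr is what the container runtime captures
    return line

log_event("INFO", "request handled", path="/healthz", status=200)
```

Because every record is a single JSON line, a collector can parse it without custom regexes.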

5. Add Metadata

Enrich logs with Kubernetes metadata:

  • Pod name
  • Namespace
  • Container name
  • Labels
  • Node information
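Pod-level fields come from the kubernetes filter shown earlier. Values the filter cannot see, such as the node name, can be exposed to the collector container as an environment variable via the Downward API (`fieldRef: spec.nodeName`) and attached with Fluent Bit's record_modifier filter; a sketch:

```
[FILTER]
    # NODE_NAME is assumed to be set on the collector container
    # via the Downward API (fieldRef: spec.nodeName)
    Name    record_modifier
    Match   *
    Record  node_name ${NODE_NAME}
```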

6. Filtering and Parsing

Filter and parse logs in the collector:

  • Parse JSON logs
  • Extract fields
  • Filter unnecessary logs
  • Add metadata
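With Fluent Bit, dropping noisy records is typically done with the grep filter; a minimal sketch (the regex is illustrative):

```
[FILTER]
    # Discard records whose log field mentions the health-check path
    Name     grep
    Match    kubernetes.*
    Exclude  log /healthz
```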

7. Multiple Outputs

Send logs to multiple destinations:

  • Development: Local storage
  • Production: Log solutions (Loki, Elasticsearch)
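Fluent Bit fans out by declaring several [OUTPUT] sections, each with its own Match pattern; a sketch (the Loki address is the same illustrative host used earlier):

```
# Copy everything to the collector's own stdout for local debugging
[OUTPUT]
    Name   stdout
    Match  *

# Forward application records to Loki
[OUTPUT]
    Name   loki
    Match  kubernetes.*
    Host   loki.monitoring.svc.cluster.local
    Port   3100
```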

8. Monitoring Log Collectors

Monitor the log collectors themselves:

  • Resource usage
  • Error rates
  • Queue lengths
  • Processing latency
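Fluent Bit exposes its own metrics over a built-in HTTP server, enabled in the [SERVICE] section:

```
[SERVICE]
    Flush        1
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
```

Prometheus-format metrics (input/output record counts, errors, retries) are then available at /api/v1/metrics/prometheus on port 2020 and can be scraped like any other endpoint.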

Troubleshooting

Sidecar Not Collecting Logs

# Check sidecar container logs
kubectl logs <pod-name> -c <sidecar-container>

# Verify shared volume
kubectl exec <pod-name> -c app -- ls -la /var/log/app

# Check sidecar configuration
kubectl get configmap <config-name> -o yaml

DaemonSet Not Running

# Check DaemonSet status
kubectl get daemonset -n logging

# Check pod status
kubectl get pods -n logging -l name=fluent-bit

# Check logs
kubectl logs -n logging -l name=fluent-bit

# Check node labels/taints
kubectl get nodes --show-labels

High Resource Usage

# Check resource usage
kubectl top pods -n logging

# Check log volume on the shared mount
kubectl exec <pod-name> -c log-collector -- du -sh /var/log/app

# Adjust resource limits
# Reduce buffer sizes
# Increase flush intervals

See Also