CNI Basics

The Container Network Interface (CNI) is a specification and library for writing plugins that configure network interfaces in Linux containers. In Kubernetes, CNI plugins are responsible for assigning IP addresses to pods and configuring their network connectivity. A working knowledge of CNI is essential for understanding how pods get network access in Kubernetes.

What is CNI?

CNI is a standard interface between container runtimes (like containerd or CRI-O) and network plugins. When Kubernetes creates a pod:

  1. Container runtime calls the CNI plugin
  2. CNI plugin assigns an IP address to the pod
  3. CNI plugin configures network interfaces, routes, and bridges
  4. Pod gets network connectivity

graph LR
    A[Container Runtime] --> B[CNI Plugin]
    B --> C[Network Configuration]
    C --> D[IP Assignment]
    D --> E[Pod Network Ready]
    style A fill:#e1f5ff
    style B fill:#fff4e1
    style E fill:#e8f5e9

How CNI Works

CNI plugins are standalone executables. The container runtime invokes them with the network configuration passed as JSON on stdin and runtime parameters (container ID, network namespace path, interface name) passed in CNI_* environment variables.
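This invocation contract can be sketched in Python. The function names below are illustrative, not part of any real runtime; only the CNI_* variable names and the stdin/stdout JSON exchange come from the CNI specification.

```python
import json
import subprocess

def build_cni_env(command, container_id, netns_path,
                  ifname="eth0", cni_path="/opt/cni/bin"):
    """Environment a container runtime sets before exec'ing a CNI plugin."""
    return {
        "CNI_COMMAND": command,          # ADD, DEL, or CHECK
        "CNI_CONTAINERID": container_id,
        "CNI_NETNS": netns_path,         # e.g. /var/run/netns/<pod-id>
        "CNI_IFNAME": ifname,            # interface name inside the pod
        "CNI_PATH": cni_path,            # directory holding plugin binaries
    }

def call_plugin(config, env):
    """Exec the plugin binary named by config["type"]; the plugin reads
    the network configuration as JSON on stdin and, on success, prints a
    JSON result (IPs, routes, DNS) on stdout."""
    proc = subprocess.run(
        [env["CNI_PATH"] + "/" + config["type"]],
        input=json.dumps(config).encode(),
        env=env, capture_output=True, check=True,
    )
    return json.loads(proc.stdout)
```

A runtime would call `build_cni_env("ADD", ...)` when a pod starts and `build_cni_env("DEL", ...)` when it is torn down, reusing the same configuration file both times.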

CNI Operations

CNI defines three main operations:

  1. ADD - Attach a container to the network (assign IP, configure interfaces)
  2. DEL - Detach a container from the network (release IP, clean up interfaces)
  3. CHECK - Verify that a container’s network is still configured as expected (introduced in CNI spec 0.4.0)

graph TB
    A[Pod Created] --> B[CNI ADD Called]
    B --> C[Plugin Assigns IP]
    C --> D[Plugin Configures Network]
    D --> E[Pod Network Ready]
    F[Pod Deleted] --> G[CNI DEL Called]
    G --> H[Plugin Cleans Up]
    H --> I[Network Resources Released]
    style B fill:#e8f5e9
    style G fill:#fff4e1

CNI Plugin Lifecycle

When a pod is created:

  1. The container runtime creates the pod’s network namespace
  2. The runtime calls the CNI plugin with the ADD command
  3. CNI plugin reads configuration
  4. CNI plugin assigns IP from IPAM (IP Address Management)
  5. CNI plugin configures network interfaces
  6. CNI plugin sets up routes and bridges
  7. Pod can now communicate on the network

When a pod is deleted:

  1. The container runtime calls the CNI plugin with the DEL command
  2. CNI plugin removes network interfaces
  3. CNI plugin releases IP address back to pool
  4. CNI plugin cleans up routes and bridges

graph TB
    A[Pod Creation] --> B[Network Namespace Created]
    B --> C[CNI ADD]
    C --> D[IPAM Allocates IP]
    D --> E[Interface Configured]
    E --> F[Routes Set Up]
    F --> G[Pod Ready]
    H[Pod Deletion] --> I[CNI DEL]
    I --> J[Interface Removed]
    J --> K[IP Released]
    K --> L[Cleanup Complete]
    style C fill:#e8f5e9
    style I fill:#fff4e1
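Seen from the plugin’s side, the ADD/DEL lifecycle is mostly bookkeeping around an IP pool. The following is a deliberately simplified toy, not a real plugin: a real one would also create veth pairs, move interfaces into the pod’s namespace, and program routes. The pool values are illustrative.

```python
# Toy in-memory IP pool standing in for a real IPAM backend.
POOL = [f"10.244.1.{i}" for i in range(2, 255)]
ALLOCATED = {}  # container_id -> IP

def handle(command, container_id, config):
    """Skeleton of a CNI plugin's ADD/DEL handling: allocate an IP on
    ADD and report it back to the runtime; release it on DEL."""
    if command == "ADD":
        ip = POOL.pop(0)
        ALLOCATED[container_id] = ip
        # A plugin prints a result like this on stdout for the runtime.
        return {
            "cniVersion": config["cniVersion"],
            "ips": [{"address": ip + "/16", "gateway": "10.244.0.1"}],
        }
    if command == "DEL":
        ip = ALLOCATED.pop(container_id, None)
        if ip is not None:
            POOL.append(ip)   # release the IP back to the pool
        return {}             # DEL should succeed even if nothing was found
    raise ValueError(f"unsupported CNI_COMMAND: {command}")
```

Note that DEL is written to be idempotent: the spec expects cleanup to succeed even when the runtime retries it for a container that was already removed.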

CNI Configuration

CNI plugins are configured via JSON files, typically in /etc/cni/net.d/:

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cnio0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [
      {
        "dst": "0.0.0.0/0"
      }
    ]
  }
}

Key fields:

  • cniVersion - CNI specification version the configuration conforms to
  • name - Network name
  • type - Name of the CNI plugin binary to invoke (bridge, calico, etc.)
  • ipam - IP Address Management configuration
  • ipam.subnet - IP range that pod IPs are allocated from
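Because the configuration is plain JSON, loading and sanity-checking it is straightforward. A minimal sketch (the helper name is illustrative):

```python
import json

# Fields a runtime needs before it can exec the plugin binary.
REQUIRED = ("cniVersion", "name", "type")

def load_cni_config(text):
    """Parse a single-plugin CNI config and verify the required fields."""
    config = json.loads(text)
    missing = [field for field in REQUIRED if field not in config]
    if missing:
        raise ValueError(f"invalid CNI config, missing: {missing}")
    return config

example = """
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cnio0",
  "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
}
"""
config = load_cni_config(example)
```

The `type` field is what maps the configuration to an executable: the runtime execs `<cniBinDir>/<type>`, here `/opt/cni/bin/bridge`.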

IP Address Management (IPAM)

IPAM plugins manage IP address allocation:

host-local IPAM

Allocates IPs from a local subnet:

{
  "type": "host-local",
  "subnet": "10.244.0.0/16",
  "rangeStart": "10.244.1.0",
  "rangeEnd": "10.244.1.255",
  "gateway": "10.244.0.1"
}
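The allocation strategy behind host-local can be modeled in a few lines: hand out addresses sequentially between rangeStart and rangeEnd, and return released addresses to the pool. This toy keeps state in memory only; the real plugin also persists allocations to disk so they survive node restarts.

```python
import ipaddress

class HostLocalIPAM:
    """Toy model of host-local allocation over [rangeStart, rangeEnd]."""

    def __init__(self, range_start, range_end):
        self.start = ipaddress.ip_address(range_start)
        self.end = ipaddress.ip_address(range_end)
        self.in_use = set()

    def allocate(self):
        # Walk the range and take the first free address.
        addr = self.start
        while addr <= self.end:
            if addr not in self.in_use:
                self.in_use.add(addr)
                return str(addr)
            addr += 1
        raise RuntimeError("IP pool exhausted")

    def release(self, ip):
        # Releasing an unknown IP is a no-op, matching idempotent DEL.
        self.in_use.discard(ipaddress.ip_address(ip))
```

Exhaustion of this pool is exactly the "pods stuck without IPs" failure mode covered in the troubleshooting section: once every address in the range is in use, ADD fails until something is released.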

DHCP IPAM

Uses DHCP for IP allocation:

{
  "type": "dhcp"
}

Static IPAM

Assigns static IPs:

{
  "type": "static",
  "addresses": [
    {
      "address": "10.244.1.5/24",
      "gateway": "10.244.0.1"
    }
  ]
}

Network Namespaces

CNI plugins work with Linux network namespaces:

  • Each pod gets its own network namespace
  • Isolated network stack - Pod has its own interfaces, routes, etc.
  • CNI plugin configures the namespace

graph TB
    A[Node] --> B[Host Network Namespace]
    A --> C[Pod 1 Network Namespace]
    A --> D[Pod 2 Network Namespace]
    A --> E[Pod 3 Network Namespace]
    F[CNI Plugin] --> C
    F --> D
    F --> E
    style B fill:#fff4e1
    style C fill:#e8f5e9
    style D fill:#e8f5e9
    style E fill:#e8f5e9

CNI Plugin Types

Bridge Plugin

Creates a Linux bridge and connects containers:

graph LR
    A[Pod 1] --> B[Bridge]
    C[Pod 2] --> B
    D[Pod 3] --> B
    B --> E[Host Interface]
    style B fill:#e8f5e9
    style E fill:#fff4e1

Host-Device Plugin

Moves host device into container namespace.

IPVLAN Plugin

Creates IPVLAN interfaces for containers.

Macvlan Plugin

Creates Macvlan interfaces for containers.

Loopback Plugin

Provides loopback interface (always included).

CNI in Kubernetes

In Kubernetes, CNI plugins are configured at the cluster level:

Plugin Location

CNI plugins are executables in /opt/cni/bin/:

/opt/cni/bin/
├── bridge
├── calico
├── cilium
├── flannel
└── ...

Configuration Location

CNI configuration is in /etc/cni/net.d/:

/etc/cni/net.d/
├── 10-calico.conflist
├── 20-flannel.conflist
└── ...

kubelet Configuration

Before Kubernetes 1.24 (the dockershim removal), kubelet invoked CNI itself and was configured with:

# legacy kubelet settings (pre-1.24)
networkPlugin: cni
cniBinDir: /opt/cni/bin
cniConfDir: /etc/cni/net.d

With a CRI runtime such as containerd or CRI-O, the runtime invokes CNI instead, and the equivalent bin/conf directories are set in the runtime’s own configuration.

CNI Plugin Selection

The first valid CNI configuration found is used:

  1. Reads the /etc/cni/net.d/ directory
  2. Sorts the files lexicographically (hence numeric prefixes like 10- and 20-)
  3. Uses the first valid .conf, .conflist, or .json file
  4. Calls the CNI plugin(s) named in that configuration

Important: Only one CNI plugin should be active at a time.
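The selection logic above can be sketched directly (the function name is illustrative):

```python
import json
import os

def pick_cni_config(conf_dir="/etc/cni/net.d"):
    """Return the configuration a runtime would select: the
    lexicographically first parseable .conf/.conflist/.json file."""
    for name in sorted(os.listdir(conf_dir)):
        if not name.endswith((".conf", ".conflist", ".json")):
            continue
        try:
            with open(os.path.join(conf_dir, name)) as f:
                return name, json.load(f)
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or invalid files, try the next one
    return None, None
```

This is why a leftover config file from a previously installed CNI can shadow the one you intend to use: a file named 05-old.conflist sorts before 10-calico.conflist and wins.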

Popular CNI Plugins

Calico

  • Network policies
  • BGP routing
  • IP-in-IP or VXLAN encapsulation

Cilium

  • eBPF-based
  • High performance
  • Network policies

Flannel

  • Simple overlay network
  • VXLAN or host-gw backend
  • Easy to set up

Weave Net

  • Overlay network
  • Automatic mesh
  • Network policies

CNI and Network Policies

CNI plugins that support Network Policies:

  • Calico - Full support
  • Cilium - Full support
  • Weave Net - Full support
  • Flannel - No native support (often paired with Calico, as “Canal”, when policies are needed)

If your CNI plugin doesn’t support Network Policies, Network Policy resources will have no effect.

Troubleshooting CNI

Pods Not Getting IPs

  1. Check CNI plugin: ls /opt/cni/bin/
  2. Check configuration: cat /etc/cni/net.d/*
  3. Check kubelet logs: journalctl -u kubelet
  4. Verify plugin executable: Ensure plugin has execute permissions
  5. Check IPAM: Verify IP pool has available addresses
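For the last check, the default host-local IPAM records each allocation as a file named after the IP under /var/lib/cni/networks/<network-name>/ (verify the path on your distribution). A small helper can count what is in use; the filtering heuristic below is an assumption for IPv4-only pools:

```python
import os

def allocated_ips(network="mynet", state_dir="/var/lib/cni/networks"):
    """List the IPv4 addresses host-local has allocated for a network
    by reading its on-disk store: one file per allocated IP."""
    path = os.path.join(state_dir, network)
    if not os.path.isdir(path):
        return []
    # Keep only entries that look like IPv4 addresses, skipping
    # bookkeeping files such as "last_reserved_ip.0" and "lock".
    return [f for f in os.listdir(path) if f[0].isdigit() and "." in f]
```

Comparing the count against the configured range size tells you whether the pool is simply exhausted.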

Network Connectivity Issues

  1. Check bridge/interface: ip link show
  2. Verify routes: ip route show
  3. Test connectivity: ping from pod
  4. Check CNI logs: Review CNI plugin logs
  5. Verify subnet: Ensure pod subnet doesn’t conflict

CNI Plugin Not Called

  1. Check configuration: Verify the CNI settings in the container runtime (or networkPlugin: cni on a legacy kubelet)
  2. Verify CNI paths: Check the configured bin and conf directories (/opt/cni/bin, /etc/cni/net.d)
  3. Check kubelet logs: Look for CNI-related errors
  4. Verify container runtime: Ensure runtime supports CNI

Best Practices

  1. Choose appropriate CNI - Select CNI that meets your needs
  2. Plan IP ranges - Ensure pod subnet is large enough
  3. Monitor IP usage - Track IP address allocation
  4. Keep CNI updated - Update CNI plugins regularly
  5. Test network policies - Verify Network Policy support if needed
  6. Document configuration - Document CNI configuration
  7. Backup configs - Backup CNI configuration files
  8. Test upgrades - Test CNI upgrades in non-production first
  9. Monitor performance - Track network performance metrics
  10. Use supported plugins - Use well-maintained CNI plugins

See Also