CNI Basics
The Container Network Interface (CNI) is a specification and set of libraries for writing plugins that configure network interfaces in Linux containers. In Kubernetes, CNI plugins assign IP addresses to pods and configure their network connectivity; CNI is therefore fundamental to how pods get network access.
What is CNI?
CNI is a standard interface between container runtimes (like containerd or CRI-O) and network plugins. When Kubernetes creates a pod:
- The container runtime calls the CNI plugin
- The CNI plugin assigns an IP address to the pod
- The CNI plugin configures network interfaces, routes, and bridges
- The pod has network connectivity
How CNI Works
CNI plugins are standalone executables. The container runtime invokes them with parameters passed in CNI_* environment variables and the JSON network configuration on stdin:
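A minimal sketch of that call made by hand, assuming the bridge configuration shown later in this page and a test namespace created with ip netns add testns (all names are illustrative):

# The runtime passes parameters via CNI_* environment variables
# and pipes the network configuration JSON on stdin
CNI_COMMAND=ADD \
CNI_CONTAINERID=test-container \
CNI_NETNS=/var/run/netns/testns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf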
CNI Operations
CNI defines three main operations (a fourth, VERSION, reports which spec versions a plugin supports):
- ADD - Add container to network (assign IP, configure interfaces)
- DEL - Remove container from network (cleanup)
- CHECK - Check network configuration (health check)
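On a successful ADD, the plugin prints a result JSON on stdout for the runtime to consume. A representative result for cniVersion 0.4.0 (all values illustrative):

{
  "cniVersion": "0.4.0",
  "interfaces": [
    { "name": "eth0", "sandbox": "/var/run/netns/testns" }
  ],
  "ips": [
    {
      "version": "4",
      "address": "10.244.0.5/16",
      "gateway": "10.244.0.1",
      "interface": 0
    }
  ],
  "dns": {}
}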
CNI Plugin Lifecycle
When a pod is created:
- The container runtime creates the pod’s network namespace
- The runtime calls the CNI plugin with the ADD command
- The CNI plugin reads its configuration
- The plugin requests an IP from IPAM (IP Address Management)
- The plugin configures network interfaces
- The plugin sets up routes and bridges
- The pod can now communicate on the network
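The result can be verified from the node. A hedged sketch, assuming a bridge-based plugin and the PID of a process in the pod (placeholder <pid>):

nsenter -t <pid> -n ip addr    # interfaces and the assigned IP inside the pod's netns
nsenter -t <pid> -n ip route   # default route via the CNI gateway
ip link show type veth         # host-side veth peers created for pods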
When a pod is deleted:
- The container runtime calls the CNI plugin with the DEL command
- The CNI plugin removes the pod’s network interfaces
- The plugin releases the IP address back to the pool
- The plugin cleans up routes and bridges
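With host-local IPAM, the release is observable on disk: allocations are kept as one file per IP under the plugin's state directory (the path below is host-local's default; the directory name matches the network name in the configuration):

ls /var/lib/cni/networks/mynet/
# each file is an allocated IP; after a successful DEL the pod's entry disappears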
CNI Configuration
CNI plugins are configured via JSON files, typically in /etc/cni/net.d/:
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cnio0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
Key fields:
- name - Network name
- type - CNI plugin type (bridge, calico, etc.)
- ipam - IP Address Management configuration
- subnet - IP range for pod IPs (under ipam)
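Files in /etc/cni/net.d/ are often .conflist files, which chain several plugins into one network configuration. A minimal hedged example pairing the bridge plugin with portmap (which implements hostPort support):

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cnio0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}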
IP Address Management (IPAM)
IPAM plugins manage IP address allocation:
host-local IPAM
Allocates IPs from a local subnet:
{
  "type": "host-local",
  "subnet": "10.244.0.0/16",
  "rangeStart": "10.244.1.0",
  "rangeEnd": "10.244.1.255",
  "gateway": "10.244.0.1"
}
DHCP IPAM
Uses DHCP for IP allocation:
{
  "type": "dhcp"
}
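Note that the dhcp plugin is only a client: it needs its companion daemon running on every node to proxy DHCP requests on behalf of containers. The daemon is the same binary, started in daemon mode:

/opt/cni/bin/dhcp daemon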
Static IPAM
Assigns static IPs:
{
  "type": "static",
  "addresses": [
    {
      "address": "10.244.1.5/24",
      "gateway": "10.244.1.1"
    }
  ]
}
Network Namespaces
CNI plugins work with Linux network namespaces:
- Each pod gets its own network namespace
- Isolated network stack - Pod has its own interfaces, routes, etc.
- CNI plugin configures the namespace
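The mechanics are plain Linux and can be reproduced by hand. A hedged sketch of what a bridge-style plugin does with a namespace and a veth pair (all names illustrative):

ip netns add demo                                   # the "pod" namespace
ip link add veth-host type veth peer name veth-pod  # a connected pair
ip link set veth-pod netns demo                     # move one end inside
ip netns exec demo ip addr add 10.244.0.10/16 dev veth-pod
ip netns exec demo ip link set veth-pod up
ip netns exec demo ip link set lo up
ip link set veth-host up    # a real plugin would also attach this end to the bridge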
CNI Plugin Types
Bridge Plugin
Creates a Linux bridge on the host and attaches each container to it with a veth pair.
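Assuming the configuration shown earlier (bridge name cnio0), the result is visible on the node:

ip link show cnio0            # the bridge created by the plugin
ip link show master cnio0     # veth interfaces attached to it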
Host-Device Plugin
Moves an existing host device into the container’s namespace.
IPVLAN Plugin
Creates IPVLAN interfaces for containers.
Macvlan Plugin
Creates Macvlan interfaces for containers.
Loopback Plugin
Provides loopback interface (always included).
CNI in Kubernetes
In Kubernetes, CNI plugins are configured at the cluster level:
Plugin Location
CNI plugins are executables in /opt/cni/bin/:
/opt/cni/bin/
├── bridge
├── calico
├── cilium
├── flannel
└── ...
Configuration Location
CNI configuration is in /etc/cni/net.d/:
/etc/cni/net.d/
├── 10-calico.conflist
├── 20-flannel.conflist
└── ...
kubelet Configuration
Before the dockershim removal in Kubernetes 1.24, kubelet invoked CNI itself and was configured with flags:
# legacy kubelet flags, removed in Kubernetes 1.24
--network-plugin=cni
--cni-bin-dir=/opt/cni/bin
--cni-conf-dir=/etc/cni/net.d
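On current clusters the container runtime owns these paths instead. For containerd 1.x they are set in /etc/containerd/config.toml (the values below are the defaults):

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"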
CNI Plugin Selection
The container runtime uses the first valid CNI configuration it finds:
- Reads the /etc/cni/net.d/ directory
- Picks the first valid configuration file, in lexicographic order (hence the numeric filename prefixes)
- Uses that configuration
- Calls the specified CNI plugin
Important: Only one network configuration should be active at a time; a single .conflist may still chain multiple plugins (e.g., a main plugin plus portmap).
Popular CNI Plugins
Calico
- Network policies
- BGP routing
- IP-in-IP or VXLAN encapsulation
Cilium
- eBPF-based
- High performance
- Network policies
Flannel
- Simple overlay network
- VXLAN or host-gw backend
- Easy to set up
Weave Net
- Overlay network
- Automatic mesh
- Network policies
CNI and Network Policies
CNI plugins that support Network Policies:
- Calico - Full support
- Cilium - Full support
- Weave Net - Full support
- Flannel - No native support (commonly paired with Calico for policy, a combination known as Canal)
If your CNI plugin doesn’t support Network Policies, Network Policy resources will have no effect.
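For instance, a minimal deny-all-ingress policy (namespace name illustrative) is silently ignored on a CNI without policy support:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied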
Troubleshooting CNI
Pods Not Getting IPs
- Check the CNI plugin binaries: ls /opt/cni/bin/
- Check the configuration: cat /etc/cni/net.d/*
- Check kubelet logs: journalctl -u kubelet
- Verify the plugin is executable: ensure it has execute permissions
- Check IPAM: verify the IP pool has available addresses
Network Connectivity Issues
- Check the bridge/interface: ip link show
- Verify routes: ip route show
- Test connectivity: ping from one pod to another (see the check below)
- Check CNI logs: review the CNI plugin’s logs
- Verify the subnet: ensure the pod subnet doesn’t conflict with other networks in use
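A quick hedged end-to-end check (pod name and image are placeholders):

kubectl run netcheck --image=busybox --restart=Never -- sleep 3600
kubectl get pods -o wide                           # note the pod IPs
kubectl exec netcheck -- ping -c 3 <other-pod-ip>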
CNI Plugin Not Called
- Check the runtime configuration: verify the CNI bin and conf directory settings (or the legacy kubelet flags on pre-1.24 clusters)
- Check kubelet and runtime logs: look for CNI-related errors
- Verify the container runtime: ensure it supports CNI
Best Practices
- Choose appropriate CNI - Select CNI that meets your needs
- Plan IP ranges - Ensure pod subnet is large enough
- Monitor IP usage - Track IP address allocation
- Keep CNI updated - Update CNI plugins regularly
- Test network policies - Verify Network Policy support if needed
- Document configuration - Record the CNI configuration and the rationale for it
- Backup configs - Backup CNI configuration files
- Test upgrades - Test CNI upgrades in non-production first
- Monitor performance - Track network performance metrics
- Use supported plugins - Use well-maintained CNI plugins
See Also
- Pod Connectivity Overview - How pods connect
- CNI Plugins - Available CNI plugins
- Calico - Calico CNI plugin
- Cilium - Cilium CNI plugin
- Flannel - Flannel CNI plugin