AWS VPC CNI 1.1: Native AWS Networking for EKS

Introduction
On July 26, 2018, AWS released VPC CNI Plugin version 1.1, a major update to the Container Network Interface (CNI) plugin for Amazon EKS. This release enables Kubernetes pods to receive native AWS VPC IP addresses, eliminating the need for overlay networks and providing direct integration with AWS networking features.
What made VPC CNI 1.1 significant wasn’t just the technology—it was proving that cloud-native networking could leverage cloud provider primitives directly. Instead of running pods in an overlay network, pods get real VPC IPs, making them first-class citizens in AWS networking.
Why VPC CNI Matters
- Native VPC Integration: Pods receive real VPC IP addresses, not overlay IPs.
- AWS Feature Access: Direct access to security groups, VPC flow logs, and AWS networking features.
- No Overhead: Eliminates overlay network encapsulation overhead.
- Simplified Operations: Pods appear as regular VPC resources in AWS console and APIs.
Key Features (1.1)
- Source NAT Control: Disable source NAT for pods to preserve original source IP addresses.
- IP Pre-allocation: Configurable pre-allocation of secondary IP addresses to reduce pod start latency.
- DaemonSet Scheduling: Ensures CNI plugin daemons are scheduled on all nodes, including masters.
- Secondary ENI Support: Leverages AWS EC2 secondary network interfaces for pod IP allocation.
Architecture
- VPC CNI DaemonSet: Runs on each node, managing IP address allocation and network interface configuration.
- Secondary ENIs: Uses EC2 secondary network interfaces to provide IP addresses to pods.
- IP Address Management: Pre-allocates and manages IP addresses from VPC subnets.
- kube-proxy Integration: Works with kube-proxy for service networking.
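On a running EKS cluster you can see these pieces directly. A minimal sketch: the DaemonSet is named aws-node in the standard manifest, and the pod label used below is an assumption that may vary by manifest version.

# List the VPC CNI DaemonSet and confirm it is scheduled on every node
kubectl get daemonset aws-node -n kube-system

# Show the per-node CNI pods (label assumed: k8s-app=aws-node)
kubectl get pods -n kube-system -l k8s-app=aws-node -o wide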
Getting Started
VPC CNI 1.1 is the default CNI for new EKS clusters. For existing clusters:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.1/config/master/aws-k8s-cni.yaml
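After applying the manifest, a quick way to confirm the plugin is running and to check its version; the image-tag query assumes the standard aws-node DaemonSet layout.

# Wait for the DaemonSet rollout to finish
kubectl rollout status daemonset aws-node -n kube-system

# Inspect the container image tag to confirm the CNI version
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'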
Configure IP pre-allocation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-vpc-cni
  namespace: kube-system
data:
  WARM_IP_TARGET: "2"
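Note that in the standard aws-k8s-cni manifest this value is consumed as an environment variable on the aws-node container, so a ConfigMap alone may not take effect. A common alternative, sketched here under the assumption of the standard aws-node DaemonSet name, is to set it directly on the DaemonSet:

# Set the warm IP pool directly on the CNI DaemonSet
kubectl set env daemonset aws-node -n kube-system WARM_IP_TARGET=2

# Roll the change out across all nodes
kubectl rollout status daemonset aws-node -n kube-system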
Source NAT Control
Disable source NAT to preserve pod source IP addresses:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-vpc-cni
  namespace: kube-system
data:
  AWS_VPC_K8S_CNI_EXTERNALSNAT: "true"
This is useful when:
- Traffic reaches peered VPCs, Transit Gateway attachments, or on-premises networks over Direct Connect or VPN and must arrive with the pod's IP
- You want to see real pod IPs in VPC flow logs
- Applications require source IP preservation
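As with the warm-pool setting, this flag is read as an environment variable on the aws-node container; the equivalent DaemonSet change would look like the following (a sketch, not the only way to apply it):

# Disable source NAT on the node so pod IPs are preserved
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true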
IP Pre-allocation
The WARM_IP_TARGET parameter controls how many IP addresses to pre-allocate:
- Lower values: Conserve VPC subnet addresses, but pods may wait while new IPs (or an additional ENI) are attached during bursts
- Higher values: Faster pod startup during scale-ups, but more subnet addresses sit reserved on each node
- Default: WARM_IP_TARGET is unset; the plugin instead keeps one spare ENI's worth of addresses warm (WARM_ENI_TARGET=1)
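To see how many secondary addresses have actually been attached to a node, you can query the instance's network interfaces with the AWS CLI; the instance ID below is a placeholder:

# List every private IP attached to a node's ENIs (i-0123456789abcdef0 is hypothetical)
aws ec2 describe-network-interfaces \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress' \
  --output text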
Comparison with Overlay CNIs
| Aspect | VPC CNI | Overlay CNI (Flannel/Calico) |
|---|---|---|
| IP Addresses | VPC IPs | Overlay IPs |
| AWS Integration | Native | Limited |
| Performance | No overlay overhead | Overlay encapsulation |
| Security Groups | Direct support | Indirect |
| VPC Flow Logs | Pod IPs visible | Overlay IPs only |
| IP Address Limits | VPC subnet limits | Separate overlay range |
Use Cases
- AWS Service Integration: Pods can use IAM roles and security groups directly.
- Compliance: VPC flow logs show real pod IPs for compliance requirements.
- Performance: Eliminate overlay network overhead for latency-sensitive workloads.
- Network Visibility: See pod traffic in AWS networking tools and dashboards.
Operational Considerations
- IP Address Planning: VPC CNI consumes VPC subnet IPs; plan subnet sizes accordingly.
- ENI Limits: Each EC2 instance type caps the number of ENIs and IPs per ENI, which bounds pod density; see the worked example after this list.
- Security Groups: Configure security groups for pod-to-pod and pod-to-external communication.
- Subnet Configuration: Ensure subnets have sufficient IP addresses for pod allocation.
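The practical pod-density ceiling follows from the instance type's ENI and per-ENI IP limits; the usual EKS formula is max pods = ENIs x (IPv4 addresses per ENI - 1) + 2. A worked sketch for m5.large (3 ENIs, 10 IPv4 addresses each, so 3 x 9 + 2 = 29 pods):

# Look up ENI and per-ENI IPv4 limits for an instance type
aws ec2 describe-instance-types --instance-types m5.large \
  --query 'InstanceTypes[].NetworkInfo.{MaxENIs:MaximumNetworkInterfaces,IPv4PerENI:Ipv4AddressesPerInterface}' \
  --output table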
Common Patterns
- Multi-Subnet: Deploy pods across multiple subnets for high availability.
- Node Security Groups: Pods share the security groups attached to the node's ENIs; use them for coarse-grained network controls (per-pod security groups arrived in later releases).
- VPC Flow Logs: Enable VPC flow logs to monitor pod network traffic; see the example after this list.
- Private Subnets: Deploy pods in private subnets with NAT gateway for internet access.
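For the flow-log pattern, logs can be enabled per VPC with the AWS CLI; the VPC ID, log group name, and IAM role ARN below are placeholders:

# Enable VPC flow logs so pod-level traffic shows up with real pod IPs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name /vpc/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role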
Limitations (1.1)
- IP Address Exhaustion: VPC subnet IP limits can constrain cluster size.
- ENI Limits: EC2 instance ENI limits affect IP address capacity.
- Region-Specific: Works only in AWS regions with EKS support.
- Subnet Requirements: Requires properly configured VPC subnets.
Looking Ahead
VPC CNI 1.1 established the foundation for:
- IP Optimization: Future versions would improve IP address utilization.
- Pod Security Groups: Enhanced security group support for pods.
- Performance Improvements: Better pod startup times and IP allocation.
- Feature Enhancements: Additional AWS networking feature integrations.
Summary
| Aspect | Details |
|---|---|
| Release Date | July 26, 2018 |
| Key Innovations | Native VPC IPs, source NAT control, IP pre-allocation, secondary ENI support |
| Significance | Enabled native AWS networking for Kubernetes pods, eliminating overlay network overhead |
AWS VPC CNI 1.1 demonstrated that cloud-native networking could leverage cloud provider primitives directly. By giving pods real VPC IP addresses, it provided seamless AWS integration, better performance, and simplified operations—setting the standard for cloud-native CNI plugins.