EKS Networking
EKS networking differs from traditional Kubernetes networking because pods get real VPC IP addresses instead of using overlay networks. This VPC-native approach provides better performance, security group integration, and seamless integration with AWS services, but requires careful IP address planning.
VPC CNI Plugin Architecture
The VPC Container Network Interface (CNI) plugin is the default networking solution for EKS. It assigns VPC IP addresses directly to pods, making them first-class citizens in your VPC.
How VPC CNI Works
Elastic Network Interfaces (ENIs):
- Each EC2 instance has a primary ENI
- VPC CNI attaches secondary ENIs as needed
- Each ENI can have multiple IP addresses
- Pods get IP addresses from ENI secondary IPs
IP Address Allocation:
- Primary ENI: Node IP address
- Secondary ENIs: Pod IP addresses
- IPs come from the subnet’s CIDR block
- No NAT required for pod-to-pod communication
Benefits:
- Native VPC performance (no overlay overhead)
- Security groups at pod level
- Direct integration with AWS services
- No additional network hops
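Because pod IPs are ordinary VPC addresses, you can observe the allocation directly from the EC2 API. A minimal sketch, assuming a placeholder worker-node instance ID:

```bash
# List the ENIs attached to one worker node and the private IPs on each
# (i-0123456789abcdef0 is a placeholder instance ID)
aws ec2 describe-network-interfaces \
  --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
  --query 'NetworkInterfaces[].{ENI:NetworkInterfaceId,IPs:PrivateIpAddresses[].PrivateIpAddress}'
```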
ENI Limits and Planning
Instance types have different ENI and IP limits:
| Instance Type | ENIs | IPv4 Addresses per ENI | Total IPs |
|---|---|---|---|
| t3.small | 3 | 4 | 12 |
| t3.medium | 3 | 6 | 18 |
| t3.large | 3 | 12 | 36 |
| m5.large | 3 | 10 | 30 |
| m5.xlarge | 4 | 15 | 60 |
| c5.2xlarge | 4 | 15 | 60 |
Planning Considerations:
- The primary IP of each ENI is reserved and cannot be assigned to pods
- The remaining secondary IPs are available for pods (see the sketch after this list)
- Plan for your pod density requirements per node
- Consider larger instance types when you need more IPs per node
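The usable pod count per node follows directly from these limits. A small sketch of the standard max-pods calculation, which EKS uses to derive its default max-pods values (no prefix delegation assumed):

```bash
# Standard max-pods formula with the default VPC CNI settings:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# Each ENI's primary IP is reserved; the "+2" covers host-network pods
# such as kube-proxy and aws-node.
enis=3; ips_per_eni=10   # m5.large values from the table above
echo $(( enis * (ips_per_eni - 1) + 2 ))   # prints 29
```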
Pod Networking
Pods in EKS get real VPC IP addresses and can use security groups for network isolation.
Security Groups for Pods
With the VPC CNI, security groups are attached to pods through the SecurityGroupPolicy custom resource (the feature requires Nitro-based instance types and the ENABLE_POD_ENI setting, enabled below):

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: web-app-sg
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  securityGroups:
    groupIds:
      - sg-1234567890abcdef0
```
Security Group Selection:
- Pods matched by a SecurityGroupPolicy get their own branch ENI with the policy's security groups
- Pods with no matching policy use the node security group (default)
Use Cases:
- Isolate pods from each other
- Restrict database access to specific pods
- Allow only certain pods to access external services
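The feature must be switched on in the VPC CNI first. One way, sketched here, is to set ENABLE_POD_ENI on the aws-node DaemonSet:

```bash
# Turn on pod ENI support in the VPC CNI
kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true

# Confirm the setting took effect
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```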
Prefix Delegation and Custom Networking
For higher pod density, enable prefix delegation, which assigns /28 IPv4 prefixes to node ENIs instead of individual addresses:

```bash
# Enable prefix delegation on the VPC CNI managed add-on
# (update-addon takes a specific --addon-version; it is omitted here
# so that only the configuration changes)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"env":{"ENABLE_PREFIX_DELEGATION":"true"}}'
```

Benefits:
- Higher pod density per node
- Uses /28 prefixes instead of individual IPs
- Better IP utilization
Custom networking (ENABLE_AWS_VPC_CNI_CUSTOM_NETWORK_CFG) is a separate VPC CNI option that places pod ENIs in different subnets from the node:
- Requires secondary CIDR ranges
- Configured through per-AZ ENIConfig resources (see the sketch below)
- Works with security groups
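With custom networking enabled, each availability zone needs an ENIConfig that points pod ENIs at a subnet from the secondary CIDR. A minimal sketch with placeholder subnet and security group IDs:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a                     # conventionally named after the AZ
spec:
  subnet: subnet-0abcd1234ef567890     # subnet from the secondary CIDR (placeholder)
  securityGroups:
    - sg-1234567890abcdef0             # placeholder security group
```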
Service Networking
Kubernetes services provide stable endpoints for pods. EKS supports all standard service types with AWS-specific integrations.
ClusterIP Services
Internal service IPs (default):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
```
- Accessible only within cluster
- Uses kube-proxy for load balancing
- No AWS resources created
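ClusterIP services are reachable through cluster DNS. A quick way to check resolution from inside the cluster, assuming a standard CoreDNS setup:

```bash
# Spin up a throwaway pod and resolve the service name
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup web-service.default.svc.cluster.local
```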
NodePort Services
Expose services on node IPs:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```
- Accessible via <node-ip>:30080
- Requires security group rules for the NodePort range (see the sketch below)
- Not recommended for production (use LoadBalancer)
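A sketch of the corresponding security group rule, with a placeholder group ID and source range:

```bash
# Allow traffic to the default NodePort range (30000-32767) from within the VPC
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 10.0.0.0/16
```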
LoadBalancer Services
Integrate with AWS Load Balancers:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```
Load Balancer Types:
- Classic Load Balancer - Legacy, not recommended
- Network Load Balancer (NLB) - Layer 4, high performance
- Application Load Balancer (ALB) - Layer 7, advanced routing
AWS Load Balancer Controller
The AWS Load Balancer Controller manages AWS load balancers for Kubernetes services and ingresses.
Installation
```bash
# Add Helm repository
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# Install controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```
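Note that serviceAccount.create=false assumes the service account already exists with the controller's IAM permissions. One way to create it, assuming the AWSLoadBalancerControllerIAMPolicy from the project's install docs already exists in the account (the account ID is a placeholder):

```bash
# Create an IAM-backed service account (IRSA) for the controller
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
```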
Application Load Balancer (ALB)
ALBs handle HTTP/HTTPS traffic with advanced routing. Note that the controller provisions ALBs only for Ingress resources (next subsection); a LoadBalancer Service with the annotations below is handed to the controller but results in an NLB with IP targets:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```
ALB Features:
- Path-based routing
- Host-based routing
- SSL/TLS termination
- WebSocket support
- HTTP/2 support
Ingress with ALB
Define an Ingress to have the AWS Load Balancer Controller provision an ALB with advanced routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: alb   # newer controller versions prefer spec.ingressClassName: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```
Ingress Annotations:
- alb.ingress.kubernetes.io/scheme - internet-facing or internal
- alb.ingress.kubernetes.io/target-type - ip or instance
- alb.ingress.kubernetes.io/certificate-arn - ACM certificate ARN for TLS
- alb.ingress.kubernetes.io/ssl-policy - SSL negotiation policy
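Once the controller reconciles the Ingress, the ALB's DNS name appears in the resource status:

```bash
# The ADDRESS column shows the ALB DNS name once provisioning finishes
kubectl get ingress web-ingress
```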
Network Load Balancer (NLB)
Use NLB for TCP/UDP traffic:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```
NLB Features:
- Layer 4 load balancing
- Preserves source IP
- High performance
- Static IP support
Network Policies
Network policies provide pod-to-pod isolation through label-based ingress and egress rules. Recent VPC CNI versions (v1.14+) can enforce Kubernetes NetworkPolicy natively; Calico is an alternative policy engine.
Calico Network Policies
Install Calico for network policy support:
```bash
# Install Calico
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.28/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.28/config/master/calico-crs.yaml
```
Network Policy Example
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system   # assumes a custom name label; the built-in label is kubernetes.io/metadata.name
      ports:
        - protocol: UDP
          port: 53
```
Policy Rules:
- podSelector - Selects the pods the policy applies to
- ingress - Incoming traffic rules
- egress - Outgoing traffic rules
- policyTypes - Which directions to enforce
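To confirm enforcement, try reaching the web pods from a pod that does not carry the frontend label; once the policy engine is enforcing, the request should time out (image and service names follow the examples above):

```bash
# This pod has no app=frontend label, so ingress to the web pods should be blocked
kubectl run policy-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- --timeout=3 http://web-service
```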
Cross-Zone and Cross-Region Networking
Cross-Zone Communication
Pods in different availability zones communicate via VPC routing:
- No additional configuration needed
- Uses VPC routing tables
- Same latency and data transfer pricing as inter-AZ EC2 traffic
- Security groups apply across zones
Cross-Region Communication
For multi-region setups:
Option 1: VPC Peering
- Connect VPCs across regions
- Configure route tables
- Manage security groups
Option 2: VPN or Direct Connect
- Site-to-site VPN
- AWS Direct Connect
- Transit Gateway
Option 3: Public Internet
- Use public load balancers
- Secure with TLS
- Not recommended for sensitive data
PrivateLink and Endpoint Access
VPC Endpoints
Access AWS services without internet:
```bash
# Create VPC endpoint for S3
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-12345678 \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-12345678
```
Benefits:
- No internet gateway required
- Lower latency
- No data transfer costs for gateway endpoints (S3, DynamoDB); interface endpoints bill hourly plus per GB processed
- Enhanced security
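Gateway endpoints cover only S3 and DynamoDB; other services need interface endpoints. For example, nodes in private subnets commonly need ECR endpoints to pull images without a NAT gateway (the IDs below are placeholders):

```bash
# Interface endpoint for the ECR Docker registry API
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-12345678 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-west-2.ecr.dkr \
  --subnet-ids subnet-12345678 \
  --security-group-ids sg-12345678
```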
PrivateLink
Expose services via PrivateLink:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: private-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
spec:
  type: LoadBalancer
  selector:
    app: private-app
  ports:
    - port: 443
```
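After the internal NLB exists, it can be registered as a PrivateLink endpoint service that consumer VPCs connect to (the load balancer ARN below is a placeholder):

```bash
# Register the NLB as a PrivateLink endpoint service; consumers must be approved
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/private-service/50dc6c495c0c9188 \
  --acceptance-required
```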
CIDR Planning and IP Management
Subnet CIDR Planning
Plan subnets to accommodate pods:
Planning Formula:
```text
Required IPs          = (Number of Nodes x IPs per Node) + Buffer
Usable IPs per subnet = 2^(32 - subnet mask) - 5   (AWS reserves 5 addresses)
```
Reserved IPs:
- 1 for network address
- 1 for broadcast address
- 3 for AWS use (router, DNS, future use)
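A worked example under the formula above, with the m5.large capacity taken from the ENI table earlier:

```text
/24 subnet:  2^(32 - 24) - 5 = 251 usable IPs
10 m5.large nodes x 30 IPs   = 300 IPs needed before any buffer
=> a /24 is too small; a /23 (507 usable) leaves headroom for growth
```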
Secondary CIDR Ranges
For custom networking or higher density:
```bash
# Associate secondary CIDR
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-12345678 \
  --cidr-block 10.1.0.0/16
```
Use Cases:
- Custom networking mode
- Pod IP exhaustion
- Multi-tenancy isolation
IP Address Monitoring
Monitor IP address usage:
```bash
# List the addresses reported for each node
kubectl get nodes -o json | jq '.items[].status.addresses'

# Check pod IP allocation
kubectl get pods -o wide

# Check that the VPC CNI pods (aws-node DaemonSet) are healthy
kubectl get pods -n kube-system -l k8s-app=aws-node
```
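At the VPC level, the remaining capacity of each subnet is visible through the EC2 API (the VPC ID is a placeholder):

```bash
# Remaining free IPs in each cluster subnet
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-12345678" \
  --query 'Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,FreeIPs:AvailableIpAddressCount}'
```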
Best Practices
- Plan IP Addresses Carefully - Ensure sufficient CIDR space for growth
- Use Private Subnets for Nodes - More secure; use NAT for outbound access
- Enable Prefix Delegation - For higher pod density; use custom networking for secondary CIDRs
- Use Security Groups at Pod Level - Fine-grained network isolation
- Use ALB for HTTP/HTTPS - Better features than NLB for web traffic
- Use NLB for TCP/UDP - Better performance for non-HTTP traffic
- Implement Network Policies - Defense in depth with Calico or native VPC CNI enforcement
- Use VPC Endpoints - Avoid internet gateway for AWS services
- Monitor IP Utilization - Track ENI and IP usage
- Test Cross-Zone Communication - Verify networking works across AZs
Common Issues
IP Address Exhaustion
Problem: Pods can’t get IP addresses
Solutions:
- Increase subnet CIDR size
- Enable prefix delegation or custom networking
- Add secondary CIDR ranges
- Use larger instance types (more IPs per node)
Load Balancer Creation Fails
Problem: LoadBalancer service stuck in pending
Solutions:
- Check AWS Load Balancer Controller logs
- Verify IAM permissions for controller
- Check security group rules
- Verify subnet tags
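Subnet discovery is a frequent culprit: the controller selects subnets by their role tags. A quick way to check them (the tag values shown are the documented conventions):

```bash
# Public subnets for internet-facing load balancers need kubernetes.io/role/elb = 1
aws ec2 describe-subnets --filters "Name=tag:kubernetes.io/role/elb,Values=1" \
  --query 'Subnets[].SubnetId'

# Private subnets for internal load balancers need kubernetes.io/role/internal-elb = 1
aws ec2 describe-subnets --filters "Name=tag:kubernetes.io/role/internal-elb,Values=1" \
  --query 'Subnets[].SubnetId'
```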
Network Policy Not Working
Problem: Network policies not enforced
Solutions:
- Verify Calico is installed
- Check network policy syntax
- Verify pod selectors match labels
- Check Calico logs
See Also
- Cluster Setup - Initial cluster configuration
- Security - Network security and policies
- Troubleshooting - Networking issues