EKS Networking

EKS networking differs from traditional Kubernetes networking because pods get real VPC IP addresses instead of using overlay networks. This VPC-native approach provides better performance, security group integration, and seamless integration with AWS services, but requires careful IP address planning.

VPC CNI Plugin Architecture

The VPC Container Network Interface (CNI) plugin is the default networking solution for EKS. It assigns VPC IP addresses directly to pods, making them first-class citizens in your VPC.

graph TB
    subgraph vpc[VPC: 10.0.0.0/16]
        subgraph subnet[Subnet: 10.0.1.0/24]
            NODE[Worker Node<br/>10.0.1.10] --> ENI1[Primary ENI<br/>10.0.1.10, secondaries 10.0.1.21-25]
            NODE --> ENI2[Secondary ENI 1<br/>10.0.1.11-15]
            NODE --> ENI3[Secondary ENI 2<br/>10.0.1.16-20]
            ENI1 --> POD1[Pod 1<br/>10.0.1.21]
            ENI2 --> POD2[Pod 2<br/>10.0.1.11]
            ENI2 --> POD3[Pod 3<br/>10.0.1.12]
            ENI3 --> POD4[Pod 4<br/>10.0.1.16]
        end
    end
    CNI[VPC CNI Plugin] -->|Manages| ENI1
    CNI -->|Manages| ENI2
    CNI -->|Manages| ENI3
    style NODE fill:#e1f5ff
    style POD1 fill:#fff4e1
    style POD2 fill:#fff4e1
    style POD3 fill:#fff4e1
    style POD4 fill:#fff4e1
    style CNI fill:#e8f5e9

How VPC CNI Works

Elastic Network Interfaces (ENIs):

  • Each EC2 instance has a primary ENI
  • VPC CNI attaches secondary ENIs as needed
  • Each ENI can have multiple IP addresses
  • Pods get IP addresses from ENI secondary IPs

IP Address Allocation:

  • Primary ENI: Node IP address
  • Secondary ENIs: Pod IP addresses
  • IPs come from the subnet’s CIDR block
  • No NAT required for pod-to-pod communication
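
To see this allocation on a running node, you can list the ENIs attached to the instance and the private IPs each one holds; the instance ID below is a placeholder.

# List the ENIs attached to one worker node and every private IP they hold
# (i-0123456789abcdef0 is a placeholder instance ID)
aws ec2 describe-network-interfaces \
  --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
  --query 'NetworkInterfaces[].{ENI:NetworkInterfaceId,IPs:PrivateIpAddresses[].PrivateIpAddress}'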

Benefits:

  • Native VPC performance (no overlay overhead)
  • Security groups at pod level
  • Direct integration with AWS services
  • No additional network hops

ENI Limits and Planning

Instance types have different ENI and IP limits:

Instance Type   ENIs   IPs per ENI   Total IPs
t3.small        3      4             12
t3.medium       3      6             18
t3.large        3      12            36
m5.large        3      10            30
m5.xlarge       4      15            60
c5.2xlarge      4      15            60
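
These limits can also be queried directly from the EC2 API instead of being maintained by hand, for example:

# Query ENI and per-ENI IPv4 limits for an instance type
aws ec2 describe-instance-types \
  --instance-types m5.large \
  --query 'InstanceTypes[].NetworkInfo.{ENIs:MaximumNetworkInterfaces,IPv4PerENI:Ipv4AddressesPerInterface}'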

Planning Considerations:

  • Reserve 1 IP for the node
  • Remaining IPs available for pods
  • Plan for pod density requirements
  • Consider using larger instance types for more IPs
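
Effective pod capacity per node follows the EKS max-pods formula for nodes without prefix delegation; a quick worked example using the m5.large numbers above:

# EKS max-pods formula: ENIs x (IPv4 addresses per ENI - 1) + 2
# m5.large: 3 ENIs, 10 IPs per ENI
echo $(( 3 * (10 - 1) + 2 ))   # 29 pods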

Pod Networking

Pods in EKS get real VPC IP addresses and can use security groups for network isolation.

Security Groups for Pods

With the VPC CNI, security groups can be attached directly to pods. This is configured through a SecurityGroupPolicy resource rather than a pod annotation, and requires the CNI's ENABLE_POD_ENI setting and Nitro-based instance types:

apiVersion: vpc.resources.aws.amazon.com/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: web-app
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  securityGroups:
    groupIds:
      - sg-1234567890abcdef0

Security Group Selection:

  • Pods matched by a SecurityGroupPolicy get the security groups listed in the policy (attached to a dedicated branch ENI)
  • Pods without a matching policy use the node's security groups
  • Include the cluster security group in the policy so matched pods can still reach cluster services such as DNS

Use Cases:

  • Isolate pods from each other
  • Restrict database access to specific pods
  • Allow only certain pods to access external services

Prefix Delegation and Custom Networking

The VPC CNI offers two features when the default one-IP-per-pod allocation becomes limiting: prefix delegation for higher pod density, and custom networking for placing pod IPs in separate subnets.

Prefix delegation assigns /28 prefixes (16 addresses each) to ENIs instead of individual secondary IPs:

# Enable prefix delegation on the managed vpc-cni add-on
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"env":{"ENABLE_PREFIX_DELEGATION":"true"}}'

Benefits:

  • Higher pod density per node
  • Uses /28 prefixes instead of individual IPs
  • Better IP utilization

Custom networking, by contrast, places pod ENIs in subnets (often from a secondary CIDR) separate from the node's subnet:

  • Requires secondary CIDR ranges and per-AZ ENIConfig resources
  • Best enabled before node groups are created; existing nodes must be replaced to pick it up
  • Works with security groups for pods
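
A minimal sketch of turning on custom networking by editing the aws-node DaemonSet environment, assuming per-AZ ENIConfig resources already exist and are named after the zone label:

# Point the CNI at per-AZ ENIConfig resources (assumes they already exist)
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true \
  ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone

Nodes launched after this change place pod ENIs in the subnets referenced by the ENIConfig for their zone.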

Service Networking

Kubernetes services provide stable endpoints for pods. EKS supports all standard service types with AWS-specific integrations.

ClusterIP Services

Internal service IPs (default):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

  • Accessible only within cluster
  • Uses kube-proxy for load balancing
  • No AWS resources created
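
From any pod in the cluster, the service is reachable through cluster DNS (assuming the default namespace):

# Cluster DNS name for a ClusterIP service: <service>.<namespace>.svc.cluster.local
curl http://web-service.default.svc.cluster.local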

NodePort Services

Expose services on node IPs:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

  • Accessible via <node-ip>:30080
  • Requires security group rules
  • Not recommended for production (use LoadBalancer)
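
The node security group must allow inbound traffic on the node port; a sketch with a placeholder security group ID:

# Allow traffic to the NodePort from within the VPC (sg-... is a placeholder)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30080 \
  --cidr 10.0.0.0/16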

LoadBalancer Services

Integrate with AWS Load Balancers:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Load Balancer Types:

  • Classic Load Balancer - Legacy, not recommended
  • Network Load Balancer (NLB) - Layer 4, high performance
  • Application Load Balancer (ALB) - Layer 7, advanced routing
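
After the Service above is created, the provisioned load balancer's DNS name appears in the Service status once AWS finishes provisioning:

# Remains empty (and EXTERNAL-IP shows <pending>) until the load balancer exists
kubectl get service web-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'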

AWS Load Balancer Controller

The AWS Load Balancer Controller manages AWS load balancers for Kubernetes services and ingresses.

Installation

# Add Helm repository
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# Install controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
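
The chart values above assume the IAM-backed service account already exists. One common way to create it is with eksctl and IAM Roles for Service Accounts; this sketch assumes an IAM OIDC provider is already associated with the cluster, and the account ID and policy name are placeholders.

# Create the IRSA service account the Helm chart references
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve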

Application Load Balancer (ALB)

Use ALB for HTTP/HTTPS traffic with advanced routing. With the AWS Load Balancer Controller, ALBs are provisioned from Ingress resources (see the Ingress example below); a Service of type LoadBalancer provisions an NLB instead. To put an ALB in front of an application, expose it as a ClusterIP or NodePort Service and reference that Service from an Ingress.

ALB Features:

  • Path-based routing
  • Host-based routing
  • SSL/TLS termination
  • WebSocket support
  • HTTP/2 support

Ingress with ALB

Use an Ingress resource with the AWS Load Balancer Controller for advanced routing:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
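
Once the controller reconciles the Ingress, the ALB's DNS name is published in its status; the hostname below is a placeholder:

# Fetch the ALB hostname published by the controller
kubectl get ingress web-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# With the ssl-redirect annotation above, plain HTTP requests receive a redirect to HTTPS
curl -I -H "Host: example.com" http://<alb-dns-name>/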

Ingress Annotations:

  • alb.ingress.kubernetes.io/scheme - internet-facing or internal
  • alb.ingress.kubernetes.io/target-type - ip or instance
  • alb.ingress.kubernetes.io/certificate-arn - SSL certificate
  • alb.ingress.kubernetes.io/ssl-policy - SSL policy

Network Load Balancer (NLB)

Use NLB for TCP/UDP traffic:

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP

NLB Features:

  • Layer 4 load balancing
  • Preserves source IP
  • High performance
  • Static IP support

Network Policies

Network policies control which traffic is allowed to and from selected pods, providing pod-to-pod network isolation.

Calico Network Policies

Install Calico for network policy support:

# Install Calico
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.28/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.28/config/master/calico-crs.yaml
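
Before relying on policy enforcement, confirm the components deployed by the operator are running; the namespace and object names below assume a default operator-based Calico install:

# The Tigera operator installs Calico into the calico-system namespace
kubectl get pods -n calico-system
kubectl get daemonset calico-node -n calico-system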

Network Policy Example

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

Policy Rules:

  • podSelector - Select pods to apply policy
  • ingress - Incoming traffic rules
  • egress - Outgoing traffic rules
  • policyTypes - Which directions to enforce
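
A quick way to sanity-check the policy after applying it is to confirm it selects the intended pods and was accepted by the API server:

# Pods the policy applies to (must carry the app=web label)
kubectl get pods -l app=web --show-labels
# Inspect the stored policy and its rules
kubectl describe networkpolicy web-policy -n default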

Cross-Zone and Cross-Region Networking

Cross-Zone Communication

Pods in different availability zones communicate via VPC routing:

graph LR
    subgraph az1[Availability Zone 1]
        N1[Node 1] --> P1[Pod 1<br/>10.0.1.10]
    end
    subgraph az2[Availability Zone 2]
        N2[Node 2] --> P2[Pod 2<br/>10.0.2.10]
    end
    P1 -->|VPC Routing| P2
    P2 -->|VPC Routing| P1
    style P1 fill:#e1f5ff
    style P2 fill:#fff4e1

  • No additional configuration needed
  • Uses VPC routing tables
  • Same latency as EC2 instances
  • Security groups apply across zones

Cross-Region Communication

For multi-region setups:

Option 1: VPC Peering

  • Connect VPCs across regions
  • Configure route tables
  • Manage security groups

Option 2: VPN or Direct Connect

  • Site-to-site VPN
  • AWS Direct Connect
  • Transit Gateway

Option 3: Public Internet

  • Use public load balancers
  • Secure with TLS
  • Not recommended for sensitive data

VPC Endpoints

Access AWS services without internet:

# Create VPC endpoint for S3
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-12345678 \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-12345678

Benefits:

  • No internet gateway required
  • Lower latency
  • No data transfer costs
  • Enhanced security

Expose services via PrivateLink:

apiVersion: v1
kind: Service
metadata:
  name: private-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
spec:
  type: LoadBalancer
  selector:
    app: private-app
  ports:
  - port: 443
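
To make this internal NLB consumable from other VPCs over PrivateLink, register it as an endpoint service; the load balancer ARN below is a placeholder:

# Create a PrivateLink endpoint service backed by the internal NLB
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/private-service/0123456789abcdef \
  --acceptance-required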

CIDR Planning and IP Management

Subnet CIDR Planning

Plan subnets to accommodate pods:

graph TB
    VPC[VPC: 10.0.0.0/16<br/>65,536 IPs] --> S1[Subnet 1: 10.0.1.0/24<br/>256 IPs]
    VPC --> S2[Subnet 2: 10.0.2.0/24<br/>256 IPs]
    VPC --> S3[Subnet 3: 10.0.3.0/24<br/>256 IPs]
    S1 --> N1[5 Nodes × 40 IPs = 200 IPs]
    S2 --> N2[5 Nodes × 40 IPs = 200 IPs]
    S3 --> N3[5 Nodes × 40 IPs = 200 IPs]
    style VPC fill:#e1f5ff
    style S1 fill:#fff4e1
    style N1 fill:#e8f5e9

Planning Formula:

Required IPs = (Number of Nodes × IPs per Node) + Buffer
Usable IPs per Subnet = 2^(32 - prefix length) - 5 (AWS reserves 5 addresses per subnet)
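
Applied to the /24 subnets in the diagram above:

# Usable IPs in a /24: 2^(32-24) - 5 = 251
echo $(( 2**(32-24) - 5 ))   # 251
# 5 nodes x 40 pod IPs = 200, leaving ~50 addresses as buffer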

Reserved IPs:

  • 1 for network address
  • 1 for broadcast address
  • 3 for AWS use (router, DNS, future use)

Secondary CIDR Ranges

For custom networking or higher density:

# Associate secondary CIDR
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-12345678 \
  --cidr-block 10.1.0.0/16

Use Cases:

  • Custom networking mode
  • Pod IP exhaustion
  • Multi-tenancy isolation

IP Address Monitoring

Monitor IP address usage:

# Check node addresses
kubectl get nodes -o json | jq '.items[].status.addresses'

# Check pod IP allocation across all namespaces
kubectl get pods -A -o wide

# VPC CNI (aws-node) pods
kubectl get pods -n kube-system -l k8s-app=aws-node

Best Practices

  1. Plan IP Addresses Carefully - Ensure sufficient CIDR space for growth

  2. Use Private Subnets for Nodes - More secure, use NAT for outbound access

  3. Enable Custom Networking - For higher pod density if needed

  4. Use Security Groups at Pod Level - Fine-grained network isolation

  5. Use ALB for HTTP/HTTPS - Better features than NLB for web traffic

  6. Use NLB for TCP/UDP - Better performance for non-HTTP traffic

  7. Implement Network Policies - Defense in depth with Calico

  8. Use VPC Endpoints - Avoid internet gateway for AWS services

  9. Monitor IP Utilization - Track ENI and IP usage

  10. Test Cross-Zone Communication - Verify networking works across AZs

Common Issues

IP Address Exhaustion

Problem: Pods can’t get IP addresses

Solutions:

  • Increase subnet CIDR size
  • Use custom networking mode
  • Add secondary CIDR ranges
  • Use larger instance types (more IPs per node)

Load Balancer Creation Fails

Problem: LoadBalancer service stuck in pending

Solutions:

  • Check AWS Load Balancer Controller logs
  • Verify IAM permissions for controller
  • Check security group rules
  • Verify subnet tags

Network Policy Not Working

Problem: Network policies not enforced

Solutions:

  • Verify Calico is installed
  • Check network policy syntax
  • Verify pod selectors match labels
  • Check Calico logs

See Also