AKS Networking

AKS networking supports two modes: Azure CNI (advanced networking) where pods get real Virtual Network IP addresses, and kubenet (basic networking) where pods use an overlay network. Understanding both networking modes and when to use each is essential for designing your AKS cluster network architecture.

Networking Modes Overview

AKS supports two networking plugins:

graph TB
    subgraph azure_cni[Azure CNI - Advanced Networking]
        AC[Azure CNI] --> VNET[Virtual Network IPs]
        VNET --> POD1[Pod 1<br/>Real VNet IP]
        VNET --> POD2[Pod 2<br/>Real VNet IP]
    end
    subgraph kubenet[Kubenet - Basic Networking]
        KUB[Kubenet] --> OVERLAY[Overlay Network]
        OVERLAY --> POD3[Pod 3<br/>Overlay IP]
        OVERLAY --> POD4[Pod 4<br/>Overlay IP]
    end
    style AC fill:#e1f5ff
    style KUB fill:#fff4e1
    style POD1 fill:#e8f5e9
    style POD3 fill:#f3e5f5

Azure CNI vs Kubenet

| Feature | Azure CNI | Kubenet |
| --- | --- | --- |
| Pod IPs | Real VNet IPs | Overlay network IPs |
| Performance | Better (no NAT) | Good (NAT overhead) |
| IP Planning | Required | Minimal |
| VNet Integration | Native | Limited |
| Network Policies | Azure or Calico | Calico only |
| Complexity | Higher | Lower |
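
You can check which plugin an existing cluster uses by querying its network profile (the cluster and resource-group names match the examples in this chapter):

# Show the network plugin configured on a cluster
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query networkProfile.networkPlugin -o tsv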

Azure CNI Networking

Azure CNI assigns Virtual Network IP addresses directly to pods, making them first-class citizens in your VNet.

How Azure CNI Works

graph TB
    subgraph vnet[Virtual Network: 10.0.0.0/16]
        subgraph subnet[Subnet: 10.0.1.0/24]
            NODE[Worker Node<br/>10.0.1.10] --> PRIMARY[Primary IP Range<br/>10.0.1.0/24]
            NODE --> SECONDARY[Secondary IP Range<br/>10.0.2.0/24]
            PRIMARY --> NODE_IP[Node IP<br/>10.0.1.10]
            SECONDARY --> POD1[Pod 1<br/>10.0.2.5]
            SECONDARY --> POD2[Pod 2<br/>10.0.2.6]
            SECONDARY --> POD3[Pod 3<br/>10.0.2.7]
        end
    end
    CNI[Azure CNI Plugin] -->|Manages| SECONDARY
    style NODE fill:#e1f5ff
    style POD1 fill:#fff4e1
    style POD2 fill:#fff4e1
    style POD3 fill:#fff4e1
    style CNI fill:#e8f5e9

IP Address Allocation:

  • Primary IP range: Node IP addresses
  • Secondary IP range: Pod IP addresses
  • IPs come from the subnet’s CIDR blocks
  • No NAT required for pod-to-pod communication

Benefits:

  • Native VNet performance (no overlay overhead)
  • Network Security Groups at pod level
  • Direct integration with Azure services
  • No additional network hops

Configuring Azure CNI

Create Cluster with Azure CNI:

# Create virtual network and subnet
az network vnet create \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefix 10.0.1.0/24

# Get subnet ID
SUBNET_ID=$(az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --query id -o tsv)

# Create AKS cluster with Azure CNI
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --service-cidr 10.1.0.0/16 \
  --dns-service-ip 10.1.0.10
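
Once the cluster is up, a quick sanity check is to confirm that pods receive addresses from the VNet subnet rather than an overlay range:

# With Azure CNI, pod IPs come from the subnet (10.0.1.0/24 here)
kubectl get pods --all-namespaces -o wide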

IP Address Planning:

  • Reserve IPs for nodes (1 per node)
  • Reserve IPs for pods (varies by node size)
  • Reserve IPs for Azure services (load balancers, etc.)
  • Plan for growth and scaling
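
As a back-of-the-envelope sizing sketch (the node count and max-pods values below are illustrative assumptions): Azure CNI pre-allocates one IP per potential pod on every node, so the subnet must hold roughly nodes × (max pods per node + 1) addresses, plus the 5 that Azure reserves in every subnet.

# Rough subnet sizing for Azure CNI (illustrative values)
NODES=10          # expected node count, including surge nodes during upgrades
MAX_PODS=30       # per-node --max-pods; 30 is the Azure CNI default
AZURE_RESERVED=5  # Azure reserves 5 addresses in every subnet
echo $(( NODES * (MAX_PODS + 1) + AZURE_RESERVED ))  # 315 -> a /23 (512 IPs) fits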

Kubenet Networking

Kubenet uses an overlay network where pods get IP addresses from a separate CIDR range, and nodes perform NAT for external access.

How Kubenet Works

graph TB
    subgraph vnet[Virtual Network: 10.0.0.0/16]
        subgraph subnet[Subnet: 10.0.1.0/24]
            NODE[Worker Node<br/>10.0.1.10] --> NODE_IP[Node IP<br/>10.0.1.10]
            NODE --> OVERLAY[Overlay Network<br/>172.17.0.0/16]
            OVERLAY --> POD1[Pod 1<br/>172.17.0.5]
            OVERLAY --> POD2[Pod 2<br/>172.17.0.6]
        end
    end
    NAT[NAT Gateway] -->|External Access| NODE
    style NODE fill:#e1f5ff
    style POD1 fill:#fff4e1
    style POD2 fill:#fff4e1
    style NAT fill:#e8f5e9

IP Address Allocation:

  • Node IPs from subnet
  • Pod IPs from an overlay CIDR (10.244.0.0/16 by default on AKS; the examples here use 172.17.0.0/16)
  • NAT required for external access
  • Simpler IP planning

Benefits:

  • Simpler setup
  • Less IP address planning
  • Good for smaller clusters
  • Easier to get started

Configuring Kubenet

Create Cluster with Kubenet:

# Create cluster with kubenet
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --pod-cidr 172.17.0.0/16 \
  --service-cidr 10.1.0.0/16 \
  --dns-service-ip 10.1.0.10
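
With kubenet, AKS programs a route table in the node resource group so that traffic destined for each node's slice of the pod CIDR is forwarded to that node. To inspect it (the MC_* resource-group name below follows the default naming convention and is an assumption):

# Kubenet maintains one route per node's pod CIDR slice
az network route-table list \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  -o table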

Service Networking

Kubernetes services provide stable endpoints for pods. AKS supports all standard service types with Azure-specific integrations.

ClusterIP Services

Internal service IPs (default):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

  • Accessible only within cluster
  • Uses kube-proxy for load balancing
  • No Azure resources created
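
A simple way to verify a ClusterIP service is a throwaway client pod; curlimages/curl is just a convenient test image, and the hostname matches the service above:

# Call the service from a temporary pod inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl -s http://web-service.default.svc.cluster.local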

NodePort Services

Expose services on node IPs:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

  • Accessible via <node-ip>:30080
  • Requires Network Security Group rules
  • Not recommended for production (use LoadBalancer)

LoadBalancer Services

Integrate with Azure Load Balancer:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Load Balancer Types:

  • Basic Load Balancer - Free tier, limited features
  • Standard Load Balancer - Production grade, more features

Load Balancer Features:

  • External or internal load balancing (see the internal example after this list)
  • TCP/UDP traffic
  • High availability
  • Health probes
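
For internal load balancing, annotate the service so AKS provisions the load balancer with a private frontend IP; a minimal sketch based on the service above:

apiVersion: v1
kind: Service
metadata:
  name: web-service-internal
  annotations:
    # Request an internal (private) Azure Load Balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080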

Ingress

AKS supports Ingress with Application Gateway or NGINX Ingress Controller.

Application Gateway Ingress Controller (AGIC)

Azure-native Ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
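
AGIC is enabled as an AKS add-on. A minimal sketch, assuming you let the add-on create a new Application Gateway (the gateway name and subnet CIDR below are placeholders):

# Enable the Application Gateway Ingress Controller add-on
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons ingress-appgw \
  --appgw-name myApplicationGateway \
  --appgw-subnet-cidr 10.0.3.0/24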

AGIC Features:

  • HTTP(S) Load Balancing
  • SSL/TLS termination
  • Path-based routing
  • Host-based routing
  • WAF integration
  • Azure-native integration

NGINX Ingress Controller

Open-source Ingress controller:

# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
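
On AKS this manifest fronts the controller with an Azure load balancer; the external IP to point DNS records at appears on the controller service (the service and namespace names come from that manifest):

# Wait for the controller's external IP to be assigned
kubectl get service ingress-nginx-controller \
  --namespace ingress-nginx --watch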

NGINX Ingress Features:

  • HTTP(S) Load Balancing
  • SSL/TLS termination
  • Advanced routing rules
  • Custom annotations
  • Community support

Network Policies

Network policies provide pod-to-pod traffic isolation, enforced by the cluster's network policy engine.

Azure Network Policy

Azure-native network policy:

# Enable Azure Network Policy (requires Azure CNI; normally chosen at
# cluster creation with --network-policy azure)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-policy azure

Network Policy Example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
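
One way to spot-check enforcement, using the labels from the policy above (curlimages/curl is an arbitrary test image): a pod labeled app=frontend should reach the web pods, while an unlabeled pod should time out.

# Should succeed: matches the ingress rule's podSelector
kubectl run allowed --rm -it --restart=Never --labels="app=frontend" \
  --image=curlimages/curl -- curl -s --max-time 5 http://web-service
# Should time out: no matching label
kubectl run denied --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s --max-time 5 http://web-service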

Calico Network Policy

Calico network policy works with both kubenet and Azure CNI:

# Install Calico from the upstream manifests (self-managed clusters)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
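
On AKS, though, the supported way to get Calico is the --network-policy flag at cluster creation rather than applying upstream manifests by hand:

# Create a cluster with Calico network policy (kubenet or Azure CNI)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --network-policy calico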

Private Clusters and Private Endpoints

Private Clusters

Private clusters restrict network access for enhanced security:

# Create private cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-private-cluster \
  --private-dns-zone system

Private Cluster Features:

  • Private API server endpoint
  • No public API server access
  • Enhanced security
  • Requires VPN or bastion host for access
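
As an alternative to a VPN or bastion host, az aks command invoke runs kubectl against the private API server from outside the network:

# Run a kubectl command against a private cluster
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get nodes"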

Private Endpoints

Use private endpoints for Azure service access:

# Create private endpoint
az network private-endpoint create \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id /subscriptions/.../resourceGroups/.../providers/Microsoft.Storage/storageAccounts/myStorageAccount \
  --group-id blob \
  --connection-name myConnection

Network Security Groups

Network Security Groups provide network-level access control:

Default NSG Rules:

  • Allow ingress from other nodes in cluster
  • Allow egress to internet
  • Allow ingress from load balancers
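
The AKS-managed NSG lives in the node resource group; to review what is actually applied (the MC_* resource-group name follows the default naming convention and is an assumption):

# List NSGs and their rule counts in the node resource group
az network nsg list \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --query "[].{name:name, rules:length(securityRules)}" -o table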

Custom NSG Rules:

# Create NSG rule
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-internal \
  --priority 1000 \
  --source-address-prefixes 10.0.2.0/24 \
  --destination-port-ranges 8080 \
  --access Allow \
  --protocol Tcp

Best Practices

  1. Use Azure CNI for Production - Better performance and integration

  2. Plan IP Addresses Carefully - Ensure sufficient CIDR space for growth

  3. Use Private Subnets for Nodes - More secure, use NAT for outbound access

  4. Enable Network Policies - For pod-to-pod isolation

  5. Use Application Gateway - For HTTP(S) load balancing with WAF

  6. Use Standard Load Balancer - For production workloads

  7. Implement Private Clusters - For enhanced security

  8. Use Network Security Groups - Network-level security

  9. Monitor Network Traffic - Use Azure Network Watcher

  10. Test Cross-Zone Communication - Verify networking works across zones

Common Issues

IP Address Exhaustion

Problem: Pods can’t get IP addresses

Solutions:

  • Increase subnet CIDR size
  • Use larger secondary IP range
  • Consider using kubenet for simpler setup
  • Scale down unused pods
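
To see how close a subnet is to exhaustion, compare its prefix size with the number of allocated IP configurations (a sketch; ipConfigurations is only populated once addresses are allocated):

# Show the subnet prefix and how many IPs are already allocated
az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --query "{prefix:addressPrefix, used:length(ipConfigurations)}"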

Load Balancer Creation Fails

Problem: LoadBalancer service stuck in pending

Solutions:

  • Check Network Security Group rules
  • Verify subnet configuration
  • Check service quota limits
  • Review Azure Activity Log
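
The service's event stream usually names the exact provisioning failure:

# Azure-specific errors appear in the Events section
kubectl describe service web-service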

Network Policy Not Working

Problem: Network policies not enforced

Solutions:

  • Verify network policy is enabled on cluster
  • Check network policy syntax
  • Verify pod selectors match labels
  • Review network policy logs
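
The first check is a single query; if it returns nothing, no policy engine is installed and NetworkPolicy objects are silently ignored:

# Returns "azure", "calico", or nothing if no policy engine is enabled
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query networkProfile.networkPolicy -o tsv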

See Also