AKS Networking
AKS supports two networking modes: Azure CNI (advanced networking), where pods receive IP addresses directly from the Virtual Network, and kubenet (basic networking), where pods get addresses from a logically separate address space and reach the VNet through NAT on the nodes. Understanding both modes and when to use each is essential when designing your AKS cluster's network architecture.
Networking Modes Overview
AKS supports two networking plugins:
Azure CNI vs Kubenet
| Feature | Azure CNI | Kubenet |
|---|---|---|
| Pod IPs | VNet IPs (from the subnet) | Separate pod CIDR (NATed) |
| Performance | Better (no NAT) | Good (NAT overhead) |
| IP Planning | Required | Minimal |
| VNet Integration | Native | Limited |
| Network Policies | Azure or Calico | Calico only |
| Complexity | Higher | Lower |
Azure CNI Networking
Azure CNI assigns Virtual Network IP addresses directly to pods, making them first-class citizens in your VNet.
How Azure CNI Works
IP Address Allocation:
- Primary IP range: Node IP addresses
- Secondary IP range: Pod IP addresses
- IPs come from the subnet’s CIDR blocks
- No NAT required for pod-to-pod communication
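To see how many pod IPs each node will claim, check the node's pod capacity; with Azure CNI this matches the number of secondary IPs pre-allocated for that node (a quick check, assuming kubectl access to the cluster):
# Show each node's pod capacity; with Azure CNI this equals the
# number of pod IPs reserved from the subnet for that node
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.capacity.pods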
Benefits:
- Native VNet performance (no overlay overhead)
- Network Security Groups at pod level
- Direct integration with Azure services
- No additional network hops
Configuring Azure CNI
Create Cluster with Azure CNI:
# Create virtual network and subnet
az network vnet create \
--resource-group myResourceGroup \
--name myVNet \
--address-prefix 10.0.0.0/16 \
--subnet-name mySubnet \
--subnet-prefix 10.0.1.0/24
# Get subnet ID
SUBNET_ID=$(az network vnet subnet show \
--resource-group myResourceGroup \
--vnet-name myVNet \
--name mySubnet \
--query id -o tsv)
# Create AKS cluster with Azure CNI
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--network-plugin azure \
--vnet-subnet-id $SUBNET_ID \
--service-cidr 10.1.0.0/16 \
--dns-service-ip 10.1.0.10
IP Address Planning:
- Reserve IPs for nodes (1 per node)
- Reserve IPs for pods (varies by node size)
- Reserve IPs for Azure services (load balancers, etc.)
- Plan for growth and scaling
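As a rough sizing sketch (the numbers below are assumptions; substitute your own node count and --max-pods value, the Azure CNI default being 30):
# Hypothetical sizing: 10 nodes, 30 pods per node
NODES=10
MAX_PODS=30
# Each node consumes 1 node IP plus MAX_PODS pod IPs; add one surge node for
# upgrades and a small buffer for internal load balancer frontends
REQUIRED=$(( NODES * (1 + MAX_PODS) + (1 + MAX_PODS) + 10 ))
echo "IPs required: $REQUIRED"  # 351 -> a /23 subnet (507 usable after Azure reserves 5 IPs)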
Kubenet Networking
With kubenet, pods receive IP addresses from a CIDR range that is separate from the VNet; a route table on the node subnet routes pod traffic between nodes, and the nodes perform NAT so pods can reach resources outside the cluster.
How Kubenet Works
IP Address Allocation:
- Node IPs from subnet
- Pod IPs from a separate pod CIDR (10.244.0.0/16 by default)
- NAT required for external access
- Simpler IP planning
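Each node is handed a slice of the pod CIDR; you can inspect the per-node allocation directly (assuming kubectl access):
# Show the pod CIDR slice kubenet assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'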
Benefits:
- Simpler setup
- Less IP address planning
- Good for smaller clusters
- Easier to get started
Configuring Kubenet
Create Cluster with Kubenet:
# Create cluster with kubenet
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--network-plugin kubenet \
--pod-cidr 10.244.0.0/16 \
--service-cidr 10.1.0.0/16 \
--dns-service-ip 10.1.0.10
Service Networking
Kubernetes services provide stable endpoints for pods. AKS supports all standard service types with Azure-specific integrations.
ClusterIP Services
Internal service IPs (default):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
- Accessible only within cluster
- Uses kube-proxy for load balancing
- No Azure resources created
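Because a ClusterIP is reachable only from inside the cluster, a quick way to test the service above is from a throwaway pod (a sketch assuming the public busybox image):
# Call the ClusterIP service from a temporary pod inside the cluster
kubectl run test-client --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://web-service.default.svc.cluster.local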
NodePort Services
Expose services on node IPs:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
- Accessible via <node-ip>:30080
- Requires Network Security Group rules
- Not recommended for production (use LoadBalancer)
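To try the NodePort from a machine in the same VNet, look up a node's internal IP first (a sketch; node IPs are not reachable from the internet unless you add NSG rules and public IPs):
# Get the first node's internal IP, then call the NodePort from inside the VNet
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl http://$NODE_IP:30080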
LoadBalancer Services
Integrate with Azure Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
Load Balancer Types:
- Basic Load Balancer - no charge, limited features and scale; not recommended for new clusters
- Standard Load Balancer - the default SKU for AKS; production grade, zone-aware, more features
Load Balancer Features:
- External or internal load balancing
- TCP/UDP traffic
- High availability
- Health probes
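For traffic that should stay inside the VNet, the same Service can be fronted by an internal load balancer using the Azure cloud provider annotation (a minimal sketch based on the web-service example above):
apiVersion: v1
kind: Service
metadata:
  name: web-service-internal
  annotations:
    # Request an internal (VNet-only) Azure load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080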
Ingress
AKS supports Ingress with Application Gateway or NGINX Ingress Controller.
Application Gateway Ingress Controller (AGIC)
Azure-native Ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
AGIC Features:
- HTTP(S) Load Balancing
- SSL/TLS termination
- Path-based routing
- Host-based routing
- WAF integration
- Azure-native integration
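The Ingress above only takes effect once the AGIC add-on is attached to an Application Gateway. A minimal sketch, assuming an existing gateway whose resource ID is stored in $APPGW_ID:
# Enable the Application Gateway Ingress Controller add-on
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons ingress-appgw \
  --appgw-id $APPGW_ID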
NGINX Ingress Controller
Open-source Ingress controller:
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
NGINX Ingress Features:
- HTTP(S) Load Balancing
- SSL/TLS termination
- Advanced routing rules
- Custom annotations
- Community support
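Once the controller is running, route traffic to it with a standard Ingress that selects the nginx ingress class (a sketch mirroring the AGIC example above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80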
Network Policies
Network policies provide pod-to-pod network isolation using firewall rules.
Azure Network Policy
Azure-native network policy:
# Enable Azure Network Policy (requires Azure CNI)
az aks update \
--resource-group myResourceGroup \
--name myAKSCluster \
--network-policy azure
Network Policy Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
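Policies like this are usually paired with a default-deny policy so that anything not explicitly allowed is blocked (a common pattern, shown here for the default namespace):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  # An empty podSelector selects every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress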
Calico Network Policy
Calico can serve as the network policy engine with either kubenet or Azure CNI. On AKS it is enabled when the cluster is created rather than installed by hand:
# Create a kubenet cluster with Calico network policy
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --network-policy calico
Private Clusters and Private Endpoints
Private Clusters
Private clusters restrict network access for enhanced security:
# Create private cluster
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--enable-private-cluster \
--private-dns-zone system
Private Cluster Features:
- Private API server endpoint
- No public API server access
- Enhanced security
- Requires VPN or bastion host for access
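If no VPN or bastion is in place yet, one-off commands can still be run against the private API server through the managed command relay (assuming the Azure CLI and sufficient RBAC permissions):
# Run kubectl against a private cluster without direct network access
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get nodes"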
Private Endpoints
Use private endpoints for Azure service access:
# Create private endpoint
az network private-endpoint create \
--name myPrivateEndpoint \
--resource-group myResourceGroup \
--vnet-name myVNet \
--subnet mySubnet \
--private-connection-resource-id /subscriptions/.../resourceGroups/.../providers/Microsoft.Storage/storageAccounts/myStorageAccount \
--group-id blob \
--connection-name myConnection
Network Security Groups
Network Security Groups provide network-level access control:
Default NSG Rules:
- Allow ingress from other nodes in cluster
- Allow egress to internet
- Allow ingress from load balancers
Custom NSG Rules:
# Create NSG rule
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myNSG \
--name allow-internal \
--priority 1000 \
--source-address-prefixes 10.0.2.0/24 \
--destination-port-ranges 8080 \
--access Allow \
--protocol Tcp
Best Practices
- Use Azure CNI for Production - Better performance and integration
- Plan IP Addresses Carefully - Ensure sufficient CIDR space for growth
- Use Private Subnets for Nodes - More secure; use NAT for outbound access
- Enable Network Policies - For pod-to-pod isolation
- Use Application Gateway - For HTTP(S) load balancing with WAF
- Use Standard Load Balancer - For production workloads
- Implement Private Clusters - For enhanced security
- Use Network Security Groups - Network-level security
- Monitor Network Traffic - Use Azure Network Watcher
- Test Cross-Zone Communication - Verify networking works across zones
Common Issues
IP Address Exhaustion
Problem: Pods can’t get IP addresses
Solutions:
- Increase subnet CIDR size
- Use larger secondary IP range
- Consider using kubenet for simpler setup
- Scale down unused pods
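A quick way to gauge how close the cluster is to the limit is to compare running pods against total pod capacity (a sketch, assuming kubectl access):
# Count running pods, then list each node's pod capacity
kubectl get pods --all-namespaces --field-selector=status.phase=Running --no-headers | wc -l
kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}'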
Load Balancer Creation Fails
Problem: LoadBalancer service stuck in pending
Solutions:
- Check Network Security Group rules
- Verify subnet configuration
- Check service quota limits
- Review Azure Activity Log
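The service's events usually name the exact failure (quota, subnet, or permissions), so start there (using the web-service example from above):
# Inspect events attached to the pending LoadBalancer service
kubectl describe service web-service
kubectl get events --field-selector involvedObject.name=web-service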
Network Policy Not Working
Problem: Network policies not enforced
Solutions:
- Verify network policy is enabled on cluster
- Check network policy syntax
- Verify pod selectors match labels
- Review network policy logs
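To confirm a policy engine is actually configured, check the cluster's network profile; if it returns empty or none, NetworkPolicy objects are accepted but silently ignored (a sketch using the cluster names from earlier examples):
# Show which network policy engine (if any) the cluster uses
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query networkProfile.networkPolicy -o tsv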
See Also
- Cluster Setup - Initial cluster configuration
- Security - Network security and policies
- Troubleshooting - Networking issues