GKE Networking
GKE uses VPC-native networking, in which pods get IP addresses from your VPC network's secondary IP ranges instead of from an overlay network. This approach provides better performance, native VPC integration, and seamless connectivity with Google Cloud services, but requires careful IP address planning.
VPC-Native Networking Architecture
In a VPC-native cluster, pods receive routable VPC IP addresses from a subnet's secondary ranges. This eliminates the need for an overlay network and provides direct VPC connectivity.
How VPC-Native Networking Works
Primary IP Range:
- Used for node IP addresses
- Standard VPC subnet CIDR
- Nodes get IPs from this range
Secondary IP Range (Alias IP Range):
- Used for pod IP addresses
- Separate CIDR block for pods
- Pods get IPs from this range
- No NAT required for pod-to-pod communication
Benefits:
- Native VPC performance (no overlay overhead)
- Direct integration with Google Cloud services
- Firewall rules at pod level
- No additional network hops
- Better visibility in VPC Flow Logs
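As a concrete sketch of how these pieces fit together, the commands below create a subnet with named secondary ranges and a VPC-native cluster that draws node, pod, and Service IPs from them. The subnet name, range names, and CIDRs (gke-subnet, pods, services, and the 10.x ranges) are illustrative; adjust them to your own VPC layout:
# Subnet whose primary range serves nodes and whose secondary ranges serve pods and Services
gcloud compute networks subnets create gke-subnet \
  --network my-vpc \
  --region us-central1 \
  --range 10.0.0.0/24 \
  --secondary-range pods=10.4.0.0/14,services=10.8.0.0/20
# VPC-native cluster wired to those ranges
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --network my-vpc \
  --subnetwork gke-subnet \
  --enable-ip-alias \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services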
IP Address Planning
Plan secondary IP ranges carefully:
Planning Considerations:
- Reserve IPs for nodes in primary range
- Use secondary range for pods (larger CIDR)
- Plan for pod density requirements
- Consider cluster growth
Secondary Range Size:
- Minimum: /24 (256 IPs)
- Recommended: /14 (262,144 IPs) for large clusters
- Maximum pods per node: 110 by default (configurable per node pool; see the sizing sketch below)
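A rough sizing sketch, assuming GKE Standard defaults and an illustrative cluster name: with 110 pods per node, GKE reserves a /24 per node, so a /20 pod range supports at most 16 nodes; lowering the per-node pod maximum shrinks each node's block and stretches the range further.
# 110 pods per node (default) -> /24 reserved per node -> a /20 pod range fits 2^(24-20) = 16 nodes
# Capping pods per node at 64 -> /25 reserved per node -> the same /20 range fits 32 nodes
gcloud container clusters create small-pods-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --default-max-pods-per-node 64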
Service Networking
Kubernetes services provide stable endpoints for pods. GKE supports all standard service types with Google Cloud-specific integrations.
ClusterIP Services
Internal service IPs (default):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
- Accessible only within cluster
- Uses kube-proxy for load balancing
- No Google Cloud resources created
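A quick way to exercise the Service from inside the cluster is a throwaway client pod, assuming the manifest above was applied to the default namespace (the pod name and curlimages/curl image are just illustrative):
# Call the Service by its in-cluster DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://web-service.default.svc.cluster.local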
NodePort Services
Expose services on node IPs:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
- Accessible via <node-ip>:30080
- Requires firewall rules to allow traffic to the node port (see the example below)
- Not recommended for production (use LoadBalancer)
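Because NodePort traffic arrives directly on node IPs, an allow rule is needed in the VPC. A sketch, assuming the my-vpc network, the gke-my-cluster-node node tag, and an illustrative client range:
# Allow external clients to reach the NodePort on the cluster's nodes
gcloud compute firewall-rules create allow-nodeport-30080 \
  --network my-vpc \
  --allow tcp:30080 \
  --source-ranges 203.0.113.0/24 \
  --target-tags gke-my-cluster-node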
LoadBalancer Services
Integrate with Google Cloud Load Balancing:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
Load Balancer Types:
- Network Load Balancer - Layer 4, high performance
- HTTP(S) Load Balancer - Layer 7, advanced routing (via Ingress)
Network Load Balancer:
- Automatic creation for LoadBalancer services
- External or internal load balancing
- TCP/UDP traffic
- High performance
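After applying the manifest above, the load balancer is provisioned asynchronously; the external address shows up on the Service once it is ready:
# EXTERNAL-IP reads <pending> until the Network Load Balancer has been created
kubectl get service web-service --watch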
Ingress with GKE Ingress Controller
GKE provides built-in HTTP(S) Load Balancing via Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    networking.gke.io/managed-certificates: my-ssl-cert
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Ingress Features:
- HTTP(S) Load Balancing
- SSL/TLS termination
- Path-based routing
- Host-based routing
- Custom static IPs
- Managed SSL certificates
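The kubernetes.io/ingress.global-static-ip-name annotation above refers to a reserved global address by name; it can be created beforehand like this (my-static-ip matches the annotation in the Ingress):
# Reserve a global static IP for the HTTP(S) load balancer created by the Ingress
gcloud compute addresses create my-static-ip --global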
Managed SSL Certificates:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-ssl-cert
spec:
  domains:
  - example.com
  - www.example.com
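Certificate provisioning is asynchronous and only succeeds once the listed domains resolve to the load balancer's IP. Assuming the ManagedCertificate above has been applied, its status can be checked with kubectl:
# Status moves from Provisioning to Active once DNS points at the load balancer
kubectl describe managedcertificate my-ssl-cert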
Network Policies
Network policies provide pod-to-pod network isolation using firewall rules.
Enabling Network Policies
# Enable network policy when creating cluster
gcloud container clusters create my-cluster \
--zone us-central1-a \
--enable-network-policy
# Or enable on existing cluster
gcloud container clusters update my-cluster \
--zone us-central1-a \
--enable-network-policy
Network Policy Example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
Policy Rules:
- podSelector - Selects the pods the policy applies to
- ingress - Rules for incoming traffic
- egress - Rules for outgoing traffic
- policyTypes - Which directions (Ingress, Egress) to enforce
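Applying and inspecting the policy follows the usual kubectl flow; the file name web-policy.yaml is illustrative:
# Apply the policy and confirm which pods it selects and which peers it allows
kubectl apply -f web-policy.yaml
kubectl describe networkpolicy web-policy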
Private Clusters and Private Endpoints
Private Clusters
Private clusters restrict network access for enhanced security:
# Create private cluster
gcloud container clusters create private-cluster \
--zone us-central1-a \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr 172.16.0.0/28
Private Cluster Features:
- Private nodes (no external IPs)
- Private endpoint (API server only accessible from VPC)
- Enhanced security
- Requires VPN or bastion host for access
Private Endpoint Access:
- API server accessible only from VPC
- Authorized networks for external access
- VPN or Cloud Interconnect required
Authorized Networks
Allow external access to private endpoint:
# Add authorized network
gcloud container clusters update my-cluster \
--zone us-central1-a \
--master-authorized-networks 203.0.113.0/24,198.51.100.0/24
Cross-Region and Cross-Project Networking
Cross-Region Communication
Google Cloud VPC networks are global resources, so clusters in different regions within the same VPC communicate directly over internal IPs with no extra configuration. Clusters in separate VPC networks connect via VPC Network Peering:
VPC Peering:
- Connects VPC networks across regions and projects
- Exchanges subnet routes automatically
- Firewall rules are still managed separately in each VPC
- Latency is the same as between cross-region VMs
Cross-Project Networking
Connect clusters in different projects:
# Create VPC peering
gcloud compute networks peerings create peer-1 \
  --network=vpc-1 \
  --peer-network=vpc-2 \
  --peer-project=PROJECT-2
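Peering only becomes active once it is created from both sides. Assuming the first VPC lives in a project referred to here as PROJECT-1 (placeholder), the matching command run against the second project looks like:
# Reciprocal peering created from PROJECT-2 back to vpc-1
gcloud compute networks peerings create peer-2 \
  --project=PROJECT-2 \
  --network=vpc-2 \
  --peer-network=vpc-1 \
  --peer-project=PROJECT-1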
Firewall Rules
Google Cloud firewall rules provide network-level access control:
Default Firewall Rules:
- Allow ingress from other nodes in cluster
- Allow egress to internet
- Allow ingress from load balancers
Custom Firewall Rules:
# Create firewall rule
gcloud compute firewall-rules create allow-internal \
--network my-vpc \
--allow tcp:8080 \
--source-ranges 10.10.0.0/14 \
--target-tags gke-my-cluster-node
Pod-Level Firewall:
- Network policies for pod-to-pod isolation
- Firewall rules for node-level isolation
- Use both for defense in depth
Load Balancing Options
HTTP(S) Load Balancing
Use Ingress for HTTP(S) load balancing:
- Layer 7 load balancing
- Path-based routing
- Host-based routing
- SSL/TLS termination
- Managed SSL certificates
- Global load balancing
Network Load Balancing
Use LoadBalancer services for TCP/UDP:
- Layer 4 load balancing
- TCP/UDP traffic
- Regional load balancing
- High performance
Internal Load Balancing
Use an internal LoadBalancer Service when traffic should stay inside the VPC:
apiVersion: v1
kind: Service
metadata:
  name: internal-service
  annotations:
    # On GKE 1.17 and later, the networking.gke.io/load-balancer-type: "Internal"
    # annotation is the current form; this is the legacy annotation
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 8080
Best Practices
- Plan IP Addresses Carefully - Ensure the secondary IP ranges are large enough for cluster growth
- Use VPC-Native Networking - Default and recommended approach
- Use Private Clusters - For enhanced security in production
- Implement Network Policies - For pod-to-pod isolation
- Use Ingress for HTTP(S) - Richer routing and TLS features than a LoadBalancer Service for web traffic
- Use LoadBalancer for TCP/UDP - Better fit for non-HTTP traffic
- Enable Managed SSL Certificates - Automatic certificate provisioning and renewal
- Configure Firewall Rules - Network-level security
- Use Authorized Networks - To control which networks can reach the cluster endpoint
- Plan for Multi-Region - A single VPC is global; use VPC peering to connect clusters in separate VPCs
Common Issues
IP Address Exhaustion
Problem: Pods can’t get IP addresses
Solutions:
- Increase secondary IP range size
- Create additional secondary ranges
- Use larger subnet CIDR
- Reduce pod density per node
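To see how the existing ranges are configured before resizing anything, the cluster's IP allocation policy can be inspected (field names as exposed by the current gcloud API; cluster name and zone from the earlier examples):
# Show the pod and Service CIDR blocks assigned to the cluster
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format="value(ipAllocationPolicy.clusterIpv4CidrBlock,ipAllocationPolicy.servicesIpv4CidrBlock)"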
Load Balancer Creation Fails
Problem: LoadBalancer service stuck in pending
Solutions:
- Check firewall rules
- Verify subnet configuration
- Check service quota limits
- Review Cloud Logging for errors
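The Service's event stream is usually the fastest way to see why provisioning stalled; quota, firewall, and subnet problems show up there:
# Events at the bottom of the output explain a stuck <pending> EXTERNAL-IP
kubectl describe service web-service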
Network Policy Not Working
Problem: Network policies not enforced
Solutions:
- Verify network policy is enabled on cluster
- Check network policy syntax
- Verify pod selectors match labels
- Review network policy logs
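Whether enforcement is actually enabled can be confirmed from the cluster configuration (field name per the current API; cluster name and zone from the earlier examples):
# Returns True when the network policy addon enabled with --enable-network-policy is active
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format="value(networkPolicy.enabled)"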
See Also
- Cluster Setup - Initial cluster configuration
- Security - Network security and policies
- Troubleshooting - Networking issues