GKE Autoscaling

Autoscaling on GKE happens at multiple levels: pods (HPA and VPA), nodes (Cluster Autoscaler), and overall cluster capacity (Autopilot). These solutions work together to give your applications the right resources at the right time while keeping costs in check.

Autoscaling Overview

GKE autoscaling operates at different levels:

graph TB
    subgraph pod_level[Pod Level]
        HPA[Horizontal Pod Autoscaler] --> SCALE_PODS[Scale Pod Replicas]
        VPA[Vertical Pod Autoscaler] --> ADJUST_RESOURCES[Adjust Pod Resources]
    end
    subgraph node_level[Node Level]
        CA[Cluster Autoscaler] --> SCALE_NODES[Scale Node Pools]
    end
    subgraph cluster_level[Cluster Level]
        SCALE_PODS --> TRIGGER_NODES[Trigger Node Scaling]
        ADJUST_RESOURCES --> OPTIMIZE[Optimize Resource Usage]
        SCALE_NODES --> ADD_NODES[Add/Remove Nodes]
    end
    style HPA fill:#e1f5ff
    style CA fill:#fff4e1
    style VPA fill:#e8f5e9

Cluster Autoscaler

Cluster Autoscaler automatically adjusts the size of node pools based on pod scheduling demands. When pods can’t be scheduled due to insufficient resources, it adds nodes. When nodes are underutilized, it removes them.

How Cluster Autoscaler Works

graph LR
    A[Pod Pending] --> B{Resources<br/>Available?}
    B -->|No| C[Cluster Autoscaler<br/>Detects]
    C --> D[Increase Node Pool<br/>Node Count]
    D --> E[New Nodes Created]
    E --> F[Pods Scheduled]
    G[Node Underutilized] --> H{Can Pods<br/>Move?}
    H -->|Yes| I[Cluster Autoscaler<br/>Detects]
    I --> J[Drain Node]
    J --> K[Decrease Node Count]
    K --> L[Node Terminated]
    style A fill:#e1f5ff
    style E fill:#fff4e1
    style L fill:#e8f5e9

Enabling Cluster Autoscaler

Using gcloud CLI:

# Enable auto-scaling on node pool
gcloud container node-pools update general-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10
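
To confirm the change, describe the node pool and check the autoscaling block:

# Verify autoscaling settings on the node pool
gcloud container node-pools describe general-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --format="yaml(autoscaling)"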

When Creating Node Pool:

# Create node pool with auto-scaling
gcloud container node-pools create general-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10 \
  --num-nodes 3

Cluster Autoscaler Configuration

Scaling Parameters:

  • min-nodes - Minimum number of nodes in pool
  • max-nodes - Maximum number of nodes in pool
  • num-nodes - Initial number of nodes (set when the pool is created)

Scaling Behavior:

  • Scales up when pods can’t be scheduled
  • Scales down when nodes are underutilized
  • Respects min/max node limits
  • Uses conservative scale-down to prevent thrashing (tunable via the autoscaling profile shown below)
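
GKE also lets you bias this behavior cluster-wide with an autoscaling profile: the default balanced profile favors availability, while optimize-utilization scales down more aggressively. For example:

# Prefer faster scale-down over spare headroom (cluster-wide setting)
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --autoscaling-profile optimize-utilization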

Horizontal Pod Autoscaler (HPA)

HPA automatically scales the number of pod replicas based on observed metrics like CPU, memory, or custom metrics.

Basic HPA

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
      selectPolicy: Max

HPA Behavior:

  • Scale Down - Conservative, to prevent thrashing: here at most 50% of replicas can be removed per minute
  • Scale Up - Aggressive, to absorb traffic spikes: here replicas can double, or grow by 4 pods, every 15 seconds (selectPolicy: Max picks whichever allows more)
  • Stabilization Window - How far back the controller looks at previous recommendations before acting; the 300-second scale-down window above means a brief dip in load won't immediately remove pods
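
After applying the manifest, you can watch the controller's decisions with standard kubectl commands:

# Current metrics, targets, and replica count
kubectl get hpa web-hpa --watch

# Conditions and scaling events (useful when nothing happens)
kubectl describe hpa web-hpa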

Custom Metrics HPA

Scale based on custom application metrics:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa-custom
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"

Requires:

  • Metrics Server - for resource metrics (CPU/memory); runs by default on GKE
  • A custom metrics adapter - e.g. the Prometheus Adapter or the Custom Metrics Stackdriver Adapter, for Pods/Object metrics like http_requests_per_second above
  • External Metrics API - only for metrics that originate outside the cluster (for example, a Pub/Sub queue depth)
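
If the HPA reports the metric as unavailable, you can query the aggregated API directly to see what the adapter is actually serving:

# List metrics exposed through the Custom Metrics API
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"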

Vertical Pod Autoscaler (VPA)

VPA automatically adjusts CPU and memory requests and limits for pods based on historical usage.

Installation

# Clone the autoscaler repository
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler/

# Run the VPA install script
./hack/vpa-up.sh
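
On GKE you normally don't need the manual install: VPA ships with GKE and is enabled per cluster:

# Enable GKE's built-in Vertical Pod Autoscaler
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-vertical-pod-autoscaling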

VPA Configuration

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"  # Auto, Off, Initial, Recreate
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 2
        memory: 4Gi
      controlledResources: ["cpu", "memory"]

Update Modes:

  • Auto - Automatically update pod resources (requires pod restart)
  • Off - Only provide recommendations
  • Initial - Set resources on pod creation only
  • Recreate - Recreate pods with new resources
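
In every mode, including Off, the recommender publishes its suggested requests in the VPA status, which you can inspect before letting VPA act:

# View current resource recommendations
kubectl describe vpa web-vpa

# Or fetch just the recommendation block
kubectl get vpa web-vpa -o jsonpath='{.status.recommendation}'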

Note: VPA and HPA should not be used together for the same resource (CPU/memory). Use VPA for resource optimization, HPA for replica scaling.

Autopilot Mode Autoscaling

Autopilot mode provides automatic scaling without node pool management:

Features:

  • Automatic node provisioning
  • Automatic node scaling
  • Pay-per-pod pricing
  • No node pool management
  • Enhanced security defaults

Configuration:

  • No node pool configuration needed
  • Automatic scaling based on pod requirements
  • Automatic optimization
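
Creating an Autopilot cluster is a single command; note that it is regional and takes no node pool flags:

# Create an Autopilot cluster (no node pools to manage)
gcloud container clusters create-auto my-autopilot-cluster \
  --region us-central1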

Preemptible VM Integration

Cluster Autoscaler with Preemptible VMs

Configure node pools for preemptible VMs:

# Create node pool with preemptible VMs
gcloud container node-pools create preemptible-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --preemptible \
  --enable-autoscaling \
  --min-nodes 0 \
  --max-nodes 20 \
  --num-nodes 3 \
  --node-labels preemptible=true \
  --node-taints preemptible=true:NoSchedule

Pod Tolerations:

apiVersion: v1
kind: Pod
metadata:
  name: spot-workload
spec:
  tolerations:
  - key: preemptible
    value: "true"
    effect: NoSchedule
  containers:
  - name: app
    image: my-app:latest
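
The toleration only allows these pods onto the tainted nodes; it doesn't require it. To pin the workload to the preemptible pool, also add a nodeSelector matching the node label set above:

# In the same Pod spec as the toleration above
spec:
  nodeSelector:
    preemptible: "true"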

Cost Optimization Strategies

Right-Sizing

Use VPA to right-size pod resources; start with updateMode "Off" if you want to review recommendations before letting VPA apply them:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: optimize-resources
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"

Preemptible VMs

Use preemptible VMs for cost savings:

  • Up to 80% cost savings versus on-demand VMs
  • Instances can be reclaimed with 30 seconds' notice and run for at most 24 hours, so workloads must tolerate interruption
  • Integrates with Cluster Autoscaler, including scale-to-zero (--min-nodes 0)
  • Keep an on-demand pool as a fallback for when preemptible capacity is unavailable

Scheduled Scaling

Scale down during off-hours:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down
spec:
  schedule: "0 20 * * *"  # 8 PM daily
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scale-down-sa  # needs RBAC to scale deployments (see below)
          restartPolicy: OnFailure           # required for Job pods
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - kubectl
            - scale
            - deployment/web
            - --replicas=1
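
The job's default service account cannot scale deployments, so it needs RBAC. A minimal sketch, assuming the hypothetical scale-down-sa name used above and the deployment's namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: scale-down-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
rules:
# kubectl scale reads and updates the scale subresource
- apiGroups: ["apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: scale-down-binding
subjects:
- kind: ServiceAccount
  name: scale-down-sa
roleRef:
  kind: Role
  name: deployment-scaler
  apiGroup: rbac.authorization.k8s.io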

Scaling Best Practices

  1. Set Appropriate Limits - Configure min/max replicas and node limits

  2. Use Multiple Metrics - Combine CPU, memory, and custom metrics

  3. Configure Behavior - Tune scale-up and scale-down policies

  4. Monitor Scaling - Watch HPA and Cluster Autoscaler behavior

  5. Test Scaling - Verify autoscaling works before production

  6. Use Preemptible VMs - For cost optimization where appropriate

  7. Right-Size Resources - Use VPA to optimize resource requests

  8. Plan for Spikes - Configure aggressive scale-up for traffic spikes

  9. Prevent Thrashing - Use stabilization windows and conservative scale-down

  10. Combine Solutions - Use HPA for pods, Cluster Autoscaler for nodes

Common Issues

HPA Not Scaling

Problem: HPA not scaling pods

Solutions:

  • Verify Metrics Server is running
  • Check HPA target metrics
  • Verify resource requests are set
  • Check HPA status and events (commands below)
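
Typical diagnostics:

# Does the HPA see current metric values? <unknown> means no metrics
kubectl get hpa web-hpa

# Conditions and events explain why scaling is blocked
kubectl describe hpa web-hpa

# Confirm Metrics Server is reporting pod usage
kubectl top pods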

Cluster Autoscaler Not Scaling Nodes

Problem: Nodes not being added

Solutions:

  • Verify auto-scaling is enabled
  • Check min/max node limits
  • Verify pods are unschedulable
  • Check node pool quotas
  • Review Cloud Logging for errors (see the checks below)
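
Standard kubectl checks help narrow this down (replace <pending-pod> with the stuck pod's name):

# Find pods stuck in Pending
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Cluster Autoscaler records TriggeredScaleUp / NotTriggerScaleUp
# events on the pod explaining its decision
kubectl describe pod <pending-pod>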

See Also