Scheduling
Scheduling determines which nodes pods run on in your Kubernetes cluster. The Kubernetes scheduler assigns pods to nodes based on resource requirements, constraints, affinity rules, and other criteria. Understanding scheduling helps you control pod placement, optimize resource utilization, and ensure workloads run on appropriate nodes.
What Is Scheduling?
Scheduling is the process of assigning pods to nodes. When you create a pod, the scheduler evaluates cluster nodes and selects the best node based on various factors like resource availability, node constraints, pod requirements, and affinity/anti-affinity rules.
The Scheduler
The Kubernetes scheduler (kube-scheduler) is a control plane component. It watches for newly created pods that have no node assigned and selects a suitable node for each of them to run on.
Scheduling Process
The scheduling process involves two phases:
1. Filtering (Predicates)
Filter out nodes that don’t meet pod requirements:
- Resource availability - Node has enough CPU and memory
- Node constraints - Node is untainted, or the pod tolerates its taints
- Affinity rules - Node satisfies pod affinity requirements
- Port availability - Required ports are available
- Volume constraints - Required volumes can be attached
2. Scoring (Priorities)
Rank the remaining nodes to select the best one:
- Resource balance - Prefer nodes with balanced resource usage
- Affinity preferences - Prefer nodes that match preferred affinity
- Inter-pod affinity - Prefer nodes with related pods
- Least requested - Prefer nodes with fewer requested resources
- Node affinity - Prefer nodes that match node affinity preferences
Scheduling Constraints
Various mechanisms control pod placement:
Resource Requests and Limits
Define how much CPU and memory pods need:
```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
The scheduler uses requests to ensure nodes have enough resources before placing pods.
Node Selectors
Simple node selection based on labels:
```yaml
nodeSelector:
  disktype: ssd
  zone: us-west-1
```
Affinity and Anti-Affinity
Advanced rules for pod placement:
- Node Affinity - Place pods on nodes with specific characteristics
- Pod Affinity - Place pods near other pods
- Pod Anti-Affinity - Keep pods away from other pods
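As a sketch, a required node affinity rule that restricts a pod to SSD-backed nodes (the `disktype` label is a hypothetical example) looks like this in the pod spec:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype      # hypothetical node label
              operator: In
              values:
                - ssd
```

Unlike a nodeSelector, affinity supports operators such as `In`, `NotIn`, and `Exists`, and can express preferred (soft) rules via `preferredDuringSchedulingIgnoredDuringExecution`.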
Taints and Tolerations
Node-level restrictions:
- Taints - Mark nodes to repel pods (unless they have matching tolerations)
- Tolerations - Allow pods to run on tainted nodes
Useful for:
- Dedicated nodes (e.g., GPU nodes, database nodes)
- Preventing regular workloads from scheduling on control plane nodes
- Isolating workloads
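For example, if a node is tainted with `dedicated=database:NoSchedule` (a hypothetical taint for this sketch), only pods carrying a matching toleration can be scheduled onto it:

```yaml
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "database"
    effect: "NoSchedule"
```

Note that a toleration only permits scheduling onto the tainted node; it does not require it. Combine tolerations with a nodeSelector or node affinity to actually steer the pod there.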
Topology Spread Constraints
Control how pods are distributed across zones, nodes, or other topology domains:
- Distribute pods evenly across zones
- Prevent too many pods on a single node
- Ensure availability zone distribution
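A minimal constraint that spreads pods labeled `app: web` (a hypothetical label) across zones, allowing at most a difference of one pod between any two zones, could look like:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
```

Setting `whenUnsatisfiable: ScheduleAnyway` instead turns this into a soft preference rather than a hard requirement.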
Scheduling Scenarios
Scenario 1: Resource-Based Scheduling
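As an illustration, consider a pod whose memory request exceeds the allocatable memory of every node in a hypothetical cluster (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-hungry        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # placeholder image
      resources:
        requests:
          memory: "64Gi"     # assumed to exceed every node's allocatable memory
          cpu: "500m"
```

The filtering phase rejects every node for insufficient memory, so the pod stays in Pending until capacity is added or the request is reduced.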
Scenario 2: Affinity-Based Scheduling
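A common affinity pattern is pod anti-affinity that keeps replicas of the same app off the same node. A sketch, assuming replicas share the hypothetical label `app: web`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx           # placeholder image
```

With `topologyKey: kubernetes.io/hostname`, no two `app: web` pods land on the same node; using a zone topology key instead spreads them across zones.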
Scenario 3: Taint-Based Scheduling
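For dedicated hardware such as GPU nodes, an administrator might taint the nodes (for example with `gpu=true:NoSchedule`, a hypothetical taint) so that only GPU workloads land there. A pod spec sketch that both tolerates the taint and targets the nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job              # hypothetical name
spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  nodeSelector:
    gpu: "true"              # hypothetical node label set alongside the taint
  containers:
    - name: worker
      image: nginx           # placeholder image
```

The toleration lets the pod pass the taint filter, while the nodeSelector ensures it is scheduled onto a GPU node rather than merely permitted there.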
Scheduling Best Practices
- Set resource requests - Always specify CPU and memory requests for predictable scheduling
- Use resource limits - Prevent pods from consuming excessive resources
- Leverage node selectors - For simple placement requirements
- Use affinity for complex rules - When you need more sophisticated placement logic
- Implement anti-affinity - For high availability and workload isolation
- Use taints for dedicated nodes - Reserve nodes for specific workloads
- Apply topology spread - Distribute pods across zones and nodes
- Monitor scheduling - Watch for unschedulable pods
- Test placement - Verify pods schedule on intended nodes
- Document constraints - Clearly document why specific scheduling rules are needed
Common Scheduling Issues
Pods Stuck in Pending
Pods can’t be scheduled when:
- No nodes have sufficient resources
- No nodes match node selectors or affinity rules
- All nodes are tainted and pods lack tolerations
- Resource quotas are exceeded
- PersistentVolumeClaims can’t be satisfied
Pods Scheduled on Wrong Nodes
- Check node labels and selectors
- Verify affinity rules are correct
- Review taint/toleration configuration
- Check node capacity and resource availability
Uneven Pod Distribution
- Use topology spread constraints
- Implement pod anti-affinity
- Review node capacity and resource requests
- Consider node affinity preferences
Topics
- Requests & Limits - CPU and memory resource requirements
- Affinity & Anti-Affinity - Advanced pod placement rules
- Taints & Tolerations - Node-level pod restrictions
- Topology Spread Constraints - Distributing pods across topology domains
- Priority & Preemption - Pod priority and preemption
- Pod Admission - Pod admission control and validation
See Also
- Deployments - Workloads that use scheduling constraints
- Nodes - Understanding Kubernetes nodes
- Services - How scheduling affects service discovery