🎯 Kubernetes Node Selection Process
Repository YAML Files:
- k8s/labs/scheduling/nodeselector/nodeselector.yaml — Pod using nodeSelector to target nodes by labels.
- k8s/labs/scheduling/nodeselector/notin-nodeselector.yaml — Pod with preferred node affinity using NotIn to avoid nodes labeled env=staging.
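The pattern in the second manifest, preferred (soft) node affinity with the `NotIn` operator, can be sketched as follows (the pod name and weight are illustrative; the env=staging label is the one described above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: notin-demo
spec:
  affinity:
    nodeAffinity:
      # Soft rule: the scheduler prefers, but does not require, a match
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: env
            operator: NotIn   # steer away from nodes labeled env=staging
            values:
            - staging
  containers:
  - name: app
    image: nginx:1.21
```

Because the rule is preferred rather than required, the pod still schedules onto an env=staging node if no other node is available.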
1. PreFilter: Check eligibility
2. Filter: Eliminate unsuitable nodes
3. Score: Rank remaining nodes
4. Bind: Assign pod to node
Scheduler Framework: Kubernetes uses a modular, pluggable framework with extensible phases. The scheduler evaluates pod specifications against cluster state to find the optimal node placement.
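Because the framework is pluggable, the plugins that run in each phase can be tuned through a KubeSchedulerConfiguration. A minimal sketch (NodeResourcesFit is a real framework plugin; the weight value is illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      # Give resource-fit scoring extra influence in the Score phase
      enabled:
      - name: NodeResourcesFit
        weight: 2
```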
📊 Key Decision Factors
- 💾 Resources: CPU, memory, storage availability
- 🧲 Affinity: node/pod preference rules
- 🚫 Taints: node restrictions & tolerations
- 🌐 Topology: zone/rack distribution
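Aside from resources, these factors are driven by metadata on the Node object itself. A sketch of a node carrying the labels and taint used in the examples on this page (the node name worker-1 is hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    disktype: ssd                            # matched by nodeSelector
    topology.kubernetes.io/zone: us-west-1a  # matched by node affinity
spec:
  taints:
  # Pods need a matching toleration to schedule here
  - key: dedicated
    value: database
    effect: NoSchedule
```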
🔽 Node Filtering Process
- Available Nodes: 100 (all cluster nodes)
- After Resource Check: 60 (nodes with sufficient CPU/memory)
- After Taints Filter: 40 (nodes with matching tolerations)
- After Affinity: 15 (nodes matching labels/affinity)
- Best Scored Node: 1 (optimal node selected)
🌳 Example: Scheduling Decision
❌ Node1: Insufficient memory (has 1GB, needs 2GB)
❌ Node2: Taint key=gpu:NoSchedule (pod has no toleration)
✅ Node3: All checks passed, score: 85
✅ Node4: All checks passed, score: 92 → Selected!
❌ Node5: Missing label disktype=ssd (required by nodeSelector)
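Node2 above is rejected only because the pod lacks a toleration for its gpu taint. Adding one to the pod spec would clear that filter; a sketch of the fragment:

```yaml
tolerations:
- key: "gpu"
  operator: "Exists"   # tolerate the gpu taint regardless of its value
  effect: "NoSchedule"
```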
📄 Pod with Node Selection Constraints
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-selection-demo
spec:
  # Simple node selector (filter phase)
  nodeSelector:
    disktype: ssd
  # Tolerations allow scheduling on tainted nodes
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "database"
    effect: "NoSchedule"
  # Node affinity for advanced selection
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "topology.kubernetes.io/zone"
            operator: "In"
            values:
            - "us-west-1a"
            - "us-west-1b"
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        memory: "2Gi"
        cpu: "1000m"
      limits:
        memory: "4Gi"
        cpu: "2000m"
```
💡 Key Insight: The scheduler only places a pod on a node that satisfies ALL of its hard constraints. Scheduling decisions are based on resource requests, not limits. nodeSelector and required node affinity filter out ineligible nodes; scoring then picks the best match among those that remain.