
Kubernetes Control Plane & Worker Upgrade Sequence

Complete step-by-step upgrade procedure with commands

Critical Pre-Upgrade Best Practices

⚠️ Critical sequence rules:
• The control plane MUST be upgraded before any worker nodes
• Worker nodes MUST be upgraded one at a time to maintain workload availability

Complete upgrade sequence: 0. Backup etcd (pre-flight) → 1. Control plane (kubeadm, kubelet, kubectl) → 2. Worker-1 drain & upgrade → 3. Worker-2 drain & upgrade → 4. Validate (verify cluster).

Typical timeline: control plane upgraded by ~5 min, worker-1 done by ~10 min, worker-2 done by ~15 min, complete by ~17 min.

Phase 0: Pre-Flight Backup & Preparation

Critical first step: back up the cluster state

1. Backup etcd Database
Create a snapshot of etcd to enable rollback if the upgrade fails
⚠️ CRITICAL: Without this backup, you cannot roll back a failed upgrade
Set environment variables:
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
Create snapshot (note `sudo -E`: plain sudo strips the exported ETCDCTL_* variables):
sudo mkdir -p /backup
sudo -E etcdctl snapshot save /backup/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db
Verify backup:
sudo -E etcdctl snapshot status /backup/etcd-snapshot-*.db
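The backup commands above can be wrapped in one small script. This is an unofficial sketch: the `snapshot_path` and `backup_etcd` helper names are invented here, the `/backup` directory and the ETCDCTL_* environment come from the steps above, and `sudo -E` is assumed to be permitted.

```shell
#!/usr/bin/env bash
# Sketch: timestamped etcd snapshot with immediate verification.
set -euo pipefail

# Build the snapshot path from a base directory and the current time,
# matching the /backup/etcd-snapshot-YYYYmmdd-HHMMSS.db convention above.
snapshot_path() {
  local dir="$1"
  echo "${dir}/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"
}

backup_etcd() {
  local dir="${1:-/backup}"
  local snap
  snap="$(snapshot_path "$dir")"
  sudo mkdir -p "$dir"
  # -E keeps the exported ETCDCTL_* variables visible under sudo.
  sudo -E etcdctl snapshot save "$snap"
  sudo -E etcdctl snapshot status "$snap"
}
```

Running `backup_etcd` with no argument uses `/backup`; pass another directory to store snapshots elsewhere.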
2. Check Cluster Health
Verify current cluster status before proceeding
kubectl version
kubectl get nodes -o wide
kubectl get pods -A
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
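To make the node check a hard pass/fail gate, the `kubectl get nodes` output can be inspected mechanically. A minimal sketch, with hypothetical helper names; the functions read `kubectl get nodes --no-headers` output from stdin, so they can be tried without a live cluster:

```shell
#!/usr/bin/env bash
# Sketch: fail the pre-flight check if any node reports a STATUS
# other than Ready (column 2 of `kubectl get nodes --no-headers`).

not_ready_count() {
  awk '$2 != "Ready" { n++ } END { print n+0 }'
}

preflight_nodes_ok() {
  # Usage: kubectl get nodes --no-headers | preflight_nodes_ok
  [ "$(not_ready_count)" -eq 0 ]
}
```

Note that a cordoned node shows `Ready,SchedulingDisabled`, which this sketch also flags, which is usually what you want before starting an upgrade.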

Phase 1: Control Plane Upgrade

Upgrade control plane components first

1. Upgrade kubeadm on the Control Plane Node
Install target version of kubeadm
For Ubuntu/Debian:
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm='1.32.0-*'
sudo apt-mark hold kubeadm
kubeadm version
For RHEL/CentOS:
sudo yum install -y kubeadm-'1.32.0-*' --disableexcludes=kubernetes
kubeadm version
2. Drain the Control Plane Node
Safely evict workloads from control plane node
kubectl drain <control-plane-node> --ignore-daemonsets --delete-emptydir-data
This marks the node as unschedulable and evicts pods
3. Plan and Apply the Upgrade
Review upgrade plan and apply to control plane
Check upgrade plan:
sudo kubeadm upgrade plan
Apply upgrade (first control plane node):
sudo kubeadm upgrade apply v1.32.0
For additional control plane nodes:
sudo kubeadm upgrade node
4. Upgrade kubelet and kubectl
Upgrade kubelet and kubectl on control plane node
For Ubuntu/Debian:
sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet='1.32.0-*' kubectl='1.32.0-*'
sudo apt-mark hold kubelet kubectl
For RHEL/CentOS:
sudo yum install -y kubelet-'1.32.0-*' kubectl-'1.32.0-*' --disableexcludes=kubernetes
5. Restart kubelet and Uncordon the Node
Restart kubelet service and bring node back online
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <control-plane-node>
kubectl get nodes
Control plane node should now show v1.32.0

Phase 2: Worker Node Upgrade (One at a Time)

Upgrade each worker node sequentially

⚠️ Repeat this phase for EACH worker node, one at a time to maintain workload availability
1. Cordon and Drain the Worker Node
Prevent new pods and evict existing workloads (run from control plane)
kubectl cordon <worker-node-name>
kubectl drain <worker-node-name> --ignore-daemonsets --delete-emptydir-data
Workloads will be rescheduled to other available nodes
2. SSH to the Worker Node and Upgrade kubeadm
Connect to worker node and install target kubeadm version
For Ubuntu/Debian:
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm='1.32.0-*'
sudo apt-mark hold kubeadm
For RHEL/CentOS:
sudo yum install -y kubeadm-'1.32.0-*' --disableexcludes=kubernetes
3. Upgrade the Node Configuration
Apply kubeadm upgrade to worker node configuration
sudo kubeadm upgrade node
4. Upgrade kubelet and kubectl
Install target versions of kubelet and kubectl
For Ubuntu/Debian:
sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet='1.32.0-*' kubectl='1.32.0-*'
sudo apt-mark hold kubelet kubectl
For RHEL/CentOS:
sudo yum install -y kubelet-'1.32.0-*' kubectl-'1.32.0-*' --disableexcludes=kubernetes
5. Restart the kubelet Service
Reload systemd and restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
6. Uncordon the Node and Verify
Bring node back online and verify upgrade (run from control plane)
kubectl uncordon <worker-node-name>
kubectl get nodes -o wide
Worker node should now show v1.32.0 and Ready status
Repeat steps 1-6 for each remaining worker node
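The per-worker sequence above can be sketched as a strictly sequential loop. This is a dry-run illustration, not a drop-in tool: `upgrade_worker` is a hypothetical helper, the node names `worker-1` and `worker-2` come from the overview, passwordless SSH is assumed, and the package-install steps (2 and 4) are elided from the `ssh` command for brevity.

```shell
#!/usr/bin/env bash
# Sketch: upgrade workers one at a time, waiting for each to return.
# RUN="echo" keeps this a dry run that only prints each command;
# set RUN="" to execute for real.
set -u
RUN="echo"

upgrade_worker() {
  local node="$1"
  $RUN kubectl cordon "$node"
  $RUN kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  # Package upgrades (kubeadm, kubelet, kubectl) omitted here; see
  # steps 2 and 4 above for the apt/yum commands to run on the node.
  $RUN ssh "$node" "sudo kubeadm upgrade node && \
    sudo systemctl daemon-reload && sudo systemctl restart kubelet"
  $RUN kubectl uncordon "$node"
  # Block until the node is Ready again before touching the next one.
  $RUN kubectl wait --for=condition=Ready "node/$node" --timeout=5m
}

# Strictly sequential: worker-2 starts only after worker-1 is back.
for node in worker-1 worker-2; do
  upgrade_worker "$node"
done
```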

Phase 3: Post-Upgrade Validation

Verify cluster health and functionality

1. Verify All Nodes Are Upgraded
Check that all nodes are running the target version
kubectl version
kubectl get nodes -o wide
All nodes should show v1.32.0 and Ready status
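This version check can be automated by parsing the VERSION column (column 5 of the default `kubectl get nodes` output). A sketch, with a hypothetical `all_nodes_at_version` helper that reads the node list from stdin so it can be tried offline:

```shell
#!/usr/bin/env bash
# Sketch: succeed only if every node reports the target kubelet
# version in `kubectl get nodes --no-headers` output (column 5).
all_nodes_at_version() {
  local want="$1"
  awk -v want="$want" '$5 != want { bad++ } END { exit bad+0 }'
}

# Usage:
#   kubectl get nodes --no-headers | all_nodes_at_version v1.32.0 \
#     && echo "all nodes upgraded"
```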
2. Check System Components
Verify all system pods are running
kubectl get pods -n kube-system
kubectl get pods -A
3. Test a Sample Workload
Deploy a test pod to validate cluster functionality
kubectl run nginx-test --image=nginx:latest --port=80
kubectl expose pod nginx-test --port=80 --type=NodePort
kubectl get svc nginx-test

# Test access, then cleanup:
kubectl delete pod nginx-test
kubectl delete svc nginx-test
4. Check for Issues
Review recent events and verify no deprecated APIs
kubectl get events -A --sort-by='.lastTimestamp' | tail -20
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
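Each `apiserver_requested_deprecated_apis` series is a gauge set to 1 for a deprecated group/version that has been requested since the API server started. A small sketch (hypothetical `deprecated_api_hits` helper) that counts such series from metrics text piped in:

```shell
#!/usr/bin/env bash
# Sketch: count deprecated API group/versions still being requested,
# from `kubectl get --raw /metrics` output on stdin.
deprecated_api_hits() {
  awk '/^apiserver_requested_deprecated_apis/ && $NF+0 > 0' | wc -l
}

# Usage:
#   kubectl get --raw /metrics | deprecated_api_hits
# A result of 0 means no deprecated APIs have been requested.
```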