Step 0: Initial State
All pods are running Version 1. The deployment has 3 replicas.
Deployment Configuration
Replicas: 3
Strategy: RollingUpdate
Zero Downtime Deployment
maxSurge
Maximum pods above desired count
Value: 1 (the Deployment default is 25%, rounded up)
Max Total: 4 pods
maxUnavailable
Maximum pods that can be unavailable
Value: 1 (the Deployment default is 25%, rounded down)
Min Available: 2 pods
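The two limits above resolve to concrete pod counts. This is an illustrative Python sketch (not Kubernetes source code) of the documented rounding rules: a percentage maxSurge rounds up, a percentage maxUnavailable rounds down.

```python
import math

def resolve(value, replicas):
    """Resolve an absolute value or a percentage string against the replica count."""
    if isinstance(value, str) and value.endswith("%"):
        return int(value[:-1]) / 100 * replicas  # fractional; caller applies rounding
    return value

def pod_bounds(replicas, max_surge, max_unavailable):
    """Return (max total pods, min available pods) during a rolling update."""
    surge = math.ceil(resolve(max_surge, replicas))            # maxSurge rounds UP
    unavailable = math.floor(resolve(max_unavailable, replicas))  # maxUnavailable rounds DOWN
    return replicas + surge, replicas - unavailable

# The demo's configuration: 3 replicas, maxSurge: 1, maxUnavailable: 1
print(pod_bounds(3, 1, 1))          # -> (4, 2): at most 4 pods, at least 2 available
# The Deployment defaults (25% / 25%) with 3 replicas:
print(pod_bounds(3, "25%", "25%"))  # -> (4, 3)
```

Note that with only 3 replicas, the 25% default for maxUnavailable rounds down to 0, which is why the demo pins both values to the absolute number 1.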
Rolling Update Flow Diagram
MaxSurge and MaxUnavailable Scenarios
Balanced Strategy
maxSurge: 1 maxUnavailable: 1
A balanced approach. Allows 1 extra pod and 1 unavailable pod at a time.
Conservative Strategy
maxSurge: 1 maxUnavailable: 0
Zero downtime guarantee. Always maintains full capacity.
Aggressive Strategy
maxSurge: 2 maxUnavailable: 2
Fast rollout. Higher resource usage during update.
Resource Impact
Peak Pods: 4
Min Available: 2
Resource Overhead: 33%
Rollout Characteristics
Update Speed: Moderate
Risk Level: Low
Downtime Risk: None
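The figures in the panels above follow directly from the strategy values. A small sketch (assuming absolute maxSurge/maxUnavailable values, as in this demo) comparing the three strategies:

```python
def rollout_profile(replicas, max_surge, max_unavailable):
    """Derive the rollout resource figures from the strategy settings."""
    return {
        "peak_pods": replicas + max_surge,
        "min_available": replicas - max_unavailable,
        "overhead_pct": round(max_surge / replicas * 100),
    }

strategies = {
    "balanced":     (1, 1),  # maxSurge: 1, maxUnavailable: 1
    "conservative": (1, 0),  # maxSurge: 1, maxUnavailable: 0
    "aggressive":   (2, 2),  # maxSurge: 2, maxUnavailable: 2
}
for name, (surge, unavailable) in strategies.items():
    print(name, rollout_profile(3, surge, unavailable))
```

For the balanced strategy with 3 replicas this reproduces the panel: peak 4 pods, minimum 2 available, 33% overhead.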
Traffic Flow During Rolling Update
Traffic Distribution Pattern
As new pods become ready, the Service automatically adds them to its endpoint pool. Traffic is distributed evenly across all healthy pods.
Automatic Load Balancing
Gradual Shift
Old pods continue serving traffic until they are terminated. No connection drops occur during the transition.
Zero Connection Loss
Service Discovery
The Service selector matches pods based on labels, not version. Both old and new pods are seamlessly integrated.
Label-Based Routing
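Label-based routing can be sketched as a selector match: the Service ignores the image version entirely and keeps every ready pod whose labels match in its endpoint pool. The pod data below is illustrative, not real cluster state.

```python
def endpoints(pods, selector):
    """Pods whose labels contain every selector key/value and that pass readiness."""
    return [
        p["name"] for p in pods
        if all(p["labels"].get(k) == v for k, v in selector.items()) and p["ready"]
    ]

pods = [
    {"name": "apache-old-1", "labels": {"app": "apache"}, "version": "v1", "ready": True},
    {"name": "apache-old-2", "labels": {"app": "apache"}, "version": "v1", "ready": True},
    {"name": "apache-new-1", "labels": {"app": "apache"}, "version": "v2", "ready": False},
]

# Only ready pods receive traffic; both versions are eligible because the
# selector matches on app: apache, not on the version.
print(endpoints(pods, {"app": "apache"}))  # -> ['apache-old-1', 'apache-old-2']
```

Once `apache-new-1` passes its readiness probe, it appears in the pool alongside the v1 pods with no change to the Service itself.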
Health Check Integration During Rollout
Readiness Probe
Purpose: Determines when a pod is ready to receive traffic
Impact: New pods only receive traffic after passing readiness checks
Gates Traffic Flow
Liveness Probe
Purpose: Detects unhealthy pods and triggers restarts
Impact: Ensures only healthy pods remain in rotation
Automatic Recovery
Rollout Safety
Behavior: Controller waits for new pods to pass readiness before terminating old pods
Result: Prevents premature pod termination
Protected Rollout
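The safety behavior above can be modeled as a guard: the controller only scales down when doing so keeps availability above the floor set by maxUnavailable. This is a toy decision model, not controller code.

```python
def rollout_step(old_ready, new_ready, new_total,
                 replicas=3, max_surge=1, max_unavailable=1):
    """One reconciliation decision in a toy rolling-update model."""
    total = old_ready + new_total
    available = old_ready + new_ready
    if new_total < replicas and total < replicas + max_surge:
        return "create new pod"        # surge budget still allows another v2 pod
    if old_ready > 0 and available - 1 >= replicas - max_unavailable:
        return "terminate old pod"     # scaling down keeps availability above the floor
    return "wait for readiness"        # terminating now would breach minimum availability

print(rollout_step(old_ready=3, new_ready=0, new_total=0))  # create new pod
print(rollout_step(old_ready=2, new_ready=0, new_total=2))  # wait for readiness
print(rollout_step(old_ready=2, new_ready=1, new_total=2))  # terminate old pod
```

The middle case is the key one: with two v2 pods created but none ready, the controller blocks further scale-down until a readiness probe passes.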
Pod Lifecycle with Health Checks
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        image: myapp:v2
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
```
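Under these probe settings, the check schedule can be sketched as follows. This is an approximation: real kubelet timing also involves timeouts, success/failure thresholds, and jitter.

```python
def probe_times(initial_delay, period, horizon):
    """Approximate probe execution times (seconds since container start)."""
    return list(range(initial_delay, horizon + 1, period))

# readinessProbe: initialDelaySeconds: 5, periodSeconds: 3
print(probe_times(5, 3, 20))    # -> [5, 8, 11, 14, 17, 20]
# livenessProbe: initialDelaySeconds: 15, periodSeconds: 10
print(probe_times(15, 10, 40))  # -> [15, 25, 35]
```

The readiness probe fires early and often, so a new pod joins the Service quickly; the liveness probe starts later to give the app time to boot before restarts are considered.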
YAML Configuration Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache-container
        image: karthickponcloud/k8slabs:apache_v2
        ports:
        - containerPort: 80
```
How Rolling Update Works
Update Process
1. Initial State: All 3 pods running v1 (Old ReplicaSet)
2. New ReplicaSet Created: When deployment spec changes, a new ReplicaSet is created for v2
3. Gradual Scaling:
New ReplicaSet scales up: 0 → 1 pod (total: 4 pods, respecting maxSurge)
Old ReplicaSet scales down: 3 → 2 pods (total: 3 pods, respecting maxUnavailable)
Pattern continues until all pods are v2
4. Service Routing: Kubernetes Service automatically routes traffic to all healthy pods (both v1 and v2) during the update
5. Completion: Old ReplicaSet scaled to 0 (but retained for rollback), all traffic goes to v2
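The scaling pattern in steps 2–3 can be simulated end to end. This toy loop (not controller code) alternates surge and scale-down while enforcing both limits, assuming each new pod becomes ready promptly:

```python
def simulate_rollout(replicas=3, max_surge=1, max_unavailable=1):
    """Yield (old, new) pod counts for each step of a toy rolling update."""
    old, new = replicas, 0
    states = [(old, new)]
    while old > 0 or new < replicas:
        if new < replicas and old + new < replicas + max_surge:
            new += 1                  # surge: create one v2 pod
        elif old > 0 and old + new - 1 >= replicas - max_unavailable:
            old -= 1                  # scale down: terminate one v1 pod
        states.append((old, new))
    return states

for old, new in simulate_rollout():
    total = old + new
    assert 2 <= total <= 4            # never above maxSurge, never below min available
    print(f"v1={old} v2={new} total={total}")
```

The trace climbs to 4 pods, drops back to 3, and repeats until all three replicas run v2, matching the "gradual scaling" pattern described above.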
Key Benefits
✅ Zero Downtime: Service remains available throughout the update
✅ Controlled Rate: maxSurge and maxUnavailable prevent resource spikes
✅ Easy Rollback: Old ReplicaSet retained for quick reversions
✅ Automatic Traffic Distribution: Service handles load balancing automatically