
🚀 Kubernetes Rolling Update Strategy

Interactive visualization of pod replacement during deployment updates

Step 0: Initial State

All pods are running Version 1. The deployment has 3 replicas.

Legend: Version 1 Pod / Version 2 Pod; pod states: Running, Starting, Terminating

📊 Deployment Configuration

Replicas: 3

Strategy: RollingUpdate

Zero Downtime Deployment

📈 maxSurge

Maximum number of pods allowed above the desired count

Value: 1 (the Kubernetes default is 25%)

Max Total: 4 pods

📉 maxUnavailable

Maximum number of pods that can be unavailable

Value: 1 (the Kubernetes default is 25%)

Min Available: 2 pods
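When maxSurge and maxUnavailable are given as percentages, Kubernetes rounds maxSurge up and maxUnavailable down before applying them. A small sketch of that arithmetic (the function name is my own; the rounding rules are from the Deployment documentation):

```python
import math

def rollout_bounds(replicas, max_surge, max_unavailable):
    """Compute (peak pods, minimum available pods) during a rolling update.

    max_surge / max_unavailable may be absolute ints or percentage strings
    like "25%". Kubernetes rounds maxSurge up and maxUnavailable down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas + surge, replicas - unavailable

# The configuration above: 3 replicas, maxSurge=1, maxUnavailable=1
print(rollout_bounds(3, 1, 1))          # (4, 2): peak 4 pods, min 2 available
# With the 25% defaults: surge rounds up to 1, unavailable rounds down to 0
print(rollout_bounds(3, "25%", "25%"))  # (4, 3)
```

This is why the panel above shows a peak of 4 pods and a minimum of 2 available.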
🌐 Service Load Balancing

Version 1 Traffic: 100%
Version 2 Traffic: 0%

📦 ReplicaSet v1 (Old): 3 pods
📦 ReplicaSet v2 (New): 0 pods
Rolling Update Flow Diagram
MaxSurge and MaxUnavailable Scenarios

Balanced Strategy

maxSurge: 1
maxUnavailable: 1
A balanced approach. Allows 1 extra pod and 1 unavailable pod at a time.

Conservative Strategy

maxSurge: 1
maxUnavailable: 0
Zero downtime guarantee. Always maintains full capacity.

Aggressive Strategy

maxSurge: 2
maxUnavailable: 2
Fast rollout. Higher resource usage during update.
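As a sketch, the conservative strategy above would sit in a Deployment spec like this (only the strategy stanza is shown):

```yaml
# Conservative: never dip below the desired replica count
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # one extra pod may be created during the update
    maxUnavailable: 0  # all desired replicas must stay available
```

Swapping in the values from the other two cards yields the balanced and aggressive variants.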

Resource Impact

Peak Pods: 4
Min Available: 2
Resource Overhead: 33%

Rollout Characteristics

Update Speed: Moderate
Risk Level: Low
Downtime Risk: None
Traffic Flow During Rolling Update

Traffic Distribution Pattern

As new pods become ready, the Service automatically adds them to its endpoint pool. Traffic is distributed evenly across all healthy pods.

Automatic Load Balancing

Gradual Shift

Old pods continue serving traffic until they are terminated, so established connections can drain gracefully before shutdown.

Zero Connection Loss

Service Discovery

The Service selector matches pods based on labels, not version. Both old and new pods are seamlessly integrated.

Label-Based Routing
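A minimal Service sketch of this label-based routing (the Service name is assumed; the selector matches the `app: apache` label from the Deployment example below and deliberately includes no version label, so pods from both ReplicaSets are selected while the rollout is in flight):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-service   # assumed name, not from the example Deployment
spec:
  selector:
    app: apache          # matches v1 and v2 pods alike; no version label
  ports:
    - port: 80
      targetPort: 80
```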
Health Check Integration During Rollout

Readiness Probe

Purpose: Determines when a pod is ready to receive traffic

Impact: New pods only receive traffic after passing readiness checks

Gates Traffic Flow

Liveness Probe

Purpose: Detects unhealthy pods and triggers restarts

Impact: Ensures only healthy pods remain in rotation

Automatic Recovery

Rollout Safety

Behavior: Controller waits for new pods to pass readiness before terminating old pods

Result: Prevents premature pod termination

Protected Rollout
Pod Lifecycle with Health Checks
πŸ₯ Health-Monitored Pods
3 pods
apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - name: app image: myapp:v2 readinessProbe: httpGet: path: /ready port: 8080 initialDelaySeconds: 5 periodSeconds: 3 livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 15 periodSeconds: 10
πŸ“ YAML Configuration Example
apiVersion: apps/v1 kind: Deployment metadata: name: apache-deployment spec: replicas: 3 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 # Allow 1 extra pod during update maxUnavailable: 1 # Max 1 pod can be unavailable selector: matchLabels: app: apache template: metadata: labels: app: apache spec: containers: - name: apache-container image: karthickponcloud/k8slabs:apache_v2 # Updated from v1 to v2 ports: - containerPort: 80
ℹ️ How Rolling Update Works

Update Process

1. Initial State: All 3 pods running v1 (Old ReplicaSet)

2. New ReplicaSet Created: When deployment spec changes, a new ReplicaSet is created for v2

3. Gradual Scaling: The new ReplicaSet is scaled up (respecting maxSurge) while the old one is scaled down (respecting maxUnavailable), step by step until all replicas run v2

4. Service Routing: Kubernetes Service automatically routes traffic to all healthy pods (both v1 and v2) during the update

5. Completion: Old ReplicaSet scaled to 0 (but retained for rollback), all traffic goes to v2
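The scaling loop in steps 2–3 can be sketched as a small simulation (simplified: every new pod is assumed to pass its readiness probe in the same step it is created, which is why old pods can be removed immediately):

```python
def rolling_update(replicas=3, max_surge=1, max_unavailable=1):
    """Simulate ReplicaSet scaling during a RollingUpdate.

    Returns the sequence of (old_pods, new_pods) states. Simplified: each
    created pod is assumed to become ready in the same step.
    """
    if max_surge == 0 and max_unavailable == 0:
        # Kubernetes rejects this combination: the rollout could never progress.
        raise ValueError("maxSurge and maxUnavailable cannot both be 0")

    old, new = replicas, 0
    states = [(old, new)]
    while old > 0:
        # Scale up the new ReplicaSet, bounded by maxSurge and the target count.
        create = min(replicas + max_surge - (old + new), replicas - new)
        new += create
        states.append((old, new))
        # Scale down the old ReplicaSet; available pods (all are ready here)
        # must not drop below replicas - maxUnavailable.
        remove = min(old, old + new - (replicas - max_unavailable))
        old -= remove
        states.append((old, new))
    return states

# 3 replicas, maxSurge=1, maxUnavailable=1, as in the example above
print(rolling_update())  # [(3, 0), (3, 1), (1, 1), (1, 3), (0, 3)]
```

Note how the total never exceeds 4 pods and never drops below 2 available, matching the resource-impact figures earlier on the page.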

Key Benefits

✅ Zero Downtime: Service remains available throughout the update

✅ Controlled Rate: maxSurge and maxUnavailable prevent resource spikes

✅ Easy Rollback: Old ReplicaSet retained for quick reversions

✅ Automatic Traffic Distribution: Service handles load balancing automatically