
Service Load Balancing During Rolling Update

Zero-Downtime Deployments: During a rolling update, the Service continuously routes traffic to healthy pods. As new v2 pods become ready, they are added to the endpoint list; old v1 pods are removed from it before they are gracefully terminated. Because at least one pod is ready at every moment, clients experience no downtime throughout the rollout.
Stage 1: Before Rollout - All v1 Pods
Service: apache-service (selector: app=apache), routing 100% of traffic to v1

  • Pod 1 (v1): Ready ✓
  • Pod 2 (v1): Ready ✓

Endpoints: the Service tracks 2 ready endpoints

  • 10.244.1.5:80 (pod-1-v1)
  • 10.244.1.6:80 (pod-2-v1)
Initial Deployment (apache_v1)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache-container
        image: karthickponcloud/k8slabs:apache_v1
        ports:
        - containerPort: 80

Key Points:

  • Service selector matches label app=apache
  • Both pods are running version 1 and are healthy
  • Service distributes traffic equally across both v1 pods
  • Endpoints are automatically managed by Kubernetes
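
A pod only counts as "ready" for endpoint purposes if its readiness checks pass; with no readinessProbe defined (as in the Deployment above), Kubernetes considers a container ready as soon as it starts, which can briefly send traffic to a pod that is not yet serving. A minimal sketch of adding an HTTP readiness probe to the container spec, assuming the Apache image serves a page at `/` on port 80:

```yaml
      containers:
      - name: apache-container
        image: karthickponcloud/k8slabs:apache_v1
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /        # assumption: the default Apache page responds here
            port: 80
          initialDelaySeconds: 5   # wait before the first probe
          periodSeconds: 10        # re-check every 10 seconds
```

With this in place, a pod is added to the Service's endpoints only after the probe succeeds, which is what makes the zero-downtime behavior in the stages below reliable.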
Stage 2: During Rollout - Mixed v1/v2 Pods
Service: apache-service (selector: app=apache), routing ~50% of traffic to v1 and ~50% to v2

  • Pod 1 (v1): Ready ✓
  • Pod 2 (v1): Terminating...
  • Pod 3 (v2): Ready ✓

Endpoints: the Service tracks 2 ready endpoints (mixed versions)

  • 10.244.1.5:80 (pod-1-v1) - still serving
  • 10.244.1.7:80 (pod-3-v2) - newly added
  • 10.244.1.6:80 (pod-2-v1) - removed from endpoints
Rollout Update Command
kubectl set image deployment/apache-deployment \
  apache-container=karthickponcloud/k8slabs:apache_v2

# Or update the deployment YAML and apply:
kubectl apply -f deployment.yaml

Key Points:

  • Rolling update creates new v2 pods while keeping v1 pods running
  • New v2 pods are added to endpoints only after passing readiness probes
  • Old v1 pods are gracefully terminated (removed from endpoints first)
  • Service routes traffic to both v1 and v2 pods during transition
  • Zero downtime: Service always has healthy pods to route to
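
How many v1 and v2 pods coexist during this transition is controlled by the Deployment's update strategy. The Deployment above uses the defaults (maxSurge: 25%, maxUnavailable: 25%); a sketch of pinning the strategy explicitly so the Service never drops below the desired replica count:

```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most 1 extra pod above replicas during the rollout
      maxUnavailable: 0  # never take a ready pod away before its replacement is ready
```

With maxUnavailable: 0, a v1 pod is only terminated after a v2 pod has passed readiness and joined the endpoints, matching the sequence shown in Stage 2.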
Stage 3: After Rollout - All v2 Pods
Service: apache-service (selector: app=apache), routing 100% of traffic to v2

  • Pod 3 (v2): Ready ✓
  • Pod 4 (v2): Ready ✓

Endpoints: the Service tracks 2 ready endpoints (all v2)

  • 10.244.1.7:80 (pod-3-v2)
  • 10.244.1.8:80 (pod-4-v2)
Verify Rollout
kubectl rollout status deployment/apache-deployment
# Output: deployment "apache-deployment" successfully rolled out

kubectl get pods
# All pods show the new v2 image

kubectl describe service apache-service
# Endpoints show only v2 pod IPs

Key Points:

  • All v1 pods have been replaced by v2 pods
  • Service now routes 100% traffic to v2 pods
  • Same Service (apache-service) used throughout - no configuration change
  • If issues occur, roll back with: kubectl rollout undo deployment/apache-deployment
  • Service selector remains app=apache - works for both versions
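
One practical refinement for graceful termination: removing a terminating pod from the endpoints and updating every node's proxy rules takes a moment, so a pod that exits instantly can still receive a few requests. A common pattern is a short preStop delay, sketched here under the assumption that the image ships a `sleep` binary:

```yaml
      containers:
      - name: apache-container
        image: karthickponcloud/k8slabs:apache_v2
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "5"]   # give endpoint removal time to propagate
      terminationGracePeriodSeconds: 30  # must exceed the preStop delay
```

The pod is marked not-ready (and dropped from endpoints) as soon as termination begins; the sleep simply keeps the container serving in-flight requests while that change propagates.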
Service Definition (Unchanged Throughout Rollout)
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache    # Matches BOTH v1 and v2 pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort   # Accessible externally via NodePort

Service Load Balancing Behavior: