🧩 DaemonSet Distribution Pattern

Ensuring One Pod Per Node in Kubernetes

📊 Core Concept: One Pod Per Node

Initial Cluster State
🖥️ Node-1
📦 DS Pod
🖥️ Node-2
📦 DS Pod
🖥️ Node-3
📦 DS Pod
  • DaemonSet automatically schedules exactly one pod on each node
  • No manual scheduling required - fully automated by Kubernetes
  • Ideal for cluster-wide node-level services
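Unlike a Deployment, a DaemonSet has no `replicas` field; the pod count is derived from the number of eligible nodes. A minimal manifest might look like this sketch (the name `example-ds` and the busybox image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds          # illustrative name
spec:
  selector:
    matchLabels:
      app: example-ds
  template:                 # no replicas field: one pod per eligible node
    metadata:
      labels:
        app: example-ds
    spec:
      containers:
      - name: agent
        image: busybox:1.36
        command: ["sleep", "infinity"]
```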

➕ Automatic Pod Creation on Node Addition

Before
🖥️ Node-1
📦 DS Pod
🖥️ Node-2
📦 DS Pod
🖥️ Node-3
📦 DS Pod
After (New Node Added)
🖥️ Node-1
📦 DS Pod
🖥️ Node-2
📦 DS Pod
🖥️ Node-3
📦 DS Pod
✨ Node-4
📦 DS Pod ✨
  • New node joins cluster → DaemonSet controller detects it
  • Pod automatically created and scheduled on new node
  • No manual intervention required
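This behavior is visible in the DaemonSet's status, which the controller keeps in sync with the current node set (field names as reported by `kubectl get daemonset <name> -o yaml`; the values below are illustrative for a four-node cluster):

```yaml
status:
  desiredNumberScheduled: 4   # rises from 3 to 4 when Node-4 joins
  currentNumberScheduled: 4
  numberReady: 4
  numberAvailable: 4
```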

➖ Automatic Pod Cleanup on Node Removal

Before
🖥️ Node-1
📦 DS Pod
🖥️ Node-2
📦 DS Pod
🖥️ Node-3
📦 DS Pod
After (Node Removed)
🖥️ Node-1
📦 DS Pod
🖥️ Node-2
📦 DS Pod
🗑️ Node-3
Pod Terminated
  • Node removed or drained → Pod gracefully terminated
  • Automatic cleanup prevents orphaned pods
  • Maintains cluster consistency

🎯 Node Selector: Targeted Deployment

DaemonSet with nodeSelector: disktype=ssd
🖥️ Node-1
disktype=ssd
📦 DS Pod
🖥️ Node-2
disktype=hdd
No Pod
🖥️ Node-3
disktype=ssd
📦 DS Pod
📌 Node Selector Use Case

Deploy storage drivers only on nodes with SSD disks, or monitoring agents only on production nodes.
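The scenario above corresponds to a `nodeSelector` in the DaemonSet's pod template; it assumes the target nodes already carry a matching label (for example, applied with `kubectl label node node-1 disktype=ssd`):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # pods are scheduled only on nodes labeled disktype=ssd
```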

💼 Common Use Cases

📊 Monitoring Agents
Prometheus Node Exporter, Datadog Agent - collect metrics from every node
📝 Log Collection
Fluentd, Filebeat, Logstash - gather logs from node and container filesystems
🌐 Network Plugins
CNI plugins, kube-proxy - manage pod networking on each node
💾 Storage Drivers
CSI node drivers - enable persistent volume mounting capabilities
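A log-collection DaemonSet typically reaches node and container logs via `hostPath` volumes, along the lines of this pod-spec fragment (the paths shown are conventional Linux locations; adjust for your distribution):

```yaml
      containers:
      - name: log-collector
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # node logs; container logs live under /var/log/pods
```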

📝 YAML Example: Monitoring Agent DaemonSet

daemonset-monitoring.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitoring-agent
  namespace: monitoring
  labels:
    app: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      # Optional: Target specific nodes
      nodeSelector:
        monitoring: enabled
      containers:
      - name: agent
        image: prom/node-exporter:v1.7.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9100
          name: metrics
        # Access host metrics
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      # Tolerate the control-plane taint so pods also run on control-plane nodes
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
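When the image of a DaemonSet like this is updated, rollout pace is governed by `spec.updateStrategy`. A sketch that limits disruption to one node at a time (`RollingUpdate` is the default strategy type):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods on one node at a time
```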

🎨 Interactive Flow Diagram

DaemonSet Lifecycle Flow
  • DaemonSet created: kubectl apply creates the DaemonSet resource
  • Controller scans cluster: the DaemonSet controller finds 4 eligible nodes
  • Pods created: Pod 1 → Node-1, Pod 2 → Node-2, Pod 3 → Node-3, Pod 4 → Node-4
  • Node added: Node-5 joins the cluster ✨ → Pod 5 auto-created on the new node
  • Complete: 5 pods on 5 nodes — the controller ensures exactly one pod per eligible node at all times

DaemonSet + Node Selector + Taints
DaemonSet config: nodeSelector type=monitoring, plus a toleration for the node-role taint.
  • Node-1 (worker, type=monitoring, no taints) → ✓ Pod scheduled
  • Node-2 (worker, type=logging) → ✗ No pod (label does not match selector)
  • Node-3 (control plane, type=monitoring, node-role taint) → ✓ Pod scheduled (taint tolerated)
  • Node-4 (worker, type=monitoring, taint "special") → ⚠ Blocked (untolerated taint)
  • Node-5 (worker, type=monitoring, no taints) → ✓ Pod scheduled
Result: 3 pods scheduled — Nodes 1, 3, and 5 match the selector and tolerate their taints; Node 2 has the wrong label and Node 4 carries an untolerated taint.
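The selector-plus-taints scenario could be expressed like this (label values and taint keys mirror the diagram; `type: monitoring` is an example label):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        type: monitoring          # Node-2 (type=logging) is excluded
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists          # allows scheduling onto the tainted Node-3
        effect: NoSchedule
        # Node-4's "special" taint has no matching toleration, so it stays blocked
```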