ReplicaSet vs DaemonSet
This page answers a very common classroom question: both controllers create Pods automatically, so what is the real difference? The clean answer is that ReplicaSet maintains a desired number of Pods, while DaemonSet maintains Pod presence on nodes.
This is the main side-by-side view you can use directly in class.
ReplicaSet focuses on app availability. If the desired count is 4 and one Pod disappears, it creates a replacement. It does not care which node gets that replacement, as long as the total count is restored.
DaemonSet focuses on node presence. If a new node joins, the controller schedules a Pod there automatically. If a node leaves, the corresponding Pod leaves with it.
Each controller's thought process reduces to a single question, plus one shared concept:
ReplicaSet is always asking whether the current Pod count matches the desired Pod count.
DaemonSet is always asking whether each matching node has its required Pod.
Both controllers reconcile desired state, but the desired state itself is different.
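The difference in desired state can be sketched in a few lines of Python. This is a toy model, not Kubernetes code; the function names are invented for illustration:

```python
# Toy model of the two reconcile ideas (not real Kubernetes code).
# A ReplicaSet-style loop compares a total count; a DaemonSet-style
# loop checks each matching node individually.

def reconcile_replicaset(current_pods: int, desired: int) -> int:
    """Return how many Pods to create (positive) or delete (negative)."""
    return desired - current_pods

def reconcile_daemonset(nodes: list[str], nodes_with_pod: set[str]) -> list[str]:
    """Return the nodes that still need their Pod."""
    return [n for n in nodes if n not in nodes_with_pod]

print(reconcile_replicaset(current_pods=2, desired=4))       # 2 more Pods needed
print(reconcile_daemonset(["n1", "n2", "n3"], {"n1"}))       # n2 and n3 still need one
```

Note that the ReplicaSet loop never mentions nodes at all, while the DaemonSet loop never mentions a total count.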
The wrong choice usually happens when learners confuse app replicas with node-wide helpers.
This table is useful when you want a crisp verbal summary after the diagrams.
| Question | ReplicaSet | DaemonSet |
|---|---|---|
| What does it maintain? | Total number of Pods. | Pod presence on each eligible node. |
| What happens when a node is added? | No extra Pod unless desired count is not met. | A Pod is created on the new node automatically. |
| Typical use case | Frontend app, API, stateless worker replicas. | Log collection, metrics, networking, security agents. |
| Scheduling concern | Any suitable node is fine. | Every matching node should run one. |
| Common production pattern | Usually behind a Deployment. | Directly used for node-level cluster services. |
These cards connect the abstract controller behavior to practical workloads.
- **ReplicaSet:** You want 3 or 4 frontend Pods running behind a Service. Any node can host them, and if one crashes, a replacement is created.
- **DaemonSet:** Every node should run a log shipping Pod locally so logs are collected from everywhere, including newly added nodes.
- **ReplicaSet:** If traffic increases, you scale the number of Pods. The main concern is capacity and availability, not one-per-node presence.
- **DaemonSet:** Each node needs the same background helper because the data is local to the node itself.
A short sequence that helps explain the difference clearly and concisely.
1. Both are controllers that keep actual state aligned with desired state.
2. ReplicaSet wants a number. DaemonSet wants node-by-node presence.
3. Three web Pods for a frontend is a ReplicaSet-style need.
4. One logging Pod on every worker node is a DaemonSet-style need.
5. When a node joins, DaemonSet expands to it automatically; ReplicaSet does not unless the total count is below the desired number.
6. ReplicaSet equals replicas of an app. DaemonSet equals daemon on each node.
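The node-join step above can be simulated with a toy model. This is plain Python with invented names, not real controller code:

```python
# Toy simulation of a node joining the cluster (not real controller code).

def daemonset_pods(nodes):
    # DaemonSet: one Pod per node, keyed by node name.
    return {n for n in nodes}

def replicaset_pods(pods, desired):
    # ReplicaSet: top up to the desired count; placement is not per-node.
    missing = desired - len(pods)
    return pods + [f"pod-{len(pods) + i}" for i in range(max(0, missing))]

nodes = ["node-a", "node-b"]
rs = replicaset_pods([], desired=3)   # 3 Pods created somewhere
ds = daemonset_pods(nodes)            # 2 Pods, one per node

nodes.append("node-c")                # a new node joins
ds = daemonset_pods(nodes)            # DaemonSet grows to 3 Pods
rs = replicaset_pods(rs, 3)           # ReplicaSet stays at 3: desired count already met

print(len(ds), len(rs))  # 3 3
```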
These examples make the concept concrete and are simple enough for live explanation.
This is the direct controller form for keeping a fixed app replica count.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: shop-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shop-frontend
  template:
    metadata:
      labels:
        app: shop-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25
```
This mirrors the repo's DaemonSet lab idea for running a Pod on every node.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-nginx-simple
spec:
  selector:
    matchLabels:
      app: node-nginx-simple
  template:
    metadata:
      labels:
        app: node-nginx-simple
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```
In real production app delivery, teams usually create a Deployment rather than hand-managing a ReplicaSet. The ReplicaSet concept still matters because Deployment uses it internally.
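As a sketch of that production form, here is the same frontend written as a Deployment. The manifest shape is identical apart from `kind`; the name and image reuse the ReplicaSet example above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shop-frontend
  template:
    metadata:
      labels:
        app: shop-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

The Deployment then creates and manages ReplicaSets for you, which is what enables rolling updates and rollbacks.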
Use these answers for common follow-up questions.

**Why not use a ReplicaSet for a node agent?** Because ReplicaSet may place all requested Pods on only a few nodes, leaving some nodes without the agent.

**Why not use a DaemonSet for a normal app?** Because your app usually needs a chosen number of replicas, not one copy tied to every node in the cluster.

**Can a DaemonSet target only some nodes?** Yes. You can narrow it with node selectors, affinity, and tolerations.
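As a hypothetical illustration of that narrowing, the snippet below adds a `nodeSelector` and a toleration to the DaemonSet Pod template. The label key `disktype: ssd`, the name, and the toleration are example values, not part of the lab above:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent-selective
spec:
  selector:
    matchLabels:
      app: node-agent-selective
  template:
    metadata:
      labels:
        app: node-agent-selective
    spec:
      # Only nodes carrying this label get the Pod.
      nodeSelector:
        disktype: ssd
      # Allow scheduling onto tainted control-plane nodes as well.
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: nginx:1.25
```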
ReplicaSet equals replicas of an app. DaemonSet equals daemon on each node.