Control pod-to-pod, namespace, and external traffic with ingress and egress rules
k8s/labs/security/:
- deny-all-ingress.yaml — Deny all ingress to pods labeled app: k8slearning
- allow-ingress.yaml — Allow ingress only from pods with a matching app: k8slearning label
- deny-from-other-namespaces.yaml — Deny cross-namespace ingress in the prod namespace

NetworkPolicies are Kubernetes resources that express Layer 3 / Layer 4 rules (IPs, ports, protocols) for which traffic may reach your Pods. Think of them as a distributed firewall: the CNI plugin enforces the rules on the data plane.
If no NetworkPolicy selects a Pod, all traffic is allowed in both directions. The cluster is “open” until you opt in to restrictions.
As soon as any NetworkPolicy’s podSelector matches a Pod, that Pod is now governed by NetworkPolicy semantics for the directions listed in policyTypes. Traffic that does not match an explicit allow rule is dropped for those directions.
- podSelector — Pods in the same namespace as the policy, matched by labels.
- namespaceSelector — Namespaces matched by labels; often combined with podSelector to select specific Pods in those namespaces.
- ipBlock — CIDR ranges (cluster or external IPs), with optional except subnets.

NetworkPolicy requires a CNI that implements it. Examples: Calico, Cilium, Weave Net (and many managed CNIs). Flannel alone does not enforce NetworkPolicy unless paired with a policy controller (for example Calico). Always verify your platform's docs.
Below is an annotated skeleton. Your cluster must serve the networking.k8s.io/v1 NetworkPolicy API.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example
  namespace: default
spec:
  podSelector:          # Which Pods this policy applies to
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:              # Who can talk TO these Pods
    - from:
        - podSelector: {}
        - namespaceSelector: {}
        - ipBlock: {}
      ports:
        - protocol: TCP
          port: 80
  egress:               # Where these Pods can send traffic
    - to:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 443
```
Peer semantics: separate items in a from (or to) list are ORed — traffic may match any peer. When podSelector and namespaceSelector appear in the same peer, they are ANDed — the source must satisfy both.
Example A — OR: two separate from entries allow either Pods with role=monitor or any Pod in namespaces labeled env=staging.
```yaml
ingress:
  - from:
      - podSelector:
          matchLabels:
            role: monitor
      - namespaceSelector:
          matchLabels:
            env: staging
```
Example B — AND: one from entry with both selectors allows only Pods labeled app=worker that live in namespaces labeled team=data.
```yaml
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: worker
        namespaceSelector:
          matchLabels:
            team: data
```
Note that ipBlock cannot be combined with podSelector or namespaceSelector in the same peer: the API requires ipBlock to stand alone as its own list item, so it always ORs with selector-based peers.
Each scenario below includes YAML and an explanation. Adapt namespaces and labels to your cluster.
Applies to all Pods in the namespace (empty podSelector). Declaring Ingress with no rules denies all inbound to those Pods.
📂 See also: k8s/labs/security/deny-all-ingress.yaml (targets pods labeled app: k8slearning specifically)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: apps
spec:
  podSelector: {}
  policyTypes: [Ingress]
```
Selected Pods may not initiate connections anywhere until you add egress rules (remember DNS — see Pod-level tab).
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: apps
spec:
  podSelector: {}
  policyTypes: [Egress]
```
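The two default-deny policies are often collapsed into one manifest that locks down both directions at once. A sketch, reusing the apps namespace from above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: apps
spec:
  podSelector: {}                  # every Pod in the namespace
  policyTypes: [Ingress, Egress]   # no rules listed: both directions denied
```

With this in place, every subsequent policy in the namespace is a targeted allow stacked on top of the deny baseline.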
Backend Pods only accept ingress from Pods in namespaces labeled env=frontend. Label the frontend namespace accordingly.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-from-frontend-ns
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              env: frontend
      ports:
        - protocol: TCP
          port: 8080
```
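The env=frontend label can be declared on the Namespace manifest itself instead of applied imperatively; a sketch, assuming the namespace is simply named frontend:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    env: frontend   # matched by the namespaceSelector in the policy above
```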
Only Pods labeled role=api-gateway in the same namespace may reach role=backend on TCP 8080.
📂 See also: k8s/labs/security/allow-ingress.yaml — a simpler variant allowing ingress from pods sharing the same app label
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-from-gateway
  namespace: apps
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api-gateway
      ports:
        - protocol: TCP
          port: 8080
```
Ingress allows TCP 8080 only; other ports on the same Pod are not matched by this rule (implicit deny for ingress to that Pod if this is the only policy).
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-port-8080
  namespace: apps
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: [Ingress]
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
      from:
        - podSelector: {}
```
Use ipBlock for non-Pod sources. Tighten with except if you need holes inside a CIDR.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-from-corporate
  namespace: apps
spec:
  podSelector:
    matchLabels:
      app: public-api
  policyTypes: [Ingress]
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443
```
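To punch a hole inside the allowed CIDR, except subtracts subnets from the range. A sketch; the excluded subnet is an assumption for illustration:

```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/8
          except:
            - 10.99.0.0/16   # hypothetical quarantined subnet
    ports:
      - protocol: TCP
        port: 443
```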
Backend accepts traffic from the frontend namespace and may egress only to Pods labeled app=database on TCP 5432. Add a separate DNS egress policy in real clusters.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-combined
  namespace: apps
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```
Teams often mirror environments as namespaces (dev, staging, prod). The usual pattern is default deny plus targeted allows.
```shell
kubectl label namespace prod env=production
kubectl label namespace staging env=staging
kubectl label namespace dev env=development
```
After default-deny ingress, an empty podSelector under from means “Pods in the policy’s namespace” — i.e. same-namespace only.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: prod
spec:
  podSelector: {}
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}
```
📂 This is exactly what k8s/labs/security/deny-from-other-namespaces.yaml does for the prod namespace. Apply it directly: kubectl apply -f k8s/labs/security/deny-from-other-namespaces.yaml
To allow any namespace labeled env=production (not only the local namespace), use namespaceSelector:
```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            env: production
```
Allow monitoring agents in observability to scrape Pods in prod:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: prod
spec:
  podSelector: {}
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: observability
      ports:
        - protocol: TCP
          port: 9090
```
For logging/egress from app namespaces to a central collector, mirror the idea with egress and namespaceSelector or ipBlock for the collector VIP.
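A sketch of that logging egress, assuming the collector lives in a namespace named logging and listens on TCP 9880 (both assumptions; substitute your collector's namespace and port):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-log-shipping
  namespace: apps
spec:
  podSelector: {}            # every Pod in the app namespace may ship logs
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: logging
      ports:
        - protocol: TCP
          port: 9880         # assumed collector port
```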
Inside one namespace you can enforce tiers: UI → API → database, or restrict sidecars to only talk to their primary container’s ports (combine with Service meshes for L7).
- tier=frontend may egress only to tier=backend on the app port.
- tier=backend accepts ingress from frontend; egress only to tier=database on 5432.
- tier=database accepts ingress from backend only.

Deny-all egress breaks name resolution unless you allow traffic to kube-dns/CoreDNS (typically UDP and TCP 53 to the cluster DNS Service cluster IP or to labeled DNS Pods — implementation varies by CNI; verify endpoints).
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: apps
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        # Combined in one peer (ANDed): kube-dns Pods in kube-system only
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
Adjust labels/namespace to match your cluster’s DNS deployment.
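If your CNI evaluates egress against the DNS Service cluster IP rather than Pod IPs (as noted above, this varies by implementation), an ipBlock variant may be needed; 10.96.0.10 is a common default but is an assumption — check your cluster:

```yaml
egress:
  - to:
      - ipBlock:
          cidr: 10.96.0.10/32   # assumed cluster DNS Service IP
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```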
Apply default deny, then stack policies: one for DNS, one per tier. Example backend ingress/egress fragment:
```yaml
# Backend: ingress from frontend tier only
ingress:
  - from:
      - podSelector:
          matchLabels:
            tier: frontend
    ports:
      - protocol: TCP
        port: 8080
# Backend: egress to database tier only
egress:
  - to:
      - podSelector:
          matchLabels:
            tier: database
    ports:
      - protocol: TCP
        port: 5432
```
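Wrapped into a complete manifest, the backend-tier policy might read as follows (the apps namespace is assumed, matching the earlier examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-tier
  namespace: apps
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
```

Frontend and database would each get an analogous policy, plus the shared allow-dns policy from the previous tab.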
Ingress:

- from.podSelector — Pods in the policy namespace.
- from.namespaceSelector — Any Pod in matching namespaces (use with podSelector to narrow).
- from.ipBlock — Source CIDR outside or inside the cluster.
- ports — Optional per rule; restricts protocols/ports for that ingress stanza.

Egress:

- to.podSelector, to.namespaceSelector, to.ipBlock — Same semantics as ingress, but for destinations.
- ports — Allowed destination ports for that egress stanza.

Port ranges (endPort, Kubernetes 1.25+):

```yaml
ports:
  - protocol: TCP
    port: 8000
    endPort: 8080
```
TCP, UDP, and SCTP are supported where the CNI and kernel allow. If unsure, test with your platform.
You may use the port name from a container’s ports list instead of a numeric port; the API resolves it to the numeric port on each Pod.
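A sketch of a named port: the container declares name: http, and the policy references that name instead of a number (labels assumed):

```yaml
# In the target Pod spec:
#   ports:
#     - name: http
#       containerPort: 8080
ingress:
  - from:
      - podSelector: {}
    ports:
      - protocol: TCP
        port: http   # resolved per Pod to containerPort 8080
```

Named ports decouple the policy from the numeric port, so a container can move its listener without the policy changing.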
| Aspect | Ingress | Egress |
|---|---|---|
| Direction | Into selected Pods | Out of selected Pods |
| Peer field | from | to |
| Selectors | podSelector, namespaceSelector, ipBlock | Same set under to |
| Ports | Destination ports on your Pods | Destination ports on remote peers |
| PolicyTypes | Include Ingress to evaluate | Include Egress to evaluate |
```shell
# List policies in all namespaces
kubectl get networkpolicy -A

# Describe one policy (events + rules summary)
kubectl describe networkpolicy <name> -n <namespace>

# Full YAML
kubectl get networkpolicy <name> -n <namespace> -o yaml
```
```shell
kubectl exec -it <pod> -n <ns> -- wget -qO- --timeout=2 http://<target>:80
```
Replace target with a Service DNS name or Pod IP. Use nc or curl if your image includes them.
```shell
kubectl exec -it <pod> -n <ns> -- nslookup kubernetes.default
```
```shell
kubectl label namespace <ns> <key>=<value>
```
Quick recap: only the directions listed in policyTypes are evaluated; peers are ORed across from/to list items and ANDed inside a single peer.

| Mistake | Symptom | Fix |
|---|---|---|
| Deny egress without DNS allow | Intermittent lookup failed, CrashLoop | Add DNS NetworkPolicy peer to kube-dns |
| AND/OR confusion in from | Unexpected allow/deny | Split peers for OR; combine selectors in one peer for AND |
| Wrong namespace labels | Cross-namespace traffic blocked | kubectl get ns --show-labels; align selectors |
| CNI without policy support | Policies exist but nothing changes | Install supported CNI or enable policy feature |
| Forgetting policyTypes | Egress unrestricted while tuning ingress (or vice versa) | Set both when you want both evaluated |
| Mechanism | Layer | Notes |
|---|---|---|
| Kubernetes NetworkPolicy | L3/L4 in Kubernetes API | Portable; depends on CNI enforcement |
| Istio AuthorizationPolicy | L7 (HTTP/gRPC), identities | Requires mesh; rich request-level rules |
| Cilium NetworkPolicy / CNP | L3/L4/L7 (with features) | CRDs + eBPF; extra policy types beyond baseline |
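As a taste of what the CRD-based option adds beyond the baseline, here is a sketch of a CiliumNetworkPolicy with an L7 HTTP rule; the labels, namespace, and path pattern are assumptions, and the CRD requires Cilium as the CNI:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-backend
  namespace: apps
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: api-gateway
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:                 # L7 filtering: only GETs under /api/
              - method: GET
                path: "/api/.*"
```

Baseline NetworkPolicy could express the L3/L4 part of this rule, but not the method/path restriction.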