Kubernetes Pod Communication

Understanding Network Architecture and Traffic Flow

Pod-to-Pod Communication (Same Node)

Pods on the same node communicate directly through the network bridge (cbr0/cni0). Each Pod has a unique IP address from the Pod CIDR range. No NAT is required - Pods communicate using their actual IP addresses.

[Diagram] Worker-Node-1 (192.168.1.10): Pod A (app: frontend, IP 10.244.0.5) and Pod B (app: backend, IP 10.244.0.6) are attached to the network bridge (cbr0/cni0). Pod A sends to 10.244.0.6, the bridge forwards the frame, and return traffic follows the same path.

Key Networking Facts

  • Flat Network Model: Pods communicate directly using their IP addresses without NAT
  • Network Bridge: Linux bridge (cbr0 or cni0) connects all Pods on the same node
  • Pod CIDR: Pods receive IPs from the cluster Pod CIDR (e.g., 10.244.0.0/16)
  • Virtual Ethernet Pairs (veth): Each Pod connects to the bridge via a veth pair
  • No Service Discovery Needed: Direct Pod-to-Pod communication using IP addresses

Technical Implementation

The CNI (Container Network Interface) plugin creates a network namespace for each Pod and connects it to the node's bridge network using veth pairs. Traffic flows through the Linux kernel's network stack at Layer 2 (Ethernet).
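The bridge-and-veth layout above can be sketched as a small model. This is a conceptual illustration only, not a real CNI: the Bridge class stands in for cbr0/cni0, and the Pod names and IPs are the illustrative ones from this section. The key point it demonstrates is that source and destination Pod IPs are preserved end to end, with no NAT on the path.

```python
# Conceptual sketch (not a real CNI): same-node Pod traffic crossing a
# Linux bridge. Pod IPs are preserved end to end; nothing rewrites them.

class Bridge:
    """Stands in for cbr0/cni0: forwards frames between attached veth ends."""
    def __init__(self):
        self.ports = {}  # Pod IP -> Pod (analogous to the bridge's FDB)

    def attach(self, pod):
        self.ports[pod.ip] = pod

    def forward(self, packet):
        # Deliver the frame to the attached Pod owning the destination IP.
        return self.ports[packet["dst"]].receive(packet)

class Pod:
    def __init__(self, name, ip, bridge):
        self.name, self.ip = name, ip
        self.bridge = bridge
        bridge.attach(self)  # one end of the veth pair plugs into the bridge

    def send(self, dst_ip, payload):
        # The source IP is the Pod's own IP; it is never rewritten (no NAT).
        return self.bridge.forward({"src": self.ip, "dst": dst_ip, "data": payload})

    def receive(self, packet):
        return f"{self.name} got {packet['data']!r} from {packet['src']}"

cbr0 = Bridge()
frontend = Pod("frontend", "10.244.0.5", cbr0)
backend = Pod("backend", "10.244.0.6", cbr0)
print(frontend.send("10.244.0.6", "GET /api"))
# → backend got 'GET /api' from 10.244.0.5
```

The backend sees the frontend's real Pod IP as the source, which is exactly the "flat network model" guarantee described above.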

# View bridge on the node
ip link show cbr0

# Check Pod network namespaces
ip netns list

# View routing table inside a Pod
kubectl exec -it pod-a -- ip route

Pod-to-Service Communication (ClusterIP)

Services provide a stable virtual IP (ClusterIP) that load balances across backend Pods. Kube-proxy runs on each node and manages iptables/IPVS rules to redirect traffic from the Service IP to actual Pod IPs. The default mode is iptables.

[Diagram] A client Pod (app: frontend, IP 10.244.0.10) sends a request to Service backend-svc (ClusterIP 10.96.0.20, port 80). Kube-proxy's iptables rules (the default mode) perform DNAT, translating the Service IP to one of the backends: Pod 1 (10.244.0.15:8080), Pod 2 (10.244.0.16:8080), or Pod 3 (10.244.0.17:8080). Load balancing is handled by the iptables rules.

Key Networking Facts

  • Service CIDR: Services receive virtual IPs from a separate CIDR range (e.g., 10.96.0.0/12)
  • ClusterIP: Virtual IP that doesn't exist on any interface - managed by kube-proxy
  • Kube-proxy Modes: iptables (default) or IPVS; eBPF-based dataplanes (e.g., Cilium) can replace kube-proxy entirely
  • DNAT (Destination NAT): Service IP is translated to a backend Pod IP
  • Load Balancing: Kube-proxy distributes traffic across healthy backend Pods
  • Endpoints: Kubernetes automatically maintains the list of backend Pod IPs

Technical Implementation

When a Pod sends traffic to a Service IP, kube-proxy's iptables rules intercept the packet and perform DNAT to rewrite the destination to one of the backend Pod IPs. The default iptables mode uses random load balancing. For IPVS mode, more sophisticated load balancing algorithms are available.
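The DNAT step can be sketched as follows. This is a conceptual model, not real iptables: the Service IP, port, and endpoint addresses are the illustrative values from this section, and `random.choice` stands in for the random backend selection that the iptables rules perform.

```python
import random

# Conceptual sketch of kube-proxy's iptables DNAT: rewrite the Service
# ClusterIP:port destination to one backend Pod IP:port, chosen at random.
# Addresses are illustrative, mirroring the diagram in this section.

SERVICES = {
    ("10.96.0.20", 80): [          # backend-svc ClusterIP -> endpoint list
        ("10.244.0.15", 8080),
        ("10.244.0.16", 8080),
        ("10.244.0.17", 8080),
    ],
}

def dnat(packet):
    """Return the packet with its destination rewritten to a backend Pod."""
    endpoints = SERVICES.get((packet["dst_ip"], packet["dst_port"]))
    if endpoints is None:
        return packet  # not a Service IP: leave the packet untouched
    pod_ip, pod_port = random.choice(endpoints)  # random backend selection
    return {**packet, "dst_ip": pod_ip, "dst_port": pod_port}

pkt = {"src_ip": "10.244.0.10", "dst_ip": "10.96.0.20", "dst_port": 80}
out = dnat(pkt)
print(out["dst_ip"], out["dst_port"])  # one of the three backends, port 8080
```

Note that the source IP is untouched: only the destination is rewritten, which is why the backend Pod sees the client Pod's real IP.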

# View Service details
kubectl get svc backend-svc -o wide

# Check endpoints (backend Pods)
kubectl get endpoints backend-svc

# View iptables rules for a Service (on the node)
iptables-save | grep backend-svc

# Check kube-proxy mode
kubectl logs -n kube-system kube-proxy-xxxxx | grep "Using"

Cross-Namespace Communication

Kubernetes DNS enables service discovery across namespaces. Each Service gets a DNS name following the pattern: service-name.namespace.svc.cluster.local. CoreDNS resolves these names to ClusterIP addresses, allowing Pods in different namespaces to communicate.

[Diagram] A frontend Pod (app: web, IP 10.244.1.5, namespace frontend) queries CoreDNS (namespace kube-system, ClusterIP 10.96.0.10) for api-svc.backend.svc.cluster.local, receives the Service's ClusterIP 10.96.0.25, then sends an HTTP request to 10.96.0.25:80, reaching Service api-svc in namespace backend (backend Pod, app: api, IP 10.244.2.10).

DNS resolution pattern:
  • Same namespace: service-name
  • Cross namespace: service-name.namespace
  • Fully qualified: service-name.namespace.svc.cluster.local

Key Networking Facts

  • DNS Service Discovery: CoreDNS provides automatic DNS resolution for all Services
  • Namespace Isolation: Namespaces provide logical isolation but not network isolation by default
  • DNS Naming Convention: service-name.namespace.svc.cluster.local
  • Short Names: Within the same namespace, use just "service-name"
  • Cross-Namespace Access: Use "service-name.namespace" for Services in other namespaces
  • DNS Server: CoreDNS runs as a Service with ClusterIP (typically 10.96.0.10)
  • Pod DNS Config: Each Pod is configured with /etc/resolv.conf pointing to CoreDNS

Technical Implementation

Every Pod's /etc/resolv.conf is automatically configured with the CoreDNS Service IP and search domains. When a Pod looks up "api-svc.backend", the search domain expands it to "api-svc.backend.svc.cluster.local". CoreDNS watches the Kubernetes API for Service changes and updates DNS records automatically.
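The search-domain expansion can be sketched as a small resolver. This is a conceptual model, not CoreDNS: the search list mirrors what a Pod in the frontend namespace would have in /etc/resolv.conf, and the DNS records and ClusterIPs are the illustrative values from this section.

```python
# Conceptual sketch of Pod-side DNS resolution: short Service names are
# expanded with the search domains from /etc/resolv.conf, then looked up
# against records like those CoreDNS serves. Addresses are illustrative.

SEARCH_DOMAINS = [                  # as written into a frontend-namespace Pod
    "frontend.svc.cluster.local",   # the Pod's own namespace is tried first
    "svc.cluster.local",
    "cluster.local",
]

DNS_RECORDS = {                     # Service FQDN -> ClusterIP
    "web-svc.frontend.svc.cluster.local": "10.96.0.30",
    "api-svc.backend.svc.cluster.local": "10.96.0.25",
}

def resolve(name):
    """Try the name under each search domain, then as an absolute name."""
    for domain in SEARCH_DOMAINS:
        fqdn = f"{name}.{domain}"
        if fqdn in DNS_RECORDS:
            return fqdn, DNS_RECORDS[fqdn]
    return name, DNS_RECORDS.get(name)

print(resolve("web-svc"))          # same namespace: short name is enough
print(resolve("api-svc.backend"))  # cross-namespace: name.namespace
```

"web-svc" matches under the Pod's own namespace domain, while "api-svc.backend" falls through to the svc.cluster.local search domain, which is exactly how the short and cross-namespace naming conventions listed above work.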

# Check DNS configuration inside a Pod
kubectl exec -it frontend-pod -- cat /etc/resolv.conf

# Test DNS resolution
kubectl exec -it frontend-pod -- nslookup api-svc.backend

# Check CoreDNS Service
kubectl get svc -n kube-system kube-dns

# View CoreDNS ConfigMap
kubectl get configmap -n kube-system coredns -o yaml

Cross-Node Pod Communication

When Pods on different nodes communicate, the CNI plugin handles routing. Most CNI plugins use overlay networks (VXLAN, IP-in-IP) to encapsulate Pod traffic and route it across nodes. Each node has a unique Pod CIDR subnet, and the CNI manages routing tables across the cluster.

[Diagram] Worker-Node-1 (IP 192.168.1.10, Pod CIDR 10.244.0.0/24) hosts Pod A (app: frontend, IP 10.244.0.5); Worker-Node-2 (IP 192.168.1.11, Pod CIDR 10.244.1.0/24) hosts Pod B (app: backend, IP 10.244.1.10). The CNI plugin (Flannel/Calico) on each node tunnels Pod traffic over the physical underlay network (192.168.1.0/24) via a VXLAN/IP-in-IP overlay: send, encapsulate, route through the tunnel (src 192.168.1.10 to dst 192.168.1.11), decapsulate, deliver.

CNI plugin responsibilities:
  1. Assign a unique Pod CIDR to each node
  2. Create the overlay network for cross-node communication (VXLAN, IP-in-IP, etc.)
  3. Program routing tables on each node for Pod traffic
  4. Handle encapsulation/decapsulation of packets at node boundaries

Key Networking Facts

  • CNI Plugin Role: Manages Pod networking, IP allocation, and cross-node routing
  • Overlay Networks: Most CNI plugins use VXLAN or IP-in-IP to tunnel Pod traffic between nodes
  • Pod CIDR per Node: Each node gets a unique subnet from the cluster Pod CIDR
  • Encapsulation: Pod packets are wrapped in node IP packets for transport across the physical network
  • Flat Network Model Maintained: Pods see direct communication even though traffic is tunneled
  • No NAT Required: Pods communicate using their actual IP addresses across nodes
  • Routing Tables: CNI maintains routing tables on each node to direct Pod traffic

Technical Implementation

When Pod A (10.244.0.5) on Worker-Node-1 sends traffic to Pod B (10.244.1.10) on Worker-Node-2, the CNI plugin on Worker-Node-1 encapsulates the packet in a VXLAN/IP-in-IP tunnel with Worker-Node-2 as the destination. Worker-Node-2's CNI plugin decapsulates the packet and delivers it to Pod B. This maintains the flat network model while working across different nodes.
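The encapsulate/route/decapsulate cycle can be sketched as follows. This is a conceptual model, not a real VXLAN implementation: the per-node Pod CIDR prefixes and node IPs are the illustrative values from this section, and the "outer packet" is just a dictionary wrapping the inner one.

```python
# Conceptual sketch of overlay encapsulation: the inner Pod-to-Pod packet
# is wrapped in an outer node-to-node packet, routed over the underlay,
# then unwrapped. Addresses mirror the two-node example in this section.

NODE_FOR_POD_PREFIX = {      # per-node Pod CIDR prefix -> node IP,
    "10.244.0.": "192.168.1.10",  # Worker-Node-1
    "10.244.1.": "192.168.1.11",  # Worker-Node-2
}

def node_for(pod_ip):
    """Find which node owns a Pod IP, as the CNI's routing tables would."""
    for prefix, node_ip in NODE_FOR_POD_PREFIX.items():
        if pod_ip.startswith(prefix):
            return node_ip
    raise ValueError(f"no route for {pod_ip}")

def encapsulate(inner):
    """Wrap the Pod packet in an outer packet addressed node-to-node."""
    return {
        "src": node_for(inner["src"]),
        "dst": node_for(inner["dst"]),
        "payload": inner,   # inner Pod IPs travel unchanged (no NAT)
    }

def decapsulate(outer):
    return outer["payload"]

inner = {"src": "10.244.0.5", "dst": "10.244.1.10", "data": "hello"}
outer = encapsulate(inner)
print(outer["src"], "->", outer["dst"])   # 192.168.1.10 -> 192.168.1.11
print(decapsulate(outer) == inner)        # True: flat network preserved
```

The underlay only ever sees node IPs in the outer header, while the inner packet survives the trip byte for byte, which is how the flat network model is maintained across nodes.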

# View Pod CIDR allocation per node
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

# Check routing table on a node
ip route show

# View overlay interface (e.g., for Flannel)
ip -d link show flannel.1

# Check CNI plugin configuration
cat /etc/cni/net.d/*.conf

# View CNI plugin logs
kubectl logs -n kube-system -l app=flannel