Kubernetes Course
Cluster Networking Basics
You know what Services do. Now let's understand what's underneath — how Pods actually talk to each other across different nodes, what a CNI plugin is, why every Pod gets its own IP, and how traffic flows from outside the cluster all the way to a container running on a specific machine.
Use Play with Kubernetes at labs.play-with-k8s.com or start Minikube with minikube start. This lesson has some exploration commands — having a running cluster makes them feel real.
The Question This Lesson Answers
Here's something that should feel a bit mysterious at this point. You have a Pod running on Node 1 and another Pod running on Node 2. These are completely separate machines. How does Pod A on Node 1 send a request to Pod B on Node 2, using Pod B's IP address directly?
In a normal network, two machines on different servers communicate through routers, gateways, NAT translations. But in Kubernetes, every Pod can reach every other Pod directly using its Pod IP — no matter which node it's on, no NAT, no port mapping. This is called the flat network model.
Understanding how this is achieved is what this lesson is about.
Kubernetes' Four Networking Rules
Kubernetes doesn't build networking itself. Instead it defines a set of rules that any networking solution must follow. These four rules are the foundation of everything:
- Every Pod gets its own unique IP address.
- Every Pod can communicate with every other Pod, on any node, without NAT.
- Agents on a node (such as the kubelet) can communicate with all Pods on that node.
- A Pod sees itself at the same IP everyone else uses to reach it — a Pod at 10.244.0.5 knows it's at 10.244.0.5, and that's exactly the IP everything else uses to reach it.

These rules sound simple, but they're actually quite hard to implement across real physical machines. That's the job of the CNI plugin.
What Is a CNI Plugin?
CNI stands for Container Network Interface. It's a standard specification that defines how networking software should plug into Kubernetes. Kubernetes says "I need networking that follows my four rules" — and any CNI plugin that implements those rules can be used.
When a new Pod starts, the kubelet calls the CNI plugin and says "set up networking for this container." The CNI plugin creates a virtual network interface inside the container, assigns an IP from the cluster's address range, and sets up the routing rules so that IP is reachable from anywhere in the cluster.
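To make that concrete, here's the shape of a CNI configuration file you might find in /etc/cni/net.d/ on a node. This is an illustrative sketch following the CNI spec's conf format — the plugin `type` and exact fields vary depending on which CNI plugin your cluster runs:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

The `ipam` (IP Address Management) section is where the "assign an IP from the cluster's address range" step is configured — here, a simple host-local allocator handing out IPs from this node's /24.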
There are many CNI plugins to choose from. They all do the same job but in different ways — some are simpler, some are faster, some have extra features like network security policies.
| CNI Plugin | Used by default in | What makes it special |
|---|---|---|
| Calico | Many self-managed clusters, EKS option | Very popular. Uses BGP routing. Supports NetworkPolicies — firewall rules between Pods. Great performance at scale. |
| Flannel | Minikube (default), many dev clusters | Simple and reliable. Uses overlay networking (VXLAN). Doesn't support NetworkPolicies. Good for learning and small clusters. |
| Cilium | GKE (Dataplane V2), EKS option, many modern clusters | Uses eBPF — Linux kernel technology that's extremely fast. Excellent observability, L7 policies. Increasingly the modern choice. |
| Weave | Older clusters | Simple to set up. Creates a virtual network mesh between nodes. Less common in newer deployments. |
| AWS VPC CNI | EKS (default) | Each Pod gets a real VPC IP address. Very high performance — no overlay network needed. Tight AWS integration. |
As a developer or DevOps engineer you probably won't choose the CNI plugin — your platform team or cloud provider picks it. But knowing what it is explains a lot. When you see calico-node Pods running in kube-system, now you know what they're doing.
How Pod-to-Pod Communication Actually Works
Let's trace what actually happens when Pod A on Node 1 sends a request to Pod B on Node 2. This is what's happening underneath every time your frontend calls your backend.
Pod A (say, 10.244.0.5 on Node 1) sends a packet to Pod B's IP, 10.244.1.3. The packet leaves Pod A through its virtual network interface onto Node 1's bridge. Node 1's routing table knows "that IP range belongs to Node 2" and forwards the packet there. Node 2's CNI setup knows "10.244.1.3 is on the bridge on this node" and delivers it directly to Pod B. No NAT, no port translation — the packet arrives with Pod A's real IP as its source.
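The routing decision in the middle of that trace can be sketched as a toy shell function — purely illustrative, using the example CIDRs from this lesson, not real kernel routing:

```shell
# Toy model of a node's routing table: each node owns one /24 of Pod IPs,
# and a packet is forwarded to whichever node's range contains the
# destination Pod IP.
route_pod_ip() {
  case "$1" in
    10.244.0.*) echo "node-1" ;;  # Node 1 owns 10.244.0.0/24
    10.244.1.*) echo "node-2" ;;  # Node 2 owns 10.244.1.0/24
    *)          echo "unknown" ;;
  esac
}

route_pod_ip 10.244.1.3   # prints "node-2" — forwarded to Node 2
route_pod_ip 10.244.0.5   # prints "node-1" — delivered locally
```

The real version of this lives in each node's routing table (or the CNI plugin's data plane), but the shape of the decision is exactly this: match the destination IP against per-node Pod CIDRs.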
Overlay Networks vs Direct Routing — The Two Main Approaches
CNI plugins generally use one of two approaches to get Pod packets from one node to another. You'll hear these terms — here's what they mean in plain English.
Overlay networking (e.g., Flannel's VXLAN): wraps Pod packets inside regular node-to-node packets. Pod A's packet gets put inside an outer envelope addressed to Node 2's IP. Node 2 unwraps it and delivers it to Pod B.

Direct routing (e.g., Calico's BGP): puts Pod IP routes directly into the network's routing tables. The network itself knows how to deliver Pod packets — no wrapping needed. Each node tells the router "Pod IPs 10.244.1.x live here."
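Here's a toy sketch of the difference — plain strings standing in for packets, with made-up node IPs (192.168.49.x), nothing real on the wire:

```shell
# A Pod-to-Pod packet, as the Pods see it (toy string, not a real packet).
inner="src=10.244.0.5 dst=10.244.1.3"

# Overlay: the Pod packet rides inside an outer envelope addressed
# node-to-node; the destination node unwraps it and delivers the inner packet.
overlay="outer(src=192.168.49.2 dst=192.168.49.3) [ $inner ]"

# Direct routing: no envelope — the network's routing tables already know
# which node owns 10.244.1.0/24.
direct="$inner (delivered via route: 10.244.1.0/24 -> 192.168.49.3)"

echo "$overlay"
echo "$direct"
```

Overlays work anywhere (the underlying network only ever sees node IPs) at the cost of a small encapsulation overhead; direct routing avoids that overhead but requires the network to accept Pod routes.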
DNS — How Services Find Each Other by Name
In Lesson 11 you called your Service by name — http://payment-service — from inside a test Pod. That worked because Kubernetes runs its own internal DNS server called CoreDNS.
Every Pod in the cluster automatically has its DNS resolver pointed at CoreDNS. When you call payment-service from inside a Pod, CoreDNS resolves it to the Service's ClusterIP. Your Pod then sends traffic to that IP, and kube-proxy routes it to a healthy Pod.
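If you're curious what drives CoreDNS itself, its behavior is configured by a Corefile, stored in the coredns ConfigMap in kube-system. Here's an abridged sketch of a typical default — details vary by distribution:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

The kubernetes plugin line is what makes Service names resolvable inside the cluster; the forward line sends everything else (public internet names, for example) to the node's upstream resolver.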
Here's the full sequence when your code calls http://payment-service:

1. Your application makes a request to http://payment-service.
2. The Pod's DNS resolver asks CoreDNS: "what's the IP for payment-service.default.svc.cluster.local?"
3. CoreDNS answers with 10.96.142.87 (the Service's ClusterIP).
4. Your Pod sends the request to 10.96.142.87. kube-proxy intercepts it and routes it to one of the backend Pods.

A Service can be reached by three forms of its name, from shortest to fullest:

- payment-service — works from Pods in the same namespace
- payment-service.default — works from any namespace
- payment-service.default.svc.cluster.local — the fully qualified name, works from anywhere in the cluster

The general pattern is <service-name>.<namespace>.svc.<cluster-domain>.

How External Traffic Gets Into the Cluster
So far we've talked about traffic inside the cluster. But how does a request from a user's browser — sitting somewhere on the internet — actually reach a Pod running on a specific node? The path runs through the external-access layer: the request first hits a cloud load balancer (or a node's NodePort), which forwards it to a node in the cluster; kube-proxy on that node routes it to one of the Service's backend Pods; and the CNI network delivers it to that Pod — even if it lives on a different node. An Ingress controller adds HTTP-level routing (hostnames, paths) in front of your Services.
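As a concrete sketch of the HTTP layer of that path, here's a minimal Ingress that routes requests for an external hostname to the payment-service Service. The hostname is a hypothetical placeholder, and this only works if your cluster runs an Ingress controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payment-ingress
spec:
  rules:
    - host: pay.example.com            # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payment-service  # the Service from earlier examples
                port:
                  number: 80
```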
Exploring Your Cluster's Network
The scenario: A new engineer on your team asks "how does the networking actually work on our cluster?" Before you answer, you want to show them exactly what's running. Here's how to inspect the network setup on a real cluster.
```shell
# See the CNI and networking Pods running in kube-system
# You'll see CoreDNS, kube-proxy, and possibly flannel/calico/cilium Pods
kubectl get pods -n kube-system

# Check which CNI plugin is installed by looking at the network plugin directory
# (On Minikube, first run: minikube ssh, then run the command below)
ls /etc/cni/net.d/

# See your nodes, their internal IPs, and other details
kubectl get nodes -o wide

# Look at a node's Pod CIDR — the IP range assigned to Pods on that node
kubectl describe node <node-name> | grep PodCIDR

# Check which IP your Pod got — and see which node it landed on
kubectl get pods -o wide
```
```
NAMESPACE     NAME                        READY   STATUS    AGE
kube-system   coredns-5d78c9869d-4xvzk    1/1     Running   2d
kube-system   coredns-5d78c9869d-8n2qp    1/1     Running   2d
kube-system   etcd-minikube               1/1     Running   2d
kube-system   kube-apiserver-minikube     1/1     Running   2d
kube-system   kube-proxy-x9k2p            1/1     Running   2d
kube-system   kube-scheduler-minikube     1/1     Running   2d
kube-system   storage-provisioner         1/1     Running   2d

NAME       STATUS   ROLES           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE
minikube   Ready    control-plane   192.168.49.2   <none>        Ubuntu 22.04

PodCIDR: 10.244.0.0/24

NAME                          READY   STATUS    IP           NODE
payment-api-7d9f8c6b4-x2p9k   1/1     Running   10.244.0.5   minikube
payment-api-7d9f8c6b4-mn7ql   1/1     Running   10.244.0.6   minikube
payment-api-7d9f8c6b4-k8sj2   1/1     Running   10.244.0.7   minikube
```
coredns Pods — there are two of them, running in kube-system. CoreDNS is deployed as a Deployment with two replicas for reliability. Every DNS query from every Pod in your cluster goes through one of these two Pods.
kube-proxy Pod — one per node. It runs as a DaemonSet, which is a special object type that ensures exactly one Pod runs on every node (Lesson 7 gave you the map, you'll go deep in a later lesson). This kube-proxy Pod manages the iptables/ipvs rules on its node.
PodCIDR: 10.244.0.0/24 — this is the IP range allocated to Pods on this node. Every Pod on this node gets an IP from 10.244.0.1 to 10.244.0.254. On a multi-node cluster, each node gets a different /24 range — so Pod IPs on different nodes never conflict.
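A quick sanity check on that /24 math — plain shell arithmetic, no cluster needed:

```shell
# A /24 leaves 32 - 24 = 8 host bits, i.e. 2^8 = 256 addresses per node,
# of which roughly 254 are usable for Pods once the network and broadcast
# addresses are set aside.
prefix=24
echo $(( 1 << (32 - prefix) ))   # prints 256
```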
Testing DNS Resolution From Inside a Pod
The scenario: A developer reports that their service can't connect to the database. Before diving into the application code, you want to confirm DNS is working correctly from inside the Pod. Here's exactly how to do that diagnosis.
```shell
# Create a debug Pod with DNS tools available
kubectl run dns-test --image=busybox:1.35 -it --rm -- /bin/sh

# You're now inside the Pod. Run these:

# See which DNS server this Pod is using — should be CoreDNS's ClusterIP
cat /etc/resolv.conf

# Look up a Service by name — should return the ClusterIP
nslookup payment-service

# Look up using the full FQDN
nslookup payment-service.default.svc.cluster.local

# Look up a non-existent service — should fail with "can't resolve"
nslookup this-does-not-exist

# Exit the debug Pod
exit
```
```
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # nslookup payment-service
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      payment-service
Address 1: 10.96.142.87 payment-service.default.svc.cluster.local

/ # nslookup this-does-not-exist
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'this-does-not-exist'
```
nameserver 10.96.0.10 — every Pod's /etc/resolv.conf is automatically set to point at CoreDNS's ClusterIP. Kubernetes injects this when the Pod starts — you never have to configure it manually.
search default.svc.cluster.local — this is why short names work. When you call payment-service, the resolver automatically appends .default.svc.cluster.local and tries that. If your Pod is in a different namespace, it appends that namespace instead.
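That search-list behavior can be sketched as a toy function. It's a simplification — real resolvers apply the ndots rule and try each search domain in order — and it hard-codes the default namespace, matching the resolv.conf shown above:

```shell
# Toy expansion of a Service name using the Pod's DNS search list.
# Real resolvers try each search domain in turn; this sketch only shows
# the usual first hit for each form of the name.
expand_name() {
  case "$1" in
    *.svc.cluster.local) echo "$1" ;;                            # already fully qualified
    *.*)                 echo "$1.svc.cluster.local" ;;          # name.namespace form
    *)                   echo "$1.default.svc.cluster.local" ;;  # short name, same namespace
  esac
}

expand_name payment-service          # prints "payment-service.default.svc.cluster.local"
expand_name payment-service.staging  # prints "payment-service.staging.svc.cluster.local"
```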
If a developer says "I can't connect to the database", the first debug step is always: can you resolve the DNS name? If nslookup returns an IP, DNS is working and the problem is elsewhere (wrong port, missing Service, wrong label selector). If it returns can't resolve, the Service doesn't exist or is in the wrong namespace.
The Full Network Stack — Everything in One View
Let's put everything together. Here's every layer of Kubernetes networking and what it does:
| Layer | Who runs it | What it does |
|---|---|---|
| Pod Networking | CNI Plugin | Assigns IPs to Pods, sets up virtual interfaces, creates routes between nodes |
| Service Routing | kube-proxy | Intercepts traffic to Service ClusterIPs and routes it to healthy Pod IPs via iptables/ipvs rules |
| DNS Resolution | CoreDNS | Resolves Service names to ClusterIPs. Runs as a Deployment in kube-system. |
| External Access | NodePort / LoadBalancer / Ingress | Exposes Services to traffic from outside the cluster |
| Network Policy | CNI Plugin (if supported) | Firewall rules between Pods — controls which Pods can talk to which |
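Since the table mentions NetworkPolicies, here's what one looks like — a hedged sketch using hypothetical labels (app: backend, app: frontend), and only enforced if your CNI plugin supports policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend           # hypothetical label on the protected Pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only Pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080
```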
This lesson is dense — but you really only need to take away a few things. Every Pod gets a unique IP. Services give stable DNS names on top of those changing IPs. CoreDNS resolves names, kube-proxy routes traffic, and the CNI plugin handles the physical delivery between nodes.
The most practical skill from this lesson is the DNS debugging technique — spinning up a busybox Pod and running nslookup. That single technique will help you diagnose the majority of networking issues you'll ever hit in a real cluster.
Try it yourself:

1. Run kubectl get pods -n kube-system and identify the CoreDNS and kube-proxy Pods. Then run kubectl describe pod <coredns-pod-name> -n kube-system and read the configuration — see what arguments CoreDNS is started with.
2. Create two Deployments, frontend and backend, each with a Service. Then exec into a frontend Pod and run curl http://backend. This is real service-to-service communication — the foundation of every microservice architecture.
3. Run kubectl get pods -o wide and note the Pod IPs. Then exec into any Pod and try pinging another Pod directly by IP: ping 10.244.0.6. This proves the flat network model — Pod-to-Pod, no NAT.

Practice Questions
Type from memory.
1. What is the name of the internal DNS server that runs inside every Kubernetes cluster and resolves Service names to their ClusterIPs?
2. Kubernetes doesn't build networking itself — it defines rules and expects a plugin to implement them. What does CNI stand for?
3. You have a Service called payment-service in the production namespace. A Pod in the staging namespace needs to call it. What DNS name should it use?
Knowledge Check
Pick the best answer.
1. What does Kubernetes' flat network model guarantee?
2. A developer says their service can't connect to the database. What is the first networking check you should do?
3. A request arrives at a Service's ClusterIP. What component actually routes that request to a real Pod?
Up Next · Lesson 13
kubectl Introduction
You've been using kubectl throughout this section. Now we go deep — the full command structure, every essential flag, output formats, how to use context to switch between clusters, and the shortcuts that save experienced engineers hours every week.