Kubernetes Course
Worker Node Components
The control plane makes decisions. Worker nodes do the actual work. This is where your application containers live, run, and die. Let's walk through exactly what's inside a worker node — in plain English, no skipping anything.
What Is a Worker Node?
A worker node is just a computer — a physical server or a virtual machine — that Kubernetes uses to run your application containers. Nothing special about the hardware. It could be a beefy cloud machine with 64 cores, or a small VM with 2 cores. Kubernetes doesn't care, as long as three pieces of software are installed and running on it.
Those three pieces are: the kubelet, kube-proxy, and a container runtime. That's it. Every worker node in every Kubernetes cluster in the world runs these same three things.
The kubelet — The Node's Loyal Manager
Think of the kubelet as the manager of a single branch office. Head office (the control plane) sends it instructions. The kubelet reads those instructions, gets the work done, and reports back.
More precisely — the kubelet watches the Kubernetes API Server and asks: "Are there any Pods assigned to my node that I should be running?" When a new Pod arrives, the kubelet tells the container runtime to pull the image and start the container. Then it watches that container like a hawk.
The kubelet's day-to-day responsibilities:

- Starting containers. When a Pod is assigned to this node, the kubelet tells the container runtime to pull the image and start it, passing along all the config — environment variables, port numbers, resource limits.
- Health checks. The kubelet runs health checks on every container it manages. If a container fails its check, the kubelet restarts it. No human needed, no alert required — it just fixes it.
- Heartbeats. Every few seconds the kubelet reports to the API Server — "I'm alive, here's what's running on me, here's how much CPU and memory each Pod is using."
- Cleanup. When a Pod is deleted or moved to another node, the kubelet stops its containers and frees up the resources. It also cleans up unused images to save disk space.
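The health checks the kubelet runs are configured per container as probes in the Pod spec. A minimal sketch, with a hypothetical Pod name and health endpoint:

```yaml
# Hypothetical Pod demonstrating a liveness probe.
# If GET /healthz fails 3 times in a row, the kubelet restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: web-demo          # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25   # any image that serves HTTP
      livenessProbe:
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5   # wait before the first check
        periodSeconds: 10        # then check every 10 seconds
        failureThreshold: 3      # restart after 3 consecutive failures
```

Note that the restart happens entirely on the node: the kubelet detects the failure and tells the container runtime to restart the container, with no control plane involvement.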
The kubelet is the only Kubernetes component that is not run as a container. It runs directly on the operating system of the worker node as a regular system service. Why? Because it needs to exist before any containers can run — it's the thing that starts containers in the first place. You can't run the container-starter inside a container.
Watching the kubelet in action
The scenario: You're a junior DevOps engineer at a retail company. Black Friday is two weeks away. Your lead asks you to check that all nodes in the cluster are healthy and properly reporting in. Here's how you do it.
```shell
# List all nodes and see their current status
# This is the first command every engineer runs when checking a cluster
kubectl get nodes

# Get more detail on a specific node — perfect for investigating a problem
# Replace "worker-node-01" with your actual node name
kubectl describe node worker-node-01
```
```
NAME             STATUS     ROLES           AGE   VERSION
control-plane    Ready      control-plane   30d   v1.28.0
worker-node-01   Ready      <none>          30d   v1.28.0
worker-node-02   Ready      <none>          30d   v1.28.0
worker-node-03   NotReady   <none>          30d   v1.28.0
```
STATUS: Ready means the kubelet on that node is alive, healthy, and reporting in to the API Server regularly. The control plane trusts this node to run Pods.
STATUS: NotReady on worker-node-03 is a red flag. The kubelet on that node has stopped sending heartbeats. Maybe the node crashed. Maybe the network dropped. The Node Controller will wait 40 seconds before acting on this — and if it stays NotReady for 5 minutes, any Pods on that node will be rescheduled elsewhere automatically.
ROLES: <none> on worker nodes just means they're plain workers — no control plane responsibilities. The control-plane node shows its role explicitly.
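That 5-minute window is controlled by tolerations that Kubernetes adds to every Pod automatically. You can shorten it per Pod if you want faster failover; the values below are illustrative, not defaults:

```yaml
# Hypothetical Pod spec fragment: fail over after 30 seconds
# instead of the default 300 seconds.
spec:
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30   # evict from a NotReady node after 30s
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30
```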
kube-proxy — The Node's Postman
Here's a simple problem. Your frontend container needs to talk to your backend container. But containers get different IP addresses every time they start. You can't hardcode an IP — it'll break the moment the container restarts or moves to a different node.
Kubernetes solves this with Services — a stable name and IP address that always points to a group of healthy Pods, no matter which node they're on or how many times they've restarted. And kube-proxy is the component that makes Services actually work at the network level.
kube-proxy runs on every node and maintains a set of network rules in the Linux kernel. When a request comes in for a Service, those rules route it to the right Pod — automatically, instantly, across any node in the cluster.
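A Service that gives a stable name to a group of backend Pods might look like this (names and ports are hypothetical):

```yaml
# Hypothetical Service: anything in the cluster can now reach the
# backend Pods at the stable name "backend-service", no matter which
# node they run on or how often they restart.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend        # matches Pods labeled app=backend
  ports:
    - port: 80          # the Service's stable port
      targetPort: 8080  # the port the backend containers listen on
```

The frontend simply connects to `backend-service:80`; kube-proxy's rules on whatever node the frontend runs on translate that into the real Pod IP.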
kube-proxy can run in two main modes:

- iptables mode (the default): writes rules into the Linux kernel's iptables firewall. Very battle-tested, but there is slight overhead with large numbers of Services, because thousands of rules are evaluated one by one.
- IPVS mode: uses a different kernel module (IP Virtual Server), which looks up destinations in a hash table instead. Scales much better for clusters with thousands of Services, and is now the preferred choice for large production clusters.
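In clusters set up with kubeadm, the mode is typically set in the kube-proxy configuration stored in the kube-system namespace. A fragment might look like this (exact layout varies by how the cluster was installed):

```yaml
# Fragment of a kube-proxy configuration (kubeadm-style clusters).
# An empty mode string means "use the default", which is iptables on Linux.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # or "iptables"
```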
The Container Runtime — The Hands That Do the Work
The container runtime is the software that actually creates and runs containers. The kubelet tells it what to do. The runtime just does it — no opinions, no decisions, pure execution.
When the kubelet says "start this container," the runtime does all the heavy lifting underneath: it pulls the image from the registry, unpacks its filesystem layers, sets up the isolated namespaces and cgroups that make a container a container, and finally starts the container's main process — something like node server.js or python app.py. When this process exits, the container is done.

How All Three Work Together
Here's the thing most people don't fully appreciate: the kubelet and kube-proxy never communicate with each other. The kubelet does drive the container runtime directly, but all cluster-level coordination flows through the API Server. Let's trace a real scenario from start to finish:

1. The Scheduler (on the control plane) assigns a new Pod to this node and records that decision in the API Server.
2. The kubelet, watching the API Server, sees the assignment and tells the container runtime to pull the image and start the container.
3. The container runtime starts the container and reports its state to the kubelet, which reports it back to the API Server.
4. Once the Pod is healthy and part of a Service, kube-proxy on every node sees the update via the API Server and rewrites its network rules so traffic to that Service can reach the new Pod.
Checking What's Running on a Node
The scenario: Something is wrong on worker-node-02. Pods are being evicted and you don't know why. You need to see how much resource the node is actually using, and which Pods are currently running on it. Here's how.
```shell
# See all Pods running on a specific node
# --field-selector filters to only show Pods on that node
kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-node-02

# Describe the node — shows resource usage, capacity, and events
kubectl describe node worker-node-02
```
```
NAMESPACE    NAME                      READY   STATUS    NODE
default      payment-api-7d4b-x2p9k    1/1     Running   worker-node-02
default      auth-service-6c5f-k8sj2   1/1     Running   worker-node-02
monitoring   prometheus-5d8b9-mq7pl    1/1     Running   worker-node-02

---

Name:    worker-node-02
Roles:   <none>
Conditions:
  Type             Status   Reason
  ----             ------   ------
  MemoryPressure   True     KubeletHasInsufficientMemory
  DiskPressure     False    KubeletHasNoDiskPressure
  PIDPressure      False    KubeletHasSufficientPID
  Ready            False    KubeletNotReady
Allocated resources:
  Resource   Requests    Limits
  cpu        1850m/2     900m/2
  memory     1.9Gi/2Gi   1.9Gi/2Gi
Events:
  Warning  Evicted  2m  kubelet  The node was low on resource: memory.
```
MemoryPressure: True — that's your answer. The node is running out of memory. The kubelet detected this and is starting to evict Pods to free up resources. This is the kubelet protecting the node from total failure.
Allocated resources shows you 1.9Gi of memory is allocated out of 2Gi capacity. Nearly full. The fix is to either add a new node to the cluster, or reduce the memory requests on some Pods.
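Memory requests and limits are set per container in the Pod spec. Lowering requests like these frees up schedulable capacity on the node; the values here are illustrative:

```yaml
# Hypothetical container spec fragment with explicit resource settings.
resources:
  requests:
    memory: "256Mi"   # what the Scheduler reserves on the node
    cpu: "250m"       # 0.25 of a CPU core
  limits:
    memory: "512Mi"   # the container is OOM-killed above this
    cpu: "500m"       # the container is throttled above this
```

The Scheduler places Pods based on requests, not actual usage, so a node can show as "full" even while its real memory usage is low.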
The Events section at the bottom is always the first place to look when debugging a node. The kubelet writes everything there — why it evicted Pods, what health checks failed, what images it couldn't pull.
Node Conditions — What They Mean
The kubelet reports several "conditions" for each node. These tell you exactly what's healthy and what isn't. Here's a plain-English guide to each one:
| Condition | Healthy value | What it means |
|---|---|---|
| Ready | True | The kubelet is healthy. The node can accept new Pods. This is the main one to watch. |
| MemoryPressure | False | True means the node is running low on memory. Kubernetes will start evicting low-priority Pods to free up space. |
| DiskPressure | False | True means the node's disk is nearly full. Could mean too many container images cached, or log files filling up. |
| PIDPressure | False | True means too many processes are running on the node. Rare, but a runaway container forking lots of child processes can trigger this. |
| NetworkUnavailable | False | True means the network plugin hasn't been configured properly on this node. Usually happens when adding a new node to a cluster. |
On Minikube or Play with Kubernetes, try these commands to see the worker node components live:
```shell
# List nodes with extra detail: IPs, OS image, container runtime
kubectl get nodes -o wide

# See all system Pods — kube-proxy runs as a Pod on every node
kubectl get pods -n kube-system

# Describe a node to see conditions, capacity and running Pods
kubectl describe node <your-node-name>

# See resource usage across nodes (needs metrics-server installed)
kubectl top nodes
```
The -o wide flag on kubectl get nodes shows you the internal and external IP of each node, the container runtime version it's running, and the OS image. Useful for confirming what runtime each node is using.
Worker nodes are intentionally dumb. They don't make decisions. They don't have opinions. They just do what the control plane tells them, run the containers assigned to them, and report back honestly. That simplicity is what makes the whole system scale so well — you can add a new worker node to a cluster and within minutes the Scheduler is already placing Pods on it.
The kubelet is the one exception — it does have some local intelligence. It knows to restart a container that fails its health check, and it knows to protect the node by evicting Pods when memory gets critical. But everything else is decided by the control plane and just executed here.
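The thresholds at which the kubelet starts evicting Pods live in its own configuration file on the node, not in the control plane. A sketch of the relevant fragment (the thresholds shown are illustrative, not the defaults):

```yaml
# Fragment of a KubeletConfiguration showing hard eviction thresholds.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"   # evict Pods when free memory drops below 200Mi
  nodefs.available: "10%"     # evict when the node's filesystem is over 90% full
```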
Practice Questions
Have a go from memory.
1. Which worker node component is the only Kubernetes component that runs directly on the operating system — not inside a container — and why?
2. You run kubectl describe node and see one of the node conditions is True when it should be False — and the Events section says Pods are being evicted. Which condition is most likely showing True?
3. What is the most commonly used container runtime in modern Kubernetes clusters today?
Knowledge Check
Pick the best answer.
1. Your frontend Pod sends a request to "backend-service". The backend Pods are running on a completely different node. Which component makes sure the request actually reaches a healthy backend Pod?
2. A container on worker-node-01 fails its health check. What happens next?
3. A new Pod gets assigned to a worker node. Put the three worker node components in the correct order of what they each do:
Up Next · Lesson 7
Kubernetes Objects Overview
Pods, Deployments, Services, ConfigMaps — Kubernetes manages your app through a set of objects. We map the full picture before diving deep into each one.