Kubernetes Lesson 6 – Worker node components | Dataplexa
Kubernetes Fundamentals · Lesson 6

Worker Node Components

The control plane makes decisions. Worker nodes do the actual work. This is where your application containers live, run, and die. Let's walk through exactly what's inside a worker node — in plain English, no skipping anything.

What Is a Worker Node?

A worker node is just a computer — a physical server or a virtual machine — that Kubernetes uses to run your application containers. Nothing special about the hardware. It could be a beefy cloud machine with 64 cores, or a small VM with 2 cores. Kubernetes doesn't care, as long as three pieces of software are installed and running on it.

Those three pieces are: the kubelet, kube-proxy, and a container runtime. That's it. Every worker node in every Kubernetes cluster in the world runs these same three things.

Inside a Worker Node

kubelet: the node's agent. Talks to the control plane and runs containers.
kube-proxy: handles network routing. Gets traffic to the right container.
Container runtime: pulls images and actually starts containers. Usually containerd.

Alongside these three components, your Pods (for example payment-api, auth-svc, and redis) run on the node.

The kubelet — The Node's Loyal Manager

Think of the kubelet as the manager of a single branch office. Head office (the control plane) sends it instructions. The kubelet reads those instructions, gets the work done, and reports back.

More precisely — the kubelet watches the Kubernetes API Server and asks: "Are there any Pods assigned to my node that I should be running?" When a new Pod arrives, the kubelet tells the container runtime to pull the image and start the container. Then it watches that container like a hawk.

Starting containers

When a Pod is assigned to this node, kubelet tells the container runtime to pull the image and start it. It passes along all the config — environment variables, port numbers, resource limits.

Health checking

The kubelet runs health checks on every container it manages. If a container fails its check, the kubelet restarts it. No human needed, no alert required — it just fixes it.
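You can see this self-healing in the RESTARTS column. The Pod name below is a hypothetical example; substitute one of your own:

```shell
# The RESTARTS column counts how many times the kubelet has restarted
# each container after a crash or a failed health check
kubectl get pods

# The Events section at the bottom explains why, e.g. "Liveness probe failed"
# ("payment-api-7d4b9" is a hypothetical Pod name)
kubectl describe pod payment-api-7d4b9
```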

Reporting back

Every few seconds the kubelet sends a heartbeat to the API Server — "I'm alive, here's what's running on me, here's how much CPU and memory each Pod is using."
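You can watch these heartbeats yourself. Modern clusters record them as Lease objects, one per node, in the kube-node-lease namespace; each renewal is a kubelet heartbeat:

```shell
# One Lease per node; the renew time updates every few seconds
# as long as the kubelet is alive
kubectl get leases -n kube-node-lease
```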

Cleaning up

When a Pod is deleted or moved to another node, the kubelet stops the containers and frees up the resources. It also cleans up unused images to save disk space.

Something important to understand about the kubelet

The kubelet is the only Kubernetes component that is not run as a container. It runs directly on the operating system of the worker node as a regular system service. Why? Because it needs to exist before any containers can run — it's the thing that starts containers in the first place. You can't run the container-starter inside a container.
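Because of this, you inspect the kubelet with ordinary Linux service tools rather than kubectl. A sketch, assuming a systemd-based node (how kubeadm and most distributions set it up):

```shell
# Run these on the worker node itself, not through kubectl
sudo systemctl status kubelet                      # is the service running?
sudo journalctl -u kubelet --since "10 min ago"    # its logs live in the system journal
```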

Watching the kubelet in action

The scenario: You're a junior DevOps engineer at a retail company. Black Friday is two weeks away. Your lead asks you to check that all nodes in the cluster are healthy and properly reporting in. Here's how you do it.

# List all nodes and see their current status
# This is the first command every engineer runs when checking a cluster
kubectl get nodes

# Get more detail on a specific node — perfect for investigating a problem
# Replace "worker-node-01" with your actual node name
kubectl describe node worker-node-01
What just happened?

STATUS: Ready means the kubelet on that node is alive, healthy, and reporting in to the API Server regularly. The control plane trusts this node to run Pods.

STATUS: NotReady on worker-node-03 is a red flag. The kubelet on that node has stopped sending heartbeats. Maybe the node crashed. Maybe the network dropped. The Node Controller will wait 40 seconds before acting on this — and if it stays NotReady for 5 minutes, any Pods on that node will be rescheduled elsewhere automatically.

ROLES: <none> on worker nodes just means they're plain workers — no control plane responsibilities. The control-plane node shows its role explicitly.

kube-proxy — The Node's Postman

Here's a simple problem. Your frontend container needs to talk to your backend container. But containers get different IP addresses every time they start. You can't hardcode an IP — it'll break the moment the container restarts or moves to a different node.

Kubernetes solves this with Services — a stable name and IP address that always points to a group of healthy Pods, no matter which node they're on or how many times they've restarted. And kube-proxy is the component that makes Services actually work at the network level.

kube-proxy runs on every node and maintains a set of network rules in the Linux kernel. When a request comes in for a Service, those rules route it to the right Pod — automatically, instantly, across any node in the cluster.
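If you're curious, you can see those rules on the node itself. A sketch, assuming kube-proxy is running in iptables mode (KUBE-SERVICES is the chain kube-proxy creates in that mode):

```shell
# Run on the node, not through kubectl: lists the Service routing rules
# that kube-proxy has written into the kernel's NAT table
sudo iptables -t nat -L KUBE-SERVICES -n | head -20
```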

How Traffic Finds Your Pod — Step by Step
1. A request arrives for "payment-service" on port 80. Maybe from another Pod inside the cluster, or from an external load balancer.

2. kube-proxy's rules intercept the request. Network rules written into the Linux kernel (iptables or ipvs) catch the request before it goes anywhere.

3. A healthy Pod is picked from the list. kube-proxy knows which Pods are currently healthy (from the Endpoints object updated by the Endpoints Controller), and the rules route the traffic to one of them.

4. Traffic reaches your Pod, on any node. The Pod could be on Node 1, Node 2, or Node 3. kube-proxy handles the cross-node routing transparently. Your frontend didn't need to know where the backend was.
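You can inspect both halves of this picture from kubectl. The Service name below is the hypothetical one from the steps above:

```shell
# The stable virtual IP that clients connect to
kubectl get service payment-service

# The list of healthy Pod IPs that kube-proxy routes that traffic to:
# one entry per ready Pod
kubectl get endpoints payment-service
```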
Two modes kube-proxy can use — you'll see both mentioned
iptables mode (most common)

Writes rules into the Linux kernel's iptables firewall. Very battle-tested. Slight overhead with large numbers of Services (thousands of rules).

ipvs mode (modern, faster)

Uses a different kernel module (IP Virtual Server). Scales much better for clusters with thousands of Services. Now the preferred choice for large production clusters.
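On kubeadm-style clusters, the mode is set in kube-proxy's ConfigMap; an empty value means the default (iptables). A quick way to check, with the caveat that managed clusters may configure kube-proxy differently:

```shell
# Works where kube-proxy reads its config from this ConfigMap (e.g. kubeadm)
kubectl get configmap kube-proxy -n kube-system -o yaml | grep "mode:"
```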

The Container Runtime — The Hands That Do the Work

The container runtime is the software that actually creates and runs containers. The kubelet tells it what to do. The runtime just does it — no opinions, no decisions, pure execution.

When the kubelet says "start this container," the runtime does all the heavy lifting underneath:

Pulls the image — Downloads the container image from the registry (Docker Hub, AWS ECR, etc.) to the local machine if it isn't already cached there.
Unpacks and mounts the filesystem — Sets up the container's own isolated file system from the image layers. The container sees only what it's supposed to see — not the host's files.
Sets up the network interface — Gives the container its own network namespace and a private IP address so it can send and receive traffic.
Enforces resource limits — Uses Linux kernel features (cgroups) to make sure the container can only use the CPU and memory that Kubernetes allocated to it. It can't steal resources from other containers.
Starts the process — Kicks off the main process inside the container. For a web server, that might be node server.js or python app.py. When this process exits, the container is done.
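You can confirm the limits the kubelet hands to the runtime from the Pod spec itself. The Pod name is a hypothetical example:

```shell
# Prints the requests/limits for the first container in the Pod
kubectl get pod payment-api-7d4b9 \
  -o jsonpath='{.spec.containers[0].resources}'
```

On the node, the enforced values live in the cgroup filesystem under /sys/fs/cgroup, but the exact paths vary by OS, cgroup version, and runtime.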
The three runtimes you'll hear about
containerd
The most widely used runtime today. Lightweight, fast, and purpose-built for Kubernetes. It's actually the engine that runs inside Docker itself — when you use Docker, it calls containerd under the hood. Most managed Kubernetes clusters (EKS, GKE, AKS) use containerd directly.
CRI-O
A lightweight runtime built specifically and exclusively for Kubernetes. No Docker compatibility layer at all — just the minimum needed to run containers for Kubernetes. Common in Red Hat OpenShift environments.
Docker Engine
Still works in Kubernetes via a compatibility shim called cri-dockerd. Mostly seen in older clusters. New clusters almost never set this up — there's no reason to add Docker's extra layers when containerd does the job directly.
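Whichever runtime a node uses, you can talk to it directly with crictl, a small CLI that speaks the same CRI interface the kubelet uses. Run it on the node itself:

```shell
sudo crictl ps       # containers the runtime is currently managing
sudo crictl images   # images cached on this node
sudo crictl info     # runtime name, version, and status
```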

How All Three Work Together

Here's the thing most people don't fully appreciate: these three components barely talk to each other. The kubelet drives the container runtime through a narrow interface (the Container Runtime Interface, or CRI), and kube-proxy talks to neither of them; it watches the API Server on its own. Each does its own job, and Kubernetes wires the results together. Let's trace a real scenario from start to finish:

Scenario: You deploy a new backend service
1. Control plane schedules the Pod to worker-node-02. The Scheduler picks worker-node-02 because it has the most spare CPU, and writes the assignment through the API Server into etcd.

2. kubelet on worker-node-02 notices the new Pod. It sees a Pod assigned to it, reads the Pod's spec — what image to use, what ports to open, how much memory to allocate — and passes all of that to containerd.

3. containerd pulls the image and starts the container. It downloads the image, sets up the filesystem and network interface, enforces the resource limits, and starts the process. The container is now running.

4. kubelet reports "Running" back to the API Server. The status update lands in etcd, and kubectl get pods will now show the Pod as Running.

5. kube-proxy updates the routing rules on every node. Now that the Pod is healthy, kube-proxy adds it to the list of Pods behind its Service. Traffic from any node in the cluster can reach the new Pod almost instantly.
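You can watch this whole sequence happen live in two terminals. Nothing below assumes a specific Pod name:

```shell
# Terminal 1: watch the Pod move Pending -> ContainerCreating -> Running
kubectl get pods -w

# Terminal 2: the event trail shows each actor's step, e.g.
# Scheduled (Scheduler), Pulling/Pulled (runtime), Created/Started (kubelet)
kubectl get events --sort-by=.metadata.creationTimestamp
```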

Checking What's Running on a Node

The scenario: Something is wrong on worker-node-02. Pods are being evicted and you don't know why. You need to see how much resource the node is actually using, and which Pods are currently running on it. Here's how.

# See all Pods running on a specific node
# --field-selector filters to only show Pods on that node
kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-node-02

# Describe the node — shows resource usage, capacity, and events
kubectl describe node worker-node-02
What just happened?

MemoryPressure: True — that's your answer. The node is running out of memory. The kubelet detected this and is starting to evict Pods to free up resources. This is the kubelet protecting the node from total failure.

Allocated resources shows you 1.9Gi of memory is allocated out of 2Gi capacity. Nearly full. The fix is to either add a new node to the cluster, or reduce the memory requests on some Pods.

The Events section at the bottom is always the first place to look when debugging a node. The kubelet writes everything there — why it evicted Pods, what health checks failed, what images it couldn't pull.

Node Conditions — What They Mean

The kubelet reports several "conditions" for each node. These tell you exactly what's healthy and what isn't. Here's a plain-English guide to each one:

Ready (healthy value: True). When True, the kubelet is healthy and the node can accept new Pods. This is the main one to watch.

MemoryPressure (healthy value: False). When True, the node is running low on memory, and Kubernetes will start evicting low-priority Pods to free up space.

DiskPressure (healthy value: False). When True, the node's disk is nearly full. Could mean too many cached container images, or log files filling up.

PIDPressure (healthy value: False). When True, too many processes are running on the node. Rare, but a runaway container forking lots of child processes can trigger it.

NetworkUnavailable (healthy value: False). When True, the network plugin hasn't been configured properly on this node. Usually happens when adding a new node to a cluster.
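You can pull just these conditions out of a node with a jsonpath query. The node name below is a placeholder:

```shell
# Prints one "Type=Status" line per condition, e.g. "Ready=True"
kubectl get node worker-node-01 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```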
👨‍💻 Where to practise — inspecting worker node components

On Minikube or Play with Kubernetes, try these commands to see the worker node components live:

# Check all nodes and their status
kubectl get nodes -o wide

# See all system Pods — kube-proxy runs as a Pod on every node
kubectl get pods -n kube-system

# Describe a node to see conditions, capacity and running Pods
kubectl describe node <your-node-name>

# See resource usage across nodes (needs metrics-server installed)
kubectl top nodes

The -o wide flag on kubectl get nodes shows you the internal and external IP of each node, the container runtime version it's running, and the OS image. Useful for confirming what runtime each node is using.

The thing to remember about worker nodes

Worker nodes are intentionally dumb. They don't make decisions. They don't have opinions. They just do what the control plane tells them, run the containers assigned to them, and report back honestly. That simplicity is what makes the whole system scale so well — you can add a new worker node to a cluster and within minutes the Scheduler is already placing Pods on it.

The kubelet is the one exception — it does have some local intelligence. It knows to restart a container that fails its health check, and it knows to protect the node by evicting Pods when memory gets critical. But everything else is decided by the control plane and just executed here.

Practice Questions

Have a go from memory.

1. Which worker node component is the only Kubernetes component that runs directly on the operating system — not inside a container — and why?



2. You run kubectl describe node and see one of the node conditions is True when it should be False — and the Events section says Pods are being evicted. Which condition is most likely showing True?



3. What is the most commonly used container runtime in modern Kubernetes clusters today?



Knowledge Check

Pick the best answer.

1. Your frontend Pod sends a request to "backend-service". The backend Pods are running on a completely different node. Which component makes sure the request actually reaches a healthy backend Pod?


2. A container on worker-node-01 fails its health check. What happens next?


3. A new Pod gets assigned to a worker node. Put the three worker node components in the correct order of what they each do:


Up Next · Lesson 7

Kubernetes Objects Overview

Pods, Deployments, Services, ConfigMaps — Kubernetes manages your app through a set of objects. We map the full picture before diving deep into each one.