Kubernetes Course
Pods Explained
The Pod is the most fundamental building block in all of Kubernetes. Every application you ever run on Kubernetes runs inside a Pod. By the end of this lesson you'll understand exactly what a Pod is, have written your first YAML file, deployed it to a real cluster, and inspected what's running inside.
From this lesson onwards, every lesson has real commands to run. Open Play with Kubernetes at labs.play-with-k8s.com (free, no install, runs in your browser) or start Minikube with minikube start. You should be able to run kubectl get nodes and see at least one node in Ready state before continuing.
So What Exactly Is a Pod?
A Pod is a wrapper around one or more containers. Think of it as a little private bubble — the containers inside it share the same network address, the same storage volumes, and can talk to each other as if they're on the same machine.
In practice, most Pods contain just one container. Your payment API lives in one Pod. Your auth service in another. Each has its own isolated bubble.
Occasionally a Pod has a second "helper" container alongside the main one. We call this a sidecar — more on that later in the lesson.
Containers inside the same Pod talk to each other over localhost — just like two programs on the same computer. The payment-api can call the log-shipper at localhost:9000, no Service needed.
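We'll write full YAML in a moment, but here's a sketch of what a two-container Pod looks like, just to show the shape. The log-shipper name and image are hypothetical, for illustration only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
spec:
  containers:
    - name: payment-api          # the main container
      image: nginx:1.25
    - name: log-shipper          # the sidecar — hypothetical image, for illustration
      image: fluent/fluent-bit:2.2
```

Both containers share the Pod's single IP address and any mounted volumes, which is exactly what makes the sidecar pattern work.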
Pods are ephemeral — temporary. When a Pod dies, it's gone. A new Pod that replaces it is a brand new Pod with a brand new IP address and an empty filesystem. It is not the same Pod restarted — it's a replacement.
This is why you never store important data inside a Pod's container filesystem. Anything written there disappears when the Pod goes. For persistent data, you need a PersistentVolume (Lesson 28).
Writing Your First Pod YAML
Every Kubernetes object is defined in a YAML file. YAML is just a way of writing structured data that humans can read easily — think of it as a very clean, indentation-based config format. We'll go deep on YAML syntax in Lesson 14. For now, let's just read through this file and understand every single line.
The scenario: You're a backend engineer at a fintech startup. Your team has just containerised the payment API for the first time. Your lead says "deploy a single Pod to staging so we can test it." Here's the YAML file you'd write.
```yaml
apiVersion: v1                 # Which version of the Kubernetes API to use
kind: Pod                      # What type of object we're creating
metadata:                      # Information ABOUT the object
  name: payment-api            # The name of this Pod — must be unique in the namespace
  namespace: default           # Which namespace to create it in (default = the default one)
  labels:                      # Tags we put on this Pod so other objects can find it
    app: payment-api           # Key-value pair — Services use this to route traffic here
    environment: staging       # Another label — useful for filtering with kubectl
spec:                          # What we WANT the Pod to look like
  containers:                  # A list of containers to run inside this Pod
    - name: payment-api        # Name for this specific container (not the Pod name)
      image: nginx:1.25        # The container image to use (nginx as a stand-in here)
      ports:
        - containerPort: 80    # The port this container listens on INSIDE the Pod
      resources:               # How much CPU and memory this container needs
        requests:              # The minimum it needs to start
          memory: "128Mi"      # 128 mebibytes of RAM
          cpu: "250m"          # 250 millicores = 0.25 of one CPU core
        limits:                # The maximum it's allowed to use — hard ceiling
          memory: "256Mi"      # It will be killed if it goes above this
          cpu: "500m"          # It will be throttled if it uses more than this
```
A few details worth pausing on:

- apiVersion: v1 — Pods live in the original core API group. Objects like Deployments use apps/v1 — that's a different API group added later.
- name: payment-api — names must be unique per namespace, so no other Pod can be called payment-api in the same namespace.
- containers: is a list (note the - dash, which means "list item" in YAML). Most Pods have one item here.

Deploying the Pod to Your Cluster
The scenario: You've saved the YAML above to a file called payment-pod.yaml. Now you apply it to the cluster and watch it come to life.
```bash
# Apply the YAML file — this tells Kubernetes to create the Pod
# -f means "from this file"
kubectl apply -f payment-pod.yaml

# Watch the Pod come to life — -w means "watch" (updates live, Ctrl+C to stop)
kubectl get pod payment-api -w

# Or just check it once
kubectl get pod payment-api
```
```
pod/payment-api created

NAME          READY   STATUS              RESTARTS   AGE
payment-api   0/1     ContainerCreating   0          2s
payment-api   0/1     ContainerCreating   0          4s
payment-api   1/1     Running             0          8s
```
ContainerCreating — Kubernetes accepted your request, the Scheduler picked a node, the kubelet on that node is telling containerd to pull the nginx image from Docker Hub. You see this state while the image is downloading.
READY 0/1 → 1/1 — the first number is "how many containers are ready", the second is "how many containers exist in this Pod". Once the container starts and passes its readiness check, it flips to 1/1.
Running — the container is running, the Pod is healthy, and it's ready to serve traffic. RESTARTS: 0 means it hasn't crashed since it started — a healthy sign.
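The readiness check mentioned above is something you can configure yourself. As a sketch, here's what a readiness probe might look like added to the container spec from earlier — this block is an illustrative addition, not part of the lesson's file (nginx happens to serve an HTTP 200 at / by default):

```yaml
# Hypothetical addition under the container in payment-pod.yaml
readinessProbe:
  httpGet:
    path: /                  # nginx serves its welcome page here
    port: 80
  initialDelaySeconds: 2     # wait 2s after the container starts before the first check
  periodSeconds: 5           # then re-check every 5 seconds
```

Until this probe succeeds, the Pod shows READY 0/1 and Services won't send it traffic.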
Inspecting a Running Pod
The scenario: The Pod is running but your colleague says traffic isn't reaching it. You need to look inside — check the logs, see the events, confirm the IP address. These are the exact commands you'll use every single day as a Kubernetes engineer.
```bash
# Get detailed info about this Pod — the most useful debugging command in Kubernetes
# Shows IP address, which node it's on, events, container status
kubectl describe pod payment-api

# See the logs coming out of the container
kubectl logs payment-api

# Stream logs live (like tail -f)
kubectl logs payment-api -f

# If a Pod has multiple containers, specify which one with -c
kubectl logs payment-api -c payment-api
```
```
Name:         payment-api
Namespace:    default
Node:         worker-node-01/10.0.0.4
Start Time:   Mon, 16 Mar 2026 09:14:22 +0000
Labels:       app=payment-api
              environment=staging
Status:       Running
IP:           10.244.0.5
Containers:
  payment-api:
    Image:          nginx:1.25
    Port:           80/TCP
    State:          Running
      Started:      Mon, 16 Mar 2026 09:14:30 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     250m
      memory:  128Mi
Events:
  Type    Reason     Age   Message
  ----    ------     ---   -------
  Normal  Scheduled  45s   Successfully assigned default/payment-api to worker-node-01
  Normal  Pulling    44s   Pulling image "nginx:1.25"
  Normal  Pulled     38s   Successfully pulled image "nginx:1.25"
  Normal  Created    38s   Created container payment-api
  Normal  Started    38s   Started container payment-api
```
The Events section at the bottom is your best friend when debugging. It shows the complete lifecycle of this Pod — Scheduled (Scheduler assigned it), Pulling (kubelet fetching image), Pulled, Created, Started. If anything went wrong, you'd see a Warning event here with a description of what failed.
IP: 10.244.0.5 — this is the Pod's IP address. Other Pods can reach this Pod at this IP, but only from within the cluster. From outside the cluster, you need a Service (Lesson 11) or Ingress (Lesson 33).
Node: worker-node-01 — tells you exactly which machine is running this Pod. Useful when you need to SSH into a node for deeper investigation.
Getting a Shell Inside a Running Pod
The scenario: You need to check whether the container can reach the database. You want to run a curl command from inside the container itself — not from your laptop. Here's how you open a terminal directly inside a running container.
```bash
# Open an interactive terminal inside the container
# -it means interactive + allocate a terminal
# -- /bin/bash is the command to run inside (use /bin/sh if bash isn't available)
kubectl exec -it payment-api -- /bin/bash

# Once you're inside, you can run any command as if you're on that machine
# For example, check if nginx is running
ps aux

# Check network connectivity to another service
curl http://auth-service:8080/health

# Exit the container's terminal when done
exit
```
```
root@payment-api:/# ps aux
USER   PID  COMMAND
root   1    nginx: master process nginx -g daemon off;
nginx  29   nginx: worker process
nginx  30   nginx: worker process
root@payment-api:/# curl http://auth-service:8080/health
{"status":"healthy","version":"2.1.0"}
root@payment-api:/# exit
exit
```
kubectl exec -it tunnels a terminal session through the Kubernetes API Server, through the kubelet, all the way into the container's process namespace. You're literally typing commands that run inside that container on whichever node it's sitting on.
The curl to auth-service worked — meaning DNS resolution and Service routing are working correctly from inside this Pod. This is one of the most useful debugging techniques for networking issues.
PID 1 is nginx — every container has exactly one "main" process with PID 1. When this process exits, the container stops. That's why your Dockerfile's CMD or ENTRYPOINT must run the process in the foreground, not as a background daemon.
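To make the foreground requirement concrete, here's how the same idea looks if you override the command in the Pod spec instead of the Dockerfile. This is a hypothetical override for illustration — the stock nginx image already runs this exact command by default:

```yaml
# Illustrative fragment of a container spec — not needed in payment-pod.yaml
containers:
  - name: payment-api
    image: nginx:1.25
    command: ["nginx"]
    args: ["-g", "daemon off;"]   # "daemon off" keeps nginx in the foreground as PID 1
```

If nginx were allowed to daemonize, PID 1 would exit immediately and Kubernetes would treat the container as dead, restarting it over and over.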
The Pod Lifecycle — All Five States
Every Pod moves through a set of phases from birth to death. Understanding these phases means you'll never be confused when you see a status in kubectl get pods.
The five official phases are:

- Pending — the cluster has accepted the Pod, but its containers aren't running yet (still being scheduled, or still pulling the image).
- Running — the Pod is bound to a node and at least one container is running.
- Succeeded — all containers exited successfully and won't be restarted (common for one-off jobs).
- Failed — all containers have terminated and at least one exited with an error. kubectl logs <pod> is your first move.
- Unknown — the control plane has lost contact with the node running the Pod.

And then there's the status you'll see most often when something is wrong: CrashLoopBackOff. This isn't an official Pod phase — it's a container state. It means: "the container started, immediately crashed, Kubernetes restarted it, it crashed again, Kubernetes is waiting a bit longer before trying again... and this is repeating." The backoff time doubles each time — 10s, 20s, 40s, up to 5 minutes.
The most common causes:

- The application crashes on startup — a bug, a missing environment variable, or a config file that isn't where it expects.
- The container's command or entrypoint is wrong, so the process exits immediately.
- The container keeps exceeding its memory limit and gets OOMKilled.
- A failing liveness probe keeps telling Kubernetes to restart an otherwise fine container.
First step when you see this: kubectl logs <pod-name> --previous — the --previous flag shows you logs from the last crashed instance, not the current one that's just starting up.
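If you want to see CrashLoopBackOff for yourself, here's a minimal sketch you can apply — the Pod name and the deliberately failing command are made up for this demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crash-demo           # hypothetical name, just for this experiment
spec:
  containers:
    - name: crasher
      image: busybox:1.36
      # Print a message and exit with an error — the container dies immediately,
      # so Kubernetes restarts it with an ever-growing backoff.
      command: ["sh", "-c", "echo boom; exit 1"]
```

Apply it, run kubectl get pod crash-demo -w, and watch STATUS cycle between Error and CrashLoopBackOff while RESTARTS climbs. Clean up with kubectl delete pod crash-demo when you're done.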
Deleting a Pod
The scenario: You created this Pod directly for testing. Your lead wants you to clean it up before end of day. Here's how — and an important note about what happens when you delete a Pod that's managed by a Deployment.
```bash
# Delete a Pod by name
kubectl delete pod payment-api

# Delete using the same YAML file you used to create it
kubectl delete -f payment-pod.yaml

# Delete immediately — no graceful shutdown wait
# Use this only when a Pod is stuck terminating
kubectl delete pod payment-api --force --grace-period=0
```
```
pod "payment-api" deleted
```
We created this Pod directly — no Deployment managing it. So deleting it means it's truly gone.
But in real production, your Pods are created and managed by a Deployment. If you delete one of those Pods, the ReplicaSet controller notices immediately and creates a fresh replacement. The Pod is gone — the replacement is a brand new one. This is by design. If you want to truly remove a Pod from a Deployment, you scale down the Deployment — not delete individual Pods.
Your Pod Command Cheat Sheet
| Command | What it does |
|---|---|
| kubectl apply -f pod.yaml | Create or update a Pod from a YAML file |
| kubectl get pods | List all Pods in the current namespace |
| kubectl get pods -o wide | List Pods with extra info — IP address and which node |
| kubectl describe pod <name> | Full details — events, resource usage, container state |
| kubectl logs <name> | See what the container has printed to stdout |
| kubectl logs <name> -f | Stream logs live (follow mode) |
| kubectl logs <name> --previous | Logs from the last crashed container — essential for CrashLoopBackOff |
| kubectl exec -it <name> -- /bin/bash | Open a terminal inside the running container |
| kubectl delete pod <name> | Delete a Pod gracefully (waits for running requests to finish) |
| kubectl get pod <name> -o yaml | See the full YAML of a running Pod — including the status Kubernetes wrote |
In this lesson you created a Pod directly — which is the right way to learn. But in production you almost never do this. Here's why: if you create a Pod directly and the node it runs on dies, that Pod is gone forever. Nothing recreates it. No self-healing.
In real work, you always use a Deployment (or StatefulSet for databases). The Deployment creates the Pods for you and takes care of replacing them if anything goes wrong. Think of directly-created Pods like a one-off test you'd throw away — not something you'd run your business on.
Hands-On Exercises

1. Write the YAML from this lesson into a file called payment-pod.yaml, apply it, and run every command in this lesson. Read every line of output.
2. Run kubectl get pod payment-api -o yaml — look at the status: section at the bottom and compare it to the spec: section you wrote.
3. Change the image to nginx:does-not-exist, apply it, and see what happens. Run kubectl describe pod payment-api and find the error in the Events.

Practice Questions
Type from memory.
1. Your Pod is in CrashLoopBackOff. You want to see the logs from the last time it crashed (not the current restart attempt). What command do you run?
2. Pods are __________ — when one dies it is gone forever, and a replacement is a brand new Pod, not the same one restarted.
3. A second helper container that runs alongside the main container inside the same Pod — sharing its network and storage — is called a ________.
Knowledge Check
Pick the best answer.
1. Your Pod is stuck in Pending state. It's been 5 minutes and it hasn't started. What's the best first command to diagnose why?
2. Two containers are running inside the same Pod. How do they communicate with each other?
3. In a Pod's resource section, what is the difference between requests and limits?
Up Next · Lesson 9
ReplicaSets
One Pod is never enough for production. We introduce ReplicaSets — the object that keeps your app alive even when individual Pods crash — and watch self-healing happen in real time.