Kubernetes Lesson 8 – Pods Explained | Dataplexa
Kubernetes Fundamentals · Lesson 8
Hands-On Lesson — Open your cluster and follow along

Pods Explained

The Pod is the most fundamental building block in all of Kubernetes. Every application you ever run on Kubernetes runs inside a Pod. By the end of this lesson, you'll understand exactly what a Pod is, write your first YAML file, deploy it to a real cluster, and inspect what's running inside.

🖥️
Get your cluster ready before you start

From this lesson onwards, every lesson has real commands to run. Open Play with Kubernetes at labs.play-with-k8s.com (free, no install, runs in your browser) or start Minikube with minikube start. You should be able to run kubectl get nodes and see at least one node in Ready state before continuing.

So What Exactly Is a Pod?

A Pod is a wrapper around one or more containers. Think of it as a little private bubble — the containers inside it share the same network address, the same storage volumes, and can talk to each other as if they're on the same machine.

In practice, most Pods contain just one container. Your payment API lives in one Pod. Your auth service in another. Each has its own isolated bubble.

Occasionally a Pod has a second "helper" container alongside the main one. We call this a sidecar — more on that later in the lesson.

Inside a Pod
Most common — 1 container
📦 Pod
payment-api container
node server.js · Port 3000
IP Address
10.244.0.5
Shared Volume
/tmp
With sidecar — 2 containers
📦 Pod
main container
payment-api
sidecar container
log-shipper
Both share the same IP and volumes
The shared network: Containers inside the same Pod talk to each other using localhost — just like two programs on the same computer. The payment-api can call the log-shipper at localhost:9000, no Service needed.
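To make the sidecar pattern concrete, here is a hedged sketch of a two-container Pod. The names, the busybox image, and the shared-logs volume are illustrative choices for this sketch, not something from the course files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-api-with-sidecar   # illustrative name
spec:
  containers:
    - name: payment-api            # main container
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-logs        # both containers mount the same volume
          mountPath: /var/log/app
    - name: log-shipper            # sidecar container (illustrative image/command)
      image: busybox:1.36
      command: ["sh", "-c", "touch /var/log/app/app.log && tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
  volumes:
    - name: shared-logs
      emptyDir: {}                 # scratch volume that lives as long as the Pod does
```

Because both containers sit in one network namespace, the main container could also reach any port the sidecar opens via localhost — no Service involved.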
The most important rule about Pods

Pods are ephemeral — temporary. When a Pod dies, it's gone. A new Pod that replaces it is a brand new Pod with a brand new IP address and an empty filesystem. It is not the same Pod restarted — it's a replacement.

This is why you never store important data inside a Pod's container filesystem. Anything written there disappears when the Pod goes. For persistent data, you need a PersistentVolume (Lesson 28).

Writing Your First Pod YAML

Every Kubernetes object is defined in a YAML file. YAML is just a way of writing structured data that humans can read easily — think of it as a very clean, indentation-based config format. We'll go deep on YAML syntax in Lesson 14. For now, let's just read through this file and understand every single line.

The scenario: You're a backend engineer at a fintech startup. Your team has just containerised the payment API for the first time. Your lead says "deploy a single Pod to staging so we can test it." Here's the YAML file you'd write.

apiVersion: v1                    # Which version of the Kubernetes API to use
kind: Pod                         # What type of object we're creating
metadata:                         # Information ABOUT the object
  name: payment-api               # The name of this Pod — must be unique in the namespace
  namespace: default              # Which namespace to create it in (used when you don't specify one)
  labels:                         # Tags we put on this Pod so other objects can find it
    app: payment-api              # Key-value pair — Services use this to route traffic here
    environment: staging          # Another label — useful for filtering with kubectl
spec:                             # What we WANT the Pod to look like
  containers:                     # A list of containers to run inside this Pod
    - name: payment-api           # Name for this specific container (not the Pod name)
      image: nginx:1.25           # The container image to use (nginx as a stand-in here)
      ports:
        - containerPort: 80       # The port this container listens on INSIDE the Pod
      resources:                  # How much CPU and memory this container needs
        requests:                 # The minimum it needs to start
          memory: "128Mi"         # 128 mebibytes of RAM
          cpu: "250m"             # 250 millicores = 0.25 of one CPU core
        limits:                   # The maximum it's allowed to use — hard ceiling
          memory: "256Mi"         # It will be killed if it goes above this
          cpu: "500m"             # It will be throttled if it uses more than this
Breaking down every section
apiVersion: v1 — Pods are a core Kubernetes object, so they use the original v1 API. When you get to Deployments you'll see apps/v1 — that's a different API group added later.
kind: Pod — tells Kubernetes exactly what type of object you're creating. Capitalised, exactly as written.
metadata.name — every object needs a name. Pod names must be unique within a namespace. You can't have two Pods both called payment-api in the same namespace.
metadata.labels — these are just tags. Nothing happens automatically when you add them — but a Service with a matching selector will start sending traffic here. Labels are how Kubernetes wires objects together.
spec.containers — a list (notice the - dash, which means "list item" in YAML). Most Pods have one item here.
resources.requests vs limits — requests is what the Scheduler uses to find a node with enough spare capacity. limits is the hard ceiling — exceed it and the container is throttled (CPU) or killed (memory). Always set both. We cover this in depth in Lesson 22.
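Labels pay off immediately on the command line. As a quick sketch (assuming the labels from the YAML above), the -l flag filters Pods by label selector:

```shell
# List only Pods carrying the app=payment-api label
kubectl get pods -l app=payment-api

# Combine selectors — both labels must match
kubectl get pods -l app=payment-api,environment=staging

# Show each Pod's labels alongside the normal columns
kubectl get pods --show-labels
```

This is exactly the mechanism a Service uses internally — its selector is a label query just like the one you typed.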

Deploying the Pod to Your Cluster

The scenario: You've saved the YAML above to a file called payment-pod.yaml. Now you apply it to the cluster and watch it come to life.

# Apply the YAML file — this tells Kubernetes to create the Pod
# -f means "from this file"
kubectl apply -f payment-pod.yaml

# Watch the Pod come to life — -w means "watch" (updates live, Ctrl+C to stop)
kubectl get pod payment-api -w

# Or just check it once
kubectl get pod payment-api
What just happened?

ContainerCreating — Kubernetes accepted your request, the Scheduler picked a node, the kubelet on that node is telling containerd to pull the nginx image from Docker Hub. You see this state while the image is downloading.

READY 0/1 → 1/1 — the first number is "how many containers are ready", the second is "how many containers exist in this Pod". Once the container starts and passes its readiness check, it flips to 1/1.

Running — the container is running, the Pod is healthy, and it's ready to serve traffic. RESTARTS: 0 means it hasn't crashed since it started — a healthy sign.

Inspecting a Running Pod

The scenario: The Pod is running but your colleague says traffic isn't reaching it. You need to look inside — check the logs, see the events, confirm the IP address. These are the exact commands you'll use every single day as a Kubernetes engineer.

# Get detailed info about this Pod — the most useful debugging command in Kubernetes
# Shows IP address, which node it's on, events, container status
kubectl describe pod payment-api

# See the logs coming out of the container
# Add -f to stream logs live (like tail -f)
kubectl logs payment-api

# Stream logs live
kubectl logs payment-api -f

# If a Pod has multiple containers, specify which one with -c
kubectl logs payment-api -c payment-api
What just happened?

The Events section at the bottom is your best friend when debugging. It shows the complete lifecycle of this Pod — Scheduled (Scheduler assigned it), Pulling (kubelet fetching image), Pulled, Created, Started. If anything went wrong, you'd see a Warning event here with a description of what failed.

IP: 10.244.0.5 — this is the Pod's IP address. Other Pods can reach this Pod at this IP, but only from within the cluster. From outside the cluster, you need a Service (Lesson 11) or Ingress (Lesson 33).

Node: worker-node-01 — tells you exactly which machine is running this Pod. Useful when you need to SSH into a node for deeper investigation.

Getting a Shell Inside a Running Pod

The scenario: You need to check whether the container can reach the database. You want to run a curl command from inside the container itself — not from your laptop. Here's how you open a terminal directly inside a running container.

# Open an interactive terminal inside the container
# -it means interactive + allocate a terminal
# -- /bin/bash is the command to run inside (use /bin/sh if bash isn't available)
kubectl exec -it payment-api -- /bin/bash

# Once you're inside, you can run any command as if you're on that machine
# For example, check if nginx is running
ps aux

# Check network connectivity to another service
curl http://auth-service:8080/health

# Exit the container's terminal when done
exit
What just happened?

kubectl exec -it tunnels a terminal session through the Kubernetes API Server, through the kubelet, all the way into the container's process namespace. You're literally typing commands that run inside that container on whichever node it's sitting on.

If the curl to auth-service succeeds, you've confirmed that DNS resolution and Service routing work correctly from inside this Pod. This is one of the most useful debugging techniques for networking issues.

PID 1 is nginx — every container has exactly one "main" process with PID 1. When this process exits, the container stops. That's why your Dockerfile's CMD or ENTRYPOINT must run the process in the foreground, not as a background daemon.
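This is, in fact, exactly how the official nginx image is built. A minimal Dockerfile sketch of the right and wrong way to launch the main process:

```dockerfile
FROM nginx:1.25

# Good — "daemon off;" keeps nginx in the foreground, so it stays PID 1
# and the container keeps running
CMD ["nginx", "-g", "daemon off;"]

# Bad — plain "nginx" daemonizes: the launched process exits immediately,
# PID 1 is gone, and the container stops
# CMD ["nginx"]
```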

The Pod Lifecycle — All Five States

Every Pod moves through a set of phases from birth to death. Understanding these phases means you'll never be confused when you see a status in kubectl get pods.

Pending
Pod has been accepted by Kubernetes but containers aren't running yet. Could be waiting for the Scheduler to assign a node, or waiting for the image to download. Normal to see this briefly on every Pod.
Running
At least one container is running. This is the healthy normal state. The Pod stays here until it's deleted or something goes wrong.
Succeeded
All containers finished with exit code 0 (success). This is the normal end state for Pods that run a one-time task — like a database migration or a batch job.
Failed
All containers have stopped and at least one exited with a non-zero exit code. Something went wrong. Check the logs — kubectl logs <pod> is your first move.
Unknown
The control plane can't get a status update from the node running this Pod. Usually means the node itself has a network problem or has gone down entirely.
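You can see Succeeded for yourself with a minimal one-shot Pod. This is a sketch — the name and command are illustrative — and the key line is restartPolicy: Never, which lets the Pod finish instead of being restarted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task             # illustrative name
spec:
  restartPolicy: Never            # default is Always; Never lets the Pod reach Succeeded
  containers:
    - name: task
      image: busybox:1.36
      command: ["sh", "-c", "echo migration complete"]   # prints and exits with code 0
```

Apply it, wait a few seconds, and kubectl get pod one-shot-task should show STATUS Completed (phase Succeeded).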
The status every engineer dreads: CrashLoopBackOff

This isn't an official Pod phase — it's a container state. It means: "the container started, immediately crashed, Kubernetes restarted it, it crashed again, Kubernetes is waiting a bit longer before trying again... and this is repeating." The backoff time doubles each time — 10s, 20s, 40s, up to 5 minutes.

The most common causes:

→ Your app is crashing on startup — a missing env variable, wrong config, failed DB connection
→ The container's main process exits immediately — nothing to keep it running
→ Out of memory — the container is hitting its memory limit and being killed

First step when you see this: kubectl logs <pod-name> --previous — the --previous flag shows you logs from the last crashed instance, not the current one that's just starting up.
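You can also reproduce CrashLoopBackOff safely with a throwaway Pod. In this sketch (names illustrative), the container exits non-zero on purpose, and the default restartPolicy: Always keeps restarting it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crash-demo                # illustrative name — delete it when you're done
spec:
  containers:
    - name: crasher
      image: busybox:1.36
      command: ["sh", "-c", "echo boom; exit 1"]   # crashes immediately with exit code 1
```

Watch it with kubectl get pod crash-demo -w and you'll see the RESTARTS count climb with the doubling backoff described above; kubectl logs crash-demo --previous shows the "boom" from the last crashed instance.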

Deleting a Pod

The scenario: You created this Pod directly for testing. Your lead wants you to clean it up before end of day. Here's how — and an important note about what happens when you delete a Pod that's managed by a Deployment.

# Delete a Pod by name
kubectl delete pod payment-api

# Delete using the same YAML file you used to create it
kubectl delete -f payment-pod.yaml

# Delete immediately — no graceful shutdown wait
# Use this only when a Pod is stuck terminating
kubectl delete pod payment-api --force --grace-period=0
Important — what happens when you delete a Pod managed by a Deployment

We created this Pod directly — no Deployment managing it. So deleting it means it's truly gone.

But in real production, your Pods are created and managed by a Deployment. If you delete one of those Pods, the ReplicaSet controller notices immediately and creates a fresh replacement. The Pod is gone — the replacement is a brand new one. This is by design. If you want to truly remove a Pod from a Deployment, you scale down the Deployment — not delete individual Pods.
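If you want to watch this self-healing in action, here's a sketch using a throwaway Deployment (illustrative name; assumes a cluster you're free to experiment on):

```shell
# Create a throwaway Deployment with 2 replicas
kubectl create deployment demo --image=nginx:1.25 --replicas=2

# Note the Pod names (create deployment labels them app=demo automatically)
kubectl get pods -l app=demo

# Delete one of them — substitute a real Pod name from the list above
kubectl delete pod <one-of-the-pod-names>

# Within seconds the ReplicaSet has created a brand new Pod with a new name
kubectl get pods -l app=demo

# The right way to remove a Pod for good: scale the Deployment down
kubectl scale deployment demo --replicas=1

# Clean up when finished
kubectl delete deployment demo
```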

Your Pod Command Cheat Sheet

Command What it does
kubectl apply -f pod.yaml Create or update a Pod from a YAML file
kubectl get pods List all Pods in the current namespace
kubectl get pods -o wide List Pods with extra info — IP address and which node
kubectl describe pod <name> Full details — events, resource usage, container state
kubectl logs <name> See what the container has printed to stdout
kubectl logs <name> -f Stream logs live (follow mode)
kubectl logs <name> --previous Logs from the last crashed container — essential for CrashLoopBackOff
kubectl exec -it <name> -- /bin/bash Open a terminal inside the running container
kubectl delete pod <name> Delete a Pod gracefully (sends SIGTERM, then waits the grace period — 30s by default — before killing)
kubectl get pod <name> -o yaml See the full YAML of a running Pod — including the status Kubernetes wrote
Why you rarely create Pods directly in real life

In this lesson you created a Pod directly — which is the right way to learn. But in production you almost never do this. Here's why: if you create a Pod directly and the node it runs on dies, that Pod is gone forever. Nothing recreates it. No self-healing.

In real work, you always use a Deployment (or StatefulSet for databases). The Deployment creates the Pods for you and takes care of replacing them if anything goes wrong. Think of directly-created Pods like a one-off test you'd throw away — not something you'd run your business on.

👨‍💻 Keep practising — try these yourself
1
Create the YAML above, save it as payment-pod.yaml, apply it, and run every command in this lesson. Read every line of output.
2
Try kubectl get pod payment-api -o yaml — look at the status: section at the bottom and compare it to the spec: section you wrote.
3
Change the image in the YAML to nginx:does-not-exist, apply it, and see what happens. Run kubectl describe pod payment-api and find the error in the Events.

Practice Questions

Type from memory.

1. Your Pod is in CrashLoopBackOff. You want to see the logs from the last time it crashed (not the current restart attempt). What command do you run?



2. Pods are __________ — when one dies it is gone forever, and a replacement is a brand new Pod, not the same one restarted.



3. A second helper container that runs alongside the main container inside the same Pod — sharing its network and storage — is called a ________.



Knowledge Check

Pick the best answer.

1. Your Pod is stuck in Pending state. It's been 5 minutes and it hasn't started. What's the best first command to diagnose why?


2. Two containers are running inside the same Pod. How do they communicate with each other?


3. In a Pod's resource section, what is the difference between requests and limits?


Up Next · Lesson 9

ReplicaSets

One Pod is never enough for production. We introduce ReplicaSets — the object that keeps your app alive even when individual Pods crash — and watch self-healing happen in real time.