Kubernetes Lesson 10 – Deployments | Dataplexa
Kubernetes Fundamentals · Lesson 10
Hands-On Lesson — Open your cluster and follow along

Deployments

This is the big one. The Deployment is the object you will use every single day as a Kubernetes engineer. It does everything a ReplicaSet does — plus rolling updates, rollbacks, and update history. By the end of this lesson you'll be deploying real apps, updating them without any downtime, and rolling back when something goes wrong. All from the command line.

🖥️
Get your cluster ready

Use Play with Kubernetes at labs.play-with-k8s.com or run minikube start. Confirm with kubectl get nodes: you need at least one node in the Ready state before you begin.

Why ReplicaSets Alone Aren't Enough

In Lesson 9 you saw that a ReplicaSet keeps N copies of your Pod running — and heals itself when things go wrong. That's great. But there's a problem ReplicaSets can't solve on their own.

Say you need to update your payment API from version 1 to version 2. With a ReplicaSet, you'd have to stop all the old Pods and start new ones — taking your service down completely while the update happens. Or you'd have to manually delete and recreate Pods one by one, babysitting the whole process. Nobody wants that at midnight before a big release.

A Deployment solves this. It wraps a ReplicaSet and adds one superpower: it can replace old Pods with new ones gradually and safely, keeping your service running throughout. And if the new version breaks something, one command takes you back to the old one instantly.

What a Deployment Actually Is

A Deployment manages ReplicaSets. When you create a Deployment, it creates a ReplicaSet, which creates your Pods. When you update the Deployment, it creates a new ReplicaSet with the new configuration, gradually scales it up, and gradually scales down the old one. The old ReplicaSet sticks around (scaled to zero) as your rollback point.

The Three-Layer Stack
Deployment
Manages updates, rollbacks, history
creates and manages
ReplicaSet
Keeps N copies alive, self-heals
creates and owns
Pod 1
Running
Pod 2
Running
Pod 3
Running

Your First Deployment YAML

The scenario: You're a DevOps engineer at a fintech startup. The payment API has been tested and is ready for a real deployment. Your lead wants it running as a proper Deployment — 3 replicas, resource limits set, ready for rolling updates. Here's the YAML.

apiVersion: apps/v1                   # Deployments use the apps/v1 API group
kind: Deployment                      # The object type
metadata:
  name: payment-api                   # Name of the Deployment
  labels:
    app: payment-api                  # Label on the Deployment itself
spec:
  replicas: 3                         # Keep 3 Pods running at all times
  selector:                           # How this Deployment finds its Pods
    matchLabels:
      app: payment-api                # Match Pods with this label
  strategy:                           # How to handle updates — this is new vs ReplicaSet
    type: RollingUpdate               # Replace Pods gradually, not all at once
    rollingUpdate:
      maxUnavailable: 1               # At most 1 Pod can be unavailable during an update
      maxSurge: 1                     # At most 1 extra Pod can exist above the desired count
  template:                           # Blueprint for the Pods this Deployment creates
    metadata:
      labels:
        app: payment-api              # Must match the selector above
    spec:
      containers:
        - name: payment-api           # Container name
          image: nginx:1.24           # We start with v1.24 — we'll update to 1.25 shortly
          ports:
            - containerPort: 80       # Port the container listens on
          resources:
            requests:
              memory: "128Mi"         # Minimum memory to get scheduled
              cpu: "250m"             # 0.25 of one CPU core
            limits:
              memory: "256Mi"         # Container is killed if it exceeds this
              cpu: "500m"             # Container is throttled if it exceeds this
The new parts — strategy and rollingUpdate
strategy.type: RollingUpdate — instead of killing all old Pods and starting new ones (which would be Recreate), a rolling update replaces Pods one at a time. Users feel nothing.
maxUnavailable: 1 — at any point during an update, at most 1 Pod can be down. With 3 replicas, at least 2 are always serving traffic. Your service never drops to zero.
maxSurge: 1 — at most 1 extra Pod above your desired count can exist during an update. So with 3 replicas, you might temporarily have 4 Pods while the update is in progress. This lets new Pods start before old ones are killed — faster and safer.
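Both fields also accept percentages, which scale automatically if you later change the replica count. A sketch of the alternative form (the values here are illustrative, not part of the lesson's manifest) — note the different rounding directions:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%   # rounded DOWN: 25% of 3 replicas = 0 Pods may be unavailable
    maxSurge: 25%         # rounded UP: 25% of 3 replicas = 1 extra Pod allowed
```

With 3 replicas this is actually stricter than the absolute values in the manifest above: rounding maxUnavailable down to 0 means no old Pod is killed until a new one is Ready.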

Deploying It and Checking the Status

The scenario: You've saved the YAML as payment-deployment.yaml. Let's apply it, check the Deployment status, and explore what got created underneath.

# Apply the Deployment
kubectl apply -f payment-deployment.yaml

# Check the Deployment status — this is the most useful command
kubectl get deployment payment-api

# Watch the Pods come up
kubectl get pods -w

# See the ReplicaSet the Deployment created automatically
kubectl get replicaset

# Full details on the Deployment — shows events, strategy, rollout status
kubectl describe deployment payment-api
What just happened?

READY 3/3 — three Pods exist and all three have passed their readiness check. UP-TO-DATE: 3 — all Pods are running the current version of the template. AVAILABLE: 3 — all three are available to serve traffic.

Pod names now have two suffixes: payment-api-7d9f8c6b4-x2p9k. The first suffix (7d9f8c6b4) is the ReplicaSet's name suffix, a hash of the Pod template. The second suffix (x2p9k) is the unique Pod ID. When you update the Deployment, the template hash changes, so new Pods get new names.

The Deployment automatically created a ReplicaSet. You didn't write a ReplicaSet YAML — the Deployment controller did that for you.
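You can see that ownership chain for yourself. A quick check, assuming the payment-api Deployment above is running (the hash suffixes and Pod IDs will differ on your cluster):

```shell
# Show the pod-template-hash label as an extra column
kubectl get pods -l app=payment-api -L pod-template-hash

# Each Pod's owner is the ReplicaSet, not the Deployment
kubectl get pods -l app=payment-api \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'

# And the ReplicaSet's owner is the Deployment
kubectl get rs -l app=payment-api \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'
```

The ownerReferences field is how Kubernetes records this chain internally; it's also what drives the cascading delete you'll see at the end of the lesson.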

Performing a Rolling Update

The scenario: The engineering team has just released version 1.25 of the payment API (we're using nginx image versions as a stand-in here). You need to roll it out to production. Zero downtime. Here's how a rolling update works step by step — and then the command to do it.

Rolling Update — What Kubernetes Does Behind the Scenes
1
You update the image. Kubernetes creates a new ReplicaSet for v1.25.
v1.24 ✓
v1.24 ✓
v1.24 ✓
v1.25 new
2
New Pod is Running. One old Pod is terminated.
v1.24 ✓
v1.24 ✓
v1.24 ✗
v1.25 ✓
3
Another new Pod. Another old one terminated.
v1.24 ✓
v1.24 ✗
v1.25 ✓
v1.25 ✓
4
Update complete. All 3 Pods running v1.25. Old ReplicaSet scaled to 0.
v1.25 ✓
v1.25 ✓
v1.25 ✓
At every point during steps 1–4, at least 2 Pods are serving traffic. Users experience zero downtime. The whole update takes about 20–30 seconds.

Now let's actually do it. Two ways to trigger a rolling update:

# Option 1 — update the image directly with kubectl (fast for quick updates)
# This changes the image of the "payment-api" container in the "payment-api" Deployment
kubectl set image deployment/payment-api payment-api=nginx:1.25

# Watch the rolling update happen in real time
kubectl rollout status deployment/payment-api

# Option 2 — edit the YAML file: change image: nginx:1.24 to image: nginx:1.25
# Then re-apply (better for teams — the file stays as the source of truth)
kubectl apply -f payment-deployment.yaml

# Check what's happening while the update is running
kubectl get pods -w
What just happened?

kubectl rollout status blocks and reports live progress: how many Pods have been updated and how many are still pending. When it prints "successfully rolled out", every Pod is on the new version.

The ReplicaSet hash changed — look at the Pod names. Old Pods had 7d9f8c6b4 as the middle segment. New Pods have 9b4c7d2f1. That's the hash of the new Pod template. A new ReplicaSet was created for the new version — the old one is still there, just scaled to zero.

The service never went down — at every point, 2 or more Pods were running. If you'd had a Service pointing to this Deployment, users would have felt absolutely nothing.
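Two quick ways to confirm all of this after the rollout finishes (assuming the same labels as the manifest above):

```shell
# The Deployment's Pod template now points at the new image
kubectl get deployment payment-api \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Two ReplicaSets: the new one at 3 replicas, the old one scaled to 0
kubectl get rs -l app=payment-api
```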

Rolling Back When Things Go Wrong

The scenario: It's 20 minutes after the deployment. Your monitoring system is lighting up. The v1.25 release has a bug — payment confirmations are failing. Your lead messages you: "roll it back now." Here's how.

# Roll back to the previous version — single command, takes effect immediately
kubectl rollout undo deployment/payment-api

# Watch it roll back just like the original update
kubectl rollout status deployment/payment-api

# Confirm the image is back to the old version
kubectl describe deployment payment-api | grep Image
What just happened?

Kubernetes scaled the old ReplicaSet (the one still sitting at zero) back up to 3, and scaled the current ReplicaSet back down to zero. Same rolling update process — just in reverse. The service kept running the whole time.

The image is back to nginx:1.24. Total time from "roll it back" to "rollback complete" — about 20 seconds. That's the production war story behind why Deployments matter.

Viewing and Using Rollout History

The scenario: A week later, you're asked what version was running last Tuesday during an incident. Or you need to roll back not to the previous version but to a specific earlier one. The rollout history has your answers.

# See the rollout history for this Deployment
kubectl rollout history deployment/payment-api

# See the details of a specific revision — what image was it running?
kubectl rollout history deployment/payment-api --revision=2

# Roll back to a specific revision number (not just the previous one)
kubectl rollout undo deployment/payment-api --to-revision=1

# Add a note when you deploy — makes history much more useful
# --record is deprecated but this annotation approach works well
kubectl annotate deployment payment-api kubernetes.io/change-cause="upgraded to nginx 1.25"
What just happened?

Revision 1 — the original deployment (nginx:1.24). Revision 2 — the update to nginx:1.25. Revision 3 — the rollback (which actually brought back the old ReplicaSet, but Kubernetes records it as a new revision).

The CHANGE-CAUSE column is empty unless you annotate your changes. Get into the habit of adding a cause — it turns your rollout history into a readable changelog that your whole team can use during incidents.
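One related knob worth knowing: the Deployment keeps old ReplicaSets around as rollback points, and spec.revisionHistoryLimit (default 10) controls how many. A sketch of where it sits in the manifest (the value 5 is just an example):

```yaml
spec:
  replicas: 3
  revisionHistoryLimit: 5   # keep the 5 most recent old ReplicaSets; older ones are garbage-collected
```

Setting it to 0 wipes the history and removes your ability to roll back, so leave it above zero in production.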

Pausing a Rollout Mid-Way

The scenario: You're doing a canary-style update. You want to update 1 Pod to the new version, watch it for 10 minutes, and only continue the rollout if everything looks healthy. You can pause and resume a Deployment rollout at any point.

# Trigger the update, then immediately pause the rollout mid-way
kubectl set image deployment/payment-api payment-api=nginx:1.25
kubectl rollout pause deployment/payment-api

# Watch your metrics. Check error rates. If healthy, resume.
kubectl rollout resume deployment/payment-api

# If not healthy, undo instead. The controller ignores changes while
# paused, so resume afterwards to let the rollback proceed.
kubectl rollout undo deployment/payment-api
kubectl rollout resume deployment/payment-api
When this is useful in production

Pausing lets you act as a human canary gate. You update to the new version, pause with 1 new Pod and 2 old Pods running, then watch your dashboards. If error rates stay flat for 10 minutes — resume. If they spike — undo. It's a manual safety net that many teams use before fully automating their canary deployments.
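While the rollout is paused, a quick way to see the canary mix is to count Pods per image version. A sketch using the same app=payment-api label and standard shell tools:

```shell
# Print each Pod's image, then count how many Pods run each version
kubectl get pods -l app=payment-api \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}' \
  | sort | uniq -c
```

During a paused canary you'd expect output along the lines of 2 Pods on nginx:1.24 and 1 on nginx:1.25, though the exact split depends on how far the rollout got before the pause landed.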

Your Deployment Command Cheat Sheet

kubectl apply -f deployment.yaml
    Create or update a Deployment from a file
kubectl get deployment
    List all Deployments and their status
kubectl describe deployment <name>
    Full details: strategy, events, rollout status
kubectl set image deployment/<name> <container>=<image>
    Update the container image; triggers a rolling update
kubectl rollout status deployment/<name>
    Watch a rollout in progress; blocks until done
kubectl rollout undo deployment/<name>
    Roll back to the previous version immediately
kubectl rollout history deployment/<name>
    See all previous versions and their revision numbers
kubectl rollout undo deployment/<name> --to-revision=2
    Roll back to a specific revision number
kubectl scale deployment <name> --replicas=5
    Change the number of Pods immediately
kubectl rollout pause deployment/<name>
    Pause a rollout mid-way for canary inspection
kubectl rollout resume deployment/<name>
    Resume a paused rollout

RollingUpdate vs Recreate — Choosing the Right Strategy

Kubernetes gives you two update strategies. RollingUpdate is the default and what you'll use 95% of the time. But it's worth understanding when Recreate makes sense.

DEFAULT
RollingUpdate

Replaces Pods gradually. Some old, some new Pods run at the same time during the transition. Zero downtime.

Use when:
Your app can run two versions simultaneously without conflict. Most web APIs and stateless services.
CAUSES DOWNTIME
Recreate

Kills all old Pods first, then starts all new Pods. There's a gap where nothing is running. Causes downtime.

Use when:
Your app absolutely cannot have two versions running at the same time — e.g. a database schema migration that's incompatible with the old code.
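Switching a Deployment to Recreate is a one-line change in the spec. A sketch (note that the rollingUpdate block must be removed, since it only applies when the type is RollingUpdate):

```yaml
spec:
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pod starts
```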

Cleaning Up

# Delete the Deployment — also deletes all ReplicaSets and Pods it manages
kubectl delete deployment payment-api

# Or delete via the YAML file
kubectl delete -f payment-deployment.yaml

# Verify everything is cleaned up
kubectl get deployments
kubectl get replicasets
kubectl get pods
Cascading deletes

Deleting a Deployment deletes everything under it — all ReplicaSets (including the old ones scaled to zero replicas), and all Pods. It's a full clean cascade. This is why you should always confirm what you're deleting before running the command in production.
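If you ever need the opposite behaviour, kubectl can leave the children behind. A sketch — rarely what you want, but useful to know exists (on kubectl older than 1.20 the flag was --cascade=false):

```shell
# Delete only the Deployment object; its ReplicaSets and Pods are
# orphaned and keep running, no longer managed by anything
kubectl delete deployment payment-api --cascade=orphan
```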

The most important takeaway from this lesson

Rolling updates and one-command rollbacks sound like small conveniences. They're not. They're the difference between a 2-minute incident and a 2-hour outage. The ability to confidently say "if this deployment breaks anything, I can undo it in 20 seconds" changes how your whole team ships software. Engineers deploy more often. More frequent deploys mean smaller changes. Smaller changes mean less risk.

Every time you use kubectl rollout undo in production, you're using something that took Google years to build for themselves with Borg — and it's yours in one command.

👨‍💻 Keep practising — try these yourself
1
Deploy with nginx:1.24. Open a second terminal running kubectl get pods -w. In the first terminal, run kubectl set image deployment/payment-api payment-api=nginx:1.25. Watch the rolling update happen Pod by Pod in the second terminal.
2
Update the image to nginx:this-does-not-exist. Watch the rollout get stuck. Then run kubectl rollout undo deployment/payment-api and watch it recover. This is the most realistic incident drill you can do.
3
After doing a few updates and rollbacks, run kubectl get replicasets. You'll see multiple ReplicaSets — the current one and the old ones at zero replicas. This is the rollback history Kubernetes keeps for you.
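For exercise 2, these commands help you see why the rollout is stuck (Pod names will differ on your cluster):

```shell
# The new Pod sits in ImagePullBackOff / ErrImagePull and never becomes Ready
kubectl get pods -l app=payment-api

# The Events section at the end shows the failed image pull
kubectl describe pods -l app=payment-api | tail -n 20

# rollout status never completes on its own; give it a timeout so it
# exits with an error instead of blocking forever
kubectl rollout status deployment/payment-api --timeout=60s
```

Notice that the old Pods keep serving the whole time: maxUnavailable: 1 means the broken rollout stalls rather than taking the service down.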

Practice Questions

Type from memory.

1. Your latest deployment is causing errors in production. What single command rolls it back to the previous version immediately?



2. In a rolling update strategy, which field controls the maximum number of Pods that can be unavailable at any point during the update?



3. Your application cannot run two versions simultaneously — the new database schema is incompatible with the old code. Which deployment strategy should you use?



Knowledge Check

Pick the best answer.

1. You update the container image in a Deployment from v1 to v2. What does Kubernetes actually do under the hood?


2. You need to see all previous versions of a Deployment and their revision numbers. What command do you run?


3. After a rolling update completes successfully, what happens to the old ReplicaSet?


Up Next · Lesson 11

Services Overview

Your Deployment is running. Now you need traffic to actually reach it. Services give your Pods a stable address and load balance across all of them — this is how the outside world connects to your application.