Kubernetes Course
Why Kubernetes is Needed
Before we touch any commands, let's talk about the real pain that made Kubernetes necessary. Because once you feel that pain, everything Kubernetes does will make complete sense.
It All Started With One Big App on One Server
Picture a small online shop from the early 2000s. The whole website — user logins, product pages, shopping cart, payments, confirmation emails — all of it was one giant program sitting on one physical computer in a server room somewhere.
We call this a monolith. One codebase. One server. Everything glued together. It worked — until it didn't.
The First Fix: Virtual Machines
Smart people came up with a clever idea. What if you could split one physical server into several smaller pretend-servers? Each one acts like its own independent computer — its own memory, its own operating system, its own slice of the hardware. These are called Virtual Machines.
Companies like VMware made this popular in the 2000s. Amazon Web Services turbocharged it — suddenly you could rent one of these virtual machines in minutes instead of buying hardware and waiting weeks for delivery.
Now if the payments app crashed, it crashed inside its own little box. The auth app kept running. Teams could own their own machine. This felt like magic in 2005.
Every VM carried a full operating system — gigabytes of it. A VM running a small payments API might have a 1 GB operating system just to support a 20 MB app. Starting a VM took a minute or more. One physical server could run maybe 10 to 20 of them before running out of steam.
And you had to patch, update, and secure every single VM separately. It was like owning 20 computers instead of one.
The Lightbulb Moment: Containers
In 2013, a developer named Solomon Hykes did a live demo at a conference. He showed something called Docker. The room went quiet — then the applause started.
The idea was beautifully simple. Instead of pretending to be a whole separate computer, what if you just wrapped your app and its exact dependencies in a tidy standardised box? The box would share the host computer's core engine (the OS kernel) but keep everything else neatly isolated.
That box is a container. And compared to a VM, it is featherlight.
The famous "it works on my machine" problem — the one responsible for so many midnight panic calls — was essentially solved. If your app ran in a container on your laptop, it ran identically in production. Same container, same result, every time.
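That "tidy standardised box" is defined in a short recipe file. Here is a minimal, hypothetical Dockerfile for a small Node.js app (the base image, file names, and port are illustrative, not from this course's examples):

```dockerfile
# Hypothetical recipe for a small web app.
# The container shares the host's kernel, so the image only
# needs the app and its exact dependencies, nothing more.
FROM node:20-alpine          # small base image
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # install the exact, locked dependency versions
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]    # identical command on a laptop or in production
```

Because the dependencies are baked into the image, the container that passed tests on your laptop is byte-for-byte the container that runs in production.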
Teams went all in. Within a year or two, a company might be running hundreds of containers across dozens of servers. And that's exactly when the next problem hit.
The New Headache Nobody Saw Coming
Containers solved the packaging problem brilliantly. But they created a management nightmare. Imagine you're the engineer on call, with 300 containers running across 20 servers. Ask yourself these five questions honestly:
1. Which server should each new container run on?
2. If a container crashes at 3 a.m., who notices, and who restarts it?
3. How do containers find and talk to each other when they move between servers?
4. How do you deploy a new version without taking the whole site down?
5. When traffic suddenly spikes, how do you add more containers fast enough?
Google Had Already Solved All of This — Quietly
Here's the thing. While the rest of the world was battling these five problems, Google had already been running billions of containers for nearly a decade using an internal system called Borg. Every Google Search, every Gmail load, every YouTube video — all running on Borg.
In June 2014, a team of Google engineers took everything they had learned from Borg, rebuilt it from scratch as an open-source project, and gave it to the world for free. They called it Kubernetes.
It answered all five questions — in one system.
You declare what you want. Kubernetes figures out where to run it, keeps it running, restarts it if it crashes, helps containers find each other, deploys updates without downtime, and scales automatically when traffic hits.
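"Declaring what you want" is not a metaphor; it's literally a file. A sketch of a Kubernetes Deployment manifest (the names and image are illustrative): you state the desired end state, three running copies of the app, and Kubernetes works continuously to make reality match it.

```yaml
# deployment.yaml (hypothetical example; applied with: kubectl apply -f deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-web                        # illustrative name
spec:
  replicas: 3                           # "I want three copies running" - the desired state
  selector:
    matchLabels:
      app: shop-web
  template:
    metadata:
      labels:
        app: shop-web
    spec:
      containers:
        - name: shop-web
          image: example/shop-web:1.0   # illustrative image
          ports:
            - containerPort: 3000
```

Kill one of the three containers by hand and a replacement appears on its own; that gap between desired state and actual state is exactly what Kubernetes exists to close.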
| The Problem | Kubernetes' Answer | Covered In |
|---|---|---|
| Which server runs each container? | Scheduler — picks the best node automatically | Lesson 46 |
| Container crashes unnoticed | Self-healing — detects it and restarts automatically | Lesson 9 |
| Containers can't find each other | Services + DNS — stable names that always point to live containers | Lessons 11, 35 |
| Deployments cause downtime | Rolling Updates — swaps containers one at a time, no downtime | Lesson 25 |
| Can't scale fast enough | Autoscaling — watches traffic and adds containers automatically | Lesson 49 |
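The autoscaling row, for example, is driven by a HorizontalPodAutoscaler object. A minimal sketch, assuming a Deployment named shop-web already exists (the name and thresholds here are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-web
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: shop-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU passes 70%
```

You never script "add a server at 9 a.m."; you state the acceptable load, and Kubernetes adds and removes containers to stay within it.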
The Full Journey — In One Picture
The Monolith: Everything is coupled together. One crash takes the whole business down. You scale by buying a bigger server.
Virtual Machines: Apps finally isolated from each other, which is progress. But every app carries a full operating system. Massive overhead, slow startups, expensive at scale.
Containers: Lightweight, fast, portable. "Works on my machine" solved overnight. But managing hundreds of containers by hand across many servers becomes the new nightmare.
Containers at Scale: Crashes go undetected. Deployments need careful manual steps. Networking between containers breaks. On-call engineers burn out.
Kubernetes: Scheduling, self-healing, service discovery, zero-downtime deployments, automatic scaling, all built in. You write down what you want and Kubernetes makes it reality, keeps it real, and fixes things automatically when anything goes wrong.
Where to Practice — Get Your Lab Ready Now
From Lesson 8 onwards, every lesson has real commands and real configuration files. Get your practice environment set up now so you're not scrambling when the time comes. Here are the four best options — one for every situation:
Play with Kubernetes: Open it in your browser, sign in with GitHub or Docker Hub, and you have a real Kubernetes cluster running immediately. No installation. No setup. Completely free. Sessions last 4 hours, which is more than enough for any lesson. This is the one we recommend while you work through Lessons 8 to 30.
minikube: Runs a small Kubernetes cluster right on your machine using Docker. Best if you want to practice offline or experiment without a time limit. You need Docker Desktop installed first; then, on macOS or Linux with Homebrew, it's just two commands:
brew install minikube && minikube start
On Windows, grab the installer from the official minikube site instead.
Browser-based like Play with K8s but with guided step-by-step Kubernetes scenarios. Great when you want to test yourself on a specific topic. Search for what you just learned and do the matching scenario. Also useful later for CKA exam preparation.
You do not need this until Lesson 46. When you get there, Google Kubernetes Engine has a free Autopilot tier — you only pay for the Pods you actually run. A small test cluster costs almost nothing. We walk you through it in Lesson 59.
Looking back, the shift from VMs to containers feels obvious. At the time it really wasn't. Real engineering teams pushed back hard. "My VM works fine, why bother with this Docker thing?" was a totally normal thing to hear in 2015. The engineers who understood why each new tool existed — not just how to use it — were the ones who made the call to adopt early and built their careers on it. That's exactly the mindset this course is trying to build in you.
Practice Questions
Have a go from memory — no scrolling back up.
1. What was Google's internal container management system called — the one that directly inspired Kubernetes?
2. Containers are so much lighter than VMs because they share the host computer's OS ________ instead of each carrying a full operating system.
3. Which free browser-based platform gives you a real Kubernetes cluster instantly with nothing to install?
Knowledge Check
Pick the best answer.
1. What is the core technical difference between a container and a virtual machine?
2. Containers were brilliant — but created new problems. Which best describes what Kubernetes was built to fix?
3. You want to practice offline on your laptop with no time limit. Which tool is the right pick?
Up Next · Lesson 3
Kubernetes vs Docker
The most common confusion in the whole ecosystem — finally cleared up. They're not the same thing, they're not competitors, and understanding the difference will save you a lot of headaches.