Kubernetes Lesson 2 – Why Kubernetes is Needed | Dataplexa
Kubernetes Fundamentals · Lesson 2

Why Kubernetes is Needed

Before we touch any commands, let's talk about the real pain that made Kubernetes necessary. Because once you feel that pain, everything Kubernetes does will make complete sense.

It All Started With One Big App on One Server

Picture a small online shop from the early 2000s. The whole website — user logins, product pages, shopping cart, payments, confirmation emails — all of it was one giant program sitting on one physical computer in a server room somewhere.

We call this a monolith. One codebase. One server. Everything glued together. It worked — until it didn't.

The Classic Monolith
[Diagram: 🖥 one physical server running ONE BIG APPLICATION — user logins, payments, product pages, email sender, admin panel, reports. All of this = one single running process.]
Why this was painful:
- A bug in the email feature could crash the whole app — including the payment page. Users couldn't check out because the email sender broke.
- Deploying a tiny change meant pushing out the entire application. The whole team held their breath every single time.
- When traffic grew, the only option was buying a bigger, more expensive server. There's a ceiling to how big one machine can get.

The First Fix: Virtual Machines

Smart people came up with a clever idea. What if you could split one physical server into several smaller pretend-servers? Each one acts like its own independent computer — its own memory, its own operating system, its own slice of the hardware. These are called Virtual Machines.

Companies like VMware made this popular in the 2000s. Amazon Web Services turbocharged it — suddenly you could rent one of these virtual machines in minutes instead of buying hardware and waiting weeks for delivery.

One Physical Server, Multiple Virtual Machines
[Diagram: 🖥 one physical server. Hypervisor software splits it into virtual machines — VM 1: full Linux OS running the Auth App; VM 2: full Linux OS running the Payments App; VM 3: full Windows OS running the Product App.]

Now if the payments app crashed, it crashed inside its own little box. The auth app kept running. Each team could own its own machine. This felt like magic in 2005.

But there was a hidden cost that bit everyone eventually:

Every VM carried a full operating system — gigabytes of it. A VM running a small payments API might have a 1 GB operating system just to support a 20 MB app. Starting a VM took a minute or more. One physical server could run maybe 10 to 20 of them before running out of steam.

And you had to patch, update, and secure every single VM separately. It was like owning 20 computers instead of one.

The Lightbulb Moment: Containers

In 2013, a developer named Solomon Hykes did a live demo at a conference. He showed something called Docker. The room went quiet — then the applause started.

The idea was beautifully simple. Instead of pretending to be a whole separate computer, what if you just wrapped your app and its exact dependencies in a tidy standardised box? The box would share the host computer's core engine (the OS kernel) but keep everything else neatly isolated.

That box is a container. And compared to a VM, it is featherlight.

Virtual Machines
[Stack: Hardware → Host OS → Hypervisor → one Full OS (1 GB+) per VM, with Your App on top]
~1 GB overhead · 60 second startup

Containers
[Stack: Hardware → Shared Host OS Kernel → Container Runtime (Docker) → each container holds just the App + libs]
~50 MB overhead · under 1 second startup

The famous "it works on my machine" problem — the one responsible for so many midnight panic calls — was essentially solved. If your app ran in a container on your laptop, it ran identically in production. Same container, same result, every time.

Teams went all in. Within a year or two, a company might be running hundreds of containers across dozens of servers. And that's exactly when the next problem hit.

The New Headache Nobody Saw Coming

Containers solved the packaging problem brilliantly. But they created a management nightmare. Imagine you're the engineer on call, with 300 containers running across 20 servers. Ask yourself these five questions honestly:

1. Which server should a new container go on?
You have 20 servers each with different amounts of spare CPU and memory. You cannot manually check them all every time you need to start something new.

2. What happens when a container crashes at 3 AM?
Is someone watching every container? Do you have a script that notices when one dies? Does that script always work? Usually users found out before the engineers did.

3. How does Container A find Container B?
Your auth service needs to talk to your database. But containers get new IP addresses every time they restart. Hardcoding those IPs into your config broke constantly. There was no stable address to point to.

4. How do you release a new version without downtime?
You have 10 containers running your payment app. You want to upgrade to a new version. Stop them all at once? That's downtime. Replace one at a time? That's a manual job that's easy to get wrong under pressure.

5. How do you scale when traffic suddenly explodes?
Your payment service is drowning. You need five more containers right now. Someone has to find spare server capacity, spin them up, and update the load balancer — manually — while the site is on fire.

Google Had Already Solved All of This — Quietly

Here's the thing. While the rest of the world was battling these five problems, Google had already been running billions of containers for nearly a decade using an internal system called Borg. Every Google Search, every Gmail load, every YouTube video — all running on Borg.

In June 2014, a team of Google engineers took everything they had learned from Borg, rebuilt it from scratch as an open-source project, and gave it to the world for free. They called it Kubernetes.

It answered all five questions — in one system.

You declare what you want. Kubernetes figures out where to run it, keeps it running, restarts it if it crashes, helps containers find each other, deploys updates without downtime, and scales automatically when traffic hits.
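To make "declaring what you want" concrete, here is a minimal sketch of the kind of configuration file Kubernetes reads — a Deployment manifest. Don't worry about the details yet; we cover these files properly in later lessons. The app name and image below are made up for illustration, not part of this course's examples.

```yaml
# A declarative request: "keep 3 copies of this container running."
# Kubernetes decides which servers run them, restarts any that crash,
# and swaps them out one at a time when you change the image version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments            # hypothetical app name
spec:
  replicas: 3               # the "what" — Kubernetes handles the "how"
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example.com/payments:1.4.2   # hypothetical container image
          ports:
            - containerPort: 8080
```

Notice what's missing: no server names, no restart scripts, no upgrade procedure. You state the desired end result, and Kubernetes works out the rest.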

The Problem | Kubernetes' Answer | Lesson
Which server runs each container? | Scheduler — picks the best node automatically | Lesson 46
Container crashes unnoticed | Self-healing — detects it and restarts automatically | Lesson 9
Containers can't find each other | Services + DNS — stable names that always point to live containers | Lessons 11, 35
Deployments cause downtime | Rolling Updates — swaps containers one at a time, no downtime | Lesson 25
Can't scale fast enough | Autoscaling — watches traffic and adds containers automatically | Lesson 49

The Full Journey — In One Picture

1990s – 2005 One Big App on One Server

Everything is coupled together. One crash takes the whole business down. You scale by buying a bigger server.

2000s – 2013 Virtual Machines

Apps isolated from each other — progress. But every app carries a full operating system. Massive overhead, slow startups, expensive at scale.

2013 – 2014 Docker and Containers

Lightweight, fast, portable. "Works on my machine" solved overnight. But managing hundreds of containers by hand across many servers becomes the new nightmare.

2014 – 2015 Container Chaos

Crashes go undetected. Deployments need careful manual steps. Networking between containers breaks. On-call engineers burning out.

2014 → Today Kubernetes

Scheduling, self-healing, service discovery, zero-downtime deployments, automatic scaling — all built in. You write down what you want and Kubernetes makes it real, keeps it that way, and fixes things automatically when anything goes wrong.

Where to Practice — Get Your Lab Ready Now

From Lesson 8 onwards, every lesson has real commands and real configuration files. Get your practice environment set up now so you're not scrambling when the time comes. Here are the four best options — one for every situation:

Best Starting Point · Free
🌐
Play with Kubernetes
labs.play-with-k8s.com

Open it in your browser, sign in with GitHub or Docker Hub, and you have a real Kubernetes cluster running immediately. No installation. No setup. Completely free. Sessions last 4 hours, which is more than enough for any lesson. This is the one we recommend while you work through Lessons 8 to 30.

Nothing to install · Real multi-node cluster · Works in any browser
💻
Minikube — On Your Own Laptop
minikube.sigs.k8s.io · Mac, Windows, Linux

Runs a small Kubernetes cluster right on your machine using Docker. Best if you want to practice offline or experiment without a time limit. You need Docker Desktop installed first; on macOS with Homebrew, for example, it's just two commands (Windows and Linux installers are on the minikube site):

brew install minikube && minikube start
No time limits · Works offline · Free forever
⚔️
Killercoda — Guided Challenges
killercoda.com · Free tier available

Browser-based like Play with K8s but with guided step-by-step Kubernetes scenarios. Great when you want to test yourself on a specific topic. Search for what you just learned and do the matching scenario. Also useful later for CKA exam preparation.

Guided scenarios · Nothing to install · CKA exam prep
☁️
Cloud Free Tiers — For the Advanced Section
Google GKE · AWS EKS · Azure AKS

You do not need this until Lesson 46. When you get there, Google Kubernetes Engine has a free Autopilot tier — you only pay for the Pods you actually run. A small test cluster costs almost nothing. We walk you through it in Lesson 59.

Real cloud cluster · Start at Lesson 46
Which platform for which lessons — at a glance
Lessons 1–7 | Reading only — no setup needed
Lessons 8–30 | Play with K8s or Minikube
Lessons 31–45 | Minikube or Killercoda
Lessons 46–60 | Cloud free tier recommended
A quick thought before we move on

Looking back, the shift from VMs to containers feels obvious. At the time it really wasn't. Real engineering teams pushed back hard. "My VM works fine, why bother with this Docker thing?" was a totally normal thing to hear in 2015. The engineers who understood why each new tool existed — not just how to use it — were the ones who made the call to adopt early and built their careers on it. That's exactly the mindset this course is trying to build in you.

Practice Questions

Have a go from memory — no scrolling back up.

1. What was Google's internal container management system called — the one that directly inspired Kubernetes?



2. Containers are so much lighter than VMs because they share the host computer's OS ________ instead of each carrying a full operating system.



3. Which free browser-based platform gives you a real Kubernetes cluster instantly with nothing to install?



Knowledge Check

Pick the best answer.

1. What is the core technical difference between a container and a virtual machine?


2. Containers were brilliant — but created new problems. Which best describes what Kubernetes was built to fix?


3. You want to practice offline on your laptop with no time limit. Which tool is the right pick?


Up Next · Lesson 3

Kubernetes vs Docker

The most common confusion in the whole ecosystem — finally cleared up. They're not the same thing, they're not competitors, and understanding the difference will save you a lot of headaches.