Docker Course
Problems with Traditional Deployment
Every team that eventually adopts Docker has a breaking point — a moment where the old way caused enough pain that someone finally said "there has to be a better way." This lesson is about that pain.
Understanding why Docker exists makes you a far better Docker user. If you know the exact problems it solves, you'll know exactly when and how to reach for it. So before we go any further with commands and containers — let's spend this lesson living in the problem.
The Way Software Used to Be Deployed
Before containerization, deploying software meant taking your code and manually setting up an environment for it to run in. The right operating system. The right runtime version. The right libraries. The right environment variables. All configured by hand, usually by a human following a document that was already out of date.
This sounds manageable for one app on one server. It becomes a disaster at scale — and even at small scale, it produces the same class of problems over and over again.
Problem 1 — It Works on My Machine
The Moving House Analogy
Imagine you build a piece of furniture perfectly in your living room. It fits exactly, looks great, works well. Then you move to a new house and try to reassemble it — but the ceiling is lower, the floor is a different material, and the room is a different shape. The furniture hasn't changed. The environment has. That's exactly what happens when code moves from a developer's laptop to a production server.
A developer writes code on their MacBook running Node 18.2 with a specific version of a database driver. The production server runs Ubuntu 20.04 with Node 16.4 and a slightly different driver version. The code works perfectly in development. It crashes in production. Nobody changed the code. The environment changed.
This is the classic "works on my machine" problem — and it has ended careers, delayed launches, and caused production outages that cost real money. It's not a skill problem. It's a structural problem with how software was being deployed.
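Docker's answer, which later lessons cover in depth, is to turn the environment itself into code. A minimal sketch of what that looks like — the base image tag, file names, and start command here are illustrative, not from a real project:

```dockerfile
# Pin the exact runtime the developer used: Node 18.2 on a known Linux base.
# Anyone who builds this image gets the same Node, the same OS, the same libraries.
FROM node:18.2-alpine

WORKDIR /app

# Install the exact dependency versions recorded in the lockfile.
COPY package.json package-lock.json ./
RUN npm ci

# Copy the application code and define how it starts.
COPY . .
CMD ["node", "server.js"]
```

Because the environment travels with the application, the laptop and the production server stop being different rooms for the same furniture.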
Problem 2 — Dependency Hell
Your new e-commerce project needs Python 3.11. Your older analytics tool running on the same server needs Python 3.7. These two versions can't peacefully coexist on a bare server without complex workarounds. You install one, it conflicts with the other.
Now multiply this across five applications on the same server, each with their own dependency requirements. Some need the same library at different versions. Some need conflicting system packages. This is dependency hell — and it gets worse every time you add a new application.
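With containers, the conflict simply never happens, because each application carries its own interpreter. A quick sketch, assuming Docker is installed and using the official Python images from Docker Hub:

```shell
# Each container has its own private Python — the two never touch.
docker run --rm python:3.11-slim python --version   # the e-commerce app's runtime
docker run --rm python:3.7-slim  python --version   # the analytics tool's runtime
```

Both commands succeed on the same host, side by side, with no virtualenvs, no version managers, and no workarounds.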
The Snowflake Server Problem
After months of patches, fixes, and workarounds, your server becomes a unique snowflake — configured in ways nobody fully remembers and nobody documented. It works, but nobody knows exactly why. When it breaks, nobody knows how to rebuild it. When you need a second server for scaling, you can't reproduce it. Traditional deployment creates snowflake servers constantly.
Problem 3 — Inconsistent Environments Across the Pipeline
In a typical team, code travels through multiple environments before reaching users: a developer's local machine, a shared development server, a testing environment, staging, and finally production. Each of these is set up and maintained separately — by different people, at different times, with slightly different configurations.
The Traditional Deployment Pipeline
Developer laptop (Node 18, macOS) → Testing (Node 16, Ubuntu) → Staging (Node 16, CentOS) → Production (Node 14, Ubuntu)
Every environment is different. Every hop is a potential failure point. Bugs that don't exist in dev appear in staging. Staging passes but production breaks.
These failures aren't code bugs. They're environment bugs. And they're nearly impossible to diagnose without being able to reproduce the exact environment where the failure happened.
Problem 4 — Slow, Manual, Error-Prone Setup
Onboarding a new developer in a traditional setup can take anywhere from half a day to several days. There's an onboarding document — usually out of date — that lists 30 steps to install dependencies, configure databases, set environment variables, and run setup scripts. One wrong step and nothing works. The new developer spends their first day not writing code but debugging their local environment.
The same problem hits when you need to scale. Spinning up a new production server means following another document, running setup scripts manually, and hoping nothing has changed since the last time someone did it. Infrastructure that should take minutes takes hours.
Problem 5 — Virtual Machines Are Resource-Heavy
Teams that recognised these environment problems often turned to virtual machines. If every app runs in its own VM, the environments are isolated. Problem solved — right?
Partially. VMs do solve the isolation problem. But each VM runs a full operating system — its own kernel, its own OS processes, its own memory overhead. A VM can easily consume 1–2 GB of RAM just for the OS before your application even starts. Startup times are measured in minutes. Running five apps means running five complete operating systems.
The Apartment Building Analogy
A virtual machine is like building a separate house for every person who needs a room. Each house has its own foundation, plumbing, electrical system, and roof — even though all they needed was a bedroom. A container is like an apartment building — each tenant has their own private space, but they share the foundation, plumbing, and electricity. Same isolation. A fraction of the cost.
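You can see the sharing directly. On a Linux host, a container reports the host's own kernel version, because there is only one kernel and every container uses it (on Docker Desktop for Mac or Windows, it reports the kernel of Docker's single lightweight VM instead). Assuming Docker is installed:

```shell
# The host kernel release...
uname -r

# ...and the kernel release as seen from inside a container.
# On Linux these match: the container shares the host kernel
# rather than booting an operating system of its own.
docker run --rm alpine uname -r
```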
Traditional Deployment vs Docker — Side by Side
BEFORE — Traditional Deployment
- New dev setup takes half a day minimum
- "Works on my machine" — broken on the server
- Dependencies conflict across applications
- Every environment is slightly different
- Snowflake servers nobody can reproduce
- VMs waste 1–2 GB RAM just for the OS
- Scaling a new server takes hours
AFTER — Docker
- New dev: one command, running in 2 minutes
- Same container image runs identically everywhere
- Each container has its own isolated dependencies
- Dev, staging, and prod run the exact same image
- Containers are reproducible by definition
- Containers share the host OS — lightweight and fast
- New container starts in seconds
Seeing the Difference — Two Commands vs Forty-Seven Steps
The scenario: A new backend engineer just joined your SaaS startup. Without Docker, their first day is a 47-step document, three broken installs, and a Slack message asking "which version of Node do I need?" With Docker, it looks like this.
git clone https://github.com/acmecorp/payment-api.git # pull the codebase down
cd payment-api
docker run -d --name payment-api-container acmecorp/payment-api # pulls the prebuilt image if needed, starts it. App is live.
# No installing Node. No configuring a database. No reading a 47-step doc.
# Everything the app needs is already locked inside the image.
Cloning into 'payment-api'...
remote: Enumerating objects: 143, done.
remote: Counting objects: 100% (143/143), done.
Receiving objects: 100% (143/143), 84.22 KiB | 2.1 MiB/s, done.
7f3a9c12e44d1b8e5f920c3d6a1b7e4f8c3d9a2b1e6f5c4d3a2b1c9e8f7d6
What just happened?
Two commands. The app is running. The new engineer didn't install Node, didn't configure a database, didn't touch a config file. All of that is already baked into the Docker image — done once by the team that built it, reproducible forever by anyone who pulls it. That long container ID at the end confirms the container is live. This is what Docker does to onboarding.
Teacher's Note
The problems in this lesson aren't ancient history — teams without Docker hit every single one of them today. If any of them sounded familiar, that's exactly why you're here.
Practice Questions
1. When two applications on the same server need conflicting versions of the same library, this situation is commonly called what?
2. A server that has been patched and configured so many times that nobody can reproduce it is called a what?
3. The main reason VMs are resource-heavy is that each one runs a full what?
Quiz
1. A developer's app works locally but crashes on the production server. The most likely root cause is:
2. What is the main drawback of using virtual machines to solve the environment isolation problem?
3. How does Docker solve the "works on my machine" problem?
Up Next · Lesson 3
Virtual Machines vs Containers
You know the problems VMs have — now let's see exactly how containers are built differently, and why that changes everything.