Docker Course
Building Docker Images
You ran docker build in Lesson 12 and got an image. This lesson goes deeper — build arguments, tagging strategies, targeting specific Dockerfiles, and what to do when a build breaks halfway through.
The docker build command has more options than most developers ever discover. Knowing the right flags at the right time is the difference between a brittle build process and one that scales cleanly across teams, environments, and CI/CD pipelines.
The Build Command in Full
The full syntax of docker build has several important options beyond the basics. Here are the ones that appear constantly in real projects.
docker build — the flags that matter
-t
Tag the image with a name and optional version. Without it the image is identified only by its auto-generated ID and shows up as <none>:<none> in docker images — hard to reference later. Always tag your builds.
-f
Specify a Dockerfile path. Useful when you have multiple Dockerfiles — Dockerfile.dev, Dockerfile.prod — in the same project.
--build-arg
Pass a build-time variable into the Dockerfile. Used for things like Node version, app version strings, or build environment — not secrets.
--no-cache
Force a completely fresh build — ignore all cached layers. Essential when debugging a build that uses stale cached data.
--platform
Build for a specific CPU architecture. Critical when building on an Apple Silicon Mac (arm64) for deployment on Linux servers (amd64).
--target
Stop the build at a specific stage in a multi-stage Dockerfile. Covered in depth when we reach multi-stage builds in Section IV.
Tagging Strategies That Actually Work
A tag is how you identify and differentiate versions of the same image. The tag you choose today determines whether you can roll back reliably in six months. Teams that use latest for everything eventually have a very bad day.
The Wardrobe Label Analogy
Imagine labelling clothes in a shared wardrobe with only "shirt" — you'll never find anything. Tagging Docker images with latest is the same mistake. A good tag is like a label that says "blue oxford shirt, size M, dry-clean only" — specific enough to find and trust exactly what you need. In production, your tags should be specific enough that you can identify and redeploy any previous version at any point in time.
Tag formats used in production
v1.2.3
Semantic versioning — the gold standard. Each tag maps to a specific release. Makes rollback trivial: docker run payment-api:v1.2.2.
git-a3f2c8d
Short git commit SHA — ties the image directly to the exact commit that produced it. Used extensively in CI/CD pipelines for full traceability.
staging
Environment tag — indicates the intended deployment target. Often combined with a version: payment-api:v1.2.3-staging.
latest
Acceptable only for local development and quick testing. Never rely on it in production or CI/CD — it moves and gives you no rollback path.
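A useful property to remember: tags are pointers, not copies, so one image can carry several of the formats above at once. A sketch, using the example names from this lesson:

```shell
# Build once with a semver tag...
docker build -t payment-api:v1.2.3 .

# ...then attach additional tags to the same image.
docker tag payment-api:v1.2.3 payment-api:git-a3f2c8d
docker tag payment-api:v1.2.3 payment-api:v1.2.3-staging

# All three tags point at the same image ID; no extra disk space is used.
docker images payment-api
```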
Build Arguments — Parameterising Your Dockerfile
Build arguments let you pass values into a Dockerfile at build time — making the same Dockerfile reusable across different environments or versions without editing it directly.
A build argument is declared in the Dockerfile with ARG and passed in from the command line with --build-arg. Unlike environment variables set with ENV, build arguments only exist during the build — they're not available inside a running container.
The scenario: You're a platform engineer maintaining a base Node.js image used by twelve teams. Rather than maintaining separate Dockerfiles for Node 18 and Node 20, you want one parameterised Dockerfile that both teams can use with a single flag change.
ARG NODE_VERSION=18
# ARG declares a build-time variable with a default value of 18
# If --build-arg is not passed at build time, the default is used
FROM node:${NODE_VERSION}-alpine
# Use the build argument in the FROM instruction
# ${NODE_VERSION} is replaced with the value passed via --build-arg
ARG APP_VERSION=1.0.0
# A second build argument for the application version
# Note: ARG after FROM only applies to that build stage
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
LABEL version="${APP_VERSION}"
# LABEL bakes metadata into the image — readable via docker image inspect
# Useful for tracking which version of code produced this image
EXPOSE 3000
CMD ["node", "server.js"]
# Build with Node 18 (the default)
docker build -t payment-api:v1.2.0 .
# Build with Node 20 explicitly
docker build \
--build-arg NODE_VERSION=20 \
--build-arg APP_VERSION=1.2.0 \
-t payment-api:v1.2.0 .
# Force a completely fresh build — bypass all cached layers
docker build --no-cache -t payment-api:v1.2.0 .
[+] Building 3.1s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> [internal] load metadata for docker.io/library/node:20-alpine 0.8s
=> [1/5] FROM node:20-alpine 0.0s
=> CACHED [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY package*.json ./ 0.0s
=> CACHED [4/5] RUN npm install --omit=dev 0.0s
=> [5/5] COPY . . 0.3s
=> exporting to image 2.0s
=> => naming to docker.io/library/payment-api:v1.2.0 0.0s
What just happened?
The build completed in 3.1 seconds — dramatically faster than the 22-second first build in Lesson 12. See those CACHED lines for steps 2, 3, and 4? The Docker daemon recognised that package.json hadn't changed and served those layers straight from cache. Only step 5 — COPY . . — ran fresh, because the source code changed. This is the layer cache working exactly as designed. The --build-arg NODE_VERSION=20 replaced the default 18 with 20, so the image now runs Node 20 — confirmed by the metadata line in the build output, which loads node:20-alpine rather than node:18-alpine.
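The APP_VERSION baked in via LABEL can be read back out of the finished image with docker image inspect and a Go-template format string. A sketch, using the image name from this lesson:

```shell
# Extract the "version" label that was set via LABEL at build time.
docker image inspect \
  --format '{{ index .Config.Labels "version" }}' \
  payment-api:v1.2.0
# Prints the APP_VERSION passed at build time, e.g. 1.2.0
```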
Building for Multiple Platforms
Apple Silicon Macs build images for arm64 by default. Most production Linux servers run amd64. Pushing an arm64 image to a registry and deploying it on an amd64 server causes a confusing failure — the container either refuses to start with an "exec format error", or runs through slow emulation and performs terribly.
# Build specifically for Linux amd64 — the standard production server architecture
docker build \
--platform linux/amd64 \
-t payment-api:v1.2.0 .
# Build for both amd64 and arm64 simultaneously (requires Docker Buildx)
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t acmecorp/payment-api:v1.2.0 \
--push .
# --push sends the multi-platform image directly to the registry
[+] Building 28.4s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> [internal] load metadata for docker.io/library/node:18-alpine 1.1s
=> [linux/amd64 1/5] FROM node:18-alpine 8.2s
=> [linux/amd64 2/5] WORKDIR /app 0.0s
=> [linux/amd64 3/5] COPY package*.json ./ 0.1s
=> [linux/amd64 4/5] RUN npm install --omit=dev 16.3s
=> [linux/amd64 5/5] COPY . . 0.3s
=> exporting to image 2.4s
=> => naming to docker.io/library/payment-api:v1.2.0 0.0s
What just happened?
Each build step is now prefixed with [linux/amd64] — confirming the image was compiled for the amd64 architecture, regardless of what machine ran the build. The build took longer because Docker had to emulate the target platform when building from an Apple Silicon Mac. The result is an image that runs natively on standard Linux production servers with no performance penalty. For teams shipping to AWS, GCP, or any standard Linux infrastructure, --platform linux/amd64 should be part of every CI/CD build step.
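Before pushing, it's worth verifying which architecture actually got baked into the image. docker image inspect can print it directly; a sketch, using the image name from this lesson:

```shell
# Print the OS and CPU architecture recorded in the image metadata.
docker image inspect \
  --format '{{ .Os }}/{{ .Architecture }}' \
  payment-api:v1.2.0
# A correctly built production image prints: linux/amd64
```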
Debugging a Failed Build
Builds fail. The key is knowing how to read the error and fix it without starting from scratch.
Reading a Build Failure
When a build fails, Docker tells you exactly which step failed and shows the error output from that step. The failed step's number tells you which Dockerfile instruction caused it. Everything above the failure was successful and is cached — when you fix the error and rebuild, Docker resumes from the last successful cached layer, not from scratch.
# A build that fails — missing dependency in package.json
docker build -t payment-api:v1.2.0 .
# After fixing the error, rebuild — Docker resumes from cached layers
docker build -t payment-api:v1.2.0 .
# If the build uses stale cached data that you need to refresh
docker build --no-cache -t payment-api:v1.2.0 .
# Use --no-cache sparingly — it makes every build slow
[+] Building 4.2s (6/9) FAILED
=> [internal] load build definition from Dockerfile 0.0s
=> [1/6] FROM node:18-alpine 0.0s
=> CACHED [2/6] WORKDIR /app 0.0s
=> CACHED [3/6] COPY package*.json ./ 0.0s
=> CACHED [4/6] RUN npm install --omit=dev 0.0s
=> [5/6] COPY . . 0.3s
=> ERROR [6/6] RUN node -e "require('./server.js')" 3.9s
------
> [6/6] RUN node -e "require('./server.js')":
> Error: Cannot find module 'express'
------
ERROR: failed to solve: process "/bin/sh -c node -e \"require('./server.js')\"" did not complete successfully: exit code: 1
What just happened?
The build failed at step 6 — a verification step that tried to require the server. The error message is clear: Cannot find module 'express' — express is missing from package.json. Notice that steps 2–4 all show CACHED — Docker didn't re-run them. Once you add express to package.json and rebuild, Docker resumes from step 3 (where package.json changed) and runs npm install fresh. The cached layers above step 3 are still reused. Failed builds are not wasted builds — the cache saves everything that worked.
Teacher's Note
When a build fails and you can't tell why even after reading the error, add --progress=plain to docker build — it shows the full unformatted output of every step, including the complete stdout and stderr from failed commands.
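A debugging rebuild combining these flags might look like this sketch:

```shell
# Show the raw, unformatted output of every build step, including the
# full stdout/stderr of the command that failed. Add --no-cache when
# you also need cached steps to re-run and print their output.
docker build --progress=plain --no-cache -t payment-api:v1.2.0 .
```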
Practice Questions
1. To force Docker to rebuild every layer from scratch and ignore all cached layers, which flag do you add to docker build?
2. The Dockerfile instruction used to declare a build-time variable that can be passed in via --build-arg is called what?
3. To build an image targeting standard Linux production servers from an Apple Silicon Mac, which flag do you use?
Quiz
1. A developer uses ARG DB_PASSWORD in a Dockerfile and passes it via --build-arg. What is the key limitation of this approach?
2. A build fails at step 6 of 8. After fixing the error and rebuilding, what happens to steps 1–5?
3. For a CI/CD pipeline that needs full traceability between deployed images and source code, the most useful image tag format is:
Up Next · Lesson 14
Docker Image Layers
You've seen the cache at work — now let's go deep on how layer caching actually works, and why the order of your Dockerfile instructions can make the difference between a 2-second build and a 20-second one.