Docker Course
Mini Project — Containerized Payment API
Every lesson in this course was a single concept, applied in isolation. This project is all of them applied together. You'll build a production-ready containerized payment API from scratch — a Node.js service backed by Postgres and Redis, with a multi-stage Dockerfile, a full Docker Compose setup for development and production, a complete CI/CD pipeline, and every security and performance pattern from the course applied correctly. By the end, you'll have a deployable project that demonstrates the full depth of what you've learned.
This is not a tutorial where you follow steps and get a result. Each section presents the requirement — what the component must do — and the implementation. Read the implementation, understand why each decision was made, then build it yourself before moving to the next section. The project is complete when a single git push builds, tests, scans, and deploys a running HTTPS service.
What You're Building
Project architecture

Three containers on one Compose network: the Node.js API, Postgres, and Redis. The API exposes a /health endpoint for health checks; Postgres runs a pg_isready health check; the API waits for condition: service_healthy on both before starting.

Step 1 — Project Structure and .dockerignore
Before writing a single Dockerfile line, establish the project structure and the .dockerignore. Every file in the project directory is sent to the Docker daemon as build context unless explicitly excluded. Get this right first — everything else builds on it.
# Project file structure:
payment-api/
├── .dockerignore
├── .env.development          ← dummy values, committed
├── .env.production           ← real values, gitignored
├── .gitignore
├── .github/
│   └── workflows/
│       └── ci.yml            ← CI/CD pipeline
├── docker-compose.yml        ← base Compose file
├── docker-compose.dev.yml    ← development overrides
├── docker-compose.prod.yml   ← production overrides
├── Dockerfile                ← multi-stage: development + production
├── init/
│   └── 01-schema.sql         ← Postgres initialisation
├── package.json
├── package-lock.json
└── src/
    ├── server.js
    ├── routes/
    │   └── payments.js
    └── db.js
# .dockerignore — trim the build context from ~500MB to ~2MB
.git
.gitignore
.dockerignore
node_modules
.env
.env.*
coverage/
*.test.js
*.spec.js
README.md
docs/
.vscode/
.idea/
.DS_Store
docker-compose*.yml
.github/
# .gitignore
node_modules/
.env.production
.env*.local
coverage/
dist/
Step 2 — The Multi-Stage Dockerfile
The Dockerfile defines three stages: a shared base, a development stage for hot-reload, and a production stage that is hardened, minimal, and non-root. Dependency manifests are copied before source code to preserve the layer cache. The production image runs as a non-root user and exposes a health check. This applies lessons 36, 37, 40, and the security patterns from lessons 32 and 41.
# syntax=docker/dockerfile:1
FROM node:18-alpine AS base
WORKDIR /app
COPY package*.json ./
# Dependency manifest copied first — cache survives code-only changes.

# ─── Development stage ──────────────────────────────────────────────────────
FROM base AS development
RUN npm install
# All dependencies including devDependencies — nodemon, jest, eslint.
# Source code NOT copied — mounted as a volume for hot-reload.
EXPOSE 3000
CMD ["npm", "run", "dev"]

# ─── Production stage ───────────────────────────────────────────────────────
FROM base AS production
RUN --mount=type=cache,target=/root/.npm \
    npm install --omit=dev
# BuildKit cache mount — npm cache persists between builds on the same host.
# --omit=dev strips test runners, type checkers, and build tools.
COPY src/ ./src/
# Only the src/ directory — not tests, not docs, not config files.
RUN addgroup -S appgroup && \
    adduser -S appuser -G appgroup && \
    chown -R appuser:appgroup /app
USER appuser
# Non-root user. Ownership transferred before USER switch.
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD wget -qO- http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "src/server.js"]
# Build and verify the production image:
docker build --target production -t payment-api:local .
docker images payment-api
REPOSITORY    TAG     SIZE
payment-api   local   89MB     ← Alpine + prod node_modules only
docker run --rm payment-api:local whoami
appuser ✓
docker run --rm payment-api:local node -e "require('./src/server.js')" 2>&1 | head -1
# Should start without error — a DB connection error is expected (no DB running yet)
# Second build after a code change — npm install is cached:
docker build --target production -t payment-api:local .
[+] Building 1.9s
 => CACHED RUN npm install --omit=dev    ← 0ms — BuildKit cache hit
 => COPY src/ ./src/                     ← 0.3s — only changed layer
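Because the CMD uses the exec form, node runs as PID 1 and receives the SIGTERM that docker stop sends directly. Node 18 exits on SIGTERM by default, but it does so abruptly, dropping in-flight requests. The course files don't include a shutdown hook; a minimal sketch of one (the helper name and wiring are illustrative, not part of the project) could look like:

```javascript
// Hypothetical shutdown helper — not part of the course files.
// `closers` are async cleanup functions (HTTP server, pg pool, redis client);
// `exit` is injectable so the logic can be exercised without killing the process.
function makeShutdown(closers, exit = (code) => process.exit(code)) {
  return async function shutdown() {
    try {
      for (const close of closers) await close(); // drain in order
      exit(0);
    } catch (err) {
      console.error(JSON.stringify({ level: 'error', msg: err.message }));
      exit(1);
    }
  };
}
module.exports = { makeShutdown };

// In src/server.js one might wire it up as:
//   const server = app.listen(PORT, '0.0.0.0');
//   process.on('SIGTERM', makeShutdown([
//     () => new Promise((resolve) => server.close(resolve)),
//     () => db.end(),
//     () => cache.quit(),
//   ]));
```

With the hook in place, docker stop completes a clean drain within its default 10-second grace period instead of cutting connections mid-request.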
Step 3 — Docker Compose Files
Three Compose files: a shared base, a development override, and a production override. The base defines service structure. Development adds volume mounts, extra ports, and dummy credentials. Production adds resource limits, security flags, logging configuration, and the condition: service_healthy dependency — ensuring the API never starts before Postgres and Redis are ready.
# docker-compose.yml — base, shared structure
version: "3.8"

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

  db:
    image: postgres:15-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init:/docker-entrypoint-initdb.d/
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  redis:
    image: redis:7-alpine
    command: >
      redis-server
      --requirepass ${REDIS_PASSWORD}
      --appendonly yes
      --appendfsync everysec
      --maxmemory 128mb
      --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 10s

volumes:
  pgdata:
  redisdata:
# docker-compose.dev.yml — development overrides
version: "3.8"

services:
  api:
    build:
      target: development
    volumes:
      - .:/app
      - /app/node_modules
      # Anonymous volume prevents host node_modules overwriting container's.
    ports:
      - "3000:3000"
      - "9229:9229"   # Node.js debugger port
    environment:
      - NODE_ENV=development
      - LOG_LEVEL=debug
    env_file:
      - .env.development

  db:
    ports:
      - "5432:5432"   # exposed for local DB tools
    env_file:
      - .env.development

  redis:
    ports:
      - "6379:6379"   # exposed for local Redis tools
    env_file:
      - .env.development
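The development stage's CMD (npm run dev) and the published 9229 port both assume a package.json along these lines. The course never shows it, so the exact scripts are an assumption:

```json
{
  "scripts": {
    "dev": "nodemon --inspect=0.0.0.0:9229 src/server.js",
    "start": "node src/server.js",
    "test": "jest"
  }
}
```

The key detail is `--inspect=0.0.0.0:9229`: the debugger must bind to all interfaces inside the container for the port mapping to reach it — the default 127.0.0.1 binding would make the published 9229 useless.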
# docker-compose.prod.yml — production overrides
version: "3.8"

services:
  api:
    build:
      target: production
    image: acmecorp/payment-api:${GIT_SHA}
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=info
    env_file:
      - .env.production
    deploy:
      resources:
        limits:
          cpus: "1.5"
          memory: 512M
        reservations:
          cpus: "0.5"
          memory: 256M
    restart: unless-stopped
    read_only: true
    tmpfs:
      - /tmp
    security_opt:
      - no-new-privileges:true
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  db:
    # No port mapping — database not exposed outside the stack
    env_file:
      - .env.production
    deploy:
      resources:
        limits:
          memory: 1G
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "20m"
        max-file: "3"

  redis:
    # No port mapping — redis not exposed outside the stack
    env_file:
      - .env.production
    deploy:
      resources:
        limits:
          memory: 160M
          # Slightly above maxmemory (128mb) to absorb Redis overhead.
    restart: unless-stopped
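How the -f file -f file merge behaves is worth internalizing: single values in a later file win, mappings merge key by key, and sequences such as ports are appended. A minimal illustration, separate from the project files:

```yaml
# base.yml
services:
  api:
    image: app:1
    environment:
      - NODE_ENV=production

# override.yml
services:
  api:
    environment:
      - LOG_LEVEL=debug
    ports:
      - "3000:3000"

# `docker compose -f base.yml -f override.yml config` yields, in effect:
#   image: app:1                                        (kept from base)
#   environment: NODE_ENV=production, LOG_LEVEL=debug   (mappings merged)
#   ports: "3000:3000"                                  (sequence appended)
```

Running `docker compose -f … -f … config` before `up` is the quickest way to inspect the merged result and catch an override that replaced something you meant to keep.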
# .env.development — committed, dummy values only
POSTGRES_DB=payment_db
POSTGRES_USER=payment_user
POSTGRES_PASSWORD=dev_password_not_real
REDIS_PASSWORD=dev_redis_password
DB_HOST=db
REDIS_HOST=redis
# .env.production — gitignored, real values, lives on server only
POSTGRES_DB=payment_db
POSTGRES_USER=payment_user
POSTGRES_PASSWORD=
REDIS_PASSWORD=
DB_HOST=db
REDIS_HOST=redis
Step 4 — Database Schema
The schema initialisation script runs automatically on first Postgres startup via /docker-entrypoint-initdb.d/. It creates the payments table, indexes, and the UUID extension. The ON CONFLICT DO NOTHING pattern in any seed data prevents errors on container recreation.
-- init/01-schema.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE IF NOT EXISTS payments (
    id              UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_id         UUID NOT NULL,
    amount          NUMERIC(12, 2) NOT NULL CHECK (amount > 0),
    currency        CHAR(3) NOT NULL DEFAULT 'USD',
    status          VARCHAR(20) NOT NULL DEFAULT 'pending'
                    CHECK (status IN ('pending','processing','completed','failed')),
    idempotency_key VARCHAR(64) UNIQUE,
    -- Idempotency key stored in both Postgres and Redis.
    -- Postgres: permanent record. Redis: fast lookup during the request window.
    created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at      TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_payments_user_id  ON payments (user_id);
CREATE INDEX IF NOT EXISTS idx_payments_status   ON payments (status);
CREATE INDEX IF NOT EXISTS idx_payments_idem_key ON payments (idempotency_key)
    WHERE idempotency_key IS NOT NULL;
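The intro mentions the ON CONFLICT DO NOTHING pattern for seed data. The project ships no seed file, but if one were added — say a hypothetical init/02-seed.sql — the pattern looks like this:

```sql
-- Hypothetical init/02-seed.sql — not part of the project files.
-- ON CONFLICT DO NOTHING makes the insert idempotent: if the row already
-- exists (script re-run manually against a populated volume), the statement
-- succeeds silently instead of failing on the UNIQUE constraint.
INSERT INTO payments (user_id, amount, currency, status, idempotency_key)
VALUES ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', 10.00, 'USD', 'completed', 'seed-payment-1')
ON CONFLICT (idempotency_key) DO NOTHING;
```

Note that scripts in /docker-entrypoint-initdb.d/ only run when the data directory is empty, so the guard mainly protects manual re-runs, not container recreation against an existing volume.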
Step 5 — The Application
The Node.js application reads all configuration from environment variables at startup — no hardcoded values, no config files with credentials. It binds to 0.0.0.0 so Docker's port mapping can reach it. The /health endpoint verifies the database connection — not just process presence — so the Docker health check and condition: service_healthy reflect actual readiness.
// src/server.js
'use strict';

const express = require('express');
const { Pool } = require('pg');
const redis = require('redis');

const app = express();
app.use(express.json());

// ── Database connection ──────────────────────────────────────────────────────
const db = new Pool({
  host: process.env.DB_HOST,
  port: parseInt(process.env.DB_PORT || '5432', 10),
  database: process.env.POSTGRES_DB,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  // All values from environment — never hardcoded. Lesson 33.
  max: 10,
  idleTimeoutMillis: 30000,
});

// ── Redis connection ─────────────────────────────────────────────────────────
const cache = redis.createClient({
  socket: { host: process.env.REDIS_HOST, port: 6379 },
  // `|| undefined` — an empty REDIS_PASSWORD means "no AUTH", not "AUTH with ''".
  password: process.env.REDIS_PASSWORD || undefined,
});
cache.connect().catch(console.error);

// ── Health check endpoint ────────────────────────────────────────────────────
app.get('/health', async (req, res) => {
  try {
    await db.query('SELECT 1');
    // Tests actual DB connectivity — not just process presence.
    // The Docker HEALTHCHECK and depends_on condition rely on this.
    res.json({
      status: 'healthy',
      db: 'connected',
      cache: cache.isReady ? 'connected' : 'connecting',
    });
  } catch (err) {
    res.status(503).json({ status: 'unhealthy', error: err.message });
    // 503 causes the Docker health check to fail → container marked unhealthy
    // → depends_on condition blocks dependent services → crash loop detected early
  }
});

// ── Payment routes ───────────────────────────────────────────────────────────
app.use('/payments', require('./routes/payments'));

// ── Start server ─────────────────────────────────────────────────────────────
const PORT = parseInt(process.env.PORT || '3000', 10);
app.listen(PORT, '0.0.0.0', () => {
  // 0.0.0.0 — accepts connections from outside the container. Lesson 37.
  console.log(JSON.stringify({
    level: 'info', msg: 'server started', port: PORT,
    env: process.env.NODE_ENV, ts: new Date().toISOString(),
  }));
  // Structured JSON logging to stdout. Lesson 35.
});
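The listing mounts routes/payments.js but doesn't reproduce it. A sketch of its core idempotency flow — matching the schema comment that Redis is the fast lookup and Postgres the permanent record — with db and cache injected so the logic can be exercised in isolation (the function name and return shape are illustrative, not from the course):

```javascript
// Hypothetical sketch of the idempotency logic in src/routes/payments.js.
// `db` is a pg Pool-like object, `cache` a node-redis-like client; both are
// injected so the flow runs without live services.
async function createPayment(db, cache, { userId, amount, currency, idempotencyKey }) {
  if (idempotencyKey) {
    // Fast path: Redis remembers keys seen during the request window.
    const cachedId = await cache.get(`idem:${idempotencyKey}`);
    if (cachedId) return { id: cachedId, replay: true };
  }
  // Permanent record: the UNIQUE constraint on idempotency_key makes the
  // insert itself the arbiter — ON CONFLICT DO NOTHING turns a duplicate
  // insert into zero returned rows instead of an error.
  const result = await db.query(
    `INSERT INTO payments (user_id, amount, currency, idempotency_key)
     VALUES ($1, $2, $3, $4)
     ON CONFLICT (idempotency_key) DO NOTHING
     RETURNING id, status`,
    [userId, amount, currency, idempotencyKey]
  );
  if (result.rows.length === 0) {
    // Lost the race: another request already inserted this key.
    const existing = await db.query(
      'SELECT id, status FROM payments WHERE idempotency_key = $1',
      [idempotencyKey]
    );
    return { ...existing.rows[0], replay: true };
  }
  const payment = result.rows[0];
  if (idempotencyKey) {
    await cache.set(`idem:${idempotencyKey}`, payment.id, { EX: 86400 });
  }
  return { ...payment, replay: false };
}
module.exports = { createPayment };
```

The Express route handler would then be a thin wrapper that validates the body, calls createPayment(db, cache, req.body), and maps replay: true to a 200 instead of a 201.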
Step 6 — CI/CD Pipeline
The pipeline brings together lessons 31, 40, 43, and 44: build with registry cache, test inside the container, Trivy scan with hard failure on CRITICAL/HIGH, push with the git SHA tag, and deploy to ECS with zero-downtime rolling update. Every stage gates the next. Nothing broken reaches the registry.
# .github/workflows/ci.yml
name: Payment API CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  IMAGE_NAME: acmecorp/payment-api
  AWS_REGION: ap-south-1
  ECS_CLUSTER: production
  ECS_SERVICE: payment-api

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          target: production
          push: false
          tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_NAME }}:cache
          cache-to: type=registry,ref=${{ env.IMAGE_NAME }}:cache,mode=max
          outputs: type=docker,dest=/tmp/image.tar
      - uses: actions/upload-artifact@v4
        with:
          name: docker-image
          path: /tmp/image.tar

  test:
    runs-on: ubuntu-latest
    needs: build   # gate: nothing is tested before the production image builds cleanly
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_DB: payment_test
          POSTGRES_USER: payment_user
          POSTGRES_PASSWORD: test_password
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-retries 5
      redis:
        image: redis:7-alpine
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-retries 3
    steps:
      - uses: actions/checkout@v4
      # The production image omits devDependencies and .dockerignore excludes
      # the test files, so tests run in the development stage with the source
      # bind-mounted — the same pattern the dev Compose file uses.
      - run: docker build --target development -t payment-api:test .
      - run: |
          docker run --rm \
            --network ${{ job.services.postgres.network }} \
            -v "$PWD":/app \
            -v /app/node_modules \
            -e NODE_ENV=test \
            -e DB_HOST=postgres \
            -e POSTGRES_DB=payment_test \
            -e POSTGRES_USER=payment_user \
            -e POSTGRES_PASSWORD=test_password \
            -e REDIS_HOST=redis \
            -e REDIS_PASSWORD="" \
            payment-api:test \
            npm test

  scan:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/download-artifact@v4
        with: { name: docker-image, path: /tmp }
      - run: docker load --input /tmp/image.tar
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.IMAGE_NAME }}:${{ github.sha }}
          exit-code: 1
          ignore-unfixed: true
          severity: CRITICAL,HIGH

  push:
    runs-on: ubuntu-latest
    needs: [test, scan]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/download-artifact@v4
        with: { name: docker-image, path: /tmp }
      - run: docker load --input /tmp/image.tar
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - run: |
          SHORT_SHA=${GITHUB_SHA::7}
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} \
            ${{ env.IMAGE_NAME }}:${SHORT_SHA}
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} \
            ${{ env.IMAGE_NAME }}:latest
          # The full-SHA tag must be pushed too — the deploy job writes it
          # into the ECS task definition, so it has to exist in the registry.
          docker push ${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker push ${{ env.IMAGE_NAME }}:${SHORT_SHA}
          docker push ${{ env.IMAGE_NAME }}:latest

  deploy:
    runs-on: ubuntu-latest
    needs: push
    if: github.ref == 'refs/heads/main'
    environment: production
    permissions:
      id-token: write   # required for OIDC role assumption
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
      - run: |
          aws ecs describe-task-definition \
            --task-definition payment-api \
            --query taskDefinition > task-definition.json
      - id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: payment-api
          image: ${{ env.IMAGE_NAME }}:${{ github.sha }}
      - uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
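The ${GITHUB_SHA::7} in the push job is plain bash substring expansion — GitHub-hosted Linux runners execute run steps with bash by default. With an illustrative SHA:

```shell
# Bash substring expansion: ${VAR:offset:length} — offset omitted means 0.
GITHUB_SHA="3f9d2c1a7b4e8f60d5c3b2a1e0f9d8c7b6a5e4d3"   # illustrative value
SHORT_SHA=${GITHUB_SHA::7}
echo "$SHORT_SHA"                  # 3f9d2c1
# POSIX-portable equivalent, for shells without the bash-ism:
echo "$GITHUB_SHA" | cut -c1-7     # 3f9d2c1
```

Seven characters matches the default abbreviation `git rev-parse --short` produces on most repositories, so the registry tags line up with what developers see locally.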
Step 7 — Running the Project
With all files in place, the developer workflow is a single command. The production workflow is a git push. Everything else is automated.
# ── Developer workflow ───────────────────────────────────────────────────────
git clone git@github.com:acmecorp/payment-api.git
cd payment-api
# Start the full stack with hot-reload. --env-file feeds .env.development to
# Compose itself, so ${POSTGRES_USER} and ${REDIS_PASSWORD} in the YAML are
# interpolated — env_file alone only reaches the containers, not the parser:
docker compose --env-file .env.development -f docker-compose.yml -f docker-compose.dev.yml up
# API is live at http://localhost:3000
# Edit any file in src/ → nodemon restarts in <1s
# Postgres accessible at localhost:5432 via pgAdmin or psql
# Redis accessible at localhost:6379
# ── Test the API manually ────────────────────────────────────────────────────
# Health check:
curl http://localhost:3000/health
{"status":"healthy","db":"connected","cache":"connected"}
# Create a payment:
curl -X POST http://localhost:3000/payments \
-H "Content-Type: application/json" \
-d '{"userId":"a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11","amount":99.99,"currency":"USD"}'
{"id":"...","status":"pending","amount":99.99}
# ── Production deploy ────────────────────────────────────────────────────────
git add .
git commit -m "feat: add payment retry logic"
git push origin main
# GitHub Actions pipeline runs automatically:
# ✓ build (42s) — production image with registry cache
# ✓ test (38s) — all tests pass against real Postgres + Redis
# ✓ scan (12s) — 0 CRITICAL, 0 HIGH vulnerabilities
# ✓ push (8s) — SHA-tagged image in registry
# ✓ deploy (14s) — ECS rolling update, zero downtime
# Total: ~2 minutes from commit to deployed production.
# docker compose up — development stack starting:
[+] Running 3/3
✔ Container redis Started (healthy after 8s)
✔ Container postgres-db Started (healthy after 22s)
✔ Container payment-api Started (depends_on waited for both)
payment-api | {"level":"info","msg":"server started","port":3000,"env":"development"}
payment-api | [nodemon] watching: /app/src/**/*
# Edit src/routes/payments.js — nodemon detects the change:
payment-api | [nodemon] restarting due to changes...
payment-api | {"level":"info","msg":"server started","port":3000,"env":"development"}
# 800ms. No rebuild. Hot-reload working.
# Full health check:
curl http://localhost:3000/health
{"status":"healthy","db":"connected","cache":"connected"}
# Postgres query succeeded. Redis ping returned PONG.
# All three services healthy and communicating by service name via Docker DNS.
What just happened?
Every concept from every lesson in this course is running simultaneously in one coherent project. The multi-stage Dockerfile separates development and production concerns (Lessons 36 and 37). The dependency manifest is copied before source code — the layer cache survives code changes (Lesson 40). Postgres and Redis are declared with health checks and named volumes — the API waits for service_healthy (Lessons 34, 38). Service names resolve via Docker's embedded DNS — no hardcoded IPs (Lesson 39). Secrets come from environment variables — never baked into the image (Lesson 33). The CI pipeline builds, tests inside the container, scans for CVEs, and deploys with zero downtime (Lessons 43, 44). The entire stack runs in development with one command and deploys to production with a git push.
What Every File Applies From the Course
Lessons applied — where each concept lives in the project
.dockerignore
Lesson 37 — trims the build context from ~500MB to ~2MB before a single instruction runs
Dockerfile
Lessons 32, 36, 37, 40, 41 — multi-stage, Alpine, cache ordering, non-root user, health check, BuildKit cache mount
docker-compose.yml
Lessons 38, 39 — named volumes, health checks, condition: service_healthy, service name DNS
docker-compose.dev.yml
Lesson 36 — volume mounts for hot-reload, debugger port, exposed DB ports for local tools
docker-compose.prod.yml
Lessons 32, 34, 35, 36 — resource limits, read-only filesystem, logging rotation, restart policy, no-new-privileges
.env.development
Lesson 33 — committed dummy values, no real credentials in the repository
.env.production
Lesson 33 — gitignored, lives on the server only, real credentials injected at runtime
init/01-schema.sql
Lesson 38 — runs once on first Postgres startup via /docker-entrypoint-initdb.d/
src/server.js
Lessons 33, 35, 37 — env vars only, structured JSON logs to stdout, binds to 0.0.0.0, health check queries DB
.github/workflows/ci.yml
Lessons 31, 40, 43, 44 — SHA tagging, registry cache, test in container, Trivy scan, ECS rolling deploy
Where to Take This Next
This project gives you a working, production-grade foundation. From here: add a reverse proxy (nginx) in front of the API, add a second microservice and practice the network segmentation patterns from Lesson 39, migrate the deployment target to ECS Fargate using the patterns from Lesson 44, or extend the CI pipeline with a staging environment that requires manual approval before production. Each extension applies a concept from the course in a context you own — which is where real understanding comes from. The course taught you how Docker works. The project is where you learn how to make it work for you.
Project Checklist
1. What is the single command that starts the full development stack — API with hot-reload, Postgres, and Redis — using the base and development Compose files?
2. The Node.js server binds to a specific host address so Docker's port mapping can deliver external traffic to it. What address must it bind to?
3. The API service in docker-compose.yml uses which depends_on condition to ensure it only starts after Postgres has passed its pg_isready health check?
Final Quiz
1. After completing the project, a teammate changes one line in src/routes/payments.js and rebuilds. The build takes 45 seconds. You inspect the Dockerfile and see COPY . . before RUN npm install. What is the cause and the fix?
2. On first run, the API container starts and immediately crashes with a database connection error. Postgres is running. What is the most likely cause and fix?
3. The CI pipeline fails at the scan stage with exit code 1. The Trivy output shows a CRITICAL CVE in OpenSSL with a fixed version available. The application code is unchanged. What is the cause and the fix?
Course Complete
You've finished the Docker Course.
From what Docker is and why it exists, through images, containers, volumes, and networking, through security hardening, performance optimization, microservices, CI/CD, and AWS deployment — you now have a complete, production-grade mental model of containerisation. The mini project is not the end. It's the beginning of applying everything you've learned to systems that matter.