Docker Lesson 25 – Multi-Container Applications | Dataplexa
Section III · Lesson 25

Multi-Container Applications

Almost every real application is more than one container. A web API, a database, a cache, a background worker, a reverse proxy — these are the building blocks. This lesson builds a complete, production-grade multi-container application from scratch using everything from Sections I, II, and III.

This is the lesson where theory becomes practice. Every concept introduced in the previous 24 lessons appears here — Dockerfiles, volumes, networking, environment variables, health checks, and Compose — all working together in a single coherent application stack.

The Application We're Building

The scenario: You're the lead engineer at a growing SaaS startup. Your team is building an order management platform — a Node.js REST API backed by PostgreSQL for persistent storage, Redis for session caching and rate limiting, and nginx as a reverse proxy handling SSL termination and static file serving. A background worker service processes order notifications asynchronously.

The full stack:

Client (browser / mobile app)
        ↓  HTTPS port 443
nginx (reverse proxy · SSL termination)
        ↓  port 3000 (internal, to order-api only)
order-api (Node.js REST API)        worker (notification processor)
        ↓         ↘                     ↙         ↓
postgres (persistent storage)       redis (sessions · rate limiting)

Both order-api and the worker talk to postgres and redis over the internal backend network.

Five services, two networks, two volumes. Everything communicates by service name. Only nginx is exposed to the outside world.

Step 1 — The Dockerfiles

Both the API and the worker are Node.js services. They share the same Dockerfile pattern from Lesson 12 — Alpine base, dependency caching, non-root user.

# Dockerfile (used by both api and worker services)
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev   # ci = clean, reproducible install from package-lock.json

COPY . .

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
RUN chown -R appuser:appgroup /app
USER appuser

HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

EXPOSE 3000
CMD ["node", "server.js"]

Step 2 — The Complete Compose File

services:

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: orders_user
      POSTGRES_PASSWORD: ${DB_PASSWORD}   # from .env file — never hardcode
      POSTGRES_DB: orders
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U orders_user -d orders"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    # command overrides the default CMD — adds password authentication
    volumes:
      - redis-data:/data
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  order-api:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
      PORT: 3000
      DATABASE_URL: postgresql://orders_user:${DB_PASSWORD}@postgres:5432/orders
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
      JWT_SECRET: ${JWT_SECRET}
    networks:
      - backend-net
      - frontend-net
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: ["node", "worker.js"]     # override CMD — run worker process instead
    environment:
      NODE_ENV: production
      DATABASE_URL: postgresql://orders_user:${DB_PASSWORD}@postgres:5432/orders
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      disable: true   # the image's HEALTHCHECK probes port 3000; the worker runs no HTTP server
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    networks:
      - frontend-net
    restart: unless-stopped
    depends_on:
      - order-api

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true        # database and cache have no outbound internet access

volumes:
  postgres-data:
  redis-data:
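Worth pausing on the connection strings in the file above: the hostnames postgres and redis are Compose service names, resolved by Docker's embedded DNS. A sketch of how order-api might unpack DATABASE_URL at startup (parseDatabaseUrl is a hypothetical helper, not code from this lesson; Node's built-in WHATWG URL parser does the work):

```javascript
// Sketch: unpacking the Compose-provided DATABASE_URL inside order-api.
function parseDatabaseUrl(raw) {
  const u = new URL(raw);
  return {
    host: u.hostname,              // "postgres": the Compose service name
    port: Number(u.port),          // 5432
    user: decodeURIComponent(u.username),
    password: decodeURIComponent(u.password),
    database: u.pathname.slice(1)  // drop the leading "/"
  };
}

const cfg = parseDatabaseUrl(
  process.env.DATABASE_URL ||
    "postgresql://orders_user:s3cret@postgres:5432/orders"
);
console.log(cfg.host, cfg.database); // postgres orders
```

Most Postgres and Redis client libraries accept these URLs directly, so in practice the string is often passed through unparsed.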

Step 3 — The .env File

# .env — never commit to git; add it to .gitignore and .dockerignore so secrets stay out of the repo and the image build context
DB_PASSWORD=str0ngP@ssw0rd_2024
REDIS_PASSWORD=r3disS3cr3t!
JWT_SECRET=jwt-secret-minimum-32-chars-long-abc123

# Compose automatically loads .env from the project directory
# Reference variables in docker-compose.yml with ${VARIABLE_NAME}

Step 4 — Starting the Stack

docker compose up -d --build
# --build  → rebuild images to pick up any Dockerfile changes
# -d       → detached mode

# Watch startup progress
docker compose ps

# Follow all logs in real time
docker compose logs -f

# Follow a specific service
docker compose logs -f order-api

Sample output from docker compose up -d --build:

[+] Building 18.4s (9/9) FINISHED
[+] Running 9/9
 ✔ Network orders_frontend-net   Created                                 0.1s
 ✔ Network orders_backend-net    Created                                 0.1s
 ✔ Volume "orders_postgres-data" Created                                 0.0s
 ✔ Volume "orders_redis-data"    Created                                 0.0s
 ✔ Container orders-postgres-1   Healthy                                 9.1s
 ✔ Container orders-redis-1      Healthy                                 6.3s
 ✔ Container orders-order-api-1  Started                                 9.4s
 ✔ Container orders-worker-1     Started                                 9.4s
 ✔ Container orders-nginx-1      Started                                 9.7s

docker compose ps then shows:

NAME                    STATUS                    PORTS
orders-postgres-1       Up 12s (healthy)
orders-redis-1          Up 12s (healthy)
orders-order-api-1      Up 3s                     3000/tcp
orders-worker-1         Up 3s
orders-nginx-1          Up 2s                     0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

What just happened?

The entire stack came up in the correct order — postgres and redis reached Healthy status (9.1s and 6.3s respectively) before the API and worker started (9.4s). Compose built the custom images, created both networks, created both volumes, and wired everything together. Only nginx has external ports — 80 and 443. The postgres and redis containers show no ports in the output — they're on the internal backend-net and completely unreachable from outside the Docker environment. The API is accessible at https://localhost through the nginx proxy.

Scaling a Service

One of Compose's most useful features — scaling a service to multiple instances with a single command. The worker processes notifications asynchronously — during peak load you can run more of them without changing any configuration.

docker compose up -d --scale worker=3
# --scale worker=3 → run 3 instances of the worker service
# Compose starts worker-2 and worker-3 alongside the existing worker-1
# All three connect to the same postgres and redis on backend-net

docker compose ps
NAME                    STATUS          PORTS
orders-postgres-1       Up (healthy)
orders-redis-1          Up (healthy)
orders-order-api-1      Up
orders-worker-1         Up
orders-worker-2         Up
orders-worker-3         Up
orders-nginx-1          Up              0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

What just happened?

Three worker instances are now running — worker-1, worker-2, and worker-3. All three connected to the same postgres and redis services automatically because they're on the same backend-net. Note that the worker service uses command: ["node", "worker.js"] to override the default CMD from the Dockerfile — both the API and the worker use the same Dockerfile but start different processes. This pattern — one Dockerfile, multiple entry points — is common in Node.js monorepos.
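To make the shared-Dockerfile pattern concrete, here is a sketch of what worker.js might contain. The queue shape and helper names are assumptions for illustration, not the lesson's actual code; a real worker would block on an atomic Redis BRPOP, which queue.shift() stands in for here:

```javascript
// worker.js sketch: same image as order-api, started via a different CMD.
function formatNotification(order) {
  return `Order ${order.id} is ${order.status}; notifying ${order.email}`;
}

function processNext(queue) {
  const order = queue.shift();      // atomic pop: why 3 workers never double-send
  if (!order) return null;
  return formatNotification(order); // real code would send the email / webhook
}

const queue = [{ id: 41, status: "shipped", email: "a@example.com" }];
console.log(processNext(queue)); // Order 41 is shipped; notifying a@example.com
console.log(processNext(queue)); // null (queue drained)
```

The atomicity of the pop is what makes scaling safe: each notification is claimed by exactly one of the three workers.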

Teacher's Note

The internal: true on backend-net is the one line that makes this stack significantly more secure. Even without published ports the database and cache can't be reached from outside, but internal: true also cuts their outbound access, so a compromised container can't phone home. Always segment your networks.

Practice Questions

1. To run three instances of a service called worker in a Compose stack, which flag do you add to docker compose up?



2. To override the default CMD from a Dockerfile for a specific Compose service — so it runs a different entry point — you use which Compose key?



3. In a docker-compose.yml file, to reference a variable called DB_PASSWORD from the .env file, you write what syntax?



Quiz

1. In the order management stack, the postgres and redis services show no published ports. What prevents them from being accessible from the internet?


2. The order-api and worker services both use the same Dockerfile but start different Node.js processes. How is this achieved?


3. A docker-compose.yml references ${DB_PASSWORD} but no -e flag or env_file key is configured. Where does Compose get this value from?


Up Next · Lesson 26

Docker Compose Networking

You've used networks in a real stack — now let's go deep on how Compose networking works, how to connect stacks from different Compose projects, and how to design networks that scale.