Docker Lesson 39 – Docker for Microservices | Dataplexa
Section IV · Lesson 39

Docker for Microservices

A team migrated from a monolith to microservices. They containerized each service correctly. Then they wired them together with hardcoded container IPs. Three weeks later, a routine container restart on the payment service changed its IP. The order service — which had the old IP baked into its config — started failing silently. Orders were accepted, payment calls timed out, and no money moved. The symptom appeared in the wrong service. The cause was an IP address that should never have been written down anywhere.

Microservices introduce problems that a single-container setup never encounters: how services find each other when their IPs change constantly; how to control which services can talk to which; how to share infrastructure without coupling services together; and how to deploy ten services without restarting all of them every time one changes. This lesson covers each of these, with the exact Docker and Compose patterns that solve them.

Monolith vs Microservices in Docker

Single container — monolith

  • One image, one container, one deployment unit
  • All code in one process — simple to reason about
  • No service discovery needed — everything is local
  • One database, one network, one config
  • Deploy the whole thing or nothing
  • Scale the whole thing together

Multiple containers — microservices

  • Many images, many containers, independent deployment units
  • Each service owns its domain — clear boundaries
  • Service discovery required — IPs change, names are stable
  • Each service can have its own database and config
  • Deploy one service without touching others
  • Scale individual services independently

The City Streets Analogy

IP addresses in a Docker network are like parking spots in a city — any car can occupy any spot, the spot number changes every time the car parks somewhere new, and you'd never give someone directions by saying "find my car, it's in spot 4B." You'd say "meet me at The Grand Hotel" — a name that stays constant regardless of which parking spot the hotel's delivery van is currently using. Docker's built-in DNS is the city's street addressing system: you reach services by name (payment-service, order-service), and Docker resolves the name to whatever IP the container currently has. The name is stable. The IP is irrelevant.

Service Discovery — DNS, Not IPs

Docker Compose places every service in the stack on a shared user-defined network, and every user-defined network comes with Docker's embedded DNS resolver. Each service is reachable by its Compose service name from any other container on the same network. No hardcoded IPs. No service registries. No sidecars. The service name in the Compose file is the hostname — and it stays correct even after a container is replaced, restarted, or scaled.

# Services reach each other by Compose service name — not by IP
version: "3.8"

services:
  api-gateway:
    image: acmecorp/api-gateway:${GIT_SHA}
    environment:
      - PAYMENT_SERVICE_URL=http://payment-service:3001
      - ORDER_SERVICE_URL=http://order-service:3002
      - USER_SERVICE_URL=http://user-service:3003
      # Service names resolve via Docker's built-in DNS.
      # When payment-service restarts and gets a new IP,
      # http://payment-service:3001 still works — DNS resolves the new IP.
    ports:
      - "80:3000"
    networks:
      - frontend
      - backend

  payment-service:
    image: acmecorp/payment-service:${GIT_SHA}
    environment:
      - DB_HOST=payment-db
      # payment-db resolves to the database container — same DNS mechanism.
    networks:
      - backend
      - data
    # On backend to receive calls from other services; on data to reach its database.
    # NOT on the frontend network — cannot be reached directly from outside.

  order-service:
    image: acmecorp/order-service:${GIT_SHA}
    environment:
      - PAYMENT_SERVICE_URL=http://payment-service:3001
      # order-service can call payment-service by name — same backend network.
    networks:
      - backend

  user-service:
    image: acmecorp/user-service:${GIT_SHA}
    networks:
      - backend

  payment-db:
    image: postgres:15-alpine
    networks:
      - data
      # Database is on the data network only — never reachable from the
      # frontend or backend networks.
    volumes:
      - payment-db-data:/var/lib/postgresql/data

networks:
  frontend:
    # The public-facing network — only api-gateway sits here.
  backend:
    # The private network — application services communicate here.
  data:
    # The database tier — only payment-service and payment-db sit here.

volumes:
  payment-db-data:

# Verify DNS resolution from inside a container:
docker exec api-gateway nslookup payment-service
Server:    127.0.0.11
Address:   127.0.0.11#53

Name:   payment-service
Address: 172.18.0.4
# Docker's embedded DNS server (127.0.0.11) resolved "payment-service"
# to its current IP. If the container restarts and gets 172.18.0.7,
# the next nslookup returns 172.18.0.7 — automatically, with no config change.

# Verify network isolation — api-gateway cannot reach payment-db directly:
docker exec api-gateway ping payment-db
ping: payment-db: Name or service not known
# payment-db sits on the data network only.
# api-gateway sits on frontend + backend, so it shares no network with the database.
# payment-db resolves from payment-service (data) but from nowhere else.

What just happened?

Docker's embedded DNS server (127.0.0.11) resolved payment-service to its current container IP automatically. When the container restarts, the DNS entry updates — no config changes, no service restarts, no hardcoded IPs to update anywhere. Network isolation keeps payment-db invisible to every container outside the data network: the gateway can talk to backend services, but it cannot reach the database at all. Attackers who compromise the gateway cannot pivot directly to the database layer.

Network Segmentation — Who Can Talk to Whom

Every container on the same Docker network can reach every other container on that network. Without segmentation, a compromised frontend service can make direct connections to the database. With network segmentation, you define exactly which services belong to which network — and containers can only reach other containers that share at least one network with them.

# Network topology — three-tier architecture
networks:
  frontend:    # Internet-facing — only the gateway lives here
  backend:     # Service-to-service — all application services
  data:        # Database tier — only services that need direct DB access

services:
  api-gateway:
    networks: [frontend, backend]
    # Can receive public traffic. Can call backend services. Cannot reach databases.

  payment-service:
    networks: [backend, data]
    # Can receive calls from gateway. Can query payment-db. Cannot be reached from internet.

  order-service:
    networks: [backend, data]

  user-service:
    networks: [backend]
    # Does not query a database directly — calls payment-service via backend network.

  payment-db:
    networks: [data]
    # Only reachable by services on the data network.
    # api-gateway, user-service: cannot reach this. payment-service: can.

  redis:
    networks: [backend]
    # Session store — reachable by all services. Not in data tier (no persistent data concern).

Network access matrix — who can reach whom

From \ To      api-gateway   payment-svc   payment-db   redis
Internet       ✓ port 80     —             —            —
api-gateway    —             ✓ backend     —            ✓ backend
payment-svc    ✓ backend     —             ✓ data       ✓ backend
payment-db     —             —             —            —
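
One Compose-level hardening step pairs well with this matrix: a network can be declared internal, which removes its gateway to the host and the outside world entirely. A sketch, assuming the data tier never needs to reach the internet:

```yaml
# Fragment — marking the data tier as internal:
networks:
  data:
    internal: true
    # Containers attached only to this network get no external route:
    # they cannot reach the internet, and nothing outside the Docker
    # host can route to them. Service-name DNS inside the network
    # keeps working exactly as before.
```

This turns the access matrix's empty cells into a property Docker enforces, rather than one that depends on no one publishing a port by mistake.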

Independent Deployments — Updating One Service

The core promise of microservices is independent deployability — you update the payment service without touching the order service. Docker Compose makes this straightforward: target a single service with docker compose up --no-deps and only that container is restarted. Every other service keeps running, serving traffic, with zero interruption.

# Deploy a new version of payment-service only — everything else keeps running:
GIT_SHA=b4d9e1f docker compose \
  -f docker-compose.yml \
  -f docker-compose.prod.yml \
  up -d \
  --no-deps \
  --force-recreate \
  payment-service
# --no-deps        → do not restart dependencies (databases, redis)
#                   without this flag, Compose would also restart everything
#                   payment-service depends_on — bringing down the database
# --force-recreate → always recreate the container even if Compose detects
#                   no config change; note it does not pull by itself: run
#                   docker compose pull payment-service first if the tag
#                   was re-pushed under the same name
# payment-service  → only this service is affected

[+] Running 1/1
 ✔ Container payment-service   Started   (4 seconds)

# Verify the other services were not touched:
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
NAMES              STATUS           IMAGE
api-gateway        Up 3 hours       acmecorp/api-gateway:a3f2c8d
payment-service    Up 4 seconds     acmecorp/payment-service:b4d9e1f  ← new
order-service      Up 3 hours       acmecorp/order-service:a3f2c8d
user-service       Up 3 hours       acmecorp/user-service:a3f2c8d
payment-db         Up 3 hours       postgres:15-alpine
redis              Up 3 hours       redis:7-alpine
# Only payment-service has a new uptime and a new image tag.
# api-gateway, order-service, user-service: zero downtime, zero restarts.

# Confirm from the gateway that the new version is responding:
docker exec api-gateway curl -s http://payment-service:3001/version
{"version":"b4d9e1f","service":"payment-service","status":"ok"}

# Confirm order-service is unaffected and still routing correctly:
docker exec order-service curl -s http://payment-service:3001/health
{"status":"healthy","uptime":12}

# Total deployment time: 4 seconds.
# Services affected: 1.
# Customer-facing downtime: 0.

What just happened?

The payment service was updated to a new image in four seconds. Every other service — gateway, order service, user service, all databases — kept running without interruption. The gateway immediately started routing to the new payment-service container because DNS resolution updated automatically when the new container started. This is independent deployability working correctly: one service changed, one container restarted, zero blast radius on the rest of the platform.
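
The /health endpoint used in the verification above is also what makes a redeploy judgeable: with a healthcheck, Docker marks the new container healthy only once that endpoint responds. A sketch of the relevant service fragment, assuming curl is available inside the image:

```yaml
# Fragment — healthcheck for payment-service, so docker ps reports
# (healthy) only after the new container actually answers requests:
  payment-service:
    image: acmecorp/payment-service:${GIT_SHA}
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:3001/health"]
      interval: 10s      # probe every 10 seconds
      timeout: 3s        # a probe slower than 3s counts as a failure
      retries: 3         # three consecutive failures → unhealthy
      start_period: 5s   # grace period before failures start counting
```

Deploy scripts can then poll docker inspect for the health status before declaring the rollout done, instead of trusting that "Started" means "serving".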

Shared Infrastructure — One Stack, Shared by All

In a microservices setup, some infrastructure is shared — a single Redis cluster serves all services, a centralised logging stack collects from every container, a reverse proxy routes all inbound traffic. Managing this with a single monolithic Compose file becomes unwieldy as the number of services grows. The pattern: an infrastructure stack defined separately from the application services, with services joining shared networks using external: true.

# docker-compose.infra.yml — shared infrastructure, deployed once
version: "3.8"

services:
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --appendonly yes
    volumes:
      - redisdata:/data
    networks:
      - infra
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    networks:
      - infra
      - frontend

networks:
  infra:
    name: acmecorp_infra
    # Named network — application stacks join this by name.
  frontend:
    name: acmecorp_frontend

volumes:
  redisdata:

# Deploy the infrastructure stack once:
# docker compose -f docker-compose.infra.yml up -d

# docker-compose.payment.yml — payment service joins the shared infrastructure
version: "3.8"

services:
  payment-service:
    image: acmecorp/payment-service:${GIT_SHA}
    environment:
      - REDIS_HOST=redis
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      # redis resolves via the shared infra network — same DNS mechanism.
    networks:
      - infra
      # Join the shared infrastructure network to reach redis and nginx.
      - payment_internal
      # Private network for payment-service and its database only.

  payment-db:
    image: postgres:15-alpine
    networks:
      - payment_internal
    volumes:
      - payment-db-data:/var/lib/postgresql/data

networks:
  infra:
    external: true
    name: acmecorp_infra
    # external: true → this network was created by the infra stack.
    # Do not create it here — just join it.
    # If that network does not exist yet, Compose errors out instead of starting.
  payment_internal:
    # Private to this Compose file — payment-service and payment-db only.

volumes:
  payment-db-data:

# Deploy the payment service independently:
# docker compose -f docker-compose.payment.yml up -d

What just happened?

The payment service joined the shared infrastructure network (acmecorp_infra) using external: true — meaning it connects to a network created by a different Compose file, without recreating it. The shared Redis is reachable by service name from any container on that network. The payment database is on a private internal network — only the payment service can reach it. Each application team deploys their own Compose file independently; the infrastructure team manages the shared stack separately. No single Compose file grows to hundreds of lines.
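
The same join pattern repeats for every team. A sketch of what docker-compose.order.yml might look like — the file name comes from the deployment sequence in this lesson, but its exact contents are an assumption mirroring the payment stack:

```yaml
# docker-compose.order.yml — order team's stack, joining the shared network
version: "3.8"

services:
  order-service:
    image: acmecorp/order-service:${GIT_SHA}
    environment:
      - PAYMENT_SERVICE_URL=http://payment-service:3001
      - REDIS_HOST=redis
      # Both names resolve over the shared acmecorp_infra network.
    networks:
      - infra

networks:
  infra:
    external: true
    name: acmecorp_infra
    # Created by the infra stack — joined here, never recreated.
```

Each team's file stays small, and the only contract between teams is the shared network name plus the service names on it.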

A Complete Microservices Scenario

The scenario: You're the platform engineer for a three-service e-commerce system — an API gateway, a payment service, and an order service. Each team deploys their service independently. Shared infrastructure (Redis, nginx) is managed centrally. Here's the complete topology, the deployment sequence, and the validation commands that confirm everything is wired correctly.

# Deployment sequence — order matters for network creation:

# Step 1 — bring up shared infrastructure first (creates named networks):
docker compose -f docker-compose.infra.yml up -d
# Creates: acmecorp_infra, acmecorp_frontend networks.
# Starts: redis, nginx.

# Step 2 — deploy each service (joins existing networks):
docker compose -f docker-compose.payment.yml up -d
docker compose -f docker-compose.order.yml up -d
docker compose -f docker-compose.user.yml up -d

# Step 3 — validate the full topology:
# Check all containers are running and healthy:
docker ps --format "table {{.Names}}\t{{.Status}}"

# Validate inter-service DNS resolution:
docker exec payment-service nslookup redis
docker exec order-service nslookup payment-service

# Validate network isolation:
docker exec nginx ping payment-db 2>&1 | grep "not known"
# Should print "Name or service not known" — nginx cannot reach the database.

# Validate end-to-end request flow:
curl http://localhost/api/orders
# nginx → api-gateway → order-service → payment-service → payment-db
docker ps --format "table {{.Names}}\t{{.Status}}"
NAMES                STATUS
nginx                Up 4 minutes (healthy)
redis                Up 4 minutes (healthy)
payment-service      Up 2 minutes (healthy)
payment-db           Up 2 minutes (healthy)
order-service        Up 1 minute  (healthy)
user-service         Up 1 minute  (healthy)

docker exec payment-service nslookup redis
Name:    redis
Address: 172.20.0.3   ← resolves via acmecorp_infra network

docker exec nginx ping payment-db 2>&1
ping: payment-db: Name or service not known
# Network isolation confirmed — nginx cannot see payment-db.

curl http://localhost/api/orders
{"orders":[{"id":"ord_4492","status":"pending","amount":99.99}]}
# Full request chain working:
# curl → nginx(80) → api-gateway → order-service → payment-service → payment-db

Never Use Container IPs — Always Use Service Names

Container IPs are assigned dynamically and change every time a container is recreated. Any configuration, environment variable, or application code that references a container IP by address will break silently the next time that container is restarted. Docker's DNS resolver exists precisely to solve this — every service name in a Compose file is a stable hostname. Use it. Never reference 172.18.0.x in any config file, and reject any pull request that does.
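
That review rule can be automated. A minimal CI guard, assuming configs live under a config/ directory (the offender file below is created only to demonstrate the match — in a real pipeline the grep runs against the repository and the exit is enabled):

```shell
# Scan config files for hardcoded Docker-range IPs (172.16.0.0/12).
mkdir -p config
# Demo offender — the exact mistake from this lesson's opening story:
printf 'PAYMENT_SERVICE_URL=http://172.18.0.4:3001\n' > config/offender.env

# -r recurse, -n show line numbers, -E extended regex matching 172.16–172.31:
if grep -rnE '172\.(1[6-9]|2[0-9]|3[01])\.[0-9]+\.[0-9]+' config/; then
  # → config/offender.env:1:PAYMENT_SERVICE_URL=http://172.18.0.4:3001
  echo "Hardcoded container IP found - use service names instead" >&2
  # exit 1   # enable in CI to fail the build
fi
```

Wired into CI, this rejects the pull request before a hardcoded 172.18.0.x ever reaches a config file.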

Teacher's Note

Docker Compose is the right tool for microservices on a single host — development environments, small production deployments, and staging. When you need services running across multiple hosts with automatic failover and load balancing, that's the point where Kubernetes or Docker Swarm with multiple nodes becomes worth the operational investment. Start with Compose, prove the architecture works, then graduate to an orchestrator when single-host limits become a real constraint — not before.

Practice Questions

1. Docker's embedded DNS server — which resolves Compose service names to container IPs — listens at which IP address inside every container?



2. When running docker compose up -d payment-service, which flag prevents Compose from also restarting the services that payment-service depends on?



3. To join a Docker network that was created by a different Compose file — without recreating it — which key must be set under the network definition in the Compose file?



Quiz

1. An engineer hardcodes the IP 172.18.0.4 in the order service config to reach the payment service. Three days later the order service starts failing. What happened and what is the fix?


2. A security audit flags that the api-gateway container can directly connect to the payment database. The database should only be reachable by the payment service. What is the correct fix?


3. A team wants to deploy a new version of only the payment service without restarting any other container. Which command achieves this?


Up Next · Lesson 40

Docker Performance Optimization

Microservices architecture sorted — now the speed question: builds that take three minutes, images that take forty seconds to pull, and layer caches that keep getting busted on every commit. Performance optimization turns Docker from functional into fast — and in CI pipelines, fast is money.