Docker Lesson 18 – Bridge Network | Dataplexa
Section II · Lesson 18

Bridge Network

The bridge network is Docker's default and most-used network driver. Lesson 17 introduced it — this lesson goes inside it. Understanding how traffic actually flows between containers on a bridge network is what makes you confident debugging connectivity issues at 11pm when something isn't talking to something else.

Most developers use bridge networks every day without ever thinking about them. The ones who understand what's happening under the hood are the ones who fix networking bugs in minutes instead of hours.

How a Bridge Network Works Internally

A bridge network is a software-defined Layer 2 network — essentially a virtual switch — running entirely inside the Linux kernel on your host machine. When you create a bridge network, Docker creates a virtual network interface on the host (you can see it with ip link show on Linux). Every container attached to that bridge network gets a virtual Ethernet (veth) pair: one end appears inside the container as eth0, and the other end plugs into this switch on the host.

Containers on the same bridge network can communicate directly with each other through this virtual switch. Traffic between them never leaves the host — it travels through the kernel's network stack, not through any physical network interface. This makes container-to-container communication on the same bridge both fast and isolated from the outside world.
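
You can see the virtual switch from the host side. The sketch below greps an illustrative sample of ip link output (the interface names and indexes here are made up for the example); on a real Linux host you would pipe ip link show directly instead of using a sample string.

```shell
# Illustrative sample of `ip link show` output on a Docker host.
# docker0 is the default bridge; each custom network gets a br-<id> interface.
sample='3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
42: br-9e1f3a5b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP'

# Pull out the custom-bridge interface name (on a real host: ip link show | grep -o 'br-[0-9a-f]*')
bridge_if=$(echo "$sample" | grep -o 'br-[0-9a-f]*')
echo "$bridge_if"   # → br-9e1f3a5b
```

On a real host, every docker network create adds one of these br-<id> interfaces, and each running container adds a veth interface plugged into it.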

The Office Switch Analogy

A bridge network is like an unmanaged network switch in an office. Plug a device into the switch and it can communicate with every other device on the same switch instantly — without going through the internet, without going through a router. Each container is a device. The bridge is the switch. The switch exists entirely inside the host machine — there's no physical hardware, just a software simulation running in the kernel.

Default Bridge vs Custom Bridge — The Real Difference

Both the default bridge network and user-defined custom bridge networks use the same underlying driver. The differences are in what features each one provides.

Default bridge (docker0)

  • All containers attach here unless told otherwise
  • No DNS — containers reach each other by IP only
  • IPs are dynamic — change on restart
  • No network isolation between unrelated containers
  • Legacy — not recommended for new projects
  • Configurable only via daemon-wide settings in daemon.json, not per-network flags

Custom bridge network

  • Only containers you explicitly attach can join
  • Built-in DNS — reach containers by name
  • IPs are dynamic but names always resolve
  • Isolated by default from other networks
  • The correct approach for all real projects
  • Supports custom subnet, gateway, and MTU config
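
One practical symptom of the DNS difference shows up in each container's /etc/resolv.conf. The snippet below is a pure-shell sketch using illustrative resolver contents (the actual nameserver value on the default bridge varies by host): only the custom-bridge case points at Docker's embedded DNS at 127.0.0.11.

```shell
# Illustrative /etc/resolv.conf contents for the two cases (sample values):
default_conf='nameserver 192.168.65.7'   # default bridge: host's resolver is copied in
custom_conf='nameserver 127.0.0.11'      # custom bridge: Docker's embedded DNS

# Classify a resolv.conf by whether it uses the embedded DNS server
check() {
  case "$1" in
    *127.0.0.11*) echo "embedded DNS" ;;
    *)            echo "no embedded DNS" ;;
  esac
}

default_result=$(check "$default_conf")
custom_result=$(check "$custom_conf")
echo "default bridge: $default_result"   # → no embedded DNS (IPs only)
echo "custom bridge:  $custom_result"    # → embedded DNS (names resolve)
```

On a live system, the equivalent check is docker exec <container> cat /etc/resolv.conf.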

Network Aliases — Decoupling From Container Names

By default, a container's DNS hostname on a custom network is its container name. This creates a subtle coupling — if you rename the container or change your naming convention, every other container that references it by name breaks.

Network aliases solve this by giving a container an additional hostname on a specific network — completely independent of the container name. Multiple containers can share the same alias, which also enables simple load balancing between identical containers.

The scenario: You're a platform engineer running two identical payment-processing containers for redundancy. Both should be reachable via the same hostname — payment-processor — so the API doesn't need to know which specific instance it's talking to. Round-robin DNS across both containers gives you basic load balancing for free.

docker network create app-network

# Start two identical payment containers — both with the same alias
docker run -d \
  --name payment-processor-1 \
  --network app-network \
  --network-alias payment-processor \
  payment-api:v2.1.0
# --network-alias payment-processor → this container responds to "payment-processor" on app-network
# The container name is still payment-processor-1 but the alias is what other containers use

docker run -d \
  --name payment-processor-2 \
  --network app-network \
  --network-alias payment-processor \
  payment-api:v2.1.0
# Same alias as the first — both containers now respond to "payment-processor"
# Docker DNS round-robins between them automatically
# Each docker run -d prints the new container's ID:
a3f2c8d91e44b7e1a4c52f889201cd3f
b7e1a4c52f88d3a9e0c1f2b3d4e5f6a7

# From inside another container on app-network:
# nslookup payment-processor
# Server:    127.0.0.11
# Address:   127.0.0.11#53
#
# Name:      payment-processor
# Address:   172.20.0.2   ← payment-processor-1
# Address:   172.20.0.3   ← payment-processor-2
# DNS returns both IPs — clients rotate between them

What just happened?

Two containers were started with the same --network-alias payment-processor. Docker's internal DNS server — running at 127.0.0.11 inside every container on a custom network — now returns both IP addresses when any container resolves payment-processor. Clients receive both IPs in round-robin order and alternate between the two instances automatically. If payment-processor-1 goes down, DNS stops returning its IP and all traffic falls back to payment-processor-2. This is primitive but effective load balancing with zero additional infrastructure.
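
The rotation described above can be sketched in pure shell. This is a toy model of how a resolver's answer list rotates between queries; the IPs are the sample addresses from the nslookup output above, not real containers.

```shell
# Toy round-robin: the DNS answer list rotates, so successive clients
# alternate between the two payment-processor instances.
answers="172.20.0.2 172.20.0.3"
log=""
for i in 1 2 3 4; do
  set -- $answers                 # split the answer list into $1, $2
  echo "request $i -> $1"         # a client typically uses the first answer...
  log="$log$1 "
  answers="$2 $1"                 # ...and the list rotates for the next query
done
# requests alternate: 172.20.0.2, 172.20.0.3, 172.20.0.2, 172.20.0.3
```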

Configuring a Custom Bridge Network

Custom bridge networks accept configuration at creation time — subnet range, gateway address, and MTU. Most developers never need to touch these defaults, but there are real situations where you do: conflicting subnet ranges with corporate VPNs, specific IP ranges required by security policy, or matching the network configuration of a production environment.

# Create a custom bridge network with explicit subnet and gateway
docker network create \
  --driver bridge \
  --subnet 192.168.10.0/24 \
  --gateway 192.168.10.1 \
  --opt com.docker.network.bridge.name=app-br0 \
  secure-network
# --driver bridge         → explicit driver declaration (bridge is the default)
# --subnet                → IP range for containers on this network
# --gateway               → the gateway IP for the network
# --opt bridge.name       → name the actual Linux bridge interface on the host
# secure-network          → the Docker network name

# Verify the network was created with the right configuration
docker network inspect secure-network
[
  {
    "Name": "secure-network",
    "Driver": "bridge",
    "IPAM": {
      "Config": [
        {
          "Subnet": "192.168.10.0/24",
          "Gateway": "192.168.10.1"
        }
      ]
    },
    "Options": {
      "com.docker.network.bridge.name": "app-br0"
    },
    "Containers": {}
  }
]

What just happened?

The network was created with a specific 192.168.10.0/24 subnet — containers on this network will receive IPs in the range 192.168.10.2 to 192.168.10.254. The gateway at 192.168.10.1 is the virtual router that handles traffic leaving the network. The com.docker.network.bridge.name=app-br0 option names the actual Linux bridge interface — on Linux you can verify this with ip link show app-br0 and see the virtual bridge interface Docker created in the kernel. The Containers block is empty because no containers have joined yet.
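
The subnet arithmetic behind that range is worth checking once. A minimal shell sketch for a /24:

```shell
# How many container IPs does a /24 actually give you?
prefix=24
usable=$(( (1 << (32 - prefix)) - 2 ))   # 256 addresses minus network (.0) and broadcast (.255)
containers=$(( usable - 1 ))             # minus the gateway (.1)
echo "$usable usable hosts, $containers available for containers"
# → 254 usable hosts, 253 available for containers (.2 through .254)
```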

Debugging Bridge Network Connectivity

The scenario: Your API container can't connect to the database container. Both are supposed to be on the same custom network but something isn't right. Here's the systematic approach to diagnosing bridge network connectivity issues.

# Step 1 — Check which networks the container is actually on
docker inspect order-api --format '{{json .NetworkSettings.Networks}}'
# --format lets you extract specific fields from the JSON output
# This shows exactly which networks the container is connected to

# Step 2 — Check whether the target container is on the same network
docker network inspect app-network --format '{{json .Containers}}'
# Lists every container currently connected to app-network

# Step 3 — Test connectivity from inside the container
docker exec -it order-api sh
ping db-container          # test DNS resolution and reachability
nc -zv db-container 5432   # test if the TCP port is open (Postgres speaks its own protocol, not HTTP, so curl is the wrong tool — install netcat if the image lacks it)
exit

# Step 4 — If the container isn't on the right network, attach it
docker network connect app-network order-api
# Step 1 output — container is on the wrong network
{"bridge":{"IPAddress":"172.17.0.3","NetworkID":"3f8a2c1b..."}}
# It's on the default bridge, not app-network — that's the bug

# Step 2 output — db-container is on app-network, order-api is not
{"a1b2c3...":{"Name":"db-container","IPv4Address":"172.18.0.2/16"}}

# Step 4 output — connect order-api to the right network
# (no output — silence means success in Docker)

# After connecting — Step 1 again
{"app-network":{"IPAddress":"172.18.0.3","NetworkID":"9e1f3a5b..."}}
# Now on app-network — db-container is reachable by name

What just happened?

The debugging process revealed that order-api was attached to the default bridge network — not app-network where db-container lives. This is the most common bridge network bug: a container was started without --network app-network, so it defaulted to the bridge network. docker network connect attached it to the correct network without restarting it. The --format flag is essential here — docker inspect produces hundreds of lines of JSON; --format extracts only the field you need.

The Most Common Bridge Network Bug

You create a custom network, start your database on it, then start your API — and forget --network app-network. The API starts on the default bridge and can't find the database by name. Always explicitly pass --network on every docker run command that needs to join a specific network. There is no automatic inheritance.
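
A cheap guard against this bug is to check the inspect output for the expected network name before digging deeper. The sketch below matches against a hard-coded sample of the Step 1 JSON from above rather than a live docker inspect call, so the matching logic itself is what's demonstrated; on a real host you would capture the output of docker inspect <container> --format '{{json .NetworkSettings.Networks}}' instead.

```shell
# Sample of docker inspect --format '{{json .NetworkSettings.Networks}}' output:
networks='{"bridge":{"IPAddress":"172.17.0.3","NetworkID":"3f8a2c1b..."}}'

# Does the JSON mention the network we expect the container to be on?
case "$networks" in
  *'"app-network"'*) verdict="ok" ;;
  *)                 verdict="wrong network" ;;
esac
echo "$verdict"   # → wrong network (fix: docker network connect app-network <container>)
```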

Teacher's Note

docker inspect container-name --format '{{json .NetworkSettings.Networks}}' is the fastest command I know for diagnosing container networking issues — it tells you exactly which networks a container is on in two seconds.

Practice Questions

1. An additional hostname assigned to a container on a specific network — independent of its container name — is called a what?



2. Docker's internal DNS server runs at which IP address inside every container on a custom network?



3. To extract a specific field from docker inspect output without printing the entire JSON blob, which flag do you use?



Quiz

1. Two containers are started with --network-alias payment-processor on the same custom network. What happens when another container resolves the hostname payment-processor?


2. Two containers on the same bridge network communicate with each other. How does that traffic travel?


3. A container fails to reach db-container by hostname even though db-container is running. docker inspect reveals the failing container is on the bridge network, not app-network. The root cause is:


Up Next · Lesson 19

Host & Overlay Networks

Bridge handles single-host networking — but overlay networks connect containers across multiple hosts. That's the foundation of Docker Swarm and production distributed systems.