Docker Lesson 19 – Host & Overlay Networks | Dataplexa
Section II · Lesson 19

Host & Overlay Networks

Bridge networks handle everything on a single host. The moment your application spans multiple machines — a production cluster, a Docker Swarm, a cloud deployment across availability zones — bridge networks can't help you. That's where host and overlay networks come in.

This lesson covers two network drivers that sit at opposite ends of the isolation spectrum. The host network removes all isolation — the container shares the host's network stack directly. The overlay network extends container networking across multiple physical machines as if they were one. Understanding both makes you far more capable when real production problems land on your desk.

The Host Network Driver

When a container runs on the host network, it shares the host machine's network namespace entirely. There is no virtual network interface, no bridge, no NAT. The container binds directly to the host's ports — as if the application were running natively on the machine, not inside a container at all.

The immediate consequence: port mapping is meaningless on the host network. You don't use -p 3000:3000 because there's nothing to map — the container already owns port 3000 on the host directly. If something else is already using that port, the container fails to start.
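A quick side-by-side sketch of the difference. The image name my-app:latest is a placeholder for any web application listening on port 3000:

```shell
# Bridge network (default): -p maps container port 3000 to host port 3000
docker run -d --name web-bridge -p 3000:3000 my-app:latest

# Host network: no mapping exists — the port the app listens on
# IS the host port, and a -p flag would simply be ignored
# (recent Docker versions warn that published ports are discarded)
docker run -d --name web-host --network host my-app:latest
```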

The Open-Plan Office Analogy

The bridge network gives every container its own private office with a door and a lock. The host network tears down all the walls and puts everyone in a single open-plan space. Maximum communication efficiency — but zero privacy. The container processes are on the same "floor" as the host OS processes, sharing every network resource. That's fast, but it means a misconfigured container can interfere with the host's own network stack.

[Diagram — host network, no isolation: nginx-container (port 80 on host) and api-container (port 3000 on host) sit directly on the Host Network Stack (eth0, lo, all interfaces). No bridge, no NAT, no port mapping.]

Containers bind directly to host ports. The -p flag is not needed — published ports are ignored on the host network. All host network interfaces are visible inside the container.

Host Network — When It Makes Sense

The host network sounds like it removes Docker's core isolation guarantee — and it does, partially. So why use it at all?

Performance-critical workloads — Bridge and overlay networks add overhead from NAT translation and virtual interfaces. For high-throughput networking applications — load balancers, packet processors, network monitoring tools — this overhead is measurable. The host network eliminates it entirely.

Applications that need to discover or bind to host interfaces — Some tools need to enumerate and bind to all network interfaces on the machine. Inside a bridge-networked container, they only see the container's virtual interface. On the host network, they see everything the host sees.
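You can see the difference directly, using the small alpine image as a stand-in (busybox's ip command is included in it):

```shell
# Bridge network (default): the container sees only lo and its own
# virtual interface (eth0) — nothing of the host's real interfaces
docker run --rm alpine ip addr

# Host network: the container sees every interface the host has,
# exactly as `ip addr` would show when run natively on the host
docker run --rm --network host alpine ip addr
```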

Linux only — The host network driver only works on Linux. On macOS and Windows, Docker Desktop runs containers inside a Linux VM, so the "host" network is the VM's network stack — not your Mac or Windows machine's interfaces. For true host networking on macOS, there is no equivalent.

docker run -d \
  --name nginx-host \
  --network host \
  nginx:alpine
# --network host   → attach to the host network directly
# No -p flag — port mapping is not used on the host network
# nginx binds to port 80 on the host directly
# Access via http://localhost:80 — not through any Docker port mapping
3a7f2c9e1b4d6f8a0c2e4f6a8b0d2e4f6a8b0c2d4e6f8a0b2c4d6e8f0a2b4c6d

/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
2024/01/15 09:23:11 [notice] 1#1: nginx/1.25.3
2024/01/15 09:23:11 [notice] 1#1: start worker processes
# nginx is now listening on the host's port 80 directly
# docker ps shows an empty PORTS column — there is no mapping to display

What just happened?

nginx started and bound directly to port 80 on the host's network interface — not through Docker's port mapping mechanism. Visit http://localhost in your browser and you reach nginx, but there's no Docker NAT layer in between. Run docker ps and the PORTS column is empty — because there's nothing to map. The container doesn't have its own IP address on the host network; it shares the host's IP entirely. Two containers on the host network cannot both use the same port — they'd conflict just like two processes on the same machine trying to bind to port 80.
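You can verify each of these claims from the host. A sketch, assuming the nginx-host container from above is still running:

```shell
# nginx answers on port 80 with no Docker NAT layer in between
curl -I http://localhost

# The PORTS column is empty — nothing is mapped
docker ps --filter name=nginx-host --format '{{.Names}}  {{.Ports}}'

# A second host-network container wanting port 80 will not stay up:
# nginx inside it exits because the port is already taken on the host
docker run -d --name nginx-host-2 --network host nginx:alpine
```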

Host Network Removes Container Isolation

A container on the host network can see and potentially interfere with all host network interfaces and other processes binding to ports. If the container is compromised, the attacker has direct access to the host's network stack — not just the container's isolated namespace. Only use the host network when performance requirements demand it and the security implications are understood and accepted.

The Overlay Network Driver

An overlay network stretches a single virtual network across multiple Docker hosts. Containers on different physical machines can communicate with each other using container names and IPs as if they were on the same local network — even though packets are physically travelling across the internet or a data centre network between them.

Overlay networks are the backbone of Docker Swarm — Docker's built-in container orchestration system. When you deploy a service across a Swarm cluster of ten nodes, the overlay network ensures that a container on node 1 can reach a container on node 7 by name, with no manual networking configuration.

The VPN Analogy

An overlay network is like a VPN that connects offices in different cities. Each office has its own local network — staff can talk to each other locally at full speed. But the VPN creates a secure tunnel between offices so that a person in London can reach a server in Singapore using its internal hostname, as if they were in the same building. The overlay network creates the same tunnel between Docker hosts — containers on different machines communicate as if they're on the same local switch.

[Diagram — overlay network spanning multiple hosts: Docker Host 1 runs api-service (10.0.0.3) and worker-1 (10.0.0.4); Docker Host 2 runs api-service (10.0.0.5) and worker-2 (10.0.0.6). All four containers sit on the overlay app-overlay (10.0.0.0/24), joined by a VXLAN tunnel carrying encrypted packets over the real network.]
Containers on Host 1 and Host 2 share the same 10.0.0.0/24 overlay subnet. api-service on Host 1 can reach worker-2 on Host 2 by name. VXLAN tunnels the traffic transparently between hosts.

Creating and Using an Overlay Network

Overlay networks require either a Docker Swarm cluster or an external key-value store for node coordination. The most common path is initialising a Swarm — even a single-node Swarm — which unlocks the overlay driver.

The scenario: You're a DevOps engineer deploying a microservices application across a three-node Docker Swarm cluster. The API service and the worker service need to communicate across different physical machines on the same overlay network.

# Step 1 — Initialise Docker Swarm on the manager node
docker swarm init --advertise-addr 192.168.1.10
# --advertise-addr → the IP address other nodes use to reach this manager
# This outputs a join token for worker nodes to use

# Step 2 — Create an overlay network
docker network create \
  --driver overlay \
  --subnet 10.0.0.0/24 \
  app-overlay
# --driver overlay → use the overlay driver (requires Swarm mode)
# Containers on any Swarm node can now join this network

# Step 3 — Deploy services on the overlay network
docker service create \
  --name api-service \
  --network app-overlay \
  --replicas 2 \
  order-api:v1.0.0
# docker service create → Swarm command that deploys containers across nodes
# --replicas 2          → run 2 instances, spread across available nodes
# Both replicas join app-overlay and are reachable by name from any node
# Output of step 1:
Swarm initialized: current node (x7k2m3n4p5q6) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-3xyz...abc 192.168.1.10:2377

# Output of step 2 — the new network's ID:
q8r9s0t1u2v3w4x5y6z7a8b9

# docker service ls after step 3 — both replicas running:
nzvs4qkp3y6m   api-service   replicated   2/2        order-api:v1.0.0

What just happened?

Swarm initialised and elected this node as the manager. The join token allows worker nodes to join the cluster. The overlay network app-overlay was created with a 10.0.0.0/24 subnet — Docker uses VXLAN (Virtual Extensible LAN) tunnels to carry container traffic between hosts over the existing physical network. The service deployed with two replicas — Swarm's scheduler distributed them across available nodes. Both replicas joined the overlay network and are now reachable by the service name api-service from any container on any node in the cluster.
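A quick way to check the result. The container name in the second command is a placeholder — actual task container names vary, so find one with docker ps on the node where a replica landed:

```shell
# Confirm the network exists, uses the overlay driver, and is swarm-scoped
docker network inspect app-overlay --format '{{.Driver}} {{.Scope}}'
# expected: overlay swarm

# From inside any api-service task, the service is reachable by name —
# Swarm's DNS resolves api-service to a virtual IP on the overlay
docker exec -it <api-service-task-container> ping -c 2 api-service
```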

Network Driver Summary

All four Docker network drivers at a glance

bridge Single-host networking. Containers isolated by default, communicate via DNS on custom networks. The right choice for the vast majority of use cases.
host No isolation — container shares the host's network stack directly. Linux only. Use for high-performance networking workloads where NAT overhead is unacceptable.
overlay Multi-host networking via VXLAN tunnels. Requires Docker Swarm. Containers across different physical machines communicate as if local. Used in production distributed systems.
none No network interface at all. Complete network isolation. Used for batch processing containers and security-sensitive workloads that must have zero network access.
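The none driver is easy to see in action — a sketch using the alpine image:

```shell
# A container on the none network gets only the loopback interface:
# `ip addr` shows lo and nothing else — no eth0, no route out
docker run --rm --network none alpine ip addr
```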

Teacher's Note

For most developers, bridge is the only driver you'll use day to day. Host and overlay appear when you hit specific performance or multi-host requirements — knowing they exist and what they do means you'll recognise the right tool when that moment comes.

Practice Questions

1. A container that shares the host machine's network stack directly — with no bridge, no NAT, and no port mapping — is using the ___ network driver.



2. Overlay networks carry container traffic between Docker hosts using tunnels built on which tunnelling protocol?



3. Before you can create an overlay network, you must first initialise a Docker cluster using which command?



Quiz

1. A developer on macOS tries to run a container with --network host expecting it to bind directly to their Mac's network interfaces. Why does this not work as expected?


2. A production application is deployed across three Docker Swarm nodes. Containers on different nodes need to discover and communicate with each other by name. Which network driver enables this?


3. For a standard multi-container application running on a single developer machine or a single production server, which network driver is the correct default choice?


Up Next · Lesson 20

Port Mapping

Networks handle container-to-container communication — port mapping handles the other direction. How traffic from the outside world reaches your containers, and the exact mechanics of how Docker routes it.