Docker Lesson 16 – Bind Mounts vs Volumes | Dataplexa
Section II · Lesson 16

Bind Mounts vs Volumes

Volumes are Docker's preferred way to persist data. Bind mounts are a sharper tool — more direct, more powerful, and more dangerous if misused. Every developer reaches for bind mounts the moment they want live code reloading in a container. This lesson teaches you exactly when that's the right call and when it isn't.

Both volumes and bind mounts mount data into a container from outside its filesystem. The difference is in who controls where that data lives and how tightly it's coupled to your host machine's directory structure.

Volumes — A Quick Recap

A Docker volume is a storage unit fully managed by the Docker daemon. You create it with a name, Docker stores its data in a dedicated directory on the host (/var/lib/docker/volumes/ on Linux), and you mount it into a container using the -v flag. The key properties that define a volume:

Docker owns it — you reference it by name, Docker decides where it physically lives. You never need to know the host path.
Outlives the container — deleting the container leaves the volume intact. The data survives.
Portable and shareable — any container on the same Docker host can mount a named volume, and multiple containers can mount it simultaneously. With a volume driver, the storage can even live on a remote system.
Not tied to the host path — the same volume works identically on macOS, Windows, and Linux regardless of where Docker stores it internally.
No live sync with the host — a volume is not a window into a specific folder on your machine. It's a managed storage unit. You interact with it through containers, not through your filesystem.

Volumes are the right choice whenever data needs to outlive a container — databases, uploaded user files, persistent caches, application state. The question this lesson answers is: when is a volume not enough, and when do you reach for a bind mount instead?
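The "Docker owns it" property is easiest to see on disk. Below is a minimal sketch of the layout the daemon uses on Linux, recreated under /tmp so it runs without a daemon; the volume name app-data and the /tmp paths are hypothetical stand-ins.

```shell
# Stand-in for /var/lib/docker/volumes/ — a real `docker volume create app-data`
# produces the same shape there: a directory per volume name, with the actual
# data living in a _data subdirectory.
VOLUMES_ROOT=/tmp/fake-docker/volumes
mkdir -p "$VOLUMES_ROOT/app-data/_data"
ls "$VOLUMES_ROOT"           # prints: app-data
ls "$VOLUMES_ROOT/app-data"  # prints: _data  (the volume's payload directory)
```

You never touch this layout directly in practice; you reference the volume by name and let Docker resolve the path.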

Bind Mounts — Direct Host Directory Access

A bind mount maps a specific directory or file on your host machine directly into a container. The container sees that exact path on your host — not a Docker-managed copy, but the actual files. Any change made on the host appears instantly inside the container, and any change made inside the container appears instantly on the host.

This bidirectional, real-time sync is why bind mounts are the go-to tool for local development — you edit code in your editor, save the file, and the running container immediately sees the change without rebuilding the image.

The Shared Folder Analogy

A bind mount is like a shared folder between two computers on the same network. Both computers see the same files in real time — edit on one, the other immediately has the change. There's no copy, no sync delay, no Docker layer in between. The container and your host machine are reading and writing the exact same files on the exact same disk. That's incredibly powerful for development — and risky in production if a container writes something destructive to a path it shouldn't have access to.
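The "exact same files" property can be simulated without Docker at all: a symlink gives two paths to one directory, just as a bind mount gives the host and the container two views of one directory. The /tmp paths and the server.js content below are made up for illustration.

```shell
# Two paths, one directory — the same property a bind mount gives a container.
mkdir -p /tmp/bind-demo/host-project
ln -sfn /tmp/bind-demo/host-project /tmp/bind-demo/container-view

# "Edit on the host":
echo "console.log('hello');" > /tmp/bind-demo/host-project/server.js

# "Read inside the container" — same bytes, no copy, no sync delay:
cat /tmp/bind-demo/container-view/server.js
```

With a real bind mount the second path lives inside the container's filesystem, but the underlying storage is identical in both cases: one directory, two names.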

Bind Mounts in Practice — Live Development

The scenario: You're a full-stack developer working on a Node.js API. You're running it inside a Docker container to match the production environment — but you don't want to rebuild the image every time you change a line of code. A bind mount lets your editor changes hit the running container instantly.

docker run -d \
  --name order-api-dev \
  -p 3000:3000 \
  -v $(pwd):/app \
  -v /app/node_modules \
  -e NODE_ENV=development \
  order-api:v1.0.0
# -v $(pwd):/app          → bind mount: map the current host directory to /app in the container
#                           $(pwd) resolves to your current working directory
# -v /app/node_modules    → anonymous volume: preserve the container's node_modules
#                           without this, the bind mount overwrites node_modules with your
#                           host's version (or an empty directory if you don't have one locally)
# NODE_ENV=development    → tell the app it's in dev mode
f3a9c12e44d1b8e5f920c3d6a1b7e4f8c3d9a2b1e6f5c4d3a2b1c9e8f7d6e5f4

Order Management API running on port 3000
Environment: development
Watching for file changes...

What just happened?

The container started with your current project directory bind-mounted to /app. Now open server.js in your editor and change something — save it. The container immediately sees the change without any rebuild. The second -v /app/node_modules is a subtle but critical trick: it tells Docker to use an anonymous volume for node_modules inside the container, which takes precedence over the bind mount at that specific subdirectory. This prevents your host's node_modules (which might be missing or have different native binaries) from overwriting the container's cleanly installed packages.

The node_modules Trick

The -v /app/node_modules anonymous volume is one of those Docker patterns every Node.js developer needs to know. Without it, the bind mount maps your entire project, including your host node_modules — which was installed for your host OS, not the container's Linux environment. Native modules compiled for macOS or Windows crash or misbehave inside the container. The anonymous volume carves node_modules out of the bind mount and keeps the container's cleanly installed version intact.
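The same two-mount pattern translates directly to Docker Compose. This is a sketch; the service name api and the image tag are assumptions carried over from the example above.

```yaml
# docker-compose.yml (sketch)
services:
  api:
    image: order-api:v1.0.0
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
    volumes:
      - .:/app              # bind mount: live-sync the project source
      - /app/node_modules   # anonymous volume: shield the container's packages
```

The order of the two entries doesn't matter: Docker mounts the more specific path (/app/node_modules) over the broader one (/app), so the nested mount always wins at that subdirectory.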

Volumes vs Bind Mounts — The Full Comparison

Docker Volumes

  • Managed entirely by the Docker daemon
  • Stored under /var/lib/docker/volumes/ on Linux hosts
  • Portable — work the same on any OS
  • No dependency on host directory structure
  • Sharable between multiple containers
  • Best for: databases, production data, persistent state
  • Can use remote storage drivers (NFS, S3, etc.)

Bind Mounts

  • Host directory mapped directly into container
  • Stored wherever you point them on the host
  • Tied to the host's directory structure
  • Real-time bidirectional sync with host
  • Dependent on the host path existing
  • Best for: local development, config injection, source code
  • Local filesystem only — not portable across machines

Volume vs bind mount — where data lives

Named Volume

  Container
    ↕ mounted at /var/lib/postgresql/data
  Docker volume "postgres-data"
    stored under /var/lib/docker/volumes/
  Host Filesystem

Bind Mount

  Container
    ↕ directly mapped at /app
  /Users/dev/projects/order-api
    your actual project directory
  Host Filesystem

Injecting Config Files with Bind Mounts

Bind mounts aren't only for source code. A common production pattern is using bind mounts to inject configuration files — nginx configs, SSL certificates, application configs — into containers without baking them into the image. The config lives on the host, gets mounted read-only into the container, and can be updated without rebuilding anything.

docker run -d \
  --name nginx-proxy \
  -p 80:80 \
  -p 443:443 \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v $(pwd)/certs:/etc/nginx/certs:ro \
  nginx:alpine
# -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro
#    host file    → container path → :ro means read-only inside the container
#    nginx reads its config from this path — now you control it from the host
# -v $(pwd)/certs:/etc/nginx/certs:ro
#    mount the SSL certificates directory as read-only
#    update certs on the host, restart the container — no image rebuild needed
8d4f2e9c1a3b5d7f9e1c3a5b7d9f1e3c5a7b9d1f3e5c7a9b1d3f5e7c9a1b3d5f

/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
2024/01/15 09:23:11 [notice] 1#1: using the "epoll" event method
2024/01/15 09:23:11 [notice] 1#1: nginx/1.25.3
2024/01/15 09:23:11 [notice] 1#1: start worker processes

What just happened?

nginx started using the config file from your host directory — not the default one baked into the nginx image. The :ro flag at the end of the volume mount makes it read-only inside the container — the container can read the file but cannot modify it. This is the correct security posture for config injection: the container gets what it needs, nothing more. Update nginx.conf on the host and restart the container — the new config takes effect immediately with no image rebuild.

The Decision Framework

Every time you need to persist or share data with a container, ask these questions in order:

Use a Volume
Data that needs to outlive a container and be portable — databases, uploaded files, application state, production caches.
Use a Bind Mount
Local development with live reloading, injecting config files or SSL certs from a specific host path, sharing a file between the host and a container temporarily.
Use Neither
Truly ephemeral data that's fine to lose on container deletion — temp files, build artifacts, processing queues where reprocessing is acceptable.
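The three questions above can be captured in a tiny helper. This is an illustrative sketch, not a Docker feature; the function name and its yes/no inputs are made up for the example.

```shell
# choose_storage: answer the lesson's decision questions (hypothetical helper).
#   $1 = does the data need to outlive the container? (yes/no)
#   $2 = do you need live sync with a specific host path? (yes/no)
choose_storage() {
  if [ "$2" = "yes" ]; then
    echo "bind mount"   # live reload or config injection from a host path
  elif [ "$1" = "yes" ]; then
    echo "volume"       # durable, portable, Docker-managed
  else
    echo "neither"      # ephemeral data, fine to lose with the container
  fi
}

choose_storage no  yes   # local dev with live reload  → bind mount
choose_storage yes no    # production database         → volume
choose_storage no  no    # temp build artifacts        → neither
```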

Never Use Developer-Machine Bind Mounts in Production

Bind mounts only work on a machine where the host path actually exists. A container deployed to a production server or Kubernetes cluster with a bind mount to /Users/dev/projects/order-api will fail immediately, because that path exists only on one developer's laptop. A bind mount is safe in production only when the path is provisioned on the target server, as in the nginx config-injection pattern above. Everything else belongs in docker run commands for local development, never in production deployment configs.

Teacher's Note

The pattern I use on every project: bind mount for local dev (live reload), named volume for the database, and nothing else. Two lines in the docker run command covers 95% of real development needs.
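That pattern maps to a Compose file like the sketch below; the service names, image tags, and the Postgres data path are assumptions, not part of the lesson's examples.

```yaml
# docker-compose.yml (sketch): bind mount for the app, named volume for the DB
services:
  api:
    image: order-api:v1.0.0
    volumes:
      - .:/app                                    # bind mount: live reload in dev
  db:
    image: postgres:16
    volumes:
      - order-db-data:/var/lib/postgresql/data    # named volume: survives the container

volumes:
  order-db-data: {}
```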

Practice Questions

1. To get live code reloading in a container during local development — so editor changes appear instantly without rebuilding the image — you use a what?



2. To make a bind-mounted file or directory read-only inside the container, which suffix do you add to the -v flag value?



3. For persisting production database data that needs to survive container deletion and be portable across environments, the correct storage type is a what?



Quiz

1. A developer uses a bind mount in their production Docker deployment config pointing to /Users/dev/projects/app. Why will this fail on the production server?


2. A Node.js dev container uses -v $(pwd):/app and -v /app/node_modules together. What is the purpose of the second volume flag?


3. What is the fundamental difference between how a Docker volume and a bind mount store their data?


Up Next · Lesson 17

Docker Networking Basics

Storage is solved — now let's tackle the other half of the puzzle. Containers need to talk to each other and to the outside world, and Docker's networking model makes both possible in a way that's secure by default.