Docker Course
Environment Variables
The same Docker image needs to connect to a different database in development, staging, and production. The image doesn't change — the configuration does. Environment variables are how you inject that configuration at runtime without touching the image itself.
Environment variables are one of the most important concepts in containerised applications. Get this right and your images become truly portable — the same image runs in every environment just by changing the variables you pass to it. Get it wrong and you end up baking passwords and database URLs into your images, which is both a security risk and an operational nightmare.
Environment Variables in the Twelve-Factor App
The Twelve-Factor App methodology — the design principles behind most modern cloud-native applications — explicitly states that configuration should be stored in the environment, not in the code. A correctly configured container application reads everything environment-specific — database URLs, API keys, feature flags, port numbers — from environment variables at startup.
This means one Dockerfile, one image, infinite environments. The same payment-api:v1.2.0 image runs in development pointing at a local Postgres, in staging pointing at the staging database, and in production pointing at the production cluster — because the image reads its database URL from DATABASE_URL, and you set that variable differently in each environment.
The Passport Analogy
A Docker image is like a person — their skills, capabilities, and personality are fixed. Environment variables are like a passport — they define where that person is allowed to go and under what identity. A dual citizen can enter different countries by presenting a different passport; the same image can run in different environments by receiving different environment variables. The person doesn't change. The passport does.
Three Ways to Set Environment Variables
Docker gives you three distinct mechanisms for setting environment variables in a container, each suited to different situations.
Method 1 — The -e Flag at Runtime
The most direct method — pass variables explicitly on the command line when starting the container. Best for individual variables, quick testing, and CI/CD pipelines where variables are injected dynamically.
docker run -d \
--name payment-api \
-p 3000:3000 \
-e NODE_ENV=production \
-e DATABASE_URL=postgresql://user:pass@db-container:5432/payments \
-e REDIS_URL=redis://redis-cache:6379 \
-e JWT_SECRET=supersecretkey123 \
-e PORT=3000 \
payment-api:v1.2.0
# -e KEY=VALUE → set a single environment variable
# Use multiple -e flags for multiple variables
# These are available inside the container as process environment variables
7f3a9c12e44d1b8e5f920c3d6a1b7e4f8c3d9a2b1e6f5c4d3a2b1c9e8f7d6e5f4
Payment API v1.2.0 starting...
Environment: production
Database: postgresql://user:***@db-container:5432/payments
Redis: redis://redis-cache:6379
Server listening on port 3000
What just happened?
Five environment variables were injected into the container at runtime. The app startup log confirms it read them correctly — NODE_ENV=production set the environment, the database and Redis URLs tell the app where its dependencies are, and PORT=3000 sets the listening port. The database password is masked in the log output — a good sign that the application is handling secrets responsibly. None of these values are in the image. Pull the same image and run it with different variables and it's a completely different configuration.
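That masking happens inside the application, not in Docker. As a minimal sketch of one way to do it — redacting the password segment of a connection URL before logging — here is a sed one-liner (the pattern is an illustration of the idea, not taken from the app's actual code):

```shell
# Redact the password portion of a connection URL before logging it.
# This mirrors the masked log line above; the exact approach is assumed.
url='postgresql://user:pass@db-container:5432/payments'
echo "$url" | sed -E 's#(://[^:/]+:)[^@]+@#\1***@#'
# prints: postgresql://user:***@db-container:5432/payments
```

The capture group keeps the scheme and username; only the characters between the last `:` and the `@` are replaced.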
Method 2 — The --env-file Flag
When you have many variables, listing them all with -e becomes unwieldy. The --env-file flag reads variables from a plain text file — one KEY=VALUE per line. This is the standard approach for local development.
# .env file contents (never commit this to git)
NODE_ENV=development
DATABASE_URL=postgresql://postgres:devpass@localhost:5432/payments_dev
REDIS_URL=redis://localhost:6379
JWT_SECRET=dev-secret-not-for-production
PORT=3000
LOG_LEVEL=debug
# Run with the env file
docker run -d \
--name payment-api-dev \
-p 3000:3000 \
--env-file .env \
payment-api:v1.2.0
# --env-file .env → load all variables from the .env file
# Lines starting with # are treated as comments and ignored
# Empty lines are ignored
b2d4f6a8c0e2f4a6b8d0e2f4a6b8c0d2e4f6a8b0c2d4e6f8a0b2c4d6e8f0a2b4
Payment API v1.2.0 starting...
Environment: development
Database: postgresql://postgres:***@localhost:5432/payments_dev
Server listening on port 3000 (debug logging enabled)
What just happened?
All six variables from the .env file were loaded into the container in one flag instead of six separate -e arguments. The app started in development mode with debug logging — exactly as configured. The .env file lives on your host machine and is never copied into the image — it's passed at runtime only, which means different team members can have their own .env with their own credentials, and the image itself stays clean. Keep .env in .gitignore and .dockerignore — always.
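The parsing rules docker applies to the file — keep KEY=VALUE lines, skip # comments and blank lines — can be roughly emulated in a couple of lines of shell (a sketch only; the file name here is made up):

```shell
# Create a sample env file (name is hypothetical)
cat > sample.env <<'EOF'
# local dev settings
NODE_ENV=development

LOG_LEVEL=debug
EOF

# Roughly what docker run --env-file would load: no comments, no blank lines
grep -Ev '^(#|$)' sample.env
# prints:
# NODE_ENV=development
# LOG_LEVEL=debug
```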
Method 3 — ENV in the Dockerfile
The ENV instruction in a Dockerfile bakes a variable into the image itself as a default value. Every container started from that image gets that variable automatically — unless you override it at runtime with -e.
ENV is the right place for non-sensitive defaults — the application's default port, the default log level, the Node environment type. It is absolutely not the right place for passwords, API keys, or anything that differs between environments.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
ENV PORT=3000
# Set a sensible default port — can be overridden with -e PORT=8080 at runtime
ENV NODE_ENV=production
# Default to production — safer than defaulting to development
# Override with -e NODE_ENV=development for local dev runs
ENV LOG_LEVEL=info
# Default log level — override with -e LOG_LEVEL=debug when troubleshooting
EXPOSE ${PORT}
# EXPOSE can reference ENV variables — reads the PORT variable set above
CMD ["node", "server.js"]
[+] Building 19.2s (9/9) FINISHED
=> [1/5] FROM node:18-alpine 3.8s
=> [2/5] WORKDIR /app 0.0s
=> [3/5] COPY package*.json ./ 0.1s
=> [4/5] RUN npm install --omit=dev 13.6s
=> [5/5] COPY . . 0.2s
=> exporting to image 1.5s
# Inspect the image to confirm ENV defaults are baked in
docker inspect payment-api:v1.2.0 --format '{{json .Config.Env}}'
["PORT=3000","NODE_ENV=production","LOG_LEVEL=info","PATH=/usr/local/sbin:..."]
What just happened?
The ENV instructions baked three default variables into the image. The docker inspect command confirms they're there — PORT=3000, NODE_ENV=production, and LOG_LEVEL=info are part of the image's configuration metadata. Any container started from this image gets those defaults automatically. Override any of them at runtime with -e and the runtime value wins over the image default. This layered approach — sensible defaults in the image, overrides at runtime — is the correct pattern for containerised application configuration.
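The "runtime value wins" rule can be sketched without Docker at all: env builds an environment left to right, and a later assignment replaces an earlier one — the same layering as an ENV default being overridden by -e (an analogy for the behavior, not Docker's actual implementation):

```shell
# The first value plays the role of the image's ENV default,
# the second the runtime -e override — the later one wins.
env LOG_LEVEL=info LOG_LEVEL=debug printenv LOG_LEVEL
# prints: debug
```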
Checking Environment Variables in a Running Container
# Print all environment variables set in a running container
docker exec payment-api env
# Check a specific variable
docker exec payment-api printenv NODE_ENV
# Via docker inspect — useful for scripting
docker inspect payment-api --format '{{json .Config.Env}}'
NODE_ENV=production
DATABASE_URL=postgresql://user:pass@db-container:5432/payments
REDIS_URL=redis://redis-cache:6379
JWT_SECRET=supersecretkey123
PORT=3000
LOG_LEVEL=info
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=7f3a9c12e44d

production

["NODE_ENV=production","DATABASE_URL=postgresql://...","PORT=3000","LOG_LEVEL=info"]
What just happened?
docker exec payment-api env dumps every environment variable the container process can see — both the ones you set and the ones Docker adds automatically, like PATH and HOSTNAME. printenv NODE_ENV returns just the value of a single variable — useful for quickly verifying one specific setting. The docker inspect JSON output is machine-readable — the form automated health checks and deployment verification scripts consume. Together, these three commands cover every debugging scenario involving environment variables.
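A deployment-verification check built on the inspect output might look like this sketch — the JSON is inlined here so the check runs standalone; in a real script it would come from docker inspect, and the script shape itself is an assumption:

```shell
# Normally: envjson=$(docker inspect payment-api --format '{{json .Config.Env}}')
envjson='["NODE_ENV=production","PORT=3000","LOG_LEVEL=info"]'

# Fail the deploy step if the expected value is missing
case "$envjson" in
  *'"NODE_ENV=production"'*) echo "OK: NODE_ENV is production" ;;
  *) echo "FAIL: NODE_ENV mismatch" >&2; exit 1 ;;
esac
# prints: OK: NODE_ENV is production
```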
The Three-Method Decision
Choosing the right method
ENV in Dockerfile
Non-sensitive defaults that apply everywhere — default port, default log level, application name. Values you'd be comfortable committing to git.
--env-file
Local development with many variables. Team members have their own .env file with their own credentials. Never committed to git, never in the image.
-e at runtime
CI/CD pipelines, production deployments, and container orchestration systems where variables are injected programmatically from a secrets manager or environment config.
Never Bake Secrets into Images
Setting a database password or API key with ENV DB_PASSWORD=secret in a Dockerfile bakes that secret permanently into every layer of the image. Even if you try to overwrite it in a later layer, the original value is still visible in the image history via docker history --no-trunc. Anyone who can pull the image can read the secret. Secrets always go at runtime — via -e, --env-file, or Docker Secrets (Lesson 33).
Teacher's Note
The pattern that works everywhere: ENV in Dockerfile for safe defaults, --env-file .env locally, and -e flags injected by your CI/CD system in staging and production. Three methods, three contexts, zero secrets in images.
Practice Questions
1. To load a large number of environment variables from a plain text file when running docker run, which flag do you use?
2. To print every environment variable currently visible inside a running container, which command do you run?
3. The Dockerfile instruction used to set a default environment variable that gets baked into the image is called what?
Quiz
1. A developer adds ENV DB_PASSWORD=supersecret to their Dockerfile. What is the security risk?
2. A Dockerfile sets ENV NODE_ENV=production. At runtime the container is started with -e NODE_ENV=development. Which value does the running container see?
3. A production deployment pipeline needs to pass a database password and API key to a container. The most secure approach is:
Up Next · Lesson 22
Docker Best Practices
Section II ends with everything you've learned put together — the habits that separate a 1.2 GB amateur image from an 87 MB professional one, and the patterns that make containers reliable in production.