Ansible Course
Ansible with Docker
In this lesson: Ansible and Docker are
complementary tools, not competing ones. Docker packages and runs
applications in containers; Ansible provisions the hosts those containers run on,
installs Docker, manages the container lifecycle, and orchestrates multi-container
deployments across a fleet. The community.docker collection provides
idempotent modules for every Docker operation — pulling images, running containers,
creating networks and volumes, and deploying Docker Compose stacks — all from a
playbook that produces the same result whether containers already exist or are being
created for the first time.
Ansible and Docker — Who Does What
The most important mental model for this lesson is the division of responsibility. Docker owns what happens inside a container: the image contents and the isolated process. Ansible owns everything around it: installing Docker, configuring the daemon (/etc/docker/daemon.json), creating networks and volumes, and deciding which containers run on which hosts. Trying to use Docker for what Ansible does well, or vice versa, leads to complex and fragile automation.
The community.docker Collection
All Docker automation in Ansible uses
the community.docker collection. Install it once per project via
requirements.yml and all its modules become available.
ansible-galaxy collection install community.docker
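For reproducible installs across a team, the collection can instead be pinned in a requirements.yml. The version constraint shown is illustrative; pin to whatever you have tested:

```yaml
# requirements.yml — pin the collection version per project
collections:
  - name: community.docker
    version: ">=3.0.0"   # illustrative constraint
```

Then install everything the project needs with `ansible-galaxy collection install -r requirements.yml`.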
community.docker.docker_image
Pull, build, tag, and push Docker images. Supports registry authentication, build args, and Dockerfile path specification.
community.docker.docker_container
Create, start, stop, and remove containers. Full support for port mappings, volumes, environment variables, networks, restart policies, and health checks. The most frequently used Docker module.
community.docker.docker_network
Create and manage Docker networks. Supports bridge, overlay, macvlan drivers and subnet configuration. Networks should always be created before the containers that use them.
community.docker.docker_volume
Create and manage named Docker volumes. Volumes persist data independently of container lifecycle — essential for databases and any stateful workload.
community.docker.docker_compose_v2
Deploy
and manage Docker Compose stacks. Reads a
docker-compose.yml file and reconciles the running containers
to match the declared state — the idiomatic way to manage multi-container
applications with Ansible.
The Shipping Container Analogy
Docker containers are like shipping containers — standardised boxes that hold cargo (your application) and can run anywhere a crane (Docker daemon) is installed. Ansible is the logistics coordinator — it decides which containers go to which port (host), manages the cranes (Docker installation and configuration), arranges the containers on the dock (networking and volumes), and tracks the manifest (desired state). The container handles the cargo isolation; Ansible handles the orchestration.
Managing Images and Containers
The following patterns cover the most common Docker operations — pulling images, running containers with full configuration, and updating a running container to a new image version.
Pulling images — including from private registries
- name: Log in to private container registry
  community.docker.docker_login:
    registry_url: registry.example.com
    username: "{{ vault_registry_user }}"
    password: "{{ vault_registry_password }}"
  no_log: true

- name: Pull application image
  community.docker.docker_image:
    name: registry.example.com/myapp
    tag: "{{ app_version }}"
    source: pull
    force_source: true   # always pull even if image already exists locally
    state: present

- name: Pull specific public image
  community.docker.docker_image:
    name: nginx
    tag: "1.25-alpine"   # always pin image tags — never use 'latest' in production
    source: pull
    state: present
Running containers with full configuration
- name: Run the application container
  community.docker.docker_container:
    name: myapp
    image: "registry.example.com/myapp:{{ app_version }}"
    state: started
    restart_policy: unless-stopped   # restart automatically unless manually stopped
    pull: true                       # pull the latest version of the tag on each run
    # Port mappings
    published_ports:
      - "127.0.0.1:8000:8000"        # bind to localhost only — Nginx proxies to it
    # Environment variables — inject secrets from Vault
    env:
      DATABASE_URL: "postgresql://{{ vault_db_user }}:{{ vault_db_password }}@db:5432/appdb"
      SECRET_KEY: "{{ vault_app_secret_key }}"
      ENVIRONMENT: "{{ app_env }}"   # 'environment' is a reserved name in Ansible — use a different variable
      LOG_LEVEL: "{{ log_level | default('info') }}"
    # Volumes
    volumes:
      - "app_uploads:/app/uploads"                 # named volume for user uploads
      - "/etc/app/config.yml:/app/config.yml:ro"   # bind mount config, read-only
    # Network
    networks:
      - name: app_network
    # Resource limits
    memory: "512m"
    cpus: "1.0"
    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    # Labels for monitoring / log routing
    labels:
      app: myapp
      environment: "{{ app_env }}"
      version: "{{ app_version }}"
TASK [Run the application container] ******************************************
changed: [appserver01]

# On second run (no changes to image or config):
ok: [appserver01]        <-- container already running with correct config

# After updating app_version variable to a new tag:
changed: [appserver01]   <-- container stopped, removed, recreated with new image
What just happened?
The docker_container module
compared the running container's configuration to the desired state. On the second
run with no changes, it reported ok. When the version variable
changed, it stopped the old container, removed it, and started a new one with the
updated image — automatically, without you having to manage the stop/remove/start
sequence manually. This is idempotency applied to container lifecycle management.
Networks and Volumes
Networks and volumes must exist before the containers that use them are started. Always create them in separate tasks earlier in the play — or in a dedicated pre-task block — and use their names in container definitions. This ordering makes the dependency explicit and idempotent.
# Create infrastructure before containers
- name: Create application Docker network
  community.docker.docker_network:
    name: app_network
    driver: bridge
    ipam_config:
      - subnet: "172.20.0.0/16"   # explicit subnet avoids conflicts
    state: present

- name: Create named volume for database data
  community.docker.docker_volume:
    name: postgres_data
    driver: local
    state: present

- name: Create named volume for application uploads
  community.docker.docker_volume:
    name: app_uploads
    driver: local
    state: present

# Now start containers — network and volumes are guaranteed to exist
- name: Run PostgreSQL container
  community.docker.docker_container:
    name: postgres
    image: "postgres:15-alpine"
    state: started
    restart_policy: unless-stopped
    env:
      POSTGRES_DB: appdb
      POSTGRES_USER: "{{ vault_db_user }}"
      POSTGRES_PASSWORD: "{{ vault_db_password }}"
    volumes:
      - "postgres_data:/var/lib/postgresql/data"
    networks:
      - name: app_network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U {{ vault_db_user }}"]
      interval: 10s
      retries: 5
  no_log: true   # env contains the database password
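If later tasks depend on the database actually accepting connections, you can gate on the container's health status. A sketch using community.docker.docker_container_info; the 30-second polling budget is an assumption to adjust for your workload:

```yaml
- name: Wait for PostgreSQL container to report healthy
  community.docker.docker_container_info:
    name: postgres
  register: pg_info
  until: pg_info.container.State.Health.Status | default('') == 'healthy'
  retries: 6   # 6 attempts x 5s delay = 30s budget
  delay: 5
```

The Status field comes from the healthcheck defined on the container; without a healthcheck there is nothing to poll, which is one more reason to define one.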
Docker Compose Integration
When your application is defined in a
docker-compose.yml file, use
docker_compose_v2
to deploy the entire stack from Ansible. This lets developers iterate locally with
docker compose up while production deployments go through Ansible —
both read the same Compose file, ensuring environment parity.
---
- name: Deploy application stack via Docker Compose
  hosts: appservers
  become: true

  tasks:
    - name: Create app directory
      ansible.builtin.file:
        path: /opt/myapp
        state: directory
        owner: "{{ app_user }}"
        mode: "0755"

    - name: Deploy docker-compose.yml from template
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: /opt/myapp/docker-compose.yml
        owner: "{{ app_user }}"
        mode: "0644"

    - name: Create .env file with secrets
      ansible.builtin.copy:
        content: |
          POSTGRES_PASSWORD={{ vault_db_password }}
          SECRET_KEY={{ vault_app_secret_key }}
          APP_VERSION={{ app_version }}
        dest: /opt/myapp/.env
        owner: "{{ app_user }}"
        mode: "0600"   # restrict — contains secrets
      no_log: true

    - name: Deploy Docker Compose stack
      community.docker.docker_compose_v2:
        project_src: /opt/myapp   # directory containing docker-compose.yml
        state: present
        pull: always              # pull updated images before reconciling
        remove_orphans: true      # remove containers no longer in compose file
TASK [Deploy Docker Compose stack] ********************************************
changed: [appserver01] => {
    "actions": {
        "myapp_web_1": "Starting",
        "myapp_db_1": "Running",        <-- already running, no change
        "myapp_nginx_1": "Recreating"   <-- config changed, recreated
    }
}
What just happened?
docker_compose_v2 reconciled the
running stack against the Compose file. The database was already running and
unchanged — reported as Running. The web container was starting fresh.
The Nginx container's configuration had changed — it was recreated. Only the
containers that needed updating were touched. This is Compose's declarative
reconciliation applied at scale through Ansible.
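The docker-compose.yml.j2 template itself is not shown in the lesson. A sketch of what it might contain, with an illustrative three-service layout; since Compose reads the .env file in the project directory, the template can lean on Compose's own ${VAR} substitution rather than Jinja:

```yaml
# docker-compose.yml.j2 — illustrative stack; ${VAR} values come from .env
services:
  web:
    image: registry.example.com/myapp:${APP_VERSION}
    environment:
      SECRET_KEY: ${SECRET_KEY}
      DATABASE_URL: postgresql://app:${POSTGRES_PASSWORD}@db:5432/appdb
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"

volumes:
  postgres_data:
```

Keeping secrets in .env rather than templated into the Compose file means the same file works for local `docker compose up` and for Ansible-driven deployments.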
Multi-Host Docker Deployment
The scenario: A team runs a Python web application behind an Nginx reverse proxy, with a PostgreSQL database. Each tier runs on dedicated Docker hosts. Ansible manages container placement, networking, secret injection, and rolling updates across all three tiers from a single playbook run.
---
# docker_deploy.yml — multi-tier container deployment
- name: Deploy database tier
  hosts: db_hosts
  become: true

  tasks:
    - name: Ensure postgres data volume exists
      community.docker.docker_volume:
        name: postgres_data
        state: present

    - name: Run PostgreSQL container
      community.docker.docker_container:
        name: postgres
        image: "postgres:15-alpine"
        state: started
        restart_policy: unless-stopped
        env:
          POSTGRES_DB: appdb
          POSTGRES_USER: "{{ vault_db_user }}"
          POSTGRES_PASSWORD: "{{ vault_db_password }}"
        volumes:
          - "postgres_data:/var/lib/postgresql/data"
        published_ports:
          - "{{ ansible_default_ipv4.address }}:5432:5432"
      no_log: true   # env contains the database password

- name: Deploy application tier
  hosts: app_hosts
  become: true

  tasks:
    - name: Pull latest application image
      community.docker.docker_image:
        name: "registry.example.com/myapp"
        tag: "{{ app_version }}"
        source: pull
        force_source: true

    - name: Run application containers (one per CPU)
      community.docker.docker_container:
        name: "myapp_{{ item }}"
        image: "registry.example.com/myapp:{{ app_version }}"
        state: started
        restart_policy: unless-stopped
        published_ports:
          # bind to the host's address so the web tier can reach the backends
          - "{{ ansible_default_ipv4.address }}:{{ 8000 + item }}:8000"
        env:
          # keep the URL as one expression — a folded scalar would insert spaces
          DATABASE_URL: "postgresql://{{ vault_db_user }}:{{ vault_db_password }}@{{ hostvars[groups['db_hosts'][0]]['ansible_default_ipv4']['address'] }}:5432/appdb"
          SECRET_KEY: "{{ vault_app_secret_key }}"
      loop: "{{ range(ansible_processor_vcpus | int) | list }}"
      loop_control:
        label: "myapp_{{ item }}"

- name: Deploy web tier
  hosts: web_hosts
  become: true

  tasks:
    - name: Deploy Nginx config for app backends
      ansible.builtin.template:
        src: nginx_docker.conf.j2
        dest: /etc/nginx/conf.d/app.conf
      notify: nginx | Reload Nginx

    - name: Run Nginx container
      community.docker.docker_container:
        name: nginx
        image: "nginx:1.25-alpine"
        state: started
        restart_policy: unless-stopped
        published_ports:
          - "0.0.0.0:80:80"
          - "0.0.0.0:443:443"
        volumes:
          - "/etc/nginx:/etc/nginx:ro"
          - "/etc/ssl:/etc/ssl:ro"
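The nginx_docker.conf.j2 template is not shown in the lesson. A sketch of what it might generate: one upstream entry per app host and per-CPU port, matching the 8000 + N port scheme the application tier publishes:

```jinja
# nginx_docker.conf.j2 — sketch: upstream built from inventory facts
upstream myapp_backend {
{% for host in groups['app_hosts'] %}
{% for n in range(hostvars[host]['ansible_processor_vcpus'] | int) %}
    server {{ hostvars[host]['ansible_default_ipv4']['address'] }}:{{ 8000 + n }};
{% endfor %}
{% endfor %}
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp_backend;
        proxy_set_header Host $host;
    }
}
```

Because the upstream list is derived from inventory facts, adding an app host to the inventory and re-running the playbook regenerates the Nginx config automatically.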
Docker Daemon Configuration
Before managing containers, Ansible must configure the Docker daemon itself — setting the log driver, storage driver, registry mirrors, and resource limits. This belongs in your provisioning playbook, run once after Docker is installed and before any containers are deployed.
- name: Configure Docker daemon
  ansible.builtin.copy:
    content: "{{ docker_daemon_config | to_nice_json }}"
    dest: /etc/docker/daemon.json
    owner: root
    mode: "0644"
  notify: docker | Restart Docker daemon
  vars:
    docker_daemon_config:
      log-driver: "json-file"
      log-opts:
        max-size: "50m"
        max-file: "3"
      storage-driver: "overlay2"
      live-restore: true   # containers keep running when daemon restarts
      default-ulimits:
        nofile:            # daemon.json expects the capitalised Name/Soft/Hard keys
          Name: nofile
          Soft: 65536
          Hard: 65536
      registry-mirrors:
        - "https://mirror.gcr.io"   # registry mirror for faster pulls
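The notified handler is not shown in the lesson; a minimal sketch, where the handler name must match the notify string exactly:

```yaml
handlers:
  - name: docker | Restart Docker daemon
    ansible.builtin.systemd:
      name: docker
      state: restarted
```

With live-restore: true in daemon.json, running containers survive this restart, so applying daemon config changes does not take the workload down.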
Never Use latest as a Container Image Tag in Production
Specifying image:
nginx:latest means Ansible pulls a different image every time a new
latest is published — silently changing the version running in
production. This is the container equivalent of state: latest
for packages. Always pin to a specific tag: nginx:1.25-alpine,
postgres:15.4, myapp:2.4.1. Update tags deliberately
through a deployment, not incidentally through a routine playbook run. Treat image
tags the same way you treat package versions — they are part of your infrastructure's
reproducibility contract.
Key Takeaways
Pin image tags to specific versions, never latest, so a routine playbook run cannot silently change what is deployed.
Create networks and volumes in their own tasks, before the containers that use them.
Protect secrets passed through env: values in the docker_container task with
no_log: true.
Use docker_compose_v2 for multi-container applications
defined in Compose files — it preserves environment parity with
local development while giving Ansible full orchestration control in
production.
Teacher's Note
Take any docker run
command you currently use manually and translate it into a
docker_container task — every flag has a corresponding parameter.
Run it twice and verify the second run reports ok. That exercise makes
the idempotency of the module concrete and gives you a task you can commit
to version control instead of a command in someone's runbook.
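As a concrete instance of the exercise, here is a hypothetical docker run command and its task equivalent; the container name and image are illustrative:

```yaml
# docker run -d --name redis --restart unless-stopped \
#   -p 127.0.0.1:6379:6379 -v redis_data:/data redis:7-alpine
- name: Run Redis container
  community.docker.docker_container:
    name: redis                      # --name redis
    image: "redis:7-alpine"          # pinned tag, as the lesson advises
    state: started                   # -d — detached is the module's default
    restart_policy: unless-stopped   # --restart unless-stopped
    published_ports:
      - "127.0.0.1:6379:6379"        # -p 127.0.0.1:6379:6379
    volumes:
      - "redis_data:/data"           # -v redis_data:/data
```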
Practice Questions
1. Which Ansible collection provides
all Docker management modules including
docker_container, docker_image, and
docker_compose_v2?
2. Your application is defined in a
docker-compose.yml file. Which module deploys and reconciles the
entire stack with a single task?
3. A docker_container
task passes a database password in the env: block. Which task
attribute must be set to prevent the password appearing in Ansible output?
Quiz
1. A docker_container
task is run for the second time with no changes to the image tag or any
parameters. What does it report and why?
2. A container task fails with "network app_network not found". What is the cause and fix?
3. Why should a PostgreSQL container use a named volume rather than a bind mount to a host directory for its data?
Up Next · Lesson 33
Ansible with Kubernetes
Go beyond single-host containers — learn to manage Kubernetes resources with Ansible, deploy applications to clusters, manage namespaces and secrets, and integrate Ansible into a Kubernetes-native deployment pipeline.