Terraform Course
Backend Configuration
You have a working S3 backend. But hardcoding the bucket name and key in every project's versions.tf creates a different problem — the same configuration cannot deploy to multiple environments without editing the backend block. This lesson covers partial backend configuration, backend overrides at init time, and the patterns real teams use to manage backends across environments without duplication.
This lesson covers
Partial backend configuration → -backend-config flag → backend.hcl files → Switching backends → Backend configuration in CI/CD → Azure Blob and GCS backends
The Problem With Hardcoded Backends
When the full backend configuration is hardcoded in versions.tf, deploying the same codebase to different environments means changing the backend block — and that means changing committed code. The key path for dev is different from staging is different from prod. In CI/CD, each pipeline stage needs a different key. Hardcoding forces you to either duplicate configurations or change source-controlled files between deployments.
Partial backend configuration solves this. Leave the static parts in the backend block — the bucket name and region, which rarely change. Supply the dynamic parts — the key, which changes per environment — at terraform init time from outside the codebase.
The Analogy
Think of the backend block like a mailing address template. The city and country — the bucket and region — are the same for every letter. The street address and recipient — the key path and environment — change for each delivery. Partial backend configuration lets you print the city and country on the template and fill in the street address at the post office. One template, many destinations.
Setting Up
Create a project that will deploy to both dev and prod using the same codebase and the same backend bucket — but different state key paths.
mkdir terraform-lesson-16
cd terraform-lesson-16
touch versions.tf variables.tf main.tf outputs.tf backend-dev.hcl backend-prod.hcl .gitignore
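The .gitignore keeps local Terraform artifacts out of the repository. A minimal version, assuming the standard Terraform ignore entries (these are conventional, not taken from earlier lesson files):

```
# .gitignore
# Local Terraform working directory — provider binaries and backend metadata
.terraform/
# Local state and backups — should not exist once the S3 backend is active, but ignore defensively
*.tfstate
*.tfstate.backup
# Saved plan files — may embed sensitive values
tfplan
# Local variable overrides
*.auto.tfvars
```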
Add this to variables.tf:
variable "environment" {
  description = "Deployment environment — controls resource naming and sizing"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be dev, staging, or prod."
  }
}

variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}
Add this to main.tf:
# A simple resource to demonstrate multi-environment deployment
resource "aws_s3_bucket" "app" {
  bucket = "lesson16-app-${var.environment}-${data.aws_caller_identity.current.account_id}"

  tags = {
    Name        = "lesson16-app-${var.environment}"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}
# Data source — current AWS account for unique bucket naming
data "aws_caller_identity" "current" {}
Partial Backend Configuration
A partial backend configuration leaves some required arguments out of the backend block. Terraform does not fill them in from variables — backend blocks do not support variable interpolation. The missing arguments must be supplied at terraform init time using -backend-config flags or a backend configuration file.
New terms:
- partial backend configuration — a backend block that omits one or more required arguments. Terraform accepts this — it will prompt for the missing values interactively or accept them via -backend-config at init time. The static arguments stay in the backend block in source control. The dynamic arguments stay outside.
- -backend-config flag — passes additional backend configuration at init time. Accepts either a key=value pair: -backend-config="key=dev/terraform.tfstate", or a path to an HCL file containing multiple configuration values: -backend-config=backend-dev.hcl. Multiple flags can be combined.
- backend.hcl file — a plain HCL file containing backend configuration key-value pairs. Not a Terraform configuration file — it has no blocks, only assignments. Typically gitignored for environment-specific files, or committed for shared non-sensitive values.
- -reconfigure flag — forces Terraform to reconfigure the backend even if the configuration has not changed. Used when you want to switch the active environment without modifying any files. Combined with -backend-config to point at a different state key.
Add this to versions.tf — note the partial backend block with only the static values:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Partial backend configuration — static values only
  # The key (state file path) is supplied at terraform init time via -backend-config
  # This allows the same codebase to deploy to dev, staging, and prod
  # without editing this file between deployments
  backend "s3" {
    bucket         = "acme-terraform-state-123456789012" # Static — same for all environments
    region         = "us-east-1"                         # Static — same for all environments
    encrypt        = true                                # Static — always true
    dynamodb_table = "terraform-state-lock"              # Static — same for all environments
    # key is intentionally omitted — supplied at init time
  }
}

provider "aws" {
  region = var.region
}
Now create the environment-specific backend files. Add this to backend-dev.hcl:
# backend-dev.hcl — backend configuration overrides for the dev environment
# Passed to terraform init via: terraform init -backend-config=backend-dev.hcl
# This file CAN be committed to Git — it contains no secrets
key = "lesson-16/dev/terraform.tfstate" # Unique state path for dev
Add this to backend-prod.hcl:
# backend-prod.hcl — backend configuration overrides for the prod environment
# Passed to terraform init via: terraform init -backend-config=backend-prod.hcl
key = "lesson-16/prod/terraform.tfstate" # Unique state path for prod
Now deploy to dev — notice how the same codebase is used for both environments:
# Deploy to dev — init with dev backend config, apply with dev variable
terraform init -backend-config=backend-dev.hcl
terraform apply -var="environment=dev"
# Deploy to prod — reconfigure backend to point at prod state, apply with prod variable
terraform init -reconfigure -backend-config=backend-prod.hcl
terraform apply -var="environment=prod"
$ terraform init -backend-config=backend-dev.hcl
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Backend configuration:
bucket = "acme-terraform-state-123456789012"
dynamodb_table = "terraform-state-lock"
encrypt = true
key = "lesson-16/dev/terraform.tfstate" # Supplied from backend-dev.hcl
region = "us-east-1"
$ terraform apply -var="environment=dev"
+ aws_s3_bucket.app {
+ bucket = "lesson16-app-dev-123456789012"
}
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ terraform init -reconfigure -backend-config=backend-prod.hcl
Initializing the backend...
Successfully configured the backend "s3"!
Backend configuration:
key = "lesson-16/prod/terraform.tfstate" # Now pointing at prod state
$ terraform apply -var="environment=prod"
+ aws_s3_bucket.app {
+ bucket = "lesson16-app-prod-123456789012"
}
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
What just happened?
- The same codebase deployed to two environments — zero file edits between deployments. The versions.tf file was never modified. The static backend values stayed constant. Only the -backend-config and -var flags changed between the two deployments — values passed at the command line, not in source control.
- -reconfigure forced a backend switch without prompting. Normally when you run terraform init and the backend configuration changes, Terraform asks whether to migrate existing state. With -reconfigure, it silently reconfigures to the new backend without migrating — it is pointing at a fresh state file for prod, which is exactly what you want when deploying to a different environment.
- Two separate state files in the same S3 bucket. The dev state lives at lesson-16/dev/terraform.tfstate and the prod state at lesson-16/prod/terraform.tfstate. Each state file tracks only the resources for its environment. Destroying dev infrastructure does not touch prod infrastructure.
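Assuming the AWS CLI is configured with the same credentials, you can confirm both state files landed in the bucket (the bucket name is the example value from versions.tf):

```shell
# List everything under the lesson-16/ prefix — one state object per environment
aws s3 ls s3://acme-terraform-state-123456789012/lesson-16/ --recursive
```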
Using -backend-config with Key=Value Pairs
For CI/CD pipelines where environment variables are the primary configuration mechanism, passing backend config as inline key=value pairs is often cleaner than managing separate .hcl files. Multiple -backend-config flags can be combined — each one adds or overrides one argument.
# Pass individual backend arguments as key=value pairs
# Multiple flags combine — all are merged into the final backend configuration
terraform init \
-backend-config="key=lesson-16/staging/terraform.tfstate" \
-backend-config="bucket=acme-terraform-state-123456789012" \
-backend-config="region=us-east-1"
# In a CI/CD pipeline — environment variables drive the key path
# The pipeline sets ENV before running Terraform
export ENV="staging"
terraform init \
-backend-config="key=lesson-16/${ENV}/terraform.tfstate" \
-reconfigure
$ export ENV="staging"
$ terraform init -backend-config="key=lesson-16/${ENV}/terraform.tfstate" -reconfigure
Initializing the backend...
Successfully configured the backend "s3"!
Backend configuration:
bucket = "acme-terraform-state-123456789012" # From versions.tf
dynamodb_table = "terraform-state-lock" # From versions.tf
encrypt = true # From versions.tf
key = "lesson-16/staging/terraform.tfstate" # From -backend-config flag
region = "us-east-1" # From versions.tf
Terraform has been successfully initialized!
What just happened?
- The -backend-config flag merged with the partial backend block. The static values — bucket, region, encrypt, dynamodb_table — came from the backend block in versions.tf. The dynamic key came from the -backend-config flag. Terraform merged them into one complete backend configuration. Neither source is complete on its own — together they are.
- The environment variable resolved in the shell before Terraform saw it. In "key=lesson-16/${ENV}/terraform.tfstate", the ${ENV} is shell variable interpolation, not Terraform interpolation. The shell replaces it with staging before passing the string to Terraform. Terraform sees key=lesson-16/staging/terraform.tfstate — a plain string with no interpolation needed.
- This is the standard CI/CD pattern. The pipeline sets ENV (or equivalent) as part of the deployment stage configuration. The terraform init command uses it to select the correct state key. No Terraform files change between environments — the difference is entirely in the environment variables passed to the pipeline stage.
Backend Configuration in CI/CD
In CI/CD, the pipeline runs Terraform on behalf of engineers. The pipeline needs access to the backend credentials and must supply the correct backend configuration for each environment stage. Here is a complete GitHub Actions workflow that demonstrates the production-standard approach.
New terms:
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY — environment variables the AWS provider reads automatically. In CI/CD, these are stored as encrypted secrets in the pipeline configuration and injected as environment variables into the Terraform job. Never hardcoded, never printed in logs.
- TF_VAR_ in CI — any environment variable prefixed TF_VAR_ is automatically used as a Terraform variable value. Store sensitive variable values as pipeline secrets and inject them as environment variables such as TF_VAR_db_password. No -var flags needed.
- -auto-approve — skips the interactive confirmation prompt on apply. Required in CI/CD where there is no human at a terminal to type yes. Only safe when the plan has been reviewed and approved in a prior step — for example, as a pull request check.
- -input=false — tells Terraform never to prompt for input. If a required value is missing, Terraform errors immediately rather than hanging waiting for input. Essential in CI/CD pipelines where interactive prompts would stall the job indefinitely.
# .github/workflows/terraform.yml
# Complete GitHub Actions workflow for Terraform deployment
# This is YAML — not HCL — but the Terraform commands inside are identical
name: Terraform Deploy

on:
  push:
    branches: [main] # Deploy to prod on every merge to main
  pull_request:
    branches: [main] # Plan-only on pull requests — no apply

jobs:
  terraform:
    runs-on: ubuntu-latest
    environment: ${{ github.ref == 'refs/heads/main' && 'production' || 'preview' }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "1.6.6" # Pin an exact version — setup-terraform does not accept "~>" constraints

      - name: Terraform Init
        env:
          # AWS credentials from GitHub secrets — never hardcoded
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform init \
            -input=false \
            -backend-config="key=lesson-16/${{ github.ref == 'refs/heads/main' && 'prod' || 'dev' }}/terraform.tfstate"

      - name: Terraform Plan
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # Sensitive Terraform variables injected as TF_VAR_ environment variables
          TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}
        run: |
          terraform plan \
            -input=false \
            -var="environment=${{ github.ref == 'refs/heads/main' && 'prod' || 'dev' }}" \
            -out=tfplan

      - name: Terraform Apply
        # Only apply on pushes to main — not on pull requests
        if: github.ref == 'refs/heads/main'
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}
        run: |
          terraform apply \
            -input=false \
            -auto-approve \
            tfplan # Apply the exact plan reviewed in the previous step
What just happened?
- Credentials are injected from secrets — never in files. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY come from GitHub's encrypted secret store. The AWS provider reads them from environment variables automatically — no credentials file, no hardcoded values, nothing that could appear in logs or git history.
- The plan is saved and applied in two separate steps. The plan runs first with -out=tfplan. The apply runs in a later step using terraform apply tfplan — the exact saved plan. Between these two steps, the plan output is available for review. In a PR workflow, the plan output is posted as a PR comment — engineers review it before the merge that triggers the apply.
- -auto-approve is only on the apply step — not the plan. The plan step never applies anything — -auto-approve would be meaningless on a plan. The apply step uses -auto-approve because the review already happened at the PR stage. This is the safe pattern: human review at PR time, automated execution at merge time.
- -input=false prevents pipeline hangs. Without this flag, if any required input is missing — a variable with no value, a missing backend argument — Terraform would wait indefinitely for terminal input that never comes. With -input=false, Terraform errors immediately with a clear message about what is missing.
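For TF_VAR_db_password to land anywhere, the configuration needs a matching variable declaration. A sketch of what that would look like (this variable is hypothetical; the lesson's main.tf does not use it):

```hcl
variable "db_password" {
  description = "Database password — injected in CI via the TF_VAR_db_password environment variable"
  type        = string
  sensitive   = true # Redacts the value from plan and apply output
}
```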
Azure Blob Storage Backend
For Azure-based teams, the Azure Blob Storage backend is the equivalent of S3 + DynamoDB. Blob leases provide native locking — no separate locking resource is needed. Here is the complete Azure backend configuration.
New terms:
- resource_group_name — in Azure, every resource must belong to a resource group. The storage account that holds state must be in an existing resource group. Create it before configuring the backend.
- storage_account_name — the Azure Storage Account containing the blob container. Must be globally unique across all Azure accounts worldwide — like an S3 bucket name.
- container_name — the blob container inside the storage account. Equivalent to an S3 bucket prefix. All state files for your organisation can share one container — the key argument separates them.
- Blob lease locking — Azure blobs support native lease locks. When Terraform writes state, it acquires a lease on the blob. Any concurrent write attempt fails because the lease is held. No DynamoDB equivalent needed — locking is built into the storage layer.
# Azure backend — equivalent of S3 + DynamoDB for Azure-based teams
# Blob leases provide native locking — no separate locking resource needed
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"            # Resource group for the storage account
    storage_account_name = "acmeterraformstate"            # Must be globally unique, 3-24 chars
    container_name       = "tfstate"                       # Blob container — like a folder
    key                  = "PROJECT/ENV/terraform.tfstate" # Unique path — override at init time

    # Authentication — Azure CLI credentials (az login) are picked up automatically
    # for local development; in Azure-hosted CI/CD, enable managed identity:
    # use_msi = true
  }
}

provider "azurerm" {
  features {} # Required empty features block for azurerm provider
}
$ terraform init -backend-config="key=lesson-16/dev/terraform.tfstate"
Initializing the backend...
Successfully configured the backend "azurerm"!
Backend configuration:
container_name = "tfstate"
key = "lesson-16/dev/terraform.tfstate"
resource_group_name = "rg-terraform-state"
storage_account_name = "acmeterraformstate"
Terraform has been successfully initialized!
# State is now stored as a blob at:
# https://acmeterraformstate.blob.core.windows.net/tfstate/lesson-16/dev/terraform.tfstate
# Locking — Azure blob leases handle this automatically
# No separate resource needed — locking is native to Azure Blob Storage
What just happened?
- Azure backend configured with four arguments — no separate locking resource. The S3 backend needs five resources: bucket, versioning, encryption, public access block, and DynamoDB table. The Azure backend needs one: a blob container in a storage account. Blob leases are native to Azure Storage — locking is automatic and requires zero extra configuration.
- Authentication uses the Azure CLI locally and managed identity in CI/CD. Locally, engineers run az login and the CLI credentials are picked up automatically — no backend argument needed. In Azure-hosted CI/CD (Azure DevOps, GitHub Actions with Azure integration), managed identity (use_msi) provides credentials without any stored secrets.
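Outside Azure-hosted runners, a service principal is the usual fallback. The azurerm backend and provider read the standard ARM_* environment variables; the values below are placeholders to be injected from the pipeline's secret store:

```shell
# Service principal credentials for CI runners without managed identity
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"  # Placeholder app registration ID
export ARM_CLIENT_SECRET="<client-secret-from-secret-store>" # Placeholder — never hardcode
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
terraform init -backend-config="key=lesson-16/dev/terraform.tfstate"
```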
GCS Backend for GCP Teams
Google Cloud Storage provides the backend for GCP-based teams. Like Azure, GCS has native object locking — no separate lock table is needed. The GCS backend is configured with bucket name, prefix for state organisation, and credentials.
# GCS backend — for GCP-based teams
# Object locking is native to GCS — no separate locking resource needed
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  backend "gcs" {
    bucket = "acme-terraform-state" # GCS bucket name — must exist before terraform init
    prefix = "lesson-16/dev"        # Prefix = folder path within the bucket
    # State file stored at: gs://acme-terraform-state/lesson-16/dev/default.tfstate
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}
Common Mistakes
Trying to use variables in the backend block
The backend block is evaluated before the rest of the configuration — variables are not available at that stage. bucket = var.state_bucket inside a backend block throws an error: "Variables may not be used here." The backend block must contain literal values or be left empty for partial configuration. Use -backend-config to supply values dynamically.
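A minimal sketch of the failure and the fix:

```hcl
# INVALID — produces "Error: Variables may not be used here"
# terraform {
#   backend "s3" {
#     bucket = var.state_bucket
#   }
# }

# VALID — use a literal value, or omit the argument entirely and pass it via:
#   terraform init -backend-config="bucket=acme-terraform-state-123456789012"
terraform {
  backend "s3" {}
}
```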
Running terraform init -reconfigure when you meant -migrate-state
-reconfigure points Terraform at a new backend without copying existing state. -migrate-state copies the existing state to the new backend before switching. If you use -reconfigure when switching to a new backend and existing state is not there yet, Terraform starts with an empty state and plans to recreate all your resources. Know which flag you need before running it.
Committing backend.hcl files that contain credentials
Backend HCL files sometimes contain credentials — an access key, a client secret, a SAS token. These must never be committed to Git. If your backend config file contains only the key path and no credentials, it is safe to commit. If it contains any credentials, gitignore it and supply credentials via environment variables instead.
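To make the distinction concrete, two hypothetical backend config files (the credential values are placeholders):

```hcl
# backend-dev.hcl — SAFE to commit: only a state path, no secrets
key = "lesson-16/dev/terraform.tfstate"

# backend-secret.hcl — NOT safe to commit: contains credentials
# access_key = "<aws-access-key-id>"     # Gitignore this file; prefer the AWS_ACCESS_KEY_ID env var
# secret_key = "<aws-secret-access-key>" # Never store credentials in a committed file
```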
The complete init command reference
Five invocations cover every init scenario you will encounter. terraform init for first-time setup. terraform init -backend-config=file.hcl for partial configuration. terraform init -reconfigure to switch environments without state migration. terraform init -migrate-state to switch backends and carry existing state across. terraform init -upgrade to update providers to newer versions within their constraints. These five patterns handle 95% of real init use cases.
Practice Questions
1. Your backend block omits the key argument. Which terraform init flag supplies the missing value at init time?
2. In a CI/CD pipeline, which flag on terraform apply skips the interactive yes/no confirmation prompt?
3. You want to switch the active environment from dev to prod — pointing Terraform at the prod state key — without migrating dev state. Which terraform init flag achieves this?
Quiz
1. Why can you not use var.bucket_name inside a backend block?
2. Why does the Azure Blob backend not require a separate locking resource like DynamoDB?
3. You are switching an existing project from local state to a new S3 remote backend. Which init flag copies the local state to S3 before switching?
Up Next · Lesson 17
State Locking and Consistency
You have seen locking work and fail. Lesson 17 goes deeper — how the DynamoDB lock entry is structured, what happens during a crash mid-apply, how to safely recover a corrupted lock, and the consistency guarantees that prevent state corruption even under network failures.