Terraform Course
Workspaces
Workspaces let a single Terraform configuration maintain multiple independent state files — one per workspace. They were designed for managing multiple environments from one codebase. But they are widely misused. This lesson covers how workspaces work, where they genuinely belong, and the more common situations where separate configurations are the safer choice.
This lesson covers
How workspaces work → Creating and switching workspaces → terraform.workspace in configuration → Workspace-aware sizing → When workspaces are the right tool → When separate configurations are better
How Workspaces Work
Every Terraform configuration starts in the default workspace. A workspace is simply a named state file. The configuration is identical across all workspaces — the only difference is which state file Terraform reads and writes.
With a local backend, workspaces create a terraform.tfstate.d/ directory containing one subdirectory per non-default workspace. With the S3 backend, each workspace gets its own state file at a path derived from the configured key.
The Analogy
A workspace is like a Git branch for state. The code — the configuration — is the same regardless of which branch you are on. But the history — the state — is completely separate per branch. Switching workspaces is like switching branches: the files do not change, but what Terraform considers "current reality" does.
Same configuration file, different state per workspace — terraform.workspace tells you which is active
Setting Up — Workspace Commands
Create a project and walk through every workspace command. These are the five operations you need for all workspace management.
mkdir terraform-lesson-24
cd terraform-lesson-24
touch versions.tf variables.tf main.tf outputs.tf .gitignore
Add this to versions.tf:
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
Run terraform init, then explore the workspace commands:
# List all workspaces — * marks the active one
terraform workspace list
# Create a new workspace
terraform workspace new dev
# Switch to an existing workspace
terraform workspace select prod # Must exist — creates nothing if it does not
# Create and immediately switch in one command
terraform workspace new staging
# Show the currently active workspace name
terraform workspace show
# Delete a workspace — must not be the active workspace and must have empty state
terraform workspace select default # Switch away first — you cannot delete the active workspace
terraform workspace delete staging
$ terraform workspace list
* default # * = currently active workspace
$ terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. When you run "terraform plan" on this
workspace, Terraform will treat all prior state as if it doesn't exist.
$ terraform workspace new prod
Created and switched to workspace "prod"!
$ terraform workspace list
default
dev
* prod # prod is now active
$ terraform workspace show
prod
$ terraform workspace select dev
Switched to workspace "dev".
$ terraform workspace list
default
* dev
prod
# Local state file layout after creating workspaces:
$ tree terraform.tfstate.d/
terraform.tfstate.d/
├── dev/
│ └── terraform.tfstate # Dev state — empty until first apply in dev workspace
└── prod/
└── terraform.tfstate # Prod state — empty until first apply in prod workspace
# Default workspace still uses ./terraform.tfstate directly
What just happened?
- Creating a workspace switches to it immediately. terraform workspace new dev both creates the workspace and makes it active. Terraform tells you "You're now on a new, empty workspace" — meaning no state exists yet for this workspace. Any subsequent plan will treat the world as if nothing has been deployed.
- Non-default workspaces use the tfstate.d/ directory structure. The default workspace uses ./terraform.tfstate directly. Every other workspace uses ./terraform.tfstate.d/WORKSPACE_NAME/terraform.tfstate. This is a local backend detail — with S3, the state key is modified by the workspace name automatically.
- Deleting a workspace requires it to be empty. You cannot delete a workspace that contains resources in its state. You must destroy all resources in the workspace first, then delete it. You also cannot delete the currently active workspace — switch to a different one first.
terraform.workspace in Configuration
The built-in value terraform.workspace contains the name of the currently active workspace. Use it in resource names, tags, locals, and conditionals to differentiate resources across environments — all from the same configuration file.
New terms:
- terraform.workspace — a string value always available in any Terraform configuration. Equals "default" in the default workspace. Equals the workspace name in any other workspace. Use it like any other string expression — in interpolations, conditionals, locals, and variable defaults.
- workspace-aware locals — a common pattern where a locals block uses terraform.workspace to look up environment-specific values from a map. The map contains one entry per workspace name. This centralises workspace-specific configuration in one place rather than scattering conditionals throughout the configuration.
Add this to main.tf:
# Workspace-aware configuration using terraform.workspace
locals {
# Per-workspace configuration map — one entry per workspace name
# Centralises all environment-specific values in one place
workspace_config = {
default = {
instance_type = "t2.micro"
instance_count = 1
deletion_protection = false
log_retention_days = 7
}
dev = {
instance_type = "t2.micro"
instance_count = 1
deletion_protection = false
log_retention_days = 7
}
staging = {
instance_type = "t3.small"
instance_count = 2
deletion_protection = false
log_retention_days = 14
}
prod = {
instance_type = "t3.medium"
instance_count = 3
deletion_protection = true # Only prod gets deletion protection
log_retention_days = 90 # Prod must retain logs 90 days for compliance
}
}
# Look up config for the current workspace — falls back to "default" if unknown
# try() prevents an error if the workspace name is not in the map
config = try(
local.workspace_config[terraform.workspace], # Try to find the current workspace
local.workspace_config["default"] # Fall back to default config
)
# Common tags applied to every resource — includes the workspace name
common_tags = {
Environment = terraform.workspace # Tag tracks which workspace deployed this
ManagedBy = "Terraform"
Workspace = terraform.workspace
}
}
# S3 bucket — name includes workspace to prevent cross-env collisions
resource "aws_s3_bucket" "app_data" {
# Workspace name in the bucket name ensures dev and prod buckets never collide
bucket = "lesson24-app-data-${terraform.workspace}-${data.aws_caller_identity.current.account_id}"
tags = merge(local.common_tags, {
Name = "lesson24-app-data-${terraform.workspace}"
})
}
# CloudWatch log group — retention varies by workspace via local.config
resource "aws_cloudwatch_log_group" "app" {
name = "/lesson24/${terraform.workspace}/app"
retention_in_days = local.config.log_retention_days # 7 for dev, 90 for prod
tags = merge(local.common_tags, {
Name = "lesson24-${terraform.workspace}-app-logs"
})
}
# Data source for current account ID
data "aws_caller_identity" "current" {}
# Deploy in dev workspace
$ terraform workspace select dev
$ terraform apply
+ aws_s3_bucket.app_data {
+ bucket = "lesson24-app-data-dev-123456789012"
+ tags = { "Environment" = "dev", "Workspace" = "dev" }
}
+ aws_cloudwatch_log_group.app {
+ name = "/lesson24/dev/app"
+ retention_in_days = 7 # dev config
}
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
# Switch to prod and deploy — same config, different workspace
$ terraform workspace select prod
$ terraform apply
+ aws_s3_bucket.app_data {
+ bucket = "lesson24-app-data-prod-123456789012" # Different name
+ tags = { "Environment" = "prod", "Workspace" = "prod" }
}
+ aws_cloudwatch_log_group.app {
+ name = "/lesson24/prod/app"
+ retention_in_days = 90 # prod config — longer retention
}
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
# Two separate sets of resources — one per workspace — from the same config
$ terraform workspace select dev
$ terraform state list
aws_cloudwatch_log_group.app # dev log group — 7 day retention
aws_s3_bucket.app_data # dev bucket
$ terraform workspace select prod
$ terraform state list
aws_cloudwatch_log_group.app # prod log group — 90 day retention
aws_s3_bucket.app_data # prod bucket
What just happened?
- The same configuration deployed two completely independent sets of resources. The dev and prod buckets have different names, different tags, and sit in different state files. A terraform destroy in the dev workspace destroys only dev resources — prod is untouched. This is workspace isolation in action.
- The workspace_config map is the single source of truth for environment differences. Every workspace-specific value — instance type, retention days, deletion protection — lives in one map in the locals block. To change prod's log retention from 90 to 180 days, edit one line in the map. No resource block changes needed.
- try() provides a safety net for unknown workspace names. If someone creates a workspace named "hotfix" that is not in the map, try() silently falls back to the "default" config instead of throwing an error. The "default" config is a safe fallback — the smallest instance type, lowest retention, no deletion protection.
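Silent fallback is not always what you want. If you would rather fail fast, you can index the map directly — config = local.workspace_config[terraform.workspace] — which errors on an unknown workspace name, or make the requirement explicit with a precondition. A minimal sketch of the precondition approach, assuming Terraform >= 1.4 for the built-in terraform_data resource (the resource name workspace_guard is illustrative, not part of the lesson's configuration):

```hcl
# Fail the plan when the active workspace has no entry in workspace_config,
# instead of silently falling back to the "default" settings.
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = contains(keys(local.workspace_config), terraform.workspace)
      error_message = "Workspace '${terraform.workspace}' has no entry in workspace_config — add one before applying."
    }
  }
}
```

With this guard in place, running terraform plan in an unlisted workspace such as "hotfix" stops with the error message rather than quietly deploying default-sized infrastructure.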
Workspaces with the S3 Backend
With the S3 backend, workspaces modify the state file key automatically. Terraform prepends a workspace prefix and the workspace name to the configured key, so non-default workspace state lives at workspace_key_prefix/WORKSPACE_NAME/key. This means all workspace state files share the same bucket with no additional configuration.
New terms:
- workspace_key_prefix — an optional S3 backend argument that controls the directory prefix used for non-default workspace state files. Defaults to "env:". With key = "app/terraform.tfstate", the dev workspace state lives at env:/dev/app/terraform.tfstate.
- S3 workspace state paths — the default workspace uses the key exactly as written. Non-default workspaces use workspace_key_prefix/WORKSPACE_NAME/key.
# versions.tf with S3 backend — workspace-aware automatically
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "acme-terraform-state-123456789012"
key = "lesson-24/terraform.tfstate" # Base key
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
# Optional — controls the prefix for non-default workspace state paths
# Default is "env:" — resulting in env:/dev/lesson-24/terraform.tfstate
# Setting to "workspaces" gives: workspaces/dev/lesson-24/terraform.tfstate
workspace_key_prefix = "workspaces"
}
}
# Resulting S3 state file paths:
# default workspace: lesson-24/terraform.tfstate
# dev workspace: workspaces/dev/lesson-24/terraform.tfstate
# prod workspace: workspaces/prod/lesson-24/terraform.tfstate
$ terraform workspace new dev
Created and switched to workspace "dev"!
# Terraform creates state at: workspaces/dev/lesson-24/terraform.tfstate
$ terraform apply
# State written to S3 at the workspace-specific path
# Dev resources exist in state, prod state file does not exist yet
$ terraform workspace new prod
$ terraform apply
# State written to S3 at: workspaces/prod/lesson-24/terraform.tfstate
# Dev and prod states are completely independent — same bucket, different keys
$ aws s3 ls s3://acme-terraform-state-123456789012/workspaces/ --recursive
workspaces/dev/lesson-24/terraform.tfstate
workspaces/prod/lesson-24/terraform.tfstate
# Also check the default workspace state
$ aws s3 ls s3://acme-terraform-state-123456789012/lesson-24/
lesson-24/terraform.tfstate # Default workspace — no prefix
What just happened?
- The S3 backend automatically scoped state paths by workspace. No backend configuration change was needed when switching workspaces. Terraform handled the key transformation — prefixing the configured key with the workspace prefix and workspace name. All workspace state files sit in the same S3 bucket with clear path separation.
- DynamoDB locking still works correctly per workspace. The DynamoDB lock key includes the full state file path — including the workspace prefix. Dev and prod operations can run concurrently without locking each other because they use different lock keys.
- workspace_key_prefix lets you control the directory structure. The default "env:" prefix uses a colon in the path — valid in S3 keys but unusual. Changing it to "workspaces" or the project name produces cleaner, more navigable bucket contents. Choose a prefix that matches your naming conventions before creating workspaces — changing it later requires migrating state files.
When Workspaces Are the Right Tool
Workspaces are excellent in specific scenarios. Outside these scenarios, they tend to create more problems than they solve.
Ephemeral environments for pull requests
Create a workspace per pull request — terraform workspace new pr-42. Deploy the PR's infrastructure into that workspace. Review and test. Delete the workspace when the PR closes. This is workspaces at their best: short-lived, isolated, disposable environments that share one configuration and one state bucket.
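The PR lifecycle above can be sketched as a pair of CI helper functions. This is a hedged sketch, not a prescribed pipeline: the function names (workspace_for_pr, pr_up, pr_down) are hypothetical, and the -or-create flag on terraform workspace select requires Terraform >= 1.4.

```shell
#!/usr/bin/env bash
# Sketch of a per-PR workspace lifecycle for a CI job.
set -u

# Map a PR number to a workspace name: 42 -> pr-42.
workspace_for_pr() {
  printf 'pr-%s' "$1"
}

# Bring a PR environment up. -or-create (Terraform >= 1.4) creates the
# workspace if it does not exist yet, then selects it.
pr_up() {
  local ws; ws="$(workspace_for_pr "$1")"
  terraform workspace select -or-create "$ws"
  terraform apply -auto-approve
}

# Tear a PR environment down: destroy resources first (workspace delete
# refuses non-empty state), switch away, then delete the workspace.
pr_down() {
  local ws; ws="$(workspace_for_pr "$1")"
  terraform workspace select "$ws"
  terraform destroy -auto-approve
  terraform workspace select default
  terraform workspace delete "$ws"
}
```

A pipeline would call pr_up "$PR_NUMBER" when the PR opens or updates, and pr_down "$PR_NUMBER" when it closes — leaving no workspace and no resources behind.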
Infrastructure that truly is identical across environments
When dev, staging, and prod are genuinely the same except for sizing — same resource types, same structure, only counts and instance types differ — workspaces with a workspace_config map work well. The configuration is maintainable and the differences are explicit in one place.
Testing configuration changes before production
Create a temporary workspace to test a module refactoring or a new resource type. Apply in the test workspace. Verify. Destroy. Delete the workspace. The main workspace state is never touched during the experiment.
When Separate Configurations Are Better
Workspaces are widely misused as a replacement for proper multi-environment architecture. Here is when separate configuration directories per environment are the correct choice — and why.
| Situation | Workspaces | Separate configs |
|---|---|---|
| Environments have different resource types | Messy — conditionals everywhere | Clean — each env has its own resources |
| Prod needs strict access control | Hard — same config = same permissions needed | Easy — prod config in restricted repo/path |
| Environments drift significantly over time | Workspace config map grows unmanageable | Each env independently evolved |
| Different teams own different environments | Risky — easy to deploy to wrong workspace | Safe — teams work in separate directories |
| Short-lived ephemeral environments | Perfect — create, use, destroy, delete | Overhead — new directory per PR |
The workspace accidental-prod problem
The most dangerous workspace failure mode: an engineer runs terraform apply while the prod workspace is active. They meant to be in dev. The destroy-and-recreate of a prod database happens before anyone realises the mistake. Separate configuration directories make this impossible — there is no prod workspace to accidentally land in. The directory you are in is the environment you are deploying to.
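If you must keep long-lived workspaces, a local wrapper can at least block the accidental-prod case. A minimal sketch — the function names safe_apply and apply_allowed_from are hypothetical helpers, not Terraform features:

```shell
#!/usr/bin/env bash
# Refuse to run terraform apply from a terminal while prod is active.

# Pure policy check: succeeds only for workspaces where local applies are allowed.
apply_allowed_from() {
  case "$1" in
    prod) return 1 ;;   # prod applies must come from CI, never a terminal
    *)    return 0 ;;
  esac
}

# Wrapper around terraform apply that consults the policy check first.
safe_apply() {
  local ws
  ws="$(terraform workspace show)"
  if ! apply_allowed_from "$ws"; then
    echo "Refusing to apply in workspace '$ws' — use the CI pipeline." >&2
    return 1
  fi
  terraform apply "$@"
}
```

Engineers run safe_apply instead of terraform apply; the wrapper cannot prevent someone from calling terraform directly, which is why separate configuration directories remain the stronger safeguard.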
Common Mistakes
Using workspaces to manage prod and dev as long-lived environments
The Terraform documentation itself warns against using workspaces for long-lived environment separation when those environments have significant differences. HashiCorp recommends separate directories with separate state files for dev/staging/prod — not workspaces. Workspaces are designed for short-lived, similar environments.
Not including terraform.workspace in resource names
If two workspaces deploy resources with the same name — the same S3 bucket name, the same IAM role name, the same security group name — they will conflict. Always include terraform.workspace or a workspace-derived value in every resource name that must be globally or regionally unique.
Deleting a workspace without destroying its resources first
Terraform will refuse to delete a workspace with non-empty state. But if you manually delete the state file before deleting the workspace, or if the workspace state becomes inaccessible, the real resources in AWS are orphaned — still running and incurring cost but no longer tracked by Terraform. Always run terraform destroy in a workspace before deleting it.
Add a workspace guard for production
If you do use workspaces for long-lived environments, add a precondition to critical resources that fires when someone is about to make destructive changes in prod from the wrong context. A local that checks terraform.workspace and a precondition that validates the expected workspace is a simple safety net. Better still — require prod deployments to come only from CI/CD pipelines that explicitly select the prod workspace, never from an engineer's terminal.
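The guard described above can be sketched in a few lines. This is one possible shape, not the only one: the variable name expected_workspace is an assumption for illustration, and terraform_data requires Terraform >= 1.4.

```hcl
# The pipeline passes the environment it intends to deploy, e.g.
#   terraform apply -var expected_workspace=prod
variable "expected_workspace" {
  description = "Workspace this run is expected to target — set explicitly by the caller"
  type        = string
}

# Fails the plan if the active workspace does not match the caller's intent,
# catching "meant dev, was in prod" before anything is changed.
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = terraform.workspace == var.expected_workspace
      error_message = "Active workspace '${terraform.workspace}' does not match expected '${var.expected_workspace}'. Aborting."
    }
  }
}
```

Because the variable has no default, every run must state its target environment explicitly — an engineer who forgets which workspace is active gets an error instead of a prod outage.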
Practice Questions
1. Which command creates a new workspace called "staging" and immediately switches to it?
2. Which built-in value contains the name of the currently active workspace?
3. What is the use case where workspaces are genuinely the best tool — not separate configurations?
Quiz
1. You switch from the dev workspace to the prod workspace. What stays the same and what changes?
2. Your S3 backend has key = "app/terraform.tfstate" and workspace_key_prefix = "workspaces". Where does the dev workspace store its state?
3. Prod requires that only a restricted CI/CD pipeline can apply changes — engineers cannot apply locally. Should you use workspaces or separate configurations?
Up Next · Lesson 25
Core Best Practices
Section II closes with the distilled habits of professional Terraform engineers — file structure, naming conventions, tagging strategies, version pinning, and the pre-apply checklist that prevents most production incidents.