Terraform Course
Terraform State Commands
State commands are the surgeon's tools of Terraform. You rarely need them. When you do, you need to use them precisely. This lesson covers every state command in depth — with the real scenarios that require each one, the exact syntax, and the mistakes that turn a routine state operation into a recovery incident.
This lesson covers
terraform state list and show → terraform state mv for safe refactoring → terraform state rm to stop managing resources → terraform state pull and push → terraform import to adopt existing infrastructure → moved blocks as the modern alternative
When You Need State Commands
Most Terraform work never touches state commands directly. You write configuration, run plan and apply, and Terraform handles state automatically. State commands become necessary in four specific situations.
| Situation | Command needed | Why |
|---|---|---|
| Renaming a resource in config | state mv | Without it, Terraform destroys the old resource and creates a new one |
| Moving a resource into a module | state mv | Resource address changes when wrapped in a module block |
| Stop managing a resource | state rm | Remove from Terraform tracking without destroying the real resource |
| Adopting existing infrastructure | import | Bring manually-created resources under Terraform management |
Setting Up
Create a project with several real resources to practise state commands against. Running these commands against real infrastructure — not hypothetical examples — is the only way the operations make sense.
mkdir terraform-lesson-18
cd terraform-lesson-18
touch versions.tf main.tf .gitignore
Add this to versions.tf:
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.5"
}
}
}
provider "aws" {
region = "us-east-1"
}
Add this to main.tf:
# Three S3 buckets — we will practise state operations on these
# Using distinct names we will rename and reorganise during the lesson
resource "random_id" "suffix" {
byte_length = 4 # 8-character hex suffix for globally unique bucket names
}
resource "aws_s3_bucket" "logs" {
bucket = "lesson18-logs-${random_id.suffix.hex}"
tags = {
Name = "lesson18-logs"
ManagedBy = "Terraform"
}
}
resource "aws_s3_bucket" "data" {
bucket = "lesson18-data-${random_id.suffix.hex}"
tags = {
Name = "lesson18-data"
ManagedBy = "Terraform"
}
}
resource "aws_s3_bucket" "backup" {
bucket = "lesson18-backup-${random_id.suffix.hex}"
tags = {
Name = "lesson18-backup"
ManagedBy = "Terraform"
}
}
Deploy the infrastructure now:
terraform init
terraform apply
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

# Four resources in state: random_id + 3 S3 buckets
$ terraform state list
aws_s3_bucket.backup
aws_s3_bucket.data
aws_s3_bucket.logs
random_id.suffix
terraform state list and show
terraform state list and terraform state show are read-only — they never modify state. Use them freely to understand what Terraform is tracking before running any modifying command.
New terms:
- terraform state list -id=VALUE — filters the list to resources whose id attribute matches the value. Useful when you know a resource's AWS ID but need to find its Terraform address. For example, terraform state list -id=i-0abc123 finds which resource block manages that EC2 instance.
- terraform state list MODULE_ADDRESS — lists only resources inside a specific module. terraform state list module.networking shows all resources managed by the networking module. Useful in large configurations with many modules.
- terraform state show ADDRESS — shows all attributes of one resource by its address. The address is the string from terraform state list — for example aws_s3_bucket.logs.
# List all resources in state
terraform state list
# List only resources matching a specific AWS resource ID
# Useful when you have an ID from the AWS console and need the Terraform address
terraform state list -id=lesson18-logs-a3f2b1c4
# Show all attributes of a specific resource
terraform state show aws_s3_bucket.logs
# Show the random_id resource to see all its encodings
terraform state show random_id.suffix
$ terraform state list
aws_s3_bucket.backup
aws_s3_bucket.data
aws_s3_bucket.logs
random_id.suffix
$ terraform state list -id=lesson18-logs-a3f2b1c4
aws_s3_bucket.logs # Found it — this is the Terraform address for that bucket
$ terraform state show aws_s3_bucket.logs
# aws_s3_bucket.logs:
resource "aws_s3_bucket" "logs" {
arn = "arn:aws:s3:::lesson18-logs-a3f2b1c4"
bucket = "lesson18-logs-a3f2b1c4"
bucket_domain_name = "lesson18-logs-a3f2b1c4.s3.amazonaws.com"
bucket_regional_domain_name = "lesson18-logs-a3f2b1c4.s3.us-east-1.amazonaws.com"
hosted_zone_id = "Z3AQBSTGFYJSTF"
id = "lesson18-logs-a3f2b1c4"
object_lock_enabled = false
region = "us-east-1"
tags = {
"ManagedBy" = "Terraform"
"Name" = "lesson18-logs"
}
}

What just happened?
- state list -id found the Terraform address from an AWS resource ID. You had the bucket name lesson18-logs-a3f2b1c4 and needed to know which Terraform resource block manages it. The -id flag searches state for any resource whose id attribute matches — returning the Terraform address aws_s3_bucket.logs. This is how you connect AWS console resources to Terraform configurations.
- state show revealed computed attributes not in the configuration. Your configuration only declared bucket and tags. State contains arn, bucket_domain_name, bucket_regional_domain_name, hosted_zone_id, region, and more — all assigned by AWS and stored in state. These are the attributes available for reference expressions like aws_s3_bucket.logs.arn.
terraform state mv — Safe Refactoring
terraform state mv renames a resource in state. The real infrastructure is untouched — only the Terraform address changes. Use it every time you rename a resource block in your configuration, or when you refactor resources into modules.
Scenario: You decide that aws_s3_bucket.logs should be renamed to aws_s3_bucket.application_logs to be more descriptive. Without state mv, a plan would show the bucket being destroyed and recreated — losing all stored objects. With state mv, the rename is seamless.
# Step 1 — rename in state FIRST, before changing main.tf
# This order matters — if you change main.tf first, Terraform plans a destroy+recreate
terraform state mv aws_s3_bucket.logs aws_s3_bucket.application_logs
# Step 2 — verify the rename in state
terraform state list
# Step 3 — now rename the resource block in main.tf to match
# Change: resource "aws_s3_bucket" "logs" {
# To: resource "aws_s3_bucket" "application_logs" {
# Step 4 — verify plan shows zero changes
# If state mv was done correctly, Terraform sees the config and state as aligned
terraform plan
$ terraform state mv aws_s3_bucket.logs aws_s3_bucket.application_logs
Move "aws_s3_bucket.logs" to "aws_s3_bucket.application_logs"
Successfully moved 1 object(s).
$ terraform state list
aws_s3_bucket.application_logs # Renamed in state
aws_s3_bucket.backup
aws_s3_bucket.data
random_id.suffix
# After renaming in main.tf to match:
$ terraform plan
No changes. Your infrastructure matches the configuration.
# What happens if you rename in main.tf FIRST without state mv:
$ terraform plan
# aws_s3_bucket.logs will be destroyed <- WRONG — we don't want this
- resource "aws_s3_bucket" "logs" { ... }
# aws_s3_bucket.application_logs will be created <- WRONG
+ resource "aws_s3_bucket" "application_logs" { ... }
Plan: 1 to add, 0 to change, 1 to destroy.
# This would destroy the real bucket and all its contents

What just happened?
- Order matters — run state mv before changing main.tf. When you run state mv first, state already has the new address when Terraform reads the updated configuration. Plan sees configuration and state aligned — no changes needed. If you change main.tf first, Terraform sees the old address in state and the new address in configuration as two different resources — one to destroy, one to create.
- The real S3 bucket was never touched. terraform state mv is a local operation — it modifies the state file only. No AWS API calls are made. The bucket with ID lesson18-logs-a3f2b1c4 still exists in S3 with exactly the same contents, settings, and objects. Only its Terraform tracking address changed.
- The wrong order would destroy a real bucket and all its objects. S3 bucket deletion only succeeds when the bucket is empty — unless force_destroy = true is set, in which case Terraform deletes every object first. Either way, this is a catastrophic mistake for a production bucket with data. state mv is the safe path — and it takes ten seconds.
state mv Into a Module
The most common state mv scenario in production is moving resources into a module when refactoring. A resource that was declared at the root level needs to move inside a module block. The Terraform address changes format — from aws_s3_bucket.data to module.storage.aws_s3_bucket.data.
# Moving aws_s3_bucket.data into a module called "storage"
# The module must already exist in your configuration before running this
# Step 1 — move in state first
# Source: root-level resource address
# Destination: module-qualified address
terraform state mv \
aws_s3_bucket.data \
module.storage.aws_s3_bucket.data
# Step 2 — move the resource block in main.tf into the module
# Remove it from main.tf and add it inside the storage module
# Step 3 — verify plan is clean
terraform plan
$ terraform state mv aws_s3_bucket.data module.storage.aws_s3_bucket.data
Move "aws_s3_bucket.data" to "module.storage.aws_s3_bucket.data"
Successfully moved 1 object(s).

$ terraform state list
aws_s3_bucket.application_logs
aws_s3_bucket.backup
module.storage.aws_s3_bucket.data # Now inside the module
random_id.suffix

$ terraform plan
No changes. Your infrastructure matches the configuration.
What just happened?
- The module prefix changes the address format. Root-level resources have addresses like aws_s3_bucket.data. Resources inside a module have addresses like module.MODULE_NAME.aws_s3_bucket.data. For nested modules: module.outer.module.inner.aws_s3_bucket.data. The state mv command accepts both formats — you specify exactly where the resource should live in state.
- Modules can be refactored without destroying infrastructure. Large Terraform configurations are regularly refactored — flat resource lists evolve into organised modules. With state mv, this entire refactoring can happen without any real infrastructure change. State moves first, code follows, plan confirms zero drift.
terraform state rm — Stop Managing a Resource
terraform state rm removes a resource from Terraform's tracking. The real infrastructure continues to exist — Terraform simply stops managing it. After removal, if the resource block still exists in configuration, a plan will show it as a new resource to create.
Scenario: The backup bucket needs to be handed over to another team that manages it manually. You want to stop Terraform from managing it — without destroying it and without the other team needing to recreate it.
# Step 1 — backup state before any modification
terraform state pull > state-backup-$(date +%Y%m%d-%H%M%S).json
# Step 2 — remove the bucket from Terraform state
# The real bucket continues to exist in AWS — only tracking is removed
terraform state rm aws_s3_bucket.backup
# Step 3 — remove the resource block from main.tf too
# If you leave the block in, the next plan shows it as a resource to create
# Remove: resource "aws_s3_bucket" "backup" { ... }
# Step 4 — verify state and plan
terraform state list
terraform plan
$ terraform state rm aws_s3_bucket.backup
Removed aws_s3_bucket.backup
Successfully removed 1 resource instance(s).
$ terraform state list
aws_s3_bucket.application_logs # Still tracked
module.storage.aws_s3_bucket.data # Still tracked
random_id.suffix # Still tracked
# aws_s3_bucket.backup is gone from state — but still exists in AWS
$ terraform plan
No changes. Your infrastructure matches the configuration.
# Backup bucket resource block removed from main.tf — clean plan
# If you forgot to remove the block from main.tf:
$ terraform plan
# aws_s3_bucket.backup will be created <- Terraform wants to create it again
+ resource "aws_s3_bucket" "backup" {
+ bucket = "lesson18-backup-a3f2b1c4" # This name is already taken!
}
# This apply would fail — the bucket already exists in AWS

What just happened?
- The backup bucket still exists in AWS — only tracking was removed. Run aws s3 ls and the bucket is still there. Terraform simply does not know about it anymore. The other team can now manage it however they choose — console, AWS CLI, their own Terraform configuration.
- The state backup from Step 1 is your undo button. If the removal turns out to be a mistake, terraform state push with the backup file restores the previous state. This is why pulling a backup before any modifying state operation is non-negotiable.
- Forgetting to remove the resource block causes a creation attempt. If the resource block stays in main.tf after state rm, Terraform plans to create it. The apply fails because the bucket name is already taken — S3 bucket names are globally unique. This is why state rm and config removal are always done together.
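Terraform 1.7 added a declarative alternative to state rm: the removed block. As with moved and import blocks, the intent is visible in a pull request and processed during apply. A sketch for the backup-bucket handover, assuming Terraform >= 1.7:

```hcl
# removed.tf — declarative state removal (Terraform 1.7+)
# Delete the resource block from main.tf, then add this in the same PR.
removed {
  from = aws_s3_bucket.backup

  lifecycle {
    # false = forget the resource: drop it from state, keep it in AWS.
    # true would destroy the real bucket on apply.
    destroy = false
  }
}
```

The plan output shows the resource will be removed from state but not destroyed — giving reviewers the same visibility for removals that moved blocks give renames.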
terraform import — Adopting Existing Infrastructure
terraform import is the inverse of state rm. Where state rm removes a resource from Terraform's awareness, import adds one — bringing an existing real resource under Terraform management without recreating it.
This is one of the most important Terraform operations for real-world adoption. Every team that starts using Terraform inherits infrastructure that was created manually. Import is how you bring that infrastructure under management without downtime.
New terms:
- terraform import ADDRESS ID — imports an existing real resource into state at the given address. ADDRESS is the Terraform resource address — it must match a resource block already in your configuration. ID is the cloud resource identifier — the format varies by resource type. Check the provider documentation for the correct ID format.
- import block (Terraform 1.5+) — a declarative alternative to the command-line import, written directly in your configuration files. It is processed during terraform apply — no separate import command needed. Covered in the next section.
- terraform plan -generate-config-out=FILE — used with import blocks, this flag generates a configuration file from the imported resource's current state. Saves manually writing the resource block — Terraform writes it for you from the resource's real attributes.
Scenario: The backup bucket was removed from Terraform management and handed to another team. That team has now decided Terraform should manage it after all. Import it back.
First create a minimal resource block in main.tf — import requires the block to exist before running:
# Add this back to main.tf — the resource block must exist before import
# The arguments do not need to be complete — import fills in the state
# After import, run plan to see what configuration changes are needed to match reality
resource "aws_s3_bucket" "backup" {
bucket = "lesson18-backup-a3f2b1c4" # Must match the real bucket name exactly
tags = {
Name = "lesson18-backup"
ManagedBy = "Terraform"
}
}
Now run the import — the ID for an S3 bucket is simply the bucket name:
# Import the existing bucket into state at the given address
# For S3 buckets, the ID is just the bucket name
# For EC2 instances it is the instance ID (i-0abc123)
# For IAM roles it is the role name
# Always check provider docs for the correct ID format
terraform import aws_s3_bucket.backup lesson18-backup-a3f2b1c4
# After import — run plan to see if configuration matches reality
# Import only adds to state — it does not update your configuration
# Plan will show any differences between your config and the real resource
terraform plan
$ terraform import aws_s3_bucket.backup lesson18-backup-a3f2b1c4
aws_s3_bucket.backup: Importing from ID "lesson18-backup-a3f2b1c4"...
aws_s3_bucket.backup: Import prepared!
Prepared aws_s3_bucket for import
aws_s3_bucket.backup: Refreshing state... [id=lesson18-backup-a3f2b1c4]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
$ terraform plan
# aws_s3_bucket.backup will be updated in-place
~ resource "aws_s3_bucket" "backup" {
id = "lesson18-backup-a3f2b1c4"
~ tags = {
- "Environment" = "dev" # Real bucket has an extra tag added by another team — absent from config
# (2 unchanged attributes hidden)
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
# Plan shows the tag the other team added — update config to match, or apply to remove it

What just happened?
- Import added the bucket to state without recreating it. Terraform queried AWS for the bucket's current state and wrote it to the state file. The real bucket was untouched — the same objects, the same settings, the same tags. Terraform now knows the bucket exists and considers itself its manager.
- The plan after import showed a real difference. The other team had added an Environment tag while they managed the bucket. Your configuration does not include that tag, so Terraform plans to remove it. This is the correct behaviour — import reveals drift between configuration and reality. You now decide: update the config to include the tag, or apply to remove it.
- Import does not write configuration. After import, state contains the resource but your configuration may not accurately describe it. Always run plan after import — it shows exactly what needs to change in your configuration to match the real resource. In Terraform 1.5+, -generate-config-out writes the configuration for you.
Import Blocks — The Modern Approach (Terraform 1.5+)
Terraform 1.5 introduced import blocks — a declarative alternative to the terraform import command. Instead of running a separate command, you declare the import inside your configuration. It runs automatically on the next apply and can be reviewed in the plan output before executing.
New terms:
- import block — a top-level block in any .tf file. The to argument is the Terraform resource address. The id argument is the cloud resource identifier. Processed during apply — the resource is imported into state and then normal plan logic applies.
- terraform plan -generate-config-out=generated.tf — when an import block references a resource address with no matching resource block, this flag generates a new .tf file containing a resource block that matches the imported resource's current state. Review the generated file, clean it up, then remove the import block once the apply succeeds.
# imports.tf — declarative import blocks (Terraform 1.5+)
# Add this file to your configuration to import existing resources
# Import an existing VPC that was created manually — without a resource block yet
import {
to = aws_vpc.existing # The resource address — must exist in config or use -generate-config-out
id = "vpc-0abc123def456789" # The AWS VPC ID from the console
}
# Import an existing EC2 instance
import {
to = aws_instance.legacy_app # Create a matching resource block in main.tf
id = "i-0987654321abcdef0" # The EC2 instance ID
}
# Import an existing RDS instance — ID format is the DB identifier, not the ARN
import {
to = aws_db_instance.primary
id = "production-postgres" # The RDS instance identifier (not the ARN)
}
# Generate configuration from import blocks when no resource block exists yet
# This creates a generated.tf file with resource blocks for each import
terraform plan -generate-config-out=generated.tf
# Review generated.tf — clean up any arguments you do not need
# Then apply to complete the import
terraform apply
# After successful apply — remove the import blocks from imports.tf
# They are no longer needed once the resources are in state
$ terraform plan -generate-config-out=generated.tf
Terraform will perform the following actions:
# aws_vpc.existing will be imported
+ resource "aws_vpc" "existing" {
+ cidr_block = "10.0.0.0/16"
+ enable_dns_hostnames = true
+ id = "vpc-0abc123def456789"
+ tags = {
+ "Name" = "production-vpc"
+ "Environment" = "prod"
}
}
Plan: 1 to import, 0 to add, 0 to change, 0 to destroy.
# generated.tf was written with the full resource block:
$ cat generated.tf
resource "aws_vpc" "existing" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
tags = {
Environment = "prod"
Name = "production-vpc"
}
}
$ terraform apply
aws_vpc.existing: Importing...
aws_vpc.existing: Import complete
Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.

What just happened?
- -generate-config-out wrote a complete resource block from the real resource's state. Terraform queried AWS for the VPC, read every attribute, and generated a valid HCL resource block in generated.tf. You do not need to know every argument — Terraform writes them from reality. Review and clean up the generated file (remove read-only computed attributes that should not be in configuration), then use it as the resource block.
- The import block approach is reviewable in CI/CD. Unlike the command-line terraform import, import blocks appear in the plan output. Teams can review what will be imported before it happens. Import blocks can also be code-reviewed in pull requests — the intent to adopt a resource is visible in source control.
- Remove import blocks after a successful apply. Once the resource is in state, the block is a no-op on subsequent applies, but stale import blocks clutter the configuration and obscure which adoptions are still pending. Clean them up as part of the same PR that adds the generated resource configuration.
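For bulk adoption, Terraform 1.7 also allows for_each on import blocks, so one block can import a whole map of resources. A sketch, assuming Terraform >= 1.7 and hypothetical bucket names:

```hcl
# Bulk import with for_each (Terraform 1.7+)
# The target resource must use for_each with matching keys.
locals {
  adopted_buckets = {
    logs = "legacy-logs-bucket" # hypothetical existing bucket names
    data = "legacy-data-bucket"
  }
}

import {
  for_each = local.adopted_buckets
  to       = aws_s3_bucket.adopted[each.key]
  id       = each.value # S3 import ID is the bucket name
}

resource "aws_s3_bucket" "adopted" {
  for_each = local.adopted_buckets
  bucket   = each.value
}
```

One plan then shows every pending import, and new entries in the map become new imports on the next apply.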
moved Blocks — The Declarative Alternative to state mv
Just as import blocks replace the terraform import command, moved blocks replace terraform state mv. They are declarative — written in configuration, reviewed in pull requests, applied automatically. The refactoring intent is visible in source control rather than buried in a shell history.
# moves.tf — declarative resource moves (Terraform 1.1+)
# Use these instead of terraform state mv for all refactoring operations
# Rename a resource — equivalent to: terraform state mv aws_s3_bucket.logs aws_s3_bucket.application_logs
moved {
from = aws_s3_bucket.logs # Old address — no longer in configuration
to = aws_s3_bucket.application_logs # New address — the renamed resource block
}
# Move a resource into a module
moved {
from = aws_s3_bucket.data # Root-level resource being moved into module
to = module.storage.aws_s3_bucket.data # New module-qualified address
}
# Move a for_each resource when keys change
moved {
from = aws_instance.web["server-1"] # Old key
to = aws_instance.web["primary"] # New key — map key was renamed
}
$ terraform plan
Terraform will perform the following actions:
# aws_s3_bucket.logs has moved to aws_s3_bucket.application_logs
resource "aws_s3_bucket" "application_logs" {
id = "lesson18-logs-a3f2b1c4"
# (no attribute changes)
}
# aws_s3_bucket.data has moved to module.storage.aws_s3_bucket.data
resource "aws_s3_bucket" "data" {
id = "lesson18-data-a3f2b1c4"
# (no attribute changes)
}
Plan: 0 to add, 0 to change, 0 to destroy.
# Moves are shown in the plan — no resource changes, just address changes
$ terraform apply
aws_s3_bucket.application_logs: Moving...
aws_s3_bucket.application_logs: Move complete
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

What just happened?
- Moves appeared in the plan output before any changes were made. The plan clearly shows "has moved to" — reviewers can see the rename before approving the PR. With terraform state mv, the rename happens immediately with no plan review step. Moved blocks make refactoring safe to use in team workflows.
- No resource changes — only address changes. The plan shows zero adds, zero changes, zero destroys. The move is purely a state operation — real infrastructure is untouched. The "Moving..." line during apply confirms state was updated.
- Keep moved blocks in source control permanently. Unlike import blocks, moved blocks should stay in your configuration. They act as documentation — a record of every rename and refactoring. If a teammate runs terraform apply against an older copy of state that still has the old address, the moved block handles the rename automatically. Remove them only when you are certain no one will ever encounter the old address in their state.
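A moved block can also move an entire module in one statement — every resource inside follows automatically. A sketch, assuming you later rename the storage module (the new name is hypothetical):

```hcl
# Renaming a whole module — all resources inside move with it
moved {
  from = module.storage
  to   = module.object_storage
}
```

The command-line equivalent (terraform state mv module.storage module.object_storage) works too, but the moved block leaves the refactoring on record in source control.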
Common Mistakes
Changing the resource block name before running state mv
If you rename the resource block in main.tf before running state mv, the next plan shows a destroy-and-recreate: Terraform sees the old address in state and the new address in configuration as two separate resources. Always run state mv (or add a moved block) first — then update the configuration to match.
Running terraform import without a matching resource block
The terraform import command requires a resource block at the target address to already exist in your configuration — otherwise it errors with "resource address not found". Either write the resource block first, or use -generate-config-out with import blocks to have Terraform generate the block for you.
Using the wrong ID format for terraform import
Every resource type has a specific ID format for import. S3 buckets use the bucket name. EC2 instances use the instance ID (i-0abc123). IAM roles use the role name. RDS instances use the DB identifier — not the ARN. VPCs use the VPC ID (vpc-0abc123). Always check the "Import" section at the bottom of the provider documentation for the resource type before running import.
Prefer declarative over imperative for all state changes
When choosing between terraform state mv and a moved block, choose the moved block. When choosing between terraform import and an import block, choose the import block. Declarative blocks are visible in PRs, reviewable before execution, reproducible, and leave a permanent record in source control. Imperative commands leave traces only in shell history and operator notes. The trend in Terraform is toward making everything declarative — follow it.
Practice Questions
1. You rename a resource block in main.tf from aws_instance.server to aws_instance.api. Without running any state command first, what does terraform plan show?
2. Which declarative block type, introduced in Terraform 1.1, replaces terraform state mv and makes refactoring visible in pull requests?
3. When using import blocks and you have no resource block yet, which terraform plan flag generates the resource block automatically from the imported resource's state?
Quiz
1. You run terraform state rm aws_s3_bucket.logs. What happens to the real S3 bucket?
2. After running terraform import on an existing EC2 instance, what must you do next?
3. What is the correct order of operations when renaming a resource block?
Up Next · Lesson 19
Terraform Import
Import deserves its own lesson. Lesson 19 covers a complete real-world adoption project — importing an entire manually-built environment into Terraform management, handling import ID formats across dozens of resource types, and the workflow that minimises risk during the transition.