Terraform Course
Terraform Import
Every team that adopts Terraform inherits infrastructure it did not build. Manually created VPCs, databases configured through the console, load balancers provisioned years before anyone heard of IaC. Import is how you bring that infrastructure under Terraform management — without downtime, without recreation, without losing data. This lesson covers the complete import workflow with a real multi-resource adoption project.
This lesson covers
The import workflow end to end → ID formats for common resource types → Importing a complete environment → Handling configuration drift after import → The adoption strategy for large existing environments
The Import Workflow
Import has a specific sequence. Skipping steps or reversing them is where imports go wrong. Every import — whether a single resource or a hundred — follows the same five steps.
Find the ID → Write the config block → Import to state → Plan to reveal drift → Apply to reconcile
The goal of a successful import is reaching step 5 with zero planned changes. That means your configuration perfectly describes the real resource — no drift, no missing arguments, no extra attributes. Getting there requires iterating between steps 2 and 4: plan reveals what is different, you update the configuration, then plan again. Repeat until the plan is clean.
Import ID Formats — The Lookup You Always Need
Every resource type has a specific ID format for import. The format is documented in the "Import" section at the bottom of every resource's provider documentation page. Here are the formats for the most commonly imported resource types.
| Resource Type | Import ID Format | Example |
|---|---|---|
| aws_s3_bucket | Bucket name | my-bucket-name |
| aws_instance | Instance ID | i-0abc123def456789 |
| aws_vpc | VPC ID | vpc-0abc123def456789 |
| aws_subnet | Subnet ID | subnet-0abc123def456789 |
| aws_security_group | Security group ID | sg-0abc123def456789 |
| aws_db_instance | DB identifier (not ARN) | my-production-db |
| aws_iam_role | Role name (not ARN) | MyApplicationRole |
| aws_iam_role_policy_attachment | role-name/policy-arn | MyRole/arn:aws:iam::123:policy/MyPolicy |
| aws_route53_record | zone-id_name_type | Z1234_example.com_A |
| aws_lb | Load balancer ARN | arn:aws:elasticloadbalancing:... |
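The same IDs are what the older `terraform import` CLI command takes — the way imports worked before import blocks arrived in Terraform 1.5. A sketch using the formats from the table; the resource addresses here are hypothetical examples:

```shell
# Pre-1.5 form: terraform import <resource address> <import ID>
# Resource addresses below are hypothetical examples
terraform import aws_s3_bucket.data my-bucket-name
terraform import aws_instance.web i-0abc123def456789
terraform import aws_iam_role.app MyApplicationRole

# Composite IDs join their parts with the documented separator
terraform import aws_iam_role_policy_attachment.app \
  MyRole/arn:aws:iam::123:policy/MyPolicy
terraform import aws_route53_record.www Z1234_example.com_A
```

Unlike import blocks, the CLI form writes to state immediately and cannot generate configuration — the resource block must already exist in your configuration before you run it.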
Setting Up — A Manually Built Environment to Import
We need real manually created infrastructure to import. The following AWS CLI commands create a VPC, subnet, security group, and S3 bucket — exactly the kind of environment a team might have built before adopting Terraform. Run these first.
# Create the existing environment manually — this simulates pre-Terraform infrastructure
# Run these AWS CLI commands to build what we will import
# Create a VPC
aws ec2 create-vpc \
--cidr-block 10.0.0.0/16 \
--tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=legacy-vpc},{Key=Environment,Value=prod}]'
# Note the VpcId from the output: vpc-0abc123def456789
# Create a subnet inside the VPC
aws ec2 create-subnet \
--vpc-id vpc-0abc123def456789 \
--cidr-block 10.0.1.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=legacy-subnet-a}]'
# Note the SubnetId: subnet-0abc123def456789
# Create a security group
aws ec2 create-security-group \
--group-name legacy-web-sg \
--description "Legacy web security group" \
--vpc-id vpc-0abc123def456789
# Note the GroupId: sg-0abc123def456789
# Add an HTTP ingress rule to the security group
aws ec2 authorize-security-group-ingress \
--group-id sg-0abc123def456789 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
# Create an S3 bucket
aws s3api create-bucket \
--bucket legacy-app-data-prod \
--region us-east-1
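If you did not note the IDs as each command ran, you can look them up afterwards with describe calls and the AWS CLI's built-in --query (JMESPath) filter. A sketch, assuming the tags and names used in the commands above:

```shell
# Look up the IDs of the manually created resources by tag/name
VPC_ID=$(aws ec2 describe-vpcs \
  --filters Name=tag:Name,Values=legacy-vpc \
  --query 'Vpcs[0].VpcId' --output text)

SUBNET_ID=$(aws ec2 describe-subnets \
  --filters Name=tag:Name,Values=legacy-subnet-a \
  --query 'Subnets[0].SubnetId' --output text)

SG_ID=$(aws ec2 describe-security-groups \
  --filters Name=group-name,Values=legacy-web-sg \
  --query 'SecurityGroups[0].GroupId' --output text)

echo "VPC: $VPC_ID  Subnet: $SUBNET_ID  SG: $SG_ID"
```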
Now create the Terraform project that will adopt this infrastructure:
mkdir terraform-lesson-19
cd terraform-lesson-19
touch versions.tf imports.tf main.tf outputs.tf .gitignore
Add this to versions.tf:
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
Step 1 — Declare the Import Blocks
Import blocks tell Terraform which real resources to bring into state. All four resources get import blocks first — before any resource configuration is written. We will use -generate-config-out to have Terraform generate the resource configuration from what actually exists in AWS.
Add this to imports.tf — replace the IDs with the real ones from your AWS CLI output above:
# imports.tf — import blocks for every manually-created resource
# These run during terraform apply and bring each resource into state
# After successful apply, remove each block — it is no longer needed
# Import the existing VPC
import {
to = aws_vpc.legacy # Target address — will be created in main.tf
id = "vpc-0abc123def456789" # VPC ID from AWS CLI output
}
# Import the existing subnet
import {
to = aws_subnet.legacy_a # Target address
id = "subnet-0abc123def456789" # Subnet ID from AWS CLI output
}
# Import the existing security group
import {
to = aws_security_group.legacy_web # Target address
id = "sg-0abc123def456789" # Security group ID from AWS CLI output
}
# Import the existing S3 bucket
# S3 bucket import ID is just the bucket name — not the ARN
import {
to = aws_s3_bucket.legacy_data # Target address
id = "legacy-app-data-prod" # Bucket name — exactly as created
}
Step 2 — Generate Configuration from Import
With import blocks declared and no resource blocks yet, run plan with -generate-config-out. Terraform queries AWS for each resource and writes a configuration file from what it finds.
# Initialize and generate configuration from the import blocks
terraform init
# Generate resource blocks from the real resources' current state
# This writes a file called generated.tf with complete resource blocks
terraform plan -generate-config-out=generated.tf
$ terraform plan -generate-config-out=generated.tf
Terraform will perform the following actions:
# aws_vpc.legacy will be imported
+ resource "aws_vpc" "legacy" {
+ cidr_block = "10.0.0.0/16"
+ enable_dns_hostnames = false
+ id = "vpc-0abc123def456789"
+ tags = {
+ "Environment" = "prod"
+ "Name" = "legacy-vpc"
}
}
# aws_subnet.legacy_a will be imported
+ resource "aws_subnet" "legacy_a" {
+ availability_zone = "us-east-1a"
+ cidr_block = "10.0.1.0/24"
+ id = "subnet-0abc123def456789"
+ vpc_id = "vpc-0abc123def456789"
}
# aws_security_group.legacy_web will be imported
+ resource "aws_security_group" "legacy_web" {
+ id = "sg-0abc123def456789"
+ name = "legacy-web-sg"
}
# aws_s3_bucket.legacy_data will be imported
+ resource "aws_s3_bucket" "legacy_data" {
+ bucket = "legacy-app-data-prod"
+ id = "legacy-app-data-prod"
}
Plan: 4 to import, 0 to add, 0 to change, 0 to destroy.
# generated.tf has been written with full resource blocks
$ cat generated.tf | head -40
resource "aws_vpc" "legacy" {
assign_generated_ipv6_cidr_block = false
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = false
enable_dns_support = true
enable_network_address_usage_metrics = false
instance_tenancy = "default"
tags = {
Environment = "prod"
Name = "legacy-vpc"
}
}
# ... more generated attributes
What just happened?
- Terraform queried all four real resources and wrote complete resource blocks. Without writing a single line of resource configuration, Terraform produced generated.tf containing valid HCL for all four resources. Every attribute the AWS API returned — including ones you would never set manually, like assign_generated_ipv6_cidr_block and enable_network_address_usage_metrics — is in the generated file.
- The plan shows 4 to import — zero to add, change, or destroy. No real infrastructure will be created or modified. The import operation only writes to state — the real resources remain exactly as they are.
- generated.tf needs cleanup before it becomes your real configuration. Terraform generates every attribute it knows about — many of which are computed values you should not manage explicitly. The next step is cleaning up this file.
Step 3 — Clean Up Generated Configuration
The generated configuration contains every attribute — including many that are read-only, computed after creation, or not meant to be managed explicitly. Copy the useful content to main.tf and remove the clutter. Here is what the cleaned configuration looks like:
# main.tf — cleaned up from generated.tf
# Keep only the arguments you intend to manage
# Remove computed attributes that Terraform sets automatically
resource "aws_vpc" "legacy" {
cidr_block = "10.0.0.0/16" # The CIDR we created with
enable_dns_hostnames = false # Default — we will leave this as-is
enable_dns_support = true # Default — AWS enables this by default
instance_tenancy = "default" # Default tenancy
tags = {
Environment = "prod"
Name = "legacy-vpc"
ManagedBy = "Terraform" # Add this now — standard tag for all Terraform-managed resources
}
}
resource "aws_subnet" "legacy_a" {
vpc_id = aws_vpc.legacy.id # Reference by expression — not hardcoded ID
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
tags = {
Name = "legacy-subnet-a"
ManagedBy = "Terraform"
}
}
resource "aws_security_group" "legacy_web" {
name = "legacy-web-sg"
description = "Legacy web security group"
vpc_id = aws_vpc.legacy.id # Reference expression — not hardcoded
ingress {
description = "HTTP from anywhere"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "All outbound"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "legacy-web-sg"
ManagedBy = "Terraform"
}
}
resource "aws_s3_bucket" "legacy_data" {
bucket = "legacy-app-data-prod" # Must match existing bucket name exactly
tags = {
Name = "legacy-app-data"
ManagedBy = "Terraform"
}
}
What to remove from generated configuration
Remove any attribute that is: computed and set by AWS after creation (ARN, hosted_zone_id, owner_id), a default value you did not intentionally set (most boolean flags), or a read-only attribute that cannot be changed (id, arn). Keep anything that reflects a deliberate configuration choice — CIDR blocks, tags, instance types, engine versions. When in doubt, remove the attribute from configuration and see if plan shows a change. If it does, the attribute matters and should be kept.
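Applied to the generated VPC block from earlier, the rule looks like this — a sketch annotating what survives the cleanup and what gets deleted:

```hcl
resource "aws_vpc" "legacy" {
  # Keep — deliberate configuration choices:
  cidr_block = "10.0.0.0/16"
  tags = {
    Environment = "prod"
    Name        = "legacy-vpc"
  }

  # Delete — generated defaults you never intentionally set:
  #   assign_generated_ipv6_cidr_block     = false
  #   enable_network_address_usage_metrics = false
}
```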
Step 4 — Apply and Reconcile
With the cleaned configuration in main.tf, delete generated.tf and run apply. Apply plans first — the pending imports and any remaining drift between configuration and reality show up in that plan. Once you confirm, the resources are imported into state and the drift changes are applied in the same run.
# Remove generated.tf — main.tf now contains the cleaned configuration
rm generated.tf
# Apply — this runs the import blocks and then applies any drift changes
terraform apply
$ terraform apply
Terraform will perform the following actions:
# aws_vpc.legacy will be imported
+ resource "aws_vpc" "legacy" {
+ cidr_block = "10.0.0.0/16"
+ tags = { "Environment" = "prod", "ManagedBy" = "Terraform", "Name" = "legacy-vpc" }
}
# aws_subnet.legacy_a will be imported
+ resource "aws_subnet" "legacy_a" {
+ cidr_block = "10.0.1.0/24"
+ vpc_id = "vpc-0abc123def456789"
}
# aws_security_group.legacy_web will be imported
+ resource "aws_security_group" "legacy_web" {
+ name = "legacy-web-sg"
+ vpc_id = "vpc-0abc123def456789"
}
# aws_s3_bucket.legacy_data will be imported then updated
~ resource "aws_s3_bucket" "legacy_data" {
+ tags = {
+ "ManagedBy" = "Terraform" # Tag we added — not on real bucket yet
+ "Name" = "legacy-app-data" # This was not on the original bucket
}
}
Plan: 4 to import, 0 to add, 1 to change, 0 to destroy.
Enter a value: yes
aws_vpc.legacy: Importing...
aws_vpc.legacy: Import complete
aws_subnet.legacy_a: Importing...
aws_subnet.legacy_a: Import complete
aws_security_group.legacy_web: Importing...
aws_security_group.legacy_web: Import complete
aws_s3_bucket.legacy_data: Importing...
aws_s3_bucket.legacy_data: Import complete
aws_s3_bucket.legacy_data: Modifying... # Adding ManagedBy tag
aws_s3_bucket.legacy_data: Modifications complete
Apply complete! Resources: 4 imported, 0 added, 1 changed, 0 destroyed.
What just happened?
- Four resources were imported in one apply — zero recreated. The VPC, subnet, security group, and S3 bucket all entered state with their existing AWS IDs intact. No resource was destroyed. No data was lost. The infrastructure that existed before Terraform now exists inside Terraform's management.
- One change was applied — adding the ManagedBy tag. The bucket did not have the ManagedBy = "Terraform" tag we added to our configuration. After import, the plan showed this as a modification. The tag was applied on the same apply that completed the import. This is the expected pattern — import reveals drift, and that drift is reconciled immediately.
- The vpc_id in subnet and security group is hardcoded in state but a reference in config. In the plan output, vpc_id = "vpc-0abc123def456789" appears as a hardcoded string. But in main.tf you wrote vpc_id = aws_vpc.legacy.id — a reference expression. After import, Terraform resolves the expression to the actual VPC ID and confirms it matches — no change needed.
Step 5 — Verify and Clean Up
After a successful import apply, confirm the state is clean and remove the import blocks. They have done their job — the resources are now in state with nothing left to import, and stale import blocks only clutter the configuration and future plans.
# Verify — plan should show zero changes
terraform plan
# Confirm all four resources are in state
terraform state list
# Remove the import blocks from imports.tf — they are no longer needed
# The resources are now in state and will be managed normally going forward
# Either delete imports.tf or remove the import {} blocks from it
$ terraform plan
No changes. Your infrastructure matches the configuration.
$ terraform state list
aws_s3_bucket.legacy_data
aws_security_group.legacy_web
aws_subnet.legacy_a
aws_vpc.legacy
# All four resources fully managed by Terraform
# Future plan/apply operations work exactly like any other Terraform-managed resource
# The team can now modify these resources through configuration — no more console clicks
What just happened?
- Zero planned changes — import is complete. The goal of every import project is reaching this state: configuration perfectly describes real infrastructure, state matches both, plan shows nothing to do. All four manually-created resources are now fully under Terraform management.
- The transition is permanent. From this point forward, changes to these resources go through Terraform. A team member opens a PR to change a tag. The plan shows the tag change. The PR is reviewed and merged. The pipeline applies. No more console drift, no more manual tracking, no more wondering who changed what.
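In a pipeline, the "zero planned changes" check can be automated with plan's -detailed-exitcode flag, which makes the exit status machine-checkable. A sketch:

```shell
# Exit codes with -detailed-exitcode:
#   0 = no changes, 1 = error, 2 = changes pending
if terraform plan -detailed-exitcode -input=false >/dev/null; then
  echo "Import complete — configuration matches reality"
else
  echo "Drift remains (or plan failed) — keep iterating"
  exit 1
fi
```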
Adoption Strategy for Large Environments
Importing one VPC is manageable. Importing a production environment with 200 resources across dozens of resource types requires a strategy. Here is the approach that minimises risk.
Import from the outside in
Start with the resources that nothing else depends on — tags, S3 buckets, IAM roles. Work inward toward the foundation — VPCs, subnets. Import what depends on the foundation last — EC2 instances, databases, load balancers. Importing in dependency order means your reference expressions (aws_vpc.legacy.id) resolve to real values in state before the resources that reference them are imported.
Import one resource type at a time
Import all VPCs in one PR. Import all subnets in the next. Trying to import 200 resources in one PR creates an unreviewable diff, and a single failure blocks everything. Small PRs are reviewable, merge faster, and are easier to roll back if something goes wrong.
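At 200-resource scale, writing import blocks by hand is tedious. One common approach is scripting them from describe calls — a bash sketch that emits one import block per existing subnet in a VPC (the $VPC_ID variable and the name-from-tag convention are assumptions; adapt to your own):

```shell
# Emit one import block per existing subnet, named from its Name tag
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values="$VPC_ID" \
  --query 'Subnets[].[SubnetId, Tags[?Key==`Name`] | [0].Value]' \
  --output text |
while read -r subnet_id name; do
  # HCL identifiers cannot contain dashes — swap them for underscores
  printf 'import {\n  to = aws_subnet.%s\n  id = "%s"\n}\n\n' \
    "${name//-/_}" "$subnet_id"
done >> imports.tf
```

Review the generated file before committing — the script trusts the Name tags to be unique and well-formed.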
Accept drift, then clean it up separately
After importing a resource, the plan often shows changes — tags missing, settings that differ from what you would configure from scratch. Resist the urge to clean everything up in the import PR. First PR: import the resource, fix only breaking drift (wrong CIDR, wrong security rules). Second PR: bring configuration up to standard (tags, naming conventions). This separation makes the import reviewable.
Add prevent_destroy immediately after import
The moment a production resource enters Terraform management, add lifecycle { prevent_destroy = true } to its block. A mistyped resource block name, a wrong state mv, or a refactoring mistake could all trigger an unintended destroy. The lifecycle block is the last line of defence — it forces a deliberate code change before anything can be destroyed.
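Using the bucket from this lesson, the guard looks like this:

```hcl
resource "aws_s3_bucket" "legacy_data" {
  bucket = "legacy-app-data-prod"

  lifecycle {
    # Any plan that would destroy this resource now fails with an error;
    # removing this line is itself a deliberate, reviewable code change.
    prevent_destroy = true
  }

  tags = {
    Name      = "legacy-app-data"
    ManagedBy = "Terraform"
  }
}
```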
Common Mistakes
Importing without running plan first
Always run terraform plan after writing the import configuration and before terraform apply. The plan reveals what drift exists between your configuration and the real resource. Applying without reviewing the plan means you might accidentally modify a production resource to match your guessed configuration — overwriting deliberate settings.
Using ARNs where import expects names or IDs
A common mistake is passing an ARN like arn:aws:iam::123:role/MyRole for an IAM role import when the correct ID is just the role name — MyRole. The resulting error rarely explains which part of the ID is wrong, so check the provider documentation's Import section for the exact format before running the import.
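If all you have on hand is the ARN, the name portion can be stripped with plain shell parameter expansion — a sketch using the example ARN above:

```shell
# IAM role import wants the name, not the ARN — strip up to the last "/"
arn='arn:aws:iam::123:role/MyRole'
role_name="${arn##*/}"
echo "$role_name"   # MyRole
```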
Committing generated.tf alongside main.tf
After reviewing generated.tf and moving the content you want into main.tf, delete generated.tf before committing. If you accidentally commit generated.tf alongside main.tf, Terraform will try to create duplicate resources — two blocks declaring the same thing. Always clean up generated files before committing.
Declaring the import done before plan shows zero changes
A clean plan is the only measure of a complete import. Not "I ran terraform import successfully." Not "the resource is in state." Only a clean terraform plan showing "No changes. Your infrastructure matches the configuration." means the import is truly done — configuration, state, and reality are aligned. Until you reach that point, the work is not finished.
Practice Questions
1. What does terraform plan output when an import is truly complete — configuration, state, and reality are all aligned?
2. Which terraform plan flag generates a .tf file containing resource blocks written from the imported resources' current state?
3. When importing an aws_s3_bucket, what is the correct import ID format — the bucket ARN or the bucket name?
Quiz
1. You run terraform apply with import blocks and all resources are imported. Is the import complete?
2. When importing a large environment with 200 resources, what is the correct order to import them?
3. After a successful import apply that shows zero drift on the follow-up plan, what should you do with the import blocks?
Up Next · Lesson 20
Lifecycle Rules
You have used prevent_destroy and create_before_destroy. Lesson 20 goes deeper — replace_triggered_by, precondition and postcondition checks, and the lifecycle patterns that prevent the specific incidents that happen most often in production.