Terraform Course
Output Values
Variables are inputs — data coming into your configuration. Outputs are the opposite — data coming out. They surface information after an apply, connect configurations together, and act as the return values of Terraform modules. This lesson covers outputs completely, from basic declarations to cross-stack data sharing between separate Terraform configurations.
This lesson covers
Output block anatomy → Sensitive outputs → Querying outputs from the command line → Using outputs as module return values → Cross-stack references with terraform_remote_state → Real project wiring two configurations together
What Outputs Are For
An output value serves three distinct purposes. First, it prints useful information to the terminal after a successful apply — IP addresses, resource IDs, DNS names, anything the operator needs to know immediately. Second, it exposes data from a child module back to the root module that called it — outputs are how modules return values. Third, it makes data available to other completely separate Terraform configurations via remote state — a networking configuration exposing VPC IDs for an application configuration to consume.
All three purposes use the same output block syntax. What changes is the context in which the output is consumed.
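As a preview of the shape, here is a minimal sketch. The resource aws_instance.example is hypothetical, used only to show the three parts of the block:

```hcl
# Minimal output block — a name, a description, and a value expression
output "instance_ip" {
  description = "Public IP of the example instance, printed after apply"
  value       = aws_instance.example.public_ip # hypothetical resource
}
```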
Setting Up — Two Projects
This lesson uses two separate Terraform projects to demonstrate cross-stack output sharing. The first project — networking — builds a VPC and outputs its identifiers. The second project — application — reads those outputs and uses them to deploy EC2 instances into the correct network.
Create both project directories now:
# Create both project directories side by side
mkdir -p terraform-lesson-12/networking
mkdir -p terraform-lesson-12/application
# Networking project files
cd terraform-lesson-12/networking
touch versions.tf variables.tf main.tf outputs.tf .gitignore
# Application project files
cd ../application
touch versions.tf variables.tf main.tf outputs.tf .gitignore
Add this versions.tf to the networking project:
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# Remote backend — networking state is stored here
# Application project reads from this same bucket to consume outputs
backend "s3" {
bucket = "acme-terraform-state" # Your state bucket name
key = "networking/terraform.tfstate" # Unique key for networking state
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
provider "aws" {
region = var.region
default_tags {
tags = {
Project = "acme"
ManagedBy = "Terraform"
Layer = "networking" # Identifies which infrastructure layer owns these resources
}
}
}
Run terraform init inside the networking directory. Then continue below.
Basic Output Declarations
Every output block has a name, a description, and a value. The name is what you reference when consuming the output in another configuration. The description is documentation — it appears in generated module documentation and tells consumers what the value is for. The value is any valid Terraform expression.
We are writing the networking project's main.tf and outputs.tf. The outputs expose every identifier the application project will need — VPC ID, subnet IDs, and security group IDs.
New terms:
- output block — declares a named value that Terraform prints after apply and stores in the state file. The name must be unique within a configuration. Outside the configuration — in a consuming module or remote state reference — the name is how the value is looked up.
- value argument — any valid Terraform expression. Can reference resource attributes, local values, variables, function calls, or expressions combining multiple sources. Evaluated after all resources are created, so attributes like id and public_ip are available.
- for expression in output — builds a collection from a for_each resource. { for k, v in aws_subnet.public : k => v.id } produces a map of subnet names to subnet IDs. Consuming configurations get a structured map rather than an anonymous list.
- values() function — returns the values of a map as a list, discarding the keys. Used when you need all resource attributes as a list rather than a keyed map — for example, a list of all subnet IDs to pass to a load balancer.
Add this to networking/variables.tf:
variable "region" {
description = "AWS region for all networking resources"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Deployment environment"
type = string
default = "dev"
}
variable "vpc_cidr" {
description = "CIDR block for the VPC — must be /16 to /28"
type = string
default = "10.0.0.0/16"
}
variable "public_subnets" {
description = "Map of subnet name to CIDR block for public subnets"
type = map(string)
default = {
"public-a" = "10.0.1.0/24" # Availability zone A
"public-b" = "10.0.2.0/24" # Availability zone B
}
}
Add this to networking/main.tf:
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true # Required for public DNS on instances
tags = {
Name = "vpc-${var.environment}"
Environment = var.environment
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id # Implicit dependency on VPC
tags = {
Name = "igw-${var.environment}"
}
}
# One subnet per entry in var.public_subnets map
# for_each key becomes the subnet's stable Terraform identity
resource "aws_subnet" "public" {
for_each = var.public_subnets
vpc_id = aws_vpc.main.id
cidr_block = each.value # CIDR from the map value
availability_zone = "${var.region}${split("-", each.key)[1]}" # Extract AZ suffix from name
map_public_ip_on_launch = true
tags = {
Name = "subnet-${each.key}-${var.environment}"
Tier = "public"
}
}
resource "aws_security_group" "web" {
name = "web-sg-${var.environment}"
description = "Security group for web tier instances"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "All outbound"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "web-sg-${var.environment}"
}
}
Now add this to networking/outputs.tf — this is the heart of the lesson:
# ── VPC ─────────────────────────────────────────────────────────────────────
output "vpc_id" {
description = "VPC ID — pass to any resource that must live in this network"
value = aws_vpc.main.id
}
output "vpc_cidr_block" {
description = "VPC CIDR block — used by peering configurations and security group rules"
value = aws_vpc.main.cidr_block
}
# ── SUBNETS ──────────────────────────────────────────────────────────────────
output "public_subnet_ids" {
description = "Map of subnet name to subnet ID — structured for easy lookup by name"
# for expression iterates the for_each resource and builds a name => id map
value = { for k, v in aws_subnet.public : k => v.id }
}
output "public_subnet_ids_list" {
description = "List of all public subnet IDs — pass directly to load balancers and ECS"
# values() extracts map values as a list — order is not guaranteed
value = values({ for k, v in aws_subnet.public : k => v.id })
}
# ── SECURITY GROUPS ──────────────────────────────────────────────────────────
output "web_security_group_id" {
description = "Web security group ID — attach to any instance serving HTTP or HTTPS traffic"
value = aws_security_group.web.id
}
output "web_security_group_arn" {
description = "Web security group ARN — used in IAM policies and cross-account references"
value = aws_security_group.web.arn
}
# ── ENVIRONMENT METADATA ─────────────────────────────────────────────────────
output "environment" {
description = "Environment name — consuming configurations use this for naming and tagging"
value = var.environment
}
output "region" {
description = "AWS region — consuming configurations reference this for provider config"
value = var.region
}
$ terraform apply
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
environment = "dev"
public_subnet_ids = {
"public-a" = "subnet-0aaa111bbb"
"public-b" = "subnet-0ccc333ddd"
}
public_subnet_ids_list = [
"subnet-0aaa111bbb",
"subnet-0ccc333ddd",
]
region = "us-east-1"
vpc_cidr_block = "10.0.0.0/16"
vpc_id = "vpc-0abc123def"
web_security_group_arn = "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0ghi789jkl"
web_security_group_id = "sg-0ghi789jkl"
What just happened?
- Eight outputs printed after apply. Every value the application project needs is exposed here — VPC ID, subnet IDs in two formats, security group ID and ARN, plus environment metadata. The networking project becomes a self-contained data source for everything built on top of it.
- public_subnet_ids is a map, public_subnet_ids_list is a list — both from the same source. The map format lets consumers look up a subnet by name:
data.terraform_remote_state.networking.outputs.public_subnet_ids["public-a"]. The list format works directly with AWS resources that accept a list of subnet IDs, like load balancers and ECS services. Expose both and let the consumer pick the right format. - All outputs are stored in the remote state file in S3. This is the critical point: every output value is written to networking/terraform.tfstate in the S3 bucket. Any other Terraform configuration with read access to that bucket can retrieve these values using terraform_remote_state — without any API calls, without any hardcoded IDs.
Querying Outputs from the Command Line
After an apply, outputs are stored in state and can be queried at any time without running a new plan or apply. The terraform output command reads directly from state — useful in scripts, CI pipelines, and debugging sessions.
# Show all outputs in human-readable format
terraform output
# Show a specific output by name
terraform output vpc_id
# Output as JSON — useful for scripting and CI pipelines
terraform output -json
# Output a specific value as raw string — no quotes, no formatting
# Use this when piping into another command
terraform output -raw vpc_id
# Access a nested value from a map output
terraform output -json public_subnet_ids | jq '."public-a"'
$ terraform output vpc_id
"vpc-0abc123def"
$ terraform output -raw vpc_id
vpc-0abc123def
$ terraform output -json
{
"environment": { "sensitive": false, "type": "string", "value": "dev" },
"public_subnet_ids": {
"sensitive": false,
"type": ["object", { "public-a": "string", "public-b": "string" }],
"value": { "public-a": "subnet-0aaa111bbb", "public-b": "subnet-0ccc333ddd" }
},
"vpc_id": { "sensitive": false, "type": "string", "value": "vpc-0abc123def" },
...
}
$ terraform output -json public_subnet_ids | jq '."public-a"'
"subnet-0aaa111bbb"
What just happened?
- terraform output -raw removes quotes and formatting. A plain terraform output vpc_id prints "vpc-0abc123def" with surrounding quotes. The -raw flag prints vpc-0abc123def with no quotes — the format you need when piping the value into a shell variable or another command.
- terraform output -json exposes the full type metadata. Each output shows its sensitive flag, its type, and its value. The sensitive flag matters — consumers of the JSON can check for "sensitive": true and handle those values carefully, because unlike the human-readable listing, -json does include the real value.
- jq extracts nested values from map outputs. terraform output -json public_subnet_ids | jq '."public-a"' pipes the JSON map through the jq command-line JSON processor and extracts the value for key public-a. This pattern is commonly used in shell scripts and CI pipelines that need specific subnet IDs for subsequent commands.
Sensitive Outputs
When an output value is sensitive — a database connection string, an API endpoint with an embedded key, a generated password — mark it with sensitive = true. Terraform enforces this: if an output's value derives from a sensitive variable or resource attribute and the output is not explicitly marked sensitive, terraform plan fails with an "Output refers to sensitive values" error.
Add these to networking/outputs.tf to demonstrate sensitive output behaviour. Note that db_connection_string assumes an aws_db_instance.main resource and a sensitive db_password variable exist elsewhere — treat it as illustrative rather than something to apply as-is:
# ── SENSITIVE OUTPUTS ────────────────────────────────────────────────────────
output "db_connection_string" {
description = "Database connection string — contains credentials, never log this"
# Marked sensitive — Terraform redacts this in all terminal output
sensitive = true
# In a real configuration this would reference an RDS endpoint and a secret
value = "postgresql://admin:${var.db_password}@${aws_db_instance.main.endpoint}/appdb"
}
output "internal_api_endpoint" {
description = "Internal API endpoint — only accessible within the VPC"
# Not sensitive — the endpoint is not a secret, just not public-facing
sensitive = false
value = "http://internal-api.${var.environment}.acme.internal"
}
$ terraform output
db_connection_string = (sensitive value)
environment = "dev"
internal_api_endpoint = "http://internal-api.dev.acme.internal"
vpc_id = "vpc-0abc123def"
...
# To read a sensitive output explicitly — requires intentional action
$ terraform output -json db_connection_string
{
"sensitive": true,
"type": "string",
"value": "postgresql://admin:my-super-secret-password@rds.us-east-1.amazonaws.com/appdb"
}
# -raw also reveals sensitive values — use with caution in scripts
$ terraform output -raw db_connection_string
postgresql://admin:my-super-secret-password@rds.us-east-1.amazonaws.com/appdb
What just happened?
- The sensitive output printed as (sensitive value) in the normal output listing. The connection string — which contains a password — never appeared in the terminal. CI/CD log scrapers, screenshot tools, and terminal history all see the redacted placeholder, not the secret.
- terraform output -json and -raw do reveal sensitive values. These flags require an explicit, intentional command — not something that happens automatically in a log. This is the correct balance: the value is accessible when needed but not accidentally exposed in normal operations. Always be deliberate when using these flags in scripts that might log their output.
- Outputs that reference sensitive variables must declare their sensitivity explicitly. If the db_connection_string output references var.db_password — which is sensitive = true — Terraform refuses to plan until the output block itself carries sensitive = true. Sensitivity propagates through the dependency chain, and Terraform makes you acknowledge it at the output boundary.
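A minimal sketch of that enforcement, assuming a sensitive db_password variable and a hypothetical aws_db_instance.main resource:

```hcl
variable "db_password" {
  description = "Database master password"
  type        = string
  sensitive   = true # Redacted in plan and apply output
}

# Omitting sensitive = true on this output makes terraform plan fail with
# "Output refers to sensitive values" — Terraform forces the acknowledgement
output "db_connection_string" {
  description = "Connection string containing credentials"
  sensitive   = true
  value       = "postgresql://admin:${var.db_password}@${aws_db_instance.main.endpoint}/appdb"
}
```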
Cross-Stack References with terraform_remote_state
The most powerful use of outputs is cross-stack data sharing. The networking project built a VPC and stored its outputs in remote state. Now the application project reads those outputs without duplicating infrastructure or hardcoding IDs.
terraform_remote_state is a data source — it reads another configuration's state file and exposes its outputs. No API calls, no duplication. Just a reference to an existing state file.
We are writing the application project's versions.tf and main.tf — it consumes the networking outputs to place EC2 instances in the correct VPC and subnets.
New terms:
- data source — a block that reads existing data rather than creating new infrastructure. Data sources are prefixed with data. when referenced. They run during plan — before any resources are created — and their results are available throughout the configuration.
- terraform_remote_state — a special data source built into Terraform core. Reads the state file of another Terraform configuration and exposes its outputs. The backend argument specifies where the remote state is stored. The config block provides the backend configuration — bucket, key, region for S3.
- data.terraform_remote_state.networking.outputs.vpc_id — the full reference syntax for consuming a remote state output. The data prefix indicates a data source, terraform_remote_state is the type, networking is the local name, outputs is the attribute containing all outputs, and vpc_id is the specific output name.
Add this to application/versions.tf:
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# Application state stored at a different key — separate from networking state
backend "s3" {
bucket = "acme-terraform-state"
key = "application/terraform.tfstate" # Different key from networking
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
provider "aws" {
region = var.region
default_tags {
tags = {
Project = "acme"
ManagedBy = "Terraform"
Layer = "application" # Identifies this as the application layer
}
}
}
Add this to application/variables.tf:
variable "region" {
description = "AWS region — must match the networking project region"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Deployment environment — must match the networking project environment"
type = string
default = "dev"
}
variable "instance_type" {
description = "EC2 instance type for application servers"
type = string
default = "t2.micro"
}
Add this to application/main.tf:
# Read the networking configuration's state file to get VPC and subnet IDs
# This project has zero knowledge of how the networking was built
# It only knows the output names — the networking project's public interface
data "terraform_remote_state" "networking" {
backend = "s3" # Same backend type as the networking project
config = {
bucket = "acme-terraform-state" # Same bucket as networking backend
key = "networking/terraform.tfstate" # The networking project's state key
region = "us-east-1"
}
}
# Reference networking outputs via data.terraform_remote_state.networking.outputs.*
# No hardcoded VPC IDs, no manual copy-paste — just references to the output names
locals {
# Grab the VPC ID from the networking outputs
vpc_id = data.terraform_remote_state.networking.outputs.vpc_id
# Grab all subnet IDs as a list — pass directly to resources that need a subnet list
subnet_ids = data.terraform_remote_state.networking.outputs.public_subnet_ids_list
# Grab the security group ID for web instances
web_sg_id = data.terraform_remote_state.networking.outputs.web_security_group_id
}
# EC2 instances deployed into the networking project's VPC and subnets
# This configuration has no aws_vpc or aws_subnet resources — it reuses existing ones
resource "aws_instance" "app" {
ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 — us-east-1
instance_type = var.instance_type
subnet_id = local.subnet_ids[0] # Deploy into first available subnet
vpc_security_group_ids = [local.web_sg_id] # Attach the networking security group
tags = {
Name = "app-server-${var.environment}"
Environment = var.environment
VpcId = local.vpc_id # Tag shows which VPC this instance belongs to
}
lifecycle {
create_before_destroy = true # Zero-downtime replacement
}
}
$ cd application
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"!
Initializing provider plugins...
Installing hashicorp/aws v5.31.0...
$ terraform plan
data.terraform_remote_state.networking: Reading...
data.terraform_remote_state.networking: Read complete after 1s
Terraform will perform the following actions:
# aws_instance.app will be created
+ resource "aws_instance" "app" {
+ ami = "ami-0c55b159cbfafe1f0"
+ instance_type = "t2.micro"
+ subnet_id = "subnet-0aaa111bbb" # From networking outputs
+ vpc_security_group_ids = ["sg-0ghi789jkl"] # From networking outputs
+ tags = {
+ "Environment" = "dev"
+ "Name" = "app-server-dev"
+ "VpcId" = "vpc-0abc123def" # From networking outputs
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
$ terraform apply
aws_instance.app: Creating...
aws_instance.app: Creation complete after 32s [id=i-0app123server]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
What just happened?
- The application project read the networking state without any AWS API calls. data.terraform_remote_state.networking: Read complete after 1s — Terraform downloaded the networking state file from S3 and extracted the outputs. No DescribeVpcs call, no DescribeSubnets. The VPC ID, subnet IDs, and security group ID were all available immediately.
- The EC2 instance landed in the correct VPC and subnet — without any aws_vpc or aws_subnet resources in the application project. The subnet ID subnet-0aaa111bbb came directly from data.terraform_remote_state.networking.outputs.public_subnet_ids_list[0]. If the networking team rebuilds the VPC tomorrow, the next terraform plan in the application project will detect the new subnet IDs and show what needs to update.
- Two separate state files, two separate apply cycles, zero hardcoded IDs. The networking team owns and manages the networking state. The application team owns the application state. Neither team needs to coordinate manually — the outputs are the contract between the two configurations. Change an output name in the networking project and the application project's next plan will fail with a clear missing-attribute error, forcing a coordinated update.
Application Project Outputs
The application project also has its own outputs — surfacing the instance ID and public IP for any downstream configuration or human operator that needs them. Add this to application/outputs.tf:
output "app_instance_id" {
description = "EC2 instance ID of the application server"
value = aws_instance.app.id
}
output "app_public_ip" {
description = "Public IP address of the application server"
value = aws_instance.app.public_ip
}
output "app_public_dns" {
description = "Public DNS hostname — use this for HTTP access rather than the raw IP"
value = aws_instance.app.public_dns
}
# Surface which networking resources this application is using
# Useful for debugging and for any third configuration that builds on top of this one
output "networking_vpc_id" {
description = "VPC ID consumed from the networking layer — for reference and downstream use"
value = data.terraform_remote_state.networking.outputs.vpc_id
}
output "networking_environment" {
description = "Environment confirmed from the networking layer — validates config alignment"
value = data.terraform_remote_state.networking.outputs.environment
}
$ terraform output
app_instance_id = "i-0app123server"
app_public_dns = "ec2-54-211-89-100.compute-1.amazonaws.com"
app_public_ip = "54.211.89.100"
networking_environment = "dev"
networking_vpc_id = "vpc-0abc123def"
# The full chain — networking outputs flow through to application outputs
# A third configuration could read application state and get the VPC ID too
$ terraform output -raw app_public_ip
54.211.89.100
What just happened?
- networking_vpc_id in the application outputs re-exposes the networking output. A third configuration — say, a monitoring project — could read the application state and get the VPC ID without needing to read the networking state directly. Outputs chain naturally through layers of infrastructure.
- networking_environment validates configuration alignment. If the networking project is in "dev" and someone accidentally runs the application project with environment="prod", the networking_environment output would show "dev" — a clear signal that the two configurations are misaligned. This surfacing of cross-stack metadata makes debugging configuration drift much faster.
- app_public_dns is convenient but no more stable than app_public_ip. The default EC2 public DNS name encodes the public IP (ec2-54-211-89-100...), so both change if the instance is stopped and started. Its real advantage is that it resolves to the instance's private IP when queried from inside the VPC. For a genuinely stable address, attach an Elastic IP or publish a Route 53 record.
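A hypothetical third configuration, say a monitoring project, could pick up that chained value by reading only the application state. A sketch, assuming the same bucket layout as this lesson:

```hcl
# Third layer reads application state — it never touches networking state directly
data "terraform_remote_state" "application" {
  backend = "s3"
  config = {
    bucket = "acme-terraform-state"
    key    = "application/terraform.tfstate" # The application project's state key
    region = "us-east-1"
  }
}

locals {
  # VPC ID flowed networking -> application -> monitoring via the re-exposed output
  vpc_id      = data.terraform_remote_state.application.outputs.networking_vpc_id
  instance_id = data.terraform_remote_state.application.outputs.app_instance_id
}
```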
Clean Up Both Projects
Destroy in reverse dependency order — application first, then networking. If you destroy networking first, the VPC deletion will fail because the EC2 instance still exists inside it.
# Destroy application layer first — it depends on networking resources
cd application
terraform destroy
# Then destroy networking — no resources remain inside the VPC
cd ../networking
terraform destroy
$ cd application && terraform destroy
Plan: 0 to add, 0 to change, 1 to destroy.
Enter a value: yes
aws_instance.app: Destroying... [id=i-0app123server]
aws_instance.app: Destruction complete after 30s
Destroy complete! Resources: 1 destroyed.
$ cd ../networking && terraform destroy
Plan: 0 to add, 0 to change, 5 to destroy.
Enter a value: yes
aws_security_group.web: Destroying...
aws_security_group.web: Destruction complete after 2s
aws_subnet.public["public-a"]: Destroying...
aws_subnet.public["public-b"]: Destroying...
aws_subnet.public["public-a"]: Destruction complete after 1s
aws_subnet.public["public-b"]: Destruction complete after 1s
aws_internet_gateway.main: Destroying...
aws_internet_gateway.main: Destruction complete after 3s
aws_vpc.main: Destroying...
aws_vpc.main: Destruction complete after 1s
Destroy complete! Resources: 5 destroyed.
What just happened?
- Application destroyed cleanly first. One resource — the EC2 instance — gone in 30 seconds. The networking resources remain untouched because they are managed by a separate state file.
- Networking destroyed cleanly second. With the EC2 instance gone, the VPC had no resources inside it. The subnets, internet gateway, security group, and VPC all destroyed in the correct order — security group first, then subnets and internet gateway in parallel, then the VPC last.
- Cross-stack destruction must always follow dependency order. If you destroyed networking first, AWS would reject the VPC deletion because the application EC2 instance still lives inside it. Terraform cannot enforce cross-stack destroy order automatically — you must do it manually. This is a fundamental characteristic of the multi-stack architecture — worth knowing before you adopt it.
Common Mistakes
Renaming an output that downstream configurations depend on
Output names are the public interface of your Terraform configuration. Renaming vpc_id to main_vpc_id in the networking project breaks every configuration that reads data.terraform_remote_state.networking.outputs.vpc_id. Treat output names as API endpoints — version them carefully, deprecate rather than rename, and coordinate with consuming teams before making breaking changes.
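One way to rename without breaking consumers is to keep the old name as a deprecated alias during a transition period. A sketch, where main_vpc_id is the hypothetical new name:

```hcl
output "main_vpc_id" {
  description = "VPC ID — new canonical name"
  value       = aws_vpc.main.id
}

# Deprecated alias — keep until all consumers migrate to main_vpc_id, then remove
output "vpc_id" {
  description = "DEPRECATED — use main_vpc_id instead"
  value       = aws_vpc.main.id
}
```

Both outputs point at the same attribute, so there is no drift between them while consumers migrate.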
Destroying the networking layer before the application layer
Terraform cannot enforce destroy order across separate state files. If you destroy the networking VPC while an EC2 instance still lives inside it, AWS rejects the deletion and Terraform reports an error. Always destroy downstream (application) configurations before upstream (networking) configurations. Document this dependency for your team.
Exposing too much via outputs — creating tight coupling
Every output is a dependency. If the application project references twenty outputs from the networking project, changes to the networking project require checking all twenty values. Expose only what consuming configurations genuinely need — VPC ID, subnet IDs, security group IDs. Internal resource attributes that consuming layers do not need should stay private.
terraform_remote_state vs data sources — when to use each
terraform_remote_state is the right choice when both configurations are managed by the same team or organisation and you control both state files. For infrastructure you do not own — an existing VPC created by another team that does not use Terraform — use an AWS data source instead: data "aws_vpc" "existing" { id = var.vpc_id }. Data sources query the cloud API directly and do not require access to anyone's state file.
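Expanded into a full block, that data-source approach might look like this (var.vpc_id and the subnet CIDR are illustrative):

```hcl
# Query the cloud API directly — no access to anyone's state file required
data "aws_vpc" "existing" {
  id = var.vpc_id # ID handed over by the team that owns the VPC
}

# Build on top of the externally managed VPC
resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.existing.id
  cidr_block = "10.0.50.0/24" # Illustrative — must fall inside the VPC's CIDR
}
```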
Practice Questions
1. Which command prints a single output value with no quotes or formatting — suitable for piping into a shell variable?
2. You have a data source declared as: data "terraform_remote_state" "networking" {...}. What is the full expression to access its vpc_id output?
3. You have a networking configuration and an application configuration that depends on it. In which order must you destroy them?
Quiz
1. What does terraform_remote_state do and how is it different from an AWS data source?
2. An output references var.db_password which is marked sensitive = true. Do you need to add sensitive = true to the output block too?
3. Another team built a VPC manually — not with Terraform. How do you reference that VPC in your Terraform configuration?
Up Next · Lesson 13
Data Sources
You just used one data source — terraform_remote_state. Lesson 13 covers all of them: querying existing AWS infrastructure, looking up AMI IDs dynamically, reading secrets from AWS Secrets Manager, and the pattern that replaces every hardcoded ID in your configuration.