Terraform Course
Security Best Practices
Misconfiguration is the leading cause of cloud security incidents — not hacking. An S3 bucket accidentally left public, a security group open to the world, a database without encryption. These are not hacking stories — they are configuration stories. Terraform is where the configuration is written, and Terraform is where the security controls must live. This lesson covers the practices that prevent those incidents.
This lesson covers
The Terraform security threat model → Never commit secrets → Sensitive variables → State file security → IAM least privilege → Secure resource defaults → Static analysis with tfsec → Security gates in CI/CD
The Terraform Security Threat Model
Terraform configurations have four distinct categories of security risk. Understanding them before reaching for tools is what makes the controls you choose actually effective.
| Risk category | What goes wrong | Primary control |
|---|---|---|
| Secrets in code | Credentials committed to Git — stolen via repo access | Environment variables + secrets managers |
| Secrets in state | Passwords stored in plaintext state file | Encrypted remote backend + access controls |
| Misconfiguration | Public S3 buckets, open security groups, unencrypted databases | Secure defaults in modules + static analysis |
| Overpermissioned identity | Terraform runs with admin access it does not need | IAM least privilege for the Terraform execution role |
1 — Never Put Secrets in Configuration Files
The most common Terraform security mistake is hardcoding credentials in .tf files or terraform.tfvars. Git history is permanent — even after removing the credential from the latest commit, it can be retrieved from history. The correct approach routes secrets through environment variables or secrets managers.
New terms:
- sensitive = true — marks a variable or output as sensitive. Terraform redacts the value in all plan and apply output, showing (sensitive value) instead of the actual content. It does not encrypt the value in the state file — that is a separate control.
- TF_VAR_ prefix — environment variables prefixed with TF_VAR_ are automatically read as Terraform variable values. export TF_VAR_db_password="secret" sets the value of var.db_password without any file involvement.
# ── WRONG — never do this ────────────────────────────────────────────────────
resource "aws_db_instance" "main" {
username = "admin"
password = "my-super-secret-password" # Hardcoded — ends up in Git history forever
}
# Also wrong — terraform.tfvars with secrets committed to Git
# db_password = "my-super-secret-password"
# ── CORRECT PATTERN 1: TF_VAR_ environment variable ─────────────────────────
variable "db_password" {
description = "RDS master password — set via TF_VAR_db_password environment variable"
type = string
sensitive = true # Redacts from all plan/apply terminal output and CI logs
# No default — forces explicit supply at runtime
}
# In CI/CD pipeline or local shell — never in a file:
# export TF_VAR_db_password="$(aws secretsmanager get-secret-value \
# --secret-id prod/db/password --query SecretString --output text)"
resource "aws_db_instance" "main" {
username = "admin"
password = var.db_password # Read from the sensitive variable
}
# ── CORRECT PATTERN 2: AWS Secrets Manager data source ───────────────────────
data "aws_secretsmanager_secret_version" "db" {
secret_id = "prod/db/credentials" # Only the secret name in code — never the value
}
locals {
# jsondecode parses the JSON string stored in Secrets Manager
db_creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}
resource "aws_db_instance" "main" {
username = local.db_creds.username # Fetched at plan time from Secrets Manager
password = local.db_creds.password # Never in any .tf file or state diff output
}
$ terraform plan
+ aws_db_instance.main {
+ username = (sensitive value) # sensitive = true redacts in all output
+ password = (sensitive value) # Never appears in logs, terminal, or CI output
}
# Verify no secret literals in your .tf files:
$ git grep -nE "password\s*=" -- '*.tf'
# Matches only variable references like var.db_password — never a literal value
What just happened?
- sensitive = true completely redacts the value from all output. The password never appears in plan output, apply output, or error messages. CI/CD logs that capture all terminal output cannot leak the secret. Anyone reviewing the plan — including external auditors — cannot extract the credential.
- The Secrets Manager pattern pulls the secret at plan time. The .tf file contains only the secret name — prod/db/credentials. The actual password lives in AWS Secrets Manager, encrypted, with IAM-controlled access. If the secret is rotated in Secrets Manager, the next plan picks up the new value automatically with no code change.
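The redaction extends to outputs. Since Terraform 0.15, any output whose value is derived from a sensitive variable must itself be marked sensitive or the plan fails. A sketch using the db_password variable declared above (the output name is illustrative):

```hcl
# An output derived from a sensitive value must also be marked sensitive,
# otherwise terraform plan errors with "Output refers to sensitive values".
output "db_connection_string" {
  value     = "postgres://admin:${var.db_password}@${aws_db_instance.main.address}:5432/app"
  sensitive = true # Shown as (sensitive value) in CLI and CI logs
}
```

Note the limits of redaction: anyone with state access can still recover the value with terraform output -raw. The sensitive flag protects logs and terminal scrollback, not the state file itself.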
2 — State File Security
Even with sensitive = true, the actual secret value is stored in the state file in plaintext. The state file is Terraform's database — it contains every attribute of every managed resource, including generated passwords and access tokens. Protecting it is non-negotiable.
# Secure remote state configuration — every production project needs all of these
terraform {
backend "s3" {
bucket = "acme-terraform-state-123456789012"
key = "prod/app/terraform.tfstate"
region = "us-east-1"
encrypt = true # AES-256 encryption at rest — no extra cost
dynamodb_table = "terraform-state-lock" # Prevents concurrent applies corrupting state
}
}
# The state bucket needs these protections on the S3 resource itself:
resource "aws_s3_bucket_versioning" "state" {
bucket = aws_s3_bucket.state.id
versioning_configuration { status = "Enabled" }
# Versioning allows recovery of any previous state version — critical for rollbacks
}
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
bucket = aws_s3_bucket.state.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256" # Encrypts all state file contents at rest
}
}
}
resource "aws_s3_bucket_public_access_block" "state" {
bucket = aws_s3_bucket.state.id
block_public_acls = true # State files must never be publicly accessible
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# IAM policy for the Terraform execution role — minimum required permissions
resource "aws_iam_policy" "terraform_state_access" {
name = "terraform-state-access"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"]
Resource = [
"arn:aws:s3:::acme-terraform-state-123456789012",
"arn:aws:s3:::acme-terraform-state-123456789012/*"
]
},
{
Effect = "Allow"
Action = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
Resource = "arn:aws:dynamodb:us-east-1:123456789012:table/terraform-state-lock"
}
]
})
}
3 — IAM Least Privilege for Terraform
Terraform needs IAM permissions to create, modify, and destroy resources. The temptation is to give it AdministratorAccess because it is simpler. The risk: a compromised CI/CD token with admin access means an attacker owns your entire AWS account instantly.
New terms:
- least privilege — grant only the permissions required for the specific task, nothing more. For Terraform, scope the execution role to exactly the AWS services the configuration manages.
- separate roles per environment — a dev Terraform role can be broader for experimentation. The prod role should be tightly scoped to only what production deployments need. A CI/CD token compromise in dev should not give access to prod.
- IAM Access Analyzer policy generation — after running Terraform in dev, Access Analyzer analyses CloudTrail logs and generates a policy containing only the actions actually used. The most reliable way to build a least-privilege policy from real usage data.
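The separate-roles-per-environment idea can be sketched as two provider configurations, one per root module. Account IDs and role names below are illustrative, not prescribed by this lesson:

```hcl
# dev/providers.tf — broader role, assumable by engineers for experimentation
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform-execution-dev"
  }
}
```

```hcl
# prod/providers.tf — tightly scoped role, assumable only from CI/CD
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn    = "arn:aws:iam::222222222222:role/terraform-execution-prod"
    external_id = "prod-terraform-deployment" # Must match the trust policy condition
  }
}
```

Because each root module assumes its own role, a leaked dev credential cannot touch prod: the prod role's trust policy simply does not list it.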
# IAM role for Terraform CI/CD — scoped to exactly what it needs
resource "aws_iam_role" "terraform_execution" {
name = "terraform-execution-prod"
# Trust policy — only the CI/CD runner role can assume this
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = { AWS = "arn:aws:iam::CI_ACCOUNT_ID:role/github-actions-runner" }
Action = "sts:AssumeRole"
# external_id prevents confused deputy attacks — only callers knowing this value can assume
Condition = { StringEquals = { "sts:ExternalId" = "prod-terraform-deployment" } }
}]
})
}
# Attach only the managed policies the configuration actually uses
# A configuration managing EC2 and VPC — not IAM, not RDS, not Lambda
resource "aws_iam_role_policy_attachment" "ec2" {
role = aws_iam_role.terraform_execution.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}
# Custom policy for VPC management — more precise than any managed policy
resource "aws_iam_policy" "vpc_management" {
name = "terraform-vpc-management"
policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Action = [
"ec2:CreateVpc", "ec2:DeleteVpc", "ec2:DescribeVpcs",
"ec2:CreateSubnet", "ec2:DeleteSubnet", "ec2:DescribeSubnets",
# Only the actions this configuration actually calls
]
Resource = "*"
}]
})
}
resource "aws_iam_role_policy_attachment" "vpc" {
role = aws_iam_role.terraform_execution.name
policy_arn = aws_iam_policy.vpc_management.arn
}
4 — Secure Resource Defaults in Modules
The most durable security controls are baked into modules — not enforced externally. When a module defaults to encryption-on and public-access-blocked, callers get the secure configuration without thinking about it. The insecure path requires deliberate effort.
# Security baked into a module — callers cannot accidentally create insecure resources
# Pattern 1: Deny by default — require explicit opt-out for anything public-facing
variable "allow_public_access" {
description = "Allow public access to the S3 bucket — set true only for static websites"
type = bool
default = false # Secure by default — public access requires deliberate intent
}
# Pattern 2: Encryption always on — the variable controls key type, not whether to encrypt
variable "kms_key_arn" {
description = "KMS key ARN for encryption — null uses S3-managed AES-256 (still encrypted)"
type = string
default = null # null = free AES-256, non-null = customer-managed KMS key
}
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.kms_key_arn != null ? "aws:kms" : "AES256"
kms_master_key_id = var.kms_key_arn # null for AES-256, ARN for customer KMS
# Either path results in encryption — there is no unencrypted option exposed
}
}
}
# Pattern 3: Preconditions enforce security contracts at plan time
resource "aws_db_instance" "this" {
# ... resource arguments ...
lifecycle {
precondition {
# Production databases must use Multi-AZ — no exceptions
condition = var.environment != "prod" || var.multi_az == true
error_message = "Production RDS instances must have multi_az = true."
}
precondition {
condition = var.storage_encrypted == true
error_message = "storage_encrypted must be true — unencrypted databases are not permitted."
}
}
}
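Preconditions fail at plan time on the resource. Terraform's variable validation blocks (available since Terraform 0.13) enforce the same contract at the input itself, so the error points at the offending variable rather than the resource. A minimal sketch for the storage_encrypted variable assumed in the example above:

```hcl
# Complementary guard: validate the input directly, so a caller passing
# storage_encrypted = false gets an error on the variable, not the resource.
variable "storage_encrypted" {
  description = "Whether RDS storage is encrypted — must always be true"
  type        = bool
  default     = true

  validation {
    condition     = var.storage_encrypted == true
    error_message = "storage_encrypted must be true — unencrypted databases are not permitted."
  }
}
```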
5 — Static Analysis with tfsec
Static analysis tools scan Terraform configuration before it is applied and flag security misconfigurations. They run without AWS credentials — directly against the .tf files. In CI/CD they run before the plan step and block the pipeline if critical issues are found.
New terms:
- tfsec — open-source static analysis tool specifically for Terraform. Checks for known misconfigurations across AWS, Azure, and GCP. Fast — runs in seconds with no credentials needed.
- checkov — broader policy-as-code framework covering Terraform, CloudFormation, Kubernetes, and Dockerfiles. Checks against CIS benchmarks, HIPAA, PCI-DSS, and SOC2 compliance frameworks.
- tfsec:ignore: — inline annotation to suppress a specific finding on a specific resource. Always requires a comment explaining why the finding is acceptable — suppressions without justification are security gaps to auditors.
# Install and run tfsec
brew install tfsec # macOS
# Scan the current directory — catches misconfigurations before apply
tfsec .
# Scan with minimum severity filter — only HIGH and CRITICAL
tfsec . --minimum-severity HIGH
# Output JSON for CI/CD parsing
tfsec . --format json --out tfsec-results.json
# ── Suppressing a finding when you have a legitimate reason ──────────────────
# tfsec:ignore:aws-s3-block-public-acls
# tfsec:ignore:aws-s3-no-public-buckets
# Intentional — this bucket serves a public static website
# Reviewed and approved by security team 2024-01
# The suppression comment is the audit trail — never suppress without explanation
resource "aws_s3_bucket" "public_website" {
  bucket = "acme-public-website-assets"
}
$ tfsec .
Result #1 HIGH Bucket does not have encryption enabled
─────────────────────────────────────────────
main.tf Lines 1-5
1 resource "aws_s3_bucket" "app" {
2 bucket = "my-app-bucket"
3 }
─────────────────────────────────────────────
ID aws-s3-enable-bucket-encryption
Impact Bucket objects could be read if compromised
Resolution Add aws_s3_bucket_server_side_encryption_configuration
More Info aquasecurity.github.io/tfsec/checks/aws/s3/enable-bucket-encryption
Result #2 CRITICAL Ingress security group rule allows traffic from 0.0.0.0/0
─────────────────────────────────────────────
main.tf Lines 8-15
9 cidr_blocks = ["0.0.0.0/0"]
─────────────────────────────────────────────
ID aws-ec2-no-public-ingress-sgr
Impact Port is exposed to the entire internet
Resolution Restrict cidr_blocks to known IP ranges
2 potential problems detected.
CRITICAL: 1 HIGH: 1 MEDIUM: 0 LOW: 0
# Both issues found before a single AWS API call — no credentials needed
# Fix them, re-run tfsec, then proceed to terraform plan
What just happened?
- Two misconfigurations caught before any AWS API call. tfsec ran in milliseconds against local .tf files — no credentials, no network. Both findings include the exact file and line, the security impact, and a link to the fix documentation. This is the gate that runs in CI/CD before the plan step.
- Each finding has a unique check ID. aws-s3-enable-bucket-encryption and aws-ec2-no-public-ingress-sgr are the identifiers used to suppress findings when the deviation is intentional and documented.
6 — Security Gates in CI/CD
Static analysis belongs in the CI/CD pipeline as a blocking step before plan. If tfsec finds a CRITICAL finding, the pipeline fails before a plan is generated. Misconfigured infrastructure never reaches any environment.
# .github/workflows/terraform-security.yml
# Security scanning runs BEFORE terraform plan — blocks on HIGH and CRITICAL
name: Terraform Security Gate
on:
  pull_request:
    branches: [main]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tfsec — fail on HIGH and CRITICAL
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          working_directory: .
          minimum_severity: HIGH # MEDIUM and LOW are warnings only
          soft_fail: false       # Hard fail — pipeline stops on any finding
      - name: Run checkov — compliance frameworks
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: .
          framework: terraform
          soft_fail: false
        # checkov checks CIS AWS benchmarks, HIPAA, PCI-DSS out of the box
      # Only if both security checks pass does the pipeline proceed to plan
      - name: Terraform Init
        run: terraform init
      - name: Terraform Plan
        run: terraform plan -out=tfplan
Common Security Mistakes
Treating sensitive = true as encryption
sensitive = true prevents terminal printing. It does not encrypt the value in the state file. A database password marked sensitive is still stored as plaintext in terraform.tfstate. State encryption — controlled by the backend's encrypt = true and the S3 bucket's SSE configuration — is a separate and equally necessary control. Both are required.
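One way to avoid the plaintext-password-in-state problem entirely, assuming AWS provider version 5.0 or later: let RDS generate and manage the master password in Secrets Manager via manage_master_user_password, so no password argument exists in the configuration at all. A sketch with illustrative resource arguments:

```hcl
# Sketch, assuming AWS provider >= 5.0: RDS generates the master password and
# stores it in Secrets Manager — Terraform never sees the plaintext value.
resource "aws_db_instance" "main" {
  identifier                  = "app-db"
  engine                      = "postgres"
  instance_class              = "db.t3.micro"
  allocated_storage           = 20
  username                    = "dbadmin"
  manage_master_user_password = true # No password argument — nothing to leak into state
  storage_encrypted           = true
}
```

The generated secret's location is exposed through the resource's master_user_secret attribute; applications read the credential from Secrets Manager directly rather than from Terraform outputs.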
Using the same IAM role for dev and prod deployments
A single Terraform role with access to both dev and prod means a compromised dev credential has prod access. Create separate roles per environment. The dev role can be broad — engineers experiment. The prod role should be narrowly scoped. An inadvertent prod deployment from a compromised dev credential is the exact scenario you are preventing.
Suppressing tfsec findings without a documented reason
A tfsec:ignore: annotation without a comment looks identical to a security gap to auditors and new engineers. Every suppression must include the reason and when it was reviewed. "Intentional — public website bucket, reviewed by security team 2024-01" turns a suppression into auditable evidence of a deliberate decision.
The security hierarchy — prevention over detection
Layer security controls with prevention first. Bake security into modules so it is the default — callers cannot accidentally create insecure resources. Run static analysis in CI to catch what modules miss. Enforce policies via OPA or Sentinel for organisation-wide requirements. Detection after the fact — CloudTrail alerts, GuardDuty findings — is the last resort, not the first line of defence. Every security incident that reaches production is a failure of prevention. Terraform is where prevention lives.
Practice Questions
1. A variable is marked sensitive = true. Is the value encrypted in the Terraform state file?
2. Which tool scans Terraform .tf files for security misconfigurations without requiring any AWS credentials?
3. What environment variable prefix does Terraform use to automatically read values as variable inputs?
Quiz
1. An aws_db_instance with a password argument stores that password in Terraform state. How do you protect it?
2. Where in the CI/CD pipeline should tfsec run relative to terraform plan?
3. Which security layer is most valuable — module defaults, CI/CD static analysis, or runtime monitoring?
Up Next · Lesson 30
Secrets Management
Security principles established. Lesson 30 goes deeper on secrets specifically — AWS Secrets Manager integration patterns, HashiCorp Vault with Terraform, rotating secrets without redeploying infrastructure, and the patterns that keep credentials out of state entirely.