Terraform Course
Providers
You have declared providers and used them. But there is much more to providers than a single block in versions.tf. Real infrastructure spans multiple regions, multiple accounts, and multiple clouds — all in the same configuration. This lesson shows you how providers handle all of it.
This lesson covers
How providers authenticate → Multiple provider configurations → Provider aliases for multi-region and multi-account → Using multiple providers in one configuration → Provider version management in depth
How Providers Authenticate
Every provider needs some form of identity to interact with its platform. For AWS, that is IAM credentials. For Azure, it is a service principal. For Cloudflare, it is an API token. Each provider has its own authentication mechanism — but all of them follow the same principle: never hardcode credentials in your configuration files.
The AWS provider supports multiple authentication methods, checked in this order of precedence:
| Priority | Method | Best for |
|---|---|---|
| 1 — Highest | Static credentials in provider block | Never — credentials in code |
| 2 | Environment variables (AWS_ACCESS_KEY_ID etc.) | CI/CD pipelines |
| 3 | Shared credentials file (~/.aws/credentials) | Local development |
| 4 | AWS profile from shared config file | Multi-account local development |
| 5 — Lowest | IAM instance profile / ECS task role | Terraform running on EC2 or ECS |
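The precedence chain means the safest provider block is often the emptiest one. As a sketch — assuming credentials are supplied from outside the configuration — the same block works both locally and in CI:

```hcl
# Sketch: a provider block with no credentials in it at all.
# Locally, authentication falls back to ~/.aws/credentials;
# in CI/CD, it comes from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables. Same configuration, different method per environment.
provider "aws" {
  region = "us-east-1"
}
```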
The Analogy
Think of provider authentication like logging into a building. You can use a key card, a PIN code, biometrics, or a security guard who recognises your face. You pick the method that fits the situation — you do not engrave your PIN on the door. Terraform's provider authentication chain works the same way: pick the method appropriate for the environment, never embed credentials in the configuration itself.
A Single Provider — Done Properly
Before we get to multiple providers, let us write the AWS provider block with every argument you should know. This is the complete provider configuration for a single-account, single-region AWS setup — the kind most projects start with.
New terms:
- profile — the name of an AWS named profile in ~/.aws/credentials. Named profiles let you store credentials for multiple AWS accounts in the same file and switch between them by name. The default profile is named default.
- default_tags block — tags applied automatically to every resource the AWS provider creates in this configuration. Without this, you must add the same tags to every individual resource block. With it, tags cascade down to everything — no repetition, no forgetting.
- assume_role block — tells the AWS provider to assume an IAM role before making API calls. Used in multi-account setups where your personal credentials authenticate to one account and then assume a role in a different account. The ARN identifies which role to assume.
provider "aws" {
  region  = var.region
  profile = var.aws_profile

  default_tags {
    tags = {
      Project     = "acme-platform"
      ManagedBy   = "Terraform"
      Environment = var.environment
    }
  }
}
$ terraform apply
aws_s3_bucket.logs: Creating...
aws_instance.web: Creating...
aws_security_group.web: Creating...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
$ aws s3api get-bucket-tagging --bucket acme-logs-prod-a3f2b1c4
{
"TagSet": [
{ "Key": "Environment", "Value": "prod" },
{ "Key": "ManagedBy", "Value": "Terraform" },
{ "Key": "Project", "Value": "acme-platform" }
]
}
What just happened?
- default_tags applied to every resource without a single tags block in main.tf. The S3 bucket, EC2 instance, and security group all received the Project, ManagedBy, and Environment tags automatically. When AWS returned the bucket's tags via the CLI, all three were present — set by the provider, not by the resource block.
- Resource-level tags merge with default_tags. If a resource block also has a tags argument, those tags are merged with the defaults. The resource-level tag wins if there is a conflict on the same key. This lets you add resource-specific tags without repeating the global ones.
- profile = var.aws_profile reads from ~/.aws/credentials. On a developer's machine this resolves to a named profile. In CI/CD, the variable would be left empty and credentials would come from environment variables instead — same configuration, different authentication method per environment.
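To make the merge behaviour concrete, here is a sketch (the bucket name and tag values are hypothetical) of a resource-level tags block layered on top of the default_tags from the provider block above:

```hcl
# Sketch: resource-level tags merged with the provider's default_tags.
# Assumes the provider block above with Project/ManagedBy/Environment defaults.
resource "aws_s3_bucket" "logs" {
  bucket = "acme-logs-prod"

  tags = {
    Name        = "log-archive" # added on top of the provider defaults
    Environment = "prod-logs"   # same key as a default — the resource value wins
  }
}
# Effective tags: Project and ManagedBy from default_tags,
# Name from the resource, Environment = "prod-logs" (resource overrides default).
```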
Provider Aliases — Multiple Regions in One Configuration
A common real-world requirement: your main infrastructure runs in one region, but you must also create an SSL certificate for a CloudFront distribution — and AWS requires ACM certificates used by CloudFront to exist specifically in us-east-1, regardless of where your application runs. Or your application is deployed to two regions for high availability.
You cannot solve this with one provider block. Terraform does not allow two provider blocks for the same provider with different configurations — unless you give one of them an alias. We are about to write a configuration that deploys an S3 bucket in two AWS regions simultaneously using provider aliases.
New terms:
- alias — a unique name given to a provider configuration. When a provider has an alias, it is no longer the default for that provider type. Resources must explicitly reference it using the provider meta-argument.
- provider meta-argument — an argument available on every resource block that tells Terraform which provider configuration to use. Written as provider = aws.secondary, where aws is the provider type and secondary is the alias. Resources without this argument use the default provider configuration.
- default provider — the provider block without an alias. Every provider type can have at most one default configuration. All resources that do not specify a provider argument use the default.
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = { ManagedBy = "Terraform" }
  }
}

provider "aws" {
  alias  = "eu"
  region = "eu-west-1"

  default_tags {
    tags = { ManagedBy = "Terraform" }
  }
}

resource "aws_s3_bucket" "primary" {
  bucket = "acme-primary-us-${random_id.suffix.hex}"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.eu
  bucket   = "acme-replica-eu-${random_id.suffix.hex}"
}

resource "random_id" "suffix" {
  byte_length = 4
}
$ terraform apply
Terraform will perform the following actions:
# aws_s3_bucket.primary will be created
+ resource "aws_s3_bucket" "primary" {
+ bucket = (known after apply)
+ provider = "provider[\"registry.terraform.io/hashicorp/aws\"]"
+ region = "us-east-1"
}
# aws_s3_bucket.replica will be created
+ resource "aws_s3_bucket" "replica" {
+ bucket = (known after apply)
+ provider = "provider[\"registry.terraform.io/hashicorp/aws\"].eu"
+ region = "eu-west-1"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Enter a value: yes
random_id.suffix: Creating...
random_id.suffix: Creation complete [id=a3f2b1c4]
aws_s3_bucket.primary: Creating... [provider: us-east-1]
aws_s3_bucket.replica: Creating... [provider: eu-west-1]
aws_s3_bucket.primary: Creation complete [id=acme-primary-us-a3f2b1c4]
aws_s3_bucket.replica: Creation complete [id=acme-replica-eu-a3f2b1c4]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
What just happened?
- Two buckets were created in two different regions simultaneously. The primary bucket landed in us-east-1 because it used the default provider. The replica bucket landed in eu-west-1 because it explicitly referenced the aliased provider with provider = aws.eu. One configuration, one apply, two regions.
- The plan output shows which provider each resource uses. Look at the provider field in the plan — provider["registry.terraform.io/hashicorp/aws"] for the primary and ...hashicorp/aws"].eu for the replica. This is how you verify in the plan that the right provider alias is being used before applying.
- Both buckets were created in parallel. Since neither depends on the other — they both only depend on random_id.suffix — Terraform created them simultaneously after the random ID was available. Multi-region deployments are as fast as single-region ones.
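Aliases are not limited to resources — data sources accept the same provider meta-argument. A sketch, reusing the aws.eu alias from above (the AMI name filter is illustrative):

```hcl
# Sketch: a data source reading through the aliased eu-west-1 provider.
data "aws_ami" "eu_ubuntu" {
  provider    = aws.eu            # look up the AMI in eu-west-1, not us-east-1
  most_recent = true
  owners      = ["099720109477"]  # Canonical's account ID for Ubuntu AMIs

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
```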
Multi-Account with assume_role
Enterprise AWS setups use multiple accounts — one for production, one for staging, one for shared services, one for logging. Your IAM credentials live in a central account. To manage resources in other accounts, Terraform assumes an IAM role in each target account. This is the assume_role pattern.
We are writing a configuration that manages resources in two AWS accounts simultaneously. Your credentials authenticate to account A. Terraform assumes a deployment role in account B to create resources there — all from one configuration.
New terms:
- assume_role block — instructs the provider to call AWS STS (Security Token Service) to obtain temporary credentials for the specified role before making any API calls. The temporary credentials expire after a session duration you control.
- role_arn — the Amazon Resource Name of the IAM role to assume. The role must exist in the target account and must have a trust policy that allows your IAM identity in the source account to assume it.
- session_name — a label attached to the temporary session. Appears in CloudTrail logs so you can identify which Terraform run made which API call. Always set this to something descriptive.
- external_id — an optional secret shared between the caller and the role's trust policy. Prevents the confused deputy problem — a scenario where a third party tricks your service into assuming a role on their behalf. Use it whenever an external party defines the role you assume.
provider "aws" {
  region  = "us-east-1"
  profile = "shared-services"
}

provider "aws" {
  alias   = "production"
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn     = "arn:aws:iam::111122223333:role/TerraformDeployRole"
    session_name = "terraform-production-deploy"
    external_id  = var.assume_role_external_id
  }
}
resource "aws_s3_bucket" "shared_artifacts" {
  bucket = "acme-shared-artifacts"
}

resource "aws_s3_bucket" "prod_data" {
  provider = aws.production
  bucket   = "acme-prod-data"
}
$ terraform apply
Terraform will perform the following actions:
# aws_s3_bucket.shared_artifacts will be created
+ resource "aws_s3_bucket" "shared_artifacts" {
+ bucket = "acme-shared-artifacts"
}
# Using profile: shared-services (account 999988887777)
# aws_s3_bucket.prod_data will be created
+ resource "aws_s3_bucket" "prod_data" {
+ bucket = "acme-prod-data"
}
# Assuming role in account 111122223333
Plan: 2 to add, 0 to change, 0 to destroy.
Enter a value: yes
aws_s3_bucket.shared_artifacts: Creating... [account: 999988887777]
aws_s3_bucket.prod_data: Creating... [account: 111122223333]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
What just happened?
- Two resources were created in two different AWS accounts. The shared_artifacts bucket landed in the shared-services account — the one your credentials authenticate to directly. The prod_data bucket landed in account 111122223333 — the production account — because Terraform called STS to assume the TerraformDeployRole there first.
- Your credentials never touched the production account directly. The assume_role flow is: authenticate to the shared-services account → call STS to assume the role in the production account → receive temporary credentials → use them to create the bucket. The production account's audit logs show terraform-production-deploy as the session name — making the Terraform-originated API calls identifiable.
- external_id came from a variable. The external ID is a secret agreed between your organisation and the role's trust policy. By storing it in a variable — and never hardcoding it — it can be provided securely via environment variables or a secrets manager without appearing in your .tf files or Git history.
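For the assume_role flow to work, the production account has to hold up its end: the role's trust policy must allow the shared-services account to assume it, and must check the external ID. A sketch of that role as it might be managed from the production side (account IDs taken from the example above; the resource name is hypothetical):

```hcl
# Sketch: the role in the production account (111122223333) that the
# shared-services account (999988887777) is trusted to assume.
resource "aws_iam_role" "terraform_deploy" {
  name = "TerraformDeployRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::999988887777:root" }
      Condition = {
        # Must match the external_id the provider block sends.
        StringEquals = { "sts:ExternalId" = var.assume_role_external_id }
      }
    }]
  })
}
```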
Multiple Different Providers — AWS and Cloudflare Together
One of Terraform's most powerful capabilities is managing completely different platforms in a single configuration. Here is a practical example: you deploy a web application to AWS and then automatically update the Cloudflare DNS record to point to it — all in one apply.
We are writing a configuration that creates an EC2 instance on AWS and a Cloudflare DNS A record pointing to its public IP. When the instance IP changes — for example after a destroy and recreate — the DNS record updates automatically on the next apply.
New terms:
- cloudflare provider — the Terraform provider for Cloudflare, maintained by Cloudflare themselves. Authenticates via an API token scoped to specific zones and permissions. Never uses username/password.
- cloudflare_record resource — creates or manages a DNS record in a Cloudflare zone. The zone_id identifies which domain the record belongs to. The value is what the record resolves to — in this case, the EC2 instance's public IP.
- cross-provider reference — when an argument of a resource in one provider references an attribute of a resource in a different provider. aws_instance.web.public_ip inside a cloudflare_record resource is a cross-provider reference. Terraform resolves the dependency automatically — the EC2 instance is created first, then the DNS record uses its IP.
- proxied — a Cloudflare-specific argument. When true, traffic to this DNS record flows through Cloudflare's network first — enabling CDN, DDoS protection, and WAF. When false, the record is a plain DNS record with no Cloudflare proxy.
provider "aws" {
  region = "us-east-1"
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  tags = {
    Name      = "web-server"
    ManagedBy = "Terraform"
  }
}

resource "cloudflare_record" "web" {
  zone_id = var.cloudflare_zone_id
  name    = "app"
  type    = "A"
  value   = aws_instance.web.public_ip
  proxied = true
  ttl     = 1
}
$ terraform apply
Terraform will perform the following actions:
# aws_instance.web will be created
+ resource "aws_instance" "web" {
+ ami = "ami-0c55b159cbfafe1f0"
+ instance_type = "t3.micro"
+ public_ip = (known after apply)
}
# cloudflare_record.web will be created
+ resource "cloudflare_record" "web" {
+ name = "app"
+ proxied = true
+ type = "A"
+ value = (known after apply)
+ zone_id = "abc123def456"
}
Plan: 2 to add, 0 to change, 0 to destroy.
Enter a value: yes
aws_instance.web: Creating...
aws_instance.web: Creation complete after 32s [id=i-0abc123def456789]
public_ip = "54.211.89.132"
cloudflare_record.web: Creating...
cloudflare_record.web: Creation complete after 1s [id=cf-record-xyz]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
What just happened?
- The Cloudflare record waited for the EC2 instance. The DNS record value is aws_instance.web.public_ip — an attribute that is unknown until AWS creates the instance. Terraform detected this cross-provider dependency automatically and created the EC2 instance first. Only after the instance was running and its public IP was known did it create the DNS record.
- Two completely different APIs were called in one apply. The AWS API was called to create the EC2 instance. The Cloudflare API was called to create the DNS record. Both are managed as first-class Terraform resources — tracked in the same state file, destroyed together by terraform destroy, and updated together if either resource changes.
- proxied = true and ttl = 1 are Cloudflare-specific. The ttl value of 1 is Cloudflare's special value meaning "automatic TTL" — used when proxying is enabled. These arguments exist only in the Cloudflare provider's schema; they would be meaningless on an AWS resource. Each provider defines its own resource schema independently.
Provider Version Management
Provider versions matter more than most beginners realise. A major provider version bump — AWS provider 4.x to 5.x — can introduce breaking changes that rename arguments, remove resources, or change behaviour. Understanding how to manage this is a production-critical skill.
We are writing the version constraint patterns you will use in every project and the command to upgrade providers safely.
New terms:
- ~> (pessimistic constraint operator) — allows only the rightmost version component to increment. ~> 5.0 allows any 5.x but not 6.0. ~> 5.31 allows 5.31 and above within the 5.x series but not 6.0; ~> 5.31.0 allows only 5.31.x patch releases. The more version components you specify, the tighter the constraint.
- terraform init -upgrade — re-resolves all provider versions against the current constraints and downloads newer versions if available. Updates the lock file with the new versions. Run this deliberately when you want to upgrade — not automatically.
- terraform providers lock — regenerates the lock file with checksums for multiple platforms. Use this when your local machine is macOS but your CI runs on Linux — the lock file needs checksums for both platforms so terraform init works on both without failing checksum verification.
terraform {
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.31"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 4.0.0, < 5.0.0"
    }
  }
}
$ terraform init -upgrade
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.31"...
- Finding cloudflare/cloudflare versions matching ">= 4.0.0, < 5.0.0"...
- Installing hashicorp/aws v5.35.0...
- Installing cloudflare/cloudflare v4.20.0...
Terraform has made some changes to the provider lock file:
- hashicorp/aws: previous version was 5.31.0, new version is 5.35.0
- cloudflare/cloudflare: version unchanged at 4.20.0

$ terraform providers lock -platform=linux/amd64 -platform=darwin/arm64
Providers locked for:
- linux/amd64
- darwin/arm64
What just happened?
- The AWS provider was upgraded from 5.31.0 to 5.35.0. The constraint ~> 5.31 allows any 5.31.x or higher within the 5.x range. Running terraform init -upgrade found 5.35.0 as the newest qualifying version and updated the lock file. The Cloudflare provider had no newer version available within its constraint, so it stayed at 4.20.0.
- The lock file records the change explicitly. Terraform listed exactly which providers changed and from what version to what version. This diff is what you review and commit to Git — it is your team's record that a provider upgrade happened, when, and to which version.
- providers lock generated checksums for two platforms. The lock file now contains verified checksums for both linux/amd64 (the CI server) and darwin/arm64 (an Apple Silicon Mac). When a developer or CI runner calls terraform init, their platform's checksum is verified against the lock file — preventing tampered provider binaries from being used silently.
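The lock file itself is written in HCL. A sketch of roughly what the upgraded AWS entry looks like — the checksum values are elided here, as real entries list one per platform:

```hcl
# Sketch: an entry in .terraform.lock.hcl after the upgrade.
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.35.0"
  constraints = "~> 5.31"
  hashes = [
    # one checksum per locked platform — values elided
  ]
}
```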
Common Mistakes
Forgetting to specify provider on resources when using aliases
When you define an aliased provider, any resource that does not specify provider = aws.alias_name uses the default provider. This is correct behaviour — but beginners often forget and accidentally create a resource in the wrong region or account. Always double-check the plan output to confirm which provider each resource is using before applying.
Running terraform init -upgrade without reviewing the lock file diff
Upgrading providers is a deliberate action — not something to do casually before an apply. A provider upgrade can change how existing resources behave, rename arguments, or remove deprecated resources. Always review the lock file changes after upgrading, run terraform plan to check for unexpected diffs, and test in a non-production environment before upgrading production.
Using a loose version constraint like >= 4.0 with no upper bound
A constraint of >= 4.0 with no upper bound means Terraform will happily download version 5.0, 6.0, or any future major release — even if they contain breaking changes. Always include an upper bound or use the pessimistic operator ~> to prevent major version jumps from silently breaking your configuration on a fresh init.
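As a side-by-side sketch of the difference:

```hcl
# Too loose: nothing stops a breaking 5.0 or 6.0 release from being
# installed on a fresh terraform init.
# version = ">= 4.0"

# Bounded: both of these accept minor and patch updates within 4.x
# but refuse 5.0.
# version = "~> 4.0"
# version = ">= 4.0, < 5.0"
```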
Multi-provider configurations and the state file
When you use multiple providers in one configuration — AWS and Cloudflare together, for example — all resources from all providers land in the same state file. A single terraform destroy removes everything regardless of which provider owns it. This is by design. Keep logically unrelated infrastructure in separate configurations with separate state files — not everything in one giant configuration just because Terraform can handle it.
Practice Questions
1. What argument do you add to a provider block to allow multiple configurations of the same provider type in one project?
2. In a multi-account AWS setup, which block inside the provider configuration tells Terraform to obtain temporary credentials for a different account?
3. Which command re-resolves provider versions against current constraints and updates the lock file with newer versions?
Quiz
1. You define two AWS provider configurations — one default and one with alias = "eu". How does Terraform know which provider to use for each resource?
2. You have an aws_instance and a cloudflare_record that references aws_instance.web.public_ip. In what order does Terraform create them?
3. Which version constraint is safest for a production configuration that wants patch and minor updates but must not jump to a new major version?
Up Next · Lesson 9
Resources
You have been using resources since Lesson 1. Lesson 9 goes inside them — meta-arguments, lifecycle rules, dependencies, and the resource behaviours that catch every beginner off guard at least once.