Terraform Course
Terraform with Jenkins
Jenkins remains the most widely deployed CI/CD platform in enterprises — particularly organisations that predate GitHub Actions. The same Terraform pipeline principles from Lesson 37 apply, but Jenkins has its own idioms: Jenkinsfiles, credential binding, shared libraries, and the agent model. This lesson builds a complete Terraform pipeline in Jenkins from the ground up.
This lesson covers
Jenkins pipeline fundamentals → Declarative vs scripted pipeline → AWS credential binding → Full Terraform Jenkinsfile → Input step for approval gates → Jenkins shared libraries → Docker agent for consistent environments → Multibranch pipeline for environments
Jenkins Pipeline Fundamentals
A Jenkins pipeline is defined in a Jenkinsfile stored in the root of the repository — just as a GitHub Actions workflow lives in .github/workflows/. Jenkins reads the Jenkinsfile automatically when a repository is linked to a Jenkins job or multibranch pipeline.
New terms:
- Declarative pipeline — the modern Jenkins pipeline syntax. Structured with a fixed pipeline { agent; stages { stage { steps {} } } } skeleton. Easier to read, more validation at parse time. Recommended for new pipelines.
- Scripted pipeline — the older Groovy-based syntax. More flexible but harder to read and validate. Uses node { stage { } } blocks. Still found in many enterprise codebases.
- agent — the machine or container that executes the pipeline steps. agent any uses any available Jenkins worker. agent { docker { image '...' } } runs inside a specific Docker container.
- withCredentials — Jenkins step that injects stored credentials as environment variables for the duration of a block. Credentials are masked in logs. The correct way to handle secrets in Jenkins — never hardcode or echo credentials.
- input step — pauses the pipeline and waits for a human to click Proceed or Abort in the Jenkins UI. The Jenkins equivalent of a GitHub Environment approval gate.
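To make the contrast concrete, here is the same trivial build written in both syntaxes (a minimal sketch; the stage name and echo step are illustrative):

```groovy
// Declarative: fixed skeleton, validated when the Jenkinsfile is parsed
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello from a declarative pipeline'
            }
        }
    }
}

// Scripted: plain Groovy; node {} allocates an agent, structure is up to you
node {
    stage('Hello') {
        echo 'Hello from a scripted pipeline'
    }
}
```

A repository uses one form or the other; a single Jenkinsfile cannot mix the two top-level styles.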
The Analogy
A Jenkins pipeline is like an assembly line on a factory floor. Each stage is a workstation — security scanning, initialisation, planning, approval, deployment. The agent is the worker who carries the product between stations. The input step is the quality gate where a human inspector must sign off before the product moves to the final packaging stage. Nothing leaves the factory without passing every station in order.
AWS Credential Binding in Jenkins
Jenkins stores credentials centrally in its Credentials store — not in the Jenkinsfile. The pipeline references them by ID and they are injected at runtime. For AWS, Jenkins supports four patterns in increasing order of security preference.
// Pattern 1: AWS access keys stored in Jenkins credentials (least preferred)
// Store as Username/Password credential: username=ACCESS_KEY_ID, password=SECRET_ACCESS_KEY
withCredentials([usernamePassword(
credentialsId: 'aws-terraform-credentials', // Jenkins credential ID
usernameVariable: 'AWS_ACCESS_KEY_ID', // Injected as env var
passwordVariable: 'AWS_SECRET_ACCESS_KEY' // Injected as env var — masked in logs
)]) {
sh 'terraform plan' // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are available here
}
// Credentials are unset after the withCredentials block exits
// Pattern 2: AWS credentials plugin — dedicated binding type
withCredentials([aws(
credentialsId: 'aws-terraform-credentials',
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
)]) {
sh 'terraform plan'
}
// Pattern 3: IAM role on the Jenkins agent EC2 instance (most preferred)
// The Jenkins controller and agents run on EC2 with an instance profile
// AWS credentials are fetched from IMDS automatically — no credentials stored anywhere
// The agent's instance profile has permission to assume the Terraform deployment role:
// agent { label 'terraform-agent' } // Agent has the IAM instance profile attached
// sh 'aws sts assume-role --role-arn ...' // Assume the specific deployment role
// No withCredentials block needed at all
// Pattern 4: AWS STS assume-role from the agent's instance profile — a concrete implementation of Pattern 3
withCredentials([string(credentialsId: 'terraform-role-arn', variable: 'ROLE_ARN')]) {
sh '''
# Use the agent's instance profile to assume the deployment role
CREDS=$(aws sts assume-role \
--role-arn $ROLE_ARN \
--role-session-name jenkins-terraform-${BUILD_NUMBER} \
--query Credentials \
--output json)
export AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $CREDS | jq -r .SessionToken)
terraform apply -auto-approve tfplan
'''
}
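As an alternative to the manual aws sts assume-role shell above, the Pipeline: AWS Steps plugin, if it is installed on the controller, wraps the same pattern in a withAWS block. A sketch, with an illustrative role ARN:

```groovy
// Assumes the Pipeline: AWS Steps plugin is installed and the agent's
// instance profile is permitted to assume the deployment role
// (the ARN below is illustrative)
withAWS(role: 'arn:aws:iam::123456789012:role/terraform-deploy',
        roleSessionName: "jenkins-terraform-${env.BUILD_NUMBER}",
        region: 'us-east-1') {
    // Temporary STS credentials are exported as AWS_* env vars inside this block
    sh 'terraform apply -auto-approve tfplan'
}
```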
Full Terraform Jenkinsfile — Declarative Pipeline
This is a complete production-ready Terraform Jenkinsfile implementing the same plan-review-apply pattern from Lesson 37, using Jenkins native constructs.
// Jenkinsfile — place in root of repository
// Implements: security scan → init → validate → plan → approval gate → apply
pipeline {
// Run inside a Docker container with Terraform pre-installed
// This ensures every build uses exactly the same Terraform version
agent {
docker {
image 'hashicorp/terraform:1.6.3' // Pinned version — never use :latest
args '-v /var/run/docker.sock:/var/run/docker.sock --entrypoint=""'
}
}
// Pipeline-level environment variables
environment {
TF_DIR = 'infrastructure/services/payments'
TF_IN_AUTOMATION = 'true' // Adjusts Terraform output for CI — suppresses interactive guidance
TF_INPUT = 'false' // Disables interactive prompts so commands fail fast instead of hanging
AWS_DEFAULT_REGION = 'us-east-1'
}
// Only keep last 10 builds — avoid filling disk with old plan artifacts
options {
buildDiscarder(logRotator(numToKeepStr: '10'))
timestamps() // Prefix every log line with a timestamp
ansiColor('xterm') // Enable colour in Terraform output
timeout(time: 1, unit: 'HOURS') // Kill runaway builds after 1 hour
}
stages {
// Stage 1: Security scan — must pass before any Terraform commands
stage('Security Scan') {
steps {
sh '''
# Install tfsec if not in the Docker image
curl -sL https://github.com/aquasecurity/tfsec/releases/latest/download/tfsec-linux-amd64 \
-o /usr/local/bin/tfsec && chmod +x /usr/local/bin/tfsec
# Scan — fail on HIGH and CRITICAL findings
tfsec ${TF_DIR} --minimum-severity HIGH --no-colour
'''
}
}
// Stage 2: Terraform Init
stage('Init') {
steps {
withCredentials([aws(
credentialsId: 'aws-terraform-credentials',
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
)]) {
dir(env.TF_DIR) {
sh 'terraform init -no-color'
}
}
}
}
// Stage 3: Validate syntax and references (no API calls)
stage('Validate') {
steps {
dir(env.TF_DIR) {
sh 'terraform validate -no-color'
}
}
}
// Stage 4: Plan — generate and save plan file, display diff in logs
stage('Plan') {
steps {
withCredentials([aws(
credentialsId: 'aws-terraform-credentials',
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
)]) {
dir(env.TF_DIR) {
sh '''
# pipefail makes the stage fail if terraform fails; without it,
# tee's exit code (0) would mask a failed plan
set -o pipefail
terraform plan -no-color -out=tfplan 2>&1 | tee plan-output.txt
'''
// Archive plan file and output as build artifacts
archiveArtifacts artifacts: 'tfplan, plan-output.txt', fingerprint: true
}
}
}
// Post-stage: display plan output in Jenkins build summary
post {
always {
script {
def planOutput = readFile("${env.TF_DIR}/plan-output.txt")
currentBuild.description = planOutput.readLines()
.findAll { it.contains('Plan:') }
.join('\n') ?: 'No changes'
}
}
}
}
// Stage 5: Approval gate — pause for human review
// This stage only runs for the main branch (production deploys)
stage('Approval') {
when {
branch 'main' // Only require approval for main branch
}
steps {
script {
// Read the plan summary to show the reviewer what they are approving
def planOutput = readFile("${env.TF_DIR}/plan-output.txt")
def planSummary = planOutput.readLines()
.findAll { it.contains('Plan:') || it.contains('will be') }
.join('\n')
// Input step — blocks until a human approves or aborts
input(
message: "Review Terraform Plan before applying to production:\n\n${planSummary}",
ok: 'Apply to Production',
submitter: 'platform-team,senior-engineers', // Only these users can approve
parameters: [
booleanParam(
name: 'CONFIRM',
defaultValue: false,
description: 'Check this box to confirm you have reviewed the full plan'
)
]
)
}
}
}
// Stage 6: Apply — use the saved plan file from Stage 4
stage('Apply') {
when {
anyOf {
branch 'main' // Production
branch 'develop' // Dev environment — no approval required
}
}
steps {
withCredentials([aws(
credentialsId: 'aws-terraform-credentials',
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
)]) {
dir(env.TF_DIR) {
sh '''
terraform apply -no-color -auto-approve tfplan
'''
}
}
}
}
}
// Post-pipeline notifications
post {
success {
slackSend(
channel: '#infrastructure-deployments',
color: 'good',
message: "Terraform apply succeeded: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
)
}
failure {
slackSend(
channel: '#infrastructure-alerts',
color: 'danger',
message: "Terraform pipeline FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER} - ${env.BUILD_URL}"
)
}
always {
// Clean up workspace to avoid stale plan files on next run
cleanWs()
}
}
}
Started by GitHub push to main
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] stage: Security Scan
+ tfsec infrastructure/services/payments --minimum-severity HIGH
0 potential problems detected.
[Pipeline] stage: Init
[withCredentials] Using credentials aws-terraform-credentials
+ terraform init -no-color
Initializing modules... Initializing provider plugins...
[Pipeline] stage: Validate
+ terraform validate -no-color
Success! The configuration is valid.
[Pipeline] stage: Plan
+ terraform plan -no-color -out=tfplan
+ aws_security_group_rule.payments_egress
~ aws_ecs_service.payments
~ desired_count: 2 -> 3
Plan: 1 to add, 1 to change, 0 to destroy.
Archived artifacts: tfplan, plan-output.txt
[Pipeline] stage: Approval
Input requested: Review Terraform Plan before applying to production
Approved by: alice (platform-team)
CONFIRM: true
[Pipeline] stage: Apply
+ terraform apply -no-color -auto-approve tfplan
aws_security_group_rule.payments_egress: Creating...
aws_ecs_service.payments: Modifying... desired_count: 2 -> 3
Apply complete! Resources: 1 added, 1 changed, 0 destroyed.
Slack: #infrastructure-deployments — Terraform apply succeeded
What just happened?
- The Docker agent guarantees a consistent Terraform version. Every build runs in the same hashicorp/terraform:1.6.3 container — not whatever Terraform version happens to be installed on the Jenkins worker. When you upgrade Terraform, you change one line in the Jenkinsfile and all builds immediately use the new version.
- The input step captures who approved and when. Jenkins records the approver's username, timestamp, and the exact build that was approved in the build log. This is the audit trail that compliance teams require — the same information that GitHub Environment approvals provide, but within Jenkins' own audit system.
- cleanWs() prevents stale plan files from affecting the next build. Without workspace cleanup, the plan file from build #42 could still be on disk when build #43 runs. If build #43's plan fails, it might inadvertently apply the old plan. Cleaning the workspace on every run ensures each build starts from a known clean state.
Jenkins Shared Libraries for Reusable Pipeline Logic
When ten teams each maintain their own Terraform Jenkinsfile, they each have their own version of the same init → validate → plan → approve → apply pattern. A Jenkins Shared Library extracts that common logic into a reusable function — one place to update when the pattern changes.
// Shared Library structure — lives in a separate Git repository
// jenkins-shared-library/
// └── vars/
// └── terraformPipeline.groovy ← The reusable pipeline function
// vars/terraformPipeline.groovy
def call(Map config) {
// config contains: tfDir, credentialsId, requireApproval, approvers, environment, plus optional region and terraformVersion
pipeline {
agent { docker { image "hashicorp/terraform:${config.terraformVersion ?: '1.6.3'}" } }
environment {
TF_IN_AUTOMATION = 'true'
AWS_DEFAULT_REGION = config.region ?: 'us-east-1'
}
stages {
stage('Security Scan') {
steps { sh "tfsec ${config.tfDir} --minimum-severity HIGH" }
}
stage('Init') {
steps {
withCredentials([aws(credentialsId: config.credentialsId,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
dir(config.tfDir) { sh 'terraform init -no-color' }
}
}
}
stage('Plan') {
steps {
withCredentials([aws(credentialsId: config.credentialsId,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
dir(config.tfDir) {
sh 'terraform plan -no-color -out=tfplan'
archiveArtifacts artifacts: 'tfplan', fingerprint: true
}
}
}
}
stage('Approval') {
when { expression { return config.requireApproval == true } }
steps {
input(message: "Apply ${config.environment} infrastructure?",
submitter: config.approvers ?: 'platform-team')
}
}
stage('Apply') {
steps {
withCredentials([aws(credentialsId: config.credentialsId,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
dir(config.tfDir) { sh 'terraform apply -no-color -auto-approve tfplan' }
}
}
}
}
}
}
// Any team's Jenkinsfile — 5 lines instead of 100
@Library('jenkins-shared-library@v2.1.0') _ // Import the shared library
terraformPipeline(
tfDir: 'infrastructure/services/payments',
credentialsId: 'aws-terraform-credentials',
environment: 'production',
requireApproval: true,
approvers: 'platform-team,alice,bob'
)
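The same call works for a dev pipeline simply by changing the config map. For example (the service directory and credential ID here are hypothetical):

```groovy
@Library('jenkins-shared-library@v2.1.0') _

terraformPipeline(
    tfDir: 'infrastructure/services/search',  // hypothetical service directory
    credentialsId: 'aws-dev-credentials',     // hypothetical dev credential ID
    environment: 'dev',
    requireApproval: false                    // skips the Approval stage entirely
)
```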
Multibranch Pipeline for Multiple Environments
A Jenkins Multibranch Pipeline automatically creates a separate job for each branch in the repository. This is the Jenkins pattern for deploying different branches to different environments — feature branches to dev, main to staging, releases to production.
// Multibranch Jenkinsfile — uses branch name to select environment config
pipeline {
agent { docker { image 'hashicorp/terraform:1.6.3' } }
environment {
TF_IN_AUTOMATION = 'true'
}
stages {
stage('Configure Environment') {
steps {
script {
// Map branch names to environment configurations
def branchEnvMap = [
'main': [env: 'staging', credId: 'aws-staging', approve: false],
'release': [env: 'production', credId: 'aws-prod', approve: true],
'develop': [env: 'dev', credId: 'aws-dev', approve: false]
]
def branchName = env.BRANCH_NAME
def envConfig = branchEnvMap[branchName]
if (!envConfig) {
// Feature branches deploy to dev without approval
envConfig = [env: 'dev', credId: 'aws-dev', approve: false]
}
// Make config available to downstream stages
env.TF_ENVIRONMENT = envConfig.env
env.CREDS_ID = envConfig.credId
env.REQUIRE_APPROVE = envConfig.approve.toString()
}
}
}
stage('Init') {
steps {
withCredentials([aws(credentialsId: env.CREDS_ID,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
sh """
terraform init \
-backend-config="key=services/payments/${env.TF_ENVIRONMENT}.tfstate"
"""
}
}
}
stage('Plan') {
steps {
withCredentials([aws(credentialsId: env.CREDS_ID,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
sh """
terraform plan -no-color -out=tfplan \
-var="environment=${env.TF_ENVIRONMENT}"
"""
archiveArtifacts artifacts: 'tfplan', fingerprint: true
}
}
}
stage('Approval') {
when { expression { return env.REQUIRE_APPROVE == 'true' } }
steps {
input message: "Deploy to ${env.TF_ENVIRONMENT}?",
submitter: 'platform-team'
}
}
stage('Apply') {
steps {
withCredentials([aws(credentialsId: env.CREDS_ID,
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
sh 'terraform apply -no-color -auto-approve tfplan'
}
}
}
}
post { always { cleanWs() } }
}
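One caveat with the map lookup above: real release branches are usually named with a prefix such as release/1.4, which an exact-match map misses. A prefix check (a sketch, for the same script block) closes that gap:

```groovy
// Inside the Configure Environment script block: handle release/* branches
def branchName = env.BRANCH_NAME
def envConfig
if (branchName?.startsWith('release/')) {
    envConfig = [env: 'production', credId: 'aws-prod', approve: true]
} else if (branchName == 'main') {
    envConfig = [env: 'staging', credId: 'aws-staging', approve: false]
} else {
    envConfig = [env: 'dev', credId: 'aws-dev', approve: false]  // develop and feature branches
}
```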
Common Jenkins + Terraform Mistakes
Not setting TF_IN_AUTOMATION=true and TF_INPUT=false
Without TF_IN_AUTOMATION=true, Terraform prints a message suggesting users run terraform plan before apply — cluttering CI logs with guidance meant for interactive users. Note that TF_IN_AUTOMATION only adjusts output; it does not suppress the interactive Enter a value: prompt. To stop terraform apply from hanging indefinitely waiting for terminal input that never comes in an automated pipeline, also set TF_INPUT=false (the environment-variable equivalent of passing -input=false to each command). Always set both variables in CI/CD environments.
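Both variables can be set once at the pipeline level; a minimal sketch:

```groovy
pipeline {
    agent any
    environment {
        TF_IN_AUTOMATION = 'true'   // CI-friendly output, no interactive guidance
        TF_INPUT         = 'false'  // never prompt "Enter a value:"; fail instead of hanging
    }
    stages {
        stage('Plan') {
            steps { sh 'terraform plan -no-color -out=tfplan' }
        }
    }
}
```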
Using the Docker socket without restricting the agent
The -v /var/run/docker.sock:/var/run/docker.sock argument in the agent block mounts the host Docker socket inside the container. Any code running in the pipeline can use this socket to escape the container and access the host — including creating containers with root access to the host filesystem. Only use Docker socket mounting on trusted pipelines with vetted Dockerfiles. For untrusted code, use Docker-in-Docker (dind) with an isolated daemon instead.
Storing AWS credentials as plain text Jenkins environment variables
Setting environment { AWS_ACCESS_KEY_ID = '...' } directly in the Jenkinsfile stores the credential in the repository in plaintext. Always use withCredentials with credentials stored in the Jenkins Credentials store. The Credentials store encrypts them at rest, masks them in logs, and centralises rotation — one credential update in Jenkins applies to every pipeline that references it.
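The two approaches side by side (the credential ID is illustrative, and the key values are deliberate placeholders):

```groovy
// WRONG: secret committed to the repository in plaintext
environment {
    AWS_ACCESS_KEY_ID     = 'AKIA...'   // visible to anyone with repo access
    AWS_SECRET_ACCESS_KEY = 'wJalr...'  // never masked in logs
}

// RIGHT: referenced by ID, injected at runtime, masked in logs
withCredentials([aws(credentialsId: 'aws-terraform-credentials',
                     accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                     secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
    sh 'terraform plan'
}
```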
Key Jenkins + Terraform checklist
Every Jenkins Terraform pipeline should have:
- A Docker agent with a pinned Terraform image version.
- TF_IN_AUTOMATION=true and TF_INPUT=false to keep output CI-friendly and prevent interactive prompt hangs.
- withCredentials for all AWS access — never in environment variables or the Jenkinsfile itself.
- archiveArtifacts on the tfplan file so the plan is traceable in Jenkins.
- An input step before apply on protected branches.
- cleanWs() in the post block to remove plan files.
- Slack or email notification on failure so the team is alerted without checking Jenkins.
Practice Questions
1. Which environment variable prevents Terraform from hanging on an interactive prompt in a Jenkins pipeline?
2. Which Jenkins pipeline step injects stored credentials as environment variables and automatically masks them in build logs?
3. Which Jenkins pipeline step provides a human approval gate equivalent to a GitHub Environment approval?
Quiz
1. What is the benefit of using a Docker agent with a pinned Terraform image in a Jenkinsfile?
2. Ten teams each maintain their own Terraform Jenkinsfile with the same plan-approve-apply pattern. What Jenkins feature eliminates this duplication?
3. How does a Jenkins Multibranch Pipeline enable deploying different branches to different environments?
Up Next · Lesson 39
Terraform with GitOps
Jenkins pipeline complete. Lesson 39 takes a different philosophy — GitOps, where Git is the single source of truth and the system continuously reconciles actual state to desired state. We cover how Terraform fits into ArgoCD and Flux-based GitOps workflows and where the boundary between GitOps and Terraform belongs.