Jenkins Course
Jenkins in a Cloud Environment
Running Jenkins on a single server worked fine in 2015. Today, most teams run it on AWS, GCP, or Azure — with elastic build agents, managed Kubernetes clusters, and cloud-native storage. This lesson covers how Jenkins integrates with the cloud without losing the control that makes it valuable.
This lesson covers
Jenkins on AWS EC2 → Elastic build agents → S3 for artifact storage → EKS / GKE as agent pools → Cloud credentials and IAM → Jenkins on Kubernetes with Helm → Cloud cost patterns
Moving Jenkins to the cloud introduces two architectural shifts. The first is where the master runs — a VM, a managed Kubernetes cluster, or a cloud-hosted container. The second is how agents are provisioned — statically allocated VMs, auto-scaling EC2 spot instances, or ephemeral Kubernetes pods. Getting these right determines both your build speed and your cloud bill.
The Analogy
Running Jenkins on-premise is like owning your delivery fleet — you control every truck, but you also pay for them when they sit idle overnight. Running Jenkins in the cloud with elastic agents is like using a courier service — you only pay per delivery, the fleet scales with demand, and you never worry about a truck sitting empty on a Sunday. The trade-off is that you need to understand the billing model to avoid surprises.
Where to Run the Jenkins Master in the Cloud
EC2 / VM
Most common
Deploy Jenkins on a dedicated EC2 instance (or equivalent on GCP/Azure). Full control over the OS, JVM settings, and storage. Use EBS for JENKINS_HOME, S3 for artifact storage. The most widely used pattern — familiar, debuggable, and straightforward to operate.
Kubernetes pod
Cloud-native teams
Deploy the Jenkins master as a pod on EKS, GKE, or AKS. JENKINS_HOME backed by a PersistentVolumeClaim. Managed with Helm. Build agents run as Kubernetes pods (Lesson 23). The entire CI/CD infrastructure is part of the cluster — upgraded, scaled, and backed up like any other workload.
Docker / ECS
Containerised
Run Jenkins master in a Docker container on ECS, Cloud Run, or a standalone Docker host. JENKINS_HOME on a persistent volume. Easiest to deploy and upgrade via the custom Dockerfile from Lesson 27. Works well for teams not yet running Kubernetes.
Elastic Build Agents on AWS EC2
The Amazon EC2 plugin lets Jenkins spin up EC2 instances as build agents on demand and terminate them when builds finish. You pay only for the compute you actually use. Spot instances cut costs by up to 70% for batch build workloads.
The scenario:
You're a DevOps engineer at a startup on AWS. Build volume is uneven — quiet most of the day, peak at 9–10 AM when developers arrive and push overnight work. Static agents would sit idle for 80% of the day. You configure Jenkins to launch EC2 spot instances on demand and terminate them when the queue clears.
New terms in this code:
- Amazon EC2 plugin — a Jenkins plugin that integrates with the AWS EC2 API to provision and terminate instances as build agents. Install it from the Plugin Manager.
- AMI (Amazon Machine Image) — a pre-built EC2 image. You build an AMI with Java, Docker, and your build tools pre-installed so agent startup is fast — no installation time on each launch.
- Spot instance — spare EC2 capacity offered at up to 90% discount versus on-demand pricing. Can be reclaimed by AWS with 2-minute warning. Acceptable for build agents — a reclaimed instance means the build fails and is retried, not data loss.
- IAM role — AWS identity and access management role. The Jenkins master needs an IAM role with EC2 permissions to launch and terminate instances. Never use long-lived access keys — use IAM roles attached to the Jenkins EC2 instance instead.
- idle termination minutes — how long an agent can sit idle before Jenkins terminates it. Set to 5–15 minutes — long enough to absorb burst builds without paying for long idle periods.
# JCasC configuration for EC2 elastic agents
# The Amazon EC2 plugin reads this config and provisions agents on demand
# Requires the Jenkins master EC2 instance to have an IAM role with EC2 permissions
jenkins:
  clouds:
    - amazonEC2:
        name: "aws-build-agents"
        region: "eu-west-1"
        # Use IAM role attached to the master instance — no hardcoded AWS keys
        # The Jenkins master's EC2 instance profile must have ec2:RunInstances,
        # ec2:TerminateInstances, ec2:DescribeInstances permissions
        useInstanceProfileForCredentials: true
        templates:
          # Template 1: Standard Linux build agents — on-demand for reliability
          - ami: "ami-0abcdef1234567890" # custom AMI with Java 21 + Docker pre-installed
            description: "Linux build agent — on-demand"
            labelString: "linux docker"
            instanceType: "c6i.xlarge" # 4 vCPU, 8GB RAM
            numExecutors: 2
            idleTerminationMinutes: 10 # terminate after 10 minutes idle
            stopOnTerminate: false # terminate (not stop) — spot-friendly
            deleteRootOnTermination: true # clean up EBS volume on terminate
            subnetId: "subnet-0123456789abcdef" # deploy into private subnet
            securityGroups: "sg-jenkins-agents"
            tags:
              - name: "jenkins-agent"
                value: "true"
              - name: "team"
                value: "platform"
          # Template 2: Spot instances for cost-sensitive batch workloads
          - ami: "ami-0abcdef1234567890"
            description: "Linux build agent — spot (70% cheaper)"
            labelString: "linux docker spot"
            instanceType: "c6i.2xlarge" # 8 vCPU, 16GB RAM
            numExecutors: 4
            idleTerminationMinutes: 5 # terminate faster — spot is cheap, idle is not
            spotConfig:
              spotMaxBidPrice: "0.15" # maximum bid price per hour
              fallbackToOndemand: true # fall back to on-demand if no spot available
            tags:
              - name: "jenkins-agent-spot"
                value: "true"
Where to practice: Install the Amazon EC2 plugin. For the IAM role, create a policy with ec2:RunInstances, ec2:TerminateInstances, ec2:DescribeInstances, ec2:CreateTags and attach it to your Jenkins master instance. To build a custom agent AMI, start from the official Amazon Linux or Ubuntu AMI, install Java 21 and Docker, then use EC2 Image Builder or a Packer template to capture it. Full plugin docs at plugins.jenkins.io/ec2.
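The Packer route mentioned above can be sketched in a few lines of HCL. Everything here is illustrative — the region, source AMI filter, instance type, and package names are assumptions to adapt to your environment:

```hcl
# Minimal Packer template for a Jenkins agent AMI (a sketch, not a drop-in file)
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.2"
    }
  }
}

source "amazon-ebs" "jenkins_agent" {
  region        = "eu-west-1"
  instance_type = "t3.small"
  ssh_username  = "ubuntu"
  ami_name      = "jenkins-agent-{{timestamp}}"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical's AWS account
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.jenkins_agent"]
  # Bake the build tools in so agents boot ready to work
  provisioner "shell" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y openjdk-21-jre-headless git",
      "curl -fsSL https://get.docker.com | sudo sh",
    ]
  }
}
```

Run `packer build` against this file and it launches a temporary instance, provisions it, snapshots it as an AMI, and terminates the instance — the resulting AMI ID is what goes into the JCasC template.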
# Jenkins log when a build triggers and no agents are available:
INFO: No agents with label 'linux docker' available — provisioning from EC2 cloud
INFO: Launching EC2 instance from template: Linux build agent — on-demand
INFO: Instance i-0a1b2c3d4e5f6a7b8 launching in eu-west-1a (c6i.xlarge)
INFO: Waiting for instance to reach running state...
INFO: Instance running. Connecting via SSH to 10.0.4.22
INFO: Agent connected: ec2-agent-i-0a1b2c3d4e5f (labels: linux docker)
[Pipeline] node (ec2-agent-i-0a1b2c3d4e5f)
[Pipeline] { (Checkout) }
...build runs...
[Pipeline] End of Pipeline
Finished: SUCCESS
INFO: Agent ec2-agent-i-0a1b2c3d4e5f idle for 10 minutes — terminating instance
INFO: EC2 instance i-0a1b2c3d4e5f6a7b8 terminated. EBS volume deleted.
What just happened?
- Agent provisioned on demand — when the build triggered and no matching agent was available, Jenkins called the EC2 API and launched an instance from the configured AMI. The build waited ~90 seconds for the instance to start and SSH to become available — then ran immediately.
- No hardcoded AWS credentials — useInstanceProfileForCredentials: true means Jenkins uses the IAM role attached to the master EC2 instance. No access keys in config files, no secrets to rotate, no credential leaks. This is the correct AWS-native authentication pattern.
- Agent terminated after 10 minutes idle — once the build finished and no new builds arrived for 10 minutes, Jenkins terminated the instance and deleted its EBS root volume. The build cost approximately $0.04 in EC2 compute. A static agent of the same instance type running 24/7 would cost $40+ per month.
- Spot template cuts costs further — jobs that use the spot label run on spot instances at ~70% discount. The fallbackToOndemand: true setting means that when no spot capacity is available, Jenkins provisions an on-demand instance instead; and if AWS reclaims a running spot agent, the build simply fails and is retried on a fresh agent — no build is permanently lost.
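The "$0.04 per build versus $40+ per month" comparison is easy to sanity-check. The rate and build duration below are illustrative assumptions, not current AWS pricing — look up the rate for your region before relying on the numbers:

```shell
# Back-of-envelope cost of an elastic agent vs an always-on static agent
RATE_PER_HOUR=0.17     # assumed c6i.xlarge on-demand $/hour — a placeholder, not live pricing
BUILD_MINUTES=4        # one build's duration
IDLE_MINUTES=10        # idleTerminationMinutes from the template

# An elastic agent is billed for the build plus the idle window before termination
billed=$((BUILD_MINUTES + IDLE_MINUTES))
per_build=$(awk -v r="$RATE_PER_HOUR" -v m="$billed" 'BEGIN { printf "%.2f", r * m / 60 }')

# A static agent is billed around the clock, builds or not
monthly_static=$(awk -v r="$RATE_PER_HOUR" 'BEGIN { printf "%.2f", r * 24 * 30 }')

echo "Elastic agent, one build: ~\$${per_build}"
echo "Static agent, 24/7:       ~\$${monthly_static}/month"
```

At the assumed rate, a single build costs a few cents while the always-on equivalent runs to three figures a month — the gap only widens as build volume gets burstier.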
Deploying Jenkins on Kubernetes with Helm
For teams running Kubernetes, deploying the Jenkins master as a Helm chart is the cleanest approach. The official Jenkins Helm chart handles the Deployment, Service, PersistentVolumeClaim, RBAC, and Ingress in one command.
Tools used:
- Helm — a package manager for Kubernetes. Helm charts are pre-packaged Kubernetes manifests with configurable values. The Jenkins chart is the most widely used way to deploy Jenkins on Kubernetes.
- values.yaml — the Helm chart configuration file. You override chart defaults here — replica count, persistence size, plugins to pre-install, JCasC config, resource limits.
- PersistentVolumeClaim — a Kubernetes storage request. Jenkins uses a PVC to persist JENKINS_HOME across pod restarts. Without it, every Jenkins restart loses all configuration.
- controller.installPlugins — a list of plugins to install at startup via the chart. Equivalent to plugins.txt from Lesson 27 — the chart handles installation automatically.
- controller.JCasC — inlines JCasC YAML directly into the Helm values file. Jenkins reads it on startup, applying all configuration without manual UI steps.
# Add the Jenkins Helm repository
helm repo add jenkins https://charts.jenkins.io
helm repo update
# Deploy Jenkins with custom values
helm install jenkins jenkins/jenkins \
--namespace jenkins \
--create-namespace \
--values jenkins-values.yaml \
--wait
# jenkins-values.yaml — Helm chart configuration
controller:
  # Resource limits — size appropriately for your build load
  resources:
    requests:
      cpu: "500m"
      memory: "2Gi"
    limits:
      cpu: "2"
      memory: "4Gi"
  # JVM settings passed to the Jenkins master
  javaOpts: >-
    -Xms2g -Xmx4g
    -XX:+UseG1GC
    -XX:+HeapDumpOnOutOfMemoryError
  # Plugins to install at startup — same principle as plugins.txt (Lesson 27)
  installPlugins:
    - kubernetes:latest
    - workflow-aggregator:latest
    - git:latest
    - credentials-binding:latest
    - configuration-as-code:latest
    - slack:latest
    - prometheus:latest
  # Inline JCasC configuration — applied on first startup
  # This is the jenkins.yaml from Lesson 37, embedded in the Helm values
  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: "Acmecorp Jenkins on EKS — managed by Helm + JCasC"
          numExecutors: 0
      security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
          authorizationStrategy:
            globalMatrix:
              permissions:
                - "hudson.model.Hudson.Administer:admin"
                - "hudson.model.Hudson.Read:authenticated"
                - "hudson.model.Item.Build:authenticated"
                - "hudson.model.Item.Read:authenticated"
      kubernetes-agents: |
        jenkins:
          clouds:
            - kubernetes:
                name: "kubernetes"
                serverUrl: "" # empty = use in-cluster config
                namespace: "jenkins"
                jenkinsUrl: "http://jenkins:8080"
                templates:
                  - name: "default"
                    label: "linux docker"
                    containers:
                      - name: "jnlp"
                        image: "jenkins/inbound-agent:latest"
                      - name: "docker"
                        image: "docker:24-dind"
                        privileged: true
  # Expose Jenkins via an ALB ingress (AWS) — ingress lives under controller in this chart
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "alb"
      alb.ingress.kubernetes.io/scheme: "internet-facing"
      alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:eu-west-1:123456:certificate/abc"
    hosts:
      - host: "jenkins.acmecorp.com"
        paths:
          - path: "/"
            pathType: Prefix
# Persistent storage for JENKINS_HOME
persistence:
  enabled: true
  size: "50Gi"
  storageClass: "gp3" # AWS EBS gp3 storage class
# Kubernetes service account — used by build pods to interact with the cluster
serviceAccount:
  create: true
  name: "jenkins"
$ helm install jenkins jenkins/jenkins \
--namespace jenkins --create-namespace \
--values jenkins-values.yaml --wait
NAME: jenkins
LAST DEPLOYED: Mon Apr 08 10:22:14 2024
NAMESPACE: jenkins
STATUS: deployed
Waiting for Jenkins to be ready...
Jenkins pod jenkins-0 running
Jenkins UI available at: https://jenkins.acmecorp.com
# Verify:
$ kubectl get pods -n jenkins
NAME READY STATUS RESTARTS
jenkins-0 2/2 Running 0
$ kubectl get pvc -n jenkins
NAME STATUS VOLUME CAPACITY STORAGECLASS
jenkins Bound pvc-abc 50Gi gp3
# JCasC applied on startup:
INFO: Configuration as Code loaded from configScripts
✓ jenkins.systemMessage set
✓ securityRealm configured
✓ authorizationStrategy: globalMatrix (4 permissions)
✓ kubernetes cloud registered
✓ 7 plugins installed successfully
What just happened?
- Complete Jenkins deployed in one command — the Helm chart created the Deployment, Service, PersistentVolumeClaim, RBAC roles, ServiceAccount, and Ingress. Jenkins started with all plugins installed and JCasC applied. No setup wizard, no manual configuration.
- Pod shows 2/2 containers — the Jenkins pod runs two containers: the Jenkins master and a config-reload sidecar that watches the JCasC ConfigMaps and reapplies configuration when it changes. Plugin installation runs in an init container before the master container starts, so Jenkins comes up with everything already in place.
- 50GB gp3 PVC bound — JENKINS_HOME is persisted on an AWS EBS gp3 volume. If the Jenkins pod restarts or reschedules to a different node, the volume follows it. Build history, credentials, and configuration are preserved.
- Kubernetes build agents configured inline — the JCasC configScripts block registered the in-cluster Kubernetes cloud. Builds that request the linux docker label will spin up pods in the same cluster — no separate cloud configuration needed.
- HTTPS via ALB — the Ingress annotations configure an AWS Application Load Balancer with a certificate from ACM. Jenkins is reachable at https://jenkins.acmecorp.com over HTTPS from day one, without manually installing certificates.
Cloud Cost Patterns for Jenkins
| Pattern | Savings | When to use |
|---|---|---|
| Spot / preemptible agents | 60–90% | Batch builds, non-critical pipelines, feature branch tests |
| Ephemeral agents (K8s pods) | 40–60% | Any team running on Kubernetes — zero idle cost |
| Short idle termination (5–10 min) | 20–40% | Uneven build load — terminate quickly between bursts |
| ARM instances (Graviton on AWS) | 20–40% | Builds that don't require x86 — Graviton is cheaper per vCPU |
| Custom AMI with pre-baked tools | Time savings | Reduces agent boot time from 5–10 min to 60–90 sec |
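The Graviton row needs almost no new configuration — a sketch of an arm64 variant of the earlier EC2 agent template, where the AMI ID and instance sizing are placeholders (the AMI must be built for arm64, not reused from x86):

```yaml
# Hypothetical arm64 agent template — AMI ID is a placeholder, bake your own for arm64
- ami: "ami-0arm64agent0000000"  # arm64 AMI with Java 21 + Docker pre-installed
  description: "Linux build agent — Graviton (arm64)"
  labelString: "linux docker arm64"
  instanceType: "c7g.xlarge"     # Graviton3: 4 vCPU, 8GB RAM, cheaper per vCPU than c6i
  numExecutors: 2
  idleTerminationMinutes: 10
```

Jobs opt in by requesting the arm64 label, so x86-only builds are unaffected while everything else migrates to the cheaper instances.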
The hidden cloud cost — data transfer
Pulling large Docker images on every build is expensive. A 2GB image pulled 100 times per day across agents is 200GB of data transfer. Solutions: use a private registry in the same region, enable Docker layer caching on agent AMIs, or pin agent AMIs with the most-used base images pre-pulled. Data transfer costs can easily exceed compute costs if ignored.
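One of these mitigations can be baked directly into the agent image's Docker daemon config: point Docker at a pull-through cache running in the same region. A sketch of /etc/docker/daemon.json with a placeholder mirror URL — note that registry-mirrors only applies to Docker Hub pulls, so images from private registries need a region-local registry endpoint instead:

```json
{
  "registry-mirrors": ["https://mirror.internal.example.com"]
}
```

With the mirror in-region, each layer crosses the expensive network path once, and every subsequent agent pull is cheap intra-region traffic.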
Teacher's Note
The Helm + JCasC combination is the gold standard for cloud Jenkins. One helm install command gives you a production Jenkins with everything configured. One helm upgrade updates it. One helm uninstall tears it down cleanly.
Practice Questions
1. In the EC2 plugin JCasC configuration, which setting lets Jenkins authenticate with AWS using the IAM role attached to the master instance — without hardcoding access keys?
2. Which EC2 agent template setting controls how long an idle agent waits before Jenkins terminates its EC2 instance?
3. When running Jenkins on Kubernetes, what Kubernetes resource type must be used to persist JENKINS_HOME across pod restarts?
Quiz
1. Why are EC2 spot instances suitable for Jenkins build agents despite their risk of interruption?
2. In the Jenkins Helm chart, where do you embed JCasC configuration so it is applied automatically when Jenkins first starts?
3. What is the most effective way to reduce the hidden cost of Docker image pulls in a cloud-based Jenkins setup?
Up Next · Lesson 39
Jenkins on Kubernetes
Deep dive into running Jenkins fully within Kubernetes — pod agents at scale, RBAC, persistent storage, autoscaling, and the production patterns that keep it stable.