Jenkins Course
Why Jenkins is Used
Manual deployments don't fail occasionally — they fail predictably, at the worst possible times, by the most tired possible person. This lesson is about why that pattern exists and exactly how Jenkins breaks it.
Let's paint the picture that most engineering teams know intimately. It's 5:45 PM on a Thursday. The sprint ends tomorrow. Three features need to go to production tonight. The senior dev who "knows how the deploy works" is on a call. Someone else tries to do it from memory. They forget to set an environment variable. The app boots, connects to the wrong database, and starts silently writing bad data. Nobody notices until 2 AM.
That story is not a horror story. That story is a Tuesday at thousands of software companies. And it is almost entirely preventable.
The Checklist That Lives in One Person's Head
Here's the core problem with manual processes: the procedure only exists in someone's head — or at best, a Confluence page that's six months out of date. Every manual deploy is an act of trust that the right person remembers all the steps, in the right order, on a day when they might be distracted, rushed, or just plain tired.
Think about the pre-flight checklist used by airline pilots. A 747 captain with 20,000 hours of flight time still reads through the checklist before every single takeoff. Not because they're forgetful — because checklists remove human error from high-stakes repetitive processes. Jenkins is that checklist for your software delivery, except it runs itself.
The Core Idea
Jenkins replaces tribal knowledge with documented, executable process. The deploy procedure stops living in someone's head and starts living in version-controlled code that runs the same way every time.
Before Jenkins vs. After Jenkins
The difference isn't subtle. Here's what the same deploy workflow looks like on both sides of the line:
BEFORE — The Pain
- Dev SSHs into the server manually
- Runs tests locally — or skips them
- Deploy steps differ person to person
- No record of what changed or when
- Rollback means calling the senior dev at midnight
- New team member can't deploy alone for weeks
AFTER — The Fix
- Push to main → Jenkins takes over automatically
- Tests always run — no option to skip
- Deploy steps are identical every single time
- Full audit log: who triggered what, when, result
- Rollback is a single button click on a previous build
- New team member can trigger a deploy on day one
The Five Real Reasons Teams Choose Jenkins
When engineering leaders at larger organisations make the decision to run Jenkins, it usually comes down to one or more of these five reasons:
1. Speed of delivery. A team doing manual deploys might ship once a week because deployments are painful and risky. With Jenkins automating the pipeline, that same team can safely ship multiple times a day. The automation is the safety net that makes frequent releases possible.
2. Consistency. Humans make different decisions under pressure. Jenkins doesn't. The pipeline runs the same steps whether it's triggered at 9 AM on a Monday by the CTO or at 11 PM on a Friday by a junior developer. Same tests. Same checks. Same deploy procedure.
3. Fast feedback loops. Without CI, a developer might write broken code on Monday and not find out until code review on Wednesday. With Jenkins running tests automatically on every push, they find out in four minutes. The earlier you catch a bug, the cheaper it is to fix.
4. Audit trail. Jenkins keeps a record of every build: who triggered it, what commit it ran against, how long it took, whether it passed or failed. When a compliance team asks "what was deployed to production on March 14th and who approved it?" — Jenkins has the answer.
5. Flexibility and control. Unlike hosted CI/CD services, Jenkins runs on your infrastructure. You control where the builds happen, what tools are installed, what data leaves your network. For regulated industries — finance, healthcare, defence — this isn't optional. It's a requirement.
What a Jenkins-Powered Workflow Actually Looks Like
Here's the typical automated workflow Jenkins enables for a backend service. Every step happens without a human touching anything after the initial code push:
1. A developer pushes code to the main branch
2. Jenkins detects the new commit and starts the pipeline
3. The latest code is checked out onto a build agent
4. The full test suite runs
5. A deployable artifact is built
6. The artifact is deployed to the staging environment
7. The team is notified of the result, for example in Slack
One git push triggers the entire chain. If any stage fails, the pipeline stops and the team is notified immediately — nothing broken reaches staging.
That entire flow — from push to Slack notification — typically takes two to five minutes for a medium-sized service. Compare that to a manual deploy that blocks a senior engineer for thirty minutes and carries a real risk of human error.
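As a sketch, that chain maps onto Declarative pipeline stages like these. The stage names, Gradle tasks, and deploy script are illustrative assumptions, not a real project's setup; the lesson builds the first stages for real below.

```groovy
// Illustrative outline of the full chain. Every stage body here is a
// placeholder assumption standing in for a real project's commands.
pipeline {
    agent any
    stages {
        stage('Checkout') { steps { checkout scm } }            // pull the code
        stage('Build')    { steps { sh './gradlew assemble' } } // compile and package
        stage('Test')     { steps { sh './gradlew test' } }     // full test suite
        stage('Deploy to Staging') {
            steps { sh './scripts/deploy.sh staging' }          // hypothetical script
        }
    }
    post {
        // If a stage failed, the pipeline already stopped there;
        // either way, the team hears about the result immediately.
        always { echo "Result: ${currentBuild.currentResult}" }
    }
}
```

The point of the outline is the shape, not the commands: each stage is a gate, and nothing after a failed gate ever runs.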
Now let's see what the beginning of that pipeline looks like in a real Jenkinsfile — the file that defines the automated workflow as code.
The scenario:
You're a DevOps engineer at a 40-person e-commerce company. The checkout service currently gets deployed manually — one specific developer does it every time, and when they're on holiday, releases pile up for a week. You've been asked to automate it. This is the Jenkinsfile you'd start with: a simple pipeline that checks out the code and runs the test suite automatically on every push to the main branch.
// Declarative pipeline — the recommended modern syntax for Jenkins
pipeline {
    // 'any' means Jenkins can run this on whatever agent is available.
    // In production you'd pin this to a specific label, e.g. agent { label 'linux' }
    agent any

    // Triggers: poll the repository so the pipeline starts automatically
    // when new commits land on main
    triggers {
        pollSCM('H/5 * * * *') // check the repo every 5 minutes for new commits
    }

    // Define pipeline-wide environment variables
    environment {
        APP_NAME      = 'checkout-service' // used in log messages and artifact names
        DEPLOY_ENV    = 'staging'          // target environment for this pipeline
        SLACK_CHANNEL = '#deployments'     // where build notifications will be sent
    }

    stages {
        // Stage 1: pull the latest code from the repository
        stage('Checkout') {
            steps {
                // checkout scm pulls the branch that triggered this build
                checkout scm
                echo "Checked out ${APP_NAME} — building for ${DEPLOY_ENV}"
            }
        }

        // Stage 2: run the full test suite — no manual skipping allowed
        stage('Test') {
            steps {
                // ./gradlew test runs unit tests via the Gradle build tool
                sh './gradlew test'
            }
        }
    }

    // Post block: actions that run after all stages complete
    post {
        // 'failure' runs only if one of the stages above failed
        failure {
            echo "Pipeline failed — ${APP_NAME} build did not pass tests"
        }
        // 'success' runs only if every stage passed
        success {
            echo "All tests passed for ${APP_NAME} — ready to build artifact"
        }
    }
}
A push to main now triggers this pipeline and produces console output like the following:
Started by SCM change
[Pipeline] Start of Pipeline
[Pipeline] node
Running on agent-linux-01 in /var/jenkins_home/workspace/checkout-service
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Checkout)
[Pipeline] checkout
Cloning repository https://github.com/acmecorp/checkout-service.git
> git fetch origin main
> git checkout main
[Pipeline] echo
Checked out checkout-service — building for staging
[Pipeline] }
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ ./gradlew test
Starting a Gradle Daemon...
> Task :test
BUILD SUCCESSFUL in 38s
23 tests completed, 0 failed
[Pipeline] }
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
All tests passed for checkout-service — ready to build artifact
[Pipeline] }
[Pipeline] End of Pipeline
Finished: SUCCESS
What just happened?
The pipeline { } block is the outermost container for a Declarative Jenkins pipeline — everything lives inside it. The agent any line tells Jenkins "run this on any available machine"; you'll learn to be more specific about this in Section II.

The triggers { pollSCM(...) } block is what makes this automated: Jenkins checks the Git repository every 5 minutes and fires the pipeline when it sees a new commit on main, so a push starts a build within five minutes at most. The H/5 * * * * argument is a cron expression; the H is a Jenkins-specific hash that adds a stable random offset so every job doesn't fire at exactly the same second and hammer the server.

The post block runs cleanup or notification actions after the stages complete, regardless of which branch the code took. "Started by SCM change" in the console output is the key line — it tells you this build was triggered by a code push, not by a human clicking a button. That's the entire point of this lesson in one line of output.
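Notice that the SLACK_CHANNEL variable defined in the environment block isn't used yet; the post block only echoes. As a sketch of what real notifications could look like, assuming the Slack Notification plugin is installed and connected to your workspace (an assumption, not part of this lesson's setup), the post block could grow into something like this, with illustrative message text:

```groovy
// Sketch only: requires the Slack Notification plugin (the slackSend step)
// and a configured Slack integration, which this lesson does not cover.
post {
    failure {
        slackSend(
            channel: env.SLACK_CHANNEL,
            color: 'danger',
            message: "FAILED: ${env.APP_NAME} build #${env.BUILD_NUMBER} did not pass tests"
        )
    }
    success {
        slackSend(
            channel: env.SLACK_CHANNEL,
            color: 'good',
            message: "Passed: ${env.APP_NAME} build #${env.BUILD_NUMBER}, ready to build artifact"
        )
    }
}
```

BUILD_NUMBER is one of the environment variables Jenkins sets automatically for every build, which is why it appears here without being defined in the environment block.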
The scenario:
It's two weeks later. The automated pipeline is running, but your manager wants to know: how often is the pipeline actually passing? Are we catching failures early, or are broken builds sitting unnoticed? You need to pull a quick build history report from the Jenkins REST API to answer that question.
# Pull recent build results for the checkout-service job via the Jenkins REST API
# This gives you an instant health snapshot without opening the browser
# (-g stops curl from interpreting the [] and {} in the tree filter)
curl -sg -u admin:your-api-token \
  "http://jenkins-master-01:8080/job/checkout-service/api/json?tree=builds[number,result,duration,timestamp]{0,10}" \
  | python3 -m json.tool
{
    "builds": [
        { "number": 42, "result": "SUCCESS", "duration": 38420, "timestamp": 1710758400000 },
        { "number": 41, "result": "SUCCESS", "duration": 41100, "timestamp": 1710672000000 },
        { "number": 40, "result": "FAILURE", "duration": 12300, "timestamp": 1710585600000 },
        { "number": 39, "result": "SUCCESS", "duration": 39800, "timestamp": 1710499200000 },
        { "number": 38, "result": "SUCCESS", "duration": 37650, "timestamp": 1710412800000 }
    ]
}
What just happened?
Every Jenkins job exposes its data over HTTP at the /api/json endpoint, and the tree query parameter filters the response down to just the fields you ask for: here, up to ten builds ({0,10}) with four fields each. number is the sequential build number Jenkins assigns — build #40 failed while the builds before and after it passed, which tells you the failure was isolated and fixed quickly. result is the final state: SUCCESS, FAILURE, ABORTED, or UNSTABLE. duration is in milliseconds — build #40 ran for only 12 seconds before failing, a signal that it failed fast (probably a compilation error or an early test failure, not a flaky integration test). timestamp is epoch time in milliseconds. This kind of audit trail is what makes Jenkins valuable beyond just running tests — it gives you a searchable, queryable history of every automated action in your delivery pipeline.
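To answer the manager's question directly, a few lines of Python can turn those build records into a health summary. This is a sketch that embeds the sample records from the output above; in practice you'd feed it the live JSON response instead.

```python
# Health report from Jenkins build records. The five records below are the
# sample data from the build-history output above; field names and units
# (duration in ms, timestamp in epoch ms) match Jenkins' JSON API.
from datetime import datetime, timezone

builds = [
    {"number": 42, "result": "SUCCESS", "duration": 38420, "timestamp": 1710758400000},
    {"number": 41, "result": "SUCCESS", "duration": 41100, "timestamp": 1710672000000},
    {"number": 40, "result": "FAILURE", "duration": 12300, "timestamp": 1710585600000},
    {"number": 39, "result": "SUCCESS", "duration": 39800, "timestamp": 1710499200000},
    {"number": 38, "result": "SUCCESS", "duration": 37650, "timestamp": 1710412800000},
]

passed = [b for b in builds if b["result"] == "SUCCESS"]
pass_rate = 100 * len(passed) / len(builds)                    # 4 of 5 -> 80.0
avg_secs = sum(b["duration"] for b in passed) / len(passed) / 1000

for b in builds:
    # Jenkins timestamps are epoch milliseconds, so divide by 1000 first
    when = datetime.fromtimestamp(b["timestamp"] / 1000, tz=timezone.utc)
    print(f"#{b['number']}  {b['result']:<8}  {b['duration'] / 1000:5.1f}s  {when:%Y-%m-%d}")

print(f"Pass rate: {pass_rate:.0f}% | average successful build: {avg_secs:.1f}s")
```

Build #40's short duration stands out immediately in a report like this — exactly the fail-fast signal described above.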
Watch Out
A pipeline that is always green is not always a healthy sign. If your tests never catch anything, it might mean the tests aren't actually testing the right things — or that developers have learned to push only safe, trivial commits to avoid a red build. A good pipeline catches real bugs. Occasional failures are a feature, not a flaw.
Teacher's Note
The teams that benefit most from Jenkins aren't the ones with the most complex pipelines — they're the ones who automate the thing that's currently causing the most pain. Start there.
Practice Questions
1. What is the name of the file you create in your repository to define a Jenkins pipeline as code?
2. What block in a Declarative pipeline defines actions that run after all stages complete?
3. What Jenkins trigger directive tells Jenkins to check the repository on a schedule for new commits?
Quiz
1. Which of these is a key benefit of automating your pipeline with Jenkins?
2. In the build history output, which result value indicates that a pipeline stage did not complete successfully?
3. Why does automating a deployment pipeline improve reliability compared to a manual process?
Up Next · Lesson 3
Jenkins Architecture
How Jenkins is actually structured under the hood — and why that structure determines everything about how you scale it.