Jenkins Lesson 36 – Jenkins as Code | Dataplexa
Section IV · Lesson 36

Jenkins as Code

Everything you've configured by clicking through the Jenkins UI lives in a black box. If the server dies, that configuration is gone. Jenkins as Code means treating your pipelines, job definitions, and shared logic the same way you treat application code — versioned, reviewed, reproducible.

This lesson covers

Pipeline as code → Jenkinsfile in version control → Job DSL for generating jobs programmatically → Shared libraries for reusable pipeline logic → The full Jenkins-as-code stack and how the pieces fit together

Jenkins as Code is not a single feature or plugin. It's a philosophy — the idea that nothing important about your CI/CD system should exist only in a UI form that can't be versioned, diffed, or recovered. This lesson covers the three layers of that philosophy: pipeline as code (Jenkinsfiles), job generation as code (Job DSL), and shared pipeline logic as code (shared libraries).

The Analogy

A Jenkins server configured only through the UI is like a custom spreadsheet that lives on one person's laptop. It works fine — until the laptop gets stolen, the hard drive fails, or you need a second copy. Storing your Jenkins configuration as code in Git is like moving that spreadsheet to a shared, versioned document. Anyone can see it, anyone can rebuild it, and every change is tracked with a timestamp and a name.

The Three Layers of Jenkins as Code

Layer 1

Pipeline as Code — Jenkinsfile

The pipeline definition lives in a Jenkinsfile in the same repository as the application code, as covered throughout Section II. Every change to the pipeline goes through code review. The pipeline history is the Git history.

Layer 2

Job Generation as Code — Job DSL

Job DSL is a Jenkins plugin that lets you write Groovy scripts that create jobs. Instead of clicking "New Item" 30 times, you write one script that generates all 30 jobs with consistent naming, folders, and triggers. When a new service is added, running the script creates its jobs automatically.

Layer 3

Shared Pipeline Logic — Shared Libraries

Common pipeline steps (Docker builds, Kubernetes deploys, Slack notifications) live in a dedicated shared library repository. Jenkinsfiles import them with @Library. Changes to shared logic propagate to all pipelines without touching individual Jenkinsfiles.

Layer 1 — Pipeline as Code in Depth

You've written Jenkinsfiles throughout this course. This section focuses on the practices that make pipeline-as-code work properly in a team — not just the syntax.

Jenkinsfile location

Always in the root of the application repository. Not in a separate "jenkins" repo. Not in JENKINS_HOME. The pipeline that builds a service lives alongside the service it builds.

Jenkinsfile in code review

Pipeline changes go through pull requests like any other code change. A bad pipeline change that gets merged is visible in Git history and can be reverted with git revert. A bad pipeline change made through the UI is invisible.

Jenkinsfile stays simple

The Jenkinsfile orchestrates — it doesn't implement. Complex logic belongs in shell scripts or shared library functions. A 30-line Jenkinsfile that calls shared library functions is better than a 300-line Jenkinsfile doing everything itself.
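A sketch of that split (the script path is an illustrative assumption): the stage stays a one-line orchestration call, while the real build logic lives in a shell script that can be reviewed and tested outside Jenkins.

```groovy
// Orchestration only: the Jenkinsfile names the step...
stage('Build') {
    steps {
        sh './scripts/build.sh'   // ...and the implementation lives here,
                                  // versioned and runnable outside Jenkins
    }
}
```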

Branch strategy drives pipeline behaviour

The Jenkinsfile uses when { branch } conditions to behave differently on feature branches vs main vs release branches. One file, multiple behaviours — covered in Lesson 21.
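The Lesson 21 pattern in miniature — one Jenkinsfile whose deploy stages gate on the branch (stage names and deploy targets are illustrative):

```groovy
stage('Deploy to Staging') {
    when { branch 'main' }                 // runs only on the main branch
    steps { sh './deploy.sh staging' }
}
stage('Deploy to Production') {
    // GLOB comparator matches release/1.0, release/2.3, ...
    when { branch pattern: 'release/*', comparator: 'GLOB' }
    steps { sh './deploy.sh production' }
}
```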

Layer 2 — Job DSL: Generating Jobs Programmatically

Multibranch Pipelines handle the per-branch job creation automatically. But for teams managing many services, folders, and views, the Job DSL plugin lets you generate the entire job structure from a script — no clicking required.

The scenario:

You're a platform engineer at a company with 15 microservices, each needing a Multibranch Pipeline job inside a team folder. You've been creating these manually. A new team is onboarding next Monday with 5 new services. You need a script that generates all jobs from a list — run it once and everything appears in Jenkins, correctly configured.

New terms in this code:

  • Job DSL plugin — a Jenkins plugin that provides a Groovy-based DSL (Domain Specific Language) for defining jobs. A seed job runs the DSL script and creates or updates all the jobs it defines.
  • seed job — a regular Jenkins Pipeline job whose only purpose is to run the Job DSL script. When you need to add or change jobs, update the DSL script and re-run the seed job.
  • multibranchPipelineJob() — a Job DSL method that creates a Multibranch Pipeline job. Takes the job path (including folder) as its argument.
  • branchSources { github() } — configures the branch source for the Multibranch Pipeline. Tells Jenkins which GitHub repo to scan for branches.
  • folder() — a Job DSL method that creates a Jenkins folder. Creates it if it doesn't exist, leaves it untouched if it does.
// jobs/seed.groovy — the Job DSL script
// This file lives in a Git repository and is run by a Jenkins seed job
// Running this script creates or updates all the jobs it defines

// Define all services and their team owners
// Adding a new service = adding one line to this list
def services = [
    [team: 'payments',  repo: 'checkout-service'],
    [team: 'payments',  repo: 'payment-api'],
    [team: 'payments',  repo: 'fraud-detection'],
    [team: 'frontend',  repo: 'web-app'],
    [team: 'frontend',  repo: 'mobile-app'],
    [team: 'platform',  repo: 'api-gateway'],
    [team: 'platform',  repo: 'auth-service'],
]

// Get unique team names to create folders
def teams = services.collect { it.team }.unique()

// Create a folder for each team
teams.each { teamName ->
    folder(teamName) {
        description("Jobs for the ${teamName} team")
        // Folder-level properties — e.g. views, health metrics
        properties {
            // Restrict who can see this folder (uses Project-based matrix auth)
            authorizationMatrix {
                inheritanceStrategy { nonInheriting() }
                entries {
                    user {
                        name("admin")
                        permissions(["hudson.model.Hudson.Administer"])
                    }
                }
            }
        }
    }
}

// Create a Multibranch Pipeline job for each service inside its team folder
services.each { service ->
    def jobPath = "${service.team}/${service.repo}"

    multibranchPipelineJob(jobPath) {
        description("CI/CD pipeline for ${service.repo}")

        // Where to find branches — GitHub in this case
        branchSources {
            github {
                id("${service.repo}-source")
                repoOwner("acmecorp")
                repository(service.repo)
                // Use a stored Jenkins credential for GitHub API access
                credentialsId("github-token")
                // Discover branches, PRs, and tags
                traits {
                    gitHubBranchDiscovery { strategyId(1) }
                    gitHubPullRequestDiscovery { strategyId(2) }
                }
            }
        }

        // How often to scan for new branches
        triggers {
            periodicFolderTrigger { interval('5m') }  // scan every 5 minutes
        }

        // Orphaned branch strategy — clean up deleted branches after 7 days
        orphanedItemStrategy {
            discardOldItems {
                daysToKeep(7)
                numToKeep(10)
            }
        }
    }
    println "Created job: ${jobPath}"
}

Where to practice: Install the Job DSL plugin from Manage Jenkins → Plugin Manager. Create a new Freestyle job called seed-job (Process Job DSLs is a Freestyle build step, so a Pipeline job won't offer it). In the job config, add Build Step → Process Job DSLs and paste the script. Run the job — your folders and Multibranch Pipelines appear automatically. Full Job DSL reference at jenkinsci.github.io/job-dsl-plugin.
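The Job DSL plugin also ships a jobDsl Pipeline step, so the seed job can itself live in Git as a small Pipeline instead of being configured in the UI — a sketch, assuming the repo layout implied by the jobs/seed.groovy path:

```groovy
// Jenkinsfile for a Pipeline-based seed job (illustrative layout)
pipeline {
    agent any
    stages {
        stage('Generate Jobs') {
            steps {
                checkout scm                        // pulls the repo containing jobs/seed.groovy
                jobDsl targets: 'jobs/seed.groovy',  // run the DSL script
                       removedJobAction: 'DELETE'    // delete jobs removed from the script
            }
        }
    }
}
```

With removedJobAction set to DELETE, removing a service from the list and re-running the seed job also removes its job from Jenkins, keeping Git as the single source of truth.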

Processing DSL script jobs/seed.groovy
Creating folder: payments
Creating folder: frontend
Creating folder: platform
Created job: payments/checkout-service
Created job: payments/payment-api
Created job: payments/fraud-detection
Created job: frontend/web-app
Created job: frontend/mobile-app
Created job: platform/api-gateway
Created job: platform/auth-service

Triggering branch scans for all Multibranch pipelines...
payments/checkout-service: scanning... found 4 branches
payments/payment-api: scanning... found 2 branches
payments/fraud-detection: scanning... found 3 branches
frontend/web-app: scanning... found 6 branches
frontend/mobile-app: scanning... found 2 branches
platform/api-gateway: scanning... found 3 branches
platform/auth-service: scanning... found 5 branches

Finished: SUCCESS

What just happened?

  • 3 folders and 7 jobs created in one run — what would have taken 30+ minutes of clicking through the UI happened in seconds. The seed job ran the DSL script, Jenkins applied every definition, and all seven Multibranch Pipelines immediately started scanning their repositories for branches.
  • Idempotent by design — run the seed job again and it updates existing jobs to match the script. If a job already exists and nothing changed, it's left alone. If you add a service to the list and re-run, only the new job is created. This makes the seed job safe to run repeatedly.
  • Adding the new team on Monday — add their five services to the services list, commit, push, run the seed job. Five Multibranch Pipelines appear in a new folder, all correctly configured, in under a minute. No manual steps.
  • Branch scans triggered automatically — each new Multibranch Pipeline immediately scanned its GitHub repository and found all existing branches. Those branches now have their own pipeline sub-jobs, each ready to build on the next push.
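That Monday onboarding reduces to a one-commit change to the services list (the team and repo names below are hypothetical):

```groovy
def services = [
    // ...existing services unchanged...
    // hypothetical new team, added in one commit
    [team: 'search', repo: 'indexer-service'],
    [team: 'search', repo: 'query-api'],
    [team: 'search', repo: 'ranking-service'],
    [team: 'search', repo: 'crawler'],
    [team: 'search', repo: 'suggest-api'],
]
```

Re-running the seed job then creates the search folder and its five Multibranch Pipelines, with no other jobs touched.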

Layer 3 — Shared Libraries: Reusable Pipeline Logic

The third layer is shared libraries — a Git repository of Groovy code that any Jenkinsfile in the organisation can import. This is where your standard Docker build, Kubernetes deploy, Slack notification, and security scan logic lives.

A minimal Jenkinsfile using the shared library — the full pipeline in about 25 lines

// Jenkinsfile in checkout-service — using the shared library
// Everything team-specific is expressed here
// Everything standard is in the library
@Library('acme-pipelines@v2.1.0') _

pipeline {
    agent { label 'linux && docker' }

    options {
        timeout(time: 30, unit: 'MINUTES')
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }

    stages {
        stage('Checkout') { steps { checkout scm } }
        stage('Test')     { steps { sh './gradlew test' }
                            post  { always { junit 'build/test-results/**/*.xml' } } }
        stage('Build')    { when { branch 'main' }
                            steps { buildDockerImage(appName: 'checkout-service') } }
        stage('Deploy')   { when { branch 'main' }
                            steps { deployToKubernetes(app: 'checkout-service', namespace: 'staging') } }
    }

    post {
        // Declarative pipelines allow each post condition only once,
        // so both cleanup steps share a single always block
        always {
            notifySlack(appName: 'checkout-service')
            cleanWs()
        }
    }
}
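The library side of one of those calls might look like the sketch below — a vars/ global variable in the shared library repo. The registry host and credential ID are illustrative assumptions, and the Docker Pipeline plugin provides docker.withRegistry and docker.build:

```groovy
// vars/buildDockerImage.groovy in the shared library (sketch)
// Called from Jenkinsfiles as: buildDockerImage(appName: 'checkout-service')
def call(Map config) {
    def appName = config.appName
    def tag     = "${env.BUILD_NUMBER}"           // tag each image with the build number
    // registry host and credential ID below are hypothetical
    docker.withRegistry('https://registry.acmecorp.example', 'registry-creds') {
        def image = docker.build("${appName}:${tag}")
        image.push()            // push the build-numbered tag
        image.push('latest')    // also move the latest tag
    }
}
```

Because every service calls the same step, a change here — say, adding an image scan before the push — rolls out to all pipelines on their next build, with no edits to individual Jenkinsfiles.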

How the Three Layers Fit Together

  • Layer 3 — Shared Library: the acme-pipeline-library Git repo (vars/ · src/), pulled into every pipeline through the @Library import.
  • Layer 1 — Jenkinsfile: lives in each service repo and composes the library's steps into that service's pipeline.
  • Layer 2 — Job DSL: the seed job creates and manages the Jenkins jobs that run those Jenkinsfiles.
The shared library provides building blocks → Jenkinsfiles use them to define service pipelines → Job DSL creates and manages the jobs in Jenkins that run those Jenkinsfiles.

Teacher's Note

You don't need all three layers from day one. Start with Jenkinsfiles in Git (Layer 1). Add a shared library when you notice copy-paste across three or more Jenkinsfiles (Layer 3). Add Job DSL when you're managing 10+ services and manual job creation has become a bottleneck (Layer 2).

Practice Questions

1. In the Job DSL pattern, what is the name for the Jenkins job that runs the DSL script and creates or updates all the other jobs?



2. Which Job DSL method creates a Jenkins folder to organise jobs under a team or domain name?



3. What property of a Job DSL seed job means it is safe to run multiple times — creating only new jobs and updating existing ones without duplicating anything?



Quiz

1. What is the key advantage of storing a Jenkinsfile in the application repository rather than configuring the pipeline through the Jenkins UI?


2. A new team joins with 8 microservices. Which Jenkins-as-code layer lets you create all 8 Multibranch Pipeline jobs and their folders in one script run?


3. What is the recommended order for adopting the three layers of Jenkins as Code?


Up Next · Lesson 37

Configuration as Code (JCasC)

Pipelines as code, jobs as code — now the master itself as code. JCasC lets you define security settings, credentials, agents, and plugins in a single YAML file. Rebuild a production Jenkins in minutes.