Jenkins Lesson 25 – Pipeline Best Practices | Dataplexa
Section II · Lesson 25

Pipeline Best Practices

Section II ends here. Fifteen lessons of pipeline syntax, stages, steps, triggers, credentials, Docker, Kubernetes, and failure handling. This lesson is the distillation — the habits and rules that make everything you've built hold up under real-world pressure.

This lesson covers

The 12 pipeline best practices — structure, performance, security, maintainability, and the anti-patterns that turn a good pipeline into a maintenance nightmare

The difference between a pipeline that works on day one and one that's still working cleanly two years later isn't raw technical skill. It's habits. Small decisions made consistently — how you name things, where you put logic, how you handle secrets, how you structure stages — compound over time. This lesson names the habits worth forming.

Each practice comes with the reasoning behind it. Understanding why matters more than memorising the rule — because the why helps you make the right call in situations this list doesn't cover.

1. Keep the Jenkinsfile Simple — Put Logic in Scripts

A Jenkinsfile that's 800 lines of Groovy is a pipeline no one wants to maintain. The Jenkinsfile should read like a recipe — it describes the steps at a high level. Complex logic (data processing, dynamic stage generation, conditional trees) belongs in shell scripts or shared library functions that the Jenkinsfile calls.

❌ Jenkinsfile as the whole application

200 lines of Groovy inside script{} blocks, nested conditionals, data parsing, string manipulation — all inside the Jenkinsfile.

✅ Jenkinsfile as the orchestrator

sh './scripts/build.sh', sh "./scripts/deploy.sh ${ENV}" — the logic lives in versioned shell scripts. The Jenkinsfile stays readable. (Note the double quotes on the second call — Groovy only interpolates ${ENV} inside double-quoted strings.)
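Here's a minimal sketch of the orchestrator style. The script paths and the ENV parameter are illustrative — substitute your own:

```groovy
// Orchestrator-style Jenkinsfile — the "recipe", not the application
pipeline {
    agent any
    parameters {
        choice(name: 'ENV', choices: ['staging', 'production'], description: 'Deploy target')
    }
    stages {
        stage('Build') {
            // All compile/package logic lives in the versioned script
            steps { sh './scripts/build.sh' }
        }
        stage('Deploy') {
            // Double quotes so Groovy interpolates the parameter into the command
            steps { sh "./scripts/deploy.sh ${params.ENV}" }
        }
    }
}
```

When the build logic changes, you edit build.sh — and that change is reviewed, versioned, and testable on a laptop without touching Jenkins.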

2. Fail Fast — Put the Cheapest Checks First

If a commit has a syntax error, you want to know in 30 seconds, not after a 10-minute Docker build. Order your stages so the fastest, cheapest checks run first. A developer gets faster feedback, and a failing build wastes the minimum amount of compute.

Recommended stage order:

Checkout
Lint / Static Analysis
Unit Tests
Build
Integration Tests
Deploy

Slowest steps (integration tests, E2E, deploys) run last — only after fast checks pass.
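As a skeleton, that ordering looks like this — the commands are placeholders for whatever your build tool uses:

```groovy
// Fail-fast stage order: cheapest checks first, slowest work last
pipeline {
    agent any
    stages {
        stage('Checkout')          { steps { checkout scm } }
        stage('Lint')              { steps { sh './gradlew checkstyle' } }      // seconds
        stage('Unit Tests')        { steps { sh './gradlew test' } }            // a minute or two
        stage('Build')             { steps { sh './gradlew assemble' } }
        stage('Integration Tests') { steps { sh './gradlew integrationTest' } } // the slow part
        stage('Deploy')            { steps { sh './scripts/deploy.sh staging' } }
    }
}
```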

3. Never Store Secrets in the Jenkinsfile

This was covered in Lesson 18 but it's important enough to repeat in the best practices list. A Jenkinsfile committed to Git carries its history forever. A hardcoded API key, password, or token in a Jenkinsfile is a permanent security risk — even if you remove it in the next commit.

The rule: Every secret comes from credentials('id') or withCredentials(). If it's not in the Jenkins credential store, it shouldn't be in the pipeline.
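Both forms look like this in practice. The credential ID 'api-token-id' and the URL are placeholders — use whatever ID you registered in the credential store:

```groovy
// Option 1: environment-level binding — available to every stage
environment {
    API_TOKEN = credentials('api-token-id')
}

// Option 2: scoped binding — the secret exists only inside this block
stage('Smoke Test') {
    steps {
        withCredentials([string(credentialsId: 'api-token-id', variable: 'API_TOKEN')]) {
            // Single quotes: the shell reads $API_TOKEN — Groovy never interpolates the secret
            sh 'curl -fsS -H "Authorization: Bearer $API_TOKEN" https://api.example.com/health'
        }
    }
}
```

Prefer the scoped form when only one stage needs the secret — the smaller the secret's lifetime, the smaller the blast radius.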

4. Always Set a Pipeline Timeout

A hung pipeline ties up an agent executor indefinitely. Without a timeout, a network hiccup or a deadlocked process silently blocks your CI system until someone notices — which might be the next morning. Every pipeline should have a global timeout in its options { } block.

options {
    timeout(time: 30, unit: 'MINUTES')  // kill the pipeline if it runs longer than this
    buildDiscarder(logRotator(numToKeepStr: '20'))
    timestamps()
    disableConcurrentBuilds()
}

5. Use Parallel Stages for Independent Work

If unit tests, integration tests, and static analysis don't depend on each other, there's no reason to run them sequentially. Running them in parallel drops the total time from the sum of all three stages to the duration of the slowest one — often saving 3–5 minutes per build. On a team that pushes code 20 times a day, that's an hour of developer waiting time saved every day.

stage('Run All Checks in Parallel') {
    parallel {
        stage('Unit Tests')        { steps { sh './gradlew test' } }
        stage('Integration Tests') { steps { sh './gradlew integrationTest' } }
        stage('Static Analysis')   { steps { sh './gradlew checkstyle pmd' } }
    }
}

6. Always Clean the Workspace

Without workspace cleanup, every build leaves files behind on the agent. Over weeks of active development, agent disks fill up and builds start failing with "No space left on device" at 3 AM. Put cleanWs() in post { always { } } — it runs whether the build passed or failed.

post {
    always {
        cleanWs()  // deletes the workspace after every build, pass or fail
    }
}

7. Publish Test Results — Always, Even on Failure

If tests fail and you don't publish the results, you've lost the most important diagnostic information at the exact moment you need it most. Use a stage-level post { always { junit '...' } } block — not a pipeline-level one. That way, results are published immediately when the test stage finishes, even if a subsequent deploy stage fails.

stage('Test') {
    steps { sh './gradlew test' }
    post {
        always {
            // publish test results even if tests failed — you need this most when they do
            junit allowEmptyResults: true, testResults: 'build/test-results/**/*.xml'
        }
    }
}

8. Pin Your Docker Image Tags — Never Use latest

Using image: 'node:latest' as your build environment means your pipeline silently changes behaviour every time the Node.js team releases a new version. Pin to a specific tag — node:20-alpine — so your build environment only changes when you consciously decide to upgrade it.

❌ Unpinned

image: 'node:latest'

✅ Pinned

image: 'node:20.11-alpine3.19'
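In a Declarative agent block, the pinned form looks like this:

```groovy
// Pinned build environment — it only changes when you edit this line deliberately
agent {
    docker {
        image 'node:20.11-alpine3.19'
    }
}
```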

9. Notify on State Changes — Not Every Build

This was covered in Lesson 20 but belongs in the best practices list. Notifying on every passing build trains the team to ignore notifications. Notifying only on failures and recoveries means every notification is actionable. Use the currentBuild.previousBuild?.result pattern to detect state changes.

post {
    failure { slackSend(channel: '#ci', color: 'danger', message: "❌ Build failed") }
    success {
        script {
            // Only notify on recovery — stay silent when already green
            if (currentBuild.previousBuild?.result == 'FAILURE') {
                slackSend(channel: '#ci', color: 'good', message: "✅ Build fixed")
            }
        }
    }
}

10. Use when{} to Gate Stages — Not if/else in steps

Putting conditional logic inside sh commands or script { if } blocks hides what the pipeline is doing. When a stage is skipped by a when condition, Jenkins shows "skipped due to when condition" in the stage view — visible at a glance. When it's hidden in a script block, the stage appears to run and readers have to dig into logs to understand what happened.

❌ Hidden conditional

steps {
  script {
    if (env.BRANCH_NAME == 'main') {
      sh './deploy.sh'
    }
  }
}

✅ Visible when{} gate

when { branch 'main' }
steps {
  sh './deploy.sh'
}

11. Use Declarative — Reserve Scripted for Edge Cases

Covered in Lesson 12, worth repeating here as a practice. Declarative syntax is readable by engineers who don't know Groovy. It validates before running. It has built-in post, when, and options directives. Use script { } when you need Groovy logic. Only reach for full Scripted when you've genuinely hit Declarative's limits — which is rare.
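The healthy pattern is Declarative with a small script{} island where Groovy is genuinely needed. The stage and file names below are illustrative:

```groovy
// Declarative with a script{} escape hatch — keep the Groovy island small
stage('Tag Release') {
    steps {
        script {
            // Groovy logic Declarative can't express: compute a dynamic tag
            def version = readFile('VERSION').trim()
            env.RELEASE_TAG = "v${version}-${env.BUILD_NUMBER}"
        }
        // Back to plain Declarative steps
        sh 'git tag "$RELEASE_TAG" && git push origin "$RELEASE_TAG"'
    }
}
```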

12. Keep Pipelines in Version Control — Treat Them Like Code

A Jenkinsfile in a repository gets reviewed in pull requests, versioned with the code it builds, and recovered automatically if Jenkins dies. A pipeline configured only in the Jenkins UI is a single point of failure. This isn't just a best practice — it's the entire point of pipeline-as-code. If your team is still using Freestyle jobs in production, this is the most impactful change you can make.

The Reference Jenkinsfile — All 12 Practices Applied

Here's a production-grade Jenkinsfile that applies all twelve practices. Use this as a starting template for new services — strip out what you don't need, add back when you do.

The scenario:

You're a lead engineer at a SaaS company wrapping up Section II of your Jenkins training. You want one Jenkinsfile that embodies everything you've learned — clean structure, fast feedback, proper security, meaningful notifications, and correct failure handling. This is it.

// Reference Jenkinsfile — Section II Best Practices Applied
// Copy this as a starting point for new services
// Remove stages you don't need — add back as the service grows
pipeline {

    // Run on a labelled build agent — never on the controller (see anti-patterns)
    agent { label 'linux && docker' }

    // Practice 4: always set a pipeline timeout
    options {
        timeout(time: 30, unit: 'MINUTES')
        buildDiscarder(logRotator(numToKeepStr: '20'))
        timestamps()
        disableConcurrentBuilds()
    }

    environment {
        APP_NAME     = 'user-profile-service'
        REGISTRY     = 'registry.acmecorp.com'
        IMAGE_TAG    = "${BUILD_NUMBER}-${env.GIT_COMMIT?.take(7) ?: 'local'}"
        // Practice 3: all secrets from the credential store
        DOCKER_CREDS = credentials('docker-registry-credentials')
        SLACK_HOOK   = credentials('slack-webhook-url')
    }

    stages {

        stage('Checkout') {
            steps {
                checkout scm
                script {
                    currentBuild.description = "${env.BRANCH_NAME} · ${IMAGE_TAG}"
                    echo "Branch: ${env.BRANCH_NAME} · Commit: ${env.GIT_COMMIT?.take(7)}"
                }
            }
        }

        // Practice 2: cheapest checks first
        stage('Lint') {
            steps { sh './gradlew checkstyle' }
        }

        // Practice 5: parallel for independent work
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps { sh './gradlew test' }
                    // Practice 7: publish results in stage post, always
                    post {
                        always { junit allowEmptyResults: true, testResults: 'build/test-results/test/**/*.xml' }
                    }
                }
                stage('Integration Tests') {
                    steps { sh './gradlew integrationTest' }
                    post {
                        always { junit allowEmptyResults: true, testResults: 'build/test-results/integrationTest/**/*.xml' }
                    }
                }
            }
        }

        stage('Build Image') {
            // Practice 10: when{} gate — visible in the stage view
            when {
                allOf {
                    branch 'main'
                    not { expression { return env.CHANGE_ID != null } }
                }
            }
            steps {
                // Practice 3 applied: single-quoted sh so the shell reads the
                // credentials from env vars — Groovy never interpolates the secret
                sh '''
                    echo "$DOCKER_CREDS_PSW" | docker login -u "$DOCKER_CREDS_USR" --password-stdin "$REGISTRY"
                    docker build -t "$REGISTRY/$APP_NAME:$IMAGE_TAG" .
                    docker push "$REGISTRY/$APP_NAME:$IMAGE_TAG"
                '''
            }
        }

        stage('Deploy Staging') {
            when { branch 'main' }
            steps {
                script {
                    // Practice: capture previous version for rollback
                    def prev = sh(
                        script: "kubectl get deployment/${APP_NAME} -o jsonpath='{.spec.template.spec.containers[0].image}' --namespace=staging",
                        returnStdout: true
                    ).trim()

                    try {
                        sh """
                            kubectl set image deployment/${APP_NAME} \
                              ${APP_NAME}=${REGISTRY}/${APP_NAME}:${IMAGE_TAG} \
                              --namespace=staging
                            kubectl rollout status deployment/${APP_NAME} --namespace=staging --timeout=120s
                        """
                    } catch(Exception e) {
                        // Practice: auto-rollback on failure
                        echo "Deploy failed — rolling back to ${prev}"
                        sh "kubectl set image deployment/${APP_NAME} ${APP_NAME}=${prev} --namespace=staging"
                        currentBuild.result = 'FAILURE'
                        throw e
                    }
                }
            }
        }

    }

    post {
        // Practice 6: always clean the workspace
        always { cleanWs() }

        failure {
            slackSend(color: 'danger',
                      message: "❌ *${APP_NAME}* failed on `${env.BRANCH_NAME}` — <${env.BUILD_URL}|#${BUILD_NUMBER}>")
        }

        // Practice 9: only notify on state changes
        success {
            script {
                if (currentBuild.previousBuild?.result == 'FAILURE') {
                    slackSend(color: 'good',
                              message: "✅ *${APP_NAME}* fixed on `${env.BRANCH_NAME}` — <${env.BUILD_URL}|#${BUILD_NUMBER}>")
                }
                if (env.BRANCH_NAME == 'main') {
                    slackSend(color: 'good',
                              message: "🚀 *${APP_NAME}:${IMAGE_TAG}* deployed to staging")
                }
            }
        }
    }

}
A successful run on main produces console output like this:

Started by GitHub push by dev-ana (branch: main)
[Pipeline] Start of Pipeline
[Pipeline] node (agent-linux-01)
[Pipeline] { (Checkout) }
[Pipeline] checkout — git checkout main — HEAD: d4e5f6a
[Pipeline] script
Branch: main · Commit: d4e5f6a
[Pipeline] { (Lint) }
[Pipeline] sh
+ ./gradlew checkstyle
Checkstyle: 0 violations found
[Pipeline] { (Test) }
[Pipeline] parallel
[Pipeline] { (Unit Tests) }       [Pipeline] { (Integration Tests) }
+ ./gradlew test                  + ./gradlew integrationTest
47 tests, 0 failed                29 tests, 0 failed
[Pipeline] junit (unit)           [Pipeline] junit (integration)
Recording test results             Recording test results
[Pipeline] { (Build Image) }
+ docker build -t registry.acmecorp.com/user-profile-service:71-d4e5f6a .
Successfully built c9b4a3e1
+ docker push registry.acmecorp.com/user-profile-service:71-d4e5f6a
Pushed: registry.acmecorp.com/user-profile-service:71-d4e5f6a
[Pipeline] { (Deploy Staging) }
+ kubectl set image deployment/user-profile-service ... --namespace=staging
deployment.apps/user-profile-service image updated
+ kubectl rollout status deployment/user-profile-service --namespace=staging --timeout=120s
deployment "user-profile-service" successfully rolled out
[Pipeline] post (success)
[Pipeline] script
previousBuild.result = SUCCESS — skipping fixed notification
Sending deploy notification to #deployments
Slack: 🚀 user-profile-service:71-d4e5f6a deployed to staging
[Pipeline] cleanWs
Deleting project workspace... done
[Pipeline] End of Pipeline
Finished: SUCCESS

What just happened?

  • Lint ran first — the cheapest check passed in seconds. If it had failed, the parallel test stage would never have started, saving several minutes of compute.
  • Unit and integration tests ran simultaneously — both test suites executed in parallel. Total wall time was determined by the slower one (integration tests), not the sum of both. Practice 5 in action.
  • Both junit blocks fired in stage-level post — test results were published immediately after each parallel branch finished, not at the end of the pipeline. Practice 7 in action.
  • Build description set to main · 71-d4e5f6a — visible in the Jenkins build history as a subtitle. The build history page is now actually useful — you can see at a glance which branch and commit each build represents.
  • No "fixed" notification sent — the previous build was SUCCESS, so previousBuild.result == 'FAILURE' was false and the fixed notification was suppressed. Only the deploy notification fired. Practice 9 in action.
  • cleanWs() ran in always post — workspace deleted. Agent disk stays clean. Practice 6 in action.

Where to start: Copy this Jenkinsfile into your repository. Delete the Build Image and Deploy Staging stages if you're not using Docker or Kubernetes yet. Keep the structure, the options block, the parallel test stages, and the post block. Those four things alone put your pipeline ahead of 80% of what teams ship in production. Extend from there.

The Anti-Patterns to Actively Avoid

Running builds on the controller

Every build that runs on the Jenkins controller (historically called the master) is a security and stability risk. Set the built-in node's executor count to 0. Use agents. Always.

Stages that do too many things

A stage called "Build and Test and Package and Deploy" tells you nothing when it fails. One concern per stage. When something breaks, the stage view tells you exactly where to look.
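Split instead. Stage names and commands below are illustrative:

```groovy
// One concern per stage — when a build fails, the stage view points at the culprit
stage('Build')   { steps { sh './gradlew assemble' } }
stage('Test')    { steps { sh './gradlew test' } }
stage('Package') { steps { sh "docker build -t my-app:${BUILD_NUMBER} ." } }
stage('Deploy')  { steps { sh './scripts/deploy.sh staging' } }
```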

No timeout — ever

A pipeline with no timeout will eventually hang for 18 hours and block every other build in the queue. Add a timeout. You will thank yourself.

Using retry() to hide a broken test

If a test fails 1 in 3 runs and you wrap it in retry(3), you've hidden a real problem. Fix the flaky test. retry() is for transient infrastructure failures, not broken code.
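What retry() is actually for: an operation that fails for reasons outside your code, such as a momentary registry or network blip. The registry and tag here are illustrative:

```groovy
// Legitimate retry: a transient infrastructure failure, not a flaky test
stage('Push Image') {
    steps {
        retry(3) {
            sh 'docker push registry.example.com/my-app:1.4.2'
        }
    }
}
```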

Notifying on every green build

Within a week the channel is on mute and nobody sees the real alerts. Notify on failures and recoveries only. Every notification should be actionable.

Teacher's Note

The reference Jenkinsfile above is your Section II graduation project. If you can read every line of it and explain what it does and why — you're ready for Section III.

Practice Questions

1. Which step should you always put inside post { always { } } to prevent agent disks from filling up with build artifacts?



2. Instead of using if/else inside a script{} block to conditionally skip a stage, which Declarative directive should you use so the skip is visible in the Jenkins stage view?



3. Which Jenkinsfile keyword lets you run multiple independent stages at the same time to reduce total pipeline duration?



Quiz

1. What does "fail fast" mean in pipeline design?


2. Why is using image: 'node:latest' in a pipeline agent a problem?


3. Why should you publish test results in a stage-level post block rather than a pipeline-level post block?


Up Next · Section III · Lesson 26

Plugins Overview

Section II is complete. Section III begins — security, plugins, and scaling. First stop: the plugin ecosystem that makes Jenkins what it is.