Jenkins Lesson 42 – Migration Strategies | Dataplexa
Section IV · Lesson 42

Migration Strategies

No Jenkins setup stays the same forever. You'll migrate Freestyle jobs to Pipelines, upgrade Jenkins major versions, move from a bare metal server to Kubernetes, or eventually move to a different CI/CD platform entirely. This lesson is about doing all of that without dropping a build.

This lesson covers

Freestyle to Pipeline migration → Jenkins version upgrades → Moving Jenkins to a new server → Plugin migration → Migrating from Jenkins to GitHub Actions or GitLab CI → The strangler fig migration pattern

Migration is where theory meets reality. The principles are simple. The practice is always messy — jobs that depend on each other in undocumented ways, plugins that stopped being maintained years ago, credentials that only one person knows about, and deployments happening every hour that you can't stop while you migrate. This lesson gives you the patterns that survive contact with that reality.

The Analogy

Migrating a live Jenkins is like renovating an airport that can't close. Planes are landing and taking off while you rebuild the terminal. The key technique is the same: build the new structure alongside the old one, move operations over gradually, and only demolish the old structure after the new one is proven to work. In software this pattern is called the Strangler Fig — new code grows around the old, gradually replacing it until nothing of the original remains.

Migration 1 — Freestyle Jobs to Pipelines

The most common Jenkins migration is modernising legacy Freestyle jobs into Declarative Pipelines. Freestyle jobs are configured through the UI, can't be code-reviewed, and can't be version-controlled. Every Freestyle job you migrate to a Jenkinsfile is a job that can now be treated like code.

The migration process — per job

1

Audit the Freestyle job

Document every build step, post-build action, trigger, and parameter. Open the job config XML from JENKINS_HOME/jobs/job-name/config.xml — this is the canonical record of what the job does.
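The audit step can be scripted. A minimal sketch of pulling the declared build steps and post-build actions out of a Freestyle config.xml — the sample XML here is illustrative; in practice read the real file from JENKINS_HOME/jobs/job-name/config.xml:

```python
# Sketch: list the builders and publishers a Freestyle job declares,
# so the audit starts from the canonical record (config.xml).
import xml.etree.ElementTree as ET

# Illustrative stand-in for JENKINS_HOME/jobs/<job-name>/config.xml
SAMPLE_CONFIG = """<project>
  <builders>
    <hudson.tasks.Shell><command>./build.sh &amp;&amp; ./test.sh</command></hudson.tasks.Shell>
  </builders>
  <publishers>
    <hudson.tasks.junit.JUnitResultArchiver>
      <testResults>**/test-results/*.xml</testResults>
    </hudson.tasks.junit.JUnitResultArchiver>
  </publishers>
</project>"""

def audit(config_xml: str) -> dict:
    root = ET.fromstring(config_xml)
    def tags(section):
        node = root.find(section)
        return [] if node is None else [child.tag for child in node]
    return {"builders": tags("builders"), "publishers": tags("publishers")}

print(audit(SAMPLE_CONFIG))
# → {'builders': ['hudson.tasks.Shell'], 'publishers': ['hudson.tasks.junit.JUnitResultArchiver']}
```

Each tag that comes back (hudson.tasks.Shell, JUnitResultArchiver, …) is one item to map onto a pipeline construct in step 2.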

2

Write the equivalent Jenkinsfile

Map each build step to a pipeline stage. Build steps → sh steps. Post-build actions → post { } blocks. Parameters → parameters { } directive. Upstream triggers → triggers { upstream() }.

3

Create a new Pipeline job — do NOT delete the Freestyle job yet

Run both in parallel for 2–3 weeks. Compare outputs. The Freestyle job is your safety net — if the pipeline has gaps, you haven't lost anything.

4

Disable the Freestyle job — wait one release cycle — then delete

Disable first, don't delete. Wait for a full release cycle to confirm nothing depended on the old job in an undocumented way. Then delete with confidence.
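The disable-first lifecycle maps onto Jenkins' job HTTP endpoints. A small sketch that just builds the URLs — the base URL and job name are illustrative, and real calls are authenticated POSTs (user plus API token):

```python
# Sketch: disable/enable/doDelete are the job lifecycle endpoints in Jenkins.
# BASE is illustrative; real requests need auth, this only shows the safe order.
BASE = "https://jenkins.example.com"

def job_action_url(job: str, action: str) -> str:
    """action: 'disable' to pause, 'enable' to roll back, 'doDelete' to remove."""
    return f"{BASE}/job/{job}/{action}"

# Safe order: disable today, delete only after a full release cycle.
print(job_action_url("checkout-service-freestyle", "disable"))   # now
print(job_action_url("checkout-service-freestyle", "doDelete"))  # weeks later
```

The enable endpoint is the instant rollback: if the pipeline turns out to have gaps, one POST brings the Freestyle job back.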

Common Freestyle-to-Pipeline mappings:

// Freestyle job config → Declarative Pipeline equivalent

// Freestyle: "Execute shell" build step
//   Command: ./build.sh && ./test.sh
pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                sh './build.sh'
                sh './test.sh'
            }
        }
    }
}

// Freestyle: "Publish JUnit test result report"
//   Test report XMLs: **/test-results/*.xml
    post {
        always {
            junit '**/test-results/*.xml'
        }
    }

// Freestyle: "Build periodically" trigger
//   Schedule: H 6 * * 1-5
    triggers {
        cron('H 6 * * 1-5')
    }

// Freestyle: "Build after other projects are built"
//   Projects to watch: checkout-service
    triggers {
        upstream(upstreamProjects: 'checkout-service', threshold: hudson.model.Result.SUCCESS)
    }

// Freestyle: "This build is parameterised"
//   String parameter: DEPLOY_ENV (default: staging)
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
    }

// Freestyle: "Send email for every unstable build"
    post {
        unstable {
            mail to: 'team@acmecorp.com',
                 subject: "UNSTABLE: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "Build URL: ${env.BUILD_URL}"
        }
    }
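Assembled into a single file, the fragments above form one valid Declarative Pipeline — a sketch using the same illustrative names:

```groovy
// Sketch: the mapped fragments combined into one complete Jenkinsfile.
pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
    }
    triggers {
        cron('H 6 * * 1-5')
        upstream(upstreamProjects: 'checkout-service', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Build and Test') {
            steps {
                sh './build.sh'
                sh './test.sh'
            }
        }
    }
    post {
        always {
            junit '**/test-results/*.xml'
        }
        unstable {
            mail to: 'team@acmecorp.com',
                 subject: "UNSTABLE: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "Build URL: ${env.BUILD_URL}"
        }
    }
}
```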
# Running both jobs in parallel for comparison:

Freestyle job: checkout-service-freestyle
  Build #142 — SUCCESS — 3m 14s
  Tests: 47 passed, 0 failed
  Artifacts: checkout-service-1.0.142.jar

Pipeline job: checkout-service-pipeline
  Build #12 — SUCCESS — 3m 09s
  Tests: 47 passed, 0 failed
  Artifacts: checkout-service-1.0.142.jar

# Results match — build durations within a few seconds of each other
# Running both for 14 days → all results match → disabling Freestyle job
# After one release cycle → deleting Freestyle job

What just happened?

  • Both jobs ran in parallel for the comparison period — this is the critical safety step. You're not migrating live traffic to an untested pipeline. You're running the new pipeline alongside the old one and comparing outputs until confident they match.
  • Pipeline was 5 seconds faster — don't read much into small duration deltas; a few seconds of variance between individual runs is normal. The comparison signals that matter are test results and artifact checksums, not build time.
  • Disable before delete — disabling the Freestyle job means it no longer runs but its config and history are preserved in Jenkins. If something goes wrong after the pipeline takes over, you can re-enable the Freestyle job instantly. Deleting is permanent.
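The "compare outputs" step can be made mechanical with an artifact checksum diff. A sketch — the demo writes two stand-in files; in practice point the paths at the archived artifacts of the Freestyle and Pipeline builds:

```python
# Sketch: confirm both jobs produced byte-identical artifacts via SHA-256.
import hashlib
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def artifacts_match(freestyle_jar: Path, pipeline_jar: Path) -> bool:
    return sha256(freestyle_jar) == sha256(pipeline_jar)

# Demo with two identical stand-in artifacts (illustrative names):
tmp = Path(tempfile.mkdtemp())
a = tmp / "freestyle-1.0.142.jar"; a.write_bytes(b"jar-bytes")
b = tmp / "pipeline-1.0.142.jar"; b.write_bytes(b"jar-bytes")
print(artifacts_match(a, b))  # → True
```

A mismatch here usually means the pipeline missed a build step or an environment variable — exactly the gap the parallel-run period exists to catch.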

Migration 2 — Jenkins Version Upgrades

Jenkins cuts a new LTS baseline roughly every 12 weeks, with security and bug-fix point releases in between. Falling several LTS lines behind means missing security patches and losing compatibility with newer plugins. The safe upgrade order is: test on staging, upgrade plugins first, then upgrade core.

Safe upgrade sequence

1

Back up JENKINS_HOME

Before anything else. As covered in Lesson 31. Verify the backup is intact with sha256sum. This is the rollback plan.
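The backup-and-verify step is two commands. A sketch — the demo uses a throwaway directory so it can run anywhere; in practice point JENKINS_HOME at the real install (commonly /var/lib/jenkins):

```shell
# Sketch: archive JENKINS_HOME and record a checksum to verify the backup.
JENKINS_HOME="$(mktemp -d)"                      # stand-in for /var/lib/jenkins
echo "<hudson/>" > "$JENKINS_HOME/config.xml"    # demo content

BACKUP="$(mktemp -d)/jenkins-backup-$(date +%F).tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$JENKINS_HOME")" "$(basename "$JENKINS_HOME")"

sha256sum "$BACKUP" > "$BACKUP.sha256"
sha256sum -c "$BACKUP.sha256"                    # prints "... OK" when intact
```

Store the .sha256 file next to the archive — re-running the `-c` check just before a restore tells you the backup hasn't been corrupted in the meantime.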

2

Upgrade plugins first on staging

Plugin compatibility with the new Jenkins version is the most common source of upgrade breakage. Upgrade and test plugins before upgrading core.

3

Test all critical pipelines on staging with the new version

Run your 5 most business-critical pipelines end to end. If they pass, you have high confidence the upgrade is safe.

4

Upgrade production in a maintenance window

Announce the window. Use safe-restart to wait for running builds. Replace the jenkins.war (or the Docker image tag). Start. Verify the UI loads and one build runs successfully.
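If Jenkins runs in Docker, the core swap is a tag bump. A minimal compose sketch — the image tag shown is illustrative, pick the current LTS:

```yaml
# docker-compose.yml sketch — pin the LTS tag explicitly;
# upgrading core = bumping this tag and recreating the container.
services:
  jenkins:
    image: jenkins/jenkins:2.452.3-lts-jdk17   # illustrative — pin the LTS you tested on staging
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home         # JENKINS_HOME survives the image swap

volumes:
  jenkins_home:
```

Because JENKINS_HOME lives in the named volume, rolling back is the same operation in reverse: restore the old tag and recreate the container.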

5

Keep the backup for 2 weeks minimum

Some upgrade issues only surface after days of use. Keep the backup accessible — don't delete it until you're confident the upgrade is stable.

Migration 3 — Moving Off Jenkins (The Strangler Fig)

Sometimes the migration is away from Jenkins entirely — to GitHub Actions, GitLab CI, Tekton, or another platform. The strangler fig pattern is the safest way to do this at scale.

The name comes from the strangler fig tree — a plant that grows around a host tree, gradually enveloping it, until the host is completely replaced and only the fig remains. The host tree was never suddenly cut down — it was incrementally surrounded.

# Strangler fig migration — Jenkins → GitHub Actions
# Week 1–2: Run the new pipeline ALONGSIDE Jenkins, compare outputs
# Week 3–4: New pipeline is primary, Jenkins is on standby
# Week 5+:  Jenkins jobs disabled, Jenkins decommissioned

# Phase 1: New pipelines run in shadow mode
# The GitHub Actions workflow runs but doesn't deploy anything
# Jenkins still does all deployments
# Compare test results, build times, artifact checksums

# Example: feature branch test comparison
# Jenkins result:  payment-api #203 — SUCCESS — 2m 44s — 47 tests
# GHA result:      payment-api PR #78 check — SUCCESS — 2m 39s — 47 tests
# ✅ Results match — can promote this service to Phase 2

# Phase 2: GitHub Actions deploys to staging, Jenkins deploys to production
# This validates GHA end-to-end without risking production traffic

# Phase 3: GitHub Actions deploys to all environments
# Jenkins jobs set to disabled (not deleted)
# Monitor for 2 weeks

# Phase 4: Delete Jenkins jobs, decommission Jenkins server
# Keep JENKINS_HOME backup for 90 days for audit trail
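A Phase 1 shadow workflow might look like the sketch below — the workflow name, branch, and script names are illustrative. The defining property is what's absent: there is no deploy step, because Jenkins still owns all deployments in Phase 1.

```yaml
# .github/workflows/shadow-ci.yml — Phase 1 sketch: build and test only, never deploy.
name: shadow-ci
on:
  pull_request:
  push:
    branches: [main]

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh
      - run: ./test.sh
      # Deliberately no deploy step — compare these results against the
      # Jenkins build for the same commit before promoting to Phase 2.
```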
# Migration progress tracker — 8 services migrating from Jenkins to GitHub Actions

Service                 Phase   Jenkins     GHA         Status
payment-api             3       DISABLED    ACTIVE      ✅ Validated 2 weeks
checkout-service        3       DISABLED    ACTIVE      ✅ Validated 2 weeks
fraud-detection         2       ACTIVE      STAGING     🔄 GHA deploys staging only
user-auth               2       ACTIVE      STAGING     🔄 GHA deploys staging only
notification-service    1       ACTIVE      SHADOW      🔄 Comparing results
reporting-api           1       ACTIVE      SHADOW      🔄 Comparing results
audit-service           0       ACTIVE      NOT STARTED ⏳ Backlog
admin-portal            0       ACTIVE      NOT STARTED ⏳ Backlog

Overall: 2/8 fully migrated, 4/8 in progress, 2/8 not started
Jenkins handling: production deploys for 6 services — 2 of those (Phase 2) are production-only, with GHA already covering staging

What just happened?

  • No big-bang cutover — 8 services are at 4 different migration phases simultaneously. Each service migrates independently at its own pace. The team is never in a position where all CI is broken because a migration went wrong.
  • Shadow mode de-risks the new platform — Phase 1 runs GitHub Actions but doesn't change anything in any environment. It's purely a parallel observer. This phase alone catches most integration problems before they affect any builds that matter.
  • Production is the last step, not the first — Phase 2 gives GitHub Actions real deploy experience (staging) while Jenkins still handles the risk-bearing production deploys. If GHA has a bug, the blast radius is staging only.
  • Jenkins isn't deleted until the very end — and even then, JENKINS_HOME is archived for 90 days. Build history, audit trails, and credential backup are preserved. The decommission is clean and reversible for a full quarter.

Migration Anti-Patterns

Big-bang migration

Migrating all 30 services in one weekend. When something goes wrong at 2 AM on Sunday, there's no safe fallback — everything is broken. Migrate incrementally, service by service.

Deleting before validating

Deleting the old Freestyle job or Jenkins instance before the new pipeline has run successfully through at least one full release cycle. Disable first. Delete later.

Migrating without documentation

Starting to migrate a Freestyle job without first documenting what it actually does. The config.xml is the truth — read it before writing a single line of Jenkinsfile.

Upgrading Jenkins without a rollback plan

Upgrading the production Jenkins without a verified backup. If the upgrade causes unexpected failures, rolling back without a backup means hours of manual reconstruction under pressure.

Teacher's Note

Every migration has two phases: build the new thing alongside the old thing, then cut over. Anyone who tells you they did a big-bang migration successfully was either very lucky or is not telling you about the all-nighter that followed.

Practice Questions

1. What is the safe sequence for removing an old Freestyle job after its Pipeline replacement has been running successfully?



2. What is the correct sequence for a safe Jenkins core version upgrade?



3. What migration pattern should be used when moving from Jenkins to a new CI/CD platform, and why?



Quiz

1. Before migrating a Freestyle job to a Pipeline, what is the most reliable source of truth for what the job actually does?


2. In Phase 1 of the strangler fig migration pattern, what does the new platform pipeline do?


3. Why should plugins be upgraded before upgrading the Jenkins core version?


Up Next · Lesson 43

Troubleshooting Jenkins

Pipelines failing for unknown reasons, agents disconnecting, plugins crashing, the UI not loading — the systematic diagnosis approach that turns "Jenkins is broken" into a fixed and understood problem.