CI/CD Lesson 19 – Environment Promotion | Dataplexa
Section II · Lesson 19

Environment Promotion

In this lesson

Environment Types · Promotion Strategy · Environment Parity · Config per Environment · Promotion in the Pipeline

Environment promotion is the practice of moving a verified, immutable artifact through a sequence of increasingly production-like environments — from development, through testing and staging, to production — with defined verification steps at each transition. Each environment serves a different audience and a different validation purpose. Promotion is not deployment repeated; it is deployment with accumulated evidence. An artifact that reaches production has already been proven to work in every environment before it, giving teams the confidence to deploy frequently without increasing risk.

Environment Types and Their Roles

Different organisations use different environment sets, but the underlying pattern is consistent: environments exist on a spectrum from "developer-controlled and disposable" to "production-identical and protected." Each environment in the promotion chain adds a layer of confidence before the artifact reaches users.

Environment Types — Purpose and Audience

Development
Local or ephemeral per-branch environment controlled by the developer. Purpose is rapid iteration — run the application, see a change, iterate. Not part of the automated pipeline's promotion sequence; never used as a deployment gate.
CI / Test
Ephemeral environment spun up by the pipeline for each run — a clean runner with real or containerised dependencies. Purpose is automated verification: build, test, static analysis. Destroyed after the pipeline run completes.
Staging
Persistent, production-identical environment used for final verification before release. Audience: QA, product, and internal stakeholders. Purpose: end-to-end tests, UAT, performance benchmarks, and the human approval review. Should use production infrastructure configurations at a smaller scale.
Pre-prod
Optional environment that mirrors production infrastructure exactly — same instance types, same database sizes, same network topology. Used in regulated industries or for high-stakes releases where staging scale differences are unacceptable as evidence.
Production
The live environment serving real users. Only receives artifacts that have passed every prior stage. Protected by branch rules, environment approvals, and deployment controls. Changes here are the highest-consequence and must be the lowest-risk by the time they arrive.

The Clinical Trial Analogy

A new medication does not go from the laboratory directly to patients. It moves through a structured sequence: lab tests, animal studies, small human trials, larger trials, regulatory review, and finally public prescription. Each stage uses the same compound, adds a layer of evidence, and must be passed before the next begins. Skipping a stage does not save time — it removes evidence that the next stage depends on. Environment promotion works identically: each environment is a trial phase, each verification step is a piece of evidence, and production is the point where accumulated evidence meets real users.

Environment Parity — The Condition That Makes Promotion Meaningful

Environment parity is the principle that staging and production should be as structurally identical as possible — same operating system, same runtime versions, same infrastructure configuration, same external service connections where feasible. Parity is what makes staging evidence meaningful. If staging runs on a different database version, uses mocked external services, or runs on a single instance while production runs on a cluster, then a test that passes on staging is not strong evidence about production behaviour.

Perfect parity is rarely achievable — staging typically runs at reduced scale to manage cost — but the gaps between environments must be understood and documented. The team should know exactly what staging does and does not represent, and their confidence in the promotion decision should reflect those gaps. Using infrastructure as code (covered in Lesson 36) to define environments means the same Terraform or Pulumi configuration that describes production also describes staging, with only scale parameters changed.
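As a sketch of that idea (the module path, variable names, and instance sizes below are illustrative assumptions, not from the lesson), the same Terraform module can describe both environments, with only scale values varying — and the one remaining gap called out explicitly:

```hcl
# One module definition, two instantiations. Structure is identical;
# only scale parameters differ between staging and production.
module "staging" {
  source         = "./modules/app-environment"
  environment    = "staging"
  instance_type  = "t3.medium"     # same instance family as production
  instance_count = 2               # reduced scale to manage cost
  db_instance    = "db.t3.medium"  # smaller DB size: a known, documented gap
}

module "production" {
  source         = "./modules/app-environment"
  environment    = "production"
  instance_type  = "t3.medium"
  instance_count = 20
  db_instance    = "db.r6g.xlarge"
}
```

Because both blocks reference the same module source, any structural change (runtime version, network topology, service wiring) applies to both environments at once; drift can only enter through the explicitly listed parameters.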

Configuration per Environment — What Changes and What Must Not

The artifact is immutable — the same binary or Docker image is deployed to every environment. But the configuration the artifact runs with changes per environment: database connection strings, API endpoint URLs, feature flag states, log levels, and third-party credentials all differ between staging and production. This separation between the artifact and its configuration is a foundational principle — it is what makes "build once, deploy many" possible without hardcoding environment-specific values into the build.

Environment variables are the most common mechanism for injecting per-environment configuration at runtime. In a pipeline, each environment's secrets and config values are stored in GitHub encrypted secrets scoped to that environment — staging secrets are only available to jobs targeting the staging environment, production secrets only to production jobs. This scoping ensures that a staging deployment job can never accidentally read a production credential. Environment variables and secrets management are covered in depth in Lessons 23 and 24.
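In workflow terms, the scoping hangs on the job's environment key (the job name and secret names in this fragment are illustrative): the same secret name resolves to a different value depending on which environment the job targets.

```yaml
deploy-staging:
  runs-on: ubuntu-latest
  environment: staging                # only staging-scoped secrets resolve here
  steps:
    - run: ./deploy.sh staging
      env:
        DATABASE_URL: ${{ secrets.DATABASE_URL }}   # staging value
        API_KEY: ${{ secrets.API_KEY }}             # staging value
```

A production job with `environment: production` would use identical syntax but receive the production values — the workflow file itself contains no environment-specific secrets at all.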

Promotion Strategy in the Pipeline

Promotion is implemented in the pipeline through a combination of job dependencies, environment targeting, and verification steps between each transition. The artifact is always referenced by its commit SHA — never rebuilt. Each environment's deploy job downloads the same artifact, applies its environment-specific configuration, deploys, and runs its verification suite before the next stage is allowed to begin.

Environment Promotion Pipeline — GitHub Actions

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ github.sha }}      # Pass the artifact reference to all downstream jobs
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t ghcr.io/myorg/app:${{ github.sha }} .
      - run: docker push ghcr.io/myorg/app:${{ github.sha }}

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging                # Uses staging-scoped secrets only
    steps:
      - run: |
          docker pull ghcr.io/myorg/app:${{ needs.build.outputs.image-tag }}
          ./deploy.sh staging ${{ needs.build.outputs.image-tag }}
      - run: ./smoke-test.sh https://staging.myapp.com
      - run: ./e2e-tests.sh https://staging.myapp.com

  deploy-production:
    needs: deploy-staging               # Only runs after staging passes all verification
    runs-on: ubuntu-latest
    environment: production             # Pauses for manual approval; uses production secrets
    steps:
      - run: |
          # Same image tag — same artifact that passed staging — deployed to production
          docker pull ghcr.io/myorg/app:${{ needs.build.outputs.image-tag }}
          ./deploy.sh production ${{ needs.build.outputs.image-tag }}
      - run: ./smoke-test.sh https://myapp.com

What just happened?

The build job produces an artifact tagged with the commit SHA and passes that tag as a job output. The staging deploy pulls and deploys that exact image, runs smoke and E2E tests, and only if both pass does the production job become eligible to run — where it pauses for human approval, then deploys the same image to production. One artifact, two environments, one promotion sequence.
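The `smoke-test.sh` script referenced in the pipeline is not shown in this lesson; as a minimal sketch (assuming the application exposes a `/health` endpoint returning HTTP 200 — an assumption, not something the lesson states), it might look like:

```shell
#!/bin/sh
# Hypothetical sketch of smoke-test.sh: one cheap check that the deployed
# service is up and answering, run immediately after each deploy.
BASE_URL="${1:-http://localhost:8080}"

http_status() {
  # Print only the HTTP status code; discard the response body.
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

smoke_test() {
  status=$(http_status "${BASE_URL}/health")
  if [ "$status" = "200" ]; then
    echo "smoke test passed: ${BASE_URL}/health -> ${status}"
  else
    echo "smoke test FAILED: ${BASE_URL}/health -> ${status}" >&2
    return 1
  fi
}
```

The point of a smoke test is speed and breadth, not depth — it answers "is the deployment alive?" in seconds, leaving thorough behavioural verification to the E2E suite that runs after it.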

Warning: Staging Environments That Don't Resemble Production Produce False Confidence

A staging environment that runs on a single container while production runs on a 20-node cluster, uses a SQLite database while production uses PostgreSQL, and mocks all external API calls instead of connecting to sandbox accounts is not evidence — it is theatre. Tests that pass on such a staging environment tell you almost nothing about production behaviour. The gaps between staging and production must be actively minimised and explicitly documented. Any gap is a risk the team is accepting; accepting it unknowingly is far more dangerous than accepting it deliberately.

Key Takeaways from This Lesson

Promotion is deployment with accumulated evidence — an artifact that reaches production has been verified in every prior environment. Each stage adds a layer of confidence; skipping a stage removes evidence the next stage depends on.
Environment parity makes staging evidence meaningful — if staging differs significantly from production in infrastructure, runtime, or dependencies, tests that pass on staging are weak evidence about production behaviour.
The artifact is immutable; the configuration changes per environment — the same image or binary is deployed everywhere, with environment-specific values injected at runtime via environment variables scoped to each environment.
GitHub environment scoping keeps secrets isolated — staging secrets are only available to jobs targeting the staging environment, production secrets only to production jobs. A staging job cannot accidentally read a production credential.
Infrastructure as code is the foundation of environment parity — defining staging and production with the same Terraform or Pulumi configuration, varying only scale parameters, is the most reliable way to keep environments structurally identical.

Teacher's Note

List the differences between your staging and production environments and keep that list somewhere visible — every item on it is a class of production bug that staging cannot catch, and the list tends to grow quietly unless someone owns it.

Practice Questions

Answer in your own words — then check against the expected answer.

1. What is the principle that staging and production environments should be structurally identical — same runtime versions, same infrastructure configuration, same external service connections — so that staging verification produces meaningful evidence about production behaviour?



2. What is the most common mechanism for injecting per-environment configuration — database connection strings, API endpoints, credentials — into an immutable artifact at runtime so the same artifact can be deployed to staging and production without being rebuilt?



3. In a GitHub Actions promotion pipeline, what mechanism allows the build job to pass the artifact's commit SHA tag to the downstream staging and production deploy jobs so they all reference the same image?



Lesson Quiz

1. A colleague says that deploying to staging and then to production is just doing the same deployment twice — wasted effort. What is the accurate distinction between a repeated deployment and environment promotion?


2. A team's staging environment uses SQLite for its database while production uses PostgreSQL. All staging tests pass. A production deployment fails with a database constraint error. What environment parity gap caused this?


3. A pipeline has separate staging and production environments configured in GitHub. A security reviewer asks how the pipeline prevents a staging deploy job from accidentally using production database credentials. What is the mechanism?


Up Next · Lesson 20

Rollback Strategies

Every deployment carries some risk. Rollback strategies are the safety net — the defined, practiced procedures for returning to a known-good state when a production deployment goes wrong.