CI/CD Lesson 18 – CI/CD Pipeline Stages | Dataplexa
Section II · Lesson 18

CI/CD Pipeline Stages

In this lesson

Stage Architecture · Jobs & Dependencies · Gates & Approvals · Parallelism · Full Pipeline Blueprint

Pipeline stages are the discrete, ordered phases of work that a CI/CD pipeline executes from the moment a commit is pushed to the moment an artifact reaches production. Each stage has a defined input, a defined output, and a defined failure behaviour — if it fails, downstream stages do not run. The stage architecture of a pipeline determines its speed, its reliability, the granularity of its feedback, and the risk profile of each deployment. Getting the stage order and dependencies right is one of the most consequential pipeline design decisions a team makes.

The Standard Stage Sequence

Every CI/CD pipeline is different in its specifics, but the underlying stage sequence follows a consistent logic: validate early and cheaply, build once, test thoroughly, deploy progressively. The earlier a problem is caught in the sequence, the cheaper it is to fix — so the stages are deliberately ordered from fastest and cheapest to slowest and most expensive.

Standard Pipeline Stages — Purpose and Failure Behaviour

Stage             | Purpose                                                                                                | Failure blocks
Source            | Check out code, validate the trigger event, set environment variables and job context                  | Everything
Quality           | Lint, format check, static analysis, type checking — fast, no compilation required                     | Build + all downstream
Build             | Compile, bundle, package — produce the versioned artifact and push to the registry                     | Test + all downstream
Test              | Unit, integration, contract, and security tests — verify the artifact before any deployment            | All deployments
Deploy staging    | Deploy artifact to staging environment, run smoke tests and E2E suite                                  | Production deploy
Approval gate     | Manual review and sign-off before production — human judgement applied at the highest-risk transition  | Production deploy
Deploy production | Deploy to production, run smoke tests, monitor for anomalies, trigger rollback on failure              | Automatic rollback

The Airport Security Lane Analogy

Airport security has multiple checkpoints in a deliberate order — ticket check, bag scan, body scan, gate verification. Each checkpoint catches a different class of problem and is positioned to stop a passenger before the next, more costly stage. A passenger who fails the ticket check never reaches the body scanner. A pipeline stage sequence works the same way: cheap checks run first, expensive checks run only when the cheap ones have passed, and nothing reaches production that has not cleared every gate before it.

Jobs, Dependencies, and the needs Keyword

In GitHub Actions, a pipeline is composed of jobs — independent units of work that each run on their own runner. By default, jobs run in parallel. To enforce a stage sequence, the needs keyword declares that a job must wait for one or more upstream jobs to succeed before it starts. A job that fails causes every downstream job that depends on it to be skipped automatically — without any additional configuration.

This dependency model is what gives the pipeline its gate behaviour. The build job needs the quality job to pass. The test job needs the build job to complete and produce an artifact. The staging deploy needs the test job. Each needs declaration is both a sequencing instruction and a failure gate — a single point where the pipeline can stop early and report exactly which stage failed and why.
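The dependency chain described above can be sketched in workflow YAML. This is a minimal, hypothetical fragment (job names and commands are illustrative, not taken from a specific project):

```yaml
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx eslint "src/**"

  build:
    needs: quality              # waits for quality; skipped automatically if quality fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build

  test:
    needs: build                # a job may also need several jobs: needs: [quality, build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```

If quality fails here, both build and test are skipped with no extra configuration — the needs declaration is the gate.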

Parallelism — Running Stages Faster Without Losing Correctness

Not all stages must run sequentially. Within a stage, independent jobs can run in parallel, cutting total pipeline time significantly. A test stage that runs unit tests, integration tests, and security scans in parallel completes in the time of the slowest of the three, rather than the sum of all three. For a team whose pipeline takes 18 minutes end-to-end, parallelising the test stage alone might bring that down to 8 minutes.
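This kind of within-stage parallelism falls straight out of the dependency model: two jobs that declare the same needs, but not each other, run concurrently. A hedged sketch (job and script names are illustrative; the build job is assumed to exist as in the blueprint below):

```yaml
jobs:
  unit-tests:
    needs: build                # both jobs wait for build...
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:unit

  security-scan:
    needs: build                # ...then run at the same time as unit-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm audit
```

A downstream deploy job that declares needs: [unit-tests, security-scan] waits for both, so the gate behaviour is preserved even though the jobs overlap in time.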

Matrix strategies take parallelism further — running the same job across multiple configurations simultaneously. A test matrix might run the test suite against Node 18, Node 20, and Node 22 in parallel, or against Linux, macOS, and Windows runners. The pipeline finishes when all matrix variants pass, and a failure in any variant fails the job. This is how libraries and packages verify cross-platform compatibility without running tests serially across every environment.
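The Node-version matrix described above might look like the following sketch. The versions are the ones named in the text; fail-fast is shown explicitly here even though true is the GitHub Actions default:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true           # default: one failing variant cancels the remaining ones
      matrix:
        node: [18, 20, 22]      # three variants of this job run in parallel
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
```

Setting fail-fast: false instead lets every variant run to completion, which is often preferable when you want a full cross-version report from a single pipeline run.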

Approval Gates — Human Judgement at the Right Moment

Not every pipeline stage should be fully automated. The transition from staging to production is the point of highest risk, and many organisations apply a manual approval gate there — a required human review before the deploy job is allowed to run. GitHub Actions supports this through environment protection rules: when a job targets a protected environment, it pauses and waits for a designated reviewer to approve before proceeding.

Approval gates are not a sign of low CI/CD maturity — they are a deliberate risk management decision. A team that deploys dozens of times per day may automate the production gate entirely, relying on staging tests and monitoring for confidence. A team deploying a regulated financial application may require two named approvers and a 30-minute soak period on staging. The right gate is the one that matches the risk profile of the deployment — not the one that feels most "DevOps."
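The YAML side of an approval gate is deliberately small — only the environment key appears in the workflow. The required reviewers, wait timers, and branch restrictions are configured on the environment itself in the repository settings, not in the workflow file. A minimal sketch (the URL and script path are hypothetical):

```yaml
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment:
      name: production          # protection rules (required reviewers, wait timer)
                                # are attached to this environment in repo settings
      url: https://example.com  # shown on the run page for this deployment
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh production
```

A wait timer on the environment is one way to implement something like the 30-minute soak period mentioned above, while required reviewers implement the named-approver rule.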

Full Pipeline Blueprint — GitHub Actions

on:
  push:
    branches: [main]

jobs:
  quality:                              # Stage 1 — fast, no compilation needed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx prettier --check "src/**"
      - run: npx eslint "src/**"

  build:                                # Stage 2 — runs only if quality passes
    needs: quality
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist-${{ github.sha }}
          path: dist/

  test:                                 # Stage 3 — parallel jobs within the stage
    needs: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [unit, integration, security]   # Three jobs run simultaneously
    steps:
      - uses: actions/checkout@v4
      - run: npm ci                      # restore dependencies needed by the test runner
      - uses: actions/download-artifact@v4
        with:
          name: dist-${{ github.sha }}
          path: dist/                    # place the built artifact where tests expect it
      - run: npm run test:${{ matrix.suite }}

  deploy-staging:                       # Stage 4 — runs only if all test matrix variants pass
    needs: test
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4        # deploy scripts live in the repository
      - run: ./scripts/deploy.sh staging ${{ github.sha }}
      - run: ./scripts/smoke-test.sh staging

  deploy-production:                    # Stage 5 — pauses for manual approval
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production             # Protected environment — requires reviewer approval
    steps:
      - uses: actions/checkout@v4        # deploy scripts live in the repository
      - run: ./scripts/deploy.sh production ${{ github.sha }}
      - run: ./scripts/smoke-test.sh production

What just happened?

Five stages run in sequence with enforced dependencies. Quality and build run serially. Three test suites run in parallel via a matrix strategy, each against the same artifact. Staging deploys automatically after all tests pass. Production pauses for a human approval, then deploys the same artifact — identified by commit SHA — that has been verified at every prior stage.

Warning: A Pipeline Without Stage Dependencies Is Just a List of Scripts

A pipeline where every job runs regardless of whether previous jobs succeeded is not a quality gate — it is a reporting tool. If the build fails but tests still run, the test results are meaningless. If tests fail but the staging deploy still runs, the deployment is unverified. Every stage must have an explicit dependency on the stages before it, and a failure at any stage must stop downstream stages from running. This is not optional configuration — it is the foundational behaviour that makes a pipeline a reliable gate rather than an expensive log generator.

Key Takeaways from This Lesson

Stages are ordered cheapest to most expensive — fast quality checks run before the build, the build runs before tests, tests run before deployments. A failure at any stage stops everything downstream, catching problems at the lowest possible cost.
The needs keyword enforces stage dependencies — without explicit dependencies, GitHub Actions jobs run in parallel by default. Every gate in the pipeline must be declared with needs or it is not a gate at all.
Parallelism within a stage cuts total pipeline time — independent jobs like unit tests, integration tests, and security scans can run simultaneously. A matrix strategy extends this across multiple environments or configurations.
Manual approval gates belong at the highest-risk transition — the staging-to-production step. GitHub environment protection rules pause the pipeline and require a named reviewer to approve before the deploy job runs.
The same artifact flows through every stage — built once, identified by commit SHA, downloaded at each stage from the registry. Nothing is rebuilt between stages, preserving the integrity of what was tested.

Teacher's Note

Draw the stage dependency graph for your pipeline before you write a single line of YAML — if you cannot draw it clearly, the pipeline will be unclear too, and unclear pipelines always have gaps where failures can silently pass through.

Practice Questions

Answer in your own words — then check against the expected answer.

1. What is the GitHub Actions keyword used to declare that a job must wait for one or more upstream jobs to succeed before it starts — the mechanism that enforces the stage sequence and gate behaviour of a pipeline?



2. What GitHub Actions feature runs the same job simultaneously across multiple configurations — such as different Node.js versions or operating systems — completing when all variants pass and failing if any single variant fails?



3. What GitHub Actions feature pauses a pipeline job that targets a production environment and requires a named reviewer to approve before the deploy step runs?



Lesson Quiz

1. A pipeline has three jobs — quality, build, and test — but no needs declarations between them. A developer notices the test job sometimes starts before the build job finishes. What is happening and why?


2. A pipeline uses a matrix strategy to run unit tests, integration tests, and security scans as three parallel jobs in the test stage. Unit tests take 3 minutes, integration tests take 6 minutes, and security scans take 4 minutes. How long does the test stage take?


3. A team deploying a regulated financial application requires two named approvers and a 30-minute soak period on staging before every production deployment. A consultant tells them this shows their pipeline is immature. What is the correct assessment?


Up Next · Lesson 19

Environment Promotion

Dev, staging, production — and sometimes many more. Environment promotion is the discipline of moving a verified artifact through a sequence of increasingly production-like environments before it reaches users.