CI/CD Lesson 8 – CI/CD Pipeline Overview | Dataplexa
Section I · Lesson 8

CI/CD Pipeline Overview

In this lesson

The Full Pipeline Map · Every Stage Explained · Triggers & Gates · Parallel vs Sequential · Pipeline Design Principles

A CI/CD pipeline is the automated assembly line that moves code from a developer's commit to running software in production. It is not a single script — it is a sequence of stages, each with a specific job, each acting as a gate that code must pass before advancing. Fail at any stage and the pipeline stops. Nothing broken moves forward. The pipeline is the enforcement mechanism for every quality standard a team cares about, written down once in code and applied consistently to every change, forever.

Lessons 4 through 7 covered what CI and CD each do individually. This lesson assembles the full picture — every stage in order, what triggers each one, where decisions get made, and the design principles that separate pipelines that teams trust from pipelines that teams route around. Think of this as the map you carry into every practical lesson from here on.

The Full Pipeline — All Eight Stages

A mature CI/CD pipeline has eight stages. Not every team needs all eight on day one — a small product might start with four — but understanding all eight tells you both what you are building toward and why each stage earns its place in the sequence.

The Eight-Stage Pipeline

1. 📤 Source — commit triggers
2. 🔨 Build — compile
3. 🧪 Unit Test — fast checks
4. 🔍 Analyse — lint · SAST
5. 📦 Package — artifact built
6. 🌐 Staging — E2E tests
7. Approve — gate / auto
8. 🚀 Deploy — production

Source through Analyse are CI scope · Package and Staging cover the CD artifact and staging phase · Approve and Deploy are the CD release

Each Stage in Detail

Knowing the stage names is not enough — you need to know what each stage is actually doing, what it is looking for, and what it means when it fails. Here is the full breakdown.

Stage-by-Stage Reference

1 · Source
A webhook fires when a developer pushes a commit or opens a pull request. The pipeline checks out the exact commit SHA — not "the latest" but the precise snapshot that triggered the run. This guarantees that the result is tied to a specific, reproducible state of the code.
2 · Build
The code is compiled or assembled on a clean runner. Dependencies are installed from a lockfile — never resolved fresh, because a dependency update mid-pipeline is a variable you do not want. If the build fails here, the problem is structural: syntax errors, missing imports, incompatible dependency versions.
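To keep builds reproducible, some teams add a pre-build check that refuses any dependency not pinned to an exact version. The sketch below is a hypothetical helper, not part of any specific CI tool, assuming pip-style requirement lines:

```python
# Sketch: a pre-build check that every dependency is pinned to an exact
# version, so the build installs the same packages on every run.
# (Hypothetical helper -- not part of any specific CI tool.)

def unpinned_dependencies(requirements_lines):
    """Return requirement lines that are not pinned with '=='."""
    unpinned = []
    for line in requirements_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            unpinned.append(line)
    return unpinned

reqs = ["flask==2.3.2", "requests>=2.0", "# tooling", "pytest==7.4.0"]
print(unpinned_dependencies(reqs))  # ['requests>=2.0'] -- would fail the build
```

A build stage that runs this check and exits non-zero on any output enforces the lockfile rule automatically rather than by convention.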
3 · Unit Test
Fast, isolated tests that check individual functions and classes. No database. No network calls. No external services. The entire unit test suite should run in under two minutes. Failures here point at specific logic errors in specific files — they are the easiest failures to diagnose and the cheapest to fix.
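A test in the spirit of this stage looks something like the sketch below — pure logic, no I/O. The function and assertions are illustrative, not from a real codebase:

```python
# Sketch: a fast, isolated unit test -- pure logic, no database, no network.
# The function under test is a made-up example.

def apply_discount(price, percent):
    """Return price reduced by `percent`, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Runs in microseconds: no external services means failures point
# directly at the logic in this one function.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(10.0, 100) == 0.0
```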
4 · Analyse
Static analysis runs in parallel with (or immediately after) unit tests. This covers linting for code style, SAST (Static Application Security Testing) for vulnerability patterns, dependency scanning for known CVEs, and code coverage checks. Failures here are policy violations — the code works but does not meet the team's standards.
5 · Package
The build output is packaged into a versioned, immutable artifact — a Docker image tagged with the commit SHA, a JAR, a compiled binary. This artifact is pushed to a registry and will be promoted unchanged through every subsequent environment. Nothing gets rebuilt after this point. The package stage is the handoff from CI to CD.
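Tagging with the commit SHA is what makes the artifact traceable. A minimal sketch, with placeholder registry and image names:

```python
# Sketch: deriving an immutable artifact tag from the commit SHA, so the
# exact image promoted through staging and production is traceable to one
# commit. Registry and image names here are placeholders.

def artifact_tag(registry, image, commit_sha):
    """Build a registry reference tagged with the short commit SHA."""
    short_sha = commit_sha[:12]
    return f"{registry}/{image}:{short_sha}"

tag = artifact_tag("registry.example.com", "checkout-service",
                   "9fceb02d0ae598e95dc970b74767f19372d61af8")
print(tag)  # registry.example.com/checkout-service:9fceb02d0ae5
```

Because the tag is derived from the commit, "what is running in production?" always has a one-line answer: look at the tag, find the commit.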
6 · Staging
The artifact is deployed to a staging environment — a close mirror of production. Integration tests verify that services talk to each other correctly. End-to-end tests simulate real user journeys. Performance baselines may be checked here. Failures at this stage reveal problems that only appear when the full system is assembled.
7 · Approve
In Continuous Delivery, a human reviews the pipeline results and clicks to proceed. In Continuous Deployment, this stage does not exist — the pipeline proceeds automatically. Either way, reaching this point means every automated check has passed. The question now is purely about timing and business readiness.
8 · Deploy
The artifact is deployed to production using the same mechanism that deployed it to staging — same scripts, same configuration shape, different environment variables. Post-deployment smoke tests run immediately to confirm the application started correctly. Monitoring dashboards are checked. If anything looks wrong, rollback is triggered.
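A post-deploy smoke test can be as small as a health-endpoint probe. The sketch below assumes a `/health` endpoint returning `{"status": "ok"}` — both are conventions, not universals:

```python
# Sketch: a post-deploy smoke test -- probe the health endpoint and fail
# fast if the app did not come up. The /health path and expected JSON body
# are assumptions about the service, not a standard.
import json
import urllib.request

def smoke_test(base_url, timeout=5):
    """Return True if the service reports healthy, False otherwise."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            body = json.load(resp)
            return resp.status == 200 and body.get("status") == "ok"
    except OSError:
        return False  # connection refused or timeout: the deploy failed
```

The pipeline treats a `False` here as a failed deploy and triggers rollback — the same decision a human watching a dashboard would make, just faster.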

Parallel Stages: Where Speed Hides

A naive pipeline runs every stage sequentially — one finishes, the next begins. A well-designed pipeline finds every stage that does not depend on another and runs them simultaneously. This is where teams cut pipeline time from 20 minutes to 8 minutes without removing a single check.

Unit tests and static analysis have no dependency on each other. Both need only the compiled build output — so they can run in parallel. Integration tests against a test database can run in a separate job while end-to-end UI tests run in another. Security scanning does not need test results. Running these concurrently is not cutting corners — it is applying basic engineering logic to the pipeline itself.
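The timing effect is easy to demonstrate. In this sketch the "checks" are stand-ins that sleep; real jobs would invoke test runners and scanners, but the arithmetic is the same — wall-clock time tracks the slowest job, not the sum:

```python
# Sketch: running independent checks concurrently instead of sequentially.
# The "checks" here just sleep; real jobs would shell out to test runners.
import time
from concurrent.futures import ThreadPoolExecutor

def run_check(name, seconds):
    time.sleep(seconds)  # stand-in for a real test/lint/scan job
    return name

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    jobs = [pool.submit(run_check, "unit-tests", 0.4),
            pool.submit(run_check, "lint", 0.3),
            pool.submit(run_check, "sast", 0.3)]
    results = [j.result() for j in jobs]
elapsed = time.monotonic() - start

# Wall-clock time is close to the slowest job (~0.4s), not the sum (~1.0s).
print(results, round(elapsed, 1))
```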

The Airport Security Analogy

An airport with one security lane makes every passenger wait regardless of how many officers are available. Open six lanes in parallel and the queue moves six times faster — but every passenger still goes through exactly the same checks. A well-designed CI pipeline works the same way: parallelise the stages that have no dependency on each other, and the total pipeline time drops dramatically without sacrificing a single quality gate. The checks do not change. The queue does.

Pipeline Design Principles

A pipeline that technically works is not the same as a pipeline that teams trust. The difference comes down to a handful of design principles. Break any of them and you get a pipeline that is slow, flaky, inconsistent, or quietly ignored — which is the worst outcome of all.

Five Principles of a Well-Designed Pipeline

Fail fast
Put the cheapest and most likely-to-fail checks first. Running a 20-minute end-to-end suite before a 30-second linting check wastes everyone's time when the linting check would have caught the problem immediately. The pipeline should eliminate the most obvious failures as early as possible.
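The principle reduces to a sort and an early return. A toy model, with illustrative costs:

```python
# Sketch: fail-fast ordering -- run checks from cheapest to most expensive
# and stop at the first failure, so a 30-second lint error never waits
# behind a 20-minute end-to-end suite. Costs and results are illustrative.

def run_pipeline(checks):
    """checks: list of (name, cost_seconds, passed). Returns (ok, seconds_spent)."""
    spent = 0
    for name, cost, passed in sorted(checks, key=lambda c: c[1]):
        spent += cost
        if not passed:
            return False, spent  # stop immediately; skip everything dearer
    return True, spent

checks = [("e2e-suite", 1200, True), ("lint", 30, False), ("unit-tests", 120, True)]
print(run_pipeline(checks))  # (False, 30): lint fails first, e2e never runs
```

Run the same checks in declaration order and the failure costs 1,350 seconds instead of 30 — same checks, same result, forty-five times the wait.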
🔒 Deterministic
The same commit must produce the same result every time. A flaky test — one that sometimes passes and sometimes fails on identical code — is more damaging than a consistently failing test, because it trains developers to ignore failures. Flaky tests must be fixed or quarantined immediately.
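A crude probe makes the definition concrete: run the same test repeatedly on identical code, and any disagreement marks it flaky. Real rerun tooling is more sophisticated; this sketch only illustrates the concept:

```python
# Sketch: a crude flakiness probe -- invoke the same test several times on
# identical code; if the outcomes disagree, the test is flaky by definition.
# The two sample "tests" below are contrived for illustration.

def is_flaky(test_fn, runs=5):
    """True if identical invocations do not all agree."""
    outcomes = {bool(test_fn()) for _ in range(runs)}
    return len(outcomes) > 1

def deterministic_test():
    return 2 + 2 == 4

counter = {"n": 0}
def flaky_test():
    counter["n"] += 1
    return counter["n"] % 2 == 0  # alternates pass/fail on identical code

print(is_flaky(deterministic_test))  # False
print(is_flaky(flaky_test))          # True
```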
📋 Pipeline as code
The pipeline definition lives in the repository alongside the code it tests. It is versioned, reviewed in pull requests, and changes to it go through the same process as changes to the application. A pipeline that only exists in a UI configuration panel is a pipeline that will drift and become inconsistent. Lesson 22 covers this in depth.
👁️ Visible to everyone
Every developer should be able to see the current state of the pipeline — not through a permission wall, not buried in logs. Build status should be on the pull request, in the team Slack channel, on a dashboard visible from across the room. Invisible failures are ignored failures.
🔁 Idempotent deployments
Running the deploy stage twice against the same artifact should produce the same result as running it once. A deployment that cannot be safely re-run is a deployment that will cause incidents during rollbacks and retries. Idempotency is not a nice-to-have — it is what makes automated recovery possible.
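The key move is to make the deploy converge the environment to a desired state rather than perform a one-off action. A minimal sketch, modelling the environment as a plain dict:

```python
# Sketch: an idempotent deploy step -- it converges the environment to a
# desired state instead of performing a one-off action, so re-running it
# during a retry or rollback is always safe. State is a plain dict here.

def deploy(environment, artifact):
    """Set the running version to `artifact`; a no-op if already there."""
    if environment.get("running") == artifact:
        return environment  # already converged -- nothing to do
    environment["previous"] = environment.get("running")
    environment["running"] = artifact
    return environment

env = {"running": "app:abc123"}
once = deploy(dict(env), "app:def456")
twice = deploy(deploy(dict(env), "app:def456"), "app:def456")
assert once == twice  # running it twice equals running it once
print(once)  # {'running': 'app:def456', 'previous': 'app:abc123'}
```

Note the guard clause: without the "already converged" check, a second run would overwrite `previous` with the new version and destroy the rollback target.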

Warning: A Pipeline Nobody Looks at Is Worse Than No Pipeline

The most dangerous pipeline is one that runs, produces results, and gets ignored. It happens gradually — a flaky test gets muted, a failure notification gets turned off, someone bypasses the gate "just this once," and over six months the pipeline becomes wallpaper. The team still calls it CI/CD. It is not. It is a very expensive cron job. A pipeline only has value while the team treats a red build as an emergency. The moment that stops, the pipeline stops working — regardless of what the dashboard says.

Key Takeaways from This Lesson

A pipeline has eight stages — Source, Build, Unit Test, Analyse, Package, Staging, Approve, Deploy. Each has a distinct job. None is optional once a team is operating at scale.
The Package stage is the CI-to-CD handoff — it produces the immutable artifact that CD will promote. Everything before it is CI. Everything after it is CD.
Parallel stages are where speed is reclaimed — unit tests, static analysis, and security scanning can all run simultaneously after the build. There is no reason to queue them sequentially.
Fail fast means cheapest checks first — a linting failure caught in 30 seconds saves 20 minutes of end-to-end test time. Order the stages from fastest/cheapest to slowest/most expensive.
Flaky tests are the enemy — a test that sometimes passes and sometimes fails on identical code destroys confidence in the entire pipeline. Fix or quarantine flaky tests immediately.
The pipeline lives in the repository — pipeline-as-code means the pipeline definition is versioned alongside the application. Changes to it are reviewed and tracked like any other code change.
Deployments must be idempotent — running the deploy stage twice should produce the same result as running it once. This is what makes automated rollback and retry safe.
A pipeline ignored is a pipeline that does not exist — visibility, fast feedback, and a team culture that treats red builds as emergencies are what give the pipeline its power. The YAML is just the skeleton.

Teacher's Note

Draw your team's current pipeline on a whiteboard. Label each box with how long it takes. You will immediately see which stages to parallelise, which are missing entirely, and where the minutes in a 45-minute CI run are actually going.

Practice Questions

Answer in your own words — then check against the expected answer.

1. Which pipeline stage marks the handoff from CI to CD — the point where a versioned, immutable artifact is created and pushed to a registry for promotion through environments?



2. What term describes a deployment where running the process twice produces the same result as running it once — a property that makes automated rollback and retry safe?



3. What is the term for a test that sometimes passes and sometimes fails on identical code — the type of test that, left unaddressed, causes developers to lose trust in the entire pipeline?



Lesson Quiz

1. A team's pipeline runs build → unit tests → static analysis sequentially, taking 18 minutes total. The build takes 3 minutes, unit tests 8 minutes, static analysis 7 minutes. What is the most impactful single change they can make?


2. A team has a test that fails intermittently with no code changes. Rather than fix it, they add a retry — if it fails, run it again automatically. Why is this the wrong response?


3. A team configures their CI/CD pipeline entirely through their CI platform's web UI. A new engineer changes a setting accidentally and nobody notices for a week. What design principle would have prevented this?


Up Next · Lesson 9

Benefits of CI/CD

You have seen how the pipeline works. Now see what it actually changes — for engineers, for teams, for products, and for the organisations that ship them.