CI/CD Course
Code Quality & Static Analysis
In this lesson
Static analysis is the automated examination of source code without executing it — reading the code as text and applying a set of rules to identify problems, enforce standards, and measure structural quality. It is the complement to testing: where tests verify what the code does at runtime, static analysis verifies how the code is written. A function can pass every test and still be so complex, inconsistently formatted, or structurally fragile that the next developer to touch it introduces a bug. Static analysis catches that category of problem before it accumulates into technical debt that slows the entire team down.
Linting — Style, Correctness, and Consistency
A linter is a static analysis tool that checks code against a defined set of rules covering style, syntax, and common error patterns. Some rules are stylistic — consistent indentation, quote style, trailing commas — and exist purely to keep the codebase readable and review-friendly. Others are correctness rules — unused variables, undefined references, unreachable code — that catch real bugs without running the application.
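The split between stylistic and correctness rules can be sketched in a minimal ESLint flat config. The rule names below are real ESLint rules; the file layout assumes ESLint 9's flat-config format, and the glob is illustrative:

```javascript
// eslint.config.js — a minimal sketch of the two rule categories.
export default [
  {
    files: ["src/**/*.js"],
    rules: {
      // Correctness rules: catch real bugs without running the code.
      "no-unused-vars": "error",
      "no-undef": "error",
      "no-unreachable": "error",
      // Stylistic rules: keep the codebase consistent and review-friendly.
      "quotes": ["warn", "single"],
      "comma-dangle": ["warn", "always-multiline"],
    },
  },
];
```

Note the severity split: correctness rules fail the build, while stylistic rules warn, since a formatter typically fixes the latter automatically.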
The pipeline value of linting is not just catching bugs — it is freeing code reviewers from style debates. When a linter enforces formatting automatically, pull request comments about indentation and semicolons disappear. Reviewers spend their attention on logic, architecture, and correctness. That shift in review focus has a measurable effect on the quality of feedback a team produces and the speed at which PRs are approved.
The Copy Editor Analogy
A copy editor reads a manuscript before it reaches the publisher — fixing spelling, enforcing house style, flagging grammatical errors. They do not judge whether the story is good; they ensure the manuscript meets a consistent standard so that the senior editor can focus entirely on the narrative quality. A linter is the copy editor for code. It runs before the human reviewer sees the PR, removes the low-level noise, and leaves the reviewer free to think about the things only a human can judge.
Static Analysis Beyond Linting — Bugs and Security Patterns
Linting handles style and basic correctness. Deeper static analysis tools go further — examining data flow, control flow, and inter-function relationships to find bugs and security vulnerabilities that no linter would catch. These tools reason about how values move through the code: does user-supplied input ever reach a SQL query without sanitisation? Can a null reference propagate to a function that does not handle it? Does this code path ever execute with an uninitialised variable?
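The SQL injection case can be made concrete. The snippet below is an illustration of the data-flow pattern a taint-tracking tool looks for, not output from any particular tool; the function names are hypothetical:

```javascript
// Taint source: userInput flows directly into the query string (the sink).
// A data-flow analyser flags this path; a style linter would not.
function unsafeQuery(userInput) {
  return "SELECT * FROM users WHERE name = '" + userInput + "'";
}

// Parameterised form: the value never becomes part of the SQL text,
// so the tainted input never reaches the sink.
function safeQuery(userInput) {
  return { text: "SELECT * FROM users WHERE name = $1", values: [userInput] };
}

const payload = "x' OR '1'='1";
console.log(unsafeQuery(payload)); // the injected condition is now part of the SQL
console.log(safeQuery(payload));   // the payload stays an inert parameter value
```

Both functions are syntactically clean and would pass a linter; only a tool that follows the value of `userInput` through the program can tell them apart.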
Static Analysis Tools by Language and Purpose
Code Complexity — A Structural Quality Signal
Cyclomatic complexity is a metric that measures the number of independent paths through a function. In practical terms, it counts the function's decision points: if statements, loops, switch cases, catch clauses, and short-circuit boolean operators. A function with a cyclomatic complexity of 2 is straightforward. One with a complexity of 25 is almost impossible to test exhaustively, extremely difficult to reason about, and highly likely to contain hidden bugs.
Enforcing a complexity threshold in the pipeline — failing the build if any function exceeds a defined limit, typically 10 to 15 — creates pressure to decompose complex logic into smaller, testable units. This is not a stylistic preference; high complexity is one of the strongest predictors of defect density in research on software quality. Most linters and quality platforms compute cyclomatic complexity automatically and can be configured to fail on breach.
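To make the metric concrete, here is a rough sketch of how a tool might estimate cyclomatic complexity: count the branch points in the source and add 1 for the entry path. Real analysers walk the parsed syntax tree; this regex version is only an approximation for illustration:

```javascript
// Approximate cyclomatic complexity: branch points + 1 for the entry path.
function estimateComplexity(source) {
  const branchPattern = /\b(if|for|while|case|catch)\b|&&|\|\|/g;
  const branches = source.match(branchPattern) || [];
  return branches.length + 1;
}

const simple = "function add(a, b) { return a + b; }";
const branchy = `
function grade(score) {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  return "F";
}`;

console.log(estimateComplexity(simple));  // 1 — a single straight-line path
console.log(estimateComplexity(branchy)); // 4 — three ifs plus the entry path
```

The numbers map directly onto testing effort: a complexity of 4 means at least four test cases are needed to cover every independent path.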
Formatting Enforcement — Prettier and the End of Style Debates
Code formatting is a category of static analysis where the tool does not just flag problems — it fixes them. Tools like Prettier (JavaScript/TypeScript), Black (Python), and gofmt (Go) apply a single, opinionated formatting style to the entire codebase. They are not configurable beyond a small number of high-level options — by design. The point is not to produce the prettiest code; it is to produce consistent code that every developer on the team reads and writes in exactly the same way.
In the pipeline, formatting is enforced by running the formatter in check mode — prettier --check, black --check — and failing the build if any file would be changed. Developers are expected to run the formatter locally before pushing. Most teams enforce this with a pre-commit hook so that unformatted code never even reaches the pipeline. The result is a codebase where formatting is never a point of discussion in code review.
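One common way to wire up that pre-commit hook is a short shell script. The sketch below assumes Husky manages the project's Git hooks; the globs are illustrative and would match the project's actual layout:

```shell
#!/bin/sh
# .husky/pre-commit: runs before every commit (assumes Husky is installed).
# Block the commit if formatting or linting would fail in the pipeline.
npx prettier --check "src/**/*.{js,ts}" || exit 1
npx eslint "src/**/*.{js,ts}" || exit 1
```

Because the hook runs the same commands as the pipeline, a commit that passes locally almost never fails the quality stage remotely.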
Quality Gates in the Pipeline
Static analysis and linting belong early in the pipeline — before tests, before builds in some configurations. They are fast, require no running application, and catch a wide range of issues cheaply. A lint failure on a PR costs seconds to report and minutes to fix. The same issue caught during a QA review costs hours.
Static Analysis Stage — GitHub Actions
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - name: Check formatting
        run: npx prettier --check "src/**/*.{js,ts}"  # Fail if any file is not formatted
      - name: Lint
        run: npx eslint "src/**/*.{js,ts}"  # Fail on any lint error
      - name: Type check
        run: npx tsc --noEmit  # Fail on TypeScript type errors
      - name: Complexity check
        run: npx eslint --rule '{"complexity": ["error", 15]}' "src/**/*.{js,ts}"
        # Fail if any function exceeds cyclomatic complexity of 15
What just happened?
Four quality checks ran in sequence on the PR branch — formatting, linting, type checking, and complexity — all before a single test was executed. Each check is fast and fails independently with a specific, actionable message. A developer sees exactly what needs to be fixed and can address it locally before the next push.
Warning: Static Analysis Introduced on a Legacy Codebase All at Once Will Be Ignored
Enabling a linter or quality tool on a codebase that has never had one typically produces hundreds or thousands of violations immediately. If the pipeline fails on all of them at once, the team will either disable the tool or add a blanket suppression rule to silence everything — and the quality signal is lost. The correct approach is to enable the tool in warning mode first, fix the most critical violations, then progressively tighten thresholds over weeks. Alternatively, use a "ratchet" pattern: only fail the pipeline on violations introduced by the current PR, not on pre-existing ones. Most quality platforms support this natively.
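The ratchet logic itself is simple enough to sketch. The function below is illustrative, not any platform's actual implementation: the pipeline records a baseline violation count, fails only when the count rises, and tightens the baseline whenever it falls:

```javascript
// Sketch of a "ratchet" check: tolerate pre-existing debt, reject new debt.
function ratchetCheck(baselineCount, currentCount) {
  if (currentCount > baselineCount) {
    // The PR introduced new violations on top of the recorded baseline.
    return { pass: false, message: `+${currentCount - baselineCount} new violations` };
  }
  // Lock in any improvement so the count can only move downward over time.
  return { pass: true, newBaseline: Math.min(baselineCount, currentCount) };
}

console.log(ratchetCheck(847, 850)); // fails: three new violations introduced
console.log(ratchetCheck(847, 820)); // passes and ratchets the baseline down to 820
```

Storing the baseline in the repository (rather than on a server) keeps the check reproducible: every branch ratchets against the count it branched from.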
Key Takeaways from This Lesson
Teacher's Note
Add a pre-commit hook that runs the formatter and linter locally before every push — it costs nothing to set up and means the pipeline almost never fails on formatting, leaving its failures for things that actually matter.
Practice Questions
Answer in your own words — then check against the expected answer.
1. What is the metric that measures the number of independent paths through a function — counting branches like if, for, and switch — used to identify functions that are too complex to test or safely modify?
2. What is the pattern — supported by most quality platforms — that only fails the pipeline on violations introduced by the current pull request, rather than on all pre-existing violations in the codebase?
3. What is the name of the opinionated JavaScript and TypeScript formatting tool — run in check mode in the pipeline — that enforces a single consistent style across the entire codebase and removes formatting as a topic of code review?
Lesson Quiz
1. A team already has 85% code coverage. A colleague argues that static analysis is redundant because the tests cover most of the code. What is the flaw in this reasoning?
2. A team enables ESLint on a codebase that has never had a linter. The first pipeline run reports 847 violations and the build fails. What is the correct approach?
3. A pipeline is configured to fail if any function has a cyclomatic complexity above 15. A developer asks what this number actually measures and why it matters. What is the accurate answer?
Up Next · Lesson 18
CI/CD Pipeline Stages
Source, build, test, deploy — but in what order, with what dependencies, and with what gates between them? Lesson 18 maps the full pipeline stage architecture from commit to production.