Security Basics
CIA Triad
This lesson covers
- The three pillars every security decision rests on
- Confidentiality and what a real breach looks like
- Integrity failures that go undetected for months
- Availability attacks that cost millions per hour
- Conflicts between the three pillars
- Applying the triad to real systems
One morning in February 2016, Bangladesh Bank staff discovered that $81 million had vanished overnight. The attackers didn't blow a safe. They didn't threaten anyone. Using stolen credentials, they sent 35 fraudulent payment instructions through SWIFT, the interbank messaging network; five were processed, and the money moved exactly as it was supposed to. Availability was fine. The system worked perfectly. But confidentiality had failed weeks earlier when the credentials were stolen, and integrity failed the moment fraudulent instructions were treated as legitimate ones. Three letters. Two failures. Eighty-one million dollars.
The foundation everything else is built on
The CIA triad isn't a product or a framework you buy. It's a mental model — a lens security professionals put over every decision they make. Before a single firewall rule gets written, before a password policy gets drafted, before a server gets configured, the CIA triad is already in the room asking three questions: are we protecting the data's secrecy, its accuracy, and its availability?
The model dates back to the 1970s and has outlasted every security trend, framework, and buzzword that came after it. The reason it survives is that it describes something real. Every attack in history has violated at least one of the three. Most serious ones violate two. Once you understand the triad, you start seeing it everywhere — in breach reports, in audit checklists, in incident response playbooks, in the questions an interviewer asks you on day one of a security job.
Confidentiality
Data is accessible only to those authorised to see it. Broken when an outsider — or the wrong insider — gets access they shouldn't have.
Integrity
Data is accurate and unaltered. Broken when someone modifies it without authorisation — and the system keeps running as if nothing changed.
Availability
Systems and data are accessible when people need them. Broken when downtime — planned or forced — puts them out of reach.
Confidentiality — the most visible failure
Confidentiality is the pillar most people picture when they think of a data breach — and for good reason. It's the one that ends up on front pages. When 533 million Facebook records appeared on a hacking forum in 2021, when Equifax lost the personal data of 147 million Americans in 2017, when Ashley Madison's user database was dumped publicly in 2015 — all of these were confidentiality failures. Someone had data they were never supposed to have.
Confidentiality breaks in more ways than most people expect. The obvious route is an external attacker — someone from outside stealing credentials, exploiting a vulnerability, or intercepting unencrypted traffic. But confidentiality also fails from the inside: an employee emailing a customer list to a personal account, a developer accidentally pushing an API key to a public GitHub repository, a misconfigured cloud bucket sitting open to anyone with the URL.
The technical controls that protect confidentiality are encryption, access controls, and network segmentation. But controls are only as good as the people configuring them. The majority of confidentiality breaches don't involve breaking encryption — they involve finding data that was never encrypted in the first place, or credentials that gave access without needing to break anything at all.
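Accidental exposures like a key pushed to a public repository can often be caught before they leave the building. Here is a minimal sketch of a pre-push secret scan; the function name and regex patterns are illustrative assumptions, and real teams use dedicated tools such as gitleaks or truffleHog for this:

```shell
#!/bin/sh
# Hypothetical pre-push check: flag files that look like they contain
# credentials before they reach a public repository. The patterns below
# (an AWS-style access key ID and generic key/secret assignments) are
# illustrative, not exhaustive.
scan_for_secrets() {
  dir="$1"
  grep -rEn \
    -e 'AKIA[0-9A-Z]{16}' \
    -e '(api[_-]?key|secret|password)[[:space:]]*[:=]' \
    "$dir" 2>/dev/null
}

# Usage: scan_for_secrets path/to/repo
# Any output means a human should review before pushing.
```

A check like this costs seconds per push; recovering from a leaked credential, as the incidents above show, can cost months.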
Equifax, 2017 — 147 million records
An unpatched vulnerability in the Apache Struts web framework gave attackers access to Equifax's internal network. For 78 days they moved through the environment undetected, pulling names, Social Security numbers, birth dates, and credit card details. The patch that would have closed the door had been available for months. Confidentiality failed not because the encryption was broken, but because an unapplied update left an open door. The breach cost Equifax over $700 million in settlements.
Integrity — the silent attacker
Integrity failures are the sneakiest category in security. There's no alarm. No error message. No obvious sign anything is wrong. The system keeps running, reports keep generating, people keep making decisions — and all of it is based on data someone else quietly changed. By the time anyone notices, the damage has been compounding for weeks.
In 2010, the Stuxnet worm targeted Iranian nuclear centrifuges running at the Natanz facility. It didn't destroy them outright. It subtly altered their operating speeds while simultaneously sending falsified sensor readings back to the control room, making everything appear normal. The engineers watching the screens had no idea the machines were tearing themselves apart. That's a textbook integrity attack — the data the operators were trusting had been silently corrupted, and the system's availability masked the failure completely.
Integrity matters just as much in ordinary environments. A tampered software update that installs a backdoor. A financial record where transaction amounts have been quietly altered. An audit log with entries deleted. A DNS record that's been changed to redirect users to a malicious site. None of these look broken from the outside. The system is functioning — it's just lying.
Detecting integrity failures
The standard controls for integrity are hashing, digital signatures, and audit trails. A cryptographic hash of a file is a fixed-length fingerprint — if a single byte changes, the hash changes completely. This is how software distributors let you verify a downloaded file hasn't been tampered with in transit. Audit logs serve a similar purpose for data: if every change is recorded with a timestamp and user identity, unexplained modifications leave a traceable trail — provided those logs themselves are protected from tampering.
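The download check described above can be sketched in a few lines of shell. The helper name is an assumption; the mechanism (recompute the SHA-256 fingerprint and compare it to the published value) is the standard one:

```shell
#!/bin/sh
# Sketch of an integrity check: recompute a file's SHA-256 hash and
# compare it against the hash the distributor published.
# Note: sha256sum is the GNU coreutils tool; on macOS use `shasum -a 256`.
verify_hash() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: hash matches"
  else
    echo "FAIL: expected $expected, got $actual"
    return 1
  fi
}

# Usage: verify_hash downloaded.iso <published-sha256>
```

Because any single-byte change produces a completely different hash, a match is strong evidence the file you received is the file that was published, provided the published hash itself came over a trusted channel.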
Availability — the attack with a price tag
Availability is the pillar that converts directly into money. Every minute a system is down, somebody is counting the cost: lost sales, idle staff, emergency response time, reputational damage, regulatory penalties. One widely cited estimate put the cost to Amazon of a single second of page load delay at $1.6 billion in annual sales. A hospital's electronic health record system going down doesn't just cost money; it forces staff onto paper backups, slows treatment decisions, and in extreme cases puts patients at direct risk.
The most well-known availability attack is a Distributed Denial of Service — DDoS — where an attacker floods a target with traffic until it can no longer respond to legitimate requests. In 2016, the Mirai botnet hijacked hundreds of thousands of unsecured IoT devices — cameras, routers, smart appliances — and used them to launch a DDoS attack against Dyn, a major DNS provider. The result: Twitter, Reddit, Netflix, Spotify, and dozens of other major platforms went offline simultaneously for millions of users. The attack didn't steal a single record. It just made everything unreachable.
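A first-pass triage for a suspected flood is counting requests per client IP in the web server's access log. This is a rough sketch, not a DDoS defence; the function name is an assumption, and it presumes a standard combined-format log with the client IP in the first field:

```shell
#!/bin/sh
# Quick availability triage: list the five IPs sending the most requests
# in an access log. Assumes combined log format (client IP is field 1).
top_talkers() {
  log="$1"
  awk '{print $1}' "$log" | sort | uniq -c | sort -rn | head -5
}

# Usage: top_talkers /var/log/nginx/access.log
```

A handful of IPs dominating the count suggests a crude flood you can rate-limit or block; a true distributed attack like Mirai's spreads the load across so many sources that no single IP stands out, which is exactly what makes it hard to filter.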
Ransomware is equally brutal on availability. WannaCry didn't steal NHS patient records — it encrypted them and made them inaccessible until a ransom was paid. Thousands of appointments were cancelled, surgeries were postponed, and staff reverted to pen and paper across hospitals that had been entirely dependent on digital systems. The attack cost the NHS an estimated £92 million.
The Attacker's Availability Calculation
Availability attacks are attractive to attackers precisely because they don't require breaking in and stealing anything. They just have to make the target stop working. For organisations that depend on uptime — e-commerce, banking, healthcare, logistics — that threat alone is enough for extortion. Several ransomware groups now use DDoS attacks as a secondary pressure tactic: if you don't pay the ransom, we'll also take your site down.
The triad in tension — when pillars conflict
Here's the part most introductory courses skip: the three pillars don't always pull in the same direction. Real security work involves managing trade-offs between them, and understanding those trade-offs is what separates a security professional from someone who just ticks compliance boxes.
Confidentiality vs. Availability: The strongest confidentiality control is locking everything down — minimum access, heavy encryption, strict authentication. But every additional barrier is also a potential availability problem. If the only person who knows the decryption key for a critical system gets hit by a bus, confidentiality just destroyed availability. Backup access processes, key escrow, and administrative access controls are all attempts to balance this tension.
Integrity vs. Availability: Strict integrity controls — requiring cryptographic verification before any file executes — add latency. In high-performance systems processing millions of transactions per second, that overhead matters. Some environments accept slightly looser integrity checks in exchange for speed. That's a deliberate risk decision, not an oversight.
Confidentiality vs. Integrity: End-to-end encryption protects confidentiality by ensuring only the sender and recipient can read a message. But it also means that middle systems — including security tools trying to scan for malware — can't inspect the content. Some malware specifically hides inside encrypted traffic for exactly this reason.
Instructor's Note
Every security decision is a trade-off. When someone asks you to "maximise security," the professional answer is: maximise which part? More confidentiality might reduce availability. Stronger integrity controls might introduce latency. The CIA triad doesn't tell you what to prioritise — that depends on context, threat model, and what the business actually does. It just gives you the language to have the conversation clearly.
The triad applied — reading logs through a CIA lens
Once the triad is in your head, you start automatically classifying events. Here's a script that pulls recent authentication failures, privilege escalations, and file modifications from a Linux system — three event types that map directly onto all three CIA pillars:
#!/bin/bash
# Pull events relevant to each CIA pillar from system logs.
# Log paths assume a Debian/Ubuntu layout; RHEL-family systems log to
# /var/log/secure and /var/log/messages instead. Run with sudo to read them.
echo "=== CONFIDENTIALITY: Failed login attempts (last 10) ==="
grep "Failed password" /var/log/auth.log | tail -10
echo ""
echo "=== INTEGRITY: Privilege escalation events (last 10) ==="
grep "sudo:" /var/log/auth.log | tail -10
echo ""
echo "=== AVAILABILITY: Service crash / OOM events (last 10) ==="
grep -E "Out of memory|Killed process|segfault" /var/log/syslog | tail -10
=== CONFIDENTIALITY: Failed login attempts (last 10) ===
Jun 02 14:21:03 webserver sshd[4821]: Failed password for root from 91.108.4.11 port 51204 ssh2
Jun 02 14:21:07 webserver sshd[4822]: Failed password for admin from 91.108.4.11 port 51208 ssh2
Jun 02 14:21:09 webserver sshd[4823]: Failed password for ubuntu from 91.108.4.11 port 51211 ssh2
Jun 02 14:21:11 webserver sshd[4824]: Failed password for root from 91.108.4.11 port 51214 ssh2
Jun 02 14:21:13 webserver sshd[4825]: Failed password for deploy from 91.108.4.11 port 51217 ssh2
=== INTEGRITY: Privilege escalation events (last 10) ===
Jun 02 09:14:52 webserver sudo: jsmith : TTY=pts/1 ; PWD=/home/jsmith ; USER=root ; COMMAND=/bin/bash
Jun 02 11:33:17 webserver sudo: deploy : TTY=pts/0 ; PWD=/var/www ; USER=root ; COMMAND=/usr/bin/vim /etc/passwd
Jun 02 13:45:01 webserver sudo: unknown : 3 incorrect password attempts ; TTY=pts/2 ; USER=root ; COMMAND=/bin/su
=== AVAILABILITY: Service crash / OOM events (last 10) ===
Jun 02 08:02:11 webserver kernel: Out of memory: Killed process 14832 (mysqld) score 921
Jun 02 10:17:44 webserver kernel: Out of memory: Killed process 15901 (apache2) score 874
Jun 02 12:55:32 webserver kernel: nginx[9021]: segfault at 0 ip 00007f4b2c3d1a20 sp 00007ffec3a0b9f0
What just happened
The first block shows a single IP cycling through usernames seconds apart, an automated brute-force attempt against SSH and a confidentiality threat. The second block is more alarming: a user running vim /etc/passwd as root is an integrity red flag, because the password file should never be opened in a text editor outside of controlled processes. The third block shows database and web server processes being killed by the kernel due to memory exhaustion, a direct availability problem. Each section is a different fire. A real analyst looks at all three simultaneously.
Practice Questions
The Stuxnet worm altered centrifuge speeds while feeding operators falsified normal readings. The centrifuges degraded over weeks with no one aware. Which CIA pillar was the primary target of this attack?
A developer accidentally pushes a database password to a public GitHub repository. The database itself is never accessed, but the credential is now visible to the entire internet. Which CIA pillar has been violated?
A DDoS attack floods a payment gateway with traffic, making it impossible for customers to complete purchases for four hours. No data was accessed or modified. Which CIA pillar was attacked?
Quiz
The 2016 Bangladesh Bank heist moved $81 million using stolen SWIFT credentials to send fraudulent payment instructions. Which CIA pillars were violated, and why?
A security team implements encryption and strict access controls on a critical server. During an incident, the only administrator with decryption access is unavailable and the system cannot be recovered in time. Which CIA tension does this best illustrate?
Software distributors publish SHA-256 hashes alongside their downloads. A user recalculates the hash after downloading and finds it doesn't match. Which statement best explains why this hash check protects integrity?
Up Next · Lesson 6
Authentication vs Authorization
Two words that get mixed up constantly — and two completely different things that can fail in completely different ways.