Security Basics
Security Best Practices
This lesson covers:
- Patch management done right
- Credential hygiene that actually holds
- Network hardening basics
- Secure configuration from first boot
- Backup strategy that survives ransomware
- Audit and monitoring as a daily habit
Every breach post-mortem reads like a checklist of skipped basics. Unpatched systems. Reused passwords. Open ports that served no purpose. Default credentials nobody changed. The uncomfortable truth in security is that most successful attacks don't require sophistication — they require finding an organisation that didn't do the fundamentals. This lesson is the fundamentals. Not glamorous. Completely essential.
Patch management — the control that keeps getting skipped
When a software vendor releases a patch, they're simultaneously publishing a vulnerability map to anyone who reads the release notes. The patch notes describe exactly what was broken and where. Attackers read those notes. They build exploits targeting the fixed flaw and start scanning for unpatched systems within hours of the release. The organisations that get hit are the ones who treat patching as a monthly maintenance window rather than a time-sensitive security control.
Effective patch management has three components. First, visibility — you can't patch what you don't know you're running. An asset inventory that lists every system, its OS version, and its installed software is the prerequisite. Second, prioritisation — not all patches are equal. A CVSS 9.8 remote code execution vulnerability in an internet-facing service patches tonight. A CVSS 3.1 local privilege escalation in an offline system can wait for the next cycle. Third, verification — patching and confirming the patch applied are two different things. Automated patch deployment that doesn't verify success creates a false sense of security.
# Check for available security updates on Debian/Ubuntu
apt list --upgradable 2>/dev/null | grep -i security
# Show packages with known CVEs (requires debian-goodies)
debsecan --suite bookworm --only-fixed
# On RHEL/CentOS — list pending security patches with severity
yum updateinfo list security
# or on newer systems:
dnf updateinfo list --security
What just happened
The first command lists packages that have security updates available. The second cross-references installed packages against a CVE database and filters to those with fixes already released — exactly what an attacker would check. The third and fourth commands do the same on Red Hat-based systems, adding severity ratings so you can triage by risk. Running these against your systems tells you exactly what an attacker scanning your environment would already know about you.
Credential hygiene — beyond the password policy
A password policy tells people what to create. Credential hygiene is the broader practice of managing those credentials across their entire lifecycle — creation, storage, rotation, and revocation. Most organisations handle creation reasonably well and the rest badly.
Unique passwords per account. Password reuse is the gift that keeps on giving — to attackers. When credentials from one breached site get tried against every other major platform, a single leaked password becomes access to everything the user reused it on. A password manager eliminates this entirely. Generating and storing a unique 20-character random password per site costs the user almost nothing in effort and removes credential stuffing as an attack vector.
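For illustration, a unique random password can be generated straight from the shell. This is a minimal sketch using the kernel's CSPRNG; in practice a password manager's own generator does this for you:

```shell
# Draw random bytes from /dev/urandom, keep only password-safe
# characters, and cut the stream at 20 characters
LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c 20; echo
```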
Service account hygiene. Service accounts — credentials used by applications and automated processes rather than humans — are consistently the most over-privileged and least monitored accounts in any environment. They often have passwords set once at deployment and never rotated. They accumulate permissions over time as applications grow. An attacker who compromises a service account with domain admin rights has effectively compromised everything. Service accounts need the same least-privilege treatment as user accounts, plus rotation schedules and monitoring for unusual activity.
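A quick first audit along these lines can come straight from /etc/passwd. This sketch assumes the common convention that UIDs below 1000 are system accounts; any of them still carrying an interactive shell deserves scrutiny:

```shell
# List system accounts (UID < 1000) whose shell ends in "sh" or
# "bash" rather than nologin/false
awk -F: '$3 < 1000 && $7 ~ /(ba)?sh$/ {print $1, $7}' /etc/passwd
```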
Secrets out of code. Hardcoded credentials in source code are one of the most common and easily preventable vulnerabilities in modern applications. A developer commits an API key to a Git repository, the repository goes public, automated scanners find the key within minutes, and the attacker has persistent access to whatever that key controls. Secrets belong in a secrets manager — Vault, AWS Secrets Manager, Azure Key Vault — not in environment variables, config files, or source code.
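To make the alternative concrete, this is roughly what runtime retrieval looks like with two common secrets managers. The secret names and paths here are placeholders, and the commands assume the respective CLIs are installed and authenticated:

```shell
# AWS Secrets Manager: fetch a secret value at runtime
aws secretsmanager get-secret-value \
    --secret-id myapp/api-key \
    --query SecretString --output text

# HashiCorp Vault: fetch one field of a KV secret
vault kv get -field=api_key secret/myapp
```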
Check your own code right now
Tools like trufflehog and gitleaks scan Git history for committed secrets — not just the current state of the repo, but every commit ever made. A key committed and then deleted is still in the history. If your codebase has ever had credentials in it, assume they need rotating now regardless of whether they were "cleaned up" later.
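Both tools are a single command against a local clone. These invocations are a sketch; flags differ between versions, so check the help output of the version you install:

```shell
# gitleaks: scan the full history of the repo in the current directory
gitleaks detect --source . -v

# trufflehog: scan a local repo, reporting only credentials it could
# verify are still live
trufflehog git file://. --only-verified
```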
Network hardening — close what you don't use
Every open port on a publicly reachable system is potential attack surface. Services running on open ports may have vulnerabilities; even those that don't still widen the attack surface. The network hardening principle is simple: if you can't name a business reason for a port being open, it should be closed.
# See every listening TCP port and the process behind it
ss -tlnp
# Scan your own server the way an attacker would
# (run from outside, replace with your actual IP)
nmap -sV -sC --open your.server.ip
# Disable a service you don't need (unit names vary by distro;
# telnet typically runs as a socket unit)
systemctl stop telnet.socket
systemctl disable telnet.socket
# UFW — simple firewall: deny all incoming, allow only what you need
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp # SSH
ufw allow 443/tcp # HTTPS
ufw enable
What just happened
ss -tlnp shows every TCP port your system is listening on and exactly which process owns it — the starting point for any hardening exercise. The nmap scan shows what an external attacker sees. The UFW block sets a default-deny posture: nothing gets in unless explicitly allowed. Two rules — SSH and HTTPS — and everything else is dropped. That's the entire philosophy of network hardening in five lines.
Secure configuration from first boot
Default configurations are built for compatibility and ease of setup, not security. A freshly installed OS or application ships with settings that make it easy to get running — which also makes it easy to exploit. Secure configuration is the process of changing those defaults before the system ever touches a network.
The CIS Benchmarks — published by the Center for Internet Security — provide detailed hardening guides for virtually every major OS, application, and cloud platform. They are free, maintained by industry consensus, and map directly to most compliance frameworks. A CIS Level 1 benchmark covers the settings that should be applied to every system with no exceptions. Level 2 covers settings for higher-security environments where some usability trade-offs are acceptable.
Minimum hardening steps for any new Linux server: disable root SSH login, change the SSH port or restrict access by IP, remove unused packages and services, set up automatic security updates, configure a firewall with default-deny, and enable audit logging. None of these take more than ten minutes. Together they eliminate the majority of automated attack paths against a new system.
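The SSH portion of that list can be sketched as follows on a Debian-style system. The config path, service name, and package names vary by distro, so treat this as a template rather than a recipe:

```shell
# Disable root login and password authentication (keeps a .bak copy)
sed -i.bak \
    -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' \
    -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
    /etc/ssh/sshd_config

# Validate the config before reloading; a typo here locks you out
sshd -t && systemctl reload ssh

# Automatic security updates (Debian/Ubuntu)
apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```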
Default credentials are still a crisis
Shodan — a search engine that indexes internet-connected devices — regularly surfaces routers, cameras, industrial controllers, and database servers accessible with admin/admin or no password at all. The Mirai botnet that took down half the internet in 2016 ran almost entirely on IoT devices with unchanged default credentials. Changing the default password on every device, every application, and every service is not optional. It is the absolute baseline.
Backup strategy — the last line of defence
Backups are where security and operations overlap completely. A backup strategy that works for hardware failure may not survive ransomware. Ransomware operators know this — modern ransomware actively hunts for backup systems and encrypts or deletes them before triggering the main payload. The backup that saves you is the one the attacker couldn't reach.
The 3-2-1 rule is the baseline: three copies of data, on two different storage types, with one copy offsite. For ransomware resilience, add a fourth requirement: at least one copy must be air-gapped or immutable — physically disconnected from the network or stored in a write-once system that an attacker with network access cannot modify or delete.
Testing restores is the part most organisations skip. A backup that hasn't been tested is a backup you haven't confirmed works. The worst time to discover that your backup process has been silently failing for three months is the morning after a ransomware attack. Restore tests should be scheduled, documented, and treated as mandatory — not something done only when something breaks.
- 3 copies: primary data plus two independent backups. One failure can't be the end of the data.
- 2 media types: local disk plus cloud, or disk plus tape. A failure mode that kills one type doesn't kill both.
- 1 offsite + immutable: one copy an attacker with network access cannot reach or modify. This is the copy that survives ransomware.
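A scheduled restore test does not need heavy tooling. This sketch backs up a directory, restores it into a scratch location, and diffs the trees; /srv/appdata is a hypothetical path:

```shell
# Back up, restore into a scratch directory, verify the trees match
tar -czf /tmp/appdata-backup.tar.gz -C /srv appdata
mkdir -p /tmp/restore-test
tar -xzf /tmp/appdata-backup.tar.gz -C /tmp/restore-test
diff -r /srv/appdata /tmp/restore-test/appdata && echo "restore verified"
```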
Logging and monitoring as a daily habit
Logging is only valuable if someone is reading the logs. An organisation that generates gigabytes of log data daily and never reviews it has less real detection capability than an organisation that logs far less but acts on what it captures. The goal isn't maximum log volume — it's meaningful signal with a defined response process for when that signal fires.
At a minimum, every environment should be logging authentication events, privilege escalations, firewall drops, and changes to critical files. Those four categories cover the fingerprints of the majority of real attacks. A SIEM that ingests these logs and applies correlation rules — five failed logins followed by a success, a new process spawning from a web server, an admin account active at 3am — can detect attacks that no single log entry would reveal alone.
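The first of those correlation rules can be sketched in a few lines of awk against a standard OpenSSH auth log. The log path and message format are assumptions, and a real SIEM does this with time windows and at scale:

```shell
# Flag source IPs with five or more failed SSH logins that later
# produce a successful login (classic brute-force signature)
awk '
  /Failed password/ {
      for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i + 1)
      fails[ip]++
  }
  /Accepted (password|publickey)/ {
      for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i + 1)
      if (fails[ip] >= 5) print "ALERT: brute-force success from " ip
  }
' /var/log/auth.log
```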
# Enable auditd — Linux kernel audit framework
systemctl enable auditd
systemctl start auditd
# Watch for changes to the passwd and shadow files (integrity)
auditctl -w /etc/passwd -p wa -k passwd_changes
auditctl -w /etc/shadow -p wa -k shadow_changes
# Watch for privilege escalation attempts
auditctl -w /usr/bin/sudo -p x -k sudo_usage
auditctl -w /bin/su -p x -k su_usage
# Search audit log for triggered rules
ausearch -k passwd_changes
ausearch -k sudo_usage --start today
What just happened
auditctl -w sets a watch on a file or binary. The -p wa flag triggers on writes and attribute changes — so any modification to /etc/passwd gets logged with the user, timestamp, and process. The -p x on sudo and su triggers every time those binaries execute. The ausearch commands pull only the events matching your rule key — instant triage without scrolling through millions of lines.
Instructor's Note
The best practices in this lesson aren't exciting. Nobody gets promoted for consistently applying patches on time or maintaining clean backup records. But when something goes wrong — and at some point something always goes wrong — these are the controls that determine whether an incident is a recoverable inconvenience or an existential event. The organisations that handle breaches well aren't the ones with the most advanced tooling. They're the ones that did the boring things consistently.
Practice Questions
Modern ransomware actively deletes or encrypts backup systems before triggering its main payload. The 3-2-1 backup rule requires at least one copy to be offsite and ________ — stored in a way that cannot be modified or deleted by an attacker with network access.
A developer needs to store an API key used by an application at runtime. Hardcoding it in source code is a known vulnerability. Where should credentials like this be stored instead?
Effective patch management requires three components: prioritisation, verification, and ________ — knowing every system and software version in the environment before you can patch it.
Quiz
A vendor releases a critical patch for a remote code execution vulnerability. A security engineer argues the organisation should apply it immediately rather than waiting for the next maintenance window. What is the strongest argument for urgency?
A sysadmin is hardening a new server and wants to identify every service currently listening for network connections before applying firewall rules. Which command from this lesson is the correct starting point, and why?
During a security audit, a consultant flags service accounts as a higher priority risk than standard user accounts despite service accounts not belonging to any individual person. What justifies this prioritisation?
Up Next · Lesson 10
Introduction to Security Careers
The roles, the paths, the certifications that matter — and an honest map of how people actually break into the field.