Security Basics Lesson 19 – Security Misconfigurations | Dataplexa
Section II · Lesson 19

Security Misconfigurations

Security misconfiguration is the most preventable vulnerability category on the OWASP Top 10 — and consistently one of the most commonly found. It doesn't require a sophisticated exploit or a zero-day. It requires finding something that was left open, left default, or left behind. Default credentials, open cloud storage buckets, verbose error pages, and forgotten admin panels have caused some of the largest breaches on record.

This lesson covers

What misconfiguration looks like in the real world → Default credentials → Open cloud storage → Verbose error messages → Directory listing → Exposed admin interfaces → Scanning for misconfigurations with nikto and nmap → How to build a misconfiguration checklist

What misconfiguration actually means

Misconfiguration isn't a single vulnerability — it's a category of failures that share one trait: something was set up incorrectly, incompletely, or not at all. The system works. It just works in a way that exposes more than it should.

In 2019, Capital One suffered a breach exposing over 100 million customer records. The attacker didn't crack encryption or exploit application code. They found a misconfigured Web Application Firewall running on an EC2 instance with an overly permissive IAM role — requests relayed through it could query the AWS metadata endpoint, which returned temporary credentials for that role. Those credentials gave access to S3 buckets containing the data. One misconfigured permission, one breach, $80 million in fines.

The categories of misconfiguration appear consistently across every environment:

Default credentials

admin/admin, root/root, admin/password left unchanged on routers, databases, CMS installs, and admin panels.

Open cloud storage

S3 buckets, Azure Blob containers, and GCS buckets with public read access containing sensitive data.

Verbose error messages

Stack traces, database errors, and framework internals exposed in production responses.

Directory listing enabled

Web server returns a file browser when no index file exists — exposing backup files, config files, and source code.

Unnecessary services

FTP, Telnet, unused database ports, and test endpoints left running in production.

Exposed dev artifacts

.env files, backup.zip, .git directories, phpinfo.php, and debug endpoints accessible on production servers.

Open cloud storage — the S3 bucket problem

Between 2017 and 2020, hundreds of organisations exposed sensitive data through publicly readable AWS S3 buckets. The breaches followed an identical pattern: a developer created a bucket, set it to public for testing or convenience, and nobody changed it back. Verizon, Accenture, the US Army, the NSA's contractor — all had data exposed this way.

The fix is simple. The habit of checking is what's missing.

# Check if a specific S3 bucket is publicly accessible
aws s3api get-bucket-acl --bucket your-bucket-name

# Check the bucket policy for public access statements
aws s3api get-bucket-policy --bucket your-bucket-name

# Check the public access block settings (the master switch)
aws s3api get-public-access-block --bucket your-bucket-name

# Enable public access block on a bucket — closes all public access
aws s3api put-public-access-block \
  --bucket your-bucket-name \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Audit ALL buckets in your account for public access issues
aws s3api list-buckets --query "Buckets[].Name" --output text | \
  tr '\t' '\n' | \
  while read bucket; do
    echo "--- $bucket ---"
    aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null || echo "No block config set — check manually"
  done
# get-public-access-block on a misconfigured bucket
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": false,
        "IgnorePublicAcls": false,
        "BlockPublicPolicy": false,
        "RestrictPublicBuckets": false
    }
}
# All four are false — this bucket has NO public access protection

# After applying put-public-access-block
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
# All four true — public access is now fully blocked

# Audit output across all buckets
--- production-assets ---
BlockPublicAcls: true, RestrictPublicBuckets: true
--- dev-uploads-temp ---
No block config set — check manually
--- backup-archive-2023 ---
No block config set — check manually

What just happened

The first output shows a bucket with all four public access block settings set to false — it's potentially publicly readable, depending on its ACL and policy. The audit sweep then found two more buckets with no block configuration set at all. Those "check manually" results are the dangerous ones — they inherit whatever account-level defaults were set, which might be permissive. Every bucket in a production account should have all four block settings explicitly set to true unless there's a documented reason for public access.
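The "all four true" check can be folded into a small guard for audit scripts. A minimal sketch, assuming the default pretty-printed AWS CLI output (a jq-based check would be more robust); all_blocked is a hypothetical helper name:

```shell
# all_blocked -- succeed only when the get-public-access-block JSON on stdin
# has all four settings set to true. Grep-based sketch: it simply counts the
# four "true" values in the pretty-printed output.
all_blocked() {
  [ "$(grep -o 'true' | wc -l)" -eq 4 ]
}

# Hypothetical usage:
#   aws s3api get-public-access-block --bucket your-bucket-name | \
#     all_blocked || echo "your-bucket-name is NOT fully blocked"
```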

Scanning for misconfigurations with nikto and nmap

Before an attacker maps your misconfigurations, you should. Nikto is a web server scanner that checks for dangerous files, outdated software, missing headers, and common misconfiguration patterns. Nmap's scripting engine goes further — it can test for default credentials, enumerate services, and identify known-vulnerable configurations across any open port.

# Basic nikto scan against a web server
nikto -h https://example.com

# Scan with specific checks — misconfigurations and outdated software
nikto -h https://example.com -Tuning 2b -Format txt -output nikto-report.txt

# Nmap — service version detection and default script scan
sudo nmap -sV -sC -p 80,443,8080,8443,22,21,3306,5432 example.com

# Nmap — check for anonymous or passwordless access on common services
sudo nmap -p 21,22,23,3306,5900 --script=ftp-anon,ssh-auth-methods,telnet-encryption,mysql-empty-password,vnc-info example.com

# Check if .git directory is exposed on a web server
curl -s -o /dev/null -w "%{http_code}" https://example.com/.git/HEAD

# Check for common exposed files
for path in /.env /backup.zip /phpinfo.php /admin /wp-admin /.git/HEAD /config.php /database.sql; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://example.com$path")
  echo "$code — https://example.com$path"
done
# nikto -h https://example.com (abbreviated)
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP:          93.184.216.34
+ Target Hostname:    example.com
+ Target Port:        443
---------------------------------------------------------------------------
+ Server: nginx/1.18.0
+ The anti-clickjacking X-Frame-Options header is not present.
+ The X-Content-Type-Options header is not set.
+ No CGI Directories found
+ Allowed HTTP Methods: GET, HEAD, POST, OPTIONS
+ OSVDB-3092: /admin/: This might be interesting...
+ OSVDB-3268: /backup/: Directory indexing found.
+ /backup/db_dump_2024.sql: Database backup file found.
+ OSVDB-3233: /phpinfo.php: PHP Info file found. May expose version info.
+ 8135 requests: 0 error(s) and 7 item(s) reported

# Exposed files check output
200 — https://example.com/.env              ← CRITICAL
200 — https://example.com/phpinfo.php       ← HIGH
200 — https://example.com/.git/HEAD         ← HIGH
200 — https://example.com/admin             ← REVIEW
403 — https://example.com/wp-admin
404 — https://example.com/backup.zip
404 — https://example.com/config.php
200 — https://example.com/database.sql      ← CRITICAL

What just happened

Nikto found a database backup file sitting in a publicly accessible /backup/ directory with directory listing enabled — anyone can download a full database dump. The file check found a live .env file (containing database passwords and API keys), an exposed .git directory (the entire source code history), and a raw database.sql file. Any one of these is a critical incident. All of them together on the same server is a catastrophic misconfiguration.

Default credentials — the easiest win for attackers

Default credentials are usernames and passwords that ship with software out of the box — admin/admin, root/toor, admin/password. They exist for initial setup convenience. They're documented publicly in every product's manual. Attackers have comprehensive lists of them and try them automatically against every service they find.

In 2016, the Mirai botnet — which took down Dyn's DNS infrastructure and disrupted half the internet — was built almost entirely by scanning for IoT devices with default credentials. It used a list of 61 username/password combinations. Sixty-one. That list compromised hundreds of thousands of devices because their owners never changed the defaults.

The attacker's perspective

A Shodan search for port:9200 product:elasticsearch returns thousands of publicly accessible Elasticsearch instances — many with no authentication at all. Elasticsearch ships with no authentication enabled by default in older versions. An attacker doesn't need to exploit anything. They connect, run a query, and download whatever data is in the index. Between 2017 and 2019, over 4,000 MongoDB and Elasticsearch instances were ransomed this way — data wiped, ransom note left in its place.
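The defence here is configuration, not code. A minimal elasticsearch.yml sketch; exact settings vary by version (8.x enables security by default, older 7.x required turning it on), so verify against your version's docs:

```yaml
# elasticsearch.yml -- hardening sketch (check your version's documentation)
xpack.security.enabled: true    # require authentication on every request
network.host: 127.0.0.1         # bind locally; expose only through an authenticated proxy
```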

Building a misconfiguration checklist

The most effective defence against misconfiguration is a repeatable checklist that gets run against every new deployment. Not once at launch — on every change, every new environment, every new service added to the stack.

Change every default credential before deployment

Databases, admin panels, routers, monitoring tools, CI/CD systems — every service that ships with a default password needs a unique, strong credential set before it touches a network. No exceptions, no "we'll do it later."
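Generating the replacement credential can live in the same deployment script, so "we'll do it later" never happens. A sketch assuming openssl is installed; the vault path is hypothetical:

```shell
# Generate a unique, strong credential at deploy time instead of keeping a
# vendor default. 24 random bytes, base64-encoded, gives a 32-character secret.
NEW_PASS=$(openssl rand -base64 24)
echo "Generated a ${#NEW_PASS}-character credential"

# Store it in your secrets manager, then set it on the service, e.g. (hypothetical path):
#   vault kv put secret/db-admin password="$NEW_PASS"
```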

Disable directory listing on all web servers

In nginx, autoindex off; is the default, but verify no server or location block turns it on. In Apache, ensure Options -Indexes is set in your config. A directory without an index file should return a 403, not a file browser.
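For nginx, the setting can be stated explicitly so an accidental "autoindex on" elsewhere stands out in code review. A minimal sketch; the paths and port are placeholders:

```nginx
# autoindex is off by default -- declaring it makes the intent auditable
server {
    listen 80;
    root /var/www/html;
    index index.html;
    autoindex off;   # no directory listings, ever
}
```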

Set DEBUG=False in all production applications

Django, Flask, Laravel, Rails — every framework has a debug mode that exposes internal details on errors. It should never be enabled in production. Implement custom error pages that return a generic message without leaking stack traces, file paths, or variable values.
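One way to enforce this is a pre-deploy grep gate. A sketch: the app/ path is a placeholder for your project root, and the pattern targets Django-style settings, so adapt it per framework:

```shell
# Refuse to deploy if any settings file still enables debug mode.
# 'app/' is a placeholder -- point it at your project root.
if grep -rEn 'DEBUG[[:space:]]*=[[:space:]]*True' app/ 2>/dev/null; then
  echo "Refusing to deploy: debug mode is enabled (see matches above)" >&2
  exit 1
fi
echo "Debug check passed"
```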

Remove development files before deploying

Add .env, *.sql, *.bak, *.zip, and .git/ to your deployment exclusion list. Use a .gitignore and a deployment script that never copies development artifacts to the webroot. Run the exposed-file curl check above after every deployment.
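A post-deploy sweep catches anything the exclusion list missed. A sketch: WEBROOT defaults to a common path and is an assumption, so point it at your real deploy target:

```shell
# List any development artifacts that made it into the webroot.
WEBROOT="${WEBROOT:-/var/www/html}"
leftovers=$(find "$WEBROOT" \( -name '.env' -o -name '*.sql' -o -name '*.bak' \
  -o -name '*.zip' -o -name '.git' \) -print 2>/dev/null)
if [ -n "$leftovers" ]; then
  echo "Development artifacts found in webroot:" >&2
  echo "$leftovers" >&2
fi
```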

Apply least-privilege to every cloud resource

IAM roles, S3 bucket policies, security groups — every permission that isn't explicitly required is an attack surface. Review IAM policies with AWS Access Analyzer. Enable S3 public access block at the account level as a safety net. Tag every resource with its intended access level.
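Between full Access Analyzer reviews, even a crude grep over a bucket policy catches the most dangerous pattern: a wildcard principal. A sketch; it only matches the two literal forms shown, so jq or Access Analyzer remains the real tool:

```shell
# public_principal -- succeed if the bucket-policy JSON on stdin grants access
# to everyone via "Principal": "*" or "Principal": {"AWS": "*"}.
public_principal() {
  grep -Eq '"Principal"[[:space:]]*:[[:space:]]*("\*"|\{[[:space:]]*"AWS"[[:space:]]*:[[:space:]]*"\*")'
}

# Hypothetical usage:
#   aws s3api get-bucket-policy --bucket your-bucket-name --query Policy \
#     --output text | public_principal && echo "PUBLIC policy on your-bucket-name"
```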

Run nikto and the exposed-file check before go-live

Automate this in your CI/CD pipeline. A nikto scan that finds a backup SQL file on a staging environment catches something that would have been catastrophic in production. Make misconfiguration scanning a gate, not an afterthought.
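The exposed-file check from earlier becomes a pipeline gate by failing the build on any critical hit. A sketch: check_results is a hypothetical helper that reads the "HTTP_CODE URL" lines the curl loop prints:

```shell
# check_results -- read "HTTP_CODE URL" lines on stdin; fail (return 1) if any
# sensitive path answered 200.
check_results() {
  fail=0
  while read -r code url; do
    case "$url" in
      */.env|*/.git/HEAD|*.sql|*.zip)
        if [ "$code" = "200" ]; then
          echo "CRITICAL: $code $url" >&2
          fail=1
        fi ;;
    esac
  done
  return $fail
}

# Hypothetical CI usage: pipe the earlier curl loop's output into the gate:
#   ./exposed-file-check.sh | check_results || exit 1
```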

Instructor's Note

Run the exposed-file curl loop from the code block above against any web server you're responsible for — right now, before you do anything else. I've run it against systems that teams believed were fully hardened and found live .env files, accessible admin panels, and database dumps more times than I can count. Five minutes of checking has prevented more incidents than any amount of theoretical security knowledge.


Practice Questions

What AWS S3 feature — with four boolean settings — acts as a master switch to prevent any public access to a bucket regardless of its ACL or policy? (three words)




How many default credential combinations did the Mirai botnet use to compromise hundreds of thousands of IoT devices?




What web server scanning tool checks for dangerous files, missing headers, directory listing, and common misconfiguration patterns in a single scan?



Quiz

How did the 2019 Capital One breach occur?



Why is directory listing enabled on a web server a security risk?



Why were thousands of Elasticsearch and MongoDB instances ransomed between 2017 and 2019?


Up Next · Lesson 20

Security Hardening Basics

Hardening goes beyond fixing misconfigurations — it's about reducing attack surface systematically. CIS benchmarks, sysctl hardening, service minimisation, and building a repeatable hardening baseline.