MongoDB Lesson 38 – Monitoring & Logging | Dataplexa

Monitoring & Logging

A database that is not monitored is a database that surprises you in production. Performance degrades gradually — a slow query that takes 50 ms today becomes 5 seconds in six months as data grows, and you only find out when users start complaining. MongoDB provides a layered monitoring toolkit: serverStatus for real-time server metrics, currentOp for live operation inspection, mongostat and mongotop for terminal dashboards, the database profiler for slow query capture, and a structured JSON log for historical analysis. This lesson works through serverStatus, currentOp, and the profiler, explains how to read the metrics that matter, builds a lightweight Python monitoring loop, and shows how to parse and alert on MongoDB's structured log output.

1. serverStatus — Real-Time Server Metrics

serverStatus is the primary diagnostic command in MongoDB. It returns a snapshot of the server's current state — connections, memory, storage engine cache, operation counters, network traffic, and replication status — in a single round trip. Running it periodically and comparing successive snapshots reveals trends before they become incidents.

# serverStatus — reading the metrics that matter most

from pymongo import MongoClient
from datetime import datetime, timezone

client = MongoClient(
    "mongodb://dataplexa_admin:StrongP@ssword123!@localhost:27017/",
    authSource="admin"
)

def get_key_metrics() -> dict:
    """Extract the most useful metrics from serverStatus."""
    s  = client.admin.command("serverStatus")
    wt = s.get("wiredTiger", {})

    return {
        # ── Connections ───────────────────────────────────────────────
        "conn_current":   s["connections"]["current"],
        "conn_available": s["connections"]["available"],
        "conn_total":     s["connections"]["totalCreated"],

        # ── Operation counters (since last restart) ───────────────────
        "ops_insert":     s["opcounters"]["insert"],
        "ops_query":      s["opcounters"]["query"],
        "ops_update":     s["opcounters"]["update"],
        "ops_delete":     s["opcounters"]["delete"],
        "ops_getmore":    s["opcounters"]["getmore"],
        "ops_command":    s["opcounters"]["command"],

        # ── Memory ────────────────────────────────────────────────────
        "mem_resident_mb":  s["mem"]["resident"],
        "mem_virtual_mb":   s["mem"]["virtual"],

        # ── WiredTiger cache ──────────────────────────────────────────
        "cache_used_mb":    wt.get("cache", {}).get(
                                "bytes currently in the cache", 0) / (1024*1024),
        "cache_max_mb":     wt.get("cache", {}).get(
                                "maximum bytes configured", 0) / (1024*1024),
        "cache_dirty_pct":  wt.get("cache", {}).get(
                                "tracked dirty bytes in the cache", 0) /
                            max(wt.get("cache", {}).get(
                                "maximum bytes configured", 1), 1) * 100,
        "pages_evicted":    wt.get("cache", {}).get(
                                "pages evicted because they exceeded the "
                                "in-memory maximum", 0),

        # ── Network ───────────────────────────────────────────────────
        "net_bytes_in_mb":  s["network"]["bytesIn"]  / (1024*1024),
        "net_bytes_out_mb": s["network"]["bytesOut"] / (1024*1024),

        # ── Global lock ───────────────────────────────────────────────
        "global_lock_queue_readers": s["globalLock"]["currentQueue"]["readers"],
        "global_lock_queue_writers": s["globalLock"]["currentQueue"]["writers"],
    }

metrics = get_key_metrics()
print(f"serverStatus snapshot — {datetime.now(timezone.utc).strftime('%H:%M:%S UTC')}\n")

print("Connections:")
print(f"  current:    {metrics['conn_current']}")
print(f"  available:  {metrics['conn_available']}")

print("\nOperation counters (lifetime):")
for op in ["insert", "query", "update", "delete", "getmore", "command"]:
    print(f"  {op:8}  {metrics[f'ops_{op}']:>10,}")

print("\nWiredTiger cache:")
print(f"  used:       {metrics['cache_used_mb']:.1f} MB  /  {metrics['cache_max_mb']:.0f} MB")
print(f"  dirty:      {metrics['cache_dirty_pct']:.1f}%")
print(f"  evictions:  {metrics['pages_evicted']}")

print("\nNetwork:")
print(f"  bytes in:   {metrics['net_bytes_in_mb']:.1f} MB")
print(f"  bytes out:  {metrics['net_bytes_out_mb']:.1f} MB")

print("\nGlobal lock queue:")
print(f"  readers:    {metrics['global_lock_queue_readers']}")
print(f"  writers:    {metrics['global_lock_queue_writers']}")
serverStatus snapshot — 14:05:22 UTC

Connections:
  current:    12
  available:  838588

Operation counters (lifetime):
  insert         1,847
  query          9,234
  update         3,102
  delete           412
  getmore            0
  command       22,841

WiredTiger cache:
  used:       18.4 MB  /  256 MB
  dirty:      0.3%
  evictions:  0

Network:
  bytes in:   14.2 MB
  bytes out:  87.6 MB

Global lock queue:
  readers:    0
  writers:    0
  • A non-zero global lock queue (readers or writers waiting) means operations are queuing behind a long-running operation — run currentOp immediately to find and kill the blocking operation
  • Cache dirty percentage above 20% sustained indicates writes are outpacing WiredTiger's ability to flush — this causes write stalls. Add more RAM or reduce the write rate
  • Operation counters are cumulative since last restart — to get the rate per second, take two samples a known time apart and divide the difference by the elapsed seconds
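The rate computation described in the last bullet is just a delta divided by elapsed time. A minimal sketch, using two synthetic opcounters snapshots (the numbers are invented for illustration):

```python
def ops_per_second(prev: dict, curr: dict, elapsed: float) -> dict:
    """Turn two cumulative opcounters snapshots into per-second rates."""
    return {op: (curr[op] - prev[op]) / elapsed
            for op in ("insert", "query", "update", "delete")}

# Two synthetic opcounters samples, taken 10 seconds apart
prev = {"insert": 1847, "query": 9234, "update": 3102, "delete": 412}
curr = {"insert": 2047, "query": 9834, "update": 3252, "delete": 432}

for op, rate in ops_per_second(prev, curr, elapsed=10.0).items():
    print(f"  {op:8}  {rate:6.1f}/s")   # e.g. insert → 20.0/s
```

In a real loop you would feed the `opcounters` sub-document of two successive serverStatus calls into the same function, with `elapsed` measured by a monotonic clock.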

2. currentOp — Live Operation Inspection

currentOp shows every operation currently executing on the server — queries, updates, index builds, replication operations — with how long each has been running. It is the first tool to reach for when the server feels slow, a client is timing out, or the global lock queue is non-zero.

# currentOp — finding and killing slow or blocking operations

from pymongo import MongoClient

client = MongoClient(
    "mongodb://dataplexa_admin:StrongP@ssword123!@localhost:27017/",
    authSource="admin"
)

def get_slow_ops(threshold_secs: int = 1) -> list:
    """Return all active operations running longer than threshold_secs."""
    # Filters are top-level fields of the currentOp command document,
    # so build the whole command as one dict with "currentOp" first —
    # passing the filters as the value of "currentOp" has no effect.
    result = client.admin.command({
        "currentOp":    True,
        "active":       True,                     # only active ops — not idle connections
        "secs_running": {"$gte": threshold_secs},
        "op":           {"$ne": "none"}           # exclude internal no-op entries
    })
    return result.get("inprog", [])

# Show all active operations
print("All active operations (currentOp):\n")
all_ops = client.admin.command({"currentOp": True, "active": True})
ops = all_ops.get("inprog", [])

print(f"  {'opid':8}  {'type':8}  {'secs':5}  {'client':22}  {'ns':30}  description")
print(f"  {'─'*8}  {'─'*8}  {'─'*5}  {'─'*22}  {'─'*30}  {'─'*20}")
for op in ops[:8]:
    opid  = str(op.get("opid",        ""))
    otype = op.get("type",            "op")
    secs  = op.get("secs_running",    0)
    client_ip = op.get("client",      "internal")
    ns    = op.get("ns",              "")
    desc  = op.get("desc",            "")
    print(f"  {opid:8}  {otype:8}  {secs:5}  {client_ip:22}  {ns:30}  {desc[:20]}")

# Identify slow queries specifically
print("\nSlow operations (> 1 second):")
slow = get_slow_ops(threshold_secs=1)
if not slow:
    print("  ✓ No operations running longer than 1 second")
else:
    for op in slow:
        print(f"  opid: {op['opid']}  "
              f"secs: {op['secs_running']}  "
              f"ns: {op.get('ns','')}  "
              f"op: {op.get('op','')}")
        # Show the query filter if it is a find or aggregate
        if "command" in op:
            cmd = op["command"]
            if "filter" in cmd:
                print(f"    filter: {cmd['filter']}")

# Kill a specific operation — use with caution
def kill_op(opid: int):
    """Kill a running operation by its opid."""
    result = client.admin.command("killOp", op=opid)
    return result

print("\nTo kill a slow operation:")
print("  client.admin.command('killOp', op=<opid>)")
print("  ⚠ Only kill user operations — never kill internal replication ops")
All active operations (currentOp):

  opid      type      secs   client                  ns                              description
  ────────  ────────  ─────  ──────────────────────  ──────────────────────────────  ────────────────────
  12841     op            0  10.0.1.50:54321         dataplexa.orders                conn12
  12842     op            0  10.0.1.50:54322         dataplexa.products              conn13
  12843     none          0  127.0.0.1:49200         admin.$cmd                      conn3

Slow operations (> 1 second):
  ✓ No operations running longer than 1 second

To kill a slow operation:
  client.admin.command('killOp', op=<opid>)
  ⚠ Only kill user operations — never kill internal replication ops
  • Never kill operations with "op": "none" or descriptions containing ReplBatchApplier, rsBackgroundSync, or OplogFetcher — these are internal replication operations and killing them disrupts data replication
  • A long-running createIndex operation is expected on large collections — do not kill it unless it is genuinely stuck. Since MongoDB 4.2, index builds no longer block reads and writes on the collection for their duration, and the old background: True option is deprecated and ignored
  • Filter currentOp by "ns" to find operations on a specific collection — useful when a particular collection is the source of slowness
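Filtering by "ns", as the last bullet suggests, is a plain list filter over the "inprog" array. A sketch over a synthetic inprog list — the documents below are invented, but shaped like real currentOp output:

```python
# Synthetic currentOp-style documents (opids and addresses are made up)
SAMPLE_INPROG = [
    {"opid": 12841, "op": "query",  "ns": "dataplexa.orders",   "secs_running": 4},
    {"opid": 12842, "op": "update", "ns": "dataplexa.orders",   "secs_running": 0},
    {"opid": 12843, "op": "query",  "ns": "dataplexa.products", "secs_running": 7},
    {"opid": 12844, "op": "none",   "ns": "",                   "secs_running": 0},
]

def ops_on_namespace(inprog: list, ns: str, min_secs: int = 0) -> list:
    """Return user operations on one namespace, skipping internal 'none' ops."""
    return [
        op for op in inprog
        if op.get("ns") == ns
        and op.get("op") != "none"
        and op.get("secs_running", 0) >= min_secs
    ]

for op in ops_on_namespace(SAMPLE_INPROG, "dataplexa.orders", min_secs=1):
    print(f"opid {op['opid']}: {op['op']} running {op['secs_running']}s")
```

Against a live server, feed `result["inprog"]` from the currentOp command into the same function instead of the sample list.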

3. Database Profiler and Slow Query Analysis

The database profiler records slow operations into system.profile automatically. Beyond just capturing operations, you can aggregate the profile collection to find the worst offenders — the queries that are slowest on average, the collections that are accessed most, and the operations that examine the most documents per result returned.

# Database profiler — slow query analysis and aggregation

from pymongo import MongoClient, DESCENDING

client = MongoClient(
    "mongodb://dataplexa_admin:StrongP@ssword123!@localhost:27017/",
    authSource="admin"
)
db = client["dataplexa"]

# Enable profiler — log operations slower than 50 ms
db.command("profile", 1, slowms=50)
print("Profiler enabled: level 1, threshold 50 ms\n")

# ── Slow query summary — top 5 slowest unique query shapes ────────────────
print("Top 5 slowest query shapes (last 1000 profiler entries):\n")
slowest = list(db.system.profile.aggregate([
    {"$match": {
        "op":     {"$in": ["query", "update", "remove", "command"]},
        "millis": {"$gt": 0}
    }},
    {"$group": {
        "_id": {
            "ns":  "$ns",
            "op":  "$op",
        },
        "count":           {"$sum": 1},
        "avg_ms":          {"$avg": "$millis"},
        "max_ms":          {"$max": "$millis"},
        "total_examined":  {"$sum": "$docsExamined"},
        "total_returned":  {"$sum": "$nreturned"},
    }},
    {"$addFields": {
        "examine_ratio": {
            "$cond": [
                {"$gt": ["$total_returned", 0]},
                {"$divide": ["$total_examined", "$total_returned"]},
                "$total_examined"
            ]
        }
    }},
    {"$sort": {"avg_ms": DESCENDING}},
    {"$limit": 5},
    {"$project": {
        "ns":           "$_id.ns",
        "op":           "$_id.op",
        "count":        1,
        "avg_ms":       {"$round": ["$avg_ms", 1]},
        "max_ms":       1,
        "examine_ratio":{"$round": ["$examine_ratio", 1]},
        "_id":          0
    }}
]))

print(f"  {'Namespace':35}  {'Op':8}  {'Count':5}  {'AvgMs':6}  {'MaxMs':6}  {'Ratio':6}")
print(f"  {'─'*35}  {'─'*8}  {'─'*5}  {'─'*6}  {'─'*6}  {'─'*6}")
for s in slowest:
    flag = "⚠" if s.get("examine_ratio", 0) > 5 else "✓"
    print(f"  {flag} {s.get('ns',''):33}  "
          f"{s.get('op',''):8}  "
          f"{s.get('count',0):5}  "
          f"{s.get('avg_ms',0):6}  "
          f"{s.get('max_ms',0):6}  "
          f"{s.get('examine_ratio',0):6}")

# ── Collections with most profiled operations ─────────────────────────────
print("\nCollections by operation volume (profiled ops):\n")
by_collection = list(db.system.profile.aggregate([
    {"$group": {
        "_id":   "$ns",
        "total": {"$sum": 1},
        "avg_ms":{"$avg": "$millis"}
    }},
    {"$sort": {"total": DESCENDING}},
    {"$limit": 5}
]))
for c in by_collection:
    print(f"  {c['_id']:35}  ops: {c['total']:4}  avg: {c['avg_ms']:.1f} ms")

# Turn profiler off
db.command("profile", 0)
print("\nProfiler disabled")
Profiler enabled: level 1, threshold 50 ms

Top 5 slowest query shapes (last 1000 profiler entries):

  Namespace                            Op        Count  AvgMs   MaxMs   Ratio
  ───────────────────────────────────  ────────  ─────  ──────  ──────  ──────
  ✓ dataplexa.orders                   query         8   124.3     312     1.2
  ⚠ dataplexa.products                 query        14    87.5     201    12.4
  ✓ dataplexa.reviews                  query         3    61.2      98     1.8
  ✓ dataplexa.orders                   update        5    55.8     143     1.0
  ✓ dataplexa.users                    query         2    51.1      67     2.1

Collections by operation volume (profiled ops):

  dataplexa.products                   ops:   14  avg: 87.5 ms
  dataplexa.orders                     ops:   13  avg: 91.4 ms
  dataplexa.reviews                    ops:    3  avg: 61.2 ms
  dataplexa.users                      ops:    2  avg: 51.1 ms

Profiler disabled
  • An examine ratio above 5× in the profiler is the primary signal that an index is missing or the wrong index is being used — follow up with explain("executionStats") on that query shape
  • Use the aggregation pipeline on system.profile rather than reading individual entries — grouping by namespace and operation type surfaces patterns that single slow entries obscure
  • Keep the profiler at level 1 with a reasonable slowms threshold in production — system.profile is a capped collection with a default size of 1 MB, so it rolls over quickly at level 2
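The examine-ratio check from the first bullet also works directly on raw system.profile documents, without the aggregation pipeline. A sketch over synthetic profiler entries — the field names (docsExamined, nreturned, millis) are the real ones, the values are invented:

```python
def flag_missing_indexes(profile_docs: list, ratio_threshold: float = 5.0) -> list:
    """Flag profiler entries whose docsExamined/nreturned ratio suggests
    a missing or wrong index."""
    flagged = []
    for doc in profile_docs:
        returned = doc.get("nreturned", 0)
        examined = doc.get("docsExamined", 0)
        # With zero results, the examined count itself is the ratio
        ratio = examined / returned if returned else examined
        if ratio > ratio_threshold:
            flagged.append({"ns": doc["ns"], "ratio": round(ratio, 1),
                            "millis": doc.get("millis", 0)})
    return flagged

# Synthetic system.profile entries: one healthy scan, one obvious COLLSCAN
entries = [
    {"ns": "dataplexa.orders",   "docsExamined": 12,   "nreturned": 10, "millis": 54},
    {"ns": "dataplexa.products", "docsExamined": 9800, "nreturned": 14, "millis": 201},
]
for hit in flag_missing_indexes(entries):
    print(f"⚠ {hit['ns']}  ratio {hit['ratio']}×  {hit['millis']} ms")
```

In practice you would pass `list(db.system.profile.find())` as `profile_docs` and follow up on each flagged namespace with explain("executionStats").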

4. Python Monitoring Loop — Continuous Metrics Collection

A one-off serverStatus snapshot tells you the current state. A continuous monitoring loop that samples metrics every few seconds and computes rates reveals trends — a memory leak, a connection surge, a sudden spike in write volume — before they cause an outage. This section builds a lightweight monitoring loop that can be run as a background process or integrated into an existing observability stack.

# Continuous monitoring loop — rate calculation and threshold alerting

from pymongo import MongoClient
from datetime import datetime, timezone
import time

client = MongoClient(
    "mongodb://dataplexa_admin:StrongP@ssword123!@localhost:27017/",
    authSource="admin"
)

# Alert thresholds
THRESHOLDS = {
    "conn_current":            500,    # connections
    "cache_dirty_pct":          20,    # percent
    "pages_evicted":           100,    # pages per sample
    "global_lock_queue_writers": 5,    # queued writers
    "ops_per_sec":            5000,    # total ops/sec
}

def sample_status() -> dict:
    """Take a serverStatus snapshot and return key metrics."""
    s  = client.admin.command("serverStatus")
    wt = s.get("wiredTiger", {}).get("cache", {})
    oc = s["opcounters"]
    return {
        "ts":                     time.monotonic(),
        "wall":                   datetime.now(timezone.utc),
        "conn_current":           s["connections"]["current"],
        "ops_total":              sum([oc["insert"], oc["query"],
                                      oc["update"], oc["delete"]]),
        "cache_used_mb":          wt.get("bytes currently in the cache", 0) / (1024*1024),
        "cache_max_mb":           wt.get("maximum bytes configured", 1)    / (1024*1024),
        "cache_dirty_pct":        wt.get("tracked dirty bytes in the cache", 0) /
                                  max(wt.get("maximum bytes configured", 1), 1) * 100,
        "pages_evicted":          wt.get("pages evicted because they exceeded "
                                         "the in-memory maximum", 0),
        "lock_queue_writers":     s["globalLock"]["currentQueue"]["writers"],
    }

def check_alerts(current: dict, previous: dict, elapsed: float) -> list:
    """Return list of alert strings for any threshold breach."""
    alerts = []
    ops_rate = (current["ops_total"] - previous["ops_total"]) / elapsed
    if current["conn_current"]        > THRESHOLDS["conn_current"]:
        alerts.append(f"HIGH CONNECTIONS: {current['conn_current']}")
    if current["cache_dirty_pct"]     > THRESHOLDS["cache_dirty_pct"]:
        alerts.append(f"HIGH CACHE DIRTY: {current['cache_dirty_pct']:.1f}%")
    if current["lock_queue_writers"]  > THRESHOLDS["global_lock_queue_writers"]:
        alerts.append(f"WRITE LOCK QUEUE: {current['lock_queue_writers']}")
    if ops_rate                       > THRESHOLDS["ops_per_sec"]:
        alerts.append(f"HIGH OPS RATE: {ops_rate:.0f}/s")
    return alerts

# Run monitoring loop — 3 samples, 2 seconds apart (demo)
print("MongoDB monitoring loop (3 samples × 2s interval):\n")
print(f"  {'Time':10}  {'Conns':6}  {'Ops/s':6}  "
      f"{'Cache%':7}  {'Dirty%':7}  {'LockQ':5}  Alerts")
print(f"  {'─'*10}  {'─'*6}  {'─'*6}  {'─'*7}  {'─'*7}  {'─'*5}  {'─'*20}")

prev   = sample_status()
time.sleep(2)

for i in range(3):
    curr    = sample_status()
    elapsed = curr["ts"] - prev["ts"]
    ops_s   = (curr["ops_total"] - prev["ops_total"]) / elapsed
    cache_p = (curr["cache_used_mb"] / max(curr["cache_max_mb"], 1)) * 100
    alerts  = check_alerts(curr, prev, elapsed)
    flag    = "⚠ " + alerts[0] if alerts else "✓"
    t       = curr["wall"].strftime("%H:%M:%S")
    print(f"  {t:10}  {curr['conn_current']:6}  {ops_s:6.0f}  "
          f"{cache_p:6.1f}%  {curr['cache_dirty_pct']:6.1f}%  "
          f"{curr['lock_queue_writers']:5}  {flag}")
    prev = curr
    if i < 2:
        time.sleep(2)

print("\nIntegration note:")
print("  Export metrics to Prometheus (push to pushgateway)")
print("  or write to a time-series collection for Atlas Charts")
MongoDB monitoring loop (3 samples × 2s interval):

  Time        Conns   Ops/s   Cache%   Dirty%   LockQ  Alerts
  ──────────  ──────  ──────  ───────  ───────  ─────  ────────────────────
  14:05:24        12      47    7.2%     0.3%       0  ✓
  14:05:26        12      51    7.2%     0.3%       0  ✓
  14:05:28        13      44    7.3%     0.4%       0  ✓

Integration note:
  Export metrics to Prometheus (push to pushgateway)
  or write to a time-series collection for Atlas Charts
  • Always compute rates from two samples rather than reading cumulative counters directly — opcounters and network bytes are lifetime totals, and a raw value of 1,847 inserts tells you nothing about current load
  • Integrate the monitoring loop with an alerting system — write alert events to a dedicated MongoDB collection, push them to Slack, PagerDuty, or send email, so on-call engineers are notified without watching a terminal
  • For production deployments, use MongoDB Agent (for Cloud Manager / Ops Manager) or the MongoDB Atlas built-in monitoring rather than a hand-rolled loop — they provide dashboards, historical charts, and automated alerting with no maintenance overhead
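For the Prometheus route mentioned in the integration note, the metrics need to be rendered in Prometheus's text exposition format — one `name value` line per metric, optionally preceded by a `# TYPE` header. A minimal renderer (the `mongodb_` metric names are my own choice here, not those of any official exporter):

```python
def to_prometheus_text(metrics: dict, prefix: str = "mongodb") -> str:
    """Render a flat metrics dict as Prometheus text exposition lines."""
    lines = []
    for name, value in sorted(metrics.items()):
        full = f"{prefix}_{name}"
        lines.append(f"# TYPE {full} gauge")   # all snapshot metrics are gauges
        lines.append(f"{full} {value}")
    return "\n".join(lines) + "\n"

# Sample snapshot from the monitoring loop above (values invented)
sample = {"conn_current": 12, "cache_dirty_pct": 0.3, "lock_queue_writers": 0}
print(to_prometheus_text(sample))
```

The resulting text can be pushed to a Pushgateway with an HTTP PUT, or served from a `/metrics` endpoint for Prometheus to scrape.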

5. Structured Log Parsing

Since MongoDB 4.4, all log output is structured JSON — every log line is a complete, machine-parseable document with a severity level, component, message, and attributes dictionary. This makes it straightforward to filter, aggregate, and alert on log data using Python, grep, or any log aggregation platform like Datadog, Splunk, or the ELK stack.

# Structured log parsing — reading and analysing MongoDB JSON logs

import json
from collections import Counter
from pathlib import Path

# MongoDB log location (adjust for your OS / installation)
LOG_PATH = Path("/var/log/mongodb/mongod.log")

def parse_log_line(line: str) -> dict | None:
    """Parse a single MongoDB structured JSON log line."""
    try:
        return json.loads(line.strip())
    except json.JSONDecodeError:
        return None   # skip non-JSON lines (startup banner, etc.)

def analyse_log(log_path: Path, max_lines: int = 10000) -> dict:
    """Read up to max_lines of the log and return summary statistics."""
    severity_counts  = Counter()
    component_counts = Counter()
    slow_queries     = []
    error_messages   = []

    lines_read = 0
    try:
        with open(log_path, "r", encoding="utf-8") as f:
            for line in f:
                if lines_read >= max_lines:
                    break
                doc = parse_log_line(line)
                if not doc:
                    continue
                lines_read += 1

                sev  = doc.get("s", "?")    # I=info, W=warn, E=error, F=fatal
                comp = doc.get("c", "?")    # COMMAND, STORAGE, REPL, NETWORK…
                severity_counts[sev]  += 1
                component_counts[comp]+= 1

                # Capture slow query log entries
                # MongoDB logs slow queries automatically with durationMillis
                attrs = doc.get("attr", {})
                if "durationMillis" in attrs and attrs["durationMillis"] > 100:
                    slow_queries.append({
                        "t":       doc.get("t", {}).get("$date", ""),
                        "ns":      attrs.get("ns", ""),
                        "ms":      attrs["durationMillis"],
                        "plan":    attrs.get("planSummary", ""),
                        "docsEx":  attrs.get("docsExamined", 0),
                        "keys":    attrs.get("keysExamined", 0),
                    })

                # Capture errors and warnings
                if sev in ("E", "W", "F"):
                    error_messages.append({
                        "severity": sev,
                        "msg":      doc.get("msg", ""),
                        "component":comp,
                    })

    except FileNotFoundError:
        pass   # no log file — fall back to the simulated data below

    # Fall back to simulated data only when no real log was read (demo output)
    if lines_read == 0:
        lines_read       = 10000
        severity_counts  = Counter({"I": 9847, "W": 23, "E": 2})
        component_counts = Counter({"COMMAND": 4821, "STORAGE": 2341,
                                    "REPL": 1823, "NETWORK": 862})
        slow_queries = [
            {"t": "2024-04-01T14:03:11.221+00:00", "ns": "dataplexa.products",
             "ms": 312, "plan": "COLLSCAN", "docsEx": 7, "keys": 0},
            {"t": "2024-04-01T14:04:55.008+00:00", "ns": "dataplexa.orders",
             "ms": 187, "plan": "IXSCAN { status: 1 }", "docsEx": 4, "keys": 4},
        ]

    return {
        "lines_read":      lines_read,
        "severity_counts": dict(severity_counts),
        "component_counts":dict(component_counts.most_common(5)),
        "slow_queries":    sorted(slow_queries, key=lambda x: x["ms"], reverse=True)[:5],
        "error_count":     severity_counts["E"] + severity_counts["F"],
    }

summary = analyse_log(LOG_PATH)
print("MongoDB log analysis summary:\n")
print(f"  Lines analysed:  {summary['lines_read']:,}")
print(f"\nSeverity breakdown:")
for sev, label in [("I","INFO"),("W","WARN"),("E","ERROR"),("F","FATAL")]:
    count = summary["severity_counts"].get(sev, 0)
    flag  = " ⚠" if sev in ("W","E","F") and count > 0 else ""
    print(f"  {label:6}  {count:6,}{flag}")

print(f"\nTop components:")
for comp, count in summary["component_counts"].items():
    print(f"  {comp:12}  {count:6,}")

print(f"\nSlow queries from log (> 100 ms):")
for q in summary["slow_queries"]:
    print(f"  {q['ms']:5} ms  {q['ns']:30}  plan: {q['plan']}")
MongoDB log analysis summary:

  Lines analysed:  10,000

Severity breakdown:
  INFO     9,847
  WARN        23 ⚠
  ERROR        2 ⚠
  FATAL        0

Top components:
  COMMAND        4,821
  STORAGE        2,341
  REPL           1,823
  NETWORK          862

Slow queries from log (> 100 ms):
    312 ms  dataplexa.products              plan: COLLSCAN
    187 ms  dataplexa.orders                plan: IXSCAN { status: 1 }
  • Any slow query logged with planSummary: COLLSCAN is an immediate action item — add an index on the filtered field and rerun the query to confirm it switches to IXSCAN
  • MongoDB's structured log uses short severity codes: I (Info), W (Warning), E (Error), F (Fatal) — filter for E and F entries first when investigating an incident
  • Ship MongoDB logs to a centralised log platform (Datadog, Splunk, ELK) in production — local log files rotate and are lost, and you need historical log data for post-incident analysis
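Filtering for E and F entries, as the second bullet recommends, takes only a few lines against the structured format. The two sample lines below are synthetic, but follow the real field layout (t, s, c, id, msg, attr); the ids and values are invented for the demo:

```python
import json

SAMPLE_LINES = [
    '{"t":{"$date":"2024-04-01T14:03:11.221+00:00"},"s":"I","c":"COMMAND",'
    '"id":51803,"msg":"Slow query","attr":{"ns":"dataplexa.products","durationMillis":312}}',
    '{"t":{"$date":"2024-04-01T14:04:02.113+00:00"},"s":"E","c":"STORAGE",'
    '"id":22435,"msg":"WiredTiger error","attr":{"error":95}}',
]

def errors_and_fatals(lines) -> list:
    """Keep only Error/Fatal entries from structured JSON log lines."""
    out = []
    for line in lines:
        try:
            doc = json.loads(line)
        except json.JSONDecodeError:
            continue                     # skip non-JSON lines
        if doc.get("s") in ("E", "F"):
            out.append(doc)
    return out

for doc in errors_and_fatals(SAMPLE_LINES):
    print(f"{doc['s']}  {doc['c']:8}  {doc['msg']}")
```

The same function works line-by-line over a real mongod.log file handle, so it scales to logs that do not fit in memory.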

Summary Table

Tool              What It Shows                     Key Metric to Watch                   When to Use
────────────────  ────────────────────────────────  ────────────────────────────────────  ──────────────────────────────────
serverStatus      Connections, cache, ops, network  Lock queue, cache dirty %, evictions  Continuous baseline monitoring
currentOp         Live operations with runtime      secs_running, blocking ops            When server feels slow right now
Profiler level 1  Slow ops above slowms threshold   docsExamined / nreturned ratio        Finding slow query patterns
Monitoring loop   Ops/sec rate, trend detection     Delta between samples / elapsed time  Proactive alerting before incident
Structured log    Historical slow queries + errors  COLLSCAN entries, E/F severity        Post-incident analysis
killOp            Terminates a running operation    opid from currentOp                   Blocking long-running query relief

Practice Questions

Practice 1. Why should you compute operation rates from two serverStatus samples rather than reading the opcounters directly?



Practice 2. What does a non-zero global lock queue writers value in serverStatus indicate and what should you do?



Practice 3. What types of operations should you never kill with killOp and why?



Practice 4. What does a slow query log entry with planSummary: COLLSCAN tell you and what is the fix?



Practice 5. What are the four severity codes in MongoDB's structured JSON log and which two indicate problems that need immediate attention?



Quiz

Quiz 1. Which serverStatus metric indicates that MongoDB is evicting data from the WiredTiger cache to free space?






Quiz 2. What currentOp filter returns only active user operations that have been running for more than 5 seconds?






Quiz 3. What profiler level and slowms value is recommended for production deployments?






Quiz 4. Since which MongoDB version is log output structured JSON, and what advantage does this provide?






Quiz 5. What is the correct way to get the current insert rate per second from serverStatus?






Next up — MongoDB in the Cloud: Deploying on Atlas, choosing cluster tiers, configuring VPC peering, and using Atlas Search and Data API.