Kubernetes Course
Secrets Management Best Practices
Lesson 20 covered how to create Kubernetes Secrets — and warned that base64 is not encryption. This lesson covers what it actually takes to secure secrets in production: RBAC scoping, etcd encryption at rest, External Secrets Operator, and automated rotation.
The Problem: Kubernetes Secrets Are Not Secret by Default
A fresh Kubernetes cluster stores Secrets as base64-encoded strings in etcd. Any user or service account with get secrets permission can decode them trivially. A leaked etcd backup reveals every credential in the cluster as plaintext. Securing secrets is a layered problem — there is no single toggle. The four levels below represent increasing protection, from the default insecure baseline to the production-grade external secrets pattern.
Baseline — the cluster default. Anyone with get secret permission reads values in plaintext; an etcd backup leak = full credential exposure.
RBAC — Level 1: scope exactly which identities can read which Secrets.
Encrypted — Level 2: etcd encryption at rest, so a stolen backup yields ciphertext.
External — Level 3: values live in an external secrets backend and are synced into the cluster, never committed anywhere.
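To see why the baseline offers no protection, here is a minimal Python sketch (the password value is hypothetical) showing that base64 "protection" is reversed by anyone who can read the Secret object — no key involved:

```python
import base64

# What the API server stores for a Secret value like password=s3cr3t-pw
stored = base64.b64encode(b"s3cr3t-pw").decode()   # 'czNjcjN0LXB3'

# Any principal with 'get secrets' reverses it in one call
plaintext = base64.b64decode(stored).decode()
print(plaintext)  # s3cr3t-pw
```

Encoding, unlike encryption, has no secret input: the transformation is public and symmetric.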
Level 1: Scoping Secret Access with RBAC
The scenario: The payment-api ServiceAccount needs exactly two Secrets in the payments namespace. Developers should not read any Secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader-payment-api
  namespace: payments
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]          # get only — NOT list
  resourceNames:          # Restrict to exactly these two Secrets
  - postgres-credentials
  - payment-gateway-key
# Why omit 'list'?
# list returns Secret metadata (names, annotations) even without 'get'
# Knowing secret names is valuable recon for an attacker
# Only grant list if code genuinely needs to iterate all secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-api-read-secrets
  namespace: payments
subjects:
- kind: ServiceAccount
  name: payment-api-sa
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader-payment-api
$ kubectl apply -f secret-rbac.yaml
role.rbac.authorization.k8s.io/secret-reader-payment-api created
rolebinding.rbac.authorization.k8s.io/payment-api-read-secrets created

# SA can get its specific secrets
$ kubectl auth can-i get secret/postgres-credentials -n payments \
    --as=system:serviceaccount:payments:payment-api-sa
yes

# SA cannot get other secrets
$ kubectl auth can-i get secret/tls-certificate -n payments \
    --as=system:serviceaccount:payments:payment-api-sa
no

# Developers bound to the 'view' ClusterRole cannot read secrets
$ kubectl auth can-i get secrets -n payments --as=alice@company.com
no   ← 'view' ClusterRole intentionally excludes secrets ✓
What just happened?
resourceNames enforces least-privilege access — Without it, the binding covers all Secrets in the namespace. With it, a compromised payment-api container can reach only its two named Secrets — not the TLS certificate, admin token, or any other credential in the namespace. The blast radius shrinks from everything to exactly two secrets.
list is a separate risk from get — list returns all Secret metadata including names. Knowing secret names (e.g., prod-root-ca-key, aws-master-credentials) is reconnaissance. Only grant list when code genuinely iterates over all Secrets.
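The effective decision the authorizer makes for this Role can be sketched as a small Python model (a simplified stand-in for the real RBAC evaluator in the API server, not its actual code):

```python
from typing import Optional

# Simplified model of how the Role above is evaluated
ALLOWED_VERBS = {"get"}
ALLOWED_NAMES = {"postgres-credentials", "payment-gateway-key"}

def allowed(verb: str, resource: str, name: Optional[str]) -> bool:
    if resource != "secrets" or verb not in ALLOWED_VERBS:
        return False
    # 'list' targets no single named object, so resourceNames can never
    # match it — list must be granted or withheld as a blanket permission
    if name is None:
        return False
    return name in ALLOWED_NAMES

print(allowed("get", "secrets", "postgres-credentials"))  # True
print(allowed("get", "secrets", "tls-certificate"))       # False
print(allowed("list", "secrets", None))                   # False
```

The `name is None` branch is why omitting list from verbs matters: a collection request carries no resource name for resourceNames to filter on.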
Level 2: etcd Encryption at Rest
etcd stores everything in the cluster — including all Secrets, which by default are only base64-encoded. Enabling encryption at rest means the API server AES-encrypts Secret values before writing them to etcd. A stolen etcd backup then yields ciphertext, not plaintext credentials.
On managed Kubernetes (EKS, GKE, AKS), encryption is often one-click or default. For self-managed clusters it requires an EncryptionConfiguration file passed to the API server.
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  - configmaps          # Optional — encrypt ConfigMaps too
  providers:
  - aescbc:             # AES-CBC with a local key
      keys:
      - name: key1
        secret: MDEyMzQ1Njc4OWFiY2RlZjAxMjM0NTY3ODlhYmNkZWY=
        # 32-byte base64 key. Generate: head -c 32 /dev/urandom | base64
        # Protect this file — if it leaks, all Secrets are decryptable
  - kms:                # KMS provider — key never leaves the HSM
      name: aws-kms
      endpoint: unix:///tmp/kms-provider.sock
      cachesize: 1000
      timeout: 3s
      # API server sends ciphertext to KMS, gets plaintext back
      # The encryption key lives in AWS KMS protected by HSM
  - identity: {}        # Plaintext fallback — MUST be last
    # Allows reading Secrets written before encryption
    # Remove after running the re-encrypt command below
# Step 1: Add flag to kube-apiserver static Pod manifest:
# --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
# (API server restarts automatically when the manifest changes)
# Step 2: Re-encrypt all pre-existing Secrets
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
# Reads every Secret (triggers decryption if needed)
# Writes it back (triggers encryption with the new provider)
# Step 3: Verify — raw etcd query on the control plane node
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/default/my-secret | hexdump -C | head -2
# If prefix reads "k8s:enc:aescbc:v1" → encrypted ✓
# If you can read base64 values → not yet encrypted ✗
# Managed cluster shortcut (EKS):
eksctl utils enable-secrets-encryption \
--cluster=my-cluster \
--key-arn=arn:aws:kms:us-east-1:123456789012:key/abc-def
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
secret "postgres-credentials" replaced
secret "payment-gateway-key" replaced
secret "tls-certificate" replaced
[... 44 more]
47 secrets replaced

$ ETCDCTL_API=3 etcdctl ... get /registry/secrets/default/my-secret | hexdump -C | head -2
00000000  2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c 74 2f 6b 38 73 3a 65 6e  |s/default/k8s:en|
00000020  63 3a 61 65 73 63 62 63 3a 76 31 3a 6b 65 79 31  |c:aescbc:v1:key1|
# "k8s:enc:aescbc:v1:key1" prefix confirms encryption is active ✓
What just happened?
Provider ordering determines write behaviour — The first provider encrypts all new writes. Subsequent providers are tried in order for reads only. The identity: {} fallback at the end lets the API server read pre-existing plaintext Secrets. Once the kubectl replace loop has re-encrypted everything, remove the identity entry so no future unencrypted writes are possible.
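The read-side selection works on the prefix the API server writes in front of each stored value (visible in the hexdump above). A simplified Python sketch of that dispatch — the prefix strings match the KMS v1 format, and the sample bytes are illustrative:

```python
# How the API server picks a provider to decrypt a stored value:
# match the on-disk prefix against configured providers, in order
stored = b"k8s:enc:aescbc:v1:key1:\x8f\x1b\x02"  # ciphertext follows the prefix

def provider_for(value: bytes) -> str:
    if value.startswith(b"k8s:enc:aescbc:v1:"):
        return "aescbc"
    if value.startswith(b"k8s:enc:kms:v1:"):
        return "kms"
    return "identity"   # no prefix = legacy plaintext entry

print(provider_for(stored))                  # aescbc
print(provider_for(b'{"apiVersion":"v1"}'))  # identity
```

This is why a plaintext value written before encryption was enabled remains readable: it has no prefix, so it falls through to identity — until you remove that fallback.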
Local key vs KMS — The local aescbc key is a file on the control plane node. If that file leaks, all Secrets are decryptable. The KMS provider (AWS KMS, GCP Cloud KMS, Vault) never exposes the encryption key — it performs encrypt/decrypt on behalf of the API server inside its HSM. For PCI DSS or HIPAA environments, KMS is required over local keys.
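If openssl or /dev/urandom is not at hand, the same 32-byte key can be generated with a few lines of Python — a stdlib equivalent of the `head -c 32 /dev/urandom | base64` recipe in the config comments:

```python
import base64
import os

# 32 random bytes = AES-256 key material for the aescbc provider
key = os.urandom(32)
encoded = base64.b64encode(key).decode()

# A 32-byte key always base64-encodes to 44 characters (43 + '=' padding)
print(len(encoded))  # 44
```

Whatever tool you use, the key must decode to exactly 16, 24, or 32 bytes, or the API server rejects the EncryptionConfiguration at startup.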
Level 3: External Secrets Operator
The External Secrets Operator (ESO) is the standard for production-grade secrets management. It runs as a controller inside the cluster and synchronises values from external secrets backends into Kubernetes Secrets. Developers declare where a secret lives — not what the value is — and ESO handles retrieval, creation, and rotation.
The scenario: All credentials live in AWS Secrets Manager. You want Kubernetes Secrets populated and refreshed automatically — without any human ever copying a value into a manifest.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore                   # How ESO connects to the backend
metadata:
  name: aws-secrets-manager
  namespace: payments
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: eso-sa            # ServiceAccount with IRSA annotation
            # No static AWS keys anywhere in the cluster
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret                # What to fetch and where to put it
metadata:
  name: postgres-credentials-sync
  namespace: payments
spec:
  refreshInterval: 1h               # Re-read from AWS Secrets Manager every hour
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: postgres-credentials      # Kubernetes Secret to create or update
    creationPolicy: Owner           # ESO owns the Secret — deletes it if this object is deleted
  data:
  - secretKey: POSTGRES_USER        # Key in the resulting Kubernetes Secret
    remoteRef:
      key: production/payments/postgres   # Path in AWS Secrets Manager
      property: username                  # JSON field within the AWS secret value
      # AWS secret: {"username":"pay_user","password":"..."}
  - secretKey: POSTGRES_PASSWORD
    remoteRef:
      key: production/payments/postgres
      property: password
  - secretKey: POSTGRES_DB
    remoteRef:
      key: production/payments/postgres
      property: database
$ kubectl apply -f external-secrets.yaml
secretstore.external-secrets.io/aws-secrets-manager created
externalsecret.external-secrets.io/postgres-credentials-sync created

$ kubectl get externalsecret -n payments
NAME                        STORE                 REFRESH INTERVAL   STATUS   READY
postgres-credentials-sync   aws-secrets-manager   1h                 Valid    True

$ kubectl get secret postgres-credentials -n payments
NAME                   TYPE     DATA   AGE
postgres-credentials   Opaque   3      12s   ← created automatically by ESO ✓

# The manifest committed to Git contains zero secret values — only paths
# The actual password lives in AWS Secrets Manager, never in Git
What just happened?
The ExternalSecret is safe to commit to Git — It contains paths, not values. Your entire infrastructure-as-code repository can be public-facing and none of it exposes credentials. The actual password lives in AWS Secrets Manager, protected by IAM, KMS, and audit logging. This is the key ESO insight: decouple the declaration of what you need from the value itself.
IRSA means no AWS credentials in the cluster — ESO uses the projected service account token (Lesson 39) to authenticate to AWS STS, exchanging it for short-lived IAM credentials. No AWS access keys are stored anywhere in Kubernetes. The IAM role policy grants ESO read access only to specific Secrets Manager paths.
refreshInterval enables automatic rotation — When a credential is rotated in AWS Secrets Manager, ESO picks up the new value at the next refresh. The Kubernetes Secret is updated. Volume-mounted Secrets propagate to Pods within ~60 seconds. Env vars require a Pod restart — which Reloader handles automatically.
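Conceptually, each refreshInterval tick is one pass of a reconcile loop: read the remote value, compare with the target Secret, update on drift. A toy Python model of that loop — the `backend` dict stands in for AWS Secrets Manager and `k8s_secret` for the target Secret, so all names and values here are hypothetical:

```python
# Toy model of ESO's refreshInterval reconcile cycle
backend = {"production/payments/postgres": {"username": "pay_user", "password": "old-pw"}}
k8s_secret = {}  # stands in for Secret 'postgres-credentials'

def reconcile() -> bool:
    """Read the remote secret, update the target Secret. Returns True if it changed."""
    remote = backend["production/payments/postgres"]
    desired = {"POSTGRES_USER": remote["username"],
               "POSTGRES_PASSWORD": remote["password"]}
    if k8s_secret == desired:
        return False          # no drift — nothing to do this tick
    k8s_secret.clear()
    k8s_secret.update(desired)
    return True

reconcile()                                                    # initial sync
backend["production/payments/postgres"]["password"] = "new-pw" # rotation upstream
changed = reconcile()                                          # next refresh tick
print(changed, k8s_secret["POSTGRES_PASSWORD"])                # True new-pw
```

The loop is idempotent: running it again with no upstream change is a no-op, which is exactly what makes an hourly refreshInterval cheap.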
Automated Rotation with Reloader
Pods using env vars from Secrets don't pick up new values when the Secret updates — env vars are baked in at Pod startup. stakater/Reloader watches for Secret changes and triggers rolling restarts of annotated Deployments, completing the rotation loop automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api
  namespace: payments
  annotations:
    # Reloader: watch these Secrets — rolling restart when either changes
    secret.reloader.stakater.com/reload: "postgres-credentials,payment-gateway-key"
    # When ESO updates either Secret after a rotation,
    # Reloader triggers a rolling restart — new Pods get fresh env var values
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      serviceAccountName: payment-api-sa
      containers:
      - name: payment-api
        image: registry.company.com/payment-api:3.0.0
        envFrom:
        - secretRef:
            name: postgres-credentials   # ESO keeps this current from AWS SM
        - secretRef:
            name: payment-gateway-key
Full automated rotation — no human steps after step 1:
1. Security team rotates postgres password in AWS Secrets Manager
2. ESO refreshInterval fires → reads new value from AWS SM
3. ESO updates Kubernetes Secret "postgres-credentials"
4. Reloader detects the Secret change
5. Reloader triggers rolling restart of payment-api Deployment
6. New Pods start with updated POSTGRES_PASSWORD in env vars
7. Old Pods drain gracefully (SIGTERM + grace period)
8. Zero-downtime rotation complete ✓

$ kubectl get pods -n payments -w
payment-api-5c9d-2xkpj   1/1   Running
payment-api-5c9d-7rvqn   1/1   Running
payment-api-5c9d-m4czl   1/1   Running
payment-api-9f2a-8pkrn   0/1   ContainerCreating   ← new Pod, new password
payment-api-5c9d-2xkpj   1/1   Terminating         ← old Pod draining
What just happened?
Zero-touch rotation — A credential rotation that previously required a change request, a deploy window, and manual coordination now happens automatically. The security team gains confidence that passwords are rotated regularly; engineering loses zero sleep over it.
Volume-mounted Secrets don't need Reloader — If your app reads credentials from files (volume-mounted Secrets), the kubelet updates the mounted file within ~60 seconds of the Secret changing — no restart needed. Reloader is specifically for the env var case, where values are frozen at Pod startup.
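The trigger mechanism behind a Reloader-style restart can be modelled as hashing the watched Secret's data: stamp the digest into the Pod template, and any change to the data produces a different template, which is precisely what makes a Deployment roll. A simplified sketch (not Reloader's actual implementation; the data values are hypothetical):

```python
import hashlib
import json

def secret_hash(data: dict) -> str:
    # Stable digest of a Secret's data — sort_keys makes it order-independent
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

old = secret_hash({"POSTGRES_PASSWORD": "old-pw"})
new = secret_hash({"POSTGRES_PASSWORD": "new-pw"})

# Stamping the digest into a Pod template annotation means a rotated value
# changes the template — exactly the condition for a rolling update
print(old != new)  # True
```

The same idea underlies Helm's common checksum-annotation trick; Reloader just automates the watch-and-stamp step.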
Teacher's Note: Git history never forgets a committed secret
The most common secrets incident: a developer accidentally commits a Kubernetes Secret YAML to a public GitHub repository. Even if noticed within seconds, GitHub's fork network means copies exist permanently. The credential is compromised and must be rotated immediately — no investigation first.
Three things to implement now: (1) Pre-commit hooks — git-secrets or truffleHog scan every commit for credential patterns before they leave the developer's machine. (2) GitHub secret scanning — enable for all repositories, public and private. (3) Mandatory immediate rotation — if a secret is exposed, rotate first, investigate later. The rotation takes 5 minutes; the post-mortem can follow.
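In the spirit of git-secrets, the core of a pre-commit credential scan is just pattern matching over staged content. A minimal Python sketch — the two patterns are illustrative (real scanners ship hundreds), and the sample strings are hypothetical:

```python
import re

# Two illustrative credential shapes — nowhere near exhaustive
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def findings(text: str) -> list:
    """Return the patterns that matched — non-empty means block the commit."""
    return [p.pattern for p in PATTERNS if p.search(text)]

clean = "kind: ExternalSecret   # paths only, no values"
dirty = "aws_access_key_id = AKIAABCDEFGHIJKLMNOP"

print(findings(clean))      # []
print(len(findings(dirty))) # 1
```

A hook like this blocks the commit before the value ever leaves the developer's machine, which is the only point where prevention is still cheap.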
The correct workflow: value in AWS Secrets Manager → ExternalSecret YAML in Git (no values) → ESO creates the Kubernetes Secret. Git contains everything needed to run the application and none of it is sensitive.
Practice Questions
1. Which controller synchronises values from AWS Secrets Manager or HashiCorp Vault into Kubernetes Secrets, refreshing them when the source value rotates?
2. In an RBAC Role rule for Secrets, which field restricts the rule to only specific named Secrets rather than all Secrets in the namespace?
3. After enabling etcd encryption at rest, existing Secrets are still stored as plaintext. What kubectl command re-encrypts all of them?
Quiz
1. You enable etcd encryption at rest with AES-CBC. A stolen etcd backup no longer exposes credentials. What threat does this NOT protect against?
2. A developer using ESO needs a database password available to their application. What do they commit to Git?
3. Your app uses envFrom to inject credentials from a Kubernetes Secret. ESO updates the Secret when the source rotates. Why don't running Pods see the new value — and what tool fixes this?
Up Next · Lesson 43
Image Pull Secrets
How to authenticate the kubelet against private container registries — and the patterns for managing pull credentials at scale across many namespaces without repetition.