Kubernetes Course
Pod Security Policies
Lesson 40 covered security contexts — settings individual teams add to their manifests. This lesson covers cluster-wide enforcement: how to make it impossible to deploy an insecure Pod in the first place, using Pod Security Admission and policy-as-code tools like OPA/Kyverno.
The Problem: Per-Manifest Security Doesn't Scale
You've defined a hardened security context template. You've trained your team. But in a cluster with 20 teams and 200 Deployments, someone will inevitably deploy a Pod without security contexts — by copying an old template, by using a Helm chart that predates your standards, or simply by forgetting. Per-manifest security only works when it's enforced at the cluster level.
The original solution was PodSecurityPolicy (PSP) — a cluster-level admission controller that validated Pod specs against a policy. PSP was powerful but notoriously confusing, and Kubernetes deprecated it in 1.21 and removed it in 1.25. Its replacement is Pod Security Admission (PSA) — enabled by default since 1.23 and GA in 1.25, simpler, and namespace-label driven.
PodSecurityPolicy (removed in 1.25)
Complex RBAC integration required. Confusing "who can use which policy" model. Notoriously misconfigured. Removed. Don't use.
Pod Security Admission (built-in from 1.23)
Simple namespace-label model. Three built-in security profiles. No RBAC complexity. The current standard. Use this.
Pod Security Admission: Three Profiles, Three Modes
Pod Security Admission defines three security profiles that map to the CIS Kubernetes Benchmark and NIST standards. You apply them to namespaces via labels.
| Profile | What it enforces | Use for |
|---|---|---|
| privileged | No restrictions. Allows everything. | kube-system, CNI plugins, monitoring agents that need host access |
| baseline | Blocks known privilege escalation paths: hostPID, hostNetwork, hostPath, privileged containers, dangerous capabilities (SYS_ADMIN, NET_ADMIN). Allows running as root. | Most application namespaces. Minimum viable security. |
| restricted | Enforces everything in baseline plus: must run as non-root, must drop ALL capabilities, allowPrivilegeEscalation must be false, seccompProfile must be set. | Production namespaces with sensitive data. PCI DSS, HIPAA, SOC2 workloads. |
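For reference, here is a minimal Pod spec that satisfies the restricted profile; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod                        # placeholder name
spec:
  securityContext:
    runAsNonRoot: true                      # restricted: must not run as root
    seccompProfile:
      type: RuntimeDefault                  # restricted: a seccomp profile must be set
  containers:
    - name: app
      image: registry.company.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false     # restricted: must be false
        capabilities:
          drop: ["ALL"]                     # restricted: must drop ALL capabilities
```

Every requirement the restricted profile checks maps to one of these four settings — a Pod missing any of them is rejected in enforce mode.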
Each profile can be applied in three modes:
| Mode | What it does | When to use |
|---|---|---|
| enforce | Rejects Pods that violate the policy. kubectl apply fails with an error. | Production namespaces ready to enforce |
| warn | Allows Pods but prints a warning. kubectl apply succeeds with a warning message. | Dry run before enforcing — tells you what would break |
| audit | Allows Pods and records violations in the API server audit log. Silent to the user. | Observing violations in production without breaking deployments |
Applying Pod Security Admission via Namespace Labels
The scenario: You're rolling out Pod Security Admission across your cluster. You start with warn mode to discover violations without breaking anything, then move to enforce once the namespace is clean.
kubectl label namespace payments \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/warn-version=latest
# warn mode: any Pod violating 'restricted' profile gets a warning but deploys anyway
# Use this first to discover what needs fixing without breaking existing workloads
# warn-version: which version of the restricted profile to apply (latest = current cluster version)
kubectl label namespace payments \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest
# audit mode: violations recorded in API server audit log — silent to users
# Useful alongside warn to capture violations that CI/CD might suppress
kubectl label namespace payments \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/enforce-version=latest
# enforce mode: violations are REJECTED — kubectl apply fails
# Only add this after warn mode shows zero remaining violations
$ kubectl label namespace payments \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/warn-version=latest
namespace/payments labeled

$ kubectl apply -f legacy-pod-no-security-context.yaml -n payments
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "app" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "app" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "app" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "app" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/legacy-pod created   ← warn mode — still deployed despite violations

$ kubectl label namespace payments pod-security.kubernetes.io/enforce=restricted
namespace/payments labeled

$ kubectl apply -f legacy-pod-no-security-context.yaml -n payments
Error from server (Forbidden): error when creating "legacy-pod-no-security-context.yaml": pods "legacy-pod" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (...), unrestricted capabilities (...)
What just happened?
warn mode is your migration path — The warning output tells you exactly which settings are missing and what to add. It's a free audit of your manifests. Run warn mode for a week, fix the violations, then flip to enforce. This gradual approach prevents the "PSA turned on, 40 Deployments broke overnight" disaster.
enforce rejection is specific — The error message lists every individual violation. It's not just "forbidden" — it tells you exactly what's wrong. This makes the developer experience much better than the old PSP mechanism, which often gave opaque errors. Your developers can read the error and fix their manifest directly.
Labels can stack — You can have enforce=baseline and warn=restricted simultaneously. This lets the namespace block the worst violations (privileged containers, hostPath mounts) immediately, while warning about restricted violations that you're still remediating. This is the staged rollout pattern for large teams.
Namespace Security Label Best Practices
The scenario: You're applying Pod Security Admission across a multi-team cluster. Different namespaces need different profiles — kube-system needs privileged access, application namespaces need baseline minimum, sensitive data namespaces need restricted.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    # Enforce restricted profile — reject non-compliant Pods
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: "v1.29"
    # Warn on restricted profile in addition (belt and suspenders for CI feedback)
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: "v1.29"
    # Audit violations silently for logging/monitoring
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: "v1.29"
    # Pinning to a specific version ensures behaviour doesn't change
    # on cluster upgrades — switch to "latest" after validating a new version
    team: payments
    environment: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    # Monitoring agents often need host access — baseline is more appropriate
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: "v1.29"
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: "v1.29"
    team: platform
    environment: production
$ kubectl apply -f namespaces.yaml
namespace/payments configured
namespace/monitoring configured
$ kubectl get namespace payments -o yaml | grep pod-security
pod-security.kubernetes.io/audit: restricted
pod-security.kubernetes.io/audit-version: v1.29
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/enforce-version: v1.29
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: v1.29
$ kubectl apply -f privileged-pod.yaml -n payments
Error from server (Forbidden): pods "privileged-pod" is forbidden:
violates PodSecurity "restricted:v1.29": [...]
$ kubectl apply -f hardened-pod.yaml -n payments
pod/hardened-pod created   ← fully hardened Pod deploys cleanly ✓
What just happened?
Version pinning — Setting pod-security.kubernetes.io/enforce-version: "v1.29" instead of latest pins the policy to Kubernetes 1.29's definition of "restricted." When you upgrade the cluster to 1.30, the policy doesn't automatically change and your existing Pods aren't suddenly non-compliant. Upgrade the version pin deliberately, after testing.
Namespace-as-code — Defining the Namespace as a YAML object and committing it to Git ensures the PSA labels are part of your infrastructure-as-code. The Namespace is recreated with the correct labels if the cluster is rebuilt. Platform teams can enforce namespace standards through a GitOps pipeline — any Namespace without the required PSA labels triggers a PR review before merge.
Beyond PSA: OPA Gatekeeper and Kyverno
Pod Security Admission handles Pod-level security. But policy enforcement in mature clusters goes further: preventing hostPath on any resource type, requiring specific labels on all Deployments, blocking images from unapproved registries, or enforcing a naming convention. For these custom policies, two admission webhook solutions dominate the ecosystem.
OPA Gatekeeper
Policies written in Rego (a policy language). Very powerful and flexible. Higher learning curve. The CNCF standard for policy-as-code. Used by large enterprises with dedicated platform engineering teams.
Kyverno
Policies written in YAML (Kubernetes-native). Easier to learn. Can also mutate resources (auto-add labels, inject sidecars, set defaults). Popular for teams that prefer YAML over a custom language.
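As a sketch of that mutation capability, here is a hypothetical Kyverno ClusterPolicy that defaults a team label on Pods that don't set one; the policy name and label value are illustrative, not part of any standard:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label          # hypothetical policy name
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # +( ) anchor: add the label only if it is not already present
              +(team): unassigned
```

Mutation runs at admission time, before the Pod is persisted, so the label is there from the moment the resource exists.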
The scenario: Your cluster policy requires that all container images come from your approved internal registry (registry.company.com) — not Docker Hub or any other public registry. Here's how to enforce that with a Kyverno policy.
apiVersion: kyverno.io/v1
kind: ClusterPolicy                    # ClusterPolicy: applies across all namespaces
metadata:
  name: require-internal-registry
  annotations:
    policies.kyverno.io/title: "Require Internal Registry"
    policies.kyverno.io/description: >-
      All container images must come from registry.company.com.
      Public registry images are a supply chain security risk.
spec:
  validationFailureAction: Enforce     # Enforce: reject violations. Audit: log-only.
  background: true                     # background: also scan existing resources (not just new ones)
  rules:
    - name: check-image-registry
      match:
        any:
          - resources:
              kinds:
                - Pod                  # Match Pod creation and updates
      validate:
        message: "Image must come from registry.company.com. Public images are not allowed."
        pattern:
          spec:
            containers:
              # Wildcard pattern: every container's image must start with registry.company.com/
              # A Pod with any container image outside that prefix is denied
              - image: "registry.company.com/*"
$ kubectl apply -f require-internal-registry.yaml
clusterpolicy.kyverno.io/require-internal-registry created
$ kubectl apply -f pod-with-dockerhub-image.yaml
Error from server: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod was blocked due to the following policies:
require-internal-registry:
check-image-registry: Image must come from registry.company.com.
Public images are not allowed.
$ kubectl apply -f pod-with-internal-image.yaml
pod/secure-pod created ← registry.company.com image allowed ✓
$ kubectl get clusterpolicy require-internal-registry
NAME                        BACKGROUND   VALIDATE ACTION   READY
require-internal-registry   true         Enforce           true
What just happened?
Supply chain security — Requiring images from an internal registry means you control every image that runs in your cluster. Your internal registry proxies approved public images (after scanning them for CVEs), and only images that pass your security scanning can be promoted to registry.company.com. This policy enforces that guarantee — nobody can sneak in an unscanned image from Docker Hub.
Kyverno works as an admission webhook — Kyverno installs a validating admission webhook that the API server calls for every Pod creation. The policy is evaluated before the Pod is persisted to etcd. The rejection message is human-readable and specific. The background: true setting also runs Kyverno against existing resources periodically — reporting violations without blocking running workloads.
Cluster-Wide Security Enforcement Architecture
Here's how all the layers work together — each one catching different classes of security violation:
Defense in Depth — Policy Enforcement Layers
- CI/CD: kubesec, trivy config, and kube-score scan manifests before they reach the cluster. Catches problems at PR time, not deploy time.
- Admission: Pod Security Admission plus Kyverno or OPA Gatekeeper validate every resource at the API server and reject violations before they are persisted.
- Runtime: security contexts (Lesson 40) limit what a compromised container can actually do.
- Network: NetworkPolicies restrict which Pods a compromised workload can reach.
Teacher's Note: The migration path from no PSA to enforced restricted
Start with warn=baseline on all namespaces — most clusters are already compliant and warnings reveal the gaps. Then add warn=restricted to application namespaces and spend 2–4 weeks fixing violations. Only flip to enforce after warn mode shows zero remaining issues.
Add Kyverno or OPA for custom policies (registry enforcement, required labels) once the PSA baseline is stable. And wire helm template | kubectl apply --dry-run=server into CI so violations are caught at PR time, not deploy time.
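That last step can be sketched as a CI job fragment. The GitHub Actions syntax here is one possible shape; the chart path, namespace, and the assumption that the runner has kubectl credentials for a test cluster are all placeholders:

```yaml
# Hypothetical CI step: render the chart, then ask the API server to validate it.
# --dry-run=server runs the full admission chain (PSA, Kyverno) without persisting anything,
# so a policy violation fails the build instead of failing the deploy.
- name: Validate manifests against admission policies
  run: |
    helm template ./chart \
      | kubectl apply --dry-run=server -f - -n payments
```

Because the validation happens server-side, this catches exactly what production admission would reject, unlike client-side linting, which can drift from the cluster's actual policy set.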
Practice Questions
1. You want to apply the restricted Pod Security Admission profile to a namespace but don't want to break existing deployments yet. Which PSA mode should you use first to discover violations without rejecting Pods?
2. What namespace label key do you set to enforce the restricted Pod Security Admission profile — causing non-compliant Pod creation to be rejected?
3. Which Pod Security Admission profile enforces that containers must run as non-root, must drop ALL capabilities, must set allowPrivilegeEscalation: false, and must set a seccomp profile?
Quiz
1. A colleague suggests using PodSecurityPolicy to enforce security baselines on a cluster running Kubernetes 1.26. What do you tell them?
2. You apply both enforce=baseline and warn=restricted labels to a namespace. What is the effect?
3. Pod Security Admission is active on your cluster. You also need to enforce that all container images come from an internal registry. Which tool handles this custom policy?
Up Next · Lesson 42
Secrets Management Best Practices
Kubernetes Secrets aren't actually secret by default. This lesson covers etcd encryption at rest, external secrets operators, Vault integration, and the patterns that keep credentials secure in a real production cluster.