Kubernetes Lesson 38 – Roles and RoleBindings | Dataplexa
Networking, Ingress & Security · Lesson 38

Roles and RoleBindings

Knowing that RBAC exists is one thing. Knowing how real engineering organisations structure their permissions — which built-in roles to reuse, how to model multi-team namespaces, how to scope CI/CD pipeline access, and how to prevent privilege escalation — is what this lesson covers.

The Built-in ClusterRoles

Kubernetes ships with several built-in ClusterRoles that cover the most common permission patterns. Rather than writing your own from scratch, understand these first — you can often apply them directly or use them as starting templates.

cluster-admin
  Grants: Full access to everything in the cluster. The superuser role, with no restrictions.
  Use via: ClusterRoleBinding only — never grant this to automated systems or developers.

admin
  Grants: Full read/write on most namespaced resources, including Roles and RoleBindings within the namespace. Cannot manage the namespace itself or cluster-scoped resources.
  Use via: RoleBinding — for namespace owners and team leads.

edit
  Grants: Read/write on most namespaced resources. Cannot view or manage Roles and RoleBindings, which prevents privilege escalation.
  Use via: RoleBinding — for developers deploying applications.

view
  Grants: Read-only on most namespaced resources. Cannot view Secrets — good for observability without data exposure.
  Use via: RoleBinding — for on-call engineers, dashboards, and auditors.

The scenario: Your company uses a three-tier access model for each namespace: team leads get admin, developers get edit, and the monitoring stack gets view. By using built-in ClusterRoles via namespace-scoped RoleBindings, you never write a single Role manifest — just the bindings.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-admin-binding
  namespace: payments
subjects:
  - kind: User
    name: sarah@company.com          # Team lead
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: payments-leads             # Any user in this OIDC group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                  # Reference the built-in ClusterRole
  name: admin                        # Scoped to 'payments' namespace by this RoleBinding

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-edit-binding
  namespace: payments
subjects:
  - kind: Group
    name: payments-engineers         # All engineers on the payments team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                         # Can create/update/delete but cannot touch Roles or RoleBindings

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-view-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: prometheus                 # Prometheus scraper service account
    namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                         # Read-only, no Secrets access
$ kubectl apply -f payments-rbac.yaml
rolebinding.rbac.authorization.k8s.io/payments-admin-binding created
rolebinding.rbac.authorization.k8s.io/payments-edit-binding created
rolebinding.rbac.authorization.k8s.io/payments-view-binding created

$ kubectl get rolebindings -n payments
NAME                      ROLE                AGE
payments-admin-binding    ClusterRole/admin   4s
payments-edit-binding     ClusterRole/edit    4s
payments-view-binding     ClusterRole/view    4s

$ kubectl auth can-i create deployments -n payments \
  --as=sarah@company.com
yes

$ kubectl auth can-i delete namespaces \
  --as=sarah@company.com
no   ← admin is namespace-scoped — cluster resources still blocked

What just happened?

ClusterRole via RoleBinding = namespace-scoped power — Sarah gets admin permission in the payments namespace only — she cannot delete namespaces, manage nodes, or touch other namespaces. The ClusterRole defines what is allowed; the RoleBinding scopes where. This reuse pattern means you define your access tiers once as ClusterRoles and apply them to any namespace with a single RoleBinding.

The edit role and privilege escalation prevention — The built-in edit ClusterRole deliberately excludes the ability to view or modify Roles and RoleBindings. This is critical: an engineer with edit cannot grant themselves or others more permissions. The separation between editing resources and managing access control is a key principle of least-privilege RBAC design.

Custom Roles for Specific Workloads

Built-in ClusterRoles cover human access well. For service accounts used by automated systems — CI/CD pipelines, operators, custom controllers — you almost always need a precisely scoped custom Role. The principle: grant only the exact verbs on the exact resources the workload needs, nothing more.

The scenario: You're deploying a GitOps-style CI/CD pipeline (ArgoCD-style) for the payments team. The pipeline needs to: apply Deployments and Services from Git manifests, read ConfigMaps to check current state, and list Pods to verify rollout health. It must not be able to read Secrets, delete resources, or touch other namespaces.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-deployer            # Dedicated service account for the pipeline
  namespace: ci-cd                   # Runs in its own namespace for isolation
  labels:
    app: payments-pipeline

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-deployer-role
  namespace: payments                # The namespace the pipeline deploys TO
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "create", "update", "patch"]
    # No delete — pipeline should not be able to destroy running workloads
    # Rolling updates use update/patch, not delete

  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "create", "update", "patch"]

  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # Watch for rollout status monitoring
    # No exec, no delete — pipeline observes Pods, doesn't interact with them

  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "create", "update", "patch"]

  # Explicitly NOT included:
  # - secrets (no access to credentials)
  # - pods/exec (no shell access)
  # - roles, rolebindings (no privilege escalation)
  # - delete verb on anything (no destructive operations)

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-deployer-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-deployer
    namespace: ci-cd                 # ServiceAccount in the ci-cd namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: payments-deployer-role
$ kubectl apply -f payments-deployer-rbac.yaml
serviceaccount/payments-deployer created
role.rbac.authorization.k8s.io/payments-deployer-role created
rolebinding.rbac.authorization.k8s.io/payments-deployer-binding created

$ kubectl auth can-i create deployments -n payments \
  --as=system:serviceaccount:ci-cd:payments-deployer
yes   ← note: 'apply' is not an RBAC verb — check create, update, and patch individually

$ kubectl auth can-i get secrets -n payments \
  --as=system:serviceaccount:ci-cd:payments-deployer
no   ← cannot read secrets ✓

$ kubectl auth can-i delete deployments -n payments \
  --as=system:serviceaccount:ci-cd:payments-deployer
no   ← cannot delete — only create/update/patch ✓

$ kubectl auth can-i --list -n payments \
  --as=system:serviceaccount:ci-cd:payments-deployer | grep -v "^Non-resource"
Resources                     Non-Resource URLs   Resource Names   Verbs
configmaps                    []                  []               [get list create update patch]
pods                          []                  []               [get list watch]
services                      []                  []               [get list create update patch]
deployments.apps              []                  []               [get list create update patch]
statefulsets.apps             []                  []               [get list create update patch]
ingresses.networking.k8s.io   []                  []               [get list create update patch]

What just happened?

Deliberately omitting delete — The pipeline can create and update resources but cannot delete them. This is intentional. A GitOps pipeline applying manifests from Git should only converge toward the desired state — it shouldn't be in the business of deleting resources that aren't in the current commit. If a Deployment needs to be removed, that requires a human with appropriate permissions. This constraint prevents a bug in the pipeline from destroying production workloads.

Cross-namespace binding — The ServiceAccount lives in ci-cd. The Role and RoleBinding live in payments. This cross-namespace subject pattern is essential for centralised CI/CD: the pipeline runs in a dedicated namespace, but its permissions are granted in each team namespace via a RoleBinding there.

apply needs create + update/patch — kubectl apply uses create for new objects and patch or update for existing ones, plus get to read the current state first. A pipeline needs all four verbs on a resource to apply manifests reliably. Many RBAC misconfigurations come from granting only create: the first deploy succeeds, but every subsequent one fails because the object already exists and the pipeline lacks update or patch.
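As a rule fragment, the minimal grant for applying Deployments looks like this — a sketch to adapt to whatever resources your pipeline actually manages:

```yaml
# Minimal verb set for `kubectl apply` on Deployments (sketch):
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "create", "update", "patch"]
  # get:          read the live object before diffing
  # create:       first apply of a new object
  # update/patch: every subsequent apply
```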

ClusterRoles for Cluster-Wide Resources

Some resources are cluster-scoped — Nodes, PersistentVolumes, Namespaces, StorageClasses, ClusterRoles themselves. You cannot use a namespace-scoped Role to grant permissions on these. A ClusterRole with a ClusterRoleBinding is required.

The scenario: Your platform team runs a custom node autoscaler that needs to: read Node objects to assess capacity, list PersistentVolumes to track storage allocation, and create Namespaces for new tenant onboarding. These are all cluster-scoped resources.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                    # ClusterRole: needed for cluster-scoped resources
metadata:
  name: platform-autoscaler-role
  labels:
    component: autoscaler
rules:
  - apiGroups: [""]
    resources: ["nodes"]             # Nodes are cluster-scoped
    verbs: ["get", "list", "watch"]  # Read node capacity and status

  - apiGroups: [""]
    resources: ["persistentvolumes"] # PVs are cluster-scoped (not PVCs)
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["namespaces"]        # Namespaces are cluster-scoped
    verbs: ["get", "list", "create"] # Can create new tenant namespaces

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]    # StorageClasses are cluster-scoped
    verbs: ["get", "list"]

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: platform-autoscaler
  namespace: platform-tools

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding             # ClusterRoleBinding: applies cluster-wide
metadata:
  name: platform-autoscaler-binding
subjects:
  - kind: ServiceAccount
    name: platform-autoscaler
    namespace: platform-tools
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: platform-autoscaler-role
$ kubectl apply -f platform-autoscaler-rbac.yaml
clusterrole.rbac.authorization.k8s.io/platform-autoscaler-role created
serviceaccount/platform-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/platform-autoscaler-binding created

$ kubectl get clusterrolebindings | grep autoscaler
platform-autoscaler-binding   ClusterRole/platform-autoscaler-role   8s

$ kubectl auth can-i list nodes \
  --as=system:serviceaccount:platform-tools:platform-autoscaler
yes

$ kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:platform-tools:platform-autoscaler
no   ← ClusterRoleBinding grants node/PV access but NOT pods in all namespaces

What just happened?

ClusterRole is still scoped to what you list — Even with a ClusterRoleBinding, the autoscaler only gets the resources explicitly listed in the ClusterRole rules. It cannot list Pods across all namespaces because Pods aren't in the ClusterRole. A ClusterRoleBinding with a ClusterRole doesn't mean "access to everything" — it means the specified resources are accessible cluster-wide.

When to use ClusterRoleBinding vs RoleBinding — Use ClusterRoleBinding only when the workload genuinely needs to operate across namespaces or on cluster-scoped resources. A common mistake is giving CI/CD pipelines ClusterRoleBindings "because it's simpler" — then the pipeline has read/write access to every namespace. Use namespace-scoped RoleBindings and create one per namespace the pipeline deploys to. It's more configuration but dramatically limits the blast radius of a compromise.
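In manifest form, the per-namespace pattern is one RoleBinding in each target namespace, all referencing a single shared ClusterRole. The names below are illustrative, not from this lesson's examples:

```yaml
# Hypothetical: one shared deployer ClusterRole ('team-deployer', defined
# once), bound into each target namespace with its own RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: checkout                # repeat this manifest per target namespace
subjects:
  - kind: ServiceAccount
    name: pipeline                   # hypothetical pipeline service account
    namespace: ci-cd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                  # defined once, reused everywhere
  name: team-deployer                # hypothetical shared deployer ClusterRole
```

If the pipeline is compromised, the damage is limited to the namespaces that carry one of these bindings — not the whole cluster.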

Aggregated ClusterRoles

Kubernetes has a mechanism called aggregated ClusterRoles that automatically merges permissions from multiple ClusterRoles. The built-in view, edit, and admin ClusterRoles use aggregation — which is why CRDs can automatically plug into them.
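Aggregation is driven by an aggregationRule on the parent ClusterRole: a controller watches for ClusterRoles whose labels match the selectors and copies their rules in. The built-in edit role works this way; a custom parent follows the same shape (the name and label below are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-aggregate             # hypothetical custom parent role
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.company.com/aggregate-to-monitoring: "true"   # hypothetical label
rules: []                                # managed by the controller — filled in
                                         # from every matching ClusterRole
```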

The scenario: You've installed a custom resource definition (CRD) for database provisioning. The platform team wants the built-in edit ClusterRole to automatically include permissions on the new CRD — so all engineers who have edit access automatically get CRD edit access too, without updating any RoleBindings.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: database-provisioner-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    # This magic label tells Kubernetes to merge this ClusterRole
    # into the built-in 'edit' ClusterRole automatically
    # Same pattern: aggregate-to-view, aggregate-to-admin
rules:
  - apiGroups: ["db.company.com"]    # Your CRD's API group
    resources: ["databases"]         # The CRD resource name
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    # Every subject bound to 'edit' (and, via aggregation, 'admin') now
    # automatically has full CRUD on Database objects — with no
    # RoleBinding changes required

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: database-provisioner-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    # Users with 'view' can now see Database objects too
rules:
  - apiGroups: ["db.company.com"]
    resources: ["databases"]
    verbs: ["get", "list", "watch"]
$ kubectl apply -f database-aggregation.yaml
clusterrole.rbac.authorization.k8s.io/database-provisioner-edit created
clusterrole.rbac.authorization.k8s.io/database-provisioner-view created

$ kubectl describe clusterrole edit | grep databases
  databases.db.company.com   []   []   [create delete get list patch update watch]
  ← the Database CRD now appears in the built-in edit ClusterRole automatically

$ kubectl auth can-i create databases -n payments \
  --as=dev@company.com --as-group=payments-engineers
yes   ← the group's 'edit' binding in payments now covers databases automatically

What just happened?

Zero-touch permission propagation — By adding the aggregation label, the new CRD permissions flow automatically into every existing binding of the edit ClusterRole across the entire cluster. Every engineer who had edit access to any namespace now also has edit access to databases in those namespaces. No RoleBinding updates required. This is the intended extension mechanism — CRD authors can plug into the existing access hierarchy.

The risk of aggregation — The flip side: adding aggregation labels to CRD ClusterRoles automatically expands permissions for every subject bound to the target ClusterRole. If you have 500 engineers with edit access and install a sensitive CRD, they all get access immediately. Always consider the blast radius before adding aggregation labels. For sensitive CRDs (secrets management, billing, infrastructure provisioning), create explicit separate bindings instead of aggregating.
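For a sensitive CRD, the explicit alternative might look like this sketch — a dedicated ClusterRole with no aggregation labels, bound only to the group that needs it (all names hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: billing-accounts-edit          # hypothetical role for a sensitive CRD
  # Deliberately no aggregate-to-* labels — this never merges into 'edit'
rules:
  - apiGroups: ["billing.company.com"] # hypothetical CRD API group
    resources: ["accounts"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: billing-accounts-edit-binding
  namespace: billing
subjects:
  - kind: Group
    name: billing-admins               # only this group, only this namespace
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: billing-accounts-edit
```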

RBAC for Multi-Team Clusters: The Reference Architecture

Here's how a real organisation with multiple product teams on a shared cluster structures its RBAC. Each namespace follows the same template:

Per-Namespace RBAC Template

Platform team
  Binding: ClusterRoleBinding → cluster-admin
  Can do: Everything. Cluster management only.

[team]-leads group
  Binding: RoleBinding (namespace) → admin
  Can do: Full namespace control, including sub-team RBAC.

[team]-engineers group
  Binding: RoleBinding (namespace) → edit
  Can do: Deploy, scale, update — no RBAC management.

On-call engineers
  Binding: RoleBinding (namespace) → view, plus pods/exec*
  Can do: Read everything, exec for debugging — no writes.

CI/CD pipeline ServiceAccount
  Binding: RoleBinding (namespace) → custom deployer role
  Can do: Create/update workloads — no delete, no Secrets access.

Monitoring ServiceAccount
  Binding: RoleBinding (namespace) → view
  Can do: Read Pod and Service state for dashboards.

* On-call exec access is granted via a separate temporary RoleBinding that gets rotated — it is never permanent.
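The temporary exec grant might look like this sketch — a narrow Role covering only the pods/exec subresource, bound to the on-call engineer and deleted when the rotation ends (all names hypothetical). It pairs with view, which supplies the Pod read access kubectl exec also needs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oncall-exec
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods/exec"]           # subresource: exec only, no other Pod writes
    verbs: ["create"]                  # exec sessions are created, not read
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oncall-exec-current            # hypothetical: recreated each rotation
  namespace: payments
subjects:
  - kind: User
    name: oncall@company.com           # hypothetical on-call engineer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oncall-exec
```

Deleting the RoleBinding revokes exec access immediately — the Role itself can stay in place for the next rotation.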

Teacher's Note: RBAC auditing — the habit that saves you

RBAC configurations drift over time. Engineers are onboarded, granted access, leave the company, and sometimes their bindings aren't cleaned up. Service accounts from decommissioned pipelines keep running with live permissions. The audit habit: once a quarter, run kubectl get rolebindings,clusterrolebindings -A -o wide and review every binding that references cluster-admin — there should be very few, and you should be able to justify every one.

A useful one-liner for finding all cluster-admin bindings: kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin") | {name: .metadata.name, subjects: .subjects}'. Run it and count the results. If you have more than 3-5 cluster-admin bindings and you can't explain every one, something has drifted.

Also worth knowing: Kubernetes API server audit logs record every RBAC decision. In production, enable API server audit logging and ship the logs to a SIEM. Any forbidden (403) response is a potential attack or misconfiguration. Any cluster-admin action from an unexpected source is an incident.
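A minimal audit policy sketch that records RBAC-relevant activity at the Metadata level could look like this — the API server wiring (the --audit-policy-file flag and a log backend) is not shown, and the level choices are an assumption you should tune for your own volume budget:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who touched RBAC objects, without request/response bodies
  - level: Metadata
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  # Drop everything else to keep log volume manageable
  - level: None
```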

Practice Questions

1. Which built-in ClusterRole grants full read/write access to most namespace resources but deliberately excludes permissions to view or modify Roles and RoleBindings — preventing privilege escalation?



2. A CI/CD pipeline needs to run kubectl apply on Deployments. apply creates new objects and updates existing ones. Which three verbs must be granted on Deployments to support this reliably?



3. You create a new CRD and want all users who have the built-in edit ClusterRole to automatically have edit permissions on the new resource. What label do you add to your new ClusterRole?



Quiz

1. An engineer has the edit ClusterRole bound in the payments namespace. They try to create a RoleBinding to grant themselves cluster-admin access in that namespace. What happens?


2. A CI/CD pipeline ServiceAccount needs to deploy to 5 specific namespaces. Should you use a ClusterRoleBinding or a RoleBinding per namespace?


3. You want to grant a ServiceAccount permission to list Nodes in the cluster. Why can't you use a namespace-scoped Role for this?


Up Next · Lesson 39

Service Accounts

How Pods authenticate to the Kubernetes API, how to create and bind dedicated service accounts for your workloads, and how projected token volumes replace the old secret-based token mechanism.