Kubernetes Lesson 18 – Namespaces | Dataplexa
Core Kubernetes Concepts · Lesson 18

Namespaces

A single Kubernetes cluster can serve dozens of teams, hundreds of services, and multiple environments — but only if you partition it correctly. Namespaces are how you do that, and leaving everything in the default namespace is one of the most common mistakes engineers make when they start running Kubernetes in production.

What a Namespace Actually Is

A namespace is a virtual partition inside a Kubernetes cluster. Objects inside a namespace are isolated from objects in other namespaces — they have their own scope for names, resource quotas, and access controls. Two teams can both have a Deployment called api in the same cluster without collision, as long as they're in different namespaces.

Think of a Kubernetes cluster like a large office building. The building is the cluster — shared infrastructure, shared utilities, shared security desk at the front. Each floor is a namespace — the payments team is on floor 3, the identity team is on floor 7. They share the building but they don't share office space, they don't read each other's whiteboards, and their guest lists (RBAC policies) are separate. The building management (Kubernetes control plane) runs across all floors.

⚠️ Namespaces are NOT security boundaries

Namespaces provide name isolation and quota boundaries, but not hard security isolation. A Pod in namespace A can still talk to a Pod in namespace B over the network by default — unless you explicitly block it with Network Policies (Lesson 36). If you need true multi-tenant isolation between untrusted workloads, you need additional tooling like separate clusters or virtual clusters. Namespaces are for organisation, not for security guarantees.
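To make "explicitly block it" concrete, here is a sketch of a default-deny ingress policy of the kind Lesson 36 covers properly. The metadata name is our own choice, and it only has any effect if your CNI plugin (Calico, Cilium, and similar) actually enforces NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress    # illustrative name
  namespace: payments
spec:
  podSelector: {}               # empty selector: applies to every Pod in this namespace
  policyTypes:
    - Ingress                   # Ingress listed with no rules below = all inbound traffic denied
```

With this in place, Pods in other namespaces can no longer reach Pods in payments until you add explicit allow rules.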

The Four System Namespaces

Every fresh Kubernetes cluster ships with four namespaces already created. You need to understand what each one is for — and which ones you should never touch.

For each of these namespaces: what it is for, and whether you should use it.

default
Purpose: where objects land when you don't specify a namespace. The fallback.
Should you use it? Only for quick experiments and learning. Never for production workloads.

kube-system
Purpose: Kubernetes control plane components: CoreDNS, kube-proxy, kube-scheduler, etcd, API server pods.
Should you use it? Never deploy your apps here. Modifying things here can break the entire cluster.

kube-public
Purpose: readable by all users, including unauthenticated ones. Used for cluster-level public info like bootstrap ConfigMaps.
Should you use it? Rarely touched. Leave it alone unless you have a specific use case.

kube-node-lease
Purpose: holds Lease objects used by the kubelet on each node to send heartbeats to the control plane, which is how the control plane knows nodes are alive.
Should you use it? Never touch it. Purely internal cluster health mechanism.

Creating and Working with Namespaces

The scenario: Your company runs one cluster shared across three product teams — payments, identity, and notifications. Until now, everyone has been dumping everything into default. Names are colliding, resource usage is invisible, and one team's runaway Pod ate all the CPU last week and took everyone down. You're introducing a proper namespace structure today.

kubectl get namespaces
# List all namespaces in the cluster — shorthand: kubectl get ns

kubectl create namespace payments
# Imperative: create a namespace immediately — fine for quick setup
# But for production, prefer the declarative YAML approach below

kubectl create namespace identity
kubectl create namespace notifications
kubectl create namespace monitoring    # For Prometheus, Grafana, and alerting stack
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   5d
kube-node-lease   Active   5d
kube-public       Active   5d
kube-system       Active   5d

$ kubectl create namespace payments
namespace/payments created

$ kubectl create namespace identity
namespace/identity created

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   5d
identity          Active   3s
kube-node-lease   Active   5d
kube-public       Active   5d
kube-system       Active   5d
payments          Active   8s

What just happened?

STATUS: Active — A namespace is either Active or Terminating. Terminating means the namespace was deleted and Kubernetes is in the process of cleaning up all objects inside it. If a namespace gets stuck in Terminating, it usually means a finalizer on one of its objects is blocking deletion — a common gotcha with some CRDs and operators.
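If you hit that gotcha, these commands show what is holding a namespace open (the namespace name staging here is hypothetical):

```shell
# Finalizers on the namespace object itself
kubectl get namespace staging -o jsonpath='{.spec.finalizers}'

# List every namespaced object still present inside the namespace:
# whatever remains is usually what carries the blocking finalizer
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n staging --ignore-not-found -o name
```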

kubectl get ns. The ns here is the short form of namespaces. Most Kubernetes resource types have shorthand aliases: po for pods, svc for services, deploy for deployments, cm for configmaps. Run kubectl api-resources to see the full list with their short names.
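Two useful variants of that command. Note that namespaces themselves are cluster-scoped objects, so they appear in the second listing, not the first:

```shell
kubectl api-resources --namespaced=true    # resource types that live inside a namespace (pods, services, ...)
kubectl api-resources --namespaced=false   # cluster-scoped types (nodes, persistentvolumes, namespaces)
```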

Namespace as Code: The Declarative Approach

In production you want your namespaces in Git, just like everything else. A Namespace is a first-class Kubernetes object with a YAML manifest. You can add labels and annotations to namespaces — tools like Helm, network policy engines, and cost allocation tools all read namespace-level labels.

apiVersion: v1              # Namespaces are in the core v1 API group
kind: Namespace             # Creating a Namespace object
metadata:
  name: payments            # The namespace name: DNS label rules apply (lowercase alphanumerics and hyphens)
  labels:
    team: payments          # Which team owns this namespace
    env: production         # Which environment
    cost-centre: "CC-4412"  # For FinOps chargeback — same labels you put on Pods
    istio-injection: enabled  # Example: tells Istio service mesh to auto-inject sidecars in this namespace
  annotations:
    contact: "payments-oncall@company.com"   # Annotation: freeform metadata, not used for selection
    docs: "https://wiki.company.com/payments" # Link to team runbook — visible in kubectl describe
$ kubectl apply -f payments-namespace.yaml
namespace/payments configured

$ kubectl describe namespace payments
Name:         payments
Labels:       cost-centre=CC-4412
              env=production
              istio-injection=enabled
              team=payments
Annotations:  contact: payments-oncall@company.com
              docs: https://wiki.company.com/payments
Status:       Active
No resource quota.
No LimitRange resource.

What just happened?

Labels vs Annotations — This is a distinction worth cementing. Labels are for selection — they're indexed and queryable with -l. Annotations are for arbitrary metadata that doesn't need to be queried by selectors. Longer strings, URLs, JSON blobs — they belong in annotations. Tools like Helm, Prometheus, and service meshes read both, but for completely different purposes.
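Because labels are indexed, they work directly in selectors. Using the labels from the manifest above:

```shell
kubectl get namespaces -l team=payments                 # namespaces labelled team=payments
kubectl get namespaces -l team=payments,env=production  # both labels must match (logical AND)
kubectl get ns -L team,env                              # add TEAM and ENV columns to the listing
# There is no selector equivalent for annotations: they are metadata, not an index
```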

istio-injection: enabled — This is a real-world example of a namespace label that external tooling reads. When Istio service mesh is installed, it watches for namespaces with this label and automatically injects a sidecar proxy container into every Pod created in that namespace. One label on the namespace, zero changes to any Pod manifest. This pattern — controllers reacting to namespace labels — is extremely common in the Kubernetes ecosystem.

No resource quota / No LimitRange — These two lines in kubectl describe namespace tell you whether any resource constraints have been applied to this namespace. Right now there are none — which means a runaway Pod could consume unlimited CPU and memory. We fix that next.

ResourceQuotas: Capping What a Namespace Can Consume

A ResourceQuota sets hard limits on the total resources a namespace can consume. If a namespace is at its CPU quota, no new Pods can be scheduled there until existing ones are scaled down or deleted. This is how you prevent one noisy team from starving everyone else.

The scenario: The payments team's namespace currently has no limits. Last Thursday, a developer accidentally set replicas: 500 on a test Deployment (they meant 5). It ate every CPU core in the cluster and brought down three other services. You're adding a ResourceQuota to the payments namespace so this can never happen again.

apiVersion: v1
kind: ResourceQuota             # ResourceQuota: hard resource cap for an entire namespace
metadata:
  name: payments-quota          # Name of this quota object
  namespace: payments           # Apply this quota to the payments namespace
spec:
  hard:                         # hard: the absolute maximum — cannot be exceeded
    pods: "50"                  # Max 50 Pods total in this namespace at any time
    requests.cpu: "10"          # Total CPU requests across all Pods cannot exceed 10 cores
    requests.memory: 20Gi       # Total memory requests cannot exceed 20 gibibytes
    limits.cpu: "20"            # Total CPU limits cannot exceed 20 cores
    limits.memory: 40Gi         # Total memory limits cannot exceed 40 gibibytes
    configmaps: "20"            # Max 20 ConfigMap objects in this namespace
    secrets: "30"               # Max 30 Secret objects
    services: "15"              # Max 15 Service objects
    persistentvolumeclaims: "10" # Max 10 PersistentVolumeClaims
$ kubectl apply -f payments-quota.yaml
resourcequota/payments-quota created

$ kubectl describe resourcequota payments-quota -n payments
Name:                    payments-quota
Namespace:               payments
Resource                 Used   Hard
--------                 ----   ----
configmaps               2      20
limits.cpu               2      20
limits.memory            4Gi    40Gi
persistentvolumeclaims   1      10
pods                     8      50
requests.cpu             800m   10
requests.memory          2Gi    20Gi
secrets                  4      30
services                 3      15

What just happened?

Used vs Hard columns: kubectl describe resourcequota gives you a live view of current consumption against the cap. The payments namespace is currently using 800m CPU (0.8 cores) of the 10-core limit. This is your capacity dashboard for the namespace.

When a ResourceQuota tracks compute resources, every Pod must declare them — this is the hidden consequence that trips people up. Once a quota covering requests.cpu, requests.memory, limits.cpu, or limits.memory is applied to a namespace, every Pod created in that namespace must specify resources.requests and resources.limits for those resources. If a Pod manifest omits them, the API server rejects it (unless a LimitRange injects defaults, which we set up next). This is actually a good thing: it forces your team to think about resource sizing.
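As a sketch, a minimal Pod manifest that this quota would accept (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-worker                  # illustrative name
  namespace: payments
spec:
  containers:
    - name: worker
      image: registry.example.com/payments/worker:1.4   # illustrative image
      resources:
        requests:          # counted against requests.cpu / requests.memory in the quota
          cpu: "250m"
          memory: "256Mi"
        limits:            # counted against limits.cpu / limits.memory in the quota
          cpu: "500m"
          memory: "512Mi"
```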

Object count quotas — you can cap not just CPU and memory but object counts too. Limiting Secrets to 30 and Services to 15 prevents namespace sprawl and keeps object listings manageable. It also prevents accidental resource leaks from scripts that create objects without cleaning up.

LimitRange: Per-Pod and Per-Container Defaults

A LimitRange works at a finer level than ResourceQuota. Where ResourceQuota caps the total for the whole namespace, LimitRange sets defaults and min/max bounds for individual Pods and containers. Together they form a complete resource governance layer.

apiVersion: v1
kind: LimitRange                # LimitRange: per-object resource bounds within a namespace
metadata:
  name: payments-limits
  namespace: payments
spec:
  limits:
    - type: Container           # Apply these rules to every Container in this namespace
      default:                  # default: injected if the container doesn't specify limits
        cpu: "500m"             # Any container without a CPU limit gets 500m automatically
        memory: "256Mi"         # Any container without a memory limit gets 256Mi automatically
      defaultRequest:           # defaultRequest: injected if container doesn't specify requests
        cpu: "100m"             # Default CPU request if not specified
        memory: "128Mi"         # Default memory request if not specified
      max:                      # max: the container CANNOT request/limit above this
        cpu: "2"                # No single container can use more than 2 CPU cores
        memory: "2Gi"           # No single container can use more than 2Gi RAM
      min:                      # min: the container MUST request at least this much
        cpu: "50m"              # Container must request at least 50 millicores
        memory: "64Mi"          # Container must request at least 64Mi RAM
    - type: Pod                 # Apply a separate rule at the Pod level (sum of all containers)
      max:
        cpu: "4"                # The total CPU limit across all containers in a Pod cannot exceed 4
        memory: "4Gi"           # The total memory limit across all containers in a Pod cannot exceed 4Gi
$ kubectl apply -f payments-limitrange.yaml
limitrange/payments-limits created

$ kubectl describe limitrange payments-limits -n payments
Name:       payments-limits
Namespace:  payments
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Container   cpu       50m   2     100m             500m           -
Container   memory    64Mi  2Gi   128Mi            256Mi          -
Pod         cpu       -     4     -                -              -
Pod         memory    -     4Gi   -                -              -

What just happened?

Default injection — the default and defaultRequest fields mean that even if a developer forgets to specify resources in their Pod manifest, the LimitRanger admission controller automatically injects sensible defaults at creation time. The developer never sees it happen; the Pod is simply created with reasonable resource settings.

ResourceQuota + LimitRange together — These two objects are designed to work as a pair. ResourceQuota enforces namespace-level totals. LimitRange enforces per-object bounds and provides defaults. A namespace with both configured is a well-governed namespace — developers can deploy without worrying about resource settings (LimitRange provides defaults) but can't accidentally consume the entire cluster (ResourceQuota caps the total).
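You can watch the default injection happen by creating a Pod with no resources block and reading back what the API server actually stored (the Pod name here is throwaway):

```shell
kubectl run limits-test -n payments --image=nginx       # no resources specified in the request
kubectl get pod limits-test -n payments \
  -o jsonpath='{.spec.containers[0].resources}'
# The stored Pod should show the LimitRange defaults:
# requests cpu=100m memory=128Mi, limits cpu=500m memory=256Mi
kubectl delete pod limits-test -n payments              # clean up
```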

Navigating Namespaces with kubectl

The scenario: You're an SRE managing a cluster with eight active namespaces. During an incident you're constantly switching between the payments namespace to check Pods and the monitoring namespace to check Prometheus. Here are the kubectl patterns that will save you the most time.

kubectl get pods -n payments
# -n: namespace flag — show objects in the specified namespace
# Without -n, kubectl defaults to whatever namespace is set in your kubeconfig context
# Long form: --namespace payments

kubectl get pods --all-namespaces
# Show pods across ALL namespaces — adds a NAMESPACE column to the output
# Shorthand: kubectl get pods -A
# Essential during incidents when you don't know which namespace the problem is in

kubectl get pods -A | grep CrashLoop
# Pipe to grep to find broken pods across the entire cluster instantly
# One of the most useful one-liners when something is wrong and you don't know where

kubectl config set-context --current --namespace=payments
# Change the DEFAULT namespace for your current kubectl context
# After this, all kubectl commands without -n target payments instead of default
# Check the current context name: kubectl config current-context
# List all contexts and their default namespaces: kubectl config get-contexts

kubectl get all -n payments
# The most common resources in the payments namespace at once:
# Pods, Deployments, ReplicaSets, Services, Jobs
# Despite the name, "all" is a curated subset: ConfigMaps, Secrets, and Ingresses are not included

kubectl delete namespace staging
# Deletes the namespace AND everything inside it — Pods, Services, Secrets, ConfigMaps, PVCs
# This is irreversible. There is no confirmation prompt. Be very careful.
$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-system    coredns-787d4945fb-2xkpj            1/1     Running   0          5d
kube-system    coredns-787d4945fb-8rvnq            1/1     Running   0          5d
kube-system    etcd-control-plane                  1/1     Running   0          5d
kube-system    kube-apiserver-control-plane        1/1     Running   0          5d
kube-system    kube-proxy-m4czl                    1/1     Running   0          5d
kube-system    kube-scheduler-control-plane        1/1     Running   0          5d
monitoring     prometheus-7d6b9c8f4d-p9wxt         1/1     Running   0          2d
payments       checkout-api-6f8b9d-2xpkj           1/1     Running   0          1d
payments       checkout-api-6f8b9d-7rvqn           1/1     Running   0          1d
payments       payment-api-7d9c4b-xr7nq            1/1     Running   0          1d
identity       auth-service-5c7d8f-p2rkx           1/1     Running   0          3d

$ kubectl config set-context --current --namespace=payments
Context "prod-cluster" modified.

$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
checkout-api-6f8b9d-2xpkj       1/1     Running   0          1d
checkout-api-6f8b9d-7rvqn       1/1     Running   0          1d
payment-api-7d9c4b-xr7nq        1/1     Running   0          1d

What just happened?

kubectl get pods -A — The -A flag (short for --all-namespaces) is your global cluster health view. Notice you can see the control plane components in kube-system: etcd, kube-apiserver, kube-scheduler, and coredns, all running as Pods. If any of these show anything other than Running, your cluster has a serious problem.

set-context --namespace — After setting the default namespace to payments, the next plain kubectl get pods shows only Pods in that namespace — no -n needed. Most engineers working on a specific service set their context namespace to that team's namespace at the start of their shift. A tool called kubens (part of the kubectx toolkit) makes switching namespaces even faster — worth installing.

kubectl delete namespace is catastrophic — It silently deletes everything. No "are you sure?" prompt. No recycle bin. If you accidentally delete a production namespace, your only recovery path is from a backup. Always double-check your context namespace before running destructive commands, and consider using RBAC to restrict namespace deletion to cluster admins only.
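Two cheap habits before any destructive command. Neither of these changes the cluster:

```shell
# Which namespace is my current context actually pointing at?
kubectl config view --minify -o jsonpath='{..namespace}'

# Eyeball what lives in the namespace you are about to delete
kubectl get all -n staging

# Validate the delete without performing it
kubectl delete namespace staging --dry-run=client
```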

Namespace Architecture for a Real Production Cluster

Here's how a well-structured multi-team production cluster typically partitions its namespaces:

Production Cluster Namespace Layout

🔒 SYSTEM (do not touch)
kube-system
kube-public
kube-node-lease
🛠️ PLATFORM
monitoring
logging
ingress-nginx
cert-manager
👥 PRODUCT TEAMS
payments
identity
notifications
inventory
Each product team namespace gets:
ResourceQuota (CPU/memory caps)
LimitRange (per-Pod defaults + max)
RBAC RoleBinding (team gets edit access)
NetworkPolicy (restrict cross-ns traffic)
Cross-namespace DNS pattern:
[service].[namespace].svc.cluster.local

e.g. payments calling identity:
http://auth-svc.identity.svc.cluster.local

Teacher's Note: The default namespace trap

Almost every Kubernetes beginner dumps everything into default. It works fine for a week. Then the cluster has 40 Deployments in one namespace, nobody knows who owns what, names start colliding, and applying any RBAC or quota becomes a cleanup project that takes weeks. The time to introduce namespaces is day one — not after you've already created the mess.

A good rule of thumb: one namespace per team per environment. If you have a payments team that needs both staging and production in the same cluster: payments-staging and payments-prod. Clean, queryable, and each one can have its own quota and RBAC policy.
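Following that rule of thumb, a single manifest file can declare both of the payments team's namespaces (the labels are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments-staging
  labels:
    team: payments
    env: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod
  labels:
    team: payments
    env: production
```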

And remember: Services are namespace-scoped. A Service named api-svc in the payments namespace is a completely different object from api-svc in the identity namespace. When calling across namespaces, you must use the full DNS name: api-svc.identity.svc.cluster.local. Just api-svc resolves only within the caller's own namespace.
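You can verify that resolution behaviour from inside the cluster with a throwaway Pod (the busybox image is just a convenient source of nslookup):

```shell
kubectl run dns-test -n payments --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup api-svc.identity.svc.cluster.local
# Resolves if the Service exists in identity. A bare "nslookup api-svc" from the
# same Pod would only find a Service named api-svc in payments, the caller's own namespace
```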

Practice Questions

1. Which Kubernetes system namespace contains the control plane components like the API server, scheduler, etcd, and CoreDNS — and should never have application workloads deployed into it?



2. What Kubernetes object do you create in a namespace to set hard limits on the total CPU, memory, and object counts that the entire namespace can consume?



3. A Pod in the payments namespace needs to call a Service named auth-svc in the identity namespace. What is the full DNS name it should use?



Quiz

1. A ResourceQuota has been applied to the payments namespace. A developer tries to apply a Pod manifest that has no resources block. What happens?


2. Which Kubernetes object automatically injects default CPU and memory requests/limits into a container if the developer's manifest does not specify them?


3. Which statement best describes the security properties of Kubernetes namespaces?


Up Next · Lesson 19

ConfigMaps

How to decouple configuration from container images — so you never have to rebuild and redeploy just to change a database URL or feature flag again.