Kubernetes Lesson 19 – ConfigMaps | Dataplexa
Core Kubernetes Concepts · Lesson 19

ConfigMaps

Hardcoding configuration inside a container image is one of the oldest mistakes in software — you end up rebuilding and redeploying every time a database URL changes. ConfigMaps are Kubernetes's answer: a first-class object for storing configuration that your Pods can consume without touching the image.

Why Configuration Doesn't Belong in Images

Imagine you have a payment API container image. Inside it is a config file with DATABASE_HOST=db.staging.internal. To promote it to production you have to rebuild the image with a different value, re-push it to the registry, update the Deployment tag, and roll it out. For a one-line config change. That's the problem.

The solution is to separate configuration from code. Build the image once. Promote it through environments unchanged. The only thing that changes per-environment is the ConfigMap that gets mounted into it. This is one of the twelve-factor app principles, and Kubernetes makes it easy with ConfigMaps.

What ConfigMaps are for — and what they're not for

ConfigMaps store non-sensitive configuration: feature flags, hostnames, port numbers, log levels, timeout values, config file contents. They are not encrypted at rest by default — anything you'd be embarrassed to have logged or committed to Git belongs in a Secret (Lesson 20), not a ConfigMap. Database passwords, API keys, TLS certificates — all Secrets.

Three Ways to Create a ConfigMap

Before writing YAML, it helps to know all three creation methods. Each has a different use case.

The scenario: You're a platform engineer standardising how the payments team manages configuration. Until now, config values have been scattered across Deployment env blocks, hardcoded in Dockerfiles, and stored in a shared Google Doc that nobody keeps up to date. You're migrating everything to ConfigMaps — starting with the checkout service.

kubectl create configmap checkout-config \
  --from-literal=APP_ENV=production \
  --from-literal=LOG_LEVEL=info \
  --from-literal=DB_HOST=postgres.payments.svc.cluster.local \
  --from-literal=DB_PORT=5432
# --from-literal: create a ConfigMap from inline key=value pairs
# Good for simple, short values — not ideal for multi-line config files
# Each --from-literal adds one key-value entry to the ConfigMap data

kubectl create configmap checkout-config \
  --from-file=app.properties
# --from-file: create a ConfigMap from a file — the filename becomes the key
# The entire file content becomes the value — ideal for config files (nginx.conf, application.yml)
# (Alternative to the command above — a given ConfigMap name can only exist once per namespace)

kubectl create configmap checkout-config \
  --from-file=config-dir/
# --from-file with a directory: every file in the directory becomes a key in the ConfigMap
# Key = filename, Value = file contents — useful for bulk config migration
$ kubectl create configmap checkout-config \
  --from-literal=APP_ENV=production \
  --from-literal=LOG_LEVEL=info \
  --from-literal=DB_HOST=postgres.payments.svc.cluster.local \
  --from-literal=DB_PORT=5432
configmap/checkout-config created

$ kubectl get configmap checkout-config -o yaml
apiVersion: v1
data:
  APP_ENV: production
  DB_HOST: postgres.payments.svc.cluster.local
  DB_PORT: "5432"
  LOG_LEVEL: info
kind: ConfigMap
metadata:
  name: checkout-config
  namespace: payments

What just happened?

kubectl get configmap -o yaml — The -o yaml flag outputs the full object as YAML. This is the fastest way to inspect a ConfigMap's contents — and it shows you the canonical YAML structure that you should be committing to Git for the declarative approach. You can pipe this to a file and use it as your source of truth: kubectl get configmap checkout-config -o yaml > checkout-config.yaml.

data vs binaryData — The data field stores UTF-8 string values. If you need to store binary content (images, compiled files), there's a binaryData field that accepts base64-encoded values. In practice you'll use data for virtually everything.
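To make the shape concrete, here is a hedged sketch of a ConfigMap using binaryData — the object name and payload are invented for illustration, not part of the lesson's cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: checkout-assets         # hypothetical ConfigMap, for illustration only
  namespace: payments
data:
  APP_ENV: production           # data and binaryData can coexist in one object
binaryData:
  logo.png: iVBORw0KGgoAAAANSUhEUg==   # must be base64-encoded (truncated placeholder payload)
```

One constraint worth knowing: a key name must be unique across data and binaryData — the same key can't appear in both.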

The Declarative ConfigMap Manifest

Imperative kubectl create configmap is fine for exploration but doesn't belong in production. You want your ConfigMaps in Git so they're reviewable, versioned, and reproducible. Here's the full YAML form — including a multi-line config file stored as a value.

The scenario: The checkout service needs several environment-level settings injected at runtime, plus a full app.properties config file that the Java service reads from the filesystem on startup. You're putting both into a single ConfigMap so everything travels together.

apiVersion: v1
kind: ConfigMap                 # ConfigMaps are core v1 objects
metadata:
  name: checkout-config         # Name of the ConfigMap — referenced by Pods that consume it
  namespace: payments           # Must be in the same namespace as the Pods that use it
  labels:
    app: checkout-api           # Label it so you can find it with kubectl get cm -l app=checkout-api
data:                           # data: the key-value store — all values are strings
  APP_ENV: "production"         # Simple string values — will be injected as env vars or file keys
  LOG_LEVEL: "info"
  DB_HOST: "postgres.payments.svc.cluster.local"
  DB_PORT: "5432"               # Numbers must be quoted in ConfigMap data — values are always strings
  FEATURE_NEW_CHECKOUT: "true"  # Feature flag — toggle behaviour without redeploying the image
  MAX_CONNECTIONS: "100"

  app.properties: |             # Multi-line value using YAML literal block scalar (|)
    # Checkout service configuration
    server.port=8080
    server.timeout=30s
    database.pool.min=5
    database.pool.max=20
    cache.ttl=300
    payment.gateway.url=https://gateway.payments.internal
    payment.retry.max=3
    logging.format=json
$ kubectl apply -f checkout-configmap.yaml
configmap/checkout-config created

$ kubectl describe configmap checkout-config -n payments
Name:         checkout-config
Namespace:    payments
Labels:       app=checkout-api
Annotations:  <none>

Data
====
APP_ENV:
----
production

DB_HOST:
----
postgres.payments.svc.cluster.local

DB_PORT:
----
5432

FEATURE_NEW_CHECKOUT:
----
true

LOG_LEVEL:
----
info

MAX_CONNECTIONS:
----
100

app.properties:
----
# Checkout service configuration
server.port=8080
server.timeout=30s
database.pool.min=5
database.pool.max=20
...

BinaryData
====

Events:  <none>

What just happened?

The | literal block scalar — In YAML, a pipe character followed by an indented block preserves newlines exactly. This is how you store an entire config file as a single value in a ConfigMap. The key is app.properties and the value is the full file content as a multi-line string. When you mount this ConfigMap as a volume, Kubernetes writes this value to a file named app.properties at the mount path.

Numbers must be quoted — YAML would normally parse DB_PORT: 5432 as an integer. But every value in a ConfigMap's data field must be a string, so always quote numeric values with double quotes. Without the quotes, the API server rejects the manifest with a type error, because data only accepts strings — a subtle source of bugs when hand-editing YAML.

Consuming ConfigMaps: Method 1 — Environment Variables

There are two main ways to get ConfigMap data into a Pod: inject individual keys as environment variables, or mount the whole ConfigMap as a directory of files. Both have their place. Env vars are simpler and work for most twelve-factor apps. File mounts are essential when your app expects a config file at a specific path.

The scenario: The checkout API is a Node.js service that reads configuration from process.env. It expects APP_ENV, LOG_LEVEL, and DB_HOST as environment variables. You're going to inject them from the ConfigMap without changing a single line of application code.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout-api
  template:
    metadata:
      labels:
        app: checkout-api
    spec:
      containers:
        - name: checkout-api
          image: company/checkout-api:2.3.0
          ports:
            - containerPort: 3000
          env:
            - name: APP_ENV             # The env var name the container will see
              valueFrom:
                configMapKeyRef:        # configMapKeyRef: pull a single key from a ConfigMap
                  name: checkout-config # Which ConfigMap to read from
                  key: APP_ENV          # Which key within the ConfigMap to use
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: checkout-config
                  key: LOG_LEVEL
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: checkout-config
                  key: DB_HOST
          envFrom:                      # envFrom: inject ALL keys from a ConfigMap as env vars at once
            - configMapRef:
                name: checkout-config   # Every key in checkout-config becomes an env var
                                        # Key name = env var name, value = env var value
                                        # Useful when you want all values without listing each one
                                        # Entries under env: take precedence over envFrom keys
                                        # Keys that aren't valid env var names (like app.properties) are skipped
$ kubectl apply -f checkout-deployment.yaml
deployment.apps/checkout-api created

$ kubectl exec -it checkout-api-6f8b9d-2xpkj -n payments -- env | grep -E "APP_ENV|LOG_LEVEL|DB_|FEATURE_|MAX_"
APP_ENV=production
DB_HOST=postgres.payments.svc.cluster.local
DB_PORT=5432
FEATURE_NEW_CHECKOUT=true
LOG_LEVEL=info
MAX_CONNECTIONS=100

What just happened?

configMapKeyRef vs envFrom — configMapKeyRef pulls a single named key from a ConfigMap and maps it to a specific env var name. Use this when you want to rename the key (e.g. ConfigMap has DB_HOST but the app expects DATABASE_HOST). envFrom bulk-injects every key in the ConfigMap as an env var — simpler but you get everything, named exactly as in the ConfigMap.
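If bulk-injecting everything risks clashing with existing variables, envFrom also accepts an optional prefix field that is prepended to every key name. A short sketch against the same ConfigMap — the CHECKOUT_ prefix is an arbitrary choice for illustration:

```yaml
          envFrom:
            - prefix: CHECKOUT_           # prepended to every key from the ConfigMap
              configMapRef:
                name: checkout-config
          # APP_ENV is injected as CHECKOUT_APP_ENV, LOG_LEVEL as CHECKOUT_LOG_LEVEL, and so on
```

This keeps the bulk-injection convenience while namespacing the variables, which is useful when one Pod consumes several ConfigMaps.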

kubectl exec -it ... -- env — kubectl exec runs a command inside a running container. -it attaches an interactive terminal. Everything after -- is the command to run inside the container — here, env. This is the fastest way to verify that your ConfigMap values are actually reaching the container as expected. Pipe to grep to find specific variables.

The critical limitation of env vars — Env vars are injected at container start time and are static. If you update the ConfigMap after the Pod is running, the running container does not see the new values. The Pod must be restarted to pick up changes. If you need live config updates without restarts, use a volume mount instead — covered next.

Consuming ConfigMaps: Method 2 — Volume Mounts

Mounting a ConfigMap as a volume writes each key as a file inside the container. The key name becomes the filename. The value becomes the file content. This is how you deliver full config files — nginx.conf, application.yml, app.properties — to containers at runtime.

The scenario: The payments team's Java service reads all its config from /config/app.properties on disk. The app doesn't read environment variables at all — it was written before that was common practice. You need to mount the app.properties key from the ConfigMap as an actual file at that path. And the team wants config changes to take effect within a minute without restarting the Pod.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-processor
  template:
    metadata:
      labels:
        app: payment-processor
    spec:
      containers:
        - name: payment-processor
          image: company/payment-processor:3.1.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume       # Must match the volume name defined in volumes below
              mountPath: /config        # The directory path inside the container to mount into
              readOnly: true            # readOnly: container can read but not write to this directory
      volumes:
        - name: config-volume           # Declare the volume — referenced by volumeMounts above
          configMap:
            name: checkout-config       # Which ConfigMap to expose as this volume
            items:                      # items: selectively expose only specific keys (optional)
              - key: app.properties     # Which key in the ConfigMap to expose
                path: app.properties    # The filename to use inside the mountPath directory
                                        # Without items, ALL keys appear as files — sometimes too noisy
$ kubectl apply -f payment-processor-deployment.yaml
deployment.apps/payment-processor created

$ kubectl exec -it payment-processor-7c9d4b-x2pkz -n payments -- ls /config
app.properties

$ kubectl exec -it payment-processor-7c9d4b-x2pkz -n payments -- cat /config/app.properties
# Checkout service configuration
server.port=8080
server.timeout=30s
database.pool.min=5
database.pool.max=20
cache.ttl=300
payment.gateway.url=https://gateway.payments.internal
payment.retry.max=3
logging.format=json

What just happened?

Volume mount path — The mountPath: /config tells Kubernetes to create a directory at /config inside the container and populate it with files from the ConfigMap. The items block then says "from the ConfigMap, take the key app.properties and write it as a file named app.properties." The app reads /config/app.properties like a normal file — it has no idea Kubernetes put it there.

Live updates via volumes — Here's the key advantage over env vars: when you update the ConfigMap with kubectl apply, the kubelet detects the change and updates the mounted file inside the running container, typically within about a minute (the delay depends on the kubelet's syncFrequency setting, which defaults to one minute). The process itself must re-read the file to pick up changes — apps that watch the file with inotify or poll it on a timer will get live updates. Apps that read config once at startup still need a restart.
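What "the app must re-read the file" looks like in practice: a minimal polling sketch, generic shell rather than anything Kubernetes-specific — the default path below is where the volume mount above would place the file, and the reload message is illustrative:

```shell
# watch_config: poll a file and print a line whenever its content changes.
# Sketch only — a real sidecar or app would loop forever and trigger an
# actual reload instead of echoing.
watch_config() {
  file="$1"
  iterations="$2"     # bounded here for demonstration purposes
  last=""
  i=0
  while [ "$i" -lt "$iterations" ]; do
    hash=$(sha256sum "$file" 2>/dev/null | cut -d' ' -f1)
    if [ -n "$hash" ] && [ "$hash" != "$last" ]; then
      echo "config changed, reloading"
      last="$hash"
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Example usage inside the container:
# watch_config /config/app.properties 60
```

Apps with built-in hot reload (nginx with `reload`, many Java frameworks) do the equivalent internally; apps without it need a wrapper like this or a restart.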

Without items — all keys become files — If you omit the items block, every key in the ConfigMap becomes a file in the mount directory. For the checkout-config that means /config/APP_ENV, /config/LOG_LEVEL, /config/app.properties — one file per key. Fine for some use cases, noisy for others. Use items to be selective.
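When the target directory already contains files the app needs, mounting the whole volume at that path would shadow them. A volumeMount with subPath mounts a single key as a single file instead — a sketch assuming the same config-volume definition from the manifest above:

```yaml
          volumeMounts:
            - name: config-volume                # same volume as declared under volumes:
              mountPath: /config/app.properties  # target is a single file, not a directory
              subPath: app.properties            # pick just this key out of the volume
              readOnly: true
          # Trade-off: subPath mounts are copied once at container start and do
          # NOT receive live ConfigMap updates — unlike whole-volume mounts.
```

Use subPath only when you truly need to drop one file into an existing directory; if live updates matter, prefer a dedicated mount directory.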

ConfigMap Consumption: Side-by-Side Comparison

Both methods deliver the same data — but they behave very differently at runtime. Here's the full comparison to help you pick the right one:

Property           | Environment Variables                          | Volume Mounts
Update behaviour   | ❌ Static — Pod must restart to see changes    | ✅ Dynamic — file updated within ~60s (app must re-read)
Best for           | Simple key=value config, twelve-factor apps    | Full config files (nginx.conf, app.yml, .properties)
Multi-line values  | ❌ Awkward — env vars aren't designed for this | ✅ Natural — each key is its own file
App changes needed | ✅ None — most apps already read env vars      | App must read from the mounted file path
Visibility         | kubectl exec -- env                            | kubectl exec -- cat /config/file
Spec fields        | configMapKeyRef / envFrom                      | volumes.configMap + volumeMounts

Updating a ConfigMap in Production

The scenario: You need to change the LOG_LEVEL from info to debug during an active incident investigation — and you need it to take effect immediately without restarting Pods and interrupting live payment processing. You also need to know the fastest way to roll back if the debug logging turns out to be too verbose.

kubectl edit configmap checkout-config -n payments
# edit: opens the ConfigMap in your $EDITOR (vim by default)
# Change LOG_LEVEL from "info" to "debug", save and exit
# Kubernetes applies the change immediately upon save
# Volume-mounted consumers will see the file update within ~60 seconds
# env var consumers will NOT update — they require a Pod restart

kubectl patch configmap checkout-config -n payments \
  --type merge \
  -p '{"data":{"LOG_LEVEL":"debug"}}'
# patch: update specific fields without opening the editor
# --type merge: merge the patch with the existing data rather than replace it
# -p: the patch body as JSON — only the fields you specify are changed
# Faster than edit for scripted or automated config changes

kubectl rollout restart deployment/checkout-api -n payments
# rollout restart: gracefully restarts all Pods in the Deployment one by one
# Use this to force env-var consumers to pick up the new ConfigMap values
# It performs a rolling restart — zero downtime if you have multiple replicas
$ kubectl patch configmap checkout-config -n payments \
  --type merge \
  -p '{"data":{"LOG_LEVEL":"debug"}}'
configmap/checkout-config patched

$ kubectl get configmap checkout-config -n payments -o jsonpath='{.data.LOG_LEVEL}'
debug

$ kubectl rollout restart deployment/checkout-api -n payments
deployment.apps/checkout-api restarted

$ kubectl rollout status deployment/checkout-api -n payments
Waiting for deployment "checkout-api" rollout to finish: 1 out of 2 new replicas have been updated...
deployment "checkout-api" successfully rolled out

What just happened?

kubectl edit vs kubectl patch — edit is interactive — opens an editor, good for humans. patch is programmatic — specify exactly what to change as a JSON or YAML snippet, good for scripts and CI/CD pipelines. In both cases the change is applied to the live object in etcd immediately.

-o jsonpath — The -o jsonpath='{.data.LOG_LEVEL}' output format lets you extract a specific field from any Kubernetes object. Useful in scripts to check a value without parsing the full YAML output. .data navigates to the data field, .LOG_LEVEL navigates to that key within it.

kubectl rollout restart — This command triggers a rolling restart of all Pods in a Deployment. Each Pod is terminated and replaced one at a time (respecting maxUnavailable), so you never have zero Pods running if you have multiple replicas. It's the standard way to force a Deployment to pick up changes that don't automatically trigger a new rollout — like a ConfigMap update consumed via env vars.
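An alternative to restarting by hand is to make config changes trigger a rollout automatically. A common pattern (popularised by Helm charts) is to stamp a hash of the ConfigMap content onto the pod template — a sketch, where the annotation name checksum/config is an arbitrary convention and the value is a placeholder you'd compute in CI:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Recompute this whenever the ConfigMap changes (e.g. sha256 of the
        # ConfigMap manifest). A changed annotation changes the pod template,
        # which triggers a rolling update of the Deployment automatically.
        checksum/config: "<sha256-of-checkout-configmap.yaml>"   # placeholder value
```

With this in place, applying a ConfigMap change and the updated Deployment together restarts the Pods without anyone remembering to run rollout restart.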

Teacher's Note: ConfigMaps are not secrets — but they're also not that private

ConfigMaps are stored as plaintext in etcd. Any user with kubectl get configmap access in a namespace can read every value. This is intentional for configuration — non-sensitive values shouldn't need encryption overhead. But it also means you must never put passwords, tokens, or private keys in a ConfigMap. The rule is simple: if you'd be uncomfortable committing it to a public Git repo, it's a Secret, not a ConfigMap.

One more gotcha worth knowing: if a Pod references a ConfigMap that doesn't exist, the Pod will fail to start with a CreateContainerConfigError. Always deploy ConfigMaps before the Pods that consume them — or better yet, put the Namespace, ConfigMap, and Deployment in one YAML file separated by --- so they deploy together in order.
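That ordering advice can be sketched as a single multi-document manifest — trimmed here to the essentials, with the Deployment spec elided since it matches the earlier example:

```yaml
# kubectl apply processes documents top-to-bottom, so the Namespace and
# ConfigMap exist before the Deployment's Pods try to consume them.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: checkout-config
  namespace: payments
data:
  LOG_LEVEL: "info"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
  namespace: payments
# spec: ...as in the Deployment manifest shown earlier
```

A single kubectl apply -f of this file then creates everything in a safe order.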

Practice Questions

1. What field in a container spec injects all keys from a ConfigMap as environment variables at once, without needing to list each key individually?



2. You update a ConfigMap and want the running Pod to see the new values within 60 seconds without a restart. Which consumption method must you be using — environment variables or volume mount?



3. The checkout-api Deployment uses a ConfigMap via environment variables. You've updated the ConfigMap. What kubectl command triggers a zero-downtime rolling restart to force the Pods to pick up the new values?



Quiz

1. A Deployment references a ConfigMap named app-config via envFrom, but that ConfigMap does not exist in the namespace. What happens when a Pod from this Deployment starts?


2. Which of the following should not be stored in a ConfigMap?


3. When a ConfigMap is mounted as a volume into a container, how do the keys and values in the ConfigMap's data field appear inside the container?


Up Next · Lesson 20

Secrets

Everything ConfigMaps do — but for sensitive data. Plus why Kubernetes Secrets aren't as secret as you think, and how to actually secure them.