Kubernetes Course
Ingress Controllers
One load balancer for all your HTTP services. That's the core promise of Ingress — and it's the difference between paying for 50 cloud load balancers and paying for one. This lesson covers how Ingress controllers work, which one to choose, and how to deploy and configure the most widely-used option.
The Problem Ingress Solves
In the previous lesson, every service that needed external access got its own LoadBalancer Service — which means its own cloud load balancer, its own IP, its own cost. For a few services this is fine. For a company with 40 microservices it becomes untenable: 40 load balancers, 40 IPs to manage, 40 sets of SSL certificates, 40 firewall rules.
Ingress solves this with a single entry point. One load balancer sits at the edge. All external HTTP/HTTPS traffic flows through it. Inside the cluster, an Ingress controller reads Ingress rules you define and routes requests to the right ClusterIP Service based on hostname and URL path. Your 40 microservices all get ClusterIP Services (cheap, internal-only) and the Ingress controller handles all the routing at the edge.
Ingress vs Ingress Controller — two separate objects
An Ingress object (Lesson 34) defines the routing rules — "requests for api.company.com/payments go to the payments-svc Service." An Ingress Controller (this lesson) is the actual running software that reads those rules and implements them. You must deploy an Ingress controller before any Ingress objects do anything. The controller watches for Ingress objects and reconfigures itself accordingly — in real time, no restart required.
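To make the split concrete, here is a minimal sketch of the kind of Ingress object Lesson 34 covers in depth. The hostname, path, and Service name are illustrative placeholders, not part of this lesson's cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-ingress            # hypothetical name
spec:
  ingressClassName: nginx           # hand this Ingress to the ingress-nginx controller
  rules:
    - host: api.company.com         # hypothetical host
      http:
        paths:
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments-svc  # a plain ClusterIP Service inside the cluster
                port:
                  number: 80
```

Without a running controller, the API server stores this object happily, but nothing routes any traffic.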
How an Ingress Controller Works
An Ingress controller is a Pod (or Deployment) running inside your cluster that does three things simultaneously: it watches the Kubernetes API for Ingress objects and translates them into its own routing configuration, it runs a reverse proxy (nginx, Envoy, HAProxy, or similar) that applies that configuration to incoming requests, and it exposes itself via a LoadBalancer or NodePort Service as the single entry point for all external traffic.
[Diagram: Ingress Architecture — One Entry Point, Many Services. A single LoadBalancer Service with 1 public IP fronts the Ingress controller Pod, which reads Ingress rules and routes requests to many ClusterIP Services.]
Choosing an Ingress Controller
There is no built-in Ingress controller in vanilla Kubernetes — you install one. There are over a dozen options. Here are the four you'll encounter most often:
| Controller | Proxy | Best for | Notes |
|---|---|---|---|
| ingress-nginx | nginx | General purpose, most widely used. Huge community, well-documented. | The Kubernetes community project. Different from nginx Inc's version. |
| AWS ALB Controller | AWS ALB | EKS native. Each Ingress creates a real AWS ALB. Deep AWS integration. | More expensive than ingress-nginx (per-ALB cost) but native WAF/shield support. |
| Traefik | Traefik | Auto-discovery, rich middleware, built-in dashboard. Popular in docker-compose migrations. | Excellent for rapid setup. Less battle-tested at extreme scale. |
| Istio Gateway | Envoy | Clusters already running Istio service mesh. L7 traffic management. | Overkill if you don't need service mesh. Heavy operational overhead. |
For the rest of this lesson we'll use ingress-nginx — the Kubernetes community project. It's what most teams start with, it has the most documentation, and the concepts transfer directly to other controllers.
Installing ingress-nginx
The scenario: You're setting up a new EKS cluster for a SaaS platform. The cluster will host 12 microservices, all HTTP. You want one load balancer, TLS termination at the edge, and URL-path-based routing. Installing ingress-nginx is the first step.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/aws/deploy.yaml
# This single manifest installs everything needed for ingress-nginx on AWS:
# - Namespace: ingress-nginx
# - Deployment: ingress-nginx-controller (the nginx pods)
# - Service: ingress-nginx-controller (type: LoadBalancer — provisions the AWS NLB)
# - ConfigMap: ingress-nginx-controller (nginx global config)
# - IngressClass: nginx (so Ingress objects know which controller to use)
# - RBAC: ClusterRole, ClusterRoleBinding, ServiceAccount
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
# Wait until the ingress-nginx controller Pods are running and ready before proceeding
# --timeout=120s: fail if not ready within 2 minutes (cloud LB provisioning takes ~60s)
kubectl get pods -n ingress-nginx
# Verify the controller is running
kubectl get svc -n ingress-nginx
# Check the LoadBalancer Service — wait for EXTERNAL-IP to be populated
# This is the public endpoint that all your Ingress routes will be reachable through
$ kubectl apply -f https://raw.githubusercontent.com/...
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
deployment.apps/ingress-nginx-controller created
service/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-6b94c75899-xk4pj   1/1     Running   0          45s

$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP    EXTERNAL-IP                                PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.44.7    a9f8e1234567.elb.us-east-1.amazonaws.com   80:31080/TCP,443:31443/TCP   55s
ingress-nginx-controller-admission   ClusterIP      10.96.22.15   <none>                                     443/TCP                      55s
What just happened?
One manifest, full installation — The apply command created a complete, production-ready ingress-nginx installation. The controller Deployment runs nginx pods. The LoadBalancer Service fronting it got an AWS NLB provisioned automatically — that DNS name is your single public endpoint for the entire cluster.
ingress-nginx-controller-admission — This ClusterIP Service is the admission webhook endpoint. When you apply an Ingress object, the Kubernetes API server sends it to this webhook for validation before persisting it. If your Ingress has invalid syntax (conflicting paths, unknown annotations), the webhook rejects it immediately with a clear error rather than silently creating a broken config.
IngressClass: nginx — The installation also created an IngressClass named nginx. Ingress objects reference this class with ingressClassName: nginx to tell Kubernetes which controller should handle them. If you run multiple Ingress controllers (different controllers for different namespaces or teams), IngressClass is how you direct each Ingress to the right controller.
Configuring ingress-nginx Globally
The ingress-nginx controller is configured via a ConfigMap in the ingress-nginx namespace. Settings here apply globally to all traffic through the controller — default timeouts, log format, SSL protocols, rate limiting defaults, and more.
The scenario: Your platform team needs to harden the ingress-nginx installation for production. You want to enforce modern TLS, add security headers globally, set reasonable timeouts, and enable the real-IP passthrough so application logs show user IPs rather than the NLB's IP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller      # Must match the controller's ConfigMap name exactly
  namespace: ingress-nginx
data:
  # TLS configuration
  ssl-protocols: "TLSv1.2 TLSv1.3"    # Only allow TLS 1.2 and 1.3 — drop insecure TLS 1.0/1.1
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
  ssl-redirect: "true"                # Redirect all HTTP requests to HTTPS
  # Real IP passthrough — critical for accurate access logs and rate limiting
  use-forwarded-headers: "true"       # Trust X-Forwarded-For headers from the upstream LB
  forwarded-for-header: "X-Forwarded-For"
  compute-full-forwarded-for: "true"  # Build complete chain of forwarded IPs
  # Timeout settings (in seconds)
  proxy-connect-timeout: "10"         # Timeout to connect to the upstream Pod
  proxy-read-timeout: "60"            # Timeout waiting for the upstream to send a response
  proxy-send-timeout: "60"            # Timeout for sending a request to the upstream
  # Security headers applied to ALL responses
  add-headers: "ingress-nginx/custom-headers"  # Reference a ConfigMap with custom headers
  # Performance
  keep-alive: "75"                    # HTTP keep-alive timeout in seconds
  keep-alive-requests: "1000"         # Max requests per keep-alive connection
  worker-processes: "auto"            # nginx worker count = number of CPU cores
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers                # Referenced by add-headers in the main ConfigMap
  namespace: ingress-nginx
data:
  X-Frame-Options: "SAMEORIGIN"       # Prevents clickjacking — iframe only from same origin
  X-Content-Type-Options: "nosniff"   # Prevents MIME type sniffing
  X-XSS-Protection: "1; mode=block"   # Browser XSS protection (legacy, but still useful)
  Referrer-Policy: "strict-origin-when-cross-origin"
  Permissions-Policy: "geolocation=(), microphone=(), camera=()"
  # HSTS: tell browsers to only use HTTPS for 1 year
  Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
$ kubectl apply -f ingress-nginx-config.yaml
configmap/ingress-nginx-controller configured
configmap/custom-headers created

$ kubectl rollout restart deployment/ingress-nginx-controller -n ingress-nginx
deployment.apps/ingress-nginx-controller restarted

$ kubectl rollout status deployment/ingress-nginx-controller -n ingress-nginx
deployment "ingress-nginx-controller" successfully rolled out

$ curl -I https://api.company.com/health
HTTP/2 200
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-xss-protection: 1; mode=block
What just happened?
Global ConfigMap changes need a reload — Unlike Ingress routing rules (which are picked up dynamically), changes to the main ingress-nginx ConfigMap alter nginx's global configuration. The controller watches this ConfigMap and reloads nginx when it changes, but a kubectl rollout restart is the reliable way to guarantee every setting takes effect, since some values are only read at startup. The restart is a graceful rolling restart of the controller pods, so traffic keeps flowing during the rollout.
Security headers on every response — By adding security headers in the controller ConfigMap, every single response flowing through the ingress — from every service, across every Ingress rule — gets those headers automatically. Application teams don't need to add them to their services individually. This is one of the highest-leverage platform team actions: one place, cluster-wide security improvement.
ssl-redirect: true — Once this is set, any HTTP request hitting the ingress controller gets a 308 Permanent Redirect to the HTTPS equivalent. You don't need to configure this in every Ingress object — the controller handles it globally. Users who accidentally type http:// get transparently upgraded.
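When one legacy service genuinely must stay reachable over plain HTTP, the global default can be overridden per Ingress. A hedged fragment, using the ingress-nginx per-Ingress annotation on a hypothetical object:

```yaml
metadata:
  name: legacy-http-ingress                              # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"    # opt this one Ingress out of the global HTTPS redirect
```

Per-Ingress annotations always win over the global ConfigMap value for the routes that Ingress defines.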
Deploying with Helm for Production
The raw manifest installation is fine for getting started. In production, most teams use Helm to install ingress-nginx — it gives you version control, easy upgrades, and configurable values without manually editing manifests.
The scenario: Your GitOps pipeline deploys all infrastructure via Helm charts committed to Git. You want ingress-nginx managed the same way — tracked, auditable, upgradeable with a single command, and configured with your production settings baked in.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Add the ingress-nginx Helm chart repository
helm repo update
# Update the local chart cache
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--version 4.9.1 \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=external \
--set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type"=ip \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true
# --version: pin the exact chart version — never use latest in production
# --set controller.replicaCount=2: run 2 controller replicas for HA
# metrics.enabled + serviceMonitor: expose Prometheus metrics + auto-configure scraping
helm get values ingress-nginx -n ingress-nginx
# Review the values that are active for this installation
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--version 4.10.0 \
--reuse-values
# Upgrade to a new chart version while reusing existing value overrides
$ helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace --version 4.9.1 ...
NAME: ingress-nginx
LAST DEPLOYED: Mon Mar 10 09:22:41 2025
NAMESPACE: ingress-nginx
STATUS: deployed
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running:
kubectl --namespace ingress-nginx get services -o wide ingress-nginx-controller
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-6b94c75899-2xkpj 1/1 Running 0 35s
ingress-nginx-controller-6b94c75899-7rvqn   1/1     Running   0          35s
What just happened?
replicaCount: 2 for HA — A single ingress-nginx controller Pod is a single point of failure. During a node drain or controller pod crash, external traffic would fail for a few seconds. Two replicas running on different nodes (enforced by default PodAntiAffinity in the chart) gives you zero-downtime controller upgrades and node maintenance without external traffic interruption.
Prometheus metrics + ServiceMonitor — With metrics.serviceMonitor.enabled=true, the Helm chart creates a ServiceMonitor resource. If Prometheus Operator is installed, it automatically starts scraping the controller's metrics: request rate, error rate, upstream response times, active connections, SSL handshake duration. This is your ingress observability layer — visible in Grafana dashboards without any manual configuration.
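With those metrics flowing, an alert rule closes the loop. A sketch, assuming the Prometheus Operator is installed; the rule and alert names are hypothetical, and the expression uses the controller's `nginx_ingress_controller_requests` counter with its `status` label:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ingress-nginx-alerts        # hypothetical name
  namespace: ingress-nginx
spec:
  groups:
    - name: ingress-nginx
      rules:
        - alert: IngressHigh5xxRate
          # Fraction of requests through the controller answered with 5xx over 5 minutes
          expr: |
            sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m]))
              / sum(rate(nginx_ingress_controller_requests[5m])) > 0.05
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "More than 5% of requests through ingress-nginx are 5xx"
```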
--reuse-values on upgrade — When upgrading the chart version, --reuse-values applies the new chart version with your existing value overrides preserved. Without it, a Helm upgrade reverts all custom settings to chart defaults — which has caused surprising production incidents when engineers forget to re-specify their values.
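Long chains of --set flags get unwieldy and are easy to mistype. Many teams instead commit a values file to Git and pass it with -f on every install and upgrade, which also sidesteps the --reuse-values pitfall. A sketch mirroring the settings from the install command above:

```yaml
# values.yaml — overrides for the ingress-nginx Helm chart
controller:
  replicaCount: 2                   # 2 replicas for HA
  nodeSelector:
    kubernetes.io/os: linux
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
  metrics:
    enabled: true                   # expose Prometheus metrics
    serviceMonitor:
      enabled: true                 # auto-configure scraping via Prometheus Operator
```

Then `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --version 4.9.1 -f values.yaml` applies the file explicitly on each run, so nothing silently reverts to chart defaults.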
Multiple Ingress Controllers in One Cluster
Large organisations sometimes run multiple Ingress controllers in the same cluster — one public-facing (internet traffic) and one private (internal VPC traffic). IngressClass objects are how you separate them.
apiVersion: networking.k8s.io/v1
kind: IngressClass                  # IngressClass: names a controller so Ingress objects can reference it
metadata:
  name: nginx-public                # Used in Ingress: ingressClassName: nginx-public
  annotations:
    ingressclass.kubernetes.io/is-default-class: "false"  # Not the default — must be specified explicitly
spec:
  controller: k8s.io/ingress-nginx  # Which controller implementation handles this class
  parameters:                       # Optional, controller-specific — the apiGroup/kind depend on the controller
    apiGroup: k8s.nginx.org
    kind: IngressClassParameters
    name: public-ingress-params     # Points to an IngressClassParameters object with extra config
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal              # Second class for the internal-facing controller
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # Default: Ingress without className gets this
spec:
  controller: k8s.io/ingress-nginx
$ kubectl get ingressclass
NAME             CONTROLLER             PARAMETERS   AGE
nginx-internal   k8s.io/ingress-nginx   <none>       4s    ← (default)
nginx-public     k8s.io/ingress-nginx   public-...   4s

$ kubectl apply -f my-ingress.yaml   # (with ingressClassName: nginx-public)
# This Ingress will be handled by the public-facing controller only
# The internal controller ignores it completely
What just happened?
IngressClass gates which controller handles each Ingress — Each Ingress controller only processes Ingress objects that reference its IngressClass. The public controller with class nginx-public ignores all Ingress objects with ingressClassName: nginx-internal. This gives you clean separation: publicly-facing teams use the public IngressClass, internal teams use the internal one, and they never interfere with each other's routing rules.
is-default-class — When a cluster has a default IngressClass, any Ingress object without an explicit ingressClassName gets handled by the default controller. Only one IngressClass should have is-default-class: "true" at a time — if more than one class is marked as default, Kubernetes rejects the creation of new Ingress objects that don't specify an ingressClassName.
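Selecting the public controller explicitly is a one-line choice in the Ingress spec. A short sketch with a hypothetical name and routing rules elided:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-ingress          # hypothetical name
spec:
  ingressClassName: nginx-public    # the public controller handles this; nginx-internal ignores it
  # ...rules as usual...
```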
Teacher's Note: The controller is infrastructure — treat it that way
The Ingress controller is as critical as your cluster's DNS (CoreDNS) or your kube-proxy. If it goes down, all external traffic to every HTTP service in your cluster stops. Yet I regularly see teams run a single ingress-nginx replica on a shared node with no resource limits and no monitoring. That's like running a production web server on a laptop with no backups.
Production checklist for your Ingress controller: (1) At least 2 replicas with podAntiAffinity to spread them across nodes. (2) Resource requests and limits set. (3) Prometheus metrics and alerting for error rate, latency, and 5xx responses. (4) TLS certificates managed by cert-manager (not manually). (5) Controller version pinned and upgrades tested in staging before production.
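Items (1) and (2) of that checklist translate directly into Helm values. A sketch under the assumption that the keys follow the ingress-nginx chart's values schema; the resource figures are illustrative starting points, not universal recommendations:

```yaml
controller:
  replicaCount: 2                       # item (1): at least 2 replicas
  topologySpreadConstraints:            # spread replicas across nodes
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app.kubernetes.io/component: controller
  resources:                            # item (2): requests and limits set
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      memory: 512Mi
```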
The Ingress rules (Lesson 34) are where developers spend most of their time — but the controller setup (this lesson) is where SREs focus. Getting the controller right once saves weeks of debugging later.
Practice Questions
1. You have two ingress-nginx controllers in one cluster — one for public internet traffic and one for internal VPC traffic. What Kubernetes object do Ingress resources reference to specify which controller should handle them?
2. You updated the ingress-nginx controller's global ConfigMap. What command do you run to apply those changes — since nginx needs to reload its configuration?
3. What is the name of the most widely-used community Kubernetes Ingress controller, which runs nginx as its reverse proxy?
Quiz
1. A developer creates several Ingress objects but forgets to install an Ingress controller. What happens to the Ingress rules?
2. What is the primary cost advantage of using an Ingress controller over giving each microservice its own type: LoadBalancer Service?
3. When ingress-nginx is installed, a second ClusterIP Service named ingress-nginx-controller-admission is created. What is its purpose?
Up Next · Lesson 34
Ingress Rules
Writing the actual routing rules — host-based and path-based routing, TLS termination, annotations for rate limiting and auth, and the rewrite tricks that every backend engineer eventually needs.