Kubernetes Lesson 32 – ClusterIP, NodePort, LoadBalancer | Dataplexa
Networking, Ingress & Security · Lesson 32

ClusterIP, NodePort, LoadBalancer

A Service without a type is internal-only. A Service exposed to the internet is completely different. Kubernetes gives you three Service types for three scopes of access — and picking the wrong one is how you accidentally expose your database to the internet or wonder why external traffic can't reach your app.

The Three Service Types at a Glance

The three Service types form a hierarchy: each builds on top of the previous one. NodePort includes ClusterIP. LoadBalancer includes NodePort and ClusterIP. Think of them as concentric circles of exposure — each type adds one more layer that reaches further out.

Service Types — Concentric Circles of Exposure

LoadBalancer
Cloud load balancer → NodePort → ClusterIP → Pod. Reachable from the internet.
NodePort
Port on every node → ClusterIP → Pod. Reachable from the node network (any node IP).
ClusterIP
Virtual IP inside the cluster only. Reachable from any Pod. Not accessible from outside.
Pods
The actual workloads. All Service types ultimately route traffic here.

ClusterIP: The Default, Internal-Only Service

ClusterIP is the default Service type — if you don't specify a type, you get ClusterIP. It creates a stable virtual IP address that is only reachable from inside the cluster. External clients — your laptop, users' browsers, anything outside the Kubernetes network — cannot reach a ClusterIP Service directly.

This is the right type for the vast majority of microservice-to-microservice communication. Your auth service, your payment processor, your database — all of these should be ClusterIP. Only the services that need to accept traffic from outside the cluster need anything more.

The scenario: You're deploying the order management system. It consists of three microservices: an order API (needs to accept external traffic eventually), a pricing engine (internal only, called by the order API), and a PostgreSQL database (internal only, called by both). The pricing engine and database get ClusterIP Services.

apiVersion: v1
kind: Service
metadata:
  name: pricing-engine-svc          # Service name — also the DNS hostname within the namespace
  namespace: production
  labels:
    app: pricing-engine
spec:
  type: ClusterIP                   # Explicit — same as omitting type entirely
  selector:
    app: pricing-engine             # Route traffic to Pods with this label
  ports:
    - name: http                    # name: optional but recommended — used in Ingress and multi-port Services
      protocol: TCP
      port: 80                      # Port exposed by the Service (callers use this port)
      targetPort: 8080              # Port the container actually listens on
                                    # port: 80 → targetPort: 8080 translation is transparent to callers
  sessionAffinity: None             # None: round-robin load balancing (default)
                                    # ClientIP: sticky sessions — same client IP always routes to same Pod
                                    # Rarely needed unless your app stores session state in Pod memory
$ kubectl apply -f pricing-engine-svc.yaml
service/pricing-engine-svc created

$ kubectl get svc pricing-engine-svc -n production
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
pricing-engine-svc    ClusterIP   10.96.47.182    <none>        80/TCP    5s

$ kubectl describe svc pricing-engine-svc -n production
Name:              pricing-engine-svc
Namespace:         production
Selector:          app=pricing-engine
Type:              ClusterIP
IP Family Policy:  SingleStack
IP:                10.96.47.182
Port:              http  80/TCP
TargetPort:        8080/TCP
Endpoints:         10.244.1.8:8080,10.244.2.9:8080,10.244.3.4:8080
Session Affinity:  None

What just happened?

EXTERNAL-IP: <none> — This confirms the Service is ClusterIP — no external access. Any Pod inside the cluster can now call the pricing engine at http://pricing-engine-svc (within the same namespace) or http://pricing-engine-svc.production.svc.cluster.local (fully qualified, from any namespace). The ClusterIP 10.96.47.182 is stable and will never change for the life of this Service.

port vs targetPort — The Service listens on port 80. The container listens on 8080. kube-proxy does the translation. Callers use port 80 and never need to know the app's internal port. This decoupling is important: if the app team decides to change the internal port from 8080 to 9090, only the Service's targetPort changes — all callers keep using port 80 without any update to their code or config.

Headless Services (clusterIP: None) — A special variant: set clusterIP: None to create a headless Service. Instead of a virtual IP, DNS returns the individual Pod IPs directly. Used by StatefulSets for stable DNS names per replica (pod-0.svc-name, pod-1.svc-name) and by client-side load balancers that need direct Pod IP access.
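As a concrete illustration, a minimal headless Service manifest might look like this — a sketch only; the name, namespace, and selector label are hypothetical, so adapt them to your StatefulSet:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless         # hypothetical name — typically matches the StatefulSet's serviceName
  namespace: production
spec:
  clusterIP: None                 # headless: no virtual IP; DNS returns Pod IPs directly
  selector:
    app: postgres                 # hypothetical label on the StatefulSet's Pods
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432
```

With this in place, each replica gets a stable per-Pod DNS name such as postgres-0.postgres-headless.production.svc.cluster.local.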

NodePort: Exposing Services on Node IPs

NodePort extends ClusterIP by additionally opening a port on every node in the cluster — typically in the range 30000–32767. External clients can reach the Service by hitting any node's IP on that port. kube-proxy then forwards the traffic to the backing Pods, regardless of which node they're running on.

NodePort is the simplest way to expose a service externally — no cloud load balancer required. It's commonly used in on-premises environments, bare-metal clusters, and development setups. The downside: clients must know a node IP, and if that node goes down, they need to try another. No automatic failover at the DNS level.

The scenario: Your team runs Kubernetes on-premises with bare metal nodes. There's no cloud load balancer available. The order API needs to accept traffic from an external reverse proxy (nginx running outside the cluster) which will forward requests to the Kubernetes nodes. NodePort is the right tool for this.

apiVersion: v1
kind: Service
metadata:
  name: order-api-svc
  namespace: production
spec:
  type: NodePort                    # NodePort: opens a port on every node in the cluster
  selector:
    app: order-api
  ports:
    - name: http
      protocol: TCP
      port: 80                      # ClusterIP port — still accessible internally at 80
      targetPort: 3000              # Container port
      nodePort: 30080               # Port opened on EVERY node's IP (range: 30000–32767)
                                    # If omitted, Kubernetes picks a random port in the range
                                    # Specifying it explicitly makes it predictable for firewall rules
  externalTrafficPolicy: Local      # Local: only route to Pods on the SAME node that received traffic
                                    # Preserves the original client IP (prevents SNAT)
                                    # Cluster (default): can forward to any node — loses original client IP
                                    # Local requires Pods on every node or some nodes get no traffic
$ kubectl apply -f order-api-nodeport.yaml
service/order-api-svc created

$ kubectl get svc order-api-svc -n production
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
order-api-svc   NodePort   10.96.88.201    <none>        80:30080/TCP   6s

$ kubectl get nodes -o wide
NAME              STATUS   INTERNAL-IP     EXTERNAL-IP
node-eu-west-1a   Ready    192.168.0.10    54.123.45.67
node-eu-west-1b   Ready    192.168.0.11    54.123.45.68
node-eu-west-1c   Ready    192.168.0.12    54.123.45.69

# External nginx reverse proxy config (outside the cluster):
# upstream order_api {
#   server 192.168.0.10:30080;
#   server 192.168.0.11:30080;
#   server 192.168.0.12:30080;
# }
# Nginx load-balances across all nodes — if one node is down, nginx stops sending to it

What just happened?

PORT(S): 80:30080/TCP — This notation means port 80 is the ClusterIP port (internal) and 30080 is the NodePort (external). Traffic arriving at any node IP on port 30080 gets forwarded to the Pods on targetPort 3000. All three access paths work: Pod-to-Pod via ClusterIP:80, external via any NodeIP:30080, and both end up at containerPort:3000.

externalTrafficPolicy: Local vs Cluster — Cluster (default): traffic arriving on Node 1 can be forwarded to a Pod on Node 2. The client's original IP is replaced with the node's IP (SNAT) — you lose the real client IP in your application logs. Local: traffic only goes to Pods on the same node. No SNAT — client IP preserved. But if no Pod runs on the receiving node, the connection is dropped. Use Local when you need client IP visibility and ensure your Pods run on all nodes (DaemonSet or high replica count).
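The "Pods on all nodes" requirement for Local can be met with a DaemonSet, which schedules exactly one replica per node. A hedged sketch — the image name is hypothetical, and the app: order-api label must match the Service selector from the manifest above:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: order-api
  namespace: production
spec:
  selector:
    matchLabels:
      app: order-api              # must match the NodePort Service's selector
  template:
    metadata:
      labels:
        app: order-api
    spec:
      containers:
        - name: order-api
          image: registry.example.com/order-api:1.0   # hypothetical image
          ports:
            - containerPort: 3000                     # matches the Service's targetPort
```

Because every node now runs a Pod, no node silently drops connections under externalTrafficPolicy: Local.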

NodePort limitations — You only get one service per NodePort number. With hundreds of services, the high port range fills up fast. Firewall rules get complex. No TLS termination. No path-based routing. This is why NodePort is primarily for on-premises or development — cloud environments use LoadBalancer or Ingress instead.

LoadBalancer: Cloud-Native External Exposure

LoadBalancer is the cloud-native way to expose a Service externally. When you create a LoadBalancer Service on EKS, GKE, or AKS, the cloud controller manager automatically provisions a cloud load balancer (on AWS an NLB or Classic ELB — ALBs are provisioned by Ingress, not by Services — on GCP a Cloud Load Balancer, on Azure an Azure LB), assigns it a public IP or DNS name, and configures it to forward traffic to the NodePorts on your cluster nodes. The whole thing happens automatically when you run kubectl apply.

The scenario: Your order API needs to be directly reachable from the internet on a stable public IP — no Ingress controller involved. You're running on EKS and want an AWS Network Load Balancer (NLB) provisioned automatically. Here's the Service with the NLB-specific annotations.

apiVersion: v1
kind: Service
metadata:
  name: order-api-lb
  namespace: production
  annotations:
    # AWS NLB annotations — tell the cloud controller manager what kind of LB to provision
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
                                    # ip mode: NLB targets Pod IPs directly (requires AWS VPC CNI)
                                    # instance mode: NLB targets node instances on NodePort (default)
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
                                    # internet-facing: public IP. internal: VPC-internal IP only
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
                                    # Distribute traffic evenly across AZs — prevents hot AZ imbalance
spec:
  type: LoadBalancer                # Triggers cloud controller manager to provision an LB
  selector:
    app: order-api
  ports:
    - name: http
      protocol: TCP
      port: 80                      # Port the LB listens on externally
      targetPort: 3000              # Port on the Pod
    - name: https
      protocol: TCP
      port: 443                     # HTTPS on the LB (TLS handled by app or cert-manager)
      targetPort: 3000
  externalTrafficPolicy: Local      # Preserve client IPs — NLB in ip mode supports this cleanly
  loadBalancerSourceRanges:         # Whitelist: only allow traffic from these CIDRs
    - "0.0.0.0/0"                   # Open to internet — remove this to restrict to specific IPs
                                    # Example: "203.0.113.0/24" for office IP range only
$ kubectl apply -f order-api-lb.yaml
service/order-api-lb created

$ kubectl get svc order-api-lb -n production
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)          AGE
order-api-lb   LoadBalancer   10.96.122.45    a1b2c3d4e5f6.elb.us-east-1.amazonaws.com                80:31204/TCP,443:31205/TCP   45s

$ kubectl describe svc order-api-lb -n production | grep -A5 "Events:"
Events:
  Type    Reason                Age  From                Message
  ----    ------                ---  ----                -------
  Normal  EnsuringLoadBalancer  30s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   8s   service-controller  Ensured load balancer

What just happened?

EXTERNAL-IP populated automatically — Within about 30–60 seconds, the cloud controller manager provisioned an AWS NLB and populated the EXTERNAL-IP field with the LB's DNS name. External clients can now reach the order API at http://a1b2c3d4e5f6.elb.us-east-1.amazonaws.com. The cloud load balancer distributes traffic across your node IPs on the auto-assigned NodePort (31204 for port 80).

One LoadBalancer per Service — Every LoadBalancer Service provisions a separate cloud load balancer. On AWS this means a separate ELB/NLB per service. At $0.008/hour per LB plus data charges, having 50 microservices each with a LoadBalancer Service gets expensive fast. This is the primary reason Ingress (Lesson 33) exists — one load balancer shared across all services, with routing handled by path/host rules inside the cluster.

Annotations are cloud-specific — The service.beta.kubernetes.io/aws-load-balancer-* annotations are specific to the AWS Load Balancer Controller. GKE has its own annotations (cloud.google.com/neg, cloud.google.com/load-balancer-type), Azure has its own, and on-premises clusters without a cloud controller will leave the EXTERNAL-IP in <pending> forever.

ExternalName: DNS Aliasing Outside the Cluster

There's a fourth Service type worth knowing: ExternalName. It doesn't create a ClusterIP or expose any ports. Instead it creates a DNS CNAME record inside the cluster pointing to an external hostname. This lets your Pods call an external service using a stable internal DNS name — and you can change the external destination by updating the Service without touching any application config.

apiVersion: v1
kind: Service
metadata:
  name: external-payments-gateway   # Internal DNS name your Pods will use
  namespace: production
spec:
  type: ExternalName                 # ExternalName: creates a CNAME in CoreDNS, no ClusterIP
  externalName: api.stripe.com       # The external hostname to alias
                                     # Pods calling external-payments-gateway get CNAME → api.stripe.com
                                     # To switch from Stripe to Braintree: change externalName here
                                     # Zero application code changes required
$ kubectl apply -f external-payments-gateway.yaml
service/external-payments-gateway created

$ kubectl get svc external-payments-gateway -n production
NAME                        TYPE           CLUSTER-IP   EXTERNAL-IP       PORT(S)   AGE
external-payments-gateway   ExternalName   <none>       api.stripe.com    <none>    3s

$ kubectl run debug-pod --image=nicolaka/netshoot --rm -it --restart=Never -n production
bash-5.1# nslookup external-payments-gateway
Server:         10.96.0.10
Address:        10.96.0.10#53

external-payments-gateway.production.svc.cluster.local  canonical name = api.stripe.com.
Name:     api.stripe.com
Address:  54.187.174.169

What just happened?

CNAME resolution chain — When a Pod resolves external-payments-gateway, CoreDNS returns a CNAME record pointing to api.stripe.com. The DNS client then resolves api.stripe.com to its actual IP. The application code never needs to know about Stripe's real hostname — it just calls http://external-payments-gateway.

Migration use case — ExternalName is excellent during cloud migrations. Point your internal service names at external legacy systems while you migrate them progressively. Once the migration is complete, change the Service type from ExternalName to ClusterIP pointing to the new in-cluster service. Application code untouched throughout.
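To make that final step concrete, here is a sketch of what the Service might look like after the migration — the metadata.name stays identical so callers are untouched; the selector label and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-payments-gateway   # same DNS name the Pods have been calling all along
  namespace: production
spec:
  type: ClusterIP                   # was ExternalName — now routes to the migrated in-cluster service
  selector:
    app: payments-gateway           # hypothetical label on the new in-cluster Pods
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443              # hypothetical container port
```

Callers keep using http(s)://external-payments-gateway; only the Service manifest changed.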

Service Type Decision Guide

Every time you write a Service manifest, run through this decision tree:

Use case · Service type · Why
Microservice called only by other Pods · ClusterIP · Internal only. Smallest attack surface. Default and correct.
Database, cache, message queue · ClusterIP · Never expose data stores externally. ClusterIP is the security boundary.
On-premises external access, development · NodePort · No cloud LB available. Pair with external nginx/HAProxy for HA.
Cloud: single service needs a public IP · LoadBalancer · Automated cloud LB provisioning. Best for TCP/UDP, low-latency, high-throughput.
Cloud: many HTTP services on one IP · ClusterIP + Ingress · One LB, many services, path/host routing. Most cost-efficient for HTTP. (Lessons 33–34)
External hostname aliasing · ExternalName · DNS CNAME for external services. Migration stepping stone.

Multi-Port Services and Named Ports

The scenario: Your monitoring service exposes two ports: port 8080 for the main API and port 9090 for Prometheus metrics. You need a single Service that routes to both ports — with each port addressable by name, so later access rules can differ per port (for example, a NetworkPolicy restricting the metrics port to the monitoring namespace).

apiVersion: v1
kind: Service
metadata:
  name: monitoring-api-svc
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: monitoring-api
  ports:
    - name: http                    # Named ports: required when a Service has multiple ports
      protocol: TCP
      port: 8080                    # Service port for the API
      targetPort: http              # targetPort can reference a named containerPort
                                    # Defined in the Pod spec as: ports: [{name: http, containerPort: 8080}]
                                    # Referencing by name is more resilient than by number
    - name: metrics                 # Second port for Prometheus scraping
      protocol: TCP
      port: 9090
      targetPort: metrics           # Container port named "metrics"
                                    # If the team later changes the metrics port from 9090 to 9091,
                                    # they update the Pod spec only — Service targetPort reference stays valid
$ kubectl apply -f monitoring-api-svc.yaml
service/monitoring-api-svc created

$ kubectl get svc monitoring-api-svc -n production
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
monitoring-api-svc    ClusterIP   10.96.33.91     <none>        8080/TCP,9090/TCP   4s

$ kubectl describe svc monitoring-api-svc -n production
Port:              http  8080/TCP
TargetPort:        http/TCP
Endpoints:         10.244.1.12:8080,10.244.2.5:8080
Port:              metrics  9090/TCP
TargetPort:        metrics/TCP
Endpoints:         10.244.1.12:9090,10.244.2.5:9090

What just happened?

Named ports over numbered ports — Using targetPort: http instead of targetPort: 8080 makes the Service resilient to port number changes. If the development team changes the container's HTTP port, they update the Pod spec's containerPort name mapping — and the Service automatically routes to the new port. No Service manifest change needed.

When multi-port Services are required — If a Service exposes more than one port, every port entry must have a name field. Kubernetes will reject the manifest with a validation error if you have two unnamed ports on the same Service. The name must be unique within the Service.
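For reference, here is a sketch of the Pod side that the targetPort: http and targetPort: metrics references assume — the image name is hypothetical, and in practice these ports would live in a Deployment's Pod template rather than a bare Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-api
  namespace: production
  labels:
    app: monitoring-api             # matched by the Service's selector
spec:
  containers:
    - name: monitoring-api
      image: registry.example.com/monitoring-api:1.0   # hypothetical image
      ports:
        - name: http                # resolved by the Service's targetPort: http
          containerPort: 8080
        - name: metrics             # resolved by the Service's targetPort: metrics
          containerPort: 9090
```

If the team renumbers a containerPort, only this spec changes — the Service keeps referencing the port by name.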

Teacher's Note: LoadBalancer Services are expensive at scale — know when to use Ingress instead

A startup with 5 services might be fine with 5 LoadBalancer Services — the cost is trivial. A company with 50 microservices paying $0.008/hour per NLB is burning roughly $3,500/year ($0.008 × 8,760 hours × 50) on load balancers that sit mostly idle. This is where the Ingress pattern (Lessons 33–34) pays for itself many times over: one load balancer shared across every HTTP service in the cluster, with routing handled by host name and URL path.

The rule of thumb I use: if the service is HTTP/HTTPS and will have other HTTP services alongside it, use ClusterIP + Ingress. If the service is non-HTTP (TCP/UDP — database proxies, game servers, streaming), use LoadBalancer. If you need source IP preservation at scale, use LoadBalancer with externalTrafficPolicy: Local. Otherwise, start with ClusterIP and only add external access when you actually need it.

One security reminder: the default Service type in most manifests engineers copy from tutorials is ClusterIP — which is good. The dangerous mistake is changing it to LoadBalancer on a database Service to "make it easier to connect from my laptop." That database now has a public IP. I have seen this happen.

Practice Questions

1. You are deploying a Redis cache that should only be accessible from other Pods inside the cluster — never from outside. Which Service type should you use?



2. You need a NodePort Service that preserves the original client IP address in your application logs rather than replacing it with the node's IP. Which field do you set, and to what value?



3. You want Pods in your cluster to call an external third-party API at api.stripe.com using a stable internal DNS name so you can swap providers without changing application code. Which Service type creates a DNS CNAME alias for this?



Quiz

1. You create a type: LoadBalancer Service on a bare-metal on-premises cluster that has no cloud controller manager. What happens to the EXTERNAL-IP field?


2. A StatefulSet needs each replica to have a stable, individual DNS name (pod-0.svc, pod-1.svc) for direct Pod addressing. Which Service configuration achieves this?


3. Your company is running 80 HTTP microservices on EKS, each with its own type: LoadBalancer Service. What is the main problem with this approach?


Up Next · Lesson 33

Ingress Controllers

One load balancer, all your HTTP services — how Ingress controllers solve the cost and complexity problem of exposing dozens of services to the internet.