Kubernetes Course
Services Overview
Your Deployment is running 3 Pods. They're healthy, they're self-healing, they update without downtime. But right now, nobody can reach them. No traffic can get in. Nothing inside the cluster knows their addresses. This lesson fixes that — with Services, the object that gives your Pods a stable, reliable address and spreads traffic across all of them automatically.
Use Play with Kubernetes at labs.play-with-k8s.com or start Minikube with minikube start. Confirm with kubectl get nodes.
The Problem — Pod IPs Are Unreliable
Every Pod gets its own IP address. Sounds useful — until you realise those IPs are completely temporary. The moment a Pod restarts, gets rescheduled, or is replaced during a rolling update, it gets a brand new IP address. The old one is gone forever.
Think about what that means in practice. Your frontend Pod talks to your backend Pod at 10.244.0.5. The backend Pod crashes and restarts at 10.244.0.9. Your frontend has no idea. It's still calling the old IP. Requests fail.
And if you have 3 Pods behind a Deployment — which one should the frontend call? Even if the IPs were stable, the frontend would need to know all three addresses and pick one. That's networking code you'd have to write yourself.
A Service solves both problems. It gives you one stable address that never changes, and automatically load balances traffic across all the healthy Pods behind it.
The best way to think about it
A Service is like a company's main phone number. Staff move desks, change their personal extensions, go on holiday — but the main number stays the same. Anyone who calls that number gets connected to whoever is available right now. You don't need to know which individual is picking up.
What a Service Actually Does
A Service does three things:
- Gives your Pods one stable virtual IP (the ClusterIP) that never changes, no matter how often the Pods behind it are replaced.
- Load balances traffic across all the healthy Pods that match its label selector.
- Registers a DNS name, like payment-service.default.svc.cluster.local, that any Pod in the cluster can use to find it.

The Three Service Types — Which One to Use When
Kubernetes has three main Service types. Each one exposes your Pods differently — from tightly private to fully public. Pick the right one for your situation.
ClusterIP — The Most Common Service Type
Most of your services will be ClusterIP — internal to the cluster, invisible from outside. If your frontend talks to your backend, and your backend talks to a database, those are all ClusterIP services. Only the frontend needs to be exposed externally.
The scenario: Your payment API Deployment is running. You need other services inside the cluster — like the auth service and the frontend — to be able to reach it reliably. Let's create a ClusterIP Service for it.
apiVersion: v1 # Services use the core v1 API
kind: Service # The object type
metadata:
name: payment-service # This becomes the DNS name other Pods use to reach it
namespace: default # Same namespace as the Pods it's routing to
spec:
type: ClusterIP # Internal only — not reachable from outside the cluster
selector: # Which Pods should receive traffic from this Service
app: payment-api # Route to any Pod with this label — matches our Deployment
ports:
- protocol: TCP # The protocol — almost always TCP
port: 80 # The port the Service listens on (what callers use)
targetPort: 80 # The port on the Pod to forward traffic to
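In this example port and targetPort happen to match, which hides the distinction. Here's a hedged sketch of the same ports section for a hypothetical container that listens on 8080 instead (the 8080 is an assumption, not part of our Deployment):

```yaml
# Hypothetical variant: the container listens on 8080, callers still use 80
ports:
- protocol: TCP
  port: 80         # Callers still use payment-service:80
  targetPort: 8080 # ...but traffic is delivered to the container's port 8080
```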
This trips up almost every beginner. port and targetPort are two different ports on two different things:
- port is the port the Service itself listens on. Other Pods call payment-service:80 to reach the payment API.
- targetPort is the port on the Pod's container that the Service forwards traffic to.
First, let's make sure we have a Deployment running, then create the Service:
# Create a quick Deployment to go with our Service
kubectl create deployment payment-api --image=nginx:1.25 --replicas=3
# Apply the Service YAML (save it as payment-service.yaml first)
kubectl apply -f payment-service.yaml
# See the Service
kubectl get service payment-service
# Short form also works
kubectl get svc
# Full details including which Pods it's routing to
kubectl describe service payment-service
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
payment-service   ClusterIP   10.96.142.87   <none>        80/TCP    5s

Name:              payment-service
Namespace:         default
Selector:          app=payment-api
Type:              ClusterIP
IP:                10.96.142.87
Port:              80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.0.5:80,10.244.0.6:80,10.244.0.7:80
Session Affinity:  None
Events:            <none>
CLUSTER-IP: 10.96.142.87 — this is the stable virtual IP of the Service. This IP never changes, even as Pods come and go. Any Pod inside the cluster can reach the payment API at this IP on port 80.
EXTERNAL-IP: <none> — ClusterIP Services are internal only. Nothing from outside the cluster can reach this IP. That's intentional — your payment service shouldn't be publicly accessible directly.
Endpoints: 10.244.0.5:80, 10.244.0.6:80, 10.244.0.7:80 — these are the actual IPs of the three Pods behind this Service right now. The Endpoints Controller updates this list every time a Pod starts, stops, or fails its health check. kube-proxy reads this list to route traffic. When a Pod is replaced after a crash, its new IP appears here automatically.
Testing That the Service Works
The scenario: You need to confirm that the Service is actually routing traffic to your Pods. The ClusterIP is only reachable from inside the cluster — so you'll open a temporary Pod, and from inside that Pod, call the Service by name. This is the standard way to test internal Services.
# Spin up a temporary Pod with curl available, then get a shell inside it
# --rm means the Pod deletes itself when you exit
# -it means interactive terminal
kubectl run test-pod --image=curlimages/curl:latest -it --rm -- /bin/sh
# You're now inside the test Pod. Run these commands from in there:
# Call the Service by its DNS name — Kubernetes resolves this automatically
curl http://payment-service
# Call it using the full DNS name (same result)
curl http://payment-service.default.svc.cluster.local
# Call it by its ClusterIP directly (replace with your actual IP from above)
curl http://10.96.142.87
# Exit when done
exit
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working.</p>
</body>
</html>
You got the nginx welcome page — which means the full chain worked. From inside the test Pod, DNS resolved payment-service to the ClusterIP 10.96.142.87. kube-proxy routed that to one of the three backend Pods. That Pod served the nginx welcome page.
You called the Service by name, not by IP — this is the key habit to build. Always connect services to each other using DNS names, never hardcoded IPs. The name payment-service will work forever. The ClusterIP 10.96.142.87 could theoretically change if the Service is deleted and recreated.
The --rm flag on the kubectl run command means the Pod cleans itself up automatically when you exit. Perfect for quick tests — no cleanup needed.
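One more DNS detail worth knowing: the short name only resolves from Pods in the same namespace as the Service. A sketch of the three name forms, assuming the Service lives in the default namespace:

```shell
# Short name — works only from Pods in the same namespace
curl http://payment-service
# Name qualified with the namespace — works from any other namespace
curl http://payment-service.default
# Fully qualified name — works from anywhere in the cluster
curl http://payment-service.default.svc.cluster.local
```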
NodePort — Opening a Door From Outside
ClusterIP is internal only. What if you need something external to reach your service — maybe you're testing on a local Minikube cluster and want to open it in a browser, or you have a non-cloud cluster without a cloud load balancer?
A NodePort Service opens a specific port on every node in your cluster. Anything that can reach any node — inside or outside the cluster — can reach your service on that port.
The scenario: Your team is running a demo on a Minikube cluster. The product manager wants to see the app in a browser. You need to expose it externally, just for today.
apiVersion: v1
kind: Service
metadata:
name: payment-service-nodeport # Different name to not conflict with our ClusterIP service
spec:
type: NodePort # Open a port on every node in the cluster
selector:
app: payment-api # Still routes to the same Pods
ports:
- protocol: TCP
port: 80 # Service's internal port (used within the cluster)
targetPort: 80 # Port on the Pod — same as before
nodePort: 30080 # The external port opened on every node (30000-32767)
# If you leave this out, Kubernetes picks a random port
# Apply the NodePort Service
kubectl apply -f payment-nodeport.yaml
# Get the Service — note the PORT(S) column now shows two ports
kubectl get svc payment-service-nodeport
# On Minikube — this opens the service in your browser automatically
minikube service payment-service-nodeport
# Or get the URL to visit manually
minikube service payment-service-nodeport --url
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
payment-service-nodeport   NodePort   10.96.201.44   <none>        80:30080/TCP   4s

http://127.0.0.1:30080
PORT(S): 80:30080/TCP — the Service listens on internal port 80. It also opens port 30080 on every node. Traffic to any-node-ip:30080 gets routed to this Service, which routes to the Pods.
On Minikube, minikube service <name> handles the port forwarding for you and opens a browser. On a real cluster you'd visit your node's external IP at port 30080.
NodePort works but has real limitations. You're exposing a raw port directly on your node's IP — not behind a managed load balancer. You'd have to tell users "visit 203.0.113.4:30080" which is ugly and fragile. Also, if that node goes down, traffic on that IP stops — there's no automatic failover between nodes unless you put something in front.
For local development and testing: NodePort is fine. For production on a cloud: use LoadBalancer (or better yet, an Ingress — which we cover in Lesson 33).
LoadBalancer — The Production Standard for Cloud
A LoadBalancer Service tells your cloud provider to provision a real load balancer and give it a public IP. Traffic from the internet hits that IP, gets load balanced by the cloud, and lands on your Pods.
The scenario: Your payment API needs to be accessible from the internet. You're running on AWS EKS or Google GKE. Here's the YAML — it's almost identical to ClusterIP, just with a different type.
apiVersion: v1
kind: Service
metadata:
name: payment-service-lb # Name of the LoadBalancer Service
spec:
type: LoadBalancer # Tells your cloud to provision a real load balancer
selector:
app: payment-api # Route to the same payment-api Pods
ports:
- protocol: TCP
port: 80 # Port the load balancer exposes to the internet
targetPort: 80 # Port on the Pod
# Apply — on a real cloud cluster this provisions an actual load balancer
kubectl apply -f payment-lb.yaml
# Watch for the external IP to appear — it takes a minute on cloud clusters
# On Minikube it will stay <pending> forever (no cloud load balancer available)
kubectl get svc payment-service-lb -w
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
payment-service-lb   LoadBalancer   10.96.188.22   <pending>     80:31204/TCP   5s
payment-service-lb   LoadBalancer   10.96.188.22   203.0.113.4   80:31204/TCP   45s
On a cloud cluster, Kubernetes talks to the cloud provider's API and says "I need a load balancer." The cloud provisions one, assigns it the public IP 203.0.113.4, and points it at your nodes. The EXTERNAL-IP column changes from <pending> to the real IP once the cloud is done provisioning (typically 30–60 seconds).
Once the EXTERNAL-IP appears, anyone on the internet can visit http://203.0.113.4 and reach your payment API Pods. On Minikube this will stay at <pending> because there's no cloud to provision a load balancer — use minikube tunnel as a workaround.
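A sketch of that workaround. minikube tunnel has to keep running in its own terminal, and it may prompt for your password because it creates a network route:

```shell
# Terminal 1 — creates a route so LoadBalancer Services get an external IP locally
minikube tunnel

# Terminal 2 — EXTERNAL-IP should now show an address instead of <pending>
kubectl get svc payment-service-lb
```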
Putting It All Together — A Real Architecture
Here's what a typical multi-service application looks like with Services wiring everything together. This is the pattern you'll build in practice in Lesson 15 — and it's the foundation of every real Kubernetes application.
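A minimal sketch of that wiring, with one public entry point and everything else internal. The names frontend-service, backend-service, and db-service are illustrative, not objects we created earlier:

```
internet
   │
   ▼
frontend-service (LoadBalancer, public IP)  ──▶  frontend Pods
                                                    │ calls http://backend-service
                                                    ▼
backend-service (ClusterIP, internal only)  ──▶  backend Pods
                                                    │ calls db-service:5432
                                                    ▼
db-service (ClusterIP, internal only)       ──▶  PostgreSQL Pod
```

Only the frontend Service is reachable from outside; a compromise or misconfiguration can't hit the backend or database directly, because their ClusterIPs simply don't exist outside the cluster.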
Your Service Command Cheat Sheet
| Command | What it does |
|---|---|
| kubectl get services | List all Services — shows type, ClusterIP, external IP, ports |
| kubectl get svc | Short form of the above |
| kubectl describe svc <name> | Full details — shows Endpoints (actual Pod IPs), selector, ports |
| kubectl apply -f service.yaml | Create or update a Service from a file |
| kubectl delete svc <name> | Delete a Service (does not delete the Pods it was pointing to) |
| kubectl get endpoints <name> | See the live list of Pod IPs behind this Service right now |
| kubectl port-forward svc/<name> 8080:80 | Forward a Service port to your local machine for testing |
| minikube service <name> | Open a NodePort or LoadBalancer Service in your browser (Minikube only) |
Which Service Type to Use — Quick Decision Guide
| Type | Accessible from | Use it when... |
|---|---|---|
| ClusterIP | Inside the cluster only | Internal service-to-service communication. Databases, caches, internal APIs. The default — use this most of the time. |
| NodePort | Anything that can reach a node IP | Local development, demos, or non-cloud clusters with no load balancer. Not for production on cloud. |
| LoadBalancer | The internet (public IP) | Public-facing services on cloud clusters. Each one creates a separate cloud load balancer — use Ingress (Lesson 33) to share one load balancer across many services. |
The problem with LoadBalancer Services is cost and complexity. Each LoadBalancer Service on AWS or GCP provisions a separate cloud load balancer — which costs money and has its own IP. If you have 10 microservices that all need to be publicly accessible, that's 10 load balancers.
Ingress (Lesson 33) lets you have one load balancer in front of everything, and routes traffic to different Services based on the URL path or hostname — like a reverse proxy. That's the production pattern most teams use. But you need to understand Services first, which is exactly why we're here.
Try these quick experiments before the questions:
1. Run kubectl get endpoints payment-service and note the Pod IPs. Then delete one Pod manually. Run the endpoints command again within 5 seconds, and watch the dead Pod's IP disappear and get replaced.
2. Run kubectl port-forward svc/payment-service 8080:80. Then open http://localhost:8080 in your browser. You'll see nginx. This is the fastest way to test internal services without exposing them.
3. From inside a test Pod, run for i in $(seq 1 10); do curl -s http://payment-service | grep title; done. Every request gets load balanced — in this case to the same nginx page, but in a real app you'd see different Pod names responding.

Practice Questions
Type from memory.
1. What is the default Service type in Kubernetes — the one that creates an internal-only IP that no outside traffic can reach?
2. In a Service definition, there are two port fields. The port field is what callers use. What is the field called that specifies which port on the actual Pod to forward traffic to?
3. A Service uses a __________ object to keep track of the actual IP addresses of the healthy Pods behind it — updated automatically as Pods come and go.
Knowledge Check
Pick the best answer.
1. You're setting up a PostgreSQL database in your cluster. Only your backend Pods need to talk to it — nothing external should ever reach it. Which Service type should you use?
2. You update your Deployment from v1 to v2. Old Pods are terminated and new Pods start up. What happens to the Service pointing to these Pods?
3. You have a ClusterIP Service that you want to test from your laptop without exposing it externally. What command lets you access it at localhost:8080?
Up Next · Lesson 12
Cluster Networking Basics
Now you know how Services work at the surface level. Lesson 12 pulls back the curtain on how networking actually works inside a Kubernetes cluster — Pod-to-Pod communication, CNI plugins, and why every Pod can reach every other Pod.