Liveness vs Readiness Probes — Know the Difference
Using the wrong probe type causes either endless restarts or traffic sent to broken pods.
What
Kubernetes has three health check types: Liveness probes detect when a pod is stuck and needs a restart. Readiness probes detect when a pod can't serve traffic yet (e.g., still loading data). Startup probes give slow-starting apps extra time before liveness checks kick in.
Why It Matters
Without proper probes, Kubernetes can't distinguish between a pod that's temporarily busy and one that's genuinely broken. You'll either send traffic to unhealthy pods (users see errors) or restart pods that just need a moment (cascading failures).
Example
# Liveness: restart if pod is stuck
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10

# Readiness: stop sending traffic if not ready
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

# Startup: give slow apps time to start
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10

Common Mistake
Using the same endpoint for both liveness and readiness probes, or making the liveness probe check dependencies like databases. If the database is down, Kubernetes restarts your pod — which won't fix the database and just causes a restart loop.
Quick Fix
Liveness should check if YOUR app process is alive (simple /healthz). Readiness should check if the app can actually serve requests (dependencies loaded, caches warm). Keep them separate.
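To make the split concrete, here is a minimal Go sketch of the two endpoints the probes above could point at. The /healthz and /ready paths match the Example; the db handle and its PingContext check are illustrative stand-ins for whatever dependencies your app actually has, not a prescribed implementation.

package main

import (
	"context"
	"database/sql"
	"log"
	"net/http"
	"time"
)

// db stands in for a real dependency; a real app would open it with
// sql.Open and a driver. It is deliberately left nil in this sketch.
var db *sql.DB

func main() {
	// Liveness endpoint: returns 200 as long as the process can serve HTTP.
	// It checks nothing external, so a broken database never becomes a
	// restart loop.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness endpoint: returns 200 only when dependencies answer, so an
	// unready pod is removed from the Service endpoints instead of restarted.
	http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		if db == nil || db.PingContext(ctx) != nil {
			http.Error(w, "dependencies not ready", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

Only the readiness handler ever returns 503, so a database outage drains traffic away from the pod without triggering a restart.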
Key Takeaways
- Liveness: is the pod stuck? → restart it
- Readiness: can it serve traffic? → remove from load balancer
- Startup: give slow apps time to boot
- Liveness = check YOUR process only
- Readiness = check dependencies too