Learn why structured JSON logs are essential for observability, how to include trace context for correlation, and how to configure logging levels, propagate context, and use the OpenTelemetry log bridge API.
Compare plain text logs with structured JSON logs and understand why structured logging is non-negotiable for production observability.
cat <<'EOF'
--- PLAIN TEXT LOG (bad) ---
2024-03-15 10:30:00 ERROR Failed to process order 12345 for user john@example.com
--- STRUCTURED JSON LOG (good) ---
{"timestamp":"2024-03-15T10:30:00Z","level":"error","message":"Failed to process order","order_id":12345,"user_id":"john@example.com","service":"order-api","trace_id":"a1b2c3d4e5f6","span_id":"f6e5d4c3b2a1","error":"payment_declined"}
--- WHY STRUCTURED WINS ---
1. Machine-parseable: filter by any field
2. Consistent schema: no regex needed
3. Correlation: trace_id links logs to traces
4. Aggregation: count errors by service, user, type
EOF

Plain text logs require fragile regex parsing and make it nearly impossible to filter or aggregate at scale. Structured JSON logs have explicit fields that log aggregation systems like Loki can index and query directly. Every field becomes a dimension you can filter, group, and alert on.
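The structured log line above can be produced with nothing but the standard library. The sketch below is illustrative, not a reference implementation: the `JsonFormatter` class, the hard-coded `service` name, and the `fields` key used to pass structured data via `extra` are all assumptions of this example, and in a real service the trace and span IDs would come from your tracing SDK rather than literals.

```python
import json
import logging

# Minimal sketch of a JSON log formatter (stdlib only, no third-party deps).
# Field names mirror the example log line above.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            "service": "order-api",  # assumed service name for this sketch
        }
        # Merge structured fields passed through the `extra` argument.
        entry.update(getattr(record, "fields", {}))
        return json.dumps(entry)

logger = logging.getLogger("order-api")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

# Every key in `fields` becomes a queryable dimension in the log backend.
logger.error(
    "Failed to process order",
    extra={"fields": {
        "order_id": 12345,
        "trace_id": "a1b2c3d4e5f6",  # in practice, taken from the active span
        "error": "payment_declined",
    }},
)
```

In production you would typically get this behavior from your logging or OpenTelemetry SDK rather than a hand-rolled formatter, but the shape of the output is the same: one JSON object per line, with trace context carried as ordinary fields.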
You see the contrast between a plain text log line and a structured JSON log line, followed by four reasons structured logging wins.
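Two of those reasons, machine-parseability and aggregation, can be demonstrated in a few lines. The sample log lines below are hypothetical; the point is that filtering and counting need only `json.loads` and a dictionary lookup, no regex.

```python
import json
from collections import Counter

# Hypothetical stream of JSON log lines, one object per line.
lines = [
    '{"level":"error","service":"order-api","error":"payment_declined"}',
    '{"level":"error","service":"order-api","error":"payment_declined"}',
    '{"level":"error","service":"cart-api","error":"timeout"}',
    '{"level":"info","service":"order-api","message":"order shipped"}',
]

entries = [json.loads(line) for line in lines]

# Filter by any field (reason 1), then aggregate by error type (reason 4).
by_error = Counter(e["error"] for e in entries if e.get("level") == "error")
print(by_error)  # Counter({'payment_declined': 2, 'timeout': 1})
```

A log backend like Loki runs the same kind of field-level filter and count at query time, across every service, without you writing parsing code at all.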