Deploy Grafana Loki for centralized log aggregation, configure the OTel Collector to export logs to Loki, learn LogQL query language, and build log exploration dashboards in Grafana.
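The "export logs to Loki" step in these objectives can be sketched as a minimal Collector config. This is a hedged sketch: the hostname `loki:3100`, the config filename, and the use of Loki's native OTLP ingest path (available in Loki 3.x) are assumptions for this lab, not the only way to wire it up.

```shell
# Sketch only: write a minimal OTel Collector config that ships logs to Loki.
# Hostname (loki:3100) and filename are placeholders for this lab's setup.
cat <<'EOF' > otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlphttp/loki:
    # Loki 3.x exposes a native OTLP ingest endpoint under /otlp
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/loki]
EOF
```

With this in place, anything instrumented with an OTLP log exporter can point at the Collector on port 4317 and have its logs batched and forwarded to Loki.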
Understand how Grafana Loki works and why it takes a fundamentally different approach from Elasticsearch. Loki indexes labels, not log content — making it far cheaper to run.
cat <<'EOF'
=== LOKI vs ELASTICSEARCH ===
Elasticsearch (ELK):
- Full-text indexes EVERY word in every log line
- Fast arbitrary search, but very expensive to run
- Needs significant RAM and disk for indexing
Grafana Loki:
- Indexes ONLY labels (service, level, namespace)
- Log content stored as compressed chunks (like grep)
- 10-100x cheaper to operate at scale
- Designed for the Grafana ecosystem
=== LOKI COMPONENTS ===
Promtail / OTel Collector / Fluentd
| (push logs via HTTP)
v
DISTRIBUTOR → routes to correct ingester
|
v
INGESTER → batches + compresses chunks
|
v
STORAGE → object store (S3/GCS/filesystem)
QUERIER → reads chunks + applies LogQL filter
EOF

Loki's key insight is that most log queries start with a label selector (show me logs from service=order-api, level=error) and then filter content. By indexing only labels, Loki avoids the massive overhead of full-text indexing. Log content is compressed and stored cheaply. When you query, Loki loads only the relevant chunks and greps through them. This makes Loki 10-100x cheaper than Elasticsearch for most logging use cases.
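The "select chunks by label, then grep the content" model can be illustrated with plain files. This is a toy sketch, not Loki's real storage format: the file names stand in for the label index and are invented for illustration.

```shell
# Toy model of Loki's query path: the "index" maps label sets to chunk files;
# a query picks chunks by label, then greps only the matching content.
mkdir -p chunks
printf 'order created id=41\norder failed id=42 err=timeout\n' > chunks/order-api-error.log
printf 'payment ok id=7\n' > chunks/payment-api-info.log

# "Label selector" {service="order-api", level="error"} -> one chunk file;
# the line filter |= "timeout" is then just a grep over that small chunk.
grep 'timeout' chunks/order-api-error.log
```

Because the selector narrowed the search to a single chunk, the grep touches only a fraction of the stored logs; that is the whole cost advantage over indexing every word up front.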
You see a comparison of Loki vs Elasticsearch approaches, followed by the Loki component architecture showing distributors, ingesters, storage, and queriers.
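Before building dashboards, it helps to see what LogQL queries look like. Below is a reference block in the same style as the ones above; the label names (service, level) and values are examples assumed for this lab, not required conventions.

```shell
cat <<'EOF'
=== LOGQL BASICS (examples) ===
{service="order-api"}                               # all logs with this label
{service="order-api"} |= "timeout"                  # line filter (substring)
{service="order-api", level="error"} |~ "err.*42"   # line filter (regex)
{service="order-api"} | json | status >= 500        # parse JSON, filter a field
sum(rate({service="order-api"} |= "error" [5m]))    # metric query built from logs
EOF
```

Every query starts with a label selector in braces; everything after the pipes runs over the chunk contents. The last form turns a log stream into a time series you can graph or alert on in Grafana.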