
Beginner's Guide to setting up Grafana Stack

Setup Grafana Stack on a single node with Docker Compose

This post will guide you through setting up a single-node deployment of the Grafana Stack – including Grafana, Prometheus, Loki, and Tempo – using Docker Compose.

Component Setup

We’ll set up each component step by step, with annotated configuration sources.

Loki

Loki is a microservices-based, horizontally scalable, distributed system designed for multi-tenant log aggregation. It consists of multiple components that can run independently and in parallel.

Loki primarily stores two types of data: 1) the Index, a table of contents that helps locate logs based on specific labels, and 2) Chunks, containers that hold the log entries for a particular set of labels.

All data, including both index and chunks, is stored in a single object storage backend like Amazon S3, Google Cloud Storage, or Azure Blob Storage. This is referred to as Single Store, where the same storage backend is used for both the index as well as the chunks.
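To make this concrete, consider a hypothetical stream carrying the labels service_name="api" and env="dev" (placeholder labels). The index records which chunks belong to that label set, and a LogQL selector uses the index to find them:

```logql
# Index lookup: the label matchers identify the matching stream(s)
{service_name="api", env="dev"}

# The chunks of the matched stream are then fetched; the line filter
# runs over the log entries stored inside those chunks
{service_name="api", env="dev"} |= "error"
```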

For this article, we will run Loki in “monolithic mode” with all components running simultaneously in one process. Monolithic mode is useful for getting started quickly to experiment with Loki, as well as for small read/write volumes of up to approximately 20GB per day.

auth_enabled: false # for demonstration only!

server:
  http_listen_port: 3100

common:
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
  replication_factor: 1
  path_prefix: /tmp/loki

storage_config:
  # configures storing index in an object store in a prometheus TSDB-like format.
  tsdb_shipper:
    # Directory where ingesters would write index files which would then be
    # uploaded by shipper to configured storage
    active_index_directory: "/loki/index"

    # Cache location for restoring index files from storage for queries
    cache_location: "/loki/index_cache"
    cache_ttl: 24h # TTL for index files restored in cache for queries

  # configuring filesystem backend; for other supported stores see https://grafana.com/docs/loki/latest/configure/storage/#object-storage
  # for a real-world, production deployment, pick a reliable object store like s3, gcs, etc.
  filesystem:
    # directory to store chunks in
    directory: "/loki/chunks"

# Log retention in Loki is achieved through the Compactor (https://grafana.com/docs/loki/latest/get-started/components/#compactor)
compactor:
  retention_enabled: true
  delete_request_store: filesystem

limits_config:
  retention_period: 15d # retention period to apply to stored data

# configures the chunk index schema and where it is stored.
schema_config:
  # configures what index schemas should be used for from specific time periods.
  configs:
    - from: 2025-01-01          # since when this config should take effect; see: https://grafana.com/docs/loki/latest/configure/storage/#schema-config
      store: tsdb               # storage used for index
      object_store: filesystem  # storage used for chunks
      schema: v13
      index:
        prefix: index_
        period: 24h

Alloy

Grafana Alloy is an open-source OpenTelemetry Collector distribution that acts as an agent to collect, transform, and write logs to Loki (and other supported stores). It supersedes Grafana Agent and Promtail.

Alloy’s configuration is written in its own declarative configuration language (formerly known as River). The following configuration discovers local Docker containers, applies labels (such as service_name and project), and pushes the logs to our Loki instance.

Alloy can do much more than this simple forwarding. Make sure to check out its documentation to learn more.

// This configuration uses Grafana Alloy's configuration syntax
// to scrape Docker container logs and forward them to Loki.

// discover all containers running on the docker instance
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

// relabel rules to annotate each entry with service_name, project, instance_id etc.
discovery.relabel "add_metadata" {
  // targets is intentionally empty: this component is only referenced for its exported rules
  targets = []

  rule {
    source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
    target_label  = "service_name"
  }

  rule {
    source_labels = ["__meta_docker_container_label_com_docker_compose_project"]
    target_label  = "project"
  }

  rule {
    source_labels = ["__meta_docker_container_id"]
    target_label  = "instance_id"
  }
}

// define a loki.source.docker component to scrape logs from discovered docker containers.
loki.source.docker "docker_logs" {
  host = "unix:///var/run/docker.sock"
  targets = discovery.docker.containers.targets

  relabel_rules = discovery.relabel.add_metadata.rules

  forward_to = [loki.write.default.receiver] // send the scraped logs to the loki.write component defined below
}

// define a loki.write component to send the processed logs to the Loki instance.
loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push" // Loki push endpoint
  }
}
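As a taste of what else Alloy can do, here is a hedged sketch of a loki.process component that parses JSON log lines and promotes a level field to a label (the field name is an assumption; adjust it to your log format). To use it, point loki.source.docker's forward_to at loki.process.parse_json.receiver instead of writing to Loki directly:

```alloy
// hypothetical processing pipeline: parse JSON logs, extract a "level" label
loki.process "parse_json" {
  // parse each log line as JSON and pull out the "level" field
  stage.json {
    expressions = { level = "level" }
  }

  // promote the extracted field to a Loki label
  stage.labels {
    values = { level = "" }
  }

  // hand the processed entries to the existing loki.write component
  forward_to = [loki.write.default.receiver]
}
```

Be conservative with labels extracted this way: every distinct label value creates a new stream, and high-cardinality labels degrade Loki's performance.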

Prometheus

Unlike Loki, Prometheus is deployed as a single service. Scaling and high availability of Prometheus are achieved using additional tools like Thanos or Grafana Mimir, which are outside the scope of this article.

global:
  scrape_interval: 15s # How frequently to scrape targets.
  evaluation_interval: 15s # How frequently to evaluate rules.

scrape_configs:
  # Job 'prometheus' scrapes metrics from prometheus instance itself
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
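When you later add an application that exposes Prometheus metrics, scraping it is just another entry under scrape_configs. The job below is a hypothetical example; my-app and port 8080 are placeholders:

```yaml
scrape_configs:
  # hypothetical application exposing metrics at http://my-app:8080/metrics
  - job_name: 'my-app'
    metrics_path: /metrics   # the default, shown here for clarity
    static_configs:
      - targets: ['my-app:8080'] # reachable by service name on the compose network
```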

Tempo

Tempo can be deployed in monolithic or microservices modes. The only external dependency required is an object store, though Tempo also supports using the local filesystem as the backend for trace storage.

For this article, we’ll deploy a monolithic / single-binary configuration with filesystem storage. For production deployments, the general recommendation is to use an object store such as S3 or GCS.

server:
  http_listen_port: 3200

# Distributors receive spans and forward them to the appropriate ingesters.
# The following configuration enables OpenTelemetry receiver
distributor:
  receivers:
    otlp:
      protocols:
        grpc:
        http:

# The ingester is responsible for batching up traces and pushing them to TempoDB.
ingester:
  max_block_duration: 5m

# Compactors stream blocks from the storage backend, combine them and write them back.
compactor:
  compaction:
    compaction_window: 1h

# Storage block configures TempoDB. See https://grafana.com/docs/tempo/latest/configuration/#storage
storage:
  trace:
    backend: local
    local: # configuration block for local storage
      path: /tmp/tempo/blocks
    wal:   # configuration block for the Write Ahead Log (WAL)
      path: /tmp/tempo/wal
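With the OTLP receiver enabled, applications export traces to Tempo over gRPC (port 4317) or HTTP (port 4318). As a sketch, a hypothetical instrumented service added to our Docker Compose file would only need the standard OpenTelemetry environment variables (the service name and image are placeholders):

```yaml
  # hypothetical instrumented application (not part of this stack)
  my-app:
    image: my-org/my-app:latest   # placeholder image
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://tempo:4317 # Tempo's OTLP gRPC receiver
      - OTEL_SERVICE_NAME=my-app
    networks:
      - grafana-net
```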

Docker Compose

With this, we have all our components configured 🚀 Next, we’ll use Docker Compose to create a deployment for our stack.

💡
Make sure the configuration files are placed in the correct location, under scripts/, for the Docker Compose setup to work.
version: '3.8'

networks:
  grafana-net:
    driver: bridge

volumes:
  grafana_data:
  prometheus_data:
  loki_data:
  tempo_data:

services:
  # Grafana dashboard service accessible over port 3030
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3030:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - grafana-net
    depends_on:
      - prometheus
      - loki
      - tempo
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin # Allows anonymous users to have Admin role
    restart: unless-stopped

  # Prometheus monitoring service
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./scripts/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    networks:
      - grafana-net
    restart: unless-stopped

  # Loki Service (Log Aggregation)
  loki:
    image: grafana/loki:3.5
    container_name: loki
    volumes:
      - ./scripts/loki.yml:/etc/loki/config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/config.yaml
    networks:
      - grafana-net
    restart: unless-stopped

  # Tempo Service (Distributed Tracing)
  tempo:
    image: grafana/tempo:2.7.0
    container_name: tempo
    user: root # to temporarily fix permission issues; do not use on prod
    volumes:
      - ./scripts/tempo.yml:/etc/tempo/config.yaml
      - tempo_data:/tmp/tempo
    command: -config.file=/etc/tempo/config.yaml
    networks:
      - grafana-net
    restart: unless-stopped

  # Grafana Alloy Service (Agent for Logs and Metrics)
  alloy:
      image: grafana/alloy:v1.8.3
      container_name: alloy
      volumes:
        - ./scripts/config.alloy:/etc/config.alloy
        - /var/lib/docker/containers:/var/lib/docker/containers:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro
      command: run /etc/config.alloy --server.http.listen-addr=0.0.0.0:12345
      networks:
        - grafana-net
      depends_on:
        - loki
      restart: unless-stopped

Next, run docker-compose up -d to start all the services, then open http://localhost:3030 in your browser.
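Once the stack is up and Alloy is shipping logs, you can sanity-check the pipeline from Grafana's Explore view with LogQL queries against the labels our relabel rules added (the project value is the Docker Compose project name, typically your directory name; grafana-stack below is a placeholder):

```logql
# all logs from containers in this compose project
{project="grafana-stack"}

# only Grafana's own logs, filtered for errors
{service_name="grafana"} |= "error"
```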

Configuring Grafana

After you’ve set up all the services, head over to Grafana and add data sources for Loki, Prometheus, and Tempo.

Adding a new data source

To add a new data source, follow the steps below:

  • Go to the data sources page

  • Pick the connection type. Here, we’ll pick Loki. Enter the URL of the Loki instance (http://loki:3100, since Grafana reaches Loki over the compose network).

  • Click on “Save & Test”. It’ll create a new data source.

Repeat the same steps to add data sources for Prometheus and Tempo as well.
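As an alternative to clicking through the UI, Grafana can provision data sources from a file. A minimal sketch, using the service names and ports from our compose setup (mount it into the Grafana container at /etc/grafana/provisioning/datasources/, for example by adding ./scripts/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml to the grafana service's volumes):

```yaml
# datasources.yml: provisioned Grafana data sources
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
```

Provisioned data sources are created on startup, which makes the setup reproducible when the grafana_data volume is wiped.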


In the next article, we’ll deploy a sample application that generates logs, metrics, and traces, and use Grafana to set up an observability dashboard 🚀

Observability With Grafana Stack

Part 2 of 3

In this series, I introduce you to Grafana's stack and show how it could supercharge ⚡️ your organization's observability stack with open-source technologies 🚀
