Setting up Loki/Grafana in Docker to analyze Nginx logs¶
To be honest, it's a little confusing to get started, because there are lots of instructions out there and plenty of outdated docker compose files floating around. Let's follow the official docs at https://grafana.com/docs/loki/latest/setup/install/docker/#install-with-docker-compose.
The docker compose file from the docs as of today is here: https://raw.githubusercontent.com/grafana/loki/v3.4.1/production/docker-compose.yaml. Note that the link is likely subject to change with new versions.
Docker Compose¶
In any case, let's examine the file and make a few practical adjustments.
```yaml
networks:
  loki:

services:
  loki:
    image: grafana/loki:latest
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/loki-config.yaml:ro # Custom config file (1)
      - ./loki:/loki # Persist Loki index and chunks
    command: -config.file=/etc/loki/loki-config.yaml # Must match the mount path above
    networks:
      - loki

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    volumes:
      - /home/thomas/docker/nginx/data/nginx/logs:/var/log/nginx:ro # Mount Nginx logs so they can be tailed
      - ./promtail-config.yaml:/etc/promtail/config.yaml:ro
    command: -config.file=/etc/promtail/config.yaml # Must match the mount path above
    networks:
      - loki

  grafana:
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_FEATURE_TOGGLES_ENABLE=alertingSimplifiedRouting,alertingQueryAndExpressionsStepMode
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
          - name: Loki
            type: loki
            access: proxy
            orgId: 1
            url: http://loki:3100
            basicAuth: false
            isDefault: true
            version: 1
            editable: false
        EOF
        /run.sh
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ./grafana-data:/var/lib/grafana # Persist dashboards, settings, and plugins
      - ./grafana-config:/etc/grafana # Optional: persist config files
    networks:
      - loki
```
1. Anything mounted here may run into permission issues, because the Loki container runs as user 10001. You may want to set ownership of the host directory accordingly, or allow read access to "anyone".
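For example, the data directory can be prepared before the first `docker compose up`. Paths follow the compose file above; the chown falls back to a hint if you aren't root:

```shell
# Create the bind-mount directory up front so Docker doesn't create it root-owned.
mkdir -p ./loki
# Loki runs as UID/GID 10001 inside the container; hand it the data directory.
chown -R 10001:10001 ./loki 2>/dev/null || echo "re-run the chown as root or with sudo"
# Alternatively, grant everyone read access (and traversal on directories):
# chmod -R a+rX ./loki
```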
Loki Config¶
For the loki-config.yaml file, we'll use the official Docker config example from https://github.com/grafana/loki/blob/main/cmd/loki/loki-docker-config.yaml. As of today, that looks like the following. Note that I uncommented the analytics section to disable reporting, because I don't really like things phoning home in general.
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  instance_addr: 127.0.0.1
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
analytics:
  reporting_enabled: false
```
This is quite similar to the local configuration example in the docs at https://grafana.com/docs/loki/latest/configure/examples/configuration-examples/#configuration, but I like that the GitHub folder explicitly labels this one as their "docker" example.
Promtail Config¶
Our promtail config will define the jobs for scraping log files, nginx and otherwise.
```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx_json_logs
    pipeline_stages:
      - json:
          expressions:
            remote_addr: remote_addr
            remote_user: remote_user
            time_local: time_local
            time_iso8601: time_iso8601
            request: request
            request_uri: request_uri
            body_bytes_sent: body_bytes_sent
            http_referer: http_referer
            http_user_agent: http_user_agent
            http_x_forwarded_for: http_x_forwarded_for
            request_method: request_method
            status: status
            request_time: request_time
            server_name: server_name
            ssl_protocol: ssl_protocol
            ssl_cipher: ssl_cipher
            geoip2_data_country_code: geoip2_data_country_code
            geoip2_data_country_name: geoip2_data_country_name
            geoip2_data_city_name: geoip2_data_city_name
            geoip2_data_region_name: geoip2_data_region_name
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          log_format: json
          __path__: /var/log/nginx/json_access.log

  # - job_name: nginx
  #   static_configs:
  #     - targets:
  #         - localhost
  #       labels:
  #         job: nginx_logs
  #         __path__: /var/log/nginx/access*.log
```
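The json stage above assumes Nginx is already writing its access log as JSON with matching keys. For reference, a `log_format` along these lines would produce them. This is a sketch: the format name `json_combined` is arbitrary, and the `geoip2_data_*` variables require the ngx_http_geoip2_module with a corresponding `geoip2` block, so drop those lines if you don't use GeoIP.

```nginx
log_format json_combined escape=json
  '{'
    '"time_local":"$time_local",'
    '"time_iso8601":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"request":"$request",'
    '"request_uri":"$request_uri",'
    '"request_method":"$request_method",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"server_name":"$server_name",'
    '"ssl_protocol":"$ssl_protocol",'
    '"ssl_cipher":"$ssl_cipher"'
  '}';

access_log /var/log/nginx/json_access.log json_combined;
```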
Querying Logs¶
Within the Grafana web app running on port 3000, we can play around with our log files via the Explore menu option (Loki should already be listed under Data sources). Choose Loki as the data source.
For the label filters, we can select job = "nginx". This is the label we gave the job. If you run the query, you should see all the labels. Now you can add a json expression, which extracts keys and values according to the expressions we established in the Promtail config. Then, for step 3, you can add a label filter expression like server_name = immich.mydomain.com, which filters the logs for that particular server name. So the query will end up looking something like the following.
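Putting those steps together, and with immich.mydomain.com standing in for a real server name, the resulting LogQL comes out roughly as:

```logql
{job="nginx"} | json | server_name="immich.mydomain.com"
```

Since the json parser exposes every field extracted by the pipeline stage, any of the other keys (status, request_method, geoip2_data_country_code, and so on) can be filtered the same way.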