We recently upgraded our Loki, stepwise, from 2.6.2 to 2.9.8. Our deployment is "simple scalable mode", running as container instances in Azure with an Azure storage account blob container as storage.
With 2.6.2 we had 1x read and 3x write. Now with 2.9.8 we have 2x read, 1x backend and 3x write.
We also switched the index from boltdb-shipper to TSDB.
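For context, here is roughly how the instances are split by Loki target. This is only an illustrative, compose-style sketch; the actual instances run as Azure Container Instances and the service names below are placeholders:

  # All instances run the same image and the config below; only -target differs.
  loki-read:      # 2 instances
    command: ["-config.file=/etc/loki/config.yaml", "-target=read"]
  loki-write:     # 3 instances
    command: ["-config.file=/etc/loki/config.yaml", "-target=write"]
  loki-backend:   # 1 instance
    command: ["-config.file=/etc/loki/config.yaml", "-target=backend"]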
The problem is that when we query from Grafana Explore, we cannot see new logs until after approximately 10-30 minutes.
I suspect it is because queries do not return results from logs still held in memory by the ingesters before they are persisted to the blob container.
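For what it is worth, these are the querier settings that I understand control whether the read path also asks the ingesters for recent, not-yet-flushed data. We do not set either of them, so the defaults should apply; the values below are my reading of the 2.9 documentation, not something taken from our config:

  querier:
    # Queriers should also query ingesters for data newer than now minus this
    # value, so recent logs ought to be visible before chunks reach the blob
    # container. Default is 3h according to the docs.
    query_ingesters_within: 3h
    # If this were true, queriers would skip the ingesters and only read from
    # object storage. Default is false.
    query_store_only: false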
Here is our full config:
auth_enabled: false

server:
  http_listen_port: 3100
  http_server_read_timeout: 130s
  http_server_write_timeout: 130s
  grpc_listen_port: 9096
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600

common:
  path_prefix: /tmp/loki
  storage:
    azure:
      account_name: company9wnyv4rst
      container_name: loki-production
      use_managed_identity: true
      request_timeout: 0
  replication_factor: 3
  ring:
    kvstore:
      store: memberlist
  compactor_grpc_address: cx-mon-backend-loki-backend-1-ci.company.net:9096

memberlist:
  advertise_port: 7946
  bind_port: 7946
  join_members:
    - cx-mon-backend-loki-read-1-ci.company.net:7946
    - cx-mon-backend-loki-read-2-ci.company.net:7946
    - cx-mon-backend-loki-write-1-ci.company.net:7946
    - cx-mon-backend-loki-write-2-ci.company.net:7946
    - cx-mon-backend-loki-write-3-ci.company.net:7946

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: index_
        period: 24h
    - from: 2023-05-01
      store: boltdb-shipper
      object_store: azure
      schema: v12
      index:
        prefix: index2_
        period: 24h
    - from: 2024-05-21
      store: tsdb
      object_store: azure
      schema: v13
      index:
        prefix: index3_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /tmp/loki/tsdb-shipper-active
    cache_location: /tmp/loki/cache
    cache_ttl: 24h
    shared_store: azure

compactor:
  working_directory: /mnt/data/compactor
  shared_store: azure
  compaction_interval: 5m
  retention_enabled: false
  delete_request_store: azure

querier:
  max_concurrent: 6

ingester_client:
  grpc_client_config:
    max_recv_msg_size: 104857600

ingester:
  chunk_encoding: snappy

analytics:
  reporting_enabled: false

limits_config:
  query_timeout: 2m
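We only set chunk_encoding on the ingesters, so the flush timing should be at its defaults. As far as I can tell from the 2.9 documentation (so treat these values as my assumption, not something from our config), chunks are only written to the blob container after roughly this long:

  ingester:
    chunk_idle_period: 30m   # a chunk is flushed after 30m without new data
    max_chunk_age: 2h        # a chunk is flushed at the latest after 2h
    flush_check_period: 30s  # how often the flush loop looks for chunks to flush

That 30 minute idle period lines up with the 10-30 minute delay we see, which is why I suspect the read path is only looking at what has already been persisted to the blob container.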
What could the problem be? And why did we not have it before the upgrade?
BTW: I also posted this topic on the Grafana community forum.
/Thanks