I have Filebeat running, configured with Kafka as output. Everything works fine, but when the connection to the Kafka server breaks, Filebeat stores the position of the last log line read in its registry, and when the connection is restored it does not ingest the old data, so the lines read during the outage are lost.
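For context, the registry (under data/registry/filebeat/ in 7.x) only records how far into each file Filebeat has read. An entry for the file above looks roughly like this (the offset and file identifiers are illustrative, not taken from my actual registry):

{"k":"filebeat::logs::native::2056-64768","v":{"source":"/home/test/files.log","offset":52344,"ttl":-1,"type":"log"}}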
Below is my Filebeat configuration:
filebeat.yml
filebeat.inputs:
- type: log
  close_inactive: 120m
  paths: [/home/test/files.log]
  json.keys_under_root: true
  enabled: true
  fields:
    topic: test

output.kafka:
  hosts: ["192.168.0.155:9092"]
  topic: "%{[fields.topic]}"
  required_acks: 1
  bulk_max_size: 100000
  broker_timeout: 84600
  timeout: 84600
  retry.max: 10000
  retry.backoff: 84600
  channel_buffer_size: 100000
The Filebeat version is 7.16.3.
Can anyone tell me how to avoid this data loss?
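Would enabling Filebeat's disk queue help buffer events while Kafka is unreachable? A minimal sketch of what I have in mind, assuming the 7.x queue.disk settings (the size and path below are illustrative, not values I have tested):

queue.disk:
  max_size: 10GB                   # upper bound on the on-disk event buffer (illustrative)
  path: "${path.data}/diskqueue"   # default queue location in 7.x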