I have a system which communicates with a Kafka cluster. This morning, I noticed that one of the processes which talks to Kafka had crashed. It had crashed over a week ago, and I had not noticed.
When I fixed the bug and tried to restart this process, I realized that Kafka had lost the committed offsets for the consumer group which this process was a part of.
Because this was the only process in the group, the consumer group became empty when it crashed.
I suspected that the committed offsets, which are recorded in Kafka’s internal offsets topic (__consumer_offsets), had probably been expired and removed by Kafka.
I found the following information:
offsets.retention.minutes
For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group’s committed offsets for that topic will also be deleted without extra retention period.
https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes
The documentation page states that valid values are integers from 1, and that the default is 10080, which would be 1 week.
This strongly suggests to me that the consumer group offsets were lost because this particular consumer group remained empty for more than a week.
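For what it’s worth, the effective value on a broker can be checked with the kafka-configs.sh tool that ships with Kafka (the --all flag needs Kafka 2.5+, and the broker id 0 below is just a placeholder):

    kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type brokers --entity-name 0 \
      --describe --all | grep offsets.retention.minutes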
However, there doesn’t appear to be a way to make this data persistent.
For example, -1 is not a valid value, at least not according to this documentation.
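The best I can apparently do is raise the limit to something effectively infinite. Assuming I am reading the docs correctly, that would look like this in each broker’s server.properties (it is a static broker setting, so as far as I know it requires a broker restart):

    # server.properties
    # ~100 years, as a practical stand-in for "never expire"
    offsets.retention.minutes=52560000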
For regular topics, I currently have the maximum retention in bytes (retention.bytes) set either to -1 (infinite) or to some very large value such as 100 GB; in the latter case, I do not care about losing data once more than 100 GB has been retained for that particular topic. I have also set the time-based retention to -1 for all topics (log.retention.hours at the broker level), which again means “infinite”.
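For reference, this is roughly how those per-topic settings look when applied with kafka-configs.sh (my-topic is a placeholder; note that at the topic level the time-based setting is retention.ms rather than an hours-based key):

    kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type topics --entity-name my-topic \
      --alter --add-config retention.ms=-1,retention.bytes=107374182400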
I had assumed that these values would apply to the internal offsets topic as well; however, this appears not to be the case.
Is there a way to make Kafka operate as a persistent data store? This has to include the consumer offset data: without it, a process cannot always resume where it left off, because there is a possibility that the offsets will expire and be garbage collected.
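For completeness, the fallback I am considering if there is no broker-side answer is to stop relying on Kafka for offset storage altogether and manage offsets externally, using manual partition assignment. Below is a minimal sketch of the idea; the topic name, the single partition, and the file-based offset store are all placeholder assumptions for illustration:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class ExternalOffsetConsumer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // No group.id: with manual assignment we never commit to Kafka, so
            // __consumer_offsets (and offsets.retention.minutes) no longer matter.
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            Path offsetFile = Path.of("my-topic.offset");          // placeholder offset store
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(List.of(tp)); // manual assignment, no consumer group

                if (Files.exists(offsetFile)) {
                    // Resume from the externally stored offset.
                    long next = Long.parseLong(Files.readString(offsetFile).trim());
                    consumer.seek(tp, next);
                } else {
                    consumer.seekToBeginning(List.of(tp));
                }

                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record);
                        // Persist the next offset to read. Ideally this write is
                        // atomic with the side effects of process().
                        Files.writeString(offsetFile, Long.toString(record.offset() + 1));
                    }
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }

This sidesteps the expiry problem entirely, at the cost of having to keep the offset write consistent with whatever side effects processing has; but I would much prefer a broker-side setting if one exists.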