kafka-acls.sh Command Fails with Operations Containing Underscores in Kafka 2.7.0
I’m experiencing issues when setting ACLs for my users in Kafka 2.7.0. Specifically, the kafka-acls.sh command fails when using operations that contain underscores. However, similar commands for operations without underscores seem to work correctly.
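For illustration, the commands involved look roughly like this (the broker address, principal, and topic are placeholders; DESCRIBE_CONFIGS stands in for any of the underscore operations, and the camel-case spelling DescribeConfigs may be what the parser expects instead):

    # Works: operation without an underscore
    kafka-acls.sh --bootstrap-server localhost:9092 --add \
      --allow-principal User:alice --operation Read --topic my-topic

    # Fails for me: operation containing an underscore
    kafka-acls.sh --bootstrap-server localhost:9092 --add \
      --allow-principal User:alice --operation DESCRIBE_CONFIGS --cluster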
Transaction Management Across Microservices with Rollback Mechanism using Kafka
I am currently developing a microservices architecture consisting of two services, and I’m facing a challenge related to transaction management and error handling.
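What I have sketched so far is a choreography-style saga: service A publishes an event, service B either processes it or publishes a failure event, and service A compensates (rolls back its local change) when it sees the failure. Below is a minimal sketch of that idea; the topics order-events and order-failures, the group id, and the broker address are all placeholders, not an established API:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderSagaSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
            props.put("group.id", "order-saga");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
                 KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {

                // Step 1: service A commits its local change, then publishes the event for service B.
                producer.send(new ProducerRecord<>("order-events", "order-42", "CREATED"));

                // Step 2: service A watches for failure events from service B and compensates.
                consumer.subscribe(List.of("order-failures"));
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        compensate(rec.key());  // undo the local change made in step 1
                        producer.send(new ProducerRecord<>("order-events", rec.key(), "CANCELLED"));
                    }
                }
            }
        }

        static void compensate(String orderId) {
            // Placeholder for the local rollback, e.g. mark the order as cancelled in the database.
            System.out.println("Rolling back " + orderId);
        }
    }

The alternative I am aware of is an orchestration-style saga, where a dedicated coordinator tells each service to commit or compensate, but I am unsure which fits better for only two services.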
How do I clean up “zombie” consumer groups in Kafka after an accidental __consumer_offsets partition increase?
I have accidentally increased the number of partitions of the __consumer_offsets topic in Kafka (the cluster was on version 2.4 and is now on 3.6.1).
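For reference, this is roughly how I have been inspecting and trying to remove the leftover groups (the broker address and group name are placeholders); as far as I know, kafka-consumer-groups.sh only deletes a group that has no active members:

    # list all consumer groups known to the cluster
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

    # inspect one of the suspected zombie groups
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-old-group

    # delete it (only succeeds if the group has no active members)
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-old-group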
Kafka Producer throws an error after a while / Consumer disconnecting from node upon start-up
Consumer Behaviour
Message delivery guarantee in Kafka
My question is really about my misunderstanding of Kafka's delivery guarantees. I have looked everywhere for information, and the sources differ. As I understand it, a delivery guarantee in Kafka describes how messages are delivered to the target (the broker or the consumer).

One article said that before an update in 2017, Kafka supported at-least-once and at-most-once delivery. Is that guarantee on the producer side or on the consumer side, and which parameter controls it? My assumption is that it is the acks parameter, which determines whether the producer waits for confirmation or not.

The same article said that transactions were introduced in Kafka in 2017 and that they make exactly-once possible. Another article said that transactions work only at the topic level and have nothing to do with exactly-once: they let messages be written to two topics within one transaction, and consumers can only read those messages after the transaction is committed. I have also heard about idempotence on the producer side: Kafka can deduplicate by storing a message ID, which prevents duplicates when a message is re-sent after a producer or Kafka failure.

So how can we achieve full exactly-once, with a guarantee that the whole pipeline is covered: sending from the producer to Kafka, and reading and processing on the consumer?
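To make the pieces I have read about concrete, here is a minimal sketch of how I understand they fit together: acks and enable.idempotence for producer-side deduplication, transactional.id plus the transaction API for atomic writes across topics, and isolation.level=read_committed on the consumer so only committed data is visible. The broker address, topic names, and transactional id are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ExactlyOnceSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");  // placeholder broker
            p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            p.put("acks", "all");                  // wait for all in-sync replicas to acknowledge
            p.put("enable.idempotence", "true");   // broker deduplicates records re-sent by producer retries
            p.put("transactional.id", "my-tx-id"); // placeholder; required to use the transaction API

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.initTransactions();
                producer.beginTransaction();
                try {
                    producer.send(new ProducerRecord<>("topic-a", "key", "value-1"));
                    producer.send(new ProducerRecord<>("topic-b", "key", "value-2"));
                    producer.commitTransaction();  // both records become visible atomically
                } catch (RuntimeException e) {
                    producer.abortTransaction();   // neither record is exposed to read_committed consumers
                    throw e;
                }
            }
            // Consumer side: set isolation.level=read_committed so poll() only
            // returns records from committed transactions.
        }
    }

As far as I understand, a full consume-process-produce pipeline also commits the consumer's offsets inside the same transaction via sendOffsetsToTransaction, which is essentially what Kafka Streams' exactly_once guarantee does under the hood.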
Kafka starves a partition when another partition has a lot of lag
I have topic1 with 3 partitions and 150k messages of lag and topic2 with 3 partitions and 1 message of lag.
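In case it is relevant, these are the consumer settings I understand to control how much data a single poll pulls per partition versus in total; the values are examples only, and whether tightening them actually prevents the starvation is exactly what I am unsure about:

    # cap the number of records returned by each poll() (default 500)
    max.poll.records=100
    # cap the bytes fetched per partition per request (default 1048576)
    max.partition.fetch.bytes=262144
    # cap the total bytes per fetch response (default 52428800)
    fetch.max.bytes=5242880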
Duplicated Kafka messages in output of Embulk collection
We are using Embulk v0.10.12.
We are collecting files using sftp input and pushing them to Kafka using embulk-output-kafka.
From time to time we see duplicated messages in the output Kafka topic, even though the Embulk logs show that each file is processed only once and the Embulk Kafka producer pushes each message only once.
What could be the reason for this duplication?
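One explanation we are considering is the producer retry path: if a write succeeds on the broker but the acknowledgement is lost, the producer's internal retry re-sends the record and it lands twice, even though the application-level log shows a single send. As far as I know, the Kafka producer settings below guard against that; whether and how embulk-output-kafka lets us pass them through is something we still need to verify against the plugin's configuration:

    # broker deduplicates records re-sent by the producer's internal retries
    enable.idempotence=true
    # required for idempotence: wait for all in-sync replicas
    acks=all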