Can’t migrate from Zookeeper to KRaft
I’m stuck on this situation. First I tried to migrate the data from the existing ZooKeeper-based cluster to a new KRaft cluster with MirrorMaker 2, but I couldn’t get it to work; I posted that as a separate question. Then I tried to follow the migration steps described in Confluent’s docs, but I couldn’t get that working either.
Zookeeper to KRaft post-migration question: is __cluster_metadata actually a topic?
We recently started migrating our test clusters from ZooKeeper to KRaft, all running Confluent Kafka on-prem. In KRaft, the cluster’s metadata is stored in the __cluster_metadata topic (see: https://developer.confluent.io/courses/architecture/control-plane/#kraft-cluster-metadata). What I noticed after the first cluster was migrated is that I cannot describe this topic with the kafka-topics command; it simply does not exist.
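For reference, here is a minimal sketch of the same check done through the Java AdminClient instead of the CLI. The bootstrap address is a placeholder, and the expectation that the call fails (on the migrated cluster it surfaces as an unknown-topic error, matching the CLI) is my own observation, not something from the docs.

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class DescribeClusterMetadata {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

            try (Admin admin = Admin.create(props)) {
                try {
                    // Effectively the same request kafka-topics --describe makes for a named topic.
                    System.out.println(admin
                            .describeTopics(Collections.singleton("__cluster_metadata"))
                            .allTopicNames()
                            .get());
                } catch (ExecutionException e) {
                    // This is where the "topic does not exist" behaviour shows up
                    // (an UnknownTopicOrPartitionException as the cause).
                    System.out.println("describeTopics failed: " + e.getCause());
                }
            }
        }
    }

As far as I can tell, the metadata log exists on disk on the controllers and brokers but is managed by the controller quorum rather than registered as a regular topic, which would explain why the topic APIs don’t see it.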
How to replicate duplicate message delivery for a Kafka producer configured with at-least-once semantics
I have a Kafka producer configured with at-least-once delivery semantics. The challenge is: how do I actually implement a scenario that forces Kafka to retry and publish duplicate messages? Below are some of the ideas that came to mind; any better ideas?
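One way I can think of to provoke retries (and therefore possible duplicates) under at-least-once settings is to keep idempotence off and make the request timeout aggressively short, so a slow broker acknowledgement is treated as a failure and the batch is resent even though the broker may already have written it. This is only a sketch under those assumptions; the broker address, topic name, and timeout values are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AtLeastOnceDuplicateDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // At-least-once: wait for acks and retry, but do NOT deduplicate on the broker side.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");

            // Deliberately unrealistic timeout so acknowledgements arrive "too late"
            // and the producer resends batches the broker may already have appended.
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "5");
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "30000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 100; i++) {
                    producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i),
                            (metadata, exception) -> {
                                if (exception != null) {
                                    System.out.println("send failed: " + exception);
                                }
                            });
                }
            }
        }
    }

A consumer reading the topic afterwards should then see some records more than once; how many depends on timing, so the timeout values may need tuning.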
Connect Timeout error when starting Confluent connector tasks post upgrade
We have Java sink connectors and recently upgraded Kafka Connect from 5.3.1 to 7.3.1.
I am trying to set up the SalesforceSObjectSinkConnector on my local machine and get:
org.apache.kafka.common.config.ConfigException: Invalid value [] for configuration bootstrap.servers: Empty list
at io.confluent.connect.utils.validators.NonEmptyListValidator.ensureValid(NonEmptyListValidator.java:21)
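For what it’s worth, here is a small hedged sketch of what that validator is rejecting: a bootstrap.servers value that resolves to an empty list. The broker hosts are placeholders, and the idea that the value is coming through as an empty string (for example an unresolved variable after the upgrade) is only a guess on my part.

    import java.util.Arrays;
    import java.util.List;

    public class BootstrapServersCheck {
        public static void main(String[] args) {
            // Hypothetical: the value the connector config ends up with. An empty string
            // splits into an empty list, which is what the NonEmptyListValidator in the
            // stack trace rejects with "Invalid value [] ... Empty list".
            String fromConfig = "";
            String fixed = "broker1:9092,broker2:9092"; // placeholder hosts; non-empty host:port list

            System.out.println("bad:  " + toList(fromConfig));
            System.out.println("good: " + toList(fixed));
        }

        private static List<String> toList(String value) {
            return value.isEmpty() ? List.of() : Arrays.asList(value.split(","));
        }
    }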
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain confluent kafka
While starting the Confluent Kafka server on Windows 10, I get this error:
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
Kafka topic creation best strategy
I’m building a streaming application that will receive logs of various transactions which need to be processed. I’m planning to use Kafka for this, and I’m a newbie to Kafka. What are the best practices when choosing topics, and are they supposed to be static values? Also, in my case each log will have a transaction-id. Can I use the value of the transaction-id to create a new topic whenever a new unique transaction-id is published to Kafka?
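For comparison, the usual advice I’ve seen is to keep the set of topics small and static and to use the record key for per-entity routing rather than creating a topic per id. Here is a sketch of that approach; the topic name, broker address, transaction-id, and payload are all made up for illustration.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionLogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String transactionId = "txn-42";   // hypothetical value taken from the incoming log
                String logPayload = "{...}";       // hypothetical log line / JSON

                // One static topic for all transaction logs; the transaction-id is the key,
                // so the default partitioner keeps each transaction's events together in one
                // partition instead of needing a new topic per id.
                producer.send(new ProducerRecord<>("transaction-logs", transactionId, logPayload));
            }
        }
    }

The design point is that topics are relatively heavyweight (partitions, replicas, metadata), while keys are free, so an unbounded set of transaction-ids maps much better onto keys than onto topics.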
Are Kafka write commits agnostic to producers?
I am learning the design of Kafka replication and producers.
To my surprise, I have somehow reached the conclusion that it is impossible for a producer to know accurately whether its messages have been committed by the broker, and that the safety of its writes is therefore opaque to it.
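For concreteness, here is a small sketch of the knob that, as far as I understand, is meant to give the producer that knowledge: with acks=all the broker only responds after the write has been replicated to the in-sync replicas, and the future returned by send() either yields the committed RecordMetadata or throws. The broker address and topic name are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AckedWrite {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ACKS_CONFIG, "all");            // ack only after ISR replication
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Blocking on the future: if it returns, the broker acknowledged the write under
                // the acks=all contract; if it throws, the producer knows it did not get that
                // acknowledgement (though the write may or may not have landed).
                RecordMetadata md = producer.send(
                        new ProducerRecord<>("test-topic", "k", "v")).get();
                System.out.printf("committed to %s-%d@%d%n", md.topic(), md.partition(), md.offset());
            }
        }
    }

The remaining uncertainty, as I understand it, is only in the failure case: a timed-out or failed send tells the producer it has no acknowledgement, not that the write definitely did not happen, which is why retries can produce duplicates unless idempotence is enabled.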