I’m encountering an issue where my Flink job, deployed on Flink clusters 1.19.0 and 1.19.1, fails to consume from all partitions of a Kafka topic when it is started from a checkpoint or savepoint. Although the Flink 1.19 documentation does not yet list an official Kafka connector release, I’m using the connector artifact from Maven Central whose version suffix indicates support for Flink 1.19.
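For reference, this is the dependency I’m pulling in. The version shown is what was current for the 1.19 line when I checked; the exact version number may differ for you, so treat it as an example rather than a prescription:

```xml
<!-- Kafka connector for Flink 1.19; the "-1.19" suffix encodes the
     targeted Flink minor version (version number may have moved on). -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.1.0-1.19</version>
</dependency>
```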
Has anyone faced similar issues with Flink and Kafka integration, especially regarding partition consumption with checkpoints/savepoints? Additionally, insights on the Maven Kafka connector’s compatibility with Flink 1.19 would be helpful.
In my attempts to resolve the issue, I ensured that the source parallelism matched the Kafka topic’s partition count, expecting this alignment would enable the Flink job to consume from all partitions. Despite this configuration, the problem persists.
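For context, my source setup is essentially the sketch below (names, topic, and parallelism are placeholders, not my production values). One thing I have since read is that when a job restores from a checkpoint/savepoint, the Kafka source restores its partition assignments from state, and new or unassigned partitions are only picked up if periodic partition discovery is enabled, so I have included that property here as well:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaIngestJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")          // placeholder
                .setTopics("my-topic")                        // placeholder
                .setGroupId("my-consumer-group")              // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                // Periodically re-discover partitions; without this, a
                // restored job keeps only the splits stored in state.
                .setProperty("partition.discovery.interval.ms", "60000")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           // Set to the topic's partition count (placeholder value here).
           .setParallelism(8)
           .print();

        env.execute("kafka-ingest");
    }
}
```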
Thanks for any assistance!