Messages are being produced to Kafka faster than they are being consumed in Flink.
When I produce 1.3 million messages per second to Kafka, the Flink consumer processes about 1.2 million messages per second. I measured this throughput by aggregating the numRecordsInPerSecond metric shown for each source subtask. Initially, the records-lag-max metric for a given subtask shows a lag of a few hundred thousand records, which steadily increases to over a million records.
To address this, I lowered the number of messages produced to Kafka to 100,000 per second. My reasoning was that if Flink can handle 1.2 million messages per second, it should handle 100,000 messages per second without any consumer lag. I kept all resource allocations the same. However, records-lag-max still reported a consumer lag of approximately 10,000 records, where I expected it to be zero.
Below is the code of the test Flink job.
env.setParallelism(25);
env.setMaxParallelism(25);
........
// Dedicated slot sharing group so the Kafka source gets its own resources
var ssgKafkaSource = SlotSharingGroup.newBuilder("kafka source")
        .setCpuCores(5.3)
        .setTaskHeapMemoryMB(19666)
        .build();

var deserializationSchema =
        ConfluentRegistryAvroDeserializationSchema.forSpecific(Message.class, jobProperties.schemaRegistryUrl);

KafkaSource<Message> source = KafkaSource.<Message>builder()
        .setBootstrapServers(jobProperties.bootstrapServers)
        .setGroupId(jobProperties.consumerGroupId)
        .setTopics(jobProperties.inputTopic)
        .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
        .setValueOnlyDeserializer(deserializationSchema)
        .build();

DataStream<Message> messages = env
        .fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
        .slotSharingGroup(ssgKafkaSource);

// Trivial downstream work: map each message to a constant
var ones = messages.map(m -> 1);
In terms of serialization, I followed the guidelines outlined in the Flink serialization tuning guide and ensured I was using Flink's Avro serializer by including the flink-avro dependency and having my Message type extend SpecificRecordBase.
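As a sanity check that Flink really picks up its Avro serializer rather than silently falling back to Kryo, one option is to disable generic types on the execution config (a sketch against the same env as above, not something that is in the job yet):

// Sketch only: with generic types disabled, the job fails fast if any type
// would be serialized with the generic Kryo serializer instead of a dedicated
// one (e.g. Flink's Avro serializer for SpecificRecordBase subclasses).
env.getConfig().disableGenericTypes();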
Each Task Manager (TM) runs on an EC2 instance with 32 vCores, 256 GB of memory, and SSDs optimized for low storage I/O latency. I read through the case study on the impact of disk on the RocksDB state backend… and am fairly certain I can rule out the issues described there.
I’ve tried modifying Kafka consumer properties such as "fetch.max.bytes" and "max.poll.records", but they seem to have no real effect.
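For reference, these can be forwarded to the underlying consumer through the KafkaSource builder, roughly like this (illustrative values, not the exact ones I tried):

// Sketch with illustrative values, applied to the same builder as above.
KafkaSource<Message> tunedSource = KafkaSource.<Message>builder()
        .setBootstrapServers(jobProperties.bootstrapServers)
        .setGroupId(jobProperties.consumerGroupId)
        .setTopics(jobProperties.inputTopic)
        .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
        .setValueOnlyDeserializer(deserializationSchema)
        .setProperty("fetch.max.bytes", "52428800")   // consumer default is 50 MB
        .setProperty("max.poll.records", "10000")     // consumer default is 500
        .build();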
Is there something I’m not considering here?