Kafka consumer is not reading message sent by producer if started after producer

Asked: 2018-11-05 17:43:44

Tags: java apache-kafka kafka-consumer-api

I found the property auto.offset.reset, which can be set to earliest or latest.

Here is my scenario with 1 topic, 1 partition, and 1 consumer.

For example, I start the producer and it sends 100 records to the topic. Then I start the consumer. According to the property auto.offset.reset=earliest, my consumer starts reading records from offset 0 of the partition. Now suppose my consumer commits records 1-100 asynchronously and then goes down, and in the meantime the producer sends 100 more records. When the consumer comes back up, will it start reading from offset 0 of the partition, or will it resume from offset 101 and process records 101 to 200?
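A minimal sketch of the consumer side of this scenario (not from the original post), assuming a local broker at localhost:9092 and placeholder topic/group names "my-topic" and "my-group":

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ScenarioConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "my-group");                   // placeholder consumer group
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");          // only applies when no committed offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitAsync();                       // async commit, as described in the scenario
            }
        }
    }
}
```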

2 Answers:

Answer 0 (score: 1)

Since Kafka 0.9, if a commit succeeds, Kafka stores the consumer's progress in a special internal topic called __consumer_offsets. That topic stores, for each consumer group, the offsets consumed per topic and partition.

So when your consumer starts again (within the same consumer group!), it will continue reading from the last committed offset (101 in your example). auto.offset.reset only specifies the behavior when there is no information for the group in __consumer_offsets (i.e., you have not committed anything yet).
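As a rough illustration (not part of the original answer), you can inspect what is stored for the group and thus where a restarted consumer will resume; broker address, topic, and group names are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");   // must be the same group as the original consumer
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // committed() returns the offset stored in __consumer_offsets for this group,
            // or null if the group has never committed for this partition.
            // (Newer clients prefer the Set-based overload committed(Collections.singleton(tp)).)
            OffsetAndMetadata meta = consumer.committed(tp);
            System.out.println(meta == null
                    ? "No committed offset; auto.offset.reset decides where to start"
                    : "Consumer group will resume from offset " + meta.offset());
        }
    }
}
```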

Answer 1 (score: 0)

If the offsets have not been committed, you should configure the property to read from the beginning. In Java, there is also a method like seekToBeginning().
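A hedged sketch of using seekToBeginning() to force a consumer back to the start of its assigned partitions, regardless of committed offsets; broker, topic, and group names are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SeekToBeginningExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            // poll once so the group coordinator assigns partitions to this consumer
            consumer.poll(Duration.ofMillis(100));
            // rewind every assigned partition to offset 0
            consumer.seekToBeginning(consumer.assignment());
            // subsequent poll() calls now read from the beginning of each partition
        }
    }
}
```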