I'm trying to run a simple demo with kafka-0.10.0.0. My producer works fine, but my consumer may be wrong; the code is below.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "group1");
props.put("enable.auto.commit", "false");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("topictest2"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (TopicPartition partition : records.partitions()) {
        List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
        for (ConsumerRecord<String, String> record : partitionRecords) {
            System.out.println("Thread = " + Thread.currentThread().getName());
            System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n",
                    record.partition(), record.offset(), record.key(), record.value());
        }
        // Commit the offset of the next record to be read for this partition.
        long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
        consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
    }
}
But when I run this demo, there is no output! What's wrong with my code?
Answer 0 (score: 0)
The code looks correct to me.

I think the program is simply waiting for new messages, because auto.offset.reset defaults to latest.

If there are already some messages in that topic and you want to read them, try adding

    props.put("auto.offset.reset", "earliest");

to read the topic from the beginning, and also reset group.id to something unique so the consumer does not resume from previously committed offsets (or don't commit offsets at all). Once the group has committed offsets, auto.offset.reset is ignored:
props.put("group.id", "group."+UUID.randomUUID().toString());
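Putting the two suggestions together, the consumer configuration from the question would look roughly like the sketch below. This is a minimal illustration, not the asker's exact code: it only builds and inspects the java.util.Properties object, so it runs without a broker or the kafka-clients jar on the classpath, and the class and method names (ConsumerConfigDemo, buildProps) are invented for this example.

```java
import java.util.Properties;
import java.util.UUID;

public class ConsumerConfigDemo {
    // Build the consumer configuration with the two fixes from the answer:
    // start from the beginning of the topic, and use a fresh group id so no
    // previously committed offsets are picked up.
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // A unique group id guarantees there are no saved offsets for this group.
        props.put("group.id", "group." + UUID.randomUUID());
        // With no committed offsets, "earliest" makes the consumer start from
        // the first available record instead of waiting for new ones.
        props.put("auto.offset.reset", "earliest");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        // Prints: auto.offset.reset = earliest
        System.out.println("auto.offset.reset = " + props.getProperty("auto.offset.reset"));
        System.out.println("group.id = " + props.getProperty("group.id"));
    }
}
```

With a configuration like this, a fresh run of the consumer starts from the first record of topictest2, because the brand-new group has no committed offsets, so auto.offset.reset takes effect.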