kafka.api.OffsetRequest - unable to retrieve results

Date: 2016-11-25 15:57:36

Tags: java scala apache-kafka

I'm working on a task that determines consumer lag, and I need to retrieve the current producer offset as follows:

import java.util.ArrayList;
import java.util.List;

import kafka.api.OffsetRequest;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.api.Request;
import kafka.common.TopicAndPartition;
import scala.Predef;
import scala.Tuple2;
import scala.collection.mutable.WrappedArray;

// Ask each partition for its latest (log-end) offset.
PartitionOffsetRequestInfo partitionOffsetRequestInfo =
    new PartitionOffsetRequestInfo(OffsetRequest.LatestTime(), 100);

// topic and partitionMetadataList are defined earlier (not shown).
List<TopicAndPartition> partitions = new ArrayList<>();
List<Tuple2<TopicAndPartition, PartitionOffsetRequestInfo>> tuple2List = new ArrayList<>();
for (int i = 0; i < partitionMetadataList.size(); i++) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, i);
    partitions.add(topicAndPartition);

    tuple2List.add(new Tuple2<>(topicAndPartition, partitionOffsetRequestInfo));
}

// The scala OffsetRequest constructor wants a scala.collection.immutable.Map.
Tuple2<TopicAndPartition, PartitionOffsetRequestInfo>[] tuple2Array =
    tuple2List.parallelStream().toArray(Tuple2[]::new);

WrappedArray<Tuple2<TopicAndPartition, PartitionOffsetRequestInfo>> wrappedArray =
    Predef.wrapRefArray(tuple2Array);

scala.collection.immutable.Map<TopicAndPartition, PartitionOffsetRequestInfo> offsetRequestInfoMap =
    (scala.collection.immutable.Map<TopicAndPartition, PartitionOffsetRequestInfo>)
    scala.Predef$.MODULE$.Map().apply(wrappedArray);

OffsetRequest offsetRequest = new OffsetRequest(offsetRequestInfoMap, (short) 0,
    0, OffsetRequest.DefaultClientId(), Request.OrdinaryConsumerId());
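
A request like this can then be sent with the old SimpleConsumer API. For illustration, a minimal sketch using the kafka.javaapi variant, which accepts a plain java.util.Map and so skips the Scala Map conversion above; the host, port, client id, and class name are placeholders, not values from my setup:

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

// Sketch: fetch the latest (log-end) offset of a single partition.
public class LatestOffsetSketch {
    public static long latestOffset(String host, int port, String topic, int partition) {
        // Placeholder connection settings: 100000 ms socket timeout, 64 KB buffer.
        SimpleConsumer consumer = new SimpleConsumer(host, port, 100000, 64 * 1024, "lag-checker");
        try {
            TopicAndPartition tp = new TopicAndPartition(topic, partition);
            Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
            // LatestTime() asks for the log-end offset; 1 = max number of offsets to return.
            requestInfo.put(tp, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));

            kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                    requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "lag-checker");

            OffsetResponse response = consumer.getOffsetsBefore(request);
            if (response.hasError()) {
                throw new IllegalStateException("offset fetch failed, error code "
                        + response.errorCode(topic, partition));
            }
            return response.offsets(topic, partition)[0]; // newest offset comes first
        } finally {
            consumer.close();
        }
    }
}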

Looking at the OffsetResponse, I see an UnknownTopicOrPartitionException and an empty offsets array. If I pass (short) 1 as the version id (the way I do for the OffsetFetchResponse call), I get an exception from NetworkReceive.readFromReadableChannel when I try to retrieve the result.

Questions:

a. Is there a better way to get the current producer offset?

b. Why doesn't the OffsetRequest call work with versionId = 1?

Edit:

Note that I can retrieve the consumer offset over this same channel, so I know the channel works.

I can retrieve the values from the command line:

kafka-consumer-groups --bootstrap-server hostname:9092 --describe --new-consumer --group test_consumer

Edit:

Tried reusing the sample Scala code (rewritten in Java):

KafkaConsumer<String, String> kafkaConsumer = getConsumer();
List<org.apache.kafka.common.TopicPartition> topicAndPartitions = new ArrayList<>();

org.apache.kafka.common.TopicPartition topicAndPartition = new org.apache.kafka.common.TopicPartition("my_topic", 0);
topicAndPartitions.add(topicAndPartition);

kafkaConsumer.assign(topicAndPartitions);

// Seek to the log-end offset and read it back.
kafkaConsumer.seekToEnd(topicAndPartitions);
long lPos = kafkaConsumer.position(topicAndPartition);

I hit the same NetworkReceive.readFromReadableChannel exception on the .position() call.

2 answers:

Answer 0 (score: 0)

According to the source code, the current version of OffsetRequest is 0, not 1. Also, the source does not return a version-1 response containing timestamp information as the documentation suggests, so it is probably a documentation bug.
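
You can confirm which version the client jar on your classpath actually uses with a quick check; a small sketch (the class name is arbitrary), assuming the old 0.8/0.9/0.10-era scala client is on the classpath:

public class OffsetRequestVersionCheck {
    public static void main(String[] args) {
        // With the old scala client this prints 0, the versionId that
        // kafka.api.OffsetRequest actually speaks.
        System.out.println("OffsetRequest.CurrentVersion = "
                + kafka.api.OffsetRequest.CurrentVersion());
    }
}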

Answer 1 (score: 0)

If you've ended up here, this is a working solution:

private void getOffsets(String topic, String group) {
    KafkaConsumer<String, String> kafkaConsumer = getConsumer(topic, group);
    List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(topic);

    List<org.apache.kafka.common.TopicPartition> topicAndPartitions = new ArrayList<>();

    for(int i = 0; i < partitionInfos.size(); i++) {
        org.apache.kafka.common.TopicPartition topicAndPartition = new org.apache.kafka.common.TopicPartition(topic, i);
        topicAndPartitions.add(topicAndPartition);
    }

    List<Long> startList = new ArrayList<>();
    List<Long> endList = new ArrayList<>();

    kafkaConsumer.assign(topicAndPartitions);

    for(int i = 0; i < partitionInfos.size(); i++) {
        OffsetAndMetadata offsetAndMetadata = kafkaConsumer.committed(topicAndPartitions.get(i));
        if(offsetAndMetadata != null) {
            startList.add(offsetAndMetadata.offset());
        }
    }

    // did we find any active partitions?
    if(startList.size() == 0) {
        LOGGER.info("topic:group not found: {}:{}", topic, group);
        return;
    }

    kafkaConsumer.seekToEnd(topicAndPartitions);

    for(int i = 0; i < partitionInfos.size(); i++) {
        endList.add(i, kafkaConsumer.position(topicAndPartitions.get(i)));
    }

    LOGGER.debug("startlist.size: {}  endlist.size: {}  partitions: {}", startList.size(), endList.size(), partitionInfos.size());

    long sumLag = 0;
    for(int i = 0; i < partitionInfos.size(); i++) {
        long lStart = startList.get(i);
        long lEnd = endList.get(i);

        sumLag += (lEnd - lStart);

        /*
         * At this point I'm sending the info to Datadog.
         * The 'sum' value is nice to have.
         */
        LOGGER.debug("partition: {}  start: {}   end: {}  lag: {}", i, lStart, lEnd, (lEnd - lStart));
    }

    kafkaConsumer.poll(100);

    // Drop the assignment by assigning an empty partition list.
    topicAndPartitions.clear();
    kafkaConsumer.assign(topicAndPartitions);

}
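
As a side note: if the client is kafka-clients 0.10.1.0 or newer, endOffsets() can replace the seekToEnd()/position() loop; a minimal sketch under that assumption, reusing the same getConsumer() helper as above:

private long totalLag(String topic, String group) {
    KafkaConsumer<String, String> consumer = getConsumer(topic, group);

    List<org.apache.kafka.common.TopicPartition> partitions = new ArrayList<>();
    for (PartitionInfo info : consumer.partitionsFor(topic)) {
        partitions.add(new org.apache.kafka.common.TopicPartition(topic, info.partition()));
    }
    consumer.assign(partitions);

    // endOffsets() returns the log-end ("producer") offsets without moving the consumer position.
    java.util.Map<org.apache.kafka.common.TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);

    long sumLag = 0;
    for (org.apache.kafka.common.TopicPartition tp : partitions) {
        OffsetAndMetadata committed = consumer.committed(tp);
        if (committed != null) {
            sumLag += endOffsets.get(tp) - committed.offset();
        }
    }
    return sumLag;
}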