Round-robin assignment of messages to partitions does not work for messages without a key

Time: 2019-12-28 05:27:29

Tags: java apache-kafka round-robin

I created a topic first_topic and sent messages to it:

        for (int i = 0; i < 10; i++) {

            //create producer record (no key, so the partitioner chooses the partition)
            ProducerRecord<String, String> record = new ProducerRecord<String, String>("first_topic", "hello world " + i);
            //send data asynchronously
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    //executes every time a record is sent or an exception occurs
                    if (e == null) {
                        //the record was successfully sent
                        logger.info("Received new metadata \n" +
                                "Topic : " + recordMetadata.topic() + "\n" +
                                "Partition : " + recordMetadata.partition() + "\n" +
                                "Offset : " + recordMetadata.offset() + "\n" +
                                "Timestamp : " + recordMetadata.timestamp());
                    } else {
                        logger.error("Error while producing record", e);
                    }
                }
            });
        }
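
For completeness, a minimal producer setup that the loop above presumes could look like the following sketch; the bootstrap address, serializer choice and the class name used for the logger are assumptions, since the original snippet does not show them:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ProducerDemo {                              // hypothetical class name
        private static final Logger logger = LoggerFactory.getLogger(ProducerDemo.class);

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed local broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            // ... the send loop shown above goes here ...

            producer.flush();   // make sure the batched records are actually delivered
            producer.close();
        }
    }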

But all messages go to partition 2. Ideally they should be spread across all 3 partitions in round-robin fashion, yet they are not; see the consumer-group output below. What am I doing wrong?

kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-third-application

Consumer group 'my-third-application' has no active members.
GROUP                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID     HOST            CLIENT-ID
my-third-application first_topic     0          0               0               0               -               -               -
my-third-application first_topic     1          0               0               0               -               -               -
my-third-application first_topic     2          10              10              0               -               -               -

1 Answer:

Answer 0 (score: 0)

I have run into this problem as well. The cause is not the round-robin partitioner itself but the producer's "doSend" method. In "doSend", when the accumulator returns a result whose abortForNewBatch flag is true, doSend calls the "partition" method a second time, and the partition chosen the first time is left unused. This is particularly dangerous when the topic has only two partitions, because in that case only one partition ever gets used.

The doSend method:

...

    RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
            serializedValue, headers, interceptCallback, remainingWaitMs, true, nowMs);

    if (result.abortForNewBatch) {
        int prevPartition = partition;
        partitioner.onNewBatch(record.topic(), cluster, prevPartition);
        partition = partition(record, serializedKey, serializedValue, cluster);
        tp = new TopicPartition(record.topic(), partition);
        if (log.isTraceEnabled()) {
            log.trace("Retrying append due to new batch creation for topic {} partition {}. The old partition was {}", record.topic(), partition, prevPartition);
        }
        // producer callback will make sure to call both 'callback' and interceptor callback
        interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);

        result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs, false, nowMs);
    }
...

A custom round-robin partitioner like the one below fixes this: onNewBatch remembers the partition that doSend is about to abandon, and the next partition() call hands it back, so no partition is skipped:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.utils.Utils;

    public class CustomRoundRobinPartitioner implements Partitioner {

        private final ConcurrentMap<String, AtomicInteger> topicCounterMap = new ConcurrentHashMap<>();
        private final ConcurrentMap<String, AtomicInteger> unusedPartition = new ConcurrentHashMap<>();

        @Override
        public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
            // if a partition was abandoned because of a new batch, reuse it first
            if (unusedPartition.containsKey(topic))
                return unusedPartition.remove(topic).get();

            return nextPartition(topic, cluster);
        }

        public int nextPartition(String topic, Cluster cluster) {
            List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
            int numPartitions = partitions.size();
            int nextValue = counterNextValue(topic);
            List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
            if (!availablePartitions.isEmpty()) {
                int part = Utils.toPositive(nextValue) % availablePartitions.size();
                return availablePartitions.get(part).partition();
            } else {
                // no partitions are available, give a non-available partition
                return Utils.toPositive(nextValue) % numPartitions;
            }
        }

        private int counterNextValue(String topic) {
            AtomicInteger counter = topicCounterMap.computeIfAbsent(topic, k -> new AtomicInteger(0));
            return counter.getAndIncrement();
        }

        @Override
        public void close() {
        }

        @Override
        public void configure(Map<String, ?> configs) {
        }

        @Override
        public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
            // remember the partition doSend is about to abandon so partition() can return it next time
            unusedPartition.put(topic, new AtomicInteger(prevPartition));
        }
    }
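
To activate it, register the class through the standard partitioner.class producer setting (ProducerConfig.PARTITIONER_CLASS_CONFIG); a minimal sketch, assuming a Properties object configured with bootstrap servers and serializers as shown earlier:

    // add before creating the KafkaProducer
    props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomRoundRobinPartitioner.class.getName());
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);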