Kafka JDBC Connect is not publishing messages to one partition when I use the same key

Date: 2019-01-22 11:36:49

Tags: jdbc apache-kafka apache-kafka-connect confluent

Messages with the same key should go to the same partition of a topic, but the Kafka JDBC source connector is publishing messages to different partitions.

I created a topic (student-topic-in) with 5 partitions.

I created a student table using the following script:

CREATE TABLE student (
  std_id INT AUTO_INCREMENT PRIMARY KEY,
  std_name VARCHAR(50),
  class_name VARCHAR(50),
  father_name VARCHAR(50),
  mother_name VARCHAR(50), 
  school VARCHAR(50)
);

My JDBC source-quickstart properties file is as follows:

query= select * from student
tasks.max=1
mode=incrementing
incrementing.column.name=std_id
topic.prefix=student-topic-in
numeric.mapping=best_fit
timestamp.delay.interval.ms=10
transforms=CreateKey,ExtractKey,ConvertDate,Replace,InsertPartition,InsertTopic
transforms.CreateKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.CreateKey.fields=class_name
transforms.ExtractKey.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.ExtractKey.field=class_name
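
For reference, the first two transforms derive the message key from the row value; worked through for one hypothetical row (a sketch of the SMT semantics, not actual connector output):

value:                               {"std_id":145, "class_name":"15", ...}
after CreateKey (ValueToKey):        key = {"class_name":"15"}
after ExtractKey (ExtractField$Key): key = "15"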

When I insert details of students from the same class into the DB table, all the messages are published to one partition.

student-topic-in 3 "15" @ 35: {"std_id":145,"std_name":"pranavi311","class_name":"15","father_name":"abcd1","mother_name":"efgh1","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 36: {"std_id":146,"std_name":"pranavi321","class_name":"15","father_name":"abcd2","mother_name":"efgh2","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 37: {"std_id":147,"std_name":"pranavi331","class_name":"15","father_name":"abcd3","mother_name":"efgh3","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 38: {"std_id":148,"std_name":"pranavi341","class_name":"15","father_name":"abcd4","mother_name":"efgh4","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 39: {"std_id":149,"std_name":"pranavi351","class_name":"15","father_name":"abcd5","mother_name":"efgh5","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 40: {"std_id":150,"std_name":"pranavi361","class_name":"15","father_name":"abcd6","mother_name":"efgh6","school_name":"CSI","partition":null,"topic":"student-topic-in"}

Reached end of topic student-topic-in [3] at offset 41

However, when I insert student details for different classes, almost all the messages are still published to one partition.

student-topic-in 3 "11" @ 41: {"std_id":151,"std_name":"pranavi311","class_name":"11","father_name":"abcd1","mother_name":"efgh1","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "12" @ 42: {"std_id":152,"std_name":"pranavi321","class_name":"12","father_name":"abcd2","mother_name":"efgh2","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "13" @ 43: {"std_id":153,"std_name":"pranavi331","class_name":"13","father_name":"abcd3","mother_name":"efgh3","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "14" @ 44: {"std_id":154,"std_name":"pranavi341","class_name":"14","father_name":"abcd4","mother_name":"efgh4","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 3 "15" @ 45: {"std_id":155,"std_name":"pranavi351","class_name":"15","father_name":"abcd5","mother_name":"efgh5","school_name":"CSI","partition":null,"topic":"student-topic-in"}
student-topic-in 0 "16" @ 31: {"std_id":156,"std_name":"pranavi361","class_name":"16","father_name":"abcd6","mother_name":"efgh6","school_name":"CSI","partition":null,"topic":"student-topic-in"}

Reached end of topic student-topic-in [3] at offset 46

I am using the following kafkacat command to print the details (%t = topic, %p = partition, %k = key, %o = offset, %s = message payload):

kafkacat -b localhost:9092 -C -t student-topic-in -f '%t %p %k @ %o: %s\n' 

My expectation was that each class's messages would be published to one specific partition (in the JDBC connector I assigned class_name as the key), but it does not work that way.

What exactly am I missing? How do I get each class's students published to a specific partition?

2 Answers:

Answer 0 (score: 2)

In your case, everything is working correctly.

If you look at the Kafka Connect source code, you can see in the WorkerSourceTask::sendRecords method that the transformations are applied to each record before the producer sends it, and the message is then converted to a byte array by the Converter:

private boolean sendRecords() {
    ...
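    // the configured transformation chain (SMTs) is applied first...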
    final SourceRecord record = transformationChain.apply(preTransformRecord);
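    // ...then key and value are serialized to byte[] by the configured Converters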
    final ProducerRecord<byte[], byte[]> producerRecord = convertTransformedRecord(record); 
    ...
}

In your case the transformations are CreateKey, ExtractKey, ConvertDate, Replace, InsertPartition, InsertTopic, and the Converter is org.apache.kafka.connect.json.JsonConverter.

The Converter maps the key (with its schema) to a byte array, which is then sent to Kafka:

@Override
public byte[] fromConnectData(String topic, Schema schema, Object value) {
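    // with schemas.enable=false this takes the "without envelope" branch,
    // so a string key becomes a bare JSON string (double quotes included)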
    JsonNode jsonValue = enableSchemas ? convertToJsonWithEnvelope(schema, value) : convertToJsonWithoutEnvelope(schema, value);
    try {
        return serializer.serialize(topic, jsonValue);
    } catch (SerializationException e) {
        throw new DataException("Converting Kafka Connect data to byte[] failed due to serialization error: ", e);
    }
}

You have schemas disabled, so after this call the keys are serialized as follows (the sketch after the list reproduces these bytes):

  • 11: serializer.serialize(topic, new TextNode("11")) = [34,49,49,34]
  • 12: serializer.serialize(topic, new TextNode("12")) = [34,49,50,34]
  • 13: serializer.serialize(topic, new TextNode("13")) = [34,49,51,34]
  • 14: serializer.serialize(topic, new TextNode("14")) = [34,49,52,34]
  • 15: serializer.serialize(topic, new TextNode("15")) = [34,49,53,34]
  • 16: serializer.serialize(topic, new TextNode("16")) = [34,49,54,34]
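
You can verify these byte arrays with a minimal sketch using the same JsonSerializer that JsonConverter delegates to (assuming connect-json and Jackson are on the classpath; the class name and topic are placeholders):

import com.fasterxml.jackson.databind.node.TextNode;
import org.apache.kafka.connect.json.JsonSerializer;
import java.util.Arrays;

public class KeyBytesCheck {
    public static void main(String[] args) {
        try (JsonSerializer serializer = new JsonSerializer()) {
            // "11" is serialized as the JSON string "11", so the resulting
            // bytes include the surrounding double quotes (ASCII 34)
            byte[] bytes = serializer.serialize("student-topic-in", new TextNode("11"));
            System.out.println(Arrays.toString(bytes)); // [34, 49, 49, 34]
        }
    }
}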

Each message is sent by the producer to a specific partition. Which partition a message goes to is decided by the Partitioner (org.apache.kafka.clients.producer.Partitioner). Kafka Connect uses the default one: org.apache.kafka.clients.producer.internals.DefaultPartitioner.

Internally, the DefaultPartitioner uses the following function to compute the partition: org.apache.kafka.common.utils.Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

If you apply it to your parameters (5 partitions and the keys' byte arrays), you get the following (reproduced by the sketch after the list):

  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,49,34})) % 5 = 3
  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,50,34})) % 5 = 3
  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,51,34})) % 5 = 3
  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,52,34})) % 5 = 3
  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,53,34})) % 5 = 3
  • Utils.toPositive(Utils.murmur2(new byte[]{34,49,54,34})) % 5 = 0
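
These values can be reproduced against the kafka-clients library; Utils.murmur2 and Utils.toPositive are the same helpers the DefaultPartitioner calls (a minimal sketch; the class name is a placeholder):

import org.apache.kafka.common.utils.Utils;

public class PartitionCheck {
    public static void main(String[] args) {
        int numPartitions = 5;
        // the serialized keys "11".."16", double quotes included
        byte[][] keys = {
            {34, 49, 49, 34}, {34, 49, 50, 34}, {34, 49, 51, 34},
            {34, 49, 52, 34}, {34, 49, 53, 34}, {34, 49, 54, 34}
        };
        for (byte[] keyBytes : keys) {
            // the exact formula the DefaultPartitioner applies
            System.out.println(Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions);
        }
    }
}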

Hope this more or less explains what happens and why.

Answer 1 (score: -1)

I solved this problem by using the string converter for the key:

key.converter=org.apache.kafka.connect.storage.StringConverter
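
This presumably works because StringConverter serializes the key as its raw UTF-8 bytes, without the JSON quotes, so the murmur2 hash (and therefore the chosen partition) changes. A minimal sketch of the difference (class name is a placeholder):

import java.nio.charset.StandardCharsets;

public class ConverterComparison {
    public static void main(String[] args) {
        // JsonConverter with schemas disabled wraps the key in JSON quotes...
        byte[] jsonKey = "\"11\"".getBytes(StandardCharsets.UTF_8); // [34, 49, 49, 34]
        // ...while StringConverter emits the raw characters only
        byte[] stringKey = "11".getBytes(StandardCharsets.UTF_8);   // [49, 49]
        System.out.println(jsonKey.length + " vs " + stringKey.length); // 4 vs 2
    }
}

Either way the partitioning stays deterministic: identical keys still land on the same partition; the different key bytes simply happened to spread these particular class_name values across partitions.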