Partitioning by multiple nested fields in the Kafka Connect HDFS Sink

Date: 2019-05-02 16:07:31

Tags: apache-kafka hdfs apache-kafka-connect confluent

We are running the Kafka HDFS sink connector (version 5.2.1) and need to partition the data in HDFS by multiple nested fields. The data in the topic is stored as Avro and contains nested elements, but Connect does not recognize the nested fields and throws an error saying the field cannot be found. Below is the connector configuration we are using. Does the HDFS sink not support partitioning by nested fields? Partitioning by non-nested fields works fine.

{
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "topics.dir": "/projects/test/kafka/logdata/coss",
    "avro.codec": "snappy",
    "flush.size": "200",
    "connect.hdfs.principal": "test@DOMAIN.COM",
    "rotate.interval.ms": "500000",
    "logs.dir": "/projects/test/kafka/tmp/wal/coss4",
    "hdfs.namenode.principal": "hdfs/_HOST@HADOOP.DOMAIN",
    "hadoop.conf.dir": "/etc/hdfs",
    "topics": "test1",
    "connect.hdfs.keytab": "/etc/hdfs-qa/test.keytab",
    "hdfs.url": "hdfs://nameservice1:8020",
    "hdfs.authentication.kerberos": "true",
    "name": "hdfs_connector_v1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://myschema:8081",
    "partition.field.name": "meta.ID,meta.source,meta.HH",
    "partitioner.class": "io.confluent.connect.storage.partitioner.FieldPartitioner"
}
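
For illustration, assume a value schema shaped roughly like the following; only the meta subfields are inferred from partition.field.name above, and the record and field types are made up:

    {
      "type": "record",
      "name": "LogEvent",
      "fields": [
        {
          "name": "meta",
          "type": {
            "type": "record",
            "name": "Meta",
            "fields": [
              {"name": "ID", "type": "string"},
              {"name": "source", "type": "string"},
              {"name": "HH", "type": "string"}
            ]
          }
        }
      ]
    }

The stock FieldPartitioner in this version matches each partition.field.name entry only against top-level fields of the record, which is why a path such as meta.ID is reported as not found.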

1 Answer:

Answer 0 (score: 1):

I added nested-field support for the TimestampPartitioner, but the FieldPartitioner still has an outstanding PR:

https://github.com/confluentinc/kafka-connect-storage-common/pull/67
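
Until that PR lands, one workaround is a custom partitioner that resolves dot-separated field names against nested Structs. The sketch below extends DefaultPartitioner from kafka-connect-storage-common; the package and class names are hypothetical, and the field-resolution logic is an assumption, not the library's implementation:

    // Hypothetical workaround, not the official FieldPartitioner:
    // resolves dot-separated names such as "meta.ID" against nested Structs.
    package com.example.connect;

    import java.util.List;
    import java.util.Map;

    import io.confluent.connect.storage.errors.PartitionException;
    import io.confluent.connect.storage.partitioner.DefaultPartitioner;
    import io.confluent.connect.storage.partitioner.PartitionerConfig;
    import org.apache.kafka.connect.data.Struct;
    import org.apache.kafka.connect.sink.SinkRecord;

    public class NestedFieldPartitioner<T> extends DefaultPartitioner<T> {
      private List<String> fieldNames;

      @Override
      @SuppressWarnings("unchecked")
      public void configure(Map<String, Object> config) {
        super.configure(config); // sets the directory delimiter (this.delim)
        fieldNames = (List<String>) config.get(PartitionerConfig.PARTITION_FIELD_NAME_CONFIG);
      }

      @Override
      public String encodePartition(SinkRecord sinkRecord) {
        Object value = sinkRecord.value();
        if (!(value instanceof Struct)) {
          throw new PartitionException("Record value is not a Struct: " + sinkRecord);
        }
        StringBuilder path = new StringBuilder();
        for (String fieldName : fieldNames) {
          // Walk each dot-separated segment: "meta.ID" -> struct "meta", field "ID"
          Object current = value;
          for (String segment : fieldName.split("\\.")) {
            current = ((Struct) current).get(segment);
          }
          if (path.length() > 0) {
            path.append(delim);
          }
          path.append(fieldName).append('=').append(current);
        }
        return path.toString();
      }
    }

Build this into a jar, place it on the worker's plugin path next to the HDFS connector, and point partitioner.class at it while keeping partition.field.name as in the question, e.g. "partitioner.class": "com.example.connect.NestedFieldPartitioner".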