How to configure a Kafka Connect worker to stream more messages to HDFS

Time: 2019-05-27 07:02:17

Tags: apache-kafka hdfs parquet apache-kafka-connect confluent

My current setup

NiFi streams Avro messages (referencing the Confluent Schema Registry) into Kafka (v2.0.0, 20 partitions, Confluent v5.0.0), and a Kafka Connect worker (HDFS sink) streams those messages to HDFS in Parquet format with flush.size=70000.

My problem

This setup works fine, but when I change the configuration to flush.size=1000000 (because 70k messages add up to only 5-7 MB, while the Parquet block size is 256 MB), the Connect worker starts returning Error sending fetch request errors:

...
[2019-05-24 14:00:21,784] INFO [ReplicaFetcher replicaId=1, leaderId=3, fetcherId=0] Error sending fetch request (sessionId=1661483807, epoch=374) to node 3: java.io.IOException: Connection to 3 was disconnected before the response was read. (org.apache.kafka.clients.FetchSessionHandler)
[2019-05-24 14:00:21,784] WARN [ReplicaFetcher replicaId=1, leaderId=3, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=1, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={mytopic-10=(offset=27647797, logStartOffset=24913298, maxBytes=1048576), mytopic-16=(offset=27647472, logStartOffset=24913295, maxBytes=1048576), mytopic-7=(offset=27647429, logStartOffset=24913298, maxBytes=1048576), mytopic-4=(offset=27646967, logStartOffset=24913296, maxBytes=1048576), mytopic-13=(offset=27646404, logStartOffset=24913298, maxBytes=1048576), mytopic-19=(offset=27648276, logStartOffset=24913300, maxBytes=1048576), mytopic-1=(offset=27647036, logStartOffset=24913307, maxBytes=1048576)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=1661483807, epoch=374)) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 3 was disconnected before the response was read
...

My configuration:

HDFS connector configuration:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
tasks.max=1
topics=mytopic
hdfs.url=hdfs://hdfsnode:8020/user/someuser/kafka_hdfs_sink/
flush.size=1000000
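
Not part of the original question, but worth noting: besides the record-count threshold flush.size, the HDFS sink connector also supports time-based file rotation, so files can still be committed while a very large flush.size is filling up. A minimal sketch, with the rotation value being an illustrative assumption rather than something from the post:

# Hypothetical variant of the connector config above (rotate value is illustrative).
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
tasks.max=1
topics=mytopic
hdfs.url=hdfs://hdfsnode:8020/user/someuser/kafka_hdfs_sink/
# Commit a file once this many records have been written to it ...
flush.size=1000000
# ... or once this much time has elapsed for the open file, whichever comes first,
# so records are not held open indefinitely while waiting for the record count.
rotate.interval.ms=600000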

Kafka Connect worker configuration (standalone mode):

bootstrap.servers=confleuntnode1:9092,confleuntnode2:9092,confleuntnode3:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://confleuntnode:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/opt/confluent/current/share/java/
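
A side note that is not in the original post: any worker property prefixed with consumer. is passed through to the consumers that Connect creates for sink tasks, so fetch behaviour can be tuned at the worker level if needed. A hedged sketch with illustrative values:

# Hypothetical additions to the worker config above (values are illustrative).
# The "consumer." prefix hands these settings to the sink tasks' Kafka consumers.
consumer.max.poll.records=10000
consumer.max.partition.fetch.bytes=5242880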

My question:

How can I use a Kafka Connect worker to stream larger volumes of messages from Kafka to HDFS (i.e. commit larger files)?

1 Answer:

Answer 0 (score: 1)

I solved this by running Connect in distributed mode instead of standalone mode. I can now write up to 3.5 million records (~256 MB) per file to HDFS. However, this introduced new problems: 1) processing is very slow (roughly 35 million records per hour); 2) I cannot write Parquet files larger than 256 MB. I will post a new SO question about these.
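
For completeness, a minimal sketch of what a distributed-mode worker config could look like; the group id, storage topic names, and replication factors below are assumed placeholders, not values from the answer, while the hosts and converters are carried over from the standalone config above:

# connect-distributed.properties (sketch, placeholder values)
bootstrap.servers=confleuntnode1:9092,confleuntnode2:9092,confleuntnode3:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://confleuntnode:8081
# In distributed mode, offsets, connector configs, and status are stored in
# Kafka topics instead of a local file.
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
offset.storage.replication.factor=3
config.storage.replication.factor=3
status.storage.replication.factor=3
plugin.path=/opt/confluent/current/share/java/

In distributed mode the HDFS sink is no longer started from a connector properties file; instead the connector config is submitted as JSON to the worker's REST API (by default on port 8083) via a POST to /connectors.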