Error using the S3 connector with Kafka in a Landoop Docker container

Date: 2018-07-20 14:17:48

Tags: docker apache-kafka avro apache-kafka-connect confluent

I am creating a sink connector with the following configuration:

connector.class=io.confluent.connect.s3.S3SinkConnector
s3.region=us-west-2
topics.dir=topics
flush.size=3
schema.compatibility=NONE
topics=my_topic
tasks.max=1
s3.part.size=5242880
format.class=io.confluent.connect.s3.format.avro.AvroFormat
# added after comment 
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
storage.class=io.confluent.connect.s3.storage.S3Storage
s3.bucket.name=my-bucket
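
(For reference, creating the connector amounts to POSTing this configuration to the Connect worker's REST API. A rough sketch, assuming the worker's default port 8083 and a made-up connector name, with only some of the keys above repeated:

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "s3-sink",
        "config": {
          "connector.class": "io.confluent.connect.s3.S3SinkConnector",
          "topics": "my_topic",
          "tasks.max": "1",
          "s3.bucket.name": "my-bucket",
          "storage.class": "io.confluent.connect.s3.storage.S3Storage",
          "format.class": "io.confluent.connect.s3.format.avro.AvroFormat"
        }
      }'
)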

When the connector runs, I get the following error:

org.apache.kafka.connect.errors.DataException: coyote-test-avro
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:453)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:287)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 91319
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema not found; error code: 40403
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:192)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:218)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:394)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:387)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:65)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:138)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:122)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:194)
    at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:121)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:84)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:453)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:287)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
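
Since the failure is a lookup of schema ID 91319 against the registry, the registry can be queried directly to see whether that ID exists (sketch, assuming the registry at http://localhost:8081 as configured above):

# does the registry know this ID?
curl -s http://localhost:8081/schemas/ids/91319
# which subjects are registered?
curl -s http://localhost:8081/subjects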

However, my topic does have a schema, as I can see in the UI provided by the landoop/fast-data-dev Docker container. And I get the same error even when I try to write the raw data to S3, changing the configuration as follows

value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
storage.class=io.confluent.connect.s3.storage.S3Storage
schema.compatibility=NONE

and removing schema.generator.class, even though no Avro schema should be involved at that point.

To be able to write to S3 I set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the container, but the problem seems to occur before that stage.

I imagine there may be a version problem. As mentioned, I use the landoop/fast-data-dev container in a Docker machine (it doesn't work in the local Docker engine on my Mac), and producing and consuming work perfectly. Here is the About section:

[Screenshot: the fast-data-dev UI's About section, listing component versions]

I looked through the Connect logs but couldn't find anything useful; however, if you can tell me what to look for, I will add the relevant lines (the full logs are too large).

1 Answer:

Answer 0 (score: 1)

Every single message in the topic must be encoded as Avro, following the Schema Registry's wire format.

The converter looks at bytes 2-5 of the raw Kafka data (both keys and values), i.e. the four bytes that follow the leading magic byte, converts them to an integer (the ID shown in your error), and looks that ID up in the registry.

If the data is not Avro, or is otherwise bad data, you will get either this error or an error about an invalid magic byte.
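
As a sketch of what the converter expects: each key/value starts with a zero magic byte, followed by a 4-byte big-endian schema ID, then the Avro payload. Hex-dumping the first record of the topic makes this visible (broker address assumed to be the container default; 91319 is 0x000164B7):

# raw bytes of one record value, hex-dumped
kafka-console-consumer --bootstrap-server localhost:9092 --topic my_topic \
  --from-beginning --max-messages 1 | xxd | head -n 1
# for registry-framed Avro this starts with:
# 00000000: 0000 0164 b7..   <- 0x00 magic byte, then schema ID 0x000164b7 = 91319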

This error is not a Connect error; you can reproduce it with the Avro console consumer if you add the print.key property.
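
For example (sketch; broker and registry addresses assumed from your setup):

kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic my_topic \
  --from-beginning \
  --property schema.registry.url=http://localhost:8081 \
  --property print.key=true

If the keys were not written in the registry's Avro wire format, this fails with the same SerializationException.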

Assuming that is the case, one solution is to change the key converter to the byte-array converter, which skips the Avro lookup for the keys.
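
A minimal sketch of that override, using the converters that ship with Kafka Connect (value.converter stays as Avro if the values really are Avro):

key.converter=org.apache.kafka.connect.converters.ByteArrayConverter

or, if the keys are plain strings:

key.converter=org.apache.kafka.connect.storage.StringConverter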

Otherwise, since you cannot delete individual messages in Kafka, the only options here are to figure out why the producer is sending bad data and fix it, then either move the Connect consumer group to the latest offset containing valid data, wait for the invalid data to expire from the topic, or move to a new topic entirely.
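
For the offset route, a sketch with the standard tooling: a sink connector consumes with a group named connect-<connector name>, so after stopping the connector the group can be moved to the end of the topic (the connector name here is an assumption):

kafka-consumer-groups --bootstrap-server localhost:9092 \
  --group connect-s3-sink --topic my_topic \
  --reset-offsets --to-latest --execute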