Exception when deserializing Avro data using ConfluentSchemaRegistry?

Date: 2019-04-22 14:47:16

Tags: apache-kafka apache-flink avro confluent confluent-schema-registry

I am new to Flink and Kafka. I am trying to deserialize Avro data using the Confluent Schema Registry. I have already installed Flink and Kafka on an EC2 machine, and the "test" topic was created before running the code.

Code: https://gist.github.com/mandar2174/5dc13350b296abf127b92d0697c320f2

As part of the implementation, the code does the following:

1) Creates a Flink DataStream from a list of User elements (User is an Avro-generated class).
2) Writes the DataStream to Kafka using AvroSerializationSchema.
3) Reads the data back from Kafka using ConfluentRegistryAvroDeserializationSchema, which fetches the schema from the Confluent Schema Registry.

Command used to run the Flink executable jar:

./bin/flink run -c com.streaming.example.ConfluentSchemaRegistryExample /opt/flink-1.7.2/kafka-flink-stream-processing-assembly-0.1.jar

Exception thrown when running the code:

java.io.IOException: Unknown data format. Magic number does not match
    at org.apache.flink.formats.avro.registry.confluent.ConfluentSchemaRegistryCoder.readSchema(ConfluentSchemaRegistryCoder.java:55)
    at org.apache.flink.formats.avro.RegistryAvroDeserializationSchema.deserialize(RegistryAvroDeserializationSchema.java:66)
    at org.apache.flink.streaming.util.serialization.KeyedDeserializationSchemaWrapper.deserialize(KeyedDeserializationSchemaWrapper.java:44)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:140)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:665)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:94)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
    at java.lang.Thread.run(Thread.java:748)

The Avro schema I am using for the User class is as follows:

{
  "type": "record",
  "name": "User",
  "namespace": "com.streaming.example",
  "fields": [
    {
      "name": "name",
      "type": "string"
    },
    {
      "name": "favorite_number",
      "type": [
        "int",
        "null"
      ]
    },
    {
      "name": "favorite_color",
      "type": [
        "string",
        "null"
      ]
    }
  ]
}
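For reference, a plain Avro binary payload for this schema carries no schema information at all. The following is a minimal sketch, hand-rolled in pure Python from the Avro specification's binary encoding rules (zigzag varints, length-prefixed strings, union branch indexes); the field values are made up for illustration:

```python
def zigzag(n: int) -> bytes:
    """Encode a long as an Avro zigzag varint."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while True:
        b = z & 0x7F
        z >>= 7
        if z:
            out.append(b | 0x80)  # continuation bit set
        else:
            out.append(b)
            return bytes(out)

def avro_string(s: str) -> bytes:
    """Avro string: zigzag length prefix followed by UTF-8 bytes."""
    data = s.encode("utf-8")
    return zigzag(len(data)) + data

def encode_user(name: str, favorite_number: int, favorite_color: str) -> bytes:
    """Encode one User record per the schema above (all unions take branch 0)."""
    body = avro_string(name)
    body += zigzag(0) + zigzag(favorite_number)   # union branch 0 = "int"
    body += zigzag(0) + avro_string(favorite_color)  # union branch 0 = "string"
    return body

payload = encode_user("mandar", 7, "blue")
print(payload.hex())  # → 0c6d616e646172000e0008626c7565
```

Note that the first byte here is the encoded name length, not a registry magic byte, which matters for the error below.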

Can someone point out which steps I am missing to deserialize Avro data using the Confluent Kafka Schema Registry?

1 Answer:

Answer 0 (score: 1)

The Avro data also needs to be written using the registry in order for the registry-aware deserializer to work.
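The reason is Confluent's wire format: registry-aware serializers prepend a zero magic byte and a 4-byte big-endian schema id to the Avro payload, and the deserializer checks that framing before anything else. A minimal Python sketch of that framing (the schema id and payload bytes here are hypothetical) shows why a message written without it fails the magic-byte check:

```python
import struct

MAGIC_BYTE = 0  # Confluent wire-format magic byte

def frame(schema_id: int, avro_payload: bytes) -> bytes:
    """Prepend Confluent framing: magic byte 0 + 4-byte big-endian schema id."""
    return struct.pack(">bI", MAGIC_BYTE, schema_id) + avro_payload

def unframe(message: bytes) -> tuple:
    """Mirror of the check in ConfluentSchemaRegistryCoder.readSchema."""
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != MAGIC_BYTE:
        raise IOError("Unknown data format. Magic number does not match")
    return schema_id, message[5:]

framed = frame(42, b"\x0cmandar")      # hypothetical schema id and payload
schema_id, payload = unframe(framed)   # framing present, so this succeeds

try:
    unframe(b"\x0cmandar")             # plain Avro bytes, no framing
except IOError as e:
    print(e)  # → Unknown data format. Magic number does not match
```

AvroSerializationSchema writes only the plain Avro bytes, so the first byte the consumer sees is record data rather than the magic byte, producing exactly the exception in the question.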

But there is still an open PR in Flink for adding a ConfluentRegistryAvroSerializationSchema.

I think the workaround for now is to use AvroDeserializationSchema, which does not depend on the registry (and correspondingly does not expect the registry framing on the messages).

If you do want to use the registry in the producer code, you will have to do it outside of Flink until that PR is merged.