JSON to AVRO deserialization error in KSQL: Skipping record due to deserialization error

Time: 2019-07-10 05:55:20

Tags: avro apache-kafka-connect ksql

I have set up Confluent Platform on AWS. My source is MySQL, and it is connected to Kafka Connect using the Debezium connector. The data in the source topic is in JSON format. In KSQL I created a derived topic that converts the JSON topic to AVRO, so the data can be sunk into MySQL with the JDBC sink connector. I used the following query:

CREATE STREAM json_stream (userId int, auth_id varchar, email varchar) WITH (KAFKA_TOPIC='test', VALUE_FORMAT='JSON');

Derived topic:

create TABLE avro_stream WITH (VALUE_FORMAT='AVRO') AS select * from json_stream;

I tried sinking the JSON messages into MySQL directly, but that failed because the sink connector requires a schema, so either JSON with an embedded schema or Avro messages would let me sink the data.
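(For reference, the JSON-with-schema option would mean enabling schemas on the JsonConverter, so that every message embeds its schema alongside the payload. A minimal sketch of just the converter properties involved, with the rest of the connector config unchanged; the sink connector would need the matching value.converter.schemas.enable=true as well:)

"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": "true"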

When consuming the topic avro_stream:

 [2019-07-09 13:27:30,239] WARN task [0_3] Skipping record due to deserialization error. topic=[avro_stream] partition=[3] offset=[144] (org.apache.kafka.streams.processor.internals.RecordDeserializer:86)
 org.apache.kafka.connect.errors.DataException: avro_stream
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
    at io.confluent.ksql.serde.connect.KsqlConnectDeserializer.deserialize(KsqlConnectDeserializer.java:44)
    at io.confluent.ksql.serde.connect.KsqlConnectDeserializer.deserialize(KsqlConnectDeserializer.java:26)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
    at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:63)
    at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:66)
    at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:97)
    at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
    at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:638)
    at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:936)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:831)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
 Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
 Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!

My Debezium connector config:

{
"name": "debezium-connector",
"config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.user": "XXXXX",
    "auto.create.topics.enable": "true",
    "database.server.id": "1",
    "tasks.max": "1",
    "database.history.kafka.bootstrap.servers": "X.X.X.X:9092",,
    "database.history.kafka.topic": "XXXXXXX",
    "transforms": "unwrap",
    "database.server.name": "XX-server",
    "database.port": "3306",
    "include.schema.changes": "true",
    "table.whitelist": "XXXX.XXXX",
    "key.converter.schemas.enable": "false",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "database.hostname": "X.X.X.X",
    "database.password": "xxxxxxx",
    "value.converter.schemas.enable": "false",
    "name": "debezium-connector",
    "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "database.whitelist": "XXXXX",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter"
},
"tasks": [
    {
        "connector": "debezium-connector",
        "task": 0
    }
],
"type": "source"

}

1 Answer:

Answer 0 (score: 0)

KSQL writes the key as a STRING, so while the value is serialized with Avro, the key is not. Your sink's Connect worker therefore needs the following configuration:

"key.converter": "org.apache.kafka.connect.storage.StringConverter"
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "<url to schema registry>",

If your workers are already configured to use Avro, you can override just key.converter in the connector configuration.
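Putting that together, a minimal sketch of what the JDBC sink connector config could look like for the avro_stream topic. The connection details, database name and Schema Registry URL are placeholders based on the question, not verified values:

{
  "name": "jdbc-sink-avro",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "avro_stream",
    "connection.url": "jdbc:mysql://X.X.X.X:3306/XXXXX",
    "connection.user": "XXXXX",
    "connection.password": "xxxxxxx",
    "auto.create": "true",
    "insert.mode": "insert",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081"
  }
}

Setting key.converter and value.converter at the connector level overrides the worker defaults for this connector only, so other connectors on the same worker are unaffected.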