Kafka Connect fails to deserialize offsets on every start

Asked: 2017-07-03 10:40:33

Tags: apache-kafka apache-kafka-connect

This is probably obvious, but I can't figure it out.

Every time I start my source connector, it fails to read the offsets stored in the offset file and logs the following error:

    21:05:01:519 | ERROR | pool-1-thread-1 | o.a.k.c.s.OffsetStorageReaderImpl | CRITICAL: Failed to deserialize offset data when getting offsets for task with namespace zohocrm-source-calls. No value for this data will be returned, which may break the task or cause it to skip some data. This could either be due to an error in the connector implementation or incompatible schema.
    org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
            at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:309)
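
For reference, when `schemas.enable` is true, `JsonConverter` expects every JSON document to be an envelope containing exactly a `schema` field and a `payload` field. A minimal example of the shape it accepts (the value here is illustrative, not from the failing offset file):

```json
{
  "schema": { "type": "string", "optional": false },
  "payload": "some-value"
}
```

Plain JSON without this envelope triggers the `DataException` above.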

Here are my StandaloneConfig values:

    access.control.allow.methods =
    access.control.allow.origin =
    bootstrap.servers = [localhost:9092]
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
    key.converter = class io.confluent.connect.avro.AvroConverter
    offset.flush.interval.ms = 60000
    offset.flush.timeout.ms = 5000
    offset.storage.file.filename = maxoptra-data.offset
    rest.advertised.host.name = null
    rest.advertised.port = null
    rest.host.name = null
    rest.port = 8083
    task.shutdown.graceful.timeout.ms = 5000
    value.converter = class io.confluent.connect.avro.AvroConverter

And here is my connector configuration:

    connector.class = com.maxoptra.data.zoho.connect.ZohoCrmSourceConnector
    key.converter = null
    name = zohocrm-source-calls
    tasks.max = 1
    transforms = null
    value.converter = null

Please advise.

Thanks

1 Answer:

Answer 0 (score: 0)

Set `key.converter.schemas.enable=true` and `value.converter.schemas.enable=true`. This will make the JsonConverter try to interpret your schema instead of deserializing against the default schema, which doesn't match here.
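
Note that in the configuration shown in the question, the record key/value converters are Avro, while the offset file is handled by the internal JsonConverter. A sketch of how the relevant worker properties could be laid out, assuming a Confluent-style standalone worker file (the filename and exact layout are illustrative, not from the question):

```properties
# connect-standalone.properties (hypothetical filename)
bootstrap.servers=localhost:9092

# Converters for record keys/values (Avro, as in the question)
key.converter=io.confluent.connect.avro.AvroConverter
value.converter=io.confluent.connect.avro.AvroConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Internal converters handle offsets; the error message itself
# suggests schemas.enable=false for plain JSON offset data
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.file.filename=maxoptra-data.offset
```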