Deserializing Avro messages

Date: 2020-02-01 21:45:24

Tags: python apache-kafka avro apache-kafka-connect kafka-producer-api

I deployed Kafka from here. I also added a Postgres container to docker-compose.yml like this:

postgres:
    image: postgres
    hostname: kafka-postgres
    container_name: kafka-postgres
    depends_on:
      - ksql-server
      - broker
      - schema-registry
      - connect
    ports:
      - 5432:5432

I created the topic pageviews.
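
For reference, a topic like this can also be created programmatically with confluent-kafka-python's AdminClient. A minimal sketch; the single partition and replication factor 1 are assumptions for a local setup:

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({'bootstrap.servers': 'localhost:9092'})
# One partition, replication factor 1: assumed values for a local stack.
futures = admin.create_topics([NewTopic('pageviews', num_partitions=1, replication_factor=1)])
futures['pageviews'].result()  # returns None on success, raises on failure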

I also created a DatagenConnector with these settings and ran it:

{
  "name": "datagen-pageviews",
  "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "kafka.topic": "pageviews",
  "max.interval": "100",
  "iterations": "999999999",
  "quickstart": "pageviews"
} 
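
For reference, a connector definition like this is normally registered through the Kafka Connect REST API, with the settings nested under a config key. A minimal sketch in Python, assuming Connect listens on localhost:8083:

import requests

connector = {
    "name": "datagen-pageviews",
    "config": {
        "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "kafka.topic": "pageviews",
        "max.interval": "100",
        "iterations": "999999999",
        "quickstart": "pageviews"
    }
}
# POST /connectors creates the connector and starts its tasks.
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()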

As far as I can see, the connector defines a schema for that topic:

{
  "type": "record",
  "name": "pageviews",
  "namespace": "ksql",
  "fields": [
    {
      "name": "viewtime",
      "type": "long"
    },
    {
      "name": "userid",
      "type": "string"
    },
    {
      "name": "pageid",
      "type": "string"
    }
  ],
  "connect.name": "ksql.pageviews"
} 
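
The registered schema can be inspected through the Schema Registry REST API under the pageviews-value subject. A minimal sketch, assuming the registry listens on localhost:8081:

import requests

# Fetch the latest value schema registered for the pageviews topic.
resp = requests.get("http://localhost:8081/subjects/pageviews-value/versions/latest")
print(resp.json()["schema"])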

My next step was to create a JdbcSinkConnector that would move data from the Kafka topic into a Postgres table. That worked. The connector's settings:

{
  "name": "from-kafka-to-pg",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "errors.tolerance": "all",
  "errors.log.enable": "true",
  "errors.log.include.messages": "true",
  "topics": [
    "pageviews"
  ],
  "connection.url": "jdbc:postgresql://kafka-postgres:5432/postgres",
  "connection.user": "postgres",
  "connection.password": "********",
  "auto.create": "true",
  "auto.evolve": "true"
}
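
Once the sink runs with auto.create enabled, it creates a pageviews table (named after the topic by default), which can be checked with psycopg2. A minimal sketch, assuming the container's port is mapped to localhost and the password matches the container's configuration:

import psycopg2  # assumed installed, e.g. pip install psycopg2-binary

conn = psycopg2.connect(host="localhost", port=5432, dbname="postgres",
                        user="postgres", password="********")  # password redacted
cur = conn.cursor()
cur.execute("SELECT * FROM pageviews LIMIT 5;")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()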

Then I tried to send messages to the topic myself. But it failed with this error:

[2020-02-01 21:16:11,750] ERROR Error encountered in task to-pg-0. Executing stage 'VALUE_CONVERTER' with class 'io.confluent.connect.avro.AvroConverter', where consumed record is {topic='pageviews', partition=0, offset=23834, timestamp=1580591160374, timestampType=CreateTime}. (org.apache.kafka.connect.runtime.errors.LogReporter)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic pageviews to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:110)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:487)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!

So the send method matters. This is how I do it (Python, confluent-kafka-python):

import json

from confluent_kafka import Producer

producer = Producer({'bootstrap.servers': 'localhost:9092'})
producer.poll(0)
# topic is 'pageviews'; kafka_delivery_report is my delivery callback
producer.produce(topic, json.dumps({
    'viewtime': 123,
    'userid': 'user_1',
    'pageid': 'page_1'
}).encode('utf8'), on_delivery=kafka_delivery_report)
producer.flush()

Maybe I should provide a schema along with the message (AvroProducer)?

2 answers:

Answer 0 (score: 1):

The problem arises because you are trying to read data that is not Avro with the Avro converter.

There are two possible solutions:

1. Switch your Kafka Connect sink connector to use the correct converter

For example, if you are consuming JSON data from a Kafka topic into a Kafka Connect sink:

...
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true/false
...

Set value.converter.schemas.enable to true or false depending on whether your messages embed a schema.
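
With value.converter.schemas.enable=true, the JsonConverter expects every message value to be a schema/payload envelope. For the pageviews record it would look roughly like this (the type names follow Connect's schema JSON, not Avro):

{
  "schema": {
    "type": "struct",
    "name": "ksql.pageviews",
    "optional": false,
    "fields": [
      { "field": "viewtime", "type": "int64", "optional": false },
      { "field": "userid", "type": "string", "optional": false },
      { "field": "pageid", "type": "string", "optional": false }
    ]
  },
  "payload": { "viewtime": 123, "userid": "user_1", "pageid": "page_1" }
}

With schemas.enable=false, only the bare payload object is sent.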

2. Switch the upstream format to Avro

To have the DatagenConnector produce Kafka messages with Avro as the value format, set the value.converter and value.converter.schema.registry.url parameters:

...
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://localhost:8081",
...

See the kafka-connect-datagen docs for details.


This article on Kafka Connect converters and serialization is very good.

Answer 1 (score: 1):

The topic expects messages in Avro. The AvroConverter looks for the Confluent wire format, a magic byte plus a schema ID, at the start of each message value, which is why your plain JSON payload failed with "Unknown magic byte!".

The AvroProducer from confluent-kafka-python does the trick:

from confluent_kafka import avro
from confluent_kafka.avro import AvroProducer


value_schema_str = """
{
   "namespace": "ksql",
   "name": "value",
   "type": "record",
   "fields" : [
     {
       "name" : "viewtime",
       "type" : "long"
     }, 
     {
       "name" : "userid",
       "type" : "string"
     }, 
     {
       "name" : "pageid",
       "type" : "string"
     }
   ]
}
"""

key_schema_str = """
{
   "namespace": "ksql",
   "name": "key",
   "type": "record",
   "fields" : [
     {
       "name" : "pageid",
       "type" : "string"
     }
   ]
}
"""

value_schema = avro.loads(value_schema_str)
key_schema = avro.loads(key_schema_str)
# Example records matching the schemas declared above
value = {"viewtime": 123, "userid": "user_1", "pageid": "page_1"}
key = {"pageid": "page_1"}


def delivery_report(err, msg):
    """ Called once for each message produced to indicate delivery result.
        Triggered by poll() or flush(). """
    if err is not None:
        print('Message delivery failed: {}'.format(err))
    else:
        print('Message delivered to {} [{}]'.format(msg.topic(), msg.partition()))


avroProducer = AvroProducer({
    'bootstrap.servers': 'mybroker,mybroker2',
    'on_delivery': delivery_report,
    'schema.registry.url': 'http://schema_registry_host:port'
    }, default_key_schema=key_schema, default_value_schema=value_schema)

avroProducer.produce(topic='my_topic', value=value, key=key)
avroProducer.flush()
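
On produce(), the AvroProducer serializes the key and value against the given schemas, registers them with Schema Registry, and prefixes each message with the magic byte and schema ID that the sink's AvroConverter expects. In this question's setup, topic would be 'pageviews' and schema.registry.url would point at the same registry used in the first answer.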