Unable to stream data from MySQL to Postgres using Kafka

Time: 2020-07-16 15:03:35

Tags: postgresql jdbc apache-kafka

This is my first attempt at using Kafka, and I have set up a Kafka cluster with AWS MSK. The goal is to stream data from a MySQL server to PostgreSQL. I am using the Debezium MySQL connector as the source and the Confluent JDBC connector as the sink.

MySQL connector configuration:

  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.server.id": "1",
  "tasks.max": "3",
  "internal.key.converter.schemas.enable": "false",
  "transforms.unwrap.add.source.fields": "ts_ms",
  "key.converter.schemas.enable": "false",
  "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "internal.value.converter.schemas.enable": "false",
  "value.converter.schemas.enable": "false",
  "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"

After registering the MySQL connector, its status is RUNNING, it captures changes made to the MySQL table, and the results appear in the consumer console in the following format:

{"id":5,"created_at":1594910329000,"userid":"asldnl3r234mvnkk","amount":"B6Eg","wallet_type":"CDW"}

My first question: the amount column in the table is of type decimal and contains numeric values, so why does it show up as an alphanumeric value in the consumer console?
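
One common explanation is Debezium's default decimal handling ("decimal.handling.mode": "precise"), under which DECIMAL columns are serialized as Base64-encoded unscaled bytes, which is why the value looks alphanumeric in plain JSON. A minimal sketch of the source-connector setting that usually changes this (either value works; this is illustrative, not part of the original configuration):

  "decimal.handling.mode": "double"

or

  "decimal.handling.mode": "string"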

For PostgreSQL as the target database, I used the JDBC sink connector with the following configuration:

"name": "postgres-connector-db08",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "tasks.max": "1",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "key.converter.schemas.enable": "false",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false",
  "topics": "mysql-cash.kafka_test.test",
  "connection.url": "jdbc:postgresql://xxxxxx:5432/test?currentSchema=public",
  "connection.user": "xxxxxx",
  "connection.password": "xxxxxx",
  "insert.mode": "upsert",
  "auto.create": "true",
  "auto.evolve": "true"

After registering the JDBC connector, checking its status gives this error:

{"name":"postgres-connector-db08","connector":{"state":"RUNNING","worker_id":"x.x.x.x:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"x.x.x.x:8083","trace":"org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
 org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
 org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
 org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
 org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
 org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
 org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 java.util.concurrent.FutureTask.run(FutureTask.java:266)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'postgres-connector-db08' is configured with 'delete.enabled=false' and 'pk.mode=none' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='mysql-cash.kafka_test.test',partition=0,offset=0,timestamp=1594909233389) with a HashMap value and null value schema.
 io.confluent.connect.jdbc.sink.RecordValidator.lambda$requiresValue$2(RecordValidator.java:83)
 io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:82)
 io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
 io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
 org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
... 10 more
"}],"type":"sink"}

Why does this error occur? Is there something I missed in the sink configuration?

1 Answer:

Answer 0 (score: 0)

https://docs.confluent.io/kafka-connect-jdbc/current/sink-connector/index.html#data-mapping

The sink connector requires knowledge of schemas, so you should use a suitable converter e.g. the Avro converter that comes with Schema Registry, or the JSON converter with schemas enabled.

Since the JSON is plain (schema-less) and the connector is configured with "value.converter.schemas.enable": "false" (the JSON converter with schemas disabled), an Avro converter backed by Schema Registry should be used instead: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#applying-schema
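
In practice that means the value converter has to embed or reference a schema, and it must do so on both the source and the sink side. A minimal sketch of the two usual options (the Schema Registry URL is a placeholder):

JSON converter with embedded schemas:

  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true"

Avro converter with Schema Registry:

  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://xxxxxx:8081"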