Kafka v2.5
Confluent JDBC sink connector: v5.5
Workflow:
When a SQL 'DELETE' query is executed in the SQL Server database, a tombstone record is produced in Topic1 and the corresponding row is deleted from the MySQL database.
When a tombstone record is produced to Topic1 by the console producer, it results in the error below.
Why does the error occur when the message created by the Debezium connector and the one created by the console producer are exactly the same?
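For reference, a tombstone here means a record whose key identifies the row and whose value is a true null (not the literal string "null" or an empty string). Below is a minimal sketch of producing such a record with the plain Java client; the broker address is a placeholder, and the key simply copies the envelope from the tombstone record shown further down.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TombstoneProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Key mirrors the schema/payload envelope of the tombstone record below.
        String key =
              "{\"schema\": {\"type\": \"struct\", \"fields\": ["
            + "{\"type\": \"int32\", \"optional\": false, \"field\": \"ID\"}], "
            + "\"optional\": false, \"name\": \"ariel2_39.dbo.Holidays.Key\"}, "
            + "\"payload\": {\"ID\": 112}}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null value (not the string "null") is what Kafka treats as a tombstone.
            producer.send(new ProducerRecord<String, String>("Topic1", key, (String) null));
            producer.flush();
        }
    }
}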
Error:
[2020-05-19 15:14:48,158] ERROR WorkerSinkTask{id=arielai-mysql-sink-arielai-dev-39-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:488)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
    at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:359)
    at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:86)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$2(WorkerSinkTask.java:488)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    ... 13 more
[2020-05-19 15:14:48,162] ERROR WorkerSinkTask{id=arielai-mysql-sink-arielai-dev-39-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
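The DataException is raised by the JsonConverter configured for this sink: with schemas.enable=true it only accepts JSON wrapped in the schema/payload envelope. For context, the converter settings the message refers to are worker- or connector-level properties of roughly this shape (illustrative values, not the actual configuration):

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true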
Tombstone record:
{"schema": {"type": "struct", "fields": [{"type": "int32", "optional": false, "field": "ID"}], "optional": false, "name": "ariel2_39.dbo.Holidays.Key"}, "payload": {"ID": 112}}:null
Sink connector configuration:
Full report on GitHub: https://github.com/confluentinc/kafka-connect-jdbc/issues/859
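The actual connector configuration is only available in the GitHub issue above. Purely as an illustration of the settings involved, a JDBC sink that turns tombstones into row deletes is typically configured along these lines (every name, URL, and value below is hypothetical, not taken from the report):

{
  "name": "mysql-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "Topic1",
    "connection.url": "jdbc:mysql://mysql:3306/db",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "ID",
    "delete.enabled": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}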