Unable to sink a KSQL table to Postgres
I created a KSQL table from a stream with some aggregations (the source topic is Avro). I can see the data with SELECT, and I can sink the source topic directly into Postgres, but I cannot sink the KSQL table to Postgres. How do I specify the value.converter?
I created the KSQL table as follows:
CREATE TABLE some_table AS SELECT customer_name, COUNT(*) as cnt FROM some_stream GROUP BY customer_name;
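For reference, the value format of the table's backing topic can be inspected from KSQL, and I believe the output format can also be pinned explicitly in the CTAS statement (a sketch; I have not changed my actual statement):
-- DESCRIBE EXTENDED prints the backing Kafka topic and its value format
DESCRIBE EXTENDED some_table;
-- Alternatively, force Avro output explicitly when creating the table
CREATE TABLE some_table WITH (VALUE_FORMAT='AVRO') AS
SELECT customer_name, COUNT(*) AS cnt
FROM some_stream
GROUP BY customer_name;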
I tried a connector configuration like this:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
key.converter.schemas.enable=false
value.converter.schema.registry.url=http://localhost:8081
auto.evolve=true
tasks.max=1
topics=some_table
auto.create=true
value.converter=io.confluent.connect.avro.AvroConverter
connection.url=jdbc:postgresql://localhost:5432/mydb?user=postgres&password=postgres
key.converter=org.apache.kafka.connect.storage.StringConverter
The error is:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic some_topic to Avro:
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:107)
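To see which record actually fails conversion, I could presumably add the standard Connect error-handling options (KIP-298, available since Kafka 2.0) to the sink config. A sketch, which only changes what gets logged, not the outcome:
# Log the cause and the failing message for each conversion error
errors.log.enable=true
errors.log.include.messages=true
# errors.tolerance=all would skip bad records instead of failing the task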
I also tried:
{
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"key.converter.schemas.enable": "false",
"auto.evolve": "true",
"tasks.max": "1",
"topics": "some_topic",
"value.converter.schemas.enable": "false",
"auto.create": "true",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connection.url": "jdbc:postgresql://localhost:5432/mydb?user=postgres&password=postgres",
"key.converter": "org.apache.kafka.connect.storage.StringConverter"
}
The error is:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
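If I understand correctly, the JsonConverter could not have worked here anyway: the topic holds Avro bytes, and the JDBC sink needs a schema, which with JsonConverter means schemas.enable=true and an envelope like the following in each message (a sketch; the field name is assumed from my query):
{
  "schema": {
    "type": "struct",
    "fields": [ { "field": "CNT", "type": "int64", "optional": true } ]
  },
  "payload": { "CNT": 42 }
}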
So how do I sink a KSQL table to Postgres with the JdbcSinkConnector?
Answer 0 (score: 0)
Check that the Schema Registry is running and that the correct URL is set in value.converter.schema.registry.url.
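For example, a quick way to confirm the registry is reachable from the Connect worker and that a value schema was registered for the topic (the subject name assumes the default TopicNameStrategy, i.e. a <topic>-value subject):
curl http://localhost:8081/subjects
# expect the table's -value subject in the returned list
A minimal sink config along those lines might look like this (a sketch; hostnames, database and topic names are taken from the question):
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=some_table
connection.url=jdbc:postgresql://localhost:5432/mydb?user=postgres&password=postgres
auto.create=true
auto.evolve=true
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
# The URL must be the registry as seen from the Connect worker; if Connect
# runs in Docker, localhost resolves to the container, not the host
value.converter.schema.registry.url=http://localhost:8081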