Connector configuration contains no connector type

Asked: 2018-04-16 11:53:49

Tags: apache-kafka apache-kafka-connect confluent

I'm trying to use the JDBC Connector to connect to a PostgreSQL database on my cluster (the database is not managed directly by the cluster).

I invoked Kafka Connect with the following command:

connect-standalone.sh worker.properties jdbc-connector.properties

This is the content of the worker.properties file:

class=io.confluent.connect.jdbc.JdbcSourceConnector
name=test-postgres-1
tasks.max=1

internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.file.filename=/home/user/offest
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter

connection.url=jdbc:postgresql://database-server.url:port/database?user=user&password=password

This is the content of jdbc-connector.properties:

mode=incrementing
incrementing.column.name=id
topic.prefix=test-postgres-jdbc-

When I try to start the connector with the command above, it crashes with the following error:

[2018-04-16 11:39:08,164] ERROR Failed to create job for jdbc.properties (org.apache.kafka.connect.cli.ConnectStandalone:88)
[2018-04-16 11:39:08,166] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:99)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {mode=incrementing, incrementing.column.name=pdv, topic.prefix=test-postgres-jdbc-} contains no connector type
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:80)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:67)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:96)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {mode=incrementing, incrementing.column.name=id, topic.prefix=test-postgres-jdbc-} contains no connector type
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:233)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:158)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:93)

After noticing that the connector config shown in the error only contained the settings from jdbc-connector.properties, I tried merging the two files into one, but then the command terminated immediately (without creating a topic or an offset file) with the following output:

[SLF4J infos...]
[2018-04-16 11:48:54,620] INFO Usage: ConnectStandalone worker.properties connector1.properties [connector2.properties ...] (org.apache.kafka.connect.cli.ConnectStandalone:59)

1 Answer:

Answer 0 (score: 1)

You need most of those properties in jdbc-connector.properties rather than worker.properties. See https://docs.confluent.io/current/connect/connect-jdbc/docs/source_config_options.html for the full list of configuration options that belong in the connector configuration (jdbc-connector.properties in your example). Note also that the connector class must be given under the key connector.class, not class.

Try this:

  • worker.properties

    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter.schemas.enable=false
    
    offset.storage.file.filename=/home/user/offest
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter=org.apache.kafka.connect.json.JsonConverter
    
  • jdbc-connector.properties

    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    name=test-postgres-1
    tasks.max=1
    
    mode=incrementing
    incrementing.column.name=id
    topic.prefix=test-postgres-jdbc-
    
    connection.url=jdbc:postgresql://database-server.url:port/database?user=user&password=password
    
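The "contains no connector type" error means the connector properties file passed to connect-standalone.sh lacks the connector.class key that Kafka Connect uses to decide which connector to instantiate. As a minimal sketch of a pre-flight check (the helper names and the required-key list are my own, not part of Kafka's tooling):

```python
# Sanity-check a Kafka Connect connector .properties file before launching.
# Kafka Connect rejects a connector config without "connector.class" with
# "Connector config {...} contains no connector type".

def parse_properties(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def missing_connector_keys(props):
    """Return the required connector keys that are absent."""
    required = ("connector.class", "name")
    return [k for k in required if k not in props]

if __name__ == "__main__":
    # The questioner's jdbc-connector.properties, as posted:
    broken = ("mode=incrementing\n"
              "incrementing.column.name=id\n"
              "topic.prefix=test-postgres-jdbc-\n")
    print(missing_connector_keys(parse_properties(broken)))
    # → ['connector.class', 'name']
```

With the split suggested above, both keys end up in jdbc-connector.properties and the check returns an empty list.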

You can see more Kafka Connect examples here: