All connectors go into FAILED state after deleting and re-creating any connector

Asked: 2019-08-30 16:47:35

Tags: apache-kafka apache-kafka-connect debezium

I have an SSL-enabled Kafka setup and register connectors on Kafka Connect with a POST request (shown below). On a fresh Connect setup with no existing connectors, registration works fine. However, once I delete any connector, after a while all connectors go into the FAILED state with a TimeoutException (trace below). If I stop Kafka Connect, delete all of its metadata topics from Kafka, and restart it (sketched after the request below), the problem goes away, but then I have to register all the connectors again. My guess is that the Kafka Connect metadata topics are not being updated properly, but I cannot pinpoint the problem or find a solution. Here is the POST request:

    curl -k -v -X POST -H "Accept:application/json" -H "Content-Type:application/json" https://kafka-connect.domain.com:9093/connectors/ -d '{
        "name": "TEST-CONNECTOR-TEST1131",
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "database.hostname": "test.domain.com",
            "database.port": "3306",
            "database.user": "debezium",
            "database.password": "test",
            "database.serverTimezone": "America/Los_Angeles",
            "database.server.id": "201908281131",
            "database.server.name": "TEST-CONNECTOR",
            "database.history.kafka.bootstrap.servers": "kafka1.domain.com:9094",
            "database.history.kafka.topic": "dbhistory.test_201908281131",
            "include.schema.changes": "true",
            "table.whitelist": "qwerdb.test1",
            "database.history.producer.sasl.mechanism": "PLAIN",
            "database.history.producer.security.protocol": "SASL_SSL",
            "database.history.producer.ssl.key.password": "test",
            "database.history.producer.ssl.keystore.location": "/opt/keystore.jks",
            "database.history.producer.ssl.keystore.password": "test",
            "database.history.producer.ssl.truststore.location": "/opt/truststore.jks",
            "database.history.producer.ssl.truststore.password": "test"
        }
    }'
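
The cleanup step mentioned above (stop the worker, delete its internal topics, restart) can be scripted with the stock kafka-topics.sh tool. The topic names below are the Kafka Connect defaults for config.storage.topic, offset.storage.topic, and status.storage.topic, and client-ssl.properties is a placeholder for the admin-client SASL_SSL settings; both are assumptions, since the poster's actual values aren't shown:

    # Assumed default internal topic names; substitute the worker's actual
    # config/offset/status storage topic names and admin-client SSL config.
    for topic in connect-configs connect-offsets connect-status; do
      kafka-topics.sh --bootstrap-server kafka1.domain.com:9094 \
        --command-config client-ssl.properties \
        --delete --topic "$topic"
    done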

Here is the exception trace:

"trace": "org.apache.kafka.connect.errors.ConnectException: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata\n
\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:273)\n
\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)\n
\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:198)\n
\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n
\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n
\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n
\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n
\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n
\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata\n"
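
For context, a trace in this JSON shape is what the Kafka Connect REST API reports for a failed task; each connector's state can be polled through the standard status endpoint, e.g. with the connector name registered above:

    curl -k https://kafka-connect.domain.com:9093/connectors/TEST-CONNECTOR-TEST1131/status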

1 Answer:

Answer 0 (score: 0):

This problem went away after I added the producer settings to Kafka Connect's connect-distributed.properties file; they had been missing from that file.
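
For reference, a minimal sketch of what such worker-level settings might look like: Kafka Connect applies keys with the `producer.` prefix in connect-distributed.properties to the producers it creates for source tasks. The exact keys and values below are assumptions that mirror the SASL_SSL settings already used for the connector's database-history producer in the POST request above, not the answerer's actual file:

    # Assumed additions to connect-distributed.properties; values mirror the
    # database.history.producer.* settings from the connector registration.
    producer.security.protocol=SASL_SSL
    producer.sasl.mechanism=PLAIN
    # JAAS credentials are placeholders; the original post does not show them.
    producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<user>" password="<password>";
    producer.ssl.keystore.location=/opt/keystore.jks
    producer.ssl.keystore.password=test
    producer.ssl.key.password=test
    producer.ssl.truststore.location=/opt/truststore.jks
    producer.ssl.truststore.password=test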