It seems like I often create a query-based Kafka Connect connector from JdbcConnectionSource, and the connector is created successfully with a status of "RUNNING", yet no task is created. In my container's console log I can't see anything that tells me what went wrong: no errors, no warnings, no explanation of why the task failed. I can get other connectors to work, but sometimes one won't.
How can I get more information to troubleshoot when a connector fails to create a RUNNING task?
I'll post an example of my connector config below.
I'm using Kafka Connect 5.4.1-ccs.
Connector config (it's an Oracle database behind the JDBC):
{
  "name": "FiscalYear",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": 1,
    "connection.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost.example.com)(PORT=1521))(LOAD_BALANCE=OFF)(FAILOVER=OFF)(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MY_DB_PRI)(UR=A)))",
    "connection.user": "myuser",
    "connection.password": "mypass",
    "mode": "timestamp",
    "timestamp.column.name": "MAINT_TS",
    "topic.prefix": "MyTeam.MyTopicName",
    "poll.interval.ms": 5000,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "numeric.mapping": "best_fit",
    "_comment": "The query is wrapped in `select * from ()` so that JdbcSourceConnector can automatically append a WHERE clause.",
    "query": "SELECT * FROM (SELECT fy_nbr, min(fy_strt_dt) fy_strt_dt, max(fy_end_dt) fy_end_dt FROM myuser.fsc_dt fd WHERE fd.fy_nbr >= 2020 and fd.fy_nbr < 2022 group by fy_nbr)/* outer query must have no WHERE clause so that the source connector can append one of its own */"
  }
}
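(To illustrate the "_comment" above: in timestamp mode the JDBC source connector appends its own predicate to the end of the query text. The exact SQL it generates may differ; this is only a sketch of why the outer query must not have its own trailing WHERE clause:)

SELECT * FROM (
  SELECT fy_nbr, min(fy_strt_dt) fy_strt_dt, max(fy_end_dt) fy_end_dt
  FROM myuser.fsc_dt fd
  WHERE fd.fy_nbr >= 2020 AND fd.fy_nbr < 2022
  GROUP BY fy_nbr
)
-- roughly what the connector appends (sketch); the bind variables are the
-- last-seen timestamp and the current time
WHERE "MAINT_TS" > ? AND "MAINT_TS" < ? ORDER BY "MAINT_TS" ASC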
And the Dockerfile that builds my worker:
FROM confluentinc/cp-kafka-connect:latest
# each "CONNECT_" env var refers to a Kafka Connect setting; e.g. CONNECT_REST_PORT refers to setting rest.port
# see also https://docs.confluent.io/current/connect/references/allconfigs.html
ENV CONNECT_BOOTSTRAP_SERVERS="d.mybroker.example.com:9092"
ENV CONNECT_REST_PORT="8083"
ENV CONNECT_GROUP_ID="MyGroup2"
ENV CONNECT_CONFIG_STORAGE_TOPIC="MyTeam.ConnectorConfig"
ENV CONNECT_OFFSET_STORAGE_TOPIC="MyTeam.ConnectorOffsets"
ENV CONNECT_STATUS_STORAGE_TOPIC="MyTeam.ConnectorStatus"
ENV CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_LOG4J_ROOT_LOGLEVEL="INFO"
COPY ojdbcDrivers /usr/share/java/kafka-connect-jdbc
(I'm also setting the REST advertised hostname environment variable via my Helm chart, which is why it isn't set above.)
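(For illustration only, since the real value is injected by the chart: the cp-kafka-connect image maps CONNECT_REST_ADVERTISED_HOST_NAME to the rest.advertised.host.name setting, so if it were set in the Dockerfile it would look like the other CONNECT_ variables above. The hostname here is a placeholder.)

# hypothetical equivalent of what the Helm chart injects; maps to rest.advertised.host.name
ENV CONNECT_REST_ADVERTISED_HOST_NAME="connect-worker.example.com"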
Once it's spun up, I create the connector and then fetch it from the REST "/status" endpoint:
{"name":"FiscalYear","connector":{"state":"RUNNING","worker_id":"10.1.2.3:8083"},"tasks":[],"type":"source"}
Answer (score: 2)
How can I get more information to troubleshoot when a connector fails to create a RUNNING task?
I would increase the logging level of your Kafka Connect worker. Since you're on Apache Kafka 2.4, you can do this dynamically, which is really useful. Make this REST API call to your Kafka Connect worker:
curl -X PUT http://localhost:8083/admin/loggers/io.confluent \
-H "Content-Type:application/json" -d '{"level": "TRACE"}'
This bumps all messages from any Confluent connector up to TRACE. It also returns a list of the individual loggers, from which you can pick out specific loggers and dial their log levels up or down as needed. For example:
curl -X PUT http://localhost:8083/admin/loggers/io.confluent.connect.jdbc.dialect.DatabaseDialects \
-H "Content-Type:application/json" -d '{"level": "INFO"}'
Reference: https://rmoff.net/2020/01/16/changing-the-logging-level-for-kafka-connect-dynamically/