I recently updated my Debezium image from 1.4 to 1.5. However, now when I register my connectors, the sink connector just seems to "die" and produces no output at all.
Everything worked fine on 1.4, but my project needs some new features, so I had to upgrade to 1.5.
Here is the Dockerfile for my Debezium Connect image:
FROM debezium/connect:1.5
ENV KAFKA_CONNECT_JDBC_DIR=$KAFKA_CONNECT_PLUGINS_DIR/kafka-connect-jdbc \
KAFKA_CONNECT_ES_DIR=$KAFKA_CONNECT_PLUGINS_DIR/kafka-connect-elasticsearch
ARG POSTGRES_VERSION=42.2.20
ARG KAFKA_JDBC_VERSION=10.0.0
ARG KAFKA_ELASTICSEARCH_VERSION=10.0.0
# Deploy PostgreSQL JDBC Driver
RUN cd /kafka/libs && curl -sO https://jdbc.postgresql.org/download/postgresql-$POSTGRES_VERSION.jar
# Deploy Kafka Connect JDBC
RUN mkdir $KAFKA_CONNECT_JDBC_DIR && cd $KAFKA_CONNECT_JDBC_DIR &&\
curl -sO https://packages.confluent.io/maven/io/confluent/kafka-connect-jdbc/$KAFKA_JDBC_VERSION/kafka-connect-jdbc-$KAFKA_JDBC_VERSION.jar
# Deploy Confluent Elasticsearch sink connector
RUN mkdir $KAFKA_CONNECT_ES_DIR && cd $KAFKA_CONNECT_ES_DIR &&\
curl -sO https://packages.confluent.io/maven/io/confluent/kafka-connect-elasticsearch/$KAFKA_ELASTICSEARCH_VERSION/kafka-connect-elasticsearch-$KAFKA_ELASTICSEARCH_VERSION.jar && \
curl -sO https://repo1.maven.org/maven2/io/searchbox/jest/6.3.1/jest-6.3.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore-nio/4.4.4/httpcore-nio-4.4.4.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.5.1/httpclient-4.5.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpasyncclient/4.1.1/httpasyncclient-4.1.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar && \
curl -sO https://repo1.maven.org/maven2/commons-logging/commons-logging/1.2/commons-logging-1.2.jar && \
curl -sO https://repo1.maven.org/maven2/commons-codec/commons-codec/1.9/commons-codec-1.9.jar && \
curl -sO https://repo1.maven.org/maven2/io/searchbox/jest-common/6.3.1/jest-common-6.3.1.jar && \
curl -sO https://repo1.maven.org/maven2/com/google/code/gson/gson/2.8.6/gson-2.8.6.jar && \
curl -sO https://repo1.maven.org/maven2/com/google/guava/guava/20.0/guava-20.0.jar
My Elasticsearch sink connector:
{
  "name": "es-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "report",
    "connection.url": "http://elasticsearch:9200",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": "false",
    "transforms.unwrap.drop.deletes": "false",
    "key.ignore": "false",
    "type.name": "_doc",
    "behavior.on.null.values": "delete",
    "transforms": "ExtractKey",
    "transforms.ExtractKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.ExtractKey.field": "id"
  }
}
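For context on the sink side: since key.ignore is false, the Elasticsearch document id is taken from the record key, and the ExtractField$Key transform replaces the whole key struct with a single field. A minimal Python sketch of that behavior (the sample key value below is hypothetical, not an actual record from my setup):

```python
# Sketch of what ExtractField$Key does to a record key.
# The sample key is hypothetical; Debezium emits the primary-key
# columns of the captured table as a struct-like key.

def extract_field(key: dict, field: str):
    """Mimic org.apache.kafka.connect.transforms.ExtractField$Key:
    replace the whole key with the value of one field."""
    return key[field]

original_key = {"id": 42}                    # key for a row in repo.report
document_id = extract_field(original_key, "id")
print(document_id)                           # the sink uses this as the ES _id
# -> 42
```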
Finally, the docker-compose configuration for my Debezium Connect service:
connect:
  build: ./debezium-jdbc-es
  ports:
    - "8083:8083"
    - "5005:5005"
  depends_on:
    - kafka
    - elasticsearch
  environment:
    - BOOTSTRAP_SERVERS=kafka:9092
    - GROUP_ID=1
    - CONFIG_STORAGE_TOPIC=my_connect_configs
    - OFFSET_STORAGE_TOPIC=my_connect_offsets
    - STATUS_STORAGE_TOPIC=my_source_connect_statuses
    - host.docker.internal= host.docker.internal
Any ideas on how to fix this, or why it might not be working, are appreciated!
Thanks!
Update:
After digging deeper, the problem appears to be that my Postgres connector is not finding my table, rather than anything in the Elasticsearch sink connector.
Debezium 1.4:
Snapshot step 3 - Locking captured tables [repo.report] [io.debezium.relational.RelationalSnapshotChangeEventSource]
Debezium 1.5:
Snapshot step 3 - Locking captured tables [] [io.debezium.relational.RelationalSnapshotChangeEventSource]
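For anyone comparing the two log lines: Debezium evaluates table.include.list as a comma-separated list of anchored regular expressions that must match the whole "schema.table" identifier, and when nothing matches, the captured-table set is empty, which is exactly the [] printed by 1.5. A small Python sketch of that matching rule (my own illustration, not Debezium's actual code):

```python
import re

def captured_tables(include_list, all_tables):
    """Sketch of Debezium-style include-list filtering: each entry is an
    anchored regex matched against the full "schema.table" identifier."""
    patterns = [p.strip() for p in include_list.split(",")]
    return [t for t in all_tables
            if any(re.fullmatch(p, t) for p in patterns)]

tables = ["repo.report", "public.report"]
print(captured_tables("repo.report", tables))  # ['repo.report']
# An entry that does not fully match captures nothing,
# producing "Locking captured tables []" in the snapshot log:
print(captured_tables("report", tables))       # []
```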
Here is my Postgres connector:
{
  "name": "reporting-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "host.docker.internal",
    "database.port": "5432",
    "database.user": "debezium_user",
    "database.password": "postgres",
    "database.server.id": "184054",
    "database.dbname": "reportdatabase",
    "database.server.name": "reporting",
    "plugin.name": "pgoutput",
    "database.include.list": "reportdatabase",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "reporting",
    "schema.include.list": "repo",
    "table.include.list": "repo.report",
    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$3"
  }
}
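For completeness, the route transform above is why the sink subscribes to the plain topic report: RegexRouter rewrites the source topic server.schema.table down to just the table name whenever the regex matches the entire topic. A quick Python sketch of that rewrite (Connect's "$3" becomes "\3" in Python regex syntax):

```python
import re

# Sketch of the RegexRouter transform configured above: the pattern
# captures server.schema.table and keeps only the third group (the table).
pattern = r"([^.]+)\.([^.]+)\.([^.]+)"
replacement = r"\3"  # Connect's "$3" in Python's re syntax

source_topic = "reporting.repo.report"  # server.schema.table from the source connector
# RegexRouter only routes when the regex matches the whole topic name:
if re.fullmatch(pattern, source_topic):
    routed = re.sub(pattern, replacement, source_topic)
else:
    routed = source_topic
print(routed)  # -> report
```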