How do I tell the Debezium MySQL source connector to stop re-snapshotting existing tables into the Kafka topic?

Date: 2019-09-05 14:02:16

Tags: mysql apache-kafka apache-kafka-connect debezium

I am using the Debezium MySQL CDC source connector to move a database from MySQL into Kafka. The connector works fine, but the snapshotting behaves unexpectedly. The connector took its first snapshot successfully, then shut down after a few hours because of a heap memory limit (that part is not the problem). I paused the connector, stopped the workers on the cluster, fixed the issue, and started the workers again... The connector is running fine now, but it took the snapshot again! It looks like the connector did not resume from where it left off, and I suspect something is wrong with my configuration. I am using Debezium 0.9.5.

I changed snapshot.mode=initial to initial_only, but that did not help.
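For reference, a minimal sketch of how the snapshot mode can be changed on a running connector through the Kafka Connect REST API. It assumes the Connect worker listens on localhost:8083 and that the connector name matches the config below; adjust both for your environment. Note also that per the Debezium docs, initial_only takes a snapshot and then stops without reading the binlog, which may not be what you want for ongoing CDC.

import json
import urllib.request

# Assumptions: Connect REST API on localhost:8083, connector named "mysql-cdc-replication".
CONNECT_URL = "http://localhost:8083"
CONNECTOR = "mysql-cdc-replication"

def get_config(name):
    # Fetch the connector's current configuration map.
    with urllib.request.urlopen(f"{CONNECT_URL}/connectors/{name}/config") as resp:
        return json.load(resp)

def update_config(name, overrides):
    # PUT the full configuration back with the changed keys;
    # Kafka Connect replaces the config and restarts the tasks.
    config = get_config(name)
    config.update(overrides)
    req = urllib.request.Request(
        f"{CONNECT_URL}/connectors/{name}/config",
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(update_config(CONNECTOR, {"snapshot.mode": "initial"}))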

Connector properties:

{
  "properties": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.locking.mode": "minimal",
    "errors.log.include.messages": "false",
    "table.blacklist": "mydb.someTable",
    "include.schema.changes": "true",
    "database.jdbc.driver": "com.mysql.cj.jdbc.Driver",
    "database.history.kafka.recovery.poll.interval.ms": "100",
    "poll.interval.ms": "500",
    "heartbeat.topics.prefix": "__debezium-heartbeat",
    "binlog.buffer.size": "0",
    "errors.log.enable": "false",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "snapshot.fetch.size": "100000",
    "errors.retry.timeout": "0",
    "database.user": "kafka_readonly",
    "database.history.kafka.bootstrap.servers": "bootstrap:9092",
    "internal.database.history.ddl.filter": "DROP TEMPORARY TABLE IF EXISTS .+ /\\* generated by server \\*/,INSERT INTO mysql.rds_heartbeat2\\(.*\\) values \\(.*\\) ON DUPLICATE KEY UPDATE value \u003d .*,FLUSH RELAY LOGS.*,flush relay logs.*",
    "heartbeat.interval.ms": "0",
    "header.converter": "org.apache.kafka.connect.json.JsonConverter",
    "autoReconnect": "true",
    "inconsistent.schema.handling.mode": "fail",
    "enable.time.adjuster": "true",
    "gtid.new.channel.position": "latest",
    "ddl.parser.mode": "antlr",
    "database.password": "pw",
    "name": "mysql-cdc-replication",
    "errors.tolerance": "none",
    "database.history.store.only.monitored.tables.ddl": "false",
    "gtid.source.filter.dml.events": "true",
    "max.batch.size": "2048",
    "connect.keep.alive": "true",
    "database.history": "io.debezium.relational.history.KafkaDatabaseHistory",
    "snapshot.mode": "initial_only",
    "connect.timeout.ms": "30000",
    "max.queue.size": "8192",
    "tasks.max": "1",
    "database.history.kafka.topic": "history-topic",
    "snapshot.delay.ms": "0",
    "database.history.kafka.recovery.attempts": "100",
    "tombstones.on.delete": "true",
    "decimal.handling.mode": "double",
    "snapshot.new.tables": "parallel",
    "database.history.skip.unparseable.ddl": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "table.ignore.builtin": "true",
    "database.whitelist": "mydb",
    "bigint.unsigned.handling.mode": "long",
    "database.server.id": "6022",
    "event.deserialization.failure.handling.mode": "fail",
    "time.precision.mode": "adaptive_time_microseconds",
    "errors.retry.delay.max.ms": "60000",
    "database.server.name": "host",
    "database.port": "3306",
    "database.ssl.mode": "disabled",
    "database.serverTimezone": "UTC",
    "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
    "database.hostname": "host",
    "database.server.id.offset": "10000",
    "connect.keep.alive.interval.ms": "60000",
    "include.query": "false"
  }
}
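Whether the connector resumes from where it stopped depends on the source offsets the Connect worker committed to its offsets topic, not on the connector configuration alone; with snapshot.mode=initial the snapshot only re-runs when no committed offset is found. Below is a hedged sketch for inspecting those offsets, assuming the kafka-python client is installed and the worker's offset.storage.topic is named connect-offsets (a placeholder; use the value from your worker properties):

from kafka import KafkaConsumer

# Assumptions: broker at bootstrap:9092 (as in the config above) and the
# offsets topic is "connect-offsets"; both may differ in your deployment.
consumer = KafkaConsumer(
    "connect-offsets",
    bootstrap_servers="bootstrap:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)

# Each record key identifies the connector; the value holds the last
# committed source position (binlog file/position or GTID set for MySQL).
for record in consumer:
    key = record.key.decode("utf-8") if record.key else None
    value = record.value.decode("utf-8") if record.value else None
    if key and "mysql-cdc-replication" in key:
        print(key, "->", value)

If no entry shows up for the connector, the worker never committed a binlog position for it, which would explain why the snapshot starts over after a restart.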

1 answer:

Answer 0 (score: 0)

I can confirm Gunnar's answer above. I ran into some issues during the snapshot as well and had to restart the whole snapshot process. At the moment the connector does not support resuming a snapshot from a specific point. Your configuration looks fine to me. Hope this helps.