Cannot see MongoDB Kafka messages on the topic

Date: 2020-12-30 13:00:18

Tags: mongodb apache-kafka apache-kafka-connect mongodb-kafka-connector

My problem is that although everything is up and running, my topic is not registering the events that happen in my MongoDB.

Every time I insert/modify a record, I no longer get any output from the kafka-console-consumer command.

Is there a way to clear Kafka's cache/offsets? The source and sink connections are up and running, and the whole cluster is healthy as well. The thing is, everything used to work as usual, but every few weeks I see this come back, or when I log in to my Mongo Cloud from another location.

The --partition 0 parameter did not help, and neither did changing retention.ms to 1.
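For reference, those two attempts look roughly like this on the command line; the topic name monit.people and the broker address are illustrative assumptions, not taken from the post:

    # Topic name and broker address are hypothetical -- adjust to your setup.
    # Shrink the retention window so old segments are deleted quickly:
    kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type topics --entity-name monit.people \
      --alter --add-config retention.ms=1

    # Re-read a single partition from the beginning:
    kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic monit.people --partition 0 --from-beginning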


I checked the status of both connectors and both report RUNNING:

curl localhost:8083/connectors | jq

curl localhost:8083/connectors/monit_people/status | jq
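For context, a healthy response from the Connect status endpoint looks roughly like the following; the worker address shown is illustrative:

    $ curl localhost:8083/connectors/monit_people/status | jq
    {
      "name": "monit_people",
      "connector": { "state": "RUNNING", "worker_id": "connect:8083" },
      "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "connect:8083" } ],
      "type": "source"
    }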

Running docker-compose logs connect, I found:

    WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286

    If the resume token is no longer available then there is the potential for data loss.
    Saved resume tokens are managed by Kafka and stored with the offset data.

    When running Connect in standalone mode offsets are configured using the:
    `offset.storage.file.filename` configuration.
    When running Connect in distributed mode the offsets are stored in a topic.

    Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.

    Resetting the offset will allow for the connector to be resumed from the latest resume token.
    Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
    Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken`.
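Following the log's own suggestion, an offset reset would look roughly like this; the consumer group and topic names are hypothetical (Connect sink connectors conventionally use a connect-&lt;name&gt; group), and without --execute the tool only previews the change:

    # List consumer groups to find the one belonging to the connector:
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

    # Reset that group's offsets to the latest position (hypothetical names):
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group connect-monit_people --topic monit.people \
      --reset-offsets --to-latest --execute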

1 answer:

Answer 0 (score: 1):

The issue needs more Confluent Platform practice on my side, so for now I fixed it by removing all the containers and rebuilding the whole environment, stopping the Confluent containers first and then pruning everything:

docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")

docker system prune -a -f --volumes

After running docker-compose up -d again, everything works fine.
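A note on why this works: pruning the volumes also wipes Connect's internal offsets topic, so the stale resume token is discarded. If the existing collection data should be re-published after the rebuild, the source connector's copy.existing option (mentioned in the warning above) can be enabled; a minimal sketch, assuming the connector is named monit_people, with the connection URI, database, and collection as placeholders:

    # Hypothetical config update via the Connect REST API:
    curl -X PUT localhost:8083/connectors/monit_people/config \
      -H "Content-Type: application/json" \
      -d '{
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb+srv://<user>:<password>@<cluster>/",
        "database": "monit",
        "collection": "people",
        "copy.existing": "true"
      }'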