Unable to resume the Kafka MongoDB source connector

Asked: 2020-07-10 09:24:47

Tags: mongodb apache-kafka apache-kafka-connect confluent-platform mongodb-kafka-connector

I am using the Kafka MongoDB source connector [https://www.confluent.io/hub/mongodb/kafka-connect-mongodb] with Confluent Platform v5.4.1 and a MongoDB v3.6 replica set. The connector had been deleted, and when I recreated it a month later, the following error appeared:
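For context, the recreated connector was registered with a configuration roughly like the following (the connector name and connection URI here are placeholders, not my real values; the topic name in the log below suggests the default topic naming of `<database>.<collection>`):

```json
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://40.118.122.226:27017",
    "database": "myDBName",
    "collection": "myCollectionName"
  }
}
```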

com.mongodb.MongoQueryException: Query failed with error code 280 and error message 'resume of change stream was not possible, as the resume token was not found. {_data: BinData(0, "825F06E90400000004463C5F6964003C38316266623663632D326638612D343530662D396534652D31393936336362376130386500005A1004A486EE3E58984454ADD5BF58F364361E04")}' on server 40.118.122.226:27017
        at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:29)
        at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:267)
        at com.mongodb.operation.QueryBatchCursor.tryHasNext(QueryBatchCursor.java:216)
        at com.mongodb.operation.QueryBatchCursor.tryNext(QueryBatchCursor.java:200)
        at com.mongodb.operation.ChangeStreamBatchCursor$3.apply(ChangeStreamBatchCursor.java:86)
        at com.mongodb.operation.ChangeStreamBatchCursor$3.apply(ChangeStreamBatchCursor.java:83)
        at com.mongodb.operation.ChangeStreamBatchCursor.resumeableOperation(ChangeStreamBatchCursor.java:166)
        at com.mongodb.operation.ChangeStreamBatchCursor.tryNext(ChangeStreamBatchCursor.java:83)
        at com.mongodb.client.internal.MongoChangeStreamCursorImpl.tryNext(MongoChangeStreamCursorImpl.java:78)
        at com.mongodb.kafka.connect.source.MongoSourceTask.getNextDocument(MongoSourceTask.java:338)
        at com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:155)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2020-07-09 09:53:09,353] INFO Watching for collection changes on '<myDBName.myCollectionName>' (com.mongodb.kafka.connect.source.MongoSourceTask:374)

After looking into the cause of this error, I understand that the resume token can no longer be found in the oplog: the oplog is a capped collection with a fixed size, so it discards its oldest entries over time. I also understand that to make this less likely to happen, I should increase the oplog size, and so on. What I would like to know is whether the problem can instead be fixed from the Kafka / Confluent Platform side. For example, could I delete the Kafka topic 'myDBName.myCollectionName' (I am creating a KSQL stream from this topic) together with the data associated with it, or do something in Kafka Connect, so that the MongoDB source connector starts capturing changes from the MongoDB collection again from the current time onward?
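To make the last idea concrete: as far as I understand, Kafka Connect stores each source connector's offset (which for this connector contains the MongoDB resume token) in its internal offsets topic, keyed by the connector name. So one option I am considering is recreating the connector under a new name, so that no stored offset, and therefore no stale resume token, is found, optionally with `copy.existing` enabled so the collection is snapshotted again before streaming new changes. A sketch of what I mean (untested; the name and URI are placeholders):

```json
{
  "name": "mongo-source-v2",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://40.118.122.226:27017",
    "database": "myDBName",
    "collection": "myCollectionName",
    "copy.existing": "true"
  }
}
```

Would this (or deleting the topic, or tombstoning the stored offset) be a reasonable way to recover, or is there a better supported approach?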

0 Answers:

There are no answers yet.