Debezium error: ConnectException: Data row is smaller than a column index

Asked: 2019-08-16 21:07:12

Tags: sql-server apache-kafka apache-kafka-connect debezium

This error now occurs on all of my Debezium connectors (against SQL Server). I tried clearing some Kafka Connect topics to purge the metadata, but no luck. Any help would be appreciated... Is there metadata on the SQL Server side that I need to reset?

  

Error: Data row is smaller than a column index, internal schema representation is probably out of sync with real database schema

[2019-08-16 20:13:14,745] ERROR Error requesting a row value, row: 8, requested index: 8 at position 8 (io.debezium.relational.TableSchemaBuilder)
[2019-08-16 20:13:14,746] ERROR Producer failure (io.debezium.pipeline.ErrorHandler)
org.apache.kafka.connect.errors.ConnectException: Data row is smaller than a column index, internal schema representation is probably out of sync with real database schema
        at io.debezium.relational.TableSchemaBuilder.validateIncomingRowToInternalMetadata(TableSchemaBuilder.java:209)
        at io.debezium.relational.TableSchemaBuilder.lambda$createValueGenerator$2(TableSchemaBuilder.java:235)
        at io.debezium.relational.TableSchema.valueFromColumnData(TableSchema.java:145)

2 Answers:

Answer 0 (score: 0)

I believe your source table schema does not match the change table schema in CDC. Have you updated the table schema recently? Did you follow https://debezium.io/docs/connectors/sqlserver/#schema-evolution? If not, you should probably recover from this situation by removing the connector offsets from the offsets topic and clearing the topic configured in database.history.kafka.topic. And don't forget that this topic must be unique per connector!
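The recovery steps above can be sketched as shell commands. This is a sketch, not Debezium's documented procedure: the connector name (`my-connector`), Connect offsets topic (`my_connect_offsets`), history topic (`dbhistory.mydb`), server name (`mydbserver`), and broker/REST addresses are all hypothetical placeholders; substitute the values from your own deployment, and inspect the offsets topic first so the tombstone key matches exactly what Connect stored.

```shell
# 1. Stop the connector so it does not keep writing stale state
curl -X DELETE http://localhost:8083/connectors/my-connector

# 2. Find the key Connect stored for this connector in the offsets topic
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my_connect_offsets --from-beginning \
  --property print.key=true

# 3. Produce a tombstone (null value) for that key to drop the offsets;
#    kafkacat's -Z flag turns the empty value after '|' into a NULL
echo '["my-connector",{"server":"mydbserver"}]|' | \
  kafkacat -b localhost:9092 -t my_connect_offsets -P -Z -K '|' -p 0

# 4. Clear the topic configured in database.history.kafka.topic
kafka-topics.sh --bootstrap-server localhost:9092 \
  --delete --topic dbhistory.mydb

# 5. Re-register the connector; Debezium will rebuild its schema history
```

These commands target a live cluster, so run them against a test environment first; older Kafka distributions may require `--zookeeper` instead of `--bootstrap-server` for the topic tools.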

J.

Answer 1 (score: 0)

Thanks @jiri-pechanec for your reply.

I tried your solution and now get the error below... I think I have found the source of the problem and am trying to get more information from the DBA. It looks to me like an index was added to the table and this error appeared, so Debezium fired 2 events that were incompatible with each other.

However, when I deleted the messages as you suggested, it still did not work, for the following reason.

[2019-08-19 17:19:50,147] ERROR WorkerSourceTask{id=prm-flags-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
java.lang.IllegalStateException: The database history couldn't be recovered. Consider to increase the value for database.history.kafka.recovery.poll.interval.ms
        at io.debezium.relational.history.KafkaDatabaseHistory.recoverRecords(KafkaDatabaseHistory.java:224)
        at io.debezium.relational.history.AbstractDatabaseHistory.recover(AbstractDatabaseHistory.java:79)
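The exception itself suggests raising the history-recovery poll interval. In the connector's registration JSON that property would look roughly like the fragment below; the property name comes from the error message, but the surrounding connector config (name, class, topic) and the 10-second value are illustrative assumptions, not values from this thread.

```json
{
  "name": "prm-flags-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "prm-ist-metadata",
    "database.history.kafka.recovery.poll.interval.ms": "10000"
  }
}
```

Note this only helps when the history topic is intact but slow to read; if the topic was truncated past the schema records the connector needs, recovery will still fail.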

There are 2 ways to purge a topic: lower its retention to 0 and then set it back to 7 days, or run this script:

./bin/kafka-delete-records.sh --bootstrap-server localhost:9092 --offset-json-file ./offsetfile.json

{"partitions": [{"topic": "prm-ist-metadata", "partition": 0, "offset": 3}], "version": 1}
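For completeness, the retention-based method mentioned above can be sketched like this. The topic name is taken from the offset file; the 1-second temporary retention is an illustrative choice, and depending on your Kafka version you may need `--zookeeper` instead of `--bootstrap-server` for `kafka-configs.sh`.

```shell
# Temporarily shrink retention so the broker deletes old log segments
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name prm-ist-metadata \
  --add-config retention.ms=1000

# Wait for the log cleanup to run (it is periodic, not immediate),
# then restore the usual 7-day retention (7 * 24 * 3600 * 1000 ms)
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name prm-ist-metadata \
  --add-config retention.ms=604800000
```

Be aware that purging a Debezium history topic this way destroys the schema history the connector needs, which is exactly what leads to the `database history couldn't be recovered` error above; afterwards the connector typically has to be re-snapshotted.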

I ran the kafka-delete-records script.

The index that broke this:

CREATE CLUSTERED INDEX [CIX_PRM_HIST_FLAGS] ON [PRM_HIST].[FLAGS]
(
    [isb_IDENTITY_NUMBER] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO