We configured the Kafka Connect JDBC source connector to read data from DB2 and publish it to a Kafka topic, using one of the TIMESTAMP columns as timestamp.column.name. However, Kafka Connect never publishes any data to the topic. No new rows have arrived since the connector was set up, but the DB2 table already holds a huge amount of existing data, so at least that should have been published to the Kafka topic — and it isn't. Below is my source connector config:
{
  "name": "next-error-msg",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "DB_DATA_SOURCE_URL",
    "connection.user": "DB_DATA_SOURCE_USERNAME",
    "connection.password": "DB_DATA_SOURCE_PASSWORD",
    "schema.pattern": "DB_DATA_SCHEMA_PATTERN",
    "mode": "timestamp",
    "query": "SELECT SEQ_I AS error_id, SEND_I AS scac, to_char(CREATE_TS,'YYYY-MM-DD-HH24.MI.SS.FF6') AS create_timestamp, CREATE_TS, MSG_T AS error_message FROM DB_ERROR_MEG",
    "timestamp.column.name": "CREATE_TS",
    "validate.non.null": false,
    "topic.prefix": "DB_ERROR_MSG_TOPIC_NAME"
  }
}
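
From what I understand of the Confluent JDBC source connector docs, in timestamp mode the connector appends its own timestamp criteria to a custom query, so the statement it actually runs against DB2 should look roughly like this (my reconstruction based on the documented behavior, not a captured trace; exact quoting and bounds may differ by connector version):

    SELECT SEQ_I AS error_id,
           SEND_I AS scac,
           to_char(CREATE_TS,'YYYY-MM-DD-HH24.MI.SS.FF6') AS create_timestamp,
           CREATE_TS,
           MSG_T AS error_message
    FROM DB_ERROR_MEG
    WHERE CREATE_TS > ?   -- last committed offset (should start low on a fresh connector)
      AND CREATE_TS < ?   -- current time, less timestamp.delay.interval.ms
    ORDER BY CREATE_TS ASC

If that is the query shape, then no rows coming through would mean every CREATE_TS value falls outside that window, which is part of why the behavior confuses me.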
My question is why it does not read anything at all: it should pick up the existing rows that are already in the database, but that is not happening. Do I need to add or change something in the configuration?
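
For completeness, these are the timestamp-related options from the connector documentation that I have not set; the values below are only illustrative, not a confirmed fix:

    "timestamp.initial": "-1",
    "timestamp.delay.interval.ms": "0",
    "db.timezone": "UTC",
    "poll.interval.ms": "5000"

Per the docs, leaving timestamp.initial unset (as in my config) should retrieve all existing data, while -1 would start from the current time only, so the default should already cover the existing rows. That makes me wonder whether something like a DB server clock or timezone mismatch (db.timezone) is pushing the upper bound of the query window below all the CREATE_TS values.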