Confluent JDBC Source Connector v5.2.2 throws "org.apache.kafka.connect.errors.DataException"

Time: 2019-07-05 06:58:00

Tags: jdbc apache-kafka apache-kafka-connect confluent confluent-schema-registry

I am running the Confluent JDBC source connector v5.2.2 in standalone mode, with confluent-schema-registry, to pull table data from DB2 v10.5.0.10. Everything works fine with v5.0.0, but with v5.2.2 I get the following error.

ERROR WorkerSourceTask{id=source-ibm-db2-jdbc-project-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:269)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.connect.errors.DataException: Kafka Connect Date type should not have any time fields set to non-zero values.
        at org.apache.kafka.connect.data.Date.fromLogical(Date.java:64)
        at io.confluent.connect.avro.AvroData$6.convert(AvroData.java:275)
        at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:419)
        at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:606)
        at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:365)
        at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:77)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:269)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
        ... 11 more
[2019-07-05 12:19:25,217] ERROR WorkerSourceTask{id=source-ibm-db2-jdbc-project-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
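The DataException is raised in org.apache.kafka.connect.data.Date.fromLogical, which rejects any java.util.Date whose time-of-day fields are non-zero, since a Connect Date logical type is supposed to carry a pure date (midnight UTC). The sketch below is not Connect's actual source, just an illustration of the check the error message describes; the class and method names (DateCheck, hasNonZeroTimeFields) are made up:

```java
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class DateCheck {
    // Returns true when the value carries a time-of-day component,
    // i.e. when it would trip the "Kafka Connect Date type should not
    // have any time fields set to non-zero values" validation.
    static boolean hasNonZeroTimeFields(Date value) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTime(value);
        return cal.get(Calendar.HOUR_OF_DAY) != 0
                || cal.get(Calendar.MINUTE) != 0
                || cal.get(Calendar.SECOND) != 0
                || cal.get(Calendar.MILLISECOND) != 0;
    }

    public static void main(String[] args) {
        // Epoch midnight (UTC) is a valid Connect Date; one second past
        // midnight is not.
        System.out.println(hasNonZeroTimeFields(new Date(0L)));
        System.out.println(hasNonZeroTimeFields(new Date(1000L)));
    }
}
```

So any DATE column value that the connector materializes with a non-midnight time-of-day (for example after a timezone shift) would fail this check during Avro conversion.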

confluent-jdbc-source-connector.properties

connection.url=jdbc:db2://*****:50000/dbName
connection.user=*****
connection.password=*****

schema.pattern=schemaName
table.whitelist=TableName
poll.interval.ms=5000
validate.non.null=false

mode=timestamp
timestamp.column.name=CREATE_TIMESTAMP,SYN_TIMESTAMP
db.timezone=Asia/Hong_Kong
numeric.mapping=best_fit

topic.prefix=TopicPrefix.

transforms=ExtractKey, SetSchemaMetadata, SetSchemaMetadataKey
transforms.ExtractKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.ExtractKey.fields=ID

transforms.SetSchemaMetadataKey.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Key
transforms.SetSchemaMetadataKey.schema.name=namespace.ClassNameKey
transforms.SetSchemaMetadataKey.schema.version=1

transforms.SetSchemaMetadata.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
transforms.SetSchemaMetadata.schema.name=namespace.ClassNameValue
transforms.SetSchemaMetadata.schema.version=1
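
Since the exception occurs while converting a Connect Date value, one possible workaround is to re-map the table's DATE column (EVENT_DATE) with the built-in TimestampConverter SMT so the record no longer contains a Date logical type at conversion time. This is an untested sketch; the transform alias DateFix is made up, and the transform chain here extends the existing one:

```properties
# Hypothetical extra transform chained after the existing ones.
# target.type=string sidesteps the Date logical type by emitting
# the column as formatted text.
transforms=ExtractKey,SetSchemaMetadata,SetSchemaMetadataKey,DateFix
transforms.DateFix.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.DateFix.field=EVENT_DATE
transforms.DateFix.target.type=string
transforms.DateFix.format=yyyy-MM-dd
```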

connect-standalone.properties

###JsonConverter###
#key.converter=org.apache.kafka.connect.json.JsonConverter
#value.converter=org.apache.kafka.connect.json.JsonConverter

#key.converter.schemas.enable=false
#value.converter.schemas.enable=false

###AvroConverter###
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8084

value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8084

offset.flush.interval.ms=10000
plugin.path=/Users/confluent-5.2.2/share/java/confluentinc-kafka-connect-jdbc-5.2.2

I also tried the JsonConverter by commenting out all the AvroConverter-related fields in connect-standalone.properties, but a similar error still occurs.

The IBM DB2 table has the following columns: CREATE_TIMESTAMP (TYPE_NAME=TIMESTAMP, IS_NULLABLE=NO), SYN_TIMESTAMP (TYPE_NAME=TIMESTAMP, IS_NULLABLE=YES), EVENT_DATE (TYPE_NAME=DATE, IS_NULLABLE=NO); the remaining columns are CHAR, DECIMAL, and VARCHAR, all defined with IS_NULLABLE=NO.

Why does this work with v5.0.0 but not with v5.2.2? The reason I want to use v5.2.2 is that a similar confluent-jdbc-sink connector I defined for the same IBM DB2 database does not work with v5.0.0 but does work with v5.2.2. I would like to use the latest version for both the JDBC sink and source.

0 answers:

No answers yet