I am trying to set up a source connector for the AdventureWorks database in Postgres; the screenshot above describes the table, and the source configuration is below. When the connector runs, it fails to process the columns that hold numeric values and skips every such row, claiming a bad value.
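Since the screenshot is not reproduced here, this is a sketch of the table as it is defined in the common AdventureWorks-for-Postgres port (the column names and types below are an assumption based on that port; verify against the actual schema). The detail that matters is that unitprice and unitpricediscount are declared as plain NUMERIC, with no precision or scale:

    -- Sketch of sales.salesorderdetail per the AdventureWorks-for-Postgres
    -- port (an assumption; check against the real table). The SQL Server
    -- money columns become unconstrained NUMERIC in this port.
    CREATE TABLE sales.salesorderdetail (
        salesorderid          integer     NOT NULL,
        salesorderdetailid    serial      NOT NULL,
        carriertrackingnumber varchar(25),
        orderqty              smallint    NOT NULL,
        productid             integer     NOT NULL,
        specialofferid        integer     NOT NULL,
        unitprice             numeric     NOT NULL,  -- no precision/scale declared
        unitpricediscount     numeric     NOT NULL DEFAULT 0.0,
        rowguid               uuid        NOT NULL,
        modifieddate          timestamp   NOT NULL
    );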
The warning logged for each skipped record:
[2018-09-04 14:48:03,324] WARN Ignoring record due to SQL error: (io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier)
org.postgresql.util.PSQLException: Bad value for type byte : 183.9382
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getByte(AbstractJdbc2ResultSet.java:2093)
    at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.lambda$columnConverterFor$18(GenericDatabaseDialect.java:1166)
    at io.confluent.connect.jdbc.source.SchemaMapping$FieldSetter.setField(SchemaMapping.java:160)
    at io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.extractRecord(TimestampIncrementingTableQuerier.java:176)
    at io.confluent.connect.jdbc.source.JdbcSourceTask.poll(JdbcSourceTask.java:297)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
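The getByte frame in the trace is the clue. With "numeric.mapping": "best_fit", the connector's GenericDatabaseDialect maps a NUMERIC column whose reported scale is 0 and whose reported precision is 2 or less to an INT8 schema and reads it with ResultSet.getByte. For a NUMERIC declared without precision and scale, the PostgreSQL JDBC driver reports both as 0, so a value like 183.9382 ends up being read as a byte and throws. A quick way to check what the database actually declares for these columns (for an unconstrained NUMERIC, precision and scale come back NULL):

    -- Inspect the declared precision/scale of the numeric columns.
    -- NULL here means the column is an unconstrained NUMERIC, which
    -- the JDBC driver reports as precision 0, scale 0.
    SELECT column_name, data_type, numeric_precision, numeric_scale
    FROM information_schema.columns
    WHERE table_schema = 'sales'
      AND table_name   = 'salesorderdetail'
      AND data_type    = 'numeric';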
Connector configuration:
{
  "name": "jdbc_source_sales.salesorderdetail",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "topics": "sales.salesorderdetail",
    "connection.url": "jdbc:postgresql://172.18.0.1/adventureworks?user=postgres&password=postgres",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "modifieddate",
    "incrementing.column.name": "salesorderid",
    "topic.prefix": "jdbc_source_sales_",
    "table.whitelist": "sales.salesorderdetail",
    "transforms": "CastUnitPrice, InsertKey, ExtractId, CastLong, AddNamespace",
    "transforms.InsertKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.InsertKey.fields": "salesorderid",
    "transforms.ExtractId.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.ExtractId.field": "salesorderid",
    "transforms.CastLong.type": "org.apache.kafka.connect.transforms.Cast$Key",
    "transforms.CastLong.spec": "int64",
    "transforms.AddNamespace.type": "de.smava.kafka.connect.transforms.Namespacefy",
    "transforms.AddNamespace.record.namespace": "com.company.data.vault20",
    "transforms.CastUnitPrice.type": "org.apache.kafka.connect.transforms.Cast$Value",
    "transforms.CastUnitPrice.spec": "unitprice:float64",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "numeric.mapping": "best_fit"
  }
}
[ EDIT ] The problem turned out to be the scale and precision of the NUMERIC field, but I still don't understand why the default scale produces this error.
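As the stack trace suggests, the driver reports precision 0 and scale 0 for an unconstrained NUMERIC, and best_fit treats that as a byte-sized integer, which is why even the "default" scale blows up on 183.9382. A possible fix, assuming the source table can be altered, is to declare an explicit precision and scale so that best_fit resolves the column to a well-defined Connect type (a logical Decimal here, since precision 19 does not fit a float) instead of INT8; numeric(19,4) mirrors SQL Server's original money type:

    -- A sketch of one possible fix, assuming the table may be altered:
    -- give the money columns an explicit precision/scale so that
    -- numeric.mapping=best_fit no longer sees precision 0 / scale 0.
    ALTER TABLE sales.salesorderdetail
        ALTER COLUMN unitprice         TYPE numeric(19, 4),
        ALTER COLUMN unitpricediscount TYPE numeric(19, 4);

After changing the column types, restart the connector task so it picks up the new column metadata.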