Kafka - ERROR Stopping after connector error: java.lang.IllegalArgumentException: Number of groups must be positive

Time: 2018-01-29 20:32:03

Tags: jdbc apache-kafka apache-kafka-connect confluent confluent-schema-registry

Setting up Kafka to pipe data from our RDS Postgres 9.6 into Redshift. Following the guide at https://blog.insightdatascience.com/from-postgresql-to-redshift-with-kafka-connect-111c44954a6a, we have all the infrastructure set up and are working on fully setting up Confluent. I'm getting the error java.lang.IllegalArgumentException: Number of groups must be positive. when trying to get things running. Here is my config file:

name=source-postgres
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=16

connection.url= ((correct url and information here))
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=postgres_

Full error:


/usr/local/confluent$ /usr/local/confluent/bin/connect-standalone /usr/local/confluent/etc/schema-registry/connect-avro-standalone.properties /usr/local/confluent/etc/kafka-connect-jdbc/source-postgres.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/confluent/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/confluent/share/java/kafka-connect-elasticsearch/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/confluent/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/confluent/share/java/kafka/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2018-01-29 16:49:49,820] INFO StandaloneConfig values:
        access.control.allow.methods =
        access.control.allow.origin =
        bootstrap.servers = [localhost:9092]
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        key.converter = class io.confluent.connect.avro.AvroConverter
        offset.flush.interval.ms = 60000
        offset.flush.timeout.ms = 5000
        offset.storage.file.filename = /tmp/connect.offsets
        rest.advertised.host.name = null
        rest.advertised.port = null
        rest.host.name = null
        rest.port = 8083
        task.shutdown.graceful.timeout.ms = 5000
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
[2018-01-29 16:49:49,942] INFO Logging initialized @549ms (org.eclipse.jetty.util.log:186)
[2018-01-29 16:49:50,301] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:52)
[2018-01-29 16:49:50,302] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
[2018-01-29 16:49:50,302] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:113)
[2018-01-29 16:49:50,302] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
[2018-01-29 16:49:50,304] INFO Worker started (org.apache.kafka.connect.runtime.Worker:118)
[2018-01-29 16:49:50,305] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
[2018-01-29 16:49:50,305] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:98)
[2018-01-29 16:49:50,434] INFO jetty-9.2.15.v20160210 (org.eclipse.jetty.server.Server:327)
Jan 29, 2018 4:49:51 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected:
WARNING: A (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: A (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: A (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: A (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2018-01-29 16:49:51,385] INFO Started o.e.j.s.ServletContextHandler@5aabbb29{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2018-01-29 16:49:51,409] INFO Started ServerConnector@54dab9ac{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2018-01-29 16:49:51,409] INFO Started @2019ms (org.eclipse.jetty.server.Server:379)
[2018-01-29 16:49:51,410] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:150)
[2018-01-29 16:49:51,410] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:58)
[2018-01-29 16:49:51,412] INFO ConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
        key.converter = null
        name = source-postgres
        tasks.max = 16
        value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig:180)
[2018-01-29 16:49:51,413] INFO Creating connector source-postgres of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:159)
[2018-01-29 16:49:51,416] INFO Instantiated connector source-postgres with version 3.1.2 of type class io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:162)
[2018-01-29 16:49:51,419] INFO JdbcSourceConnectorConfig values:
        batch.max.rows = 100
        connection.url =
        incrementing.column.name = id
        mode = timestamp+incrementing
        poll.interval.ms = 5000
        query =
        schema.pattern = null
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.types = [TABLE]
        table.whitelist = []
        timestamp.column.name = updated_at
        timestamp.delay.interval.ms = 0
        topic.prefix = postgres_
        validate.non.null = true
 (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig:180)
[2018-01-29 16:49:52,129] INFO Finished creating connector source-postgres (org.apache.kafka.connect.runtime.Worker:173)
[2018-01-29 16:49:52,130] INFO SourceConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
        key.converter = null
        name = source-postgres
        tasks.max = 16
        value.converter = null
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:180)
[2018-01-29 16:49:52,209] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:102)
java.lang.IllegalArgumentException: Number of groups must be positive.
        at org.apache.kafka.connect.util.ConnectorUtils.groupPartitions(ConnectorUtils.java:45)
        at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:123)
        at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:193)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.recomputeTaskConfigs(StandaloneHerder.java:251)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:281)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:163)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:96)
[2018-01-29 16:49:52,210] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:68)
[2018-01-29 16:49:52,210] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:154)
[2018-01-29 16:49:52,213] INFO Stopped ServerConnector@54dab9ac{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2018-01-29 16:49:52,218] INFO Stopped o.e.j.s.ServletContextHandler@5aabbb29{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2018-01-29 16:49:52,224] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:165)
[2018-01-29 16:49:52,224] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:76)
[2018-01-29 16:49:52,224] INFO Stopping connector source-postgres (org.apache.kafka.connect.runtime.Worker:218)
[2018-01-29 16:49:52,225] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector:137)
[2018-01-29 16:49:52,225] INFO Stopped connector source-postgres (org.apache.kafka.connect.runtime.Worker:229)
[2018-01-29 16:49:52,225] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:122)
[2018-01-29 16:49:52,225] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:68)
[2018-01-29 16:49:52,225] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:142)
[2018-01-29 16:49:57,334] INFO Reflections took 6952 ms to scan 263 urls, producing 12036 keys and 80097 values (org.reflections.Reflections:229)
[2018-01-29 16:49:57,346] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:86)
[2018-01-29 16:49:57,346] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:73)

We have been using DMS between our RDS Postgres (9.6) and Redshift. It has failed fairly miserably and has become quite expensive at this point, so we are moving to this as a possible solution. I'm pretty much hitting a wall here and would really appreciate some help.

2 answers:

Answer 0 (score: 0):

I was dealing with a very similar issue, and I found that if the connector has no configuration telling it what to pull, it will simply error out. Try adding the following to your connector configuration:

table.whitelist =

and then specify the list of tables you want to pull (see the sketch below).
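For reference, a rough sketch of the asker's config with a whitelist added might look like the following; the table names my_first_table and my_second_table are placeholders, not values from the original question:

name=source-postgres
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=16
connection.url= ((correct url and information here))
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=postgres_
table.whitelist=my_first_table,my_second_table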

Answer 1 (score: 0):

I hit this error while getting a JDBC Source Connector working. The problem was that the table.whitelist setting is case sensitive, even though the underlying database isn't (the RDBMS was MS SQL Server).

So my table was tableName, and I had "table.whitelist": "tablename",. This failed, and I got the above error. Changing it to "table.whitelist": "tableName", fixed the error.

This despite the fact that both SELECT * FROM tablename and SELECT * FROM tableName work in MS SQL Manager.
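As a rough illustration only (the connector name, connection URL, and table name below are placeholders, not the actual values), the case-sensitive whitelist in a connector config submitted as JSON to the Connect REST API would look something like:

{
  "name": "source-mssql",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:sqlserver://<host>:1433;databaseName=<db>",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "updated_at",
    "incrementing.column.name": "id",
    "table.whitelist": "tableName",
    "topic.prefix": "mssql_"
  }
}

The whitelist value has to match the table name exactly as the database reports it, even if queries against that table are case-insensitive.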