Confluent Kafka Connect JdbcSourceTask: java.sql.SQLException: Java heap space

Time: 2018-11-26 11:47:27

Tags: jdbc apache-kafka apache-kafka-connect confluent

I am trying to use timestamp mode with MySQL, but when I do, no topics are created in Kafka, and there are no error logs either.

Here are the connector properties I am using:

{
  "name": "jdbc_source_mysql_reqistrations_local",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "tasks.max": "5",
    "connection.url": "jdbc:mysql://localhost:3306/prokafka?zeroDateTimeBehavior=ROUND&user=kotesh&password=kotesh",
    "poll.interval.ms": "100000000",
    "query": "SELECT Language, matriid, DateUpdated from usersdata.user",
    "mode": "timestamp",
    "timestamp.column.name": "DateUpdated",
    "validate.non.null": "false",
    "batch.max.rows": "10",
    "topic.prefix": "mysql-local-"
  }
}

Starting the connector:

./bin/confluent load jdbc_source_mysql_registration_local -d /home/prokafka/config-json/kafka-connect-jdbc-local-mysql.json
The load command returns:



{
  "name": "jdbc_source_mysql_reqistrations_local",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "tasks.max": "5",
    "connection.url": "jdbc:mysql://localhost:3306/prokafka?zeroDateTimeBehavior=ROUND&user=kotesh&password=kotesh",
    "poll.interval.ms": "100000000",
    "query": "SELECT Language, matriid, DateUpdated from usersdata.users",
    "mode": "timestamp",
    "timestamp.column.name": "DateUpdated",
    "validate.non.null": "false",
    "batch.max.rows": "10",
    "topic.prefix": "mysql-local-",
    "name": "jdbc_source_mysql_reqistrations_local"
  },
  "tasks": [
    {
      "connector": "jdbc_source_mysql_reqistrations_local",
      "task": 0
    }
  ],
  "type": null
}
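As an aside on the "no topics and no error logs" symptom: a Kafka Connect worker exposes a REST API (port 8083 by default) whose per-connector status endpoint includes the state of each task and, for a failed task, its stack trace, so it is worth checking before digging through logs. A minimal sketch that only builds the status URL, using the connector name from the config above:

```shell
# Connector name taken from the config above; adjust if yours differs.
NAME="jdbc_source_mysql_reqistrations_local"

# Default Connect REST endpoint; a failed task's status includes its trace.
STATUS_URL="http://localhost:8083/connectors/${NAME}/status"

# Against a live worker you would run:  curl -s "$STATUS_URL"
echo "$STATUS_URL"
```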

1 Answer:

Answer 0 (score: 1):

SQLException: Java heap space

It seems you are loading more data than Connect can handle with its current heap, so you need to increase the heap size.

For example, increase it to 6 GB (or more).

I have not tried doing this with the Confluent CLI, but based on the code, this might work:

confluent stop connect 
export CONNECT_KAFKA_HEAP_OPTS="-Xmx6g"
confluent start connect
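If the worker is started from a plain Apache Kafka distribution rather than through the Confluent CLI, the equivalent knob is the standard KAFKA_HEAP_OPTS environment variable, which Kafka's startup scripts read via kafka-run-class.sh. A minimal sketch, with illustrative paths:

```shell
# Give the Connect worker a 6 GB max heap before starting it.
# KAFKA_HEAP_OPTS is the standard variable read by Kafka's startup scripts.
export KAFKA_HEAP_OPTS="-Xmx6g"

# Then start the worker (not run here; path is illustrative):
#   ./bin/connect-distributed.sh config/connect-distributed.properties
echo "$KAFKA_HEAP_OPTS"
```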

If memory on this machine is limited, run Connect on a separate machine from the MySQL database, the Kafka brokers, Zookeeper, the Schema Registry, etc.