How do I fix a ZooKeeper NullPointerException in my Kafka Connect sink to HBase?

Asked: 2019-07-16 14:58:31

Tags: bigdata hbase apache-kafka-connect

I am using an Ambari-managed service in one of our AWS environments. A Kafka message queue is already running there together with ZooKeeper and an HBase database. I am using the Landoop HBaseSink connector: https://docs.lenses.io/connectors/sink/hbase.html

I can read from and write to Kafka using the Avro Schema Registry, and the connector does detect new messages on the topic. However, writing the data to HBase fails with a NullPointerException that appears to be related to ZooKeeper.

I have tried to work out how the connector discovers the HBase cluster, and it appears to happen automatically via ZooKeeper. The documentation linked above says I should edit hbase-site.xml and add it to the plugin.path set in the connector properties.
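
For reference, below is a minimal sketch of what that hbase-site.xml could contain. The hostnames and the znode parent are placeholders, not values from my setup, and have to match the actual cluster. Note that Ambari-managed (HDP) clusters typically set zookeeper.znode.parent to /hbase-unsecure, while stock HBase defaults to /hbase; a mismatch there makes the client look under a znode that does not exist, which is one known way to end up with this kind of NullPointerException.

    <!-- hbase-site.xml: minimal sketch; all values are placeholders -->
    <configuration>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk-host1,zk-host2,zk-host3</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
      <property>
        <!-- Ambari/HDP clusters often use /hbase-unsecure; stock HBase uses /hbase -->
        <name>zookeeper.znode.parent</name>
        <value>/hbase-unsecure</value>
      </property>
    </configuration>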

These are my connector properties:

bootstrap.servers=...
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=...
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=...
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=...
name=hbasetest
connector.class=com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector
tasks.max=1
topics=hbasetest
connect.hbase.column.family=d
connect.hbase.kcql=INSERT INTO person SELECT * FROM hbasetest
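
As the documentation suggests, I tried to place hbase-site.xml under plugin.path. A sketch of what I mean, assuming a hypothetical plugin directory layout and the usual Ambari config location (the real paths are elided above):

    # hypothetical paths -- adjust to the actual plugin.path and HBase conf location
    cp /etc/hbase/conf/hbase-site.xml /path/to/plugin.path/kafka-connect-hbase/
    # restart the standalone worker afterwards so the file is picked up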

The error message from the connector is as follows:

ERROR Encountered error null (com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter:62)
java.lang.NullPointerException
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:489)
        at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:558)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1195)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1365)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:410)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:359)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1498)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1094)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$insert$1$$anonfun$1.apply$mcV$sp(HbaseWriter.scala:104)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$insert$1$$anonfun$1.apply(HbaseWriter.scala:104)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$insert$1$$anonfun$1.apply(HbaseWriter.scala:104)
        at scala.util.Try$.apply(Try.scala:192)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$insert$1.apply(HbaseWriter.scala:104)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$insert$1.apply(HbaseWriter.scala:75)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter.insert(HbaseWriter.scala:75)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter.write(HbaseWriter.scala:64)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask$$anonfun$put$2.apply(HbaseSinkTask.scala:81)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask$$anonfun$put$2.apply(HbaseSinkTask.scala:81)
        at scala.Option.foreach(Option.scala:257)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask.put(HbaseSinkTask.scala:81)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Edit: I am starting the connector like this (with the properties files above):

./bin/connect-standalone.sh ./config/connect-standalone.properties ./config/connect-hbase-sink.properties
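
A workaround I have also seen suggested is to put the HBase configuration directory on the worker's classpath instead, since Kafka's launcher scripts preserve a pre-set CLASSPATH variable. A sketch, assuming the Ambari default location /etc/hbase/conf:

    # hypothetical conf directory; kafka-run-class.sh appends $CLASSPATH if it is set
    export CLASSPATH=/etc/hbase/conf
    ./bin/connect-standalone.sh ./config/connect-standalone.properties ./config/connect-hbase-sink.properties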

0 Answers