Unable to access an HBase database (running on a secured cluster) from Eclipse?

Date: 2016-11-18 13:38:50

Tags: scala hbase apache-spark-sql kerberos hbase-client

I am trying to connect to an HBase database from an Eclipse Scala program running on Windows.

The cluster is secured using Kerberos authentication, so the program fails to connect to the HBase database.

At the moment we build a jar file and run it on the cluster every time, which works, but that is no use for development and debugging.
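
When running from a developer machine against a Kerberized cluster, the usual client-side step is an explicit keytab login before any HBase call. A minimal sketch, assuming a keytab is available on the Windows machine (the principal and the C:/keytabs path below are placeholders, not from the original post):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.security.UserGroupInformation

    // Loads core-site.xml / hbase-site.xml from the classpath; they must
    // declare hadoop.security.authentication=kerberos for the login to apply.
    val conf: Configuration = HBaseConfiguration.create()
    UserGroupInformation.setConfiguration(conf)
    // Placeholder principal and keytab path -- substitute your own.
    UserGroupInformation.loginUserFromKeytab("devuser@EXAMPLE.COM", "C:/keytabs/devuser.keytab")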

How do I set hbase-site.xml on the classpath?

I downloaded the *-site.xml files and tried adding hbase-site.xml, core-site.xml and hdfs-site.xml to a source folder, and also tried adding the folder containing them as an external class folder in the project build path, but nothing worked. How can I make this work?
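
For what it's worth, HBaseConfiguration.create() only picks the files up if they sit on the JVM's runtime classpath under their exact names, so the Run Configuration's classpath matters as much as the build path. A quick diagnostic sketch (the C:/cluster-conf folder is an assumed location for the downloaded files):

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration

    val conf = HBaseConfiguration.create()
    // If this prints "localhost", hbase-site.xml was not found on the classpath.
    println(conf.get("hbase.zookeeper.quorum"))
    // Fallback: load the downloaded files from an explicit local folder instead.
    conf.addResource(new Path("C:/cluster-conf/core-site.xml"))
    conf.addResource(new Path("C:/cluster-conf/hdfs-site.xml"))
    conf.addResource(new Path("C:/cluster-conf/hbase-site.xml"))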

Alternatively, is there any way to set hbase-site.xml through the sqlContext? I use the sqlContext with the Hortonworks connector to read the HBase table.
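
As far as I can tell, the Hortonworks connector (the org.apache.spark.sql.execution.datasources.hbase package in the trace below) builds its own HBaseConfiguration internally from the classpath, so there is no sqlContext option that replaces hbase-site.xml. For context, a minimal read sketch (the table name and column mapping in the catalog are made up for illustration):

    import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Hypothetical catalog mapping a table "mytable" with one column family.
    val catalog =
      """{
        |  "table": {"namespace": "default", "name": "mytable"},
        |  "rowkey": "key",
        |  "columns": {
        |    "rowkey": {"cf": "rowkey", "col": "key", "type": "string"},
        |    "value":  {"cf": "cf1", "col": "value", "type": "string"}
        |  }
        |}""".stripMargin

    val df = sqlContext.read
      .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .load()
    df.show()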

The error log is:

Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.init(HBaseResources.scala:93)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.liftedTree1$1(HBaseResources.scala:57)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.acquire(HBaseResources.scala:54)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.acquire(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.releaseOnException(HBaseResources.scala:74)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.releaseOnException(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.<init>(HBaseResources.scala:108)
       at org.apache.spark.sql.execution.datasources.hbase.HBaseTableScanRDD.getPartitions(HBaseTableScan.scala:60)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
       at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
       at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
       at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
       at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
       at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
       at scb.HBaseBroadcast$.main(HBaseBroadcast.scala:106)
       at scb.HBaseBroadcast.main(HBaseBroadcast.scala)
Caused by: java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
       ... 44 more
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
       at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
       at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:147)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
       at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
       at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
       at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
       at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
       at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
       ... 49 more

1 Answer:

Answer 0 (score: 1):

You have a hadoop-hdfs version conflict: the AbstractMethodError on ConfiguredFailoverProxyProvider.getProxy() means the HDFS client jar on your development classpath was built against a different Hadoop version than the one running on the cluster. Check the version on the server against the version on your development classpath and align them.
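
One way to fix this on the development side is to pin the client jars to the cluster's Hadoop release (run "hadoop version" on the server to find it). A build.sbt sketch, with the version numbers below as placeholders to be replaced by the cluster's actual versions:

    // build.sbt -- versions are placeholders; match them to the cluster.
    libraryDependencies ++= Seq(
      "org.apache.hadoop" % "hadoop-client" % "2.7.1",
      "org.apache.hadoop" % "hadoop-hdfs"   % "2.7.1",
      "org.apache.hbase"  % "hbase-client"  % "1.1.2"
    )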