scala connect hbase master failure

Date: 2016-12-13 08:45:41

Tags: scala api hadoop hbase

I wrote the following Scala code:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, HBaseAdmin}

val config: Configuration = HBaseConfiguration.create()
config.set("hbase.zookeeper.property.clientPort", zooKeeperClientPort)
config.set("hbase.zookeeper.quorum", zooKeeperQuorum)
config.set("zookeeper.znode.parent", zooKeeperZNodeParent)
config.set("hbase.master", hbaseMaster)
config.addResource("hbase-site.xml")
config.addResource("hdfs-site.xml")
HBaseAdmin.checkHBaseAvailable(config)
val admin: HBaseAdmin = new HBaseAdmin(config)
// descriptor.addColumn(new HColumnDescriptor(Bytes.toBytes("cfbfeature")))
val conn = ConnectionFactory.createConnection(config)
table = conn.getTable(TableName.valueOf(outputTable))

Here is my full error log:


zooKeeperClientPort: 2181, zooKeeperQuorum: zk1.hbase.busdev.usw2.cmcm.com,zk2.hbase.busdev.usw2.cmcm.com,zk3.hbase.busdev.usw2.cmcm.com, zooKeeperZNodeParent: /hbase, outputTable: RequestFeature, hbaseMaster: 10.2.2.62:60000
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ae9e162 connecting to ZooKeeper ensemble=zk2.hbase.busdev.usw2.cmcm.com:2181,zk1.hbase.busdev.usw2.cmcm.com:2181,zk3.hbase.busdev.usw2.cmcm.com:2181
16/12/13 08:25:57 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://mycluster/hbase/lib, ignored
java.net.UnknownHostException: unknown host: mycluster
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy3.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2508)
    at com.cmcm.datahero.streaming.actor.ToHBaseActor.preStart(ToHBaseActor.scala:51)
    at akka.actor.Actor$class.aroundPreStart(Actor.scala:472)
    at com.cmcm.datahero.streaming.actor.ToHBaseActor.aroundPreStart(ToHBaseActor.scala:16)
    at akka.actor.ActorCell.create(ActorCell.scala:580)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/12/13 08:25:57 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x356c1ee7cac04c8

1 Answer:

Answer 0 (score: 1)

I eventually moved the HBase and HDFS XML config files into src/main/resources and added them to the Hadoop Configuration via addResource. But that was not the core of my problem: the HBase client jars must match the version of the HBase cluster. I fixed my build.sbt, which is posted below. Hope it helps anyone who runs into the same error.
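For context on why the resource files matter: the `UnknownHostException: mycluster` in the log means the client tried to resolve `hdfs://mycluster` as a hostname, but `mycluster` is an HDFS HA nameservice that only exists if hdfs-site.xml is on the classpath. A sketch of the kind of hdfs-site.xml that defines it (all hostnames and ports here are illustrative placeholders, not values from the question; the property names are the standard HDFS HA keys):

```xml
<!-- hdfs-site.xml fragment (illustrative values) that makes the
     logical nameservice "mycluster" resolvable for clients -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>nn2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With this file under src/main/resources it is bundled on the classpath, so `config.addResource("hdfs-site.xml")` actually finds it and the nameservice resolves.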

libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "2.6.0-mr1-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0-cdh5.5.4"
// libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-CDH"
// libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0"
// libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0"

//scalaSource in Compile := baseDirectory.value / "src/main/scala"
//resourceDirectory in Compile := baseDirectory.value / "src/main/resources"
unmanagedBase := baseDirectory.value / "lib"
//unmanagedResourceDirectories in Compile += baseDirectory.value / "conf"
packAutoSettings
resolvers += Resolver.sonatypeRepo("snapshots")
resolvers += "cloudera repo" at "https://repository.cloudera.com/content/repositories/releases/"
resolvers += "cloudera repo1" at "https://repository.cloudera.com/artifactory/cloudera-repos/"