Connecting to a remote HBase with Spark Scala

Date: 2018-08-08 12:45:47

Tags: scala apache-spark hadoop hbase spark-streaming

I have Hadoop and Spark configured on Windows (my local machine), and Cloudera installed in a VM on the same machine with HBase running inside it. I am trying to use Spark Streaming to pull data and write it into HBase in the VM.

Is it possible to do this?

My attempt:

HBase package:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

object Connect {

  def main(args: Array[String]): Unit = {
    // Point the client at the ZooKeeper quorum running inside the VM.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "192.168.117.133")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(hbaseConf)
    val admin = connection.getAdmin

    // List the tables visible to this connection.
    val listTables = admin.listTables()
    listTables.foreach(println)

    connection.close()
  }
}
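Beyond listing tables, the stated goal is to write streamed records into HBase. The sketch below is one way that could look once the connection problem is solved; it cannot run without a live Spark and HBase cluster, and the table name `Acadgild_spark_Hbase` (taken from the question's unused variable), the column family `cf`, the `line` qualifier, and the local socket source are all assumptions for illustration. The HBase connection is created inside `foreachPartition` because it is not serializable and must live on the executor:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamToHBase {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("StreamToHBase"), Seconds(10))

    // Hypothetical source: text lines arriving on a local socket.
    val lines = ssc.socketTextStream("localhost", 9999)

    lines.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // Build the (non-serializable) connection on the executor, per partition.
        val conf = HBaseConfiguration.create()
        conf.set("hbase.zookeeper.quorum", "192.168.117.133")
        conf.set("hbase.zookeeper.property.clientPort", "2181")
        val connection = ConnectionFactory.createConnection(conf)
        val table = connection.getTable(TableName.valueOf("Acadgild_spark_Hbase"))
        try {
          records.foreach { line =>
            // Hypothetical row key; a real job would use a meaningful key.
            val put = new Put(Bytes.toBytes(line.hashCode.toString))
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("line"), Bytes.toBytes(line))
            table.put(put)
          }
        } finally {
          table.close()
          connection.close()
        }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```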

Error:

18/08/08 21:05:09 INFO ZooKeeper: Initiating client connection, connectString=192.168.117.133:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$13/1357491107@12d1bfb1
18/08/08 21:05:15 INFO ClientCnxn: Opening socket connection to server 192.168.117.133/192.168.117.133:2181. Will not attempt to authenticate using SASL (unknown error)
18/08/08 21:05:15 INFO ClientCnxn: Socket connection established to 192.168.117.133/192.168.117.133:2181, initiating session
18/08/08 21:05:15 INFO ClientCnxn: Session establishment complete on server 192.168.117.133/192.168.117.133:2181, sessionid = 0x16518f57f950012, negotiated timeout = 40000
18/08/08 21:05:16 WARN ConnectionUtils: Can not resolve quickstart.cloudera, please check your network
java.net.UnknownHostException: quickstart.cloudera
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
    at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
    at java.net.InetAddress.getAllByName0(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getByName(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:233)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1126)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1148)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3055)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3047)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:460)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:444)
    at azure.iothub$.main(iothub.scala:35)
    at azure.iothub.main(iothub.scala)

1 Answer:

Answer 0 (score: 0):

Based on this error, you cannot use quickstart.cloudera in your code, because the network stack is trying to resolve it via DNS, but your external router knows nothing about your VM.

You need to use localhost, and then make sure the VM is properly configured to forward the ports you need to connect to.

However, I believe ZooKeeper is returning that hostname to your code, so you must edit the hosts file on the host OS machine to add an entry.

For example:

127.0.0.1 localhost quickstart.cloudera

Alternatively, you could poke around in zookeeper-shell or Cloudera Manager (in the HBase configuration) and edit quickstart.cloudera to return the address 192.168.117.133 instead.
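On the Windows side, the same idea can be applied locally without touching the VM: a hosts-file entry mapping the hostname ZooKeeper returns to the VM's IP (192.168.117.133 in the question) lets the client resolve it. This is a config fragment, not code; the path assumes a default Windows install and editing it requires Administrator rights:

```
# C:\Windows\System32\drivers\etc\hosts
192.168.117.133  quickstart.cloudera
```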