Spark HBase connection problem

Date: 2016-11-16 09:56:52

Tags: apache-spark hbase apache-spark-sql hortonworks-data-platform

I hit the following error when I try to connect to HBase from Spark (via newAPIHadoopRDD) on HDP 2.4.2. I have already tried increasing the RPC timeout in hbase-site.xml, but the result is the same. Any idea how to fix this? (A minimal sketch of the kind of read involved is shown after the trace.)

Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Wed Nov 16 14:59:36 IST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0

   at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
   at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:195)
   at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
   at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
   at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
   at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
   at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
   at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
   at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
   at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
   at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
   at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
   at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
   at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
   at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
   at scala.Option.getOrElse(Option.scala:120)
   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
   at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
   at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
   at scb.Hbasetest$.main(Hbasetest.scala:85)
   at scb.Hbasetest.main(Hbasetest.scala)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0
       at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
       at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
       at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
       at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1281)
       at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1252)
       at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
       at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
       at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
       at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
       at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
       at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
       at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
       at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
       at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
       at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
       ... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
       at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1078)
       at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:879)
       at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:604)
16/11/16 14:59:36 INFO SparkContext: Invoking stop() from shutdown hook
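
For context, here is a minimal sketch of the kind of newAPIHadoopRDD read that fails at Hbasetest.scala:85 in the trace, assuming the Spark 1.6 / HBase 1.1 era APIs shipped with HDP 2.4.2. Only the table name 'scores' comes from the post; the app name and timeout values are illustrative assumptions, not the asker's actual code:

    // Minimal sketch, assuming HDP 2.4.2-era Spark and HBase client APIs.
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    object Hbasetest {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("Hbasetest"))

        // create() only picks up hbase-site.xml if it is on the classpath;
        // a missing or partial client config is consistent with the
        // accepted fix in the answers below.
        val hbaseConf = HBaseConfiguration.create()
        hbaseConf.set(TableInputFormat.INPUT_TABLE, "scores")

        // The RPC/scanner timeouts can also be raised programmatically
        // instead of in hbase-site.xml (values here are illustrative).
        hbaseConf.set("hbase.rpc.timeout", "120000")
        hbaseConf.set("hbase.client.scanner.timeout.period", "120000")

        val rdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
          classOf[ImmutableBytesWritable], classOf[Result])
        println(rdd.count())   // the count() that dies in the trace above
        sc.stop()
      }
    }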

2 Answers:

Answer 0 (score: 1)

I added the HBase conf directory to the Hadoop classpath, and the problem was solved.

Thanks!
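
For anyone who lands here: putting the HBase configuration directory on the Hadoop classpath (for example via HADOOP_CLASSPATH) is what lets HBaseConfiguration.create() pick up the real cluster settings. As an alternative sketch, the file can also be loaded explicitly from inside the job; the path below is an assumption for a typical HDP layout, so adjust it for your cluster:

    // Sketch: load hbase-site.xml explicitly instead of relying on the classpath.
    // The path is an assumption for an HDP-style install (often /etc/hbase/conf).
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration

    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"))
    // hbaseConf now carries the cluster's ZooKeeper quorum and timeouts
    // instead of the library defaults.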

Answer 1 (score: 0)

Although the context was slightly different, I ran into a similar exception when connecting Hive with HBase. Guess what: the column mapping for my HBase table was misconfigured. After configuring the table's columns (its metadata) correctly, the problem disappeared. A sketch of such a mapping follows.
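
To show what a correct mapping looks like, here is a minimal sketch using the Hive HBase storage handler from Spark's HiveContext. All table and column names (scores_hive, rowkey, cf:score) are invented for illustration; the point is that the Hive column list and hbase.columns.mapping must line up one-to-one, with :key covering the row key:

    // Illustrative sketch only: every table/column name below is an assumption.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("HiveHbaseMappingSketch"))
    val hiveContext = new HiveContext(sc)

    // Two Hive columns (rowkey, score) map one-to-one onto the two entries in
    // hbase.columns.mapping: ":key" for the row key, "cf:score" for a column.
    hiveContext.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS scores_hive (rowkey STRING, score INT)
        |STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
        |WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:score')
        |TBLPROPERTIES ('hbase.table.name' = 'scores')""".stripMargin)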
