由于" RetriesExhaustedException" Thrift Server崩溃

Asked: 2015-11-12 10:10:47

Tags: hadoop hbase thrift hortonworks-data-platform

When running Thrift (/usr/hdp/2.3.0.0-2557/hbase/bin/hbase-daemon.sh start thrift), it occasionally stops working. In the log I can see this exception:

2015-11-12 11:56:11,926 WARN [thrift-worker-3] thrift.ThriftServerRunner$HBaseHandler: Can't get the location 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location 
 at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:309)
 at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153) 
 at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61) 
 at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) 
 at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320) 
 at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295) 
 at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160) 
 at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155) 
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811) 
 at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerOpenWithScan(ThriftServerRunner.java:1451) 
 at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
 at java.lang.reflect.Method.invoke(Method.java:497) 
 at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67) 
 at com.sun.proxy.$Proxy11.scannerOpenWithScan(Unknown Source) 
 at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4609) 
 at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4593) 
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289) 
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: hconnection-0x20ae27f0 closed 
 at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1146) 
 at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
 ... 22 more 

I end up restarting Thrift to get it running again. I have tried implementing the workaround from here, but it still crashes:

<property>
  <name>hbase.thrift.connection.max-idletime</name>
  <value>1800000</value>
</property>
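
For reference, a minimal sketch of how the workaround is typically applied. It assumes the property belongs in hbase-site.xml under the HDP 2.3 install path from the question (your configuration directory may differ) and that the Thrift daemon must be restarted for the setting to take effect:

# Assumed HDP 2.3 default paths; adjust for your installation.
# 1. Add the <property> block above to /usr/hdp/2.3.0.0-2557/hbase/conf/hbase-site.xml
# 2. Restart the Thrift daemon so the new idle-time setting is picked up:
/usr/hdp/2.3.0.0-2557/hbase/bin/hbase-daemon.sh stop thrift
/usr/hdp/2.3.0.0-2557/hbase/bin/hbase-daemon.sh start thrift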

We are using the HDP 2.3.0 stack from Hortonworks, with HBase 1.1.1.2.3.0.0-2557.

1 Answer:

Answer 0 (score: 2)

This is a long-standing issue in HBase. There are two separate tickets about it: HBASE-14533 and HBASE-14196. Both tickets provide a patch to address the problem.

According to Pankaj Kumar's comment on 01/19, merging the two patches resolves the issue. Perhaps that will help in your case as well.
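
If picking up a build with those patches is not immediately possible, one stop-gap (my own suggestion, not part of the answer above) is to automate the restart the questioner is already doing by hand. A rough sketch in shell, assuming the HDP 2.3 paths from the question and the default Thrift port 9090:

#!/bin/bash
# Hypothetical watchdog: restart the HBase Thrift server if its port stops answering.
# Assumes the default Thrift port 9090, the HDP 2.3 install path from the question,
# and a writable log location (adjust all of these for your environment).
HBASE_BIN=/usr/hdp/2.3.0.0-2557/hbase/bin/hbase-daemon.sh
LOG=/var/log/hbase/thrift-watchdog.log
while true; do
  if ! nc -z localhost 9090; then
    echo "$(date) Thrift port not responding, restarting" >> "$LOG"
    "$HBASE_BIN" stop thrift
    "$HBASE_BIN" start thrift
  fi
  sleep 60
done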