Code:
Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL, new SubstringComparator(args[1]));
Scan scan = new Scan();
scan.setFilter(filter);
ResultScanner res = table.getScanner(scan);
for (Result r : res) // LINE 49
{...}
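For context, this is roughly what the whole query class looks like when made self-contained; the connection setup, the table name 'maintable', and reading the match string from args[1] are assumptions inferred from the snippet and the stack trace, not the original code:

// Minimal sketch, assuming the HBase 1.x client API and that args[1] holds the substring to match.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;

public class Query {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("maintable"))) {
            // A RowFilter with SubstringComparator forces a full-table scan and
            // matches each row key by substring on the server side.
            Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL,
                    new SubstringComparator(args[1]));
            Scan scan = new Scan();
            scan.setFilter(filter);
            try (ResultScanner res = table.getScanner(scan)) {
                for (Result r : res) {  // hasNext() on this iterator is where the timeout surfaces
                    System.out.println(r);
                }
            }
        }
    }
}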
I run this jar, and then I get the following exception message:
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=32, exceptions: Mon Dec 26 11:45:00 CST 2016, null,
> java.net.SocketTimeoutException: callTimeout=60000,
> callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
>
> at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
> at com.beidu.hbaseutil.Query.main(Query.java:49) Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Failed after attempts=32, exceptions: Mon Dec 26 11:45:00 CST 2016,
> null, java.net.SocketTimeoutException: callTimeout=60000,
> callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
>
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:199)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:367)
> at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
> ... 1 more Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Call to comp75/172.16.249.75:16020 failed on
> local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException:
> Call id=2, waitTime=60002, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1235)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1203)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31751)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 6 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2,
> waitTime=60002, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1177)
> ... 12 more
Can anyone provide some clues? Thanks.
Answer (score: 0)
In this case, the scan you issued is taking longer to execute than the configured timeout (callTimeout=60000, callDuration=60304).
Assuming you can reach the cluster, you need to either tune the cluster/table/schema for reads, increase the client timeouts, or, if you really need heavy full-table scans, move that work to MapReduce.
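As a starting point, one option is to raise the client-side timeouts and reduce the amount of work each scanner RPC does. A minimal sketch follows; the property names are standard HBase client settings, but the values are only illustrative, not recommendations:

// Sketch of client-side tuning; values are illustrative.
Configuration conf = HBaseConfiguration.create();
// Timeout for each individual RPC to a RegionServer (default 60000 ms, which matches the error above).
conf.set("hbase.rpc.timeout", "120000");
// How long the client waits on a scanner call before giving up.
conf.set("hbase.client.scanner.timeout.period", "120000");

Scan scan = new Scan();
scan.setFilter(filter);
// Fewer rows fetched per RPC keeps each call short; the substring RowFilter still
// has to examine every row server-side, so each next() can take a long time otherwise.
scan.setCaching(100);

Raising timeouts only hides the cost of the full-table scan; designing the row key so the query can use a prefix scan (or maintaining a secondary index) avoids it entirely, which is what "optimize your table/schema for reads" amounts to here.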