Phoenix SQL query fails on a large data set

Date: 2017-08-29 12:41:58

Tags: hadoop hbase phoenix bigdata

I have 5 million records in HBase. When I try to find the total record count through the Phoenix command line, I get the following error.

Error: org.apache.phoenix.exception.PhoenixIOException: Failed to get result within timeout, timeout=60000ms (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
    at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
    at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
    at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
    at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
    at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
    at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
    at sqlline.BufferedRows.<init>(BufferedRows.java:37)
    at sqlline.SqlLine.print(SqlLine.java:1650)
    at sqlline.Commands.execute(Commands.java:833)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:808)
    at sqlline.SqlLine.begin(SqlLine.java:681)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed to get result within timeout, timeout=60000ms
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:766)
    ... 15 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
    at org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:203)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:206)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
    at org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:199)
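For context, the failing statement was a full-table aggregate of this form (the table name below is illustrative; the original post does not give it):

```sql
-- Assumed count query over the 5M-row table; table name is a placeholder.
SELECT COUNT(*) FROM MY_TABLE;
```

A `COUNT(*)` forces Phoenix to scan every region, so on a large table the scan can easily exceed the default 60000 ms client timeout seen in the stack trace.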

Please take a look at this, because I am unable to resolve the issue. I changed the HBase configuration according to the guidance below, but it still does not work.

https://community.hortonworks.com/content/supportkb/49037/phoenix-sqlline-query-on-larger-data-set-fails-wit.html

I made the changes in /etc/hbase/conf/hbase-site.xml. Do I also need to copy this file anywhere for Phoenix? I don't understand this part. Could you help me?

Please let me know if you need more details.

2 answers:

Answer 0 (score: 2)

It's a late answer, but I ran into the same problem and solved it by setting maxSessionTimeout in zoo.cfg to a larger value (in HDP this can be done through Ambari). The default is 60000 ms, i.e. after one minute ZooKeeper closes the session it opened for the query.
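As a sketch, the relevant zoo.cfg entry might look like the following (the 120000 ms value is illustrative, not a recommendation; pick a limit that comfortably exceeds your slowest query):

```
# zoo.cfg -- raise the upper bound on client session timeouts
# (default effective maximum is 60000 ms with tickTime=2000)
maxSessionTimeout=120000
```

After editing, restart the ZooKeeper ensemble (or apply the change via Ambari) so the new session limit takes effect.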

Answer 1 (score: 0)

You can increase the following parameters to resolve this:

  1. phoenix.query.timeoutMs
  2. hbase.regionserver.lease.period
  3. hbase.rpc.timeout
  4. hbase.client.scanner.timeout.period
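A minimal hbase-site.xml sketch setting the four properties above (the 600000 ms values are illustrative; tune them to your cluster, and note that the client-side copy of hbase-site.xml on the machine running sqlline must also carry these settings):

```xml
<!-- hbase-site.xml: illustrative timeout values, not prescriptive -->
<configuration>
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>600000</value>
  </property>
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>600000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>600000</value>
  </property>
  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>600000</value>
  </property>
</configuration>
```

Keep the scanner and RPC timeouts at least as large as phoenix.query.timeoutMs, otherwise the lower-level HBase timeout will still fire first, as in the stack trace above.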