Apache Phoenix + Pentaho Mondrian wrong join order

Time: 2016-02-17 16:04:08

Tags: mondrian phoenix

I am using Apache Phoenix 4.5.2 from the Cloudera Labs distribution, installed on a CDH 5.4 cluster. Now I am trying to use it with a Pentaho BA 5.4 server that has embedded Mondrian and the Saiku plugin installed.

What I plan to use is the aggregator of the Pentaho Mondrian ROLAP engine. So I imported about 65 million facts into the fact table through a slightly customized Pentaho Data Integration (in case anyone is interested: I added UPSERT to the Table Output step, set Commit size to -1, pointed the thin driver phoenix-<version>-query-server-thin-client.jar URL at the Phoenix Query Server, and enabled autocommit via phoenix.connection.autoCommit in hbase-site.xml). I also have a time dimension table with about 400 rows.
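For reference, a minimal sketch of that autocommit setting in hbase-site.xml (the property name is the one mentioned above; placing it in the client-side configuration of the machine running PDI is an assumption):

        <!-- hbase-site.xml on the Phoenix/PDI client; client-side placement is an assumption -->
        <property>
          <name>phoenix.connection.autoCommit</name>
          <value>true</value>
        </property>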

The problem is that Mondrian generates its queries assuming that table order does not matter. It produces a cartesian join in the FROM clause with the dimension tables listed first and the fact table last. If I change the order of the tables, the query runs successfully.
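For illustration, here is the reordered version of the generated query (taken from the stack trace below, with only the FROM clause order changed); this is the variant that runs successfully, presumably because Phoenix then builds the hash cache from the small DAYS table instead of the fact table:

        select "DAYS"."DAY" as "c0", sum("account_transactions"."AMOUNT") as "m0"
        from "account_transactions" as "account_transactions", "DAYS" as "DAYS"
        where "account_transactions"."DATE" = "DAYS"."DATE"
        group by "DAYS"."DAY"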

This ends with Phoenix trying to cache the 65M-row table in memory, so I get org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash cache (104857626 bytes) exceeds the maximum allowed size (104857600 bytes)

Apart from building a custom Mondrian that puts the fact table first, are there any hints or indexing tricks that could force Phoenix to iterate over the fact table first? As I see it, it should iterate over the 65M-row table and hash-join it with the much smaller dimension table.

Exception stack trace:

Caused by: mondrian.olap.MondrianException: Mondrian Error:Internal error: Error while loading segment; sql=[select "DAYS"."DAY" as "c0", sum("account_transactions"."AMOUNT") as "m0" from "DAYS" as "DAYS", "account_transactions" as "account_transactions" where "account_transactions"."DATE" = "DAYS"."DATE" group by "DAYS"."DAY"]
        at mondrian.resource.MondrianResource$_Def0.ex(MondrianResource.java:972)
        at mondrian.olap.Util.newInternal(Util.java:2404)
        at mondrian.olap.Util.newError(Util.java:2420)
        at mondrian.rolap.SqlStatement.handle(SqlStatement.java:353)
        at mondrian.rolap.SqlStatement.execute(SqlStatement.java:253)
        at mondrian.rolap.RolapUtil.executeQuery(RolapUtil.java:350)
        at mondrian.rolap.agg.SegmentLoader.createExecuteSql(SegmentLoader.java:625)
        ... 8 more
Caused by: java.sql.SQLException: Encountered exception in sub plan [0] execution.
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:171)
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:121)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:256)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:255)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1409)
        at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
        at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
        at mondrian.rolap.SqlStatement.execute(SqlStatement.java:200)
        ... 10 more
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash cache (104857626 bytes) exceeds the maximum allowed size (104857600 bytes)
        at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
        at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
        at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:353)
        at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:145)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
        ... 3 more

1 Answer:

Answer 0 (score: 1)

From the Phoenix documentation:

Hash Join vs. Sort-Merge Join

Basic hash join usually outperforms other types of join algorithms, but it has its limitations, the most significant of which is the assumption that one of the relations must be small enough to fit into memory. Phoenix therefore now implements both hash join and sort-merge join, to facilitate fast join operations as well as joins between two large tables.

Phoenix currently uses the hash join algorithm whenever possible, since it is usually much faster. However, there is a hint, USE_SORT_MERGE_JOIN, that forces a query to use a sort-merge join. The choice between these two join algorithms, together with detecting the smaller relation for a hash join, will be made automatically under the guidance provided by table statistics.

You can add the USE_SORT_MERGE_JOIN hint to your query so that Phoenix does not try to fit the relation into memory.

i.e. SELECT /*+ USE_SORT_MERGE_JOIN */ ...
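Applied to the query from the question, the hinted statement would look roughly like this (a sketch; table and column names are taken from the stack trace above, and since Mondrian generates this SQL itself, actually injecting the hint would require some customization on the Mondrian side):

        select /*+ USE_SORT_MERGE_JOIN */ "DAYS"."DAY" as "c0", sum("account_transactions"."AMOUNT") as "m0"
        from "DAYS" as "DAYS", "account_transactions" as "account_transactions"
        where "account_transactions"."DATE" = "DAYS"."DATE"
        group by "DAYS"."DAY"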

Alternatively, if you are confident that your relation will fit into memory, you can configure a larger maximum cache size.

https://phoenix.apache.org/tuning.html

phoenix.query.maxServerCacheBytes (default: 104857600, i.e. 100 MB)

Maximum size (in bytes) of a single sub-query result (usually the filtered result of a table) before compression and conversion to a hash map. Attempting to hash an intermediate sub-query result of a size bigger than this setting will result in a MaxServerCacheSizeExceededException.
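For example, a minimal sketch of raising the limit to 200 MB in hbase-site.xml (the 209715200 value is just an illustrative choice; size it to your heap, and my understanding is that this property is read on the client side):

        <!-- hbase-site.xml on the Phoenix client; 200 MB is an example value -->
        <property>
          <name>phoenix.query.maxServerCacheBytes</name>
          <value>209715200</value>
        </property>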