Why is an Exchange operator added for a collect_set aggregation in a query over a bucketed table?

Asked: 2017-12-21 14:25:15

Tags: apache-spark apache-spark-sql apache-spark-2.2

I am using Spark 2.2 and experimenting with Spark's bucketing feature. I created a bucketed table; here is the output of desc formatted my_bucketed_tbl (a sketch of how such a table might have been created follows the output):

+--------------------+--------------------+-------+
|            col_name|           data_type|comment|
+--------------------+--------------------+-------+
|              bundle|              string|   null|
|                 ifa|              string|   null|
|               date_|                date|   null|
|                hour|                 int|   null|
|                    |                    |       |
|# Detailed Table ...|                    |       |
|            Database|             default|       |
|               Table|     my_bucketed_tbl|       |
|               Owner|            zeppelin|       |
|             Created|Thu Dec 21 13:43:...|       |
|         Last Access|Thu Jan 01 00:00:...|       |
|                Type|            EXTERNAL|       |
|            Provider|                 orc|       |
|         Num Buckets|                  16|       |
|      Bucket Columns|             [`ifa`]|       |
|        Sort Columns|             [`ifa`]|       |
|    Table Properties|[transient_lastDd...|       |
|            Location|hdfs:/user/hive/w...|       |
|       Serde Library|org.apache.hadoop...|       |
|         InputFormat|org.apache.hadoop...|       |
|        OutputFormat|org.apache.hadoop...|       |
|  Storage Properties|[serialization.fo...|       |
+--------------------+--------------------+-------+
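
For reference, a minimal sketch of how a table with this layout might be created: 16 buckets, bucketed and sorted by ifa, stored as ORC. The source path and input DataFrame are assumptions and not taken from the question:

// Hypothetical source data; the real input is not shown in the question.
val df = spark.read.orc("hdfs:///path/to/source_data")

df.write
  .format("orc")
  .bucketBy(16, "ifa")           // 16 buckets on the ifa column, as in desc formatted
  .sortBy("ifa")                 // sorted within each bucket by ifa
  .option("path", "hdfs:///user/hive/warehouse/my_bucketed_tbl")  // external location, assumed
  .saveAsTable("my_bucketed_tbl")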

When I run explain on a group-by query, I can see that the Exchange stage has been spared:

sql("select ifa,max(bundle) from my_bucketed_tbl group by ifa").explain

== Physical Plan ==
SortAggregate(key=[ifa#932], functions=[max(bundle#920)])
+- SortAggregate(key=[ifa#932], functions=[partial_max(bundle#920)])
   +- *Sort [ifa#932 ASC NULLS FIRST], false, 0
      +- *FileScan orc default.level_1[bundle#920,ifa#932] Batched: false, Format: ORC, Location: InMemoryFileIndex[hdfs://ip-10-44-9-73.ec2.internal:8020/user/hive/warehouse/level_1/date_=2017-1..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<bundle:string,ifa:string>

However, when I replace max with Spark's collect_set function, the execution plan is the same as for a non-bucketed table, meaning the Exchange stage is not spared:

sql("select ifa,collect_set(bundle) from my_bucketed_tbl group by ifa").explain

== Physical Plan ==
ObjectHashAggregate(keys=[ifa#1010], functions=[collect_set(bundle#998, 0, 0)])
+- Exchange hashpartitioning(ifa#1010, 200)
   +- ObjectHashAggregate(keys=[ifa#1010], functions=[partial_collect_set(bundle#998, 0, 0)])
      +- *FileScan orc default.level_1[bundle#998,ifa#1010] Batched: false, Format: ORC, Location: InMemoryFileIndex[hdfs://ip-10-44-9-73.ec2.internal:8020/user/hive/warehouse/level_1/date_=2017-1..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<bundle:string,ifa:string>

Am I missing some configuration, or is this a current limitation of Spark's bucketing?
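
One configuration worth ruling out (not mentioned above) is the flag that controls whether Spark uses bucketing metadata at all; it defaults to true, so a quick check suffices:

spark.conf.get("spark.sql.sources.bucketing.enabled")   // expected to return "true"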

1 answer:

Answer 0 (score: 1)

This issue was fixed in version 2.2.1. You can find the Jira issue here.
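
After upgrading to 2.2.1, one way to verify the fix is to inspect the physical plan programmatically instead of reading explain output. This is a minimal sketch reusing the table and columns from the question:

import org.apache.spark.sql.execution.exchange.Exchange

val plan = sql("select ifa, collect_set(bundle) from my_bucketed_tbl group by ifa")
  .queryExecution.executedPlan

// collect walks the physical plan tree; an empty result means no shuffle was introduced
val exchanges = plan.collect { case e: Exchange => e }
println(s"Exchange operators in plan: ${exchanges.length}")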