Exchange with buckets

Asked: 2018-11-21 23:27:25

Tags: apache-spark apache-spark-sql

I have two tables with bucketing enabled.

DESCRIBE EXTENDED table1

|Table         |table1                      |
|Owner         |user                        |
|Created       |Wed Nov 21 16:24:25 CST 2018|
|Last Access   |Wed Dec 31 18:00:00 CST 1969|
|Type          |MANAGED                     |
|Provider      |parquet                     |
|Num Buckets   |180                         |
|Bucket Columns|[`seq_id`]                  |
|Sort Columns  |[`seq_id`]                  |


DESCRIBE EXTENDED table2

|Table         |table2                      |
|Owner         |user                        |
|Created       |Wed Nov 21 16:15:09 CST 2018|
|Last Access   |Wed Dec 31 18:00:00 CST 1969|
|Type          |MANAGED                     |
|Provider      |parquet                     |
|Num Buckets   |180                         |
|Bucket Columns|[`seq_id`]                  |
|Sort Columns  |[`seq_id`]                  |

So I would expect that joining the two would avoid a shuffle (Exchange).

However, the Exchange is still there:

spark.table("table2").join(spark.table("table1"), "seq_id").explain
== Physical Plan ==
Project [seq_id#0, field1#1, ... 165 more fields]
+- SortMergeJoin [seq_id#0], [seq_id#196], Inner
   :- *Sort [seq_id#0 ASC NULLS FIRST], false, 0
   :  +- Exchange(coordinator id: 713544719) hashpartitioning(seq_id#0, 200), coordinator[target post-shuffle partition size: 77108864]
   :     +- *Project [seq_id#0, field1#1,  ... 73 more fields]
   :        +- *Filter isnotnull(seq_id#0)
   :           +- *FileScan parquet test2[seq_id#0, field1#1, ... 73 more fields] Batched: true, Format: Parquet, Location: InMemoryFileIndex[maprfs:/ds/hive/warehouse/test2..., PartitionFilters: [], PushedFilters: [IsNotNull(seq_id)], ReadSchema: struct<seq_id:string,field1:string...
   +- *Sort [seq_id#196 ASC NULLS FIRST], false, 0
      +- Exchange(coordinator id: 713544719) hashpartitioning(seq_id#196, 200), coordinator[target post-shuffle partition size: 77108864]
         +- *Project [line_s#195, seq_id#196, field1#197, ... 69 more fields]
            +- *Filter isnotnull(seq_id#196)
               +- *FileScan parquet test1[line_s#195,seq_id#196,field1#197,69 more fields] Batched: true, Format: Parquet, Location: InMemoryFileIndex[maprfs:/ds/test1..., PartitionFilters: [], PushedFilters: [IsNotNull(seq_id)], ReadSchema: struct<line_s:string,seq_id:string,field1:string,...

I'm using Spark 2.2.1. Any idea why the Exchange is still happening there?

The tables (table1 and table2) were created as follows:

spark.table("src_table1").write
  .bucketBy(180, "seq_id")
  .sortBy("seq_id")
  .saveAsTable("table1")

spark.table("src_table2").write
  .bucketBy(180, "seq_id")
  .sortBy("seq_id")
  .saveAsTable("table2")

The Hive tables src_table1 and src_table2 are in Parquet format, without bucketing.
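
For reference, two settings commonly interact with bucketed joins and can be checked at runtime. A minimal sketch (spark.conf.get is standard Spark API, and both keys exist in Spark 2.2):

// The tables have 180 buckets, while the shuffle in the plan above
// uses 200 partitions, which is driven by these two settings.
spark.conf.get("spark.sql.shuffle.partitions") // "200" by default
spark.conf.get("spark.sql.adaptive.enabled")   // "false" by default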

1 answer:

Answer 0 (score: 0):

It turns out that having adaptive query execution enabled (spark.sql.adaptive.enabled=true) was the problem. After disabling it, the Exchange is no longer there. I still need to dig deeper into why this happens.
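
For anyone hitting the same issue, a minimal sketch of the workaround described above (spark.conf.set is the standard runtime API; re-running explain should confirm the Exchange nodes are gone):

// Disable adaptive query execution so the planner keeps the tables'
// bucketing (180 buckets) instead of re-partitioning the data through
// the exchange coordinator.
spark.conf.set("spark.sql.adaptive.enabled", "false")

// Re-check the physical plan; the Exchange should no longer appear.
spark.table("table2").join(spark.table("table1"), "seq_id").explain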