Spark DataFrames: is it more efficient to filter during a join or after it?

Asked: 2018-06-18 09:04:51

Tags: apache-spark dataframe join apache-spark-sql

I've had some trouble finding an answer to this question, so I'm hoping someone can help me.

Here is some context:

I have two DataFrames, df1 and df2:

val df1: DataFrame = List((1, 2, 3), (2, 3, 3)).toDF("col1", "col2", "col3")
val df2: DataFrame = List((1, 5, 6), (1, 2, 5)).toDF("col1", "col2_bis", "col3_bis")

What I want to do is join df1 and df2 on "col1", but keep only the rows where df1("col2") < df2("col2_bis").

So my question is: is it more efficient to do it this way:

df1.join(df2, df1("col1") === df2("col1") and df1("col2") < df2("col2_bis"), "inner")

or this way:

df1.join(df2, Seq("col1"), "inner").filter(col("col2") < col("col2_bis"))

The result would be:

Array(Row(1, 2, 3, 5, 6)) with columns ("col1", "col2", "col3", "col2_bis", "col3_bis")

Do these two expressions resolve to the same execution plan, or is one of them more efficient than the other?

Thanks.

1 Answer:

Answer 0 (score: 2)

If you look at the query plans, they are the same, so there is no difference in the join itself: the Catalyst optimizer pushes the filter into the join condition in both cases. The only difference is an extra Project node in the second plan, because joining with Seq("col1") keeps a single col1 column.

scala> val df2 = List((1, 5, 6), (1, 2, 5)).toDF("col1", "col2_bis", "col3_bis")
df2: org.apache.spark.sql.DataFrame = [col1: int, col2_bis: int ... 1 more field]

scala> val df1 = List((1, 2, 3), (2, 3, 3)).toDF("col1", "col2", "col3")
df1: org.apache.spark.sql.DataFrame = [col1: int, col2: int ... 1 more field]

scala> df1.join(df2, df1("col1") === df2("col1") and df1("col2") < df2("col2_bis"), "inner")
res0: org.apache.spark.sql.DataFrame = [col1: int, col2: int ... 4 more fields]

scala> df1.join(df2, Seq("col1"), "inner").filter(col("col2") < col("col2_bis"))
res1: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [col1: int, col2: int ... 3 more fields]

scala> res0.show
+----+----+----+----+--------+--------+
|col1|col2|col3|col1|col2_bis|col3_bis|
+----+----+----+----+--------+--------+
|   1|   2|   3|   1|       5|       6|
+----+----+----+----+--------+--------+

scala> res1.show
+----+----+----+--------+--------+
|col1|col2|col3|col2_bis|col3_bis|
+----+----+----+--------+--------+
|   1|   2|   3|       5|       6|
+----+----+----+--------+--------+

scala> res0.explain
== Physical Plan ==
*BroadcastHashJoin [col1#21], [col1#7], Inner, BuildRight, (col2#22 < col2_bis#8)
:- LocalTableScan [col1#21, col2#22, col3#23]
+- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
   +- LocalTableScan [col1#7, col2_bis#8, col3_bis#9]

scala> res1.explain
== Physical Plan ==
*Project [col1#21, col2#22, col3#23, col2_bis#8, col3_bis#9]
+- *BroadcastHashJoin [col1#21], [col1#7], Inner, BuildRight, (col2#22 < col2_bis#8)
   :- LocalTableScan [col1#21, col2#22, col3#23]
   +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      +- LocalTableScan [col1#7, col2_bis#8, col3_bis#9]
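
As a side note, you can also inspect the plan programmatically instead of eyeballing the explain output. Here is a minimal sketch, assuming the same spark-shell session as above (df1, df2 and org.apache.spark.sql.functions._ already in scope; queryExecution and optimizedPlan are part of Spark's developer-facing API, and the names byCondition/byFilter are just for illustration):

scala> // Build both variants of the query.
scala> val byCondition = df1.join(df2, df1("col1") === df2("col1") and df1("col2") < df2("col2_bis"), "inner")
scala> val byFilter = df1.join(df2, Seq("col1"), "inner").filter(col("col2") < col("col2_bis"))

scala> // optimizedPlan is the logical plan after Catalyst's optimization
scala> // rules have run; the filter in byFilter should already appear
scala> // inside the join condition at this stage.
scala> println(byCondition.queryExecution.optimizedPlan)
scala> println(byFilter.queryExecution.optimizedPlan)

Comparing the two printed plans should show the same Join node with the pushed-down predicate, differing only in the Project discussed above.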