Why does joining two Spark DataFrames fail unless I add .as('alias) to both?

Time: 2018-03-28 22:36:49

Tags: scala apache-spark join apache-spark-sql spark-dataframe

Suppose, for whatever reason, we want to join two Spark DataFrames:

// (outside spark-shell you also need: import spark.implicits._ for toDF)
val df1 = Seq(("A", 1), ("B", 2), ("C", 3)).toDF("agent", "in_count")
val df2 = Seq(("A", 2), ("C", 2), ("D", 2)).toDF("agent", "out_count")

This can be done with the following code:

val joinedDf = df1.as('d1).join(df2.as('d2), ($"d1.agent" === $"d2.agent"))
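As a side note, 'd1 here is a Scala Symbol literal; Dataset.as also has a String overload, so the two lines below are equivalent ways of aliasing:

val aliased1 = df1.as('d1)   // Symbol overload
val aliased2 = df1.as("d1")  // String overload, same effect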

// Result:
joinedDf.show

+-----+--------+-----+---------+
|agent|in_count|agent|out_count|
+-----+--------+-----+---------+
|    A|       1|    A|        2|
|    C|       3|    C|        2|
+-----+--------+-----+---------+
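
The aliases are also what let you pick the duplicate agent columns apart afterwards. A minimal sketch against the joinedDf above:

// qualified column references resolve through the registered aliases
joinedDf.select($"d1.agent", $"in_count", $"out_count").show()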

Now, what I don't understand is: why does it only work when I use the aliases df1.as('d1) and df2.as('d2)? I can imagine there would be a name clash between the columns if I wrote it plainly, like this:
val joinedDf = df1.join(df2, ($"df1.agent" === $"df2.agent")) // fails

But... I still don't understand why I need .as(alias) on both DataFrames. Aliasing only one of them:

df1.as('d1).join(df2, ($"d1.agent" === $"df2.agent")).show()

fails with:
org.apache.spark.sql.AnalysisException: cannot resolve '`df2.agent`' given input columns: [agent, in_count, agent, out_count];;
'Join Inner, (agent#25 = 'df2.agent)
:- SubqueryAlias d1
:  +- Project [_1#22 AS agent#25, _2#23 AS in_count#26]
:     +- LocalRelation [_1#22, _2#23]
+- Project [_1#32 AS agent#35, _2#33 AS out_count#36]
   +- LocalRelation [_1#32, _2#33]

Why is the last example invalid?

1 Answer:

Answer 0 (score: 4)

Hi, when you use an alias, the DataFrame is converted to an org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [agent: string, in_count: int] registered under that name in the query plan (the SubqueryAlias d1 node in your error output), which is why you can refer to $"d1.agent" there. The plain Scala variable name df2, by contrast, is never registered with Spark's analyzer, so $"df2.agent" cannot be resolved.
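
So a minimal fix for the failing example is simply to alias df2 as well, giving the analyzer a name to resolve d2.agent against (this is exactly the working variant from the question):

df1.as('d1).join(df2.as('d2), $"d1.agent" === $"d2.agent").show()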

If you want to join the DataFrames without aliases, you can do it like this instead:

scala> val joinedDf = df1.join(df2, (df1("agent") === df2("agent")))
joinedDf: org.apache.spark.sql.DataFrame = [agent: string, in_count: int ... 2 more fields]

scala> joinedDf.show
+-----+--------+-----+---------+
|agent|in_count|agent|out_count|
+-----+--------+-----+---------+
|    A|       1|    A|        2|
|    C|       3|    C|        2|
+-----+--------+-----+---------+
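
For completeness, another standard option (not in the original answer) is the usingColumns variant of join, which merges the join key into a single column and so avoids the duplicate agent entirely:

scala> df1.join(df2, Seq("agent")).show  // result columns: agent, in_count, out_count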