What is the correct way to join these two Spark DataFrames?

Asked: 2018-03-28 22:17:30

Tags: scala apache-spark apache-spark-sql spark-dataframe outer-join

Suppose I have two Spark DataFrames:

val addStuffDf = Seq(
  ("A", "2018-03-22", 5),
  ("A", "2018-03-24", 1),
  ("B", "2018-03-24, 3))
.toDF("user", "dt", "count")

val removedStuffDf = Seq(
  ("C", "2018-03-25", 10),
  ("A", "2018-03-24", 5),
  ("B", "2018-03-25", 1)
).toDF("user", "dt", "count")

In the end I want to get a single DataFrame with summary statistics like this (the ordering actually doesn't matter):

+----+----------+-----+-------+
|user|        dt|added|removed|
+----+----------+-----+-------+
|   A|2018-03-22|    5|      0|
|   A|2018-03-24|    1|      5|
|   B|2018-03-24|    3|      0|
|   B|2018-03-25|    0|      1|
|   C|2018-03-25|    0|     10|
+----+----------+-----+-------+

Obviously, as a "step 0" I can simply rename the count columns so that I have DataFrames df1 and df2:

val df1 = addStuffDf.withColumnRenamed("count", "added")
df1.show()
+----+----------+-----+
|user|        dt|added|
+----+----------+-----+
|   A|2018-03-22|    5|
|   A|2018-03-24|    1|
|   B|2018-03-24|    3|
+----+----------+-----+

val df2 = removedStuffDf.withColumnRenamed("count", "removed")
df2.show()
+----+----------+-------+
|user|        dt|removed|
+----+----------+-------+
|   C|2018-03-25|     10|
|   A|2018-03-24|      5|
|   B|2018-03-25|      1|
+----+----------+-------+

But now I'm stuck on "step 1" - the transformation that zips df1 and df2 together. Logically, a full_outer join brings all the rows I need into a single DF, but then I need to somehow merge the duplicated columns:

df1.as('d1)
  .join(df2.as('d2),
        ($"d1.user"===$"d2.user" && $"d1.dt"===$"d2.dt"),
        "full_outer")
  .show()

+----+----------+-----+----+----------+-------+
|user|        dt|added|user|        dt|removed|
+----+----------+-----+----+----------+-------+
|null|      null| null|   C|2018-03-25|     10|
|null|      null| null|   B|2018-03-25|      1|
|   B|2018-03-24|    3|null|      null|   null|
|   A|2018-03-22|    5|null|      null|   null|
|   A|2018-03-24|    1|   A|2018-03-24|      5|
+----+----------+-----+----+----------+-------+

How can I merge these duplicated user and dt columns together? And, more generally - is this even the right approach to my problem, or is there a simpler/more efficient solution?
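
For reference, the only way I can think of to merge them by hand is to coalesce the duplicated key columns after the join - an untested sketch:

import org.apache.spark.sql.functions.coalesce

// Untested: keep one copy of each join key via coalesce, then
// replace the nulls produced by the outer join with 0.
val merged = df1.as("d1")
  .join(df2.as("d2"),
        $"d1.user" === $"d2.user" && $"d1.dt" === $"d2.dt",
        "full_outer")
  .select(
    coalesce($"d1.user", $"d2.user").as("user"),
    coalesce($"d1.dt", $"d2.dt").as("dt"),
    $"added",
    $"removed")
  .na.fill(0)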

1 Answer:

Answer 0 (score: 2)

Since the columns to be joined have matching names in both DataFrames, passing Seq("user", "dt") as the join condition will produce the desired merged table:

val addStuffDf = Seq(
  ("A", "2018-03-22", 5),
  ("A", "2018-03-24", 1),
  ("B", "2018-03-24", 3)
).toDF("user", "dt", "count")

val removedStuffDf = Seq(
  ("C", "2018-03-25", 10),
  ("A", "2018-03-24", 5),
  ("B", "2018-03-25", 1)
).toDF("user", "dt", "count")

val df1 = addStuffDf.withColumnRenamed("count", "added")
val df2 = removedStuffDf.withColumnRenamed("count", "removed")

df1.as('d1).join(df2.as('d2), Seq("user", "dt"), "full_outer").
  na.fill(0).
  show
// +----+----------+-----+-------+
// |user|        dt|added|removed|
// +----+----------+-----+-------+
// |   C|2018-03-25|    0|     10|
// |   B|2018-03-25|    0|      1|
// |   B|2018-03-24|    3|      0|
// |   A|2018-03-22|    5|      0|
// |   A|2018-03-24|    1|      5|
// +----+----------+-----+-------+
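
Note that na.fill(0) is what turns the nulls introduced by the full outer join into the zeros shown in the desired output. As a side note, the same summary can also be produced without a join at all. A sketch (assuming the same column names as above) that tags each row with its source, unions the two DataFrames, and pivots on the tag:

import org.apache.spark.sql.functions.lit

// Tag each row with its origin, stack the two inputs, then pivot
// the tag into separate added/removed columns.
addStuffDf.withColumn("kind", lit("added"))
  .union(removedStuffDf.withColumn("kind", lit("removed")))
  .groupBy("user", "dt")
  .pivot("kind", Seq("added", "removed"))
  .sum("count")
  .na.fill(0)
  .show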