Upsert two dataframes in Scala

Asked: 2018-10-09 20:05:52

Tags: scala apache-spark

I have two data sources, both of which have an opinion about the current state of the same set of entities. Either source may contain the more recent data for any given entity, and that data may or may not be from the current date. For example:

val df1 = Seq((1, "green", "there", "2018-01-19"), (2, "yellow", "there", "2018-01-18"), (4, "yellow", "here", "2018-01-20")).toDF("id", "status", "location", "date")

val df2 = Seq((2, "red", "here", "2018-01-20"), (3, "green", "there", "2018-01-20"), (4, "green", "here", "2018-01-19")).toDF("id", "status", "location", "date")
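(All of the snippets in this post assume a SparkSession with its implicits in scope. A minimal assumed setup, not part of the original post, would look something like this:)

import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Assumed setup (not from the original post): a local session whose
// implicits enable .toDF and the $"col" syntax used below.
val spark = SparkSession.builder().appName("upsert-example").master("local[*]").getOrCreate()
import spark.implicits._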

df1.show
+---+------+--------+----------+
| id|status|location|      date|
+---+------+--------+----------+
|  1| green|   there|2018-01-19|
|  2|yellow|   there|2018-01-18|
|  4|yellow|    here|2018-01-20|
+---+------+--------+----------+

df2.show
+---+------+--------+----------+
| id|status|location|      date|
+---+------+--------+----------+
|  2|   red|    here|2018-01-20|
|  3| green|   there|2018-01-20|
|  4| green|    here|2018-01-19|
+---+------+--------+----------+

I'd like the output to be the most recent state of each entity:

+---+------+--------+----------+
| id|status|location|      date|
+---+------+--------+----------+
|  1| green|   there|2018-01-19|
|  2|   red|    here|2018-01-20|
|  3| green|   there|2018-01-20|
|  4|yellow|    here|2018-01-20|
+---+------+--------+----------+

An approach that seems to work is to join the two tables together and then do a kind of custom coalesce based on the date:

val joined = df1.join(df2, df1("id") === df2("id"), "outer")

joined.show
+----+------+--------+----------+----+------+--------+----------+
|  id|status|location|      date|  id|status|location|      date|
+----+------+--------+----------+----+------+--------+----------+
|   1| green|   there|2018-01-19|null|  null|    null|      null|
|null|  null|    null|      null|   3| green|   there|2018-01-20|
|   4|yellow|    here|2018-01-20|   4| green|    here|2018-01-19|
|   2|yellow|   there|2018-01-18|   2|   red|    here|2018-01-20|
+----+------+--------+----------+----+------+--------+----------+

// Pick each column from whichever side has the more recent date.
def weirdCoal(name: String): Column =
  when(df1("date") > df2("date") || df2("date").isNull, df1(name)).otherwise(df2(name)) as name

val output = joined.select(df1.columns.map(weirdCoal): _*)

output.show
+---+------+--------+----------+
| id|status|location|      date|
+---+------+--------+----------+
|  1| green|   there|2018-01-19|
|  2|   red|    here|2018-01-20|
|  3| green|   there|2018-01-20|
|  4|yellow|    here|2018-01-20|
+---+------+--------+----------+

This is the output I expect.

I can also see doing this via some kind of union/aggregation approach, or via a window that partitions by id, sorts by date, and takes the last row.
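For what it's worth, a minimal sketch of that union/aggregation variant, assuming the df1 and df2 defined above (latest is just an illustrative name): pack each row into a struct with date as the leading field, take the max per id, and unpack. Spark compares structs field by field, and the yyyy-MM-dd strings sort chronologically, so the most recent row wins.

// Sketch of the union/aggregation variant; names are illustrative.
val latest = df1.union(df2)
  .groupBy($"id")
  .agg(max(struct($"date", $"status", $"location")) as "latest")
  .select($"id", $"latest.status", $"latest.location", $"latest.date")
  .orderBy($"id")

latest.show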

My question: is there an idiomatic way to do this?

1 Answer:

Answer 0 (score: 1)

Yes, it can be done with just a union and a window function:

df1.union(df2)
  .withColumn("rank", rank().over(Window.partitionBy($"id").orderBy($"date".desc)))
  .filter($"rank" === 1)  // keep only the most recent row for each id
  .drop($"rank")
  .orderBy($"id")
  .show

Output:

+---+------+--------+----------+
| id|status|location|      date|
+---+------+--------+----------+
|  1| green|   there|2018-01-19|
|  2|   red|    here|2018-01-20|
|  3| green|   there|2018-01-20|
|  4|yellow|    here|2018-01-20|
+---+------+--------+----------+

The code above unions the two dataframes, partitions the rows by id, orders each partition by date descending, and keeps only the top-ranked (most recent) row for each id.
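One caveat: rank keeps every row that ties for the most recent date within an id, so if both sources reported the same id on the same date, both rows would survive the filter. When exactly one row per id is required, row_number is the usual substitute; a minimal variant of the same code under that assumption:

df1.union(df2)
  .withColumn("rn", row_number().over(Window.partitionBy($"id").orderBy($"date".desc)))
  .filter($"rn" === 1)  // row_number is unique within each partition
  .drop("rn")
  .orderBy($"id")
  .show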