How to use Spark DataFrames

Time: 2017-01-27 04:51:32

Tags: scala apache-spark spark-dataframe

How do I merge two dataframes, removing duplicates by comparing columns?



I have two dataframes with the same column names:
a.show()
+-----+----------+--------+
| name|      date|duration|
+-----+----------+--------+
|  bob|2015-01-13|       4|
|alice|2015-04-23|      10|
+-----+----------+--------+
b.show()
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-12|       3|
|alice2|2015-04-13|      10|
+------+----------+--------+
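
(For reference, the sample dataframes above can be rebuilt with a small setup; this is a hypothetical sketch assuming a Spark 1.x SQLContext named sqlContext:)

import sqlContext.implicits._

// Rebuild the sample data shown above
val a = Seq(
  ("bob", "2015-01-13", 4),
  ("alice", "2015-04-23", 10)
).toDF("name", "date", "duration")

val b = Seq(
  ("bob", "2015-01-12", 3),
  ("alice2", "2015-04-13", 10)
).toDF("name", "date", "duration")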




What I want to do is merge the two dataframes and keep only unique rows, by applying two conditions:

1. For the same name, the duration is the sum of the durations.
2. For the same name, the final date is the latest date.




Final output will be:
final.show()
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-13|       7|
| alice|2015-04-23|      10|
|alice2|2015-04-13|      10|
+------+----------+--------+

I tried the following approach:

import org.apache.spark.sql.functions.sum
import sqlContext.implicits._   // for the $"name" column syntax

// Take the union of the two dataframes
val df = a.unionAll(b)

// Group by name and sum the durations
val grouped = df.groupBy("name").agg($"name", sum("duration"))

// Join back to the original rows to recover the date column,
// dropping the per-row duration in favor of the summed one
val j = df.join(grouped, "name")
  .drop("duration")
  .withColumnRenamed("sum(duration)", "duration")

and I got 
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-13|       7|
| alice|2015-04-23|      10|
|   bob|2015-01-12|       7|
|alice2|2015-04-23|      10|
+------+----------+--------+



How can I now remove the duplicates by comparing the dates? Would registering this as a table and running a SQL query work? I am a beginner with Spark SQL, and I feel the way I am approaching this problem is awkward. Is there a better way to do this kind of data processing?
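
(For reference, the SQL route asked about here does work; a rough sketch using the Spark 1.x API, registerTempTable and sqlContext.sql, with a hypothetical table name "events":)

// Register the union of both dataframes as a temporary table,
// then aggregate in plain SQL
a.unionAll(b).registerTempTable("events")

val merged = sqlContext.sql(
  "SELECT name, MAX(date) AS date, SUM(duration) AS duration " +
  "FROM events GROUP BY name")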

1 Answer:

Answer 0 (score: 0):

You can take max(date) directly in the groupBy() aggregation. There is no need to join grouped back to df.

import org.apache.spark.sql.functions.{max, sum}

// In 1.3.x, the grouping column "name" must be listed explicitly
// for it to show up in the result:
val grouped = df.groupBy("name").agg($"name", sum("duration"), max("date"))

// In 1.4+, the grouping column "name" is included automatically:
val grouped = df.groupBy("name").agg(sum("duration"), max("date"))
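
Putting it together, a minimal end-to-end sketch (assuming Spark 1.4+; unionAll was deprecated in favor of union in 2.0, and the alias calls are added here to get clean column names):

import org.apache.spark.sql.functions.{max, sum}

// Union both dataframes, then aggregate in one pass:
// keep the latest date and sum the durations per name
val result = a.unionAll(b)
  .groupBy("name")
  .agg(max("date").alias("date"), sum("duration").alias("duration"))

result.show()
// +------+----------+--------+
// |  name|      date|duration|
// +------+----------+--------+
// |   bob|2015-01-13|       7|
// | alice|2015-04-23|      10|
// |alice2|2015-04-13|      10|
// +------+----------+--------+
// (row order may vary)

Because date is stored here as an ISO-formatted string (yyyy-MM-dd), max() compares lexicographically, which coincides with chronological order; for other date formats the column should be cast to a date type first.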