Filter rows by the maximum timestamp per category

Time: 2017-11-03 21:08:58

Tags: scala apache-spark

I have an RDD with multiple rows that looks like the following.

val row: RDD[((String, String), (String, String, String))]

The value is a sequence of tuples. Within each tuple, the last String is the timestamp and the second String is the category. I want to filter this sequence to keep, for each category, only the row with the maximum timestamp.

(A,B)       Id      Category        Timestamp
-------------------------------------------------------
(123,abc)   1       A              2016-07-22 21:22:59+0000
(234,bcd)   2       B              2016-07-20 21:21:20+0000
(123,abc)   1       A              2017-07-09 21:22:59+0000
(345,cde)   4       C              2016-07-05 09:22:30+0000
(456,def)   5       D              2016-07-21 07:32:06+0000
(234,bcd)   2       B              2015-07-20 21:21:20+0000

I want to keep one row per category, and I'm looking for some help getting the row with the maximum timestamp for each category. The result I expect is

(A,B)       Id      Category        Timestamp
-------------------------------------------------------
(234,bcd)   2       B              2016-07-20 21:21:20+0000
(123,abc)   1       A              2017-07-09 21:22:59+0000
(345,cde)   4       C              2016-07-05 09:22:30+0000
(456,def)   5       D              2016-07-21 07:32:06+0000

1 Answer:

Answer 0 (score: 1)

Given the input dataframe as

+---------+---+--------+------------------------+
|(A,B)    |Id |Category|Timestamp               |
+---------+---+--------+------------------------+
|[123,abc]|1  |A       |2016-07-22 21:22:59+0000|
|[234,bcd]|2  |B       |2016-07-20 21:21:20+0000|
|[123,abc]|1  |A       |2017-07-09 21:22:59+0000|
|[345,cde]|4  |C       |2016-07-05 09:22:30+0000|
|[456,def]|5  |D       |2016-07-21 07:32:06+0000|
|[234,bcd]|2  |B       |2015-07-20 21:21:20+0000|
+---------+---+--------+------------------------+
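
For reference, here is a minimal, self-contained way to recreate this input for local testing; the SparkSession setup and the use of a nested tuple for the (A,B) column are assumptions for illustration, not part of the original answer.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("max-timestamp-per-category")
  .getOrCreate()
import spark.implicits._

// recreate the question's sample rows; the (A,B) pair becomes a struct column
val df = Seq(
  (("123", "abc"), "1", "A", "2016-07-22 21:22:59+0000"),
  (("234", "bcd"), "2", "B", "2016-07-20 21:21:20+0000"),
  (("123", "abc"), "1", "A", "2017-07-09 21:22:59+0000"),
  (("345", "cde"), "4", "C", "2016-07-05 09:22:30+0000"),
  (("456", "def"), "5", "D", "2016-07-21 07:32:06+0000"),
  (("234", "bcd"), "2", "B", "2015-07-20 21:21:20+0000")
).toDF("(A,B)", "Id", "Category", "Timestamp")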

You can do the following to get the result dataframe you need:

import org.apache.spark.sql.functions._
import spark.implicits._ // for the $ column syntax

// order by timestamp descending, then keep the first row seen per category
val requiredDataframe = df.orderBy($"Timestamp".desc)
  .groupBy("Category")
  .agg(first("(A,B)").as("(A,B)"), first("Id").as("Id"), first("Timestamp").as("Timestamp"))

You should get the requiredDataframe as

+--------+---------+---+------------------------+
|Category|(A,B)    |Id |Timestamp               |
+--------+---------+---+------------------------+
|B       |[234,bcd]|2  |2016-07-20 21:21:20+0000|
|D       |[456,def]|5  |2016-07-21 07:32:06+0000|
|C       |[345,cde]|4  |2016-07-05 09:22:30+0000|
|A       |[123,abc]|1  |2017-07-09 21:22:59+0000|
+--------+---------+---+------------------------+

Alternatively, you can use a Window function, as shown below. Note that relying on a global orderBy before groupBy is not guaranteed to control which row first() picks, so the window approach is the more reliable way to express "latest row per category":

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._ // for the $ column syntax

// keep only the top-ranked (latest) row within each category
val windowSpec = Window.partitionBy("Category").orderBy($"Timestamp".desc)
df.withColumn("rank", rank().over(windowSpec)).filter($"rank" === lit(1)).drop("rank")
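
Since the question starts from an RDD, the same "latest row per category" can also be computed before converting to a DataFrame. Below is a minimal sketch using reduceByKey; the pair-RDD shape follows the question's description, and it relies on the fixed-width timestamp strings comparing correctly as plain strings.

// hypothetical pair RDD matching the question's description:
// key = (A, B), value = (Id, Category, Timestamp)
val rdd = spark.sparkContext.parallelize(Seq(
  (("123", "abc"), ("1", "A", "2016-07-22 21:22:59+0000")),
  (("234", "bcd"), ("2", "B", "2016-07-20 21:21:20+0000")),
  (("123", "abc"), ("1", "A", "2017-07-09 21:22:59+0000")),
  (("345", "cde"), ("4", "C", "2016-07-05 09:22:30+0000")),
  (("456", "def"), ("5", "D", "2016-07-21 07:32:06+0000")),
  (("234", "bcd"), ("2", "B", "2015-07-20 21:21:20+0000"))
))

// keep, per category, the row with the largest timestamp string;
// "yyyy-MM-dd HH:mm:ss+0000" is fixed-width, so string comparison
// matches chronological order
val latestPerCategory = rdd
  .keyBy { case (_, (_, category, _)) => category }
  .reduceByKey((a, b) => if (a._2._3 >= b._2._3) a else b)
  .values

latestPerCategory.collect().foreach(println)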