Updating DataFrame values by timestamp

Date: 2019-02-18 11:34:51

Tags: scala apache-spark dataframe bigdata spark-streaming

I have this DataFrame:

+----------+---------------------+------+---+----+
|customerid|event                |A     |B  |C   |
+----------+---------------------+------+---+----+
|   1222222|2019-02-07 06:50:40.0|aaaaaa| 25|5025|
|   1222222|2019-02-07 06:50:42.0|aaaaaa| 35|5000|
|   1222222|2019-02-07 06:51:56.0|aaaaaa|100|4965|
+----------+---------------------+------+---+----+

I want to update the value of column C based on the event (timestamp) column, keeping only the row with the latest value in a new DataFrame, like this:

+----------+---------------------+------+---+----+
|customerid|event                |A     |B  |C   |
+----------+---------------------+------+---+----+
|   1222222|2019-02-07 06:51:56.0|aaaaaa|100|4965|
+----------+---------------------+------+---+----+

The data arrives in streaming mode via Spark Streaming.

1 answer:

Answer 0 (score: 0)

You can try creating a row number partitioned by customerid and ordered by event descending, then keep the rows where rownum is 1. Hope this helps.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

df.withColumn("rownum", row_number().over(Window.partitionBy("customerid").orderBy(col("event").desc)))
  .filter(col("rownum") === 1)
  .drop("rownum")
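A self-contained sketch of this approach, assuming a local SparkSession and rebuilding the sample rows from the question as a batch DataFrame (the names `spark`, `df`, and `latest` are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("latest-per-customer")
  .getOrCreate()
import spark.implicits._

// Sample data mirroring the question's DataFrame
val df = Seq(
  ("1222222", "2019-02-07 06:50:40.0", "aaaaaa", 25, 5025),
  ("1222222", "2019-02-07 06:50:42.0", "aaaaaa", 35, 5000),
  ("1222222", "2019-02-07 06:51:56.0", "aaaaaa", 100, 4965)
).toDF("customerid", "event", "A", "B", "C")

// Number the rows per customer with the newest event first,
// then keep only the first row of each partition
val latest = df
  .withColumn("rownum",
    row_number().over(Window.partitionBy("customerid").orderBy(col("event").desc)))
  .filter(col("rownum") === 1)
  .drop("rownum")

latest.show(false)
```

One caveat for the streaming case the question mentions: Structured Streaming does not support non-time-based window functions such as `row_number` on a streaming DataFrame, so this snippet applies to batch processing; for a stream, an aggregation (e.g. `groupBy` with `max`) or `dropDuplicates` with a watermark is the usual alternative.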