Filter rows based on a timestamp in another column, Spark Scala

Asked: 2018-03-18 14:53:23

Tags: scala apache-spark apache-spark-sql spark-dataframe

Suppose I have the following DataFrame in Spark Scala:

 +--------+--------------------+--------------------+
 |Index   |                Date|              Date_x|
 +--------+--------------------+--------------------+
 |       1|2018-01-31T20:33:...|2018-01-31T21:18:...|
 |       1|2018-01-31T20:35:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:04:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:05:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:15:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:16:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:19:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:20:...|2018-01-31T21:18:...|
 |       2|2018-01-31T19:43:...|2018-01-31T20:35:...|
 |       2|2018-01-31T19:44:...|2018-01-31T20:35:...|
 |       2|2018-01-31T20:36:...|2018-01-31T20:35:...|
 +--------+--------------------+--------------------+

I want to remove, for each Index, the rows where Date < Date_x, so that the result looks like this:

 +--------+--------------------+--------------------+
 |Index   |                Date|              Date_x|
 +--------+--------------------+--------------------+
 |       1|2018-01-31T21:19:...|2018-01-31T21:18:...|
 |       1|2018-01-31T21:20:...|2018-01-31T21:18:...|
 |       2|2018-01-31T20:36:...|2018-01-31T20:35:...|
 +--------+--------------------+--------------------+

I tried adding a column x_idx with monotonically_increasing_id() and then, for each Index, getting the min(x_idx) of the rows that satisfy the condition (Date > Date_x), so that I could afterwards remove the rows from the dataframe that do not satisfy it. But it does not seem to work for me; I probably misunderstand how agg() works. Thanks for your help!

  import org.apache.spark.sql.functions.{min, monotonically_increasing_id}
  import spark.implicits._   // assuming `spark` is the SparkSession, for the $"col" syntax

  val test_df = df.withColumn("x_idx", monotonically_increasing_id())
  val newIdx = test_df
    .filter($"Date" > "Date_x")
    .groupBy($"Index")
    .agg(min($"x_idx"))
    .toDF("n_Index", "min_x_idx")

  newIdx.show

      +-------+---------+
      |n_Index|min_x_idx|
      +-------+---------+
      +-------+---------+
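
For what it's worth, if I drop the filter, the groupBy/agg part does return one row per Index, so presumably the issue is in the filter itself:

  // Same aggregation without the filter, just to check agg() in isolation;
  // the result column is named "min(x_idx)" unless renamed with alias or toDF.
  test_df.groupBy($"Index").agg(min($"x_idx")).show()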

2 Answers:

Answer 0 (score: 1)

You forgot to add the $ in

.filter($"Date" > "Date_x")

so the correct filter is

.filter($"Date" > $"Date_x")
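
Without the $, the right-hand side is treated as a plain string literal rather than a column reference, so the comparison never holds and the result comes back empty. In other words, the original filter was effectively doing

import org.apache.spark.sql.functions.lit

// The Date column gets compared against the literal string "Date_x",
// not against the Date_x column, so no row can satisfy the predicate.
test_df.filter($"Date" > lit("Date_x"))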

You can also use alias instead of calling toDF, as in

val newIdx = test_df
  .filter($"Date" > $"Date_x")
  .groupBy($"Index".as("n_Index"))
  .agg(min($"x_idx").as("min_x_idx"))

This should give you the output

+-------+---------+
|n_Index|min_x_idx|
+-------+---------+
|1      |6        |
|2      |10       |
+-------+---------+
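
If you then want to finish the original plan and actually drop the earlier rows per Index, one possible follow-up (just a sketch, reusing the n_Index / min_x_idx names from above and assuming the x_idx order matches the Date order within each Index, as in the sample) is to join newIdx back and keep only the rows whose x_idx is at or above the per-Index minimum:

// Join the per-Index minimum position back onto test_df and keep the
// rows at or after the first row where Date > Date_x.
val result = test_df
  .join(newIdx, test_df("Index") === newIdx("n_Index"))
  .filter($"x_idx" >= $"min_x_idx")
  .select("Index", "Date", "Date_x")

result.show(false)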

Answer 1 (score: 0)

The filter condition is probably filtering out all the records. Print the dataframe right after the filter and check that the filter works as expected.

 val newIdx = test_df
   .filter($"Date" > $"Date_x")
   .show