Calculating increasing date sequences in Spark

Time: 2019-06-09 13:39:34

Tags: scala date apache-spark apache-spark-sql

I have a DataFrame in Spark with a name column and a date column. For each name, I want to find all consecutive sequences of increasing dates (day after day) and calculate their duration. The output should contain the name, the start date (the first date of the sequence), and the duration of that period in days. How can I do this with Spark functions?

Example of a consecutive date sequence:

2019-03-12
2019-03-13
2019-03-14
2019-03-15

I have already written the following solution, but it counts the total number of days per name and does not split them into separate sequences:

val result = allDataDf
    .groupBy($"name")
    .agg(count($"date").as("timePeriod"))
    .orderBy($"timePeriod".desc)
    .head()

I also tried using rank, but for some reason the count column contains only 1s:

val names = Window
    .partitionBy($"name")
    .orderBy($"date")

val result = allDataDf
    .select($"name", $"date", rank over names as "rank")
    .groupBy($"name", $"date", $"rank")
    .agg(count($"*") as "count")

The output looks like this:

+-----------+----------+----+-----+
|stationName|      date|rank|count|
+-----------+----------+----+-----+
|       NAME|2019-03-24|   1|    1|
|       NAME|2019-03-25|   2|    1|
|       NAME|2019-03-27|   3|    1|
|       NAME|2019-03-28|   4|    1|
|       NAME|2019-01-29|   5|    1|
|       NAME|2019-03-30|   6|    1|
|       NAME|2019-03-31|   7|    1|
|       NAME|2019-04-02|   8|    1|
|       NAME|2019-04-05|   9|    1|
|       NAME|2019-04-07|  10|    1|
+-----------+----------+----+-----+

1 answer:

Answer 0 (score: 2):

Finding consecutive dates is quite easy in SQL. You can do it with a query like this:

WITH s AS (
   SELECT
    stationName,
    date,
    date_add(date, -(row_number() over (partition by stationName order by date))) as discriminator
  FROM stations
)
SELECT
  stationName,
  MIN(date) as start,
  COUNT(1) AS duration
FROM s GROUP BY stationName, discriminator
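
The trick is that subtracting the row number (ordered by date within each station) from the date itself produces a value that stays constant across a run of consecutive days, so it can serve as a group key; adding or omitting a `+ 1` offset does not change the grouping. For reference, the same idea can be sketched with the DataFrame API instead of a SQL string. This is only a sketch that assumes a DataFrame `df` with string columns `stationName` and `date` in `yyyy-MM-dd` format; the names `byStation`, `withDiscriminator` and `sequences` are introduced here for illustration:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Sketch only: assumes `df` has columns "stationName" and "date" (yyyy-MM-dd strings).
val byStation = Window.partitionBy(col("stationName")).orderBy(col("date"))

val withDiscriminator = df
  .withColumn("rn", row_number().over(byStation))
  // Subtracting the row number collapses each run of consecutive days
  // onto a single constant date, which can then be grouped on.
  .withColumn("discriminator", expr("date_add(date, -rn)"))

val sequences = withDiscriminator
  .groupBy(col("stationName"), col("discriminator"))
  .agg(min(col("date")).as("start"), count(lit(1)).as("duration"))
  .drop("discriminator")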

Fortunately, we can use SQL in Spark. Let's check whether it works (I used different dates):

import org.apache.spark.sql.functions.{col, date_format}
import spark.implicits._

val df = Seq(
    ("NAME1", "2019-03-22"),
    ("NAME1", "2019-03-23"),
    ("NAME1", "2019-03-24"),
    ("NAME1", "2019-03-25"),

    ("NAME1", "2019-03-27"),
    ("NAME1", "2019-03-28"),

    ("NAME2", "2019-03-27"),
    ("NAME2", "2019-03-28"),

    ("NAME2", "2019-03-30"),
    ("NAME2", "2019-03-31"),

    ("NAME2", "2019-04-04"),
    ("NAME2", "2019-04-05"),
    ("NAME2", "2019-04-06")
  ).toDF("stationName", "date")
  .withColumn("date", date_format(col("date"), "yyyy-MM-dd"))

df.createTempView("stations")

val result = spark.sql(
  """
     |WITH s AS (
     |   SELECT
     |    stationName,
     |    date,
     |    date_add(date, -(row_number() over (partition by stationName order by date)) + 1) as discriminator
     |  FROM stations
     |)
     |SELECT
     |  stationName,
     |  MIN(date) as start,
     |  COUNT(1) AS duration
     |FROM s GROUP BY stationName, discriminator
   """.stripMargin)

result.show()

It seems to produce the correct dataset:

+-----------+----------+--------+
|stationName|     start|duration|
+-----------+----------+--------+
|      NAME1|2019-03-22|       4|
|      NAME1|2019-03-27|       2|
|      NAME2|2019-03-27|       2|
|      NAME2|2019-03-30|       2|
|      NAME2|2019-04-04|       3|
+-----------+----------+--------+
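
If the last day of each run is also needed, the same grouping provides it. A minimal sketch, assuming the same "stations" temp view as above; the `end_date` alias is a name introduced here:

val resultWithEnd = spark.sql(
  """
     |WITH s AS (
     |   SELECT
     |    stationName,
     |    date,
     |    date_add(date, -(row_number() over (partition by stationName order by date)) + 1) as discriminator
     |  FROM stations
     |)
     |SELECT
     |  stationName,
     |  MIN(date) AS start,
     |  MAX(date) AS end_date,
     |  COUNT(1) AS duration
     |FROM s GROUP BY stationName, discriminator
   """.stripMargin)

resultWithEnd.show()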