Spark time series query with two date columns

Asked: 2017-03-22 11:04:45

Tags: apache-spark pyspark time-series apache-spark-sql

I need to do some calculations over historical data in Spark, but my case differs slightly from the examples found all over the internet. I have a dataset with three columns: enter_date, exit_date, client_id. I need to count the number of clients that were online during each hourly interval.

For example, consider the following data:

enter_date             | exit_date               | client_id
2017-03-01 12:30:00    | 2017-03-01 13:30:00     | 1
2017-03-01 12:45:00    | 2017-03-01 14:10:00     | 2
2017-03-01 13:00:00    | 2017-03-01 15:20:00     | 3

I need to get the following result:

time_interval          | count
2017-03-01 12:00:00    | 2
2017-03-01 13:00:00    | 3
2017-03-01 14:00:00    | 2
2017-03-01 15:00:00    | 1

As you can see, the count has to be based not only on enter_date but on both the enter_date and exit_date columns.

So, there are two main questions:

  1. Can Spark do this kind of calculation?
  2. If so, how?

2 Answers

Answer 0 (score: 2)

In Scala it can be implemented like this; I guess Python is similar:

// case class for the input rows; toDF needs the SQL implicits in scope
// (e.g. import sqlContext.implicits._)
case class Client(enter_date: String, exit_date: String, client_id: Int)

val clientList = List(
  Client("2017-03-01 12:30:00", "2017-03-01 13:30:00", 1),
  Client("2017-03-01 12:45:00", "2017-03-01 14:10:00", 2),
  Client("2017-03-01 13:00:00", "2017-03-01 15:20:00", 3)
)

val clientDF = sparkContext.parallelize(clientList).toDF
val timeFunctions = new TimeFunctions()

val result = clientDF.flatMap(
  // return the list of hourly buckets between "enter_date" and "exit_date"
  row => timeFunctions.getDiapason(row.getAs[String]("enter_date"), row.getAs[String]("exit_date"))
).map(time => (time, 1)).reduceByKey(_ + _).sortByKey(ascending = true)

result.foreach(println(_))

The result looks like this:

(2017-03-01 12:00:00,2)
(2017-03-01 13:00:00,3)
(2017-03-01 14:00:00,2)
(2017-03-01 15:00:00,1)

TimeFunctions can be implemented as follows:

import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import scala.collection.mutable.ArrayBuffer

class TimeFunctions {
  val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

  // returns every hourly bucket from the hour of `from` up to and including the hour of `to`
  def getDiapason(from: String, to: String): Seq[String] = {
    var fromDate = LocalDateTime.parse(from, formatter).withSecond(0).withMinute(0)
    val result = ArrayBuffer(formatter.format(fromDate))

    val toDate = LocalDateTime.parse(to, formatter).withSecond(0).withMinute(0)
    while (toDate.compareTo(fromDate) > 0) {
      fromDate = fromDate.plusHours(1)
      result += formatter.format(fromDate)
    }
    result
  }
}
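Since the question is tagged pyspark, here is a rough PySpark sketch of the same idea (my own addition, not the answerer's code). A plain Python helper plays the role of getDiapason, and the hourly buckets are counted with an RDD reduceByKey, so it does not rely on any newer SQL functions:

from datetime import datetime, timedelta
from pyspark.sql import SparkSession

FMT = "%Y-%m-%d %H:%M:%S"

def get_diapason(enter_date, exit_date):
    """Return every hourly bucket from the hour of enter_date to the hour of exit_date."""
    start = datetime.strptime(enter_date, FMT).replace(minute=0, second=0)
    end = datetime.strptime(exit_date, FMT).replace(minute=0, second=0)
    buckets = []
    while start <= end:
        buckets.append(start.strftime(FMT))
        start += timedelta(hours=1)
    return buckets

spark = SparkSession.builder.getOrCreate()
clients = spark.createDataFrame(
    [
        ("2017-03-01 12:30:00", "2017-03-01 13:30:00", 1),
        ("2017-03-01 12:45:00", "2017-03-01 14:10:00", 2),
        ("2017-03-01 13:00:00", "2017-03-01 15:20:00", 3),
    ],
    ["enter_date", "exit_date", "client_id"],
)

# explode each client into its hourly buckets, then count clients per bucket
result = (
    clients.rdd
    .flatMap(lambda row: get_diapason(row["enter_date"], row["exit_date"]))
    .map(lambda t: (t, 1))
    .reduceByKey(lambda a, b: a + b)
    .sortByKey()
)

for pair in result.collect():
    print(pair)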

Answer 1 (score: 1)

You can also use Spark SQL, but you need a second dataset containing the intervals. I used a separate CSV file, but in theory you could generate it as needed (see the sketch after the timeinterval.csv listing below).

My setup

Apache Spark in Java

  • spark-core_2.10
  • spark-sql_2.10

Required files:

timeinterval.csv

time_interval
01.03.2017 11:00:00
01.03.2017 12:00:00
01.03.2017 13:00:00
01.03.2017 14:00:00
01.03.2017 15:00:00
01.03.2017 16:00:00
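As mentioned above, this interval table does not have to be maintained by hand. A minimal PySpark sketch (my addition, not part of the original answer) that generates the same hourly rows and registers them as the timeinterval view:

from datetime import datetime, timedelta
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# generate one row per hour between the bounds, formatted like the CSV above
start = datetime(2017, 3, 1, 11, 0, 0)
end = datetime(2017, 3, 1, 16, 0, 0)

rows = []
current = start
while current <= end:
    rows.append((current.strftime("%d.%m.%Y %H:%M:%S"),))
    current += timedelta(hours=1)

intervals = spark.createDataFrame(rows, ["time_interval"])
intervals.createOrReplaceTempView("timeinterval")
intervals.show(truncate=False)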

test.csv

enter_date             | exit_date               | client_id
2017-03-01 12:30:00    | 2017-03-01 13:30:00     | 1
2017-03-01 12:45:00    | 2017-03-01 14:10:00     | 2
2017-03-01 13:00:00    | 2017-03-01 15:20:00     | 3

How I did it

I did it in Java, but since I use plain SQL, converting it should be very straightforward.

Dataset<Row> rowsTest = spark.read()
  .option("header", "true")
  .option("delimiter", ";")
  .option("quoteMode", "NONE")
  .csv("C:/Temp/stackoverflow/test.csv");

Dataset<Row> rowsTimeInterval = spark.read()
  .option("header", "true")
  .option("delimiter", ";")
  .option("quoteMode", "NONE")
  .csv("C:/Temp/stackoverflow/timeinterval.csv");

rowsTest.createOrReplaceTempView("test");
rowsTimeInterval.createOrReplaceTempView("timeinterval");

String sql = "SELECT timeinterval.time_interval,(" +
              "SELECT COUNT(test.client_id) FROM timeinterval AS sub" +
                " INNER JOIN test ON " +
                  " ((unix_timestamp(sub.time_interval,\"dd.MM.yyyy HH:mm:SS\") + 60*60) > unix_timestamp(test.enter_date,\"dd.MM.yyyy HH:mm:SS\"))" +
                  " AND" +
                  " (sub.time_interval < test.exit_date)" +
                " WHERE timeinterval.time_interval = sub.time_interval" +
              ") AS RowCount" +
              " FROM timeinterval";
Dataset<Row> result = spark.sql(sql);

result.show();

Here is the raw SQL statement:

SELECT timeinterval.time_interval,(
    SELECT COUNT(test.client_id)
    FROM timeinterval AS sub
    INNER JOIN test ON
        ((unix_timestamp(sub.time_interval,"dd.MM.yyyy HH:mm:SS") + 60*60) > unix_timestamp(test.enter_date,"dd.MM.yyyy HH:mm:SS"))
        AND
        (sub.time_interval < test.exit_date)
    WHERE
        timeinterval.time_interval = sub.time_interval
) AS RowCount
FROM timeinterval

Since I use the unix_timestamp function (see https://spark.apache.org/docs/1.6.2/api/java/org/apache/spark/sql/functions.html#unix_timestamp%28%29), you need Spark version 1.5.0 or higher.
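Because all the logic lives in the SQL statement, running it from PySpark instead of Java only changes the boilerplate around it. A minimal sketch (mine, not part of the answer), assuming the same two semicolon-delimited CSV files; the timestamp patterns are taken over unchanged from the answer and have to match how the dates are actually stored in your files:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# read both CSVs exactly like the Java version above
test = (
    spark.read.option("header", "true")
    .option("delimiter", ";")
    .csv("C:/Temp/stackoverflow/test.csv")
)
intervals = (
    spark.read.option("header", "true")
    .option("delimiter", ";")
    .csv("C:/Temp/stackoverflow/timeinterval.csv")
)

test.createOrReplaceTempView("test")
intervals.createOrReplaceTempView("timeinterval")

# the raw SQL statement from above, unchanged
result = spark.sql("""
    SELECT timeinterval.time_interval, (
        SELECT COUNT(test.client_id)
        FROM timeinterval AS sub
        INNER JOIN test ON
            ((unix_timestamp(sub.time_interval, "dd.MM.yyyy HH:mm:SS") + 60*60)
                > unix_timestamp(test.enter_date, "dd.MM.yyyy HH:mm:SS"))
            AND (sub.time_interval < test.exit_date)
        WHERE timeinterval.time_interval = sub.time_interval
    ) AS RowCount
    FROM timeinterval
""")

result.show()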

Result

+-------------------+--------+
|      time_interval|RowCount|
+-------------------+--------+
|01.03.2017 11:00:00|       0|
|01.03.2017 12:00:00|       2|
|01.03.2017 13:00:00|       3|
|01.03.2017 14:00:00|       2|
|01.03.2017 15:00:00|       1|
|01.03.2017 16:00:00|       0|
+-------------------+--------+