pyspark: resample a pyspark dataframe on date and time

Asked: 2020-06-28 13:27:27

Tags: pandas pyspark

How do I resample a pyspark dataframe the way I can in pandas, where pd.Grouper and pd.resample let me resample at H, 2H, 3H or weekly frequency? I have the sample pyspark dataframe below; how can I aggregate on the ind and date columns and resample every H / 2H / 3H?
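For reference, a minimal pandas sketch of the resampling behaviour referred to above (the pandas DataFrame pdf here is hypothetical and not part of the original question):

import pandas as pd

# hypothetical pandas equivalent: group by 'ind' and resample 'date' hourly
pdf = pd.DataFrame({
    "ind": ["Anand", "Anand", "Anand"],
    "date": pd.to_datetime(["2020-02-01 16:00:00", "2020-02-01 16:05:00", "2020-02-02 19:10:00"]),
    "sal": [12, 7, 14],
})
hourly = (pdf.groupby(["ind", pd.Grouper(key="date", freq="H")])["sal"]
             .mean()
             .reset_index())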

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

a = sqlContext.createDataFrame([["Anand", "2020-02-01 16:00:00", 12, "ba"],
                         ["Anand", "2020-02-01 16:05:00", 7, "ba" ]
                        ["Anand", "2020-02-02 19:10:00", 14,"sa"], 
                        ["Carl", "2020-02-01 16:00:00", 16,"da"], 
                        ["Carl", "2020-02-02 16:02:00", 12,"ga"],
                        ["Carl", "2020-02-02 17:10:00", 1,"ga"],
                        ["Eric", "2020-02-01 16:o0:00", 24, "sa"]], ['ind',"date","sal","imp"])
a.show()

+-----+-------------------+---+---+
|  ind|               date|sal|imp|
+-----+-------------------+---+---+
|Anand|2020-02-01 16:00:00| 12| ba|
|Anand|2020-02-01 16:05:00|  7| ba|
|Anand|2020-02-02 19:10:00| 14| sa|
| Carl|2020-02-01 16:00:00| 16| da|
| Carl|2020-02-02 16:02:00| 12| ga|
| Carl|2020-02-02 17:10:00|  1| ga|
| Eric|2020-02-01 16:00:00| 24| sa|
+-----+-------------------+---+---+

So, when aggregating on column ind and resampling date to hourly means of sal, the desired output may look like:

+-----+-------------------+---+
|  ind|               date|sal|
+-----+-------------------+---+
|Anand|2020-02-01 16:00:00|  9|
|Anand|2020-02-02 19:00:00| 14|
| Carl|2020-02-01 16:00:00|  9|
| Carl|2020-02-02 17:00:00|  1|
| Eric|2020-02-01 16:00:00| 24|
+-----+-------------------+---+

3 Answers:

Answer 0 (score: 3)

You can do exactly what the question describes: group by ind and date. With date_trunc, the date column can be rounded down to the hour before grouping:

from pyspark.sql import functions as F
a.groupBy('ind', F.date_trunc('hour', F.col('date')).alias('date'))\
   .agg(F.mean('sal')) \
   .orderBy('ind', 'date') \
   .show()

which prints

+-----+-------------------+--------+
|  ind|               date|avg(sal)|
+-----+-------------------+--------+
|Anand|2020-02-01 16:00:00|     9.5|
|Anand|2020-02-02 19:00:00|    14.0|
| Carl|2020-02-01 16:00:00|    14.0|
| Carl|2020-02-02 17:00:00|     1.0|
| Eric|2020-02-01 16:00:00|    24.0|
+-----+-------------------+--------+
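If coarser buckets such as 2h or 3h are needed, one possible sketch (not part of the original answer, and assuming the question's dataframe a) is Spark's F.window function, which assigns each timestamp to a fixed-width interval:

from pyspark.sql import functions as F

# bucket 'date' into 2-hour windows and average 'sal' per ind and bucket
(a.withColumn("date", F.to_timestamp("date"))
  .groupBy("ind", F.window("date", "2 hours").alias("w"))
  .agg(F.mean("sal").alias("sal"))
  .select("ind", F.col("w.start").alias("date"), "sal")
  .orderBy("ind", "date")
  .show())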

Answer 1 (score: 1)

One possible way is to use two windows: first determine whether the time difference within an ind, date partition is within one hour, then use that window together with the time_diff flag to compute the mean (note: for Anand, (12 + 7) / 2 = 9.5, whereas the expected output shows 9):

from pyspark.sql import Window
from pyspark.sql import functions as F

one_hrs = 1 * 60 * 60  # one hour in seconds

# partition by person and calendar day; order by date so first() is deterministic
w = Window.partitionBy("ind", F.to_date("date")).orderBy("date")
# additionally partition by the 1-hour flag computed below
w1 = Window.partitionBy("ind", F.to_date("date"), "time_diff")

(a.withColumn("date", F.to_timestamp("date"))
   .withColumn("first_date", F.first("date").over(w))
   .withColumn("time_diff", ((F.unix_timestamp("date") - F.unix_timestamp("first_date"))
                             <= one_hrs).cast("Integer"))
   .withColumn("sal", F.mean("sal").over(w1))
   .dropDuplicates(["ind", "sal", "time_diff"])
   .drop("first_date", "time_diff")
   .orderBy("ind")
   .show())

Answer 2 (score: 0)

Since the date is a string, a simple approach is to split it and aggregate.

import pyspark.sql.functions as F
a = sqlContext.createDataFrame([["Anand", "2020-02-01 16:00:00", 12, "ba"],
                         ["Anand", "2020-02-01 16:05:00", 7, "ba"],
                        ["Anand", "2020-02-02 19:10:00", 14,"sa"], 
                        ["Carl", "2020-02-01 16:00:00", 16,"da"], 
                        ["Carl", "2020-02-02 16:02:00", 12,"ga"],
                        ["Carl", "2020-02-02 17:10:00", 1,"ga"],
                        ["Eric", "2020-02-01 16:o0:00", 24, "sa"]], ['ind',"date","sal","imp"])
a_spli = a.withColumn("hour",F.split(F.col('date'),':')[0])

test_res = a_spli.groupby('ind','hour').agg(F.mean('sal'))

sparkts is a nice library for time-related tasks: https://github.com/sryza/spark-timeseries. Take a look.

test_res.show()
+-----+-------------+--------+
|  ind|         hour|avg(sal)|
+-----+-------------+--------+
|Anand|2020-02-01 16|     9.5|
|Anand|2020-02-02 19|    14.0|
| Carl|2020-02-01 16|    16.0|
| Carl|2020-02-02 16|    12.0|
| Carl|2020-02-02 17|     1.0|
| Eric|2020-02-01 16|    24.0|
+-----+-------------+--------+
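If the truncated hour string needs to be a proper timestamp again, one possible follow-up (not part of the original answer) is to parse it back with to_timestamp:

# parse the "yyyy-MM-dd HH" bucket string back into a timestamp column
test_res = test_res.withColumn("date", F.to_timestamp("hour", "yyyy-MM-dd HH"))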