I have generated a DataFrame with timestamps for it:
from pyspark.sql.functions import avg, first

rdd = sc.parallelize(
    [
        (0, "A", 223, "201603_170302", "PORT"),
        (0, "A", 22, "201602_100302", "PORT"),
        (0, "A", 422, "201601_114300", "DOCK"),
        (1, "B", 3213, "201602_121302", "DOCK")
    ]
)
df_data = sqlContext.createDataFrame(rdd, ["id", "type", "cost", "date", "ship"])
so that I can generate a datetime column:
from datetime import datetime
from pyspark.sql.functions import udf

dt_parse = udf(lambda x: datetime.strptime(x, "%Y%m%d_%H%M%S"))
df_data = df_data.withColumn('datetime', dt_parse(df_data.date))
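As a side note, a minimal alternative sketch that avoids the Python UDF, assuming Spark's built-in unix_timestamp parser accepts the equivalent Java SimpleDateFormat pattern ("yyyyMMdd_HHmmss"):

from pyspark.sql.functions import unix_timestamp

# Parse the date string directly in Spark and cast it to a proper timestamp
df_data = df_data.withColumn(
    'datetime',
    unix_timestamp(df_data.date, "yyyyMMdd_HHmmss").cast("timestamp")
)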
But now I need to group the data into 6-hour intervals within each day.

Grouping by the hour works:

df_data.groupby(hour(df_data.datetime)).agg(count("ship").alias("ship")).show()

but this does not work for intervals other than an hour. Is there a way to do this?
Answer 0 (score: 1)

This works for me:
import pyspark.sql.functions as F

# ...

interval = 60 * 60 * 6  # 6 hours, in seconds

# Floor each timestamp down to the start of its 6-hour bucket and group on that bucket.
# obj['field'] is the name of the timestamp column (e.g. 'datetime' in the question).
gdf = dataframe.withColumn(
    'time_interval',
    F.from_unixtime(F.floor(F.unix_timestamp(dataframe[obj['field']]) / interval) * interval)
).groupBy('time_interval')

# and then something like gdf.agg(...).collect()
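Applied to the question's df_data, a sketch of counting ships per 6-hour bucket might look like this (assuming the 'datetime' column created above holds the parsed timestamp):

from pyspark.sql.functions import count, floor, from_unixtime, unix_timestamp

interval = 60 * 60 * 6  # 6-hour buckets, in seconds

# Bucket each row into its 6-hour window, then count ships per bucket
result = (
    df_data
    .withColumn(
        'time_interval',
        from_unixtime(floor(unix_timestamp(df_data['datetime']) / interval) * interval)
    )
    .groupBy('time_interval')
    .agg(count('ship').alias('ship'))
)
result.show()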