I want to apply splitUtlisation to every row of utilisationDataFarme, passing startTime and endTime as parameters. Since splitUtlisation will return multiple rows, I want to create a new DataFrame with the columns (Id, Day, Hour, Minute).
from dateutil import rrule

def splitUtlisation(onDateTime, offDateTime):
    yield onDateTime
    rule = rrule.rrule(rrule.HOURLY, byminute=0, bysecond=0, dtstart=onDateTime)
    for result in rule.between(onDateTime, offDateTime):
        yield result
    yield offDateTime
from pyspark.sql.functions import col

utilisationDataFarme = (
    sc.parallelize([
        (10001, "2017-02-12 12:01:40", "2017-02-12 12:56:32"),
        (10001, "2017-02-13 12:06:32", "2017-02-15 16:06:32"),
        (10001, "2017-02-16 21:45:56", "2017-02-21 21:45:56"),
        (10001, "2017-02-21 22:32:41", "2017-02-25 00:52:50"),
    ]).toDF(["id", "startTime", "endTime"])
    .withColumn("startTime", col("startTime").cast("timestamp"))
    .withColumn("endTime", col("endTime").cast("timestamp"))
)
In core Python, I did it like this:
import datetime
from datetime import timedelta

dayList = ['SUN', 'MON', 'TUE', 'WED', 'THR', 'FRI', 'SAT']

for result in hours_aligned(datetime.datetime.now(), datetime.datetime.now() + timedelta(hours=68)):
    print(dayList[datetime.datetime.weekday(result)], result.hour, 60 if result.minute == 0 else result.minute)
Result:
THR 21 60
THR 22 60
THR 23 60
FRI 0 60
FRI 1 60
FRI 2 60
FRI 3 60
How can I create this in pySpark?
I tried creating a new schema and applying it:
schema = StructType([
    StructField("Id", StringType(), False),
    StructField("Day", StringType(), False),
    StructField("Hour", StringType(), False),
    StructField("Minute", StringType(), False),
])

udf_splitUtlisation = udf(splitUtlisation, schema)
df = sqlContext.createDataFrame([], "id", "Day", "Hour", "Minute")
But I still can't get multiple rows back in the result.
Answer (score: 4)
Once you define the udf correctly, you can use pyspark's explode to unpack a single row containing multiple values into multiple rows. As far as I know, you won't be able to use yield as a generator inside the udf. Instead, you need to return all the values at once as an array (see return_type), which you can then explode and expand:
from pyspark.sql.functions import col, udf, explode
from pyspark.sql.types import ArrayType, StringType, MapType
import pandas as pd

return_type = ArrayType(MapType(StringType(), StringType()))

@udf(returnType=return_type)
def your_udf_func(start, end):
    """Insert your function to return whatever you like
    as a list of dictionaries.

    For example, I chose to return hourly values for
    day, hour and minute.
    """
    date_range = pd.date_range(start, end, freq="h")
    df = pd.DataFrame({"day": date_range.strftime("%a"),
                       "hour": date_range.hour,
                       "minute": date_range.minute})
    values = df.to_dict("index").values()
    return list(values)

extracted = your_udf_func("startTime", "endTime")
exploded = explode(extracted).alias("exploded")
expanded = [col("exploded").getItem(k).alias(k) for k in ["hour", "day", "minute"]]
result = utilisationDataFarme.select("id", exploded).select("id", *expanded)
The result is:
result.show(5)
+-----+----+---+------+
| id|hour|day|minute|
+-----+----+---+------+
|10001| 12|Sun| 1|
|10001| 12|Mon| 6|
|10001| 13|Mon| 6|
|10001| 14|Mon| 6|
|10001| 15|Mon| 6|
+-----+----+---+------+
only showing top 5 rows
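
If you want the output to match the (Id, Day, Hour, Minute) schema from the question more closely, a possible variation of the same udf-plus-explode idea is to return an array of structs instead of a map of strings, so the exploded column can be expanded with "slot.*" and Hour/Minute stay integers. This is only an untested sketch; split_utilisation and slot are names introduced here, not part of the answer above:

from pyspark.sql.functions import explode, udf
from pyspark.sql.types import ArrayType, IntegerType, StringType, StructField, StructType
import pandas as pd

# One typed entry per hourly slot between startTime and endTime.
slot_type = ArrayType(StructType([
    StructField("Day", StringType(), False),
    StructField("Hour", IntegerType(), False),
    StructField("Minute", IntegerType(), False),
]))

@udf(returnType=slot_type)
def split_utilisation(start, end):
    # Hourly timestamps between start and end, returned as (Day, Hour, Minute) tuples.
    date_range = pd.date_range(start, end, freq="h")
    return [(d.strftime("%a").upper(), int(d.hour), int(d.minute)) for d in date_range]

result = (utilisationDataFarme
          .withColumn("slot", explode(split_utilisation("startTime", "endTime")))
          .select("id", "slot.*"))
result.show(5)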