Consider the following dataset in Spark. I want to resample the timestamps at a fixed frequency (for example, every 5 minutes).
import datetime as dt
import pandas as pd

START_DATE = dt.datetime(2019,8,15,20,33,0)
test_df = pd.DataFrame({
'school_id': ['remote','remote','remote','remote','onsite','onsite','onsite','onsite','remote','remote'],
'class_id': ['green', 'green', 'red', 'red', 'green', 'green', 'green', 'green', 'red', 'green'],
'user_id': [15,15,16,16,15,17,17,17,16,17],
'status': [0,1,1,1,0,1,0,1,1,0],
'start': pd.date_range(start=START_DATE, periods=10, freq='2min')
})
test_df.groupby(['school_id', 'class_id', 'user_id', 'start']).min()
However, I also want the resampling to happen between two fixed timestamps, 2019-08-15 20:30:00 and 2019-08-15 21:00:00, so that each group of school_id, class_id and user_id ends up with 6 entries, one per 5-minute bucket between the two timestamps. The null entries produced by the resampling should be forward-filled.
I've used pandas for the sample dataset, but the actual dataframe will be pulled in Spark, so the approach I'm looking for needs to work in Spark as well.
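To make the target concrete, here is a sketch of the desired result in plain pandas (illustration only: the helper to_grid is my name, and flooring each event to the start of its 5-minute bucket is my binning choice):

import pandas as pd

grid = pd.date_range('2019-08-15 20:30:00', '2019-08-15 20:55:00', freq='5min')

def to_grid(g):
    # keep the last status per 5-minute bucket, reindex onto the
    # full grid, then forward-fill the gaps
    s = g.set_index('start')['status'].resample('5min').last()
    return s.reindex(grid).ffill()

expected = test_df.groupby(['school_id', 'class_id', 'user_id']).apply(to_grid)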
I imagine the approach is similar to PySpark: how to resample frequencies, but I haven't managed to make it work for this case.
Thanks for your help.
Answer 0 (score: 1)
This may not be the best way to get to the final result, but it shows the idea.
import numpy as np
import pandas as pd
from datetime import datetime
from pytz import timezone
from pyspark.sql import functions as F
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructType, StructField, IntegerType
# Create DataFrame
START_DATE = datetime(2019,8,15,20,33,0)
test_df = pd.DataFrame({
'school_id': ['remote','remote','remote','remote','onsite','onsite','onsite','onsite','remote','remote'],
'class_id': ['green', 'green', 'red', 'red', 'green', 'green', 'green', 'green', 'red', 'green'],
'user_id': [15,15,16,16,15,17,17,17,16,17],
'status': [0,1,1,1,0,1,0,1,1,0],
'start': pd.date_range(start=START_DATE, periods=10, freq='2min')
})
# Convert the timestamps to epoch seconds
df = spark.createDataFrame(test_df)
print(df.dtypes)
df = df.withColumn('start', F.col('start').cast("bigint"))
df.show()
This outputs:
+---------+--------+-------+------+----------+
|school_id|class_id|user_id|status| start|
+---------+--------+-------+------+----------+
| remote| green| 15| 0|1565915580|
| remote| green| 15| 1|1565915700|
| remote| red| 16| 1|1565915820|
| remote| red| 16| 1|1565915940|
| onsite| green| 15| 0|1565916060|
| onsite| green| 17| 1|1565916180|
| onsite| green| 17| 0|1565916300|
| onsite| green| 17| 1|1565916420|
| remote| red| 16| 1|1565916540|
| remote| green| 17| 0|1565916660|
+---------+--------+-------+------+----------+
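Note that the bigint cast interprets the naive timestamps in the Spark session's timezone. The epoch values above imply a US/Eastern session (an assumption about the original setup), which is also why the grid start below is localized to US/Eastern. Pinning the session timezone makes the run reproducible:

# Assumption: the session that produced the epochs above ran in US/Eastern
spark.conf.set('spark.sql.session.timeZone', 'US/Eastern')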
# Create the 5-minute time grid needed
start = datetime.strptime('2019-08-15 20:30:00', '%Y-%m-%d %H:%M:%S')
eastern = timezone('US/Eastern')
start = eastern.localize(start)
times = pd.date_range(start = start, periods = 6, freq='5min')
times = [s.timestamp() for s in times]
print(times)
[1565915400.0, 1565915700.0, 1565916000.0, 1565916300.0, 1565916600.0, 1565916900.0]
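As an aside, on Spark 2.4 or later (an assumption about the version available) the same grid can be built natively with the SQL sequence function instead of pandas:

# Sketch: build the 5-minute grid directly in Spark (requires Spark >= 2.4)
grid_df = spark.sql("""
    SELECT explode(sequence(
        to_timestamp('2019-08-15 20:30:00'),
        to_timestamp('2019-08-15 20:55:00'),
        interval 5 minutes)) AS timestamp
""")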
# Use a grouped-map pandas_udf to create the final DataFrame
schm = StructType(df.schema.fields + [StructField('epoch', IntegerType(), True)])

@pandas_udf(schm, PandasUDFType.GROUPED_MAP)
def resample(pdf):
    # one output row per grid point, with the group keys copied across
    pddf = pd.DataFrame({'epoch': times})
    pddf['school_id'] = pdf['school_id'].iloc[0]
    pddf['class_id'] = pdf['class_id'].iloc[0]
    pddf['user_id'] = pdf['user_id'].iloc[0]
    # index of the first grid point at or after each event time
    res = np.searchsorted(times, pdf['start'])
    # scatter the original values onto the grid, NaN (-> null) elsewhere
    status = np.full(len(times), np.nan)
    status[res] = pdf['status']
    pddf['status'] = status
    start = np.full(len(times), np.nan)
    start[res] = pdf['start']
    pddf['start'] = start
    return pddf
df = df.groupBy('school_id', 'class_id', 'user_id').apply(resample)
df = df.withColumn('timestamp', F.to_timestamp(df['epoch']))
df.show(60)
The final result:
+---------+--------+-------+------+----------+----------+-------------------+
|school_id|class_id|user_id|status|     start|     epoch|          timestamp|
+---------+--------+-------+------+----------+----------+-------------------+
|   remote|     red|     16|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|     red|     16|  null|      null|1565915700|2019-08-15 20:35:00|
|   remote|     red|     16|     1|1565915940|1565916000|2019-08-15 20:40:00|
|   remote|     red|     16|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|     red|     16|     1|1565916540|1565916600|2019-08-15 20:50:00|
|   remote|     red|     16|  null|      null|1565916900|2019-08-15 20:55:00|
|   onsite|   green|     15|  null|      null|1565915400|2019-08-15 20:30:00|
|   onsite|   green|     15|  null|      null|1565915700|2019-08-15 20:35:00|
|   onsite|   green|     15|  null|      null|1565916000|2019-08-15 20:40:00|
|   onsite|   green|     15|     0|1565916060|1565916300|2019-08-15 20:45:00|
|   onsite|   green|     15|  null|      null|1565916600|2019-08-15 20:50:00|
|   onsite|   green|     15|  null|      null|1565916900|2019-08-15 20:55:00|
|   remote|   green|     17|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|   green|     17|  null|      null|1565915700|2019-08-15 20:35:00|
|   remote|   green|     17|  null|      null|1565916000|2019-08-15 20:40:00|
|   remote|   green|     17|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|   green|     17|  null|      null|1565916600|2019-08-15 20:50:00|
|   remote|   green|     17|     0|1565916660|1565916900|2019-08-15 20:55:00|
|   onsite|   green|     17|  null|      null|1565915400|2019-08-15 20:30:00|
|   onsite|   green|     17|  null|      null|1565915700|2019-08-15 20:35:00|
|   onsite|   green|     17|  null|      null|1565916000|2019-08-15 20:40:00|
|   onsite|   green|     17|     1|1565916180|1565916300|2019-08-15 20:45:00|
|   onsite|   green|     17|     1|1565916420|1565916600|2019-08-15 20:50:00|
|   onsite|   green|     17|  null|      null|1565916900|2019-08-15 20:55:00|
|   remote|   green|     15|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|   green|     15|     0|1565915580|1565915700|2019-08-15 20:35:00|
|   remote|   green|     15|  null|      null|1565916000|2019-08-15 20:40:00|
|   remote|   green|     15|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|   green|     15|  null|      null|1565916600|2019-08-15 20:50:00|
|   remote|   green|     15|  null|      null|1565916900|2019-08-15 20:55:00|
+---------+--------+-------+------+----------+----------+-------------------+
Now every group has 6 timestamps.
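A quick sanity check (my addition, not part of the original answer):

# every group should report count = 6
df.groupBy('school_id', 'class_id', 'user_id').count().show()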
Note that not every original status and start value makes it into the final DataFrame: because the resample udf snaps events onto a 5-minute grid, two start times can map to the same grid point, in which case one overwrites the other and you lose a row here. You can adjust the udf to suit your frequency and the way you want to keep the data.
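The forward fill the question asks for is not done above. A minimal sketch of one way to bolt it on, assuming the DataFrame produced by the udf: take the last non-null value per group, ordered by epoch, over a running window.

from pyspark.sql import Window

# forward-fill each group's nulls with the most recent non-null value
w = (Window.partitionBy('school_id', 'class_id', 'user_id')
           .orderBy('epoch')
           .rowsBetween(Window.unboundedPreceding, 0))
df = (df.withColumn('status', F.last('status', ignorenulls=True).over(w))
        .withColumn('start', F.last('start', ignorenulls=True).over(w)))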