How to expand a PySpark dataframe based on column values?

Asked: 2020-01-16 10:33:24

Tags: python apache-spark pyspark pyspark-dataframes

How can I expand a dataframe based on its column values? I want to start from this dataframe:

+---------+----------+----------+
|DEVICE_ID|  MIN_DATE|  MAX_DATE|
+---------+----------+----------+
|        1|2019-08-29|2019-08-31|
|        2|2019-08-27|2019-09-02|
+---------+----------+----------+

and turn it into one that looks like this:

+---------+----------+
|DEVICE_ID|      DATE|
+---------+----------+
|        1|2019-08-29|
|        1|2019-08-30|
|        1|2019-08-31|
|        2|2019-08-27|
|        2|2019-08-28|
|        2|2019-08-29|
|        2|2019-08-30|
|        2|2019-08-31|
|        2|2019-09-01|
|        2|2019-09-02|
+---------+----------+
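
For reference, here is a minimal sketch that builds the input dataframe (assuming a SparkSession named spark):

from pyspark.sql import functions as F

# Build the sample input with string dates, then cast them to DateType.
df = spark.createDataFrame(
    [(1, "2019-08-29", "2019-08-31"),
     (2, "2019-08-27", "2019-09-02")],
    ["DEVICE_ID", "MIN_DATE", "MAX_DATE"])
df = df.withColumn("MIN_DATE", F.to_date("MIN_DATE")) \
       .withColumn("MAX_DATE", F.to_date("MAX_DATE"))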

Any help would be greatly appreciated.

1 Answer:

Answer 0 (score: 0)

One approach is to use a UDF that builds a comma-separated string of all dates between the start and end, then split that string and explode the result into one row per date:

from datetime import timedelta
from pyspark.sql import functions as F
from pyspark.sql.functions import udf

# Create a one-row sample dataframe (assumes a SparkSession named spark).
df = spark.sql("""
select 'dev1' as device_id,
to_date('2020-01-06') as start,
to_date('2020-01-09') as end""")

# Define a UDF that returns the dates between start and end
# (inclusive) as a comma-separated string.
@udf
def datelist(start, end):
    return ",".join(str(start + timedelta(days=x))
                    for x in range((end - start).days + 1))

# Split the string on commas and explode the resulting array into rows.
df.select("device_id",
          F.explode(
              F.split(datelist(df["start"], df["end"]), ","))
          .alias("date")).show(10, False)