Insert missing elements from a list as rows into a DataFrame for each time window group

Time: 2019-06-06 19:56:05

Tags: python scala apache-spark pyspark

Trying to work out how to solve this syntactically... it seems like a tough problem. Basically, if a sensor item is not captured in a given timestamp interval of the time-series source data, I want to add one row per missing sensor item, with a NULL value, for each timestamp window.

# list of sensor items [have 300 plus; only showing 4 as example]
sensor_list = ["temp", "pressure", "vacuum", "burner"]

# sample data
df = spark.createDataFrame([('2019-05-10 7:30:05', 'temp', '99'),
                            ('2019-05-10 7:30:05', 'burner', 'TRUE'),
                            ('2019-05-10 7:30:10', 'vacuum', '.15'),
                            ('2019-05-10 7:30:10', 'burner', 'FALSE'),
                            ('2019-05-10 7:30:10', 'temp', '75'),
                            ('2019-05-10 7:30:15', 'temp', '77'),
                            ('2019-05-10 7:30:20', 'pressure', '.22'),
                            ('2019-05-10 7:30:20', 'temp', '101')], ["date", "item", "value"])
# current dilemma => not all sensor items are captured; the current back-end
# streaming design only captures updates to sensors
+------------------+--------+-----+
|              date|    item|value|
+------------------+--------+-----+
|2019-05-10 7:30:05|    temp|   99|
|2019-05-10 7:30:05|  burner| TRUE|

|2019-05-10 7:30:10|  vacuum|  .15|
|2019-05-10 7:30:10|  burner|FALSE|
|2019-05-10 7:30:10|    temp|   75|

|2019-05-10 7:30:15|    temp|   77|

|2019-05-10 7:30:20|pressure|  .22|
|2019-05-10 7:30:20|    temp|  101|
+------------------+--------+-----+

I want every sensor item captured for each timestamp window, so that a forward-fill imputation can be performed before pivoting the DataFrame [forward-filling 300-plus cols after pivoting leads to the Scala error =>

Spark Caused by: java.lang.StackOverflowError Window Function?]
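
As an aside on that error: once the missing rows exist, the forward fill can be done in long format with a single window per sensor item, instead of chaining 300-plus window expressions over a pivoted frame. A minimal sketch, assuming the df above (not part of the original question):

from pyspark.sql import Window
from pyspark.sql import functions as F

# carry the last non-null reading forward within each sensor item
w = (Window.partitionBy("item")
           .orderBy("date")
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))
df_filled = df.withColumn("value", F.last("value", ignorenulls=True).over(w))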

# desired output
+------------------+--------+-----+
|              date|    item|value|
+------------------+--------+-----+
|2019-05-10 7:30:05|    temp|   99|
|2019-05-10 7:30:05|  burner| TRUE|
|2019-05-10 7:30:05|  vacuum| NULL|
|2019-05-10 7:30:05|pressure| NULL|

|2019-05-10 7:30:10|  vacuum|  .15|
|2019-05-10 7:30:10|  burner|FALSE|
|2019-05-10 7:30:10|    temp|   75|
|2019-05-10 7:30:10|pressure| NULL|

|2019-05-10 7:30:15|    temp|   77|
|2019-05-10 7:30:15|pressure| NULL|
|2019-05-10 7:30:15|  burner| NULL|
|2019-05-10 7:30:15|  vacuum| NULL|

|2019-05-10 7:30:20|pressure|  .22|
|2019-05-10 7:30:20|    temp|  101|
|2019-05-10 7:30:20|  vacuum| NULL|
|2019-05-10 7:30:20|  burner| NULL|
+------------------+--------+-----+

1 Answer:

Answer 0 (score: 2)

Expanding on my comment:

You can right join your DataFrame with the Cartesian product of the distinct dates and sensor_list. Since sensor_list is small, you can broadcast it.

from pyspark.sql.functions import broadcast

sensor_list = ["temp", "pressure", "vacuum", "burner"]

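# right join the data onto the full (date x item) grid so every
# (date, item) pair exists; pairs with no reading come back as null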
df.join(
    df.select('date')
        .distinct()
        .crossJoin(broadcast(spark.createDataFrame([(x,) for x in sensor_list], ["item"]))),
    on=["date", "item"],
    how="right"
).sort("date", "item").show()
#+------------------+--------+-----+
#|              date|    item|value|
#+------------------+--------+-----+
#|2019-05-10 7:30:05|  burner| TRUE|
#|2019-05-10 7:30:05|pressure| null|
#|2019-05-10 7:30:05|    temp|   99|
#|2019-05-10 7:30:05|  vacuum| null|
#|2019-05-10 7:30:10|  burner|FALSE|
#|2019-05-10 7:30:10|pressure| null|
#|2019-05-10 7:30:10|    temp|   75|
#|2019-05-10 7:30:10|  vacuum|  .15|
#|2019-05-10 7:30:15|  burner| null|
#|2019-05-10 7:30:15|pressure| null|
#|2019-05-10 7:30:15|    temp|   77|
#|2019-05-10 7:30:15|  vacuum| null|
#|2019-05-10 7:30:20|  burner| null|
#|2019-05-10 7:30:20|pressure|  .22|
#|2019-05-10 7:30:20|    temp|  101|
#|2019-05-10 7:30:20|  vacuum| null|
#+------------------+--------+-----+
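
With the missing rows in place, the grid can then be forward-filled (see the window sketch in the question) and pivoted. A minimal sketch, where completed stands for the join result above (the name is an assumption, not part of the original answer):

from pyspark.sql import functions as F

# completed = the right-joined DataFrame from above
# pivot to one column per sensor item; passing sensor_list up front
# saves Spark a pass over the data to discover the pivot values
wide = completed.groupBy("date").pivot("item", sensor_list).agg(F.first("value"))
wide.show()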