Get data from the second dataframe based on the first dataframe

Time: 2021-05-29 22:03:46

Tags: dataframe apache-spark pyspark apache-spark-sql

I have two PySpark dataframes, df1 and df2, with the following schemas:

df1:

root
 |-- RCBNorthAmps: double (nullable = true)
 |-- RCBSouthAmps: double (nullable = true)
 |-- RCBTOB: double (nullable = true)
 |-- time: timestamp (nullable = true)

+-----------------+-----------------+------+-------------------+
|     RCBNorthAmps|     RCBSouthAmps|RCBTOB|               time|
+-----------------+-----------------+------+-------------------+
|             88.6|             89.6| 234.0|2019-01-01 00:00:00|
|          88.6699|            89.77| 234.4|2019-01-01 00:00:01|
|            88.74|            89.94| 234.8|2019-01-01 00:00:02|
|            88.81|            90.11| 235.2|2019-01-01 00:00:03|
|            88.88|            90.28| 235.6|2019-01-01 00:00:04|
+-----------------+-----------------+------+-------------------+
showing first 5 rows
df2:

root
 |-- slip_start: timestamp (nullable = true)
 |-- slip_end: timestamp (nullable = true)
 |-- premature: integer (nullable = true)

+-------------------+-------------------+---------+
|         slip_start|           slip_end|premature|
+-------------------+-------------------+---------+
|2019-01-01 00:06:50|2019-01-01 00:06:50|        0|
|2019-01-01 00:10:30|2019-01-01 00:10:30|        0|
|2019-01-01 00:10:40|2019-01-01 00:10:40|        0|
|2019-01-01 00:10:50|2019-01-01 00:10:50|        0|
|2019-01-01 00:15:10|2019-01-01 00:15:10|        0|
+-------------------+-------------------+---------+
showing first 5 rows
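
For reference, a minimal sketch of how dataframes with these schemas could be built for testing (an assumption for illustration: a SparkSession is already available as spark, and only a couple of the sample rows above are used):

from datetime import datetime
from pyspark.sql.types import (StructType, StructField, DoubleType,
                               IntegerType, TimestampType)

# Hypothetical test data mirroring the schemas shown above
df1 = spark.createDataFrame(
    [(88.6, 89.6, 234.0, datetime(2019, 1, 1, 0, 0, 0)),
     (88.6699, 89.77, 234.4, datetime(2019, 1, 1, 0, 0, 1))],
    schema=StructType([
        StructField("RCBNorthAmps", DoubleType(), True),
        StructField("RCBSouthAmps", DoubleType(), True),
        StructField("RCBTOB", DoubleType(), True),
        StructField("time", TimestampType(), True),
    ]))

df2 = spark.createDataFrame(
    [(datetime(2019, 1, 1, 0, 6, 50), datetime(2019, 1, 1, 0, 6, 50), 0)],
    schema=StructType([
        StructField("slip_start", TimestampType(), True),
        StructField("slip_end", TimestampType(), True),
        StructField("premature", IntegerType(), True),
    ]))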

Is it possible to create a new column in df2 and fill its values by applying an aggregation like the following?

# Pseudocode: for a particular row of df2, aggregate over the matching rows of df1
variance = df1.filter(df1.time > df2_particular_row.slip_start)['RCBNorthAmps'].var()
return variance  # variance in df1 for the particular row in df2

For every row in df2, some aggregation over df1 must be computed. The result should then be placed in a new column, so that the final output in df2 looks like this:

+-------------------+-------------------+---------+--------+
|         slip_start|           slip_end|premature|variance|
+-------------------+-------------------+---------+--------+
|2019-01-01 00:06:50|2019-01-01 00:06:50|        0|  0.0123|
|2019-01-01 00:10:30|2019-01-01 00:10:30|        0|   0.323|
|2019-01-01 00:10:40|2019-01-01 00:10:40|        0|   0.013|
|2019-01-01 00:10:50|2019-01-01 00:10:50|        0|  0.0123|
|2019-01-01 00:15:10|2019-01-01 00:15:10|        0|  0.1423|
+-------------------+-------------------+---------+--------+

1 answer:

Answer 0 (score: 0)

You can join the two dataframes using df2["slip_start"] < df1["time"] as the join condition, and then group the result by the slip_start column. The aggregation function will be var_samp:

from pyspark.sql import functions as F

# Left-join each df2 row to all df1 rows whose time lies after its slip_start,
# then compute the sample variance of RCBNorthAmps per slip_start
df2.join(df1, df2["slip_start"] < df1["time"], "left_outer") \
      .groupBy("slip_start") \
      .agg(F.var_samp("RCBNorthAmps")) \
      .show()
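
The groupBy above keeps only slip_start. If you also want to carry slip_end and premature through and name the new column variance, as in the desired output, one option (a sketch, assuming slip_start uniquely identifies rows of df2) is to group by all three df2 columns and alias the aggregate:

from pyspark.sql import functions as F

# Group by every df2 column so they survive the aggregation,
# and give the variance column an explicit name
result = (
    df2.join(df1, df2["slip_start"] < df1["time"], "left_outer")
       .groupBy("slip_start", "slip_end", "premature")
       .agg(F.var_samp("RCBNorthAmps").alias("variance"))
)
result.show()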