I have about 25 tables, each with three columns (id, date, value). I need to select the value column from each of them by joining on the id and date columns, and build one merged table.
df_1 = df_1.join(
    df_2,
    on=(df_1.id == df_2.id) & (df_1.date == df_2.date),
    how="inner"
).select([df_1["*"], df_2["value"]]).dropDuplicates()
Is there an optimized way in PySpark to produce the merged table with these 25 value columns plus the id and date columns?
Thanks.
Answer 0 (score: 1)
# small example tables, each with columns (id, date, value)
df_1 = spark.createDataFrame([[1, '2018-10-10', 3]], ['id', 'date', 'value'])
df_2 = spark.createDataFrame([[1, '2018-10-10', 3], [2, '2018-10-10', 4]], ['id', 'date', 'value'])
df_3 = spark.createDataFrame([[1, '2018-10-10', 3], [2, '2018-10-10', 4]], ['id', 'date', 'value'])
from functools import reduce
# list of data frames / tables
dfs = [df_1, df_2, df_3]
# rename each value column so the names don't collide after the join
dfs_renamed = [df.selectExpr('id', 'date', f'value as value_{i}') for i, df in enumerate(dfs)]
# reduce the list of data frames with inner join
reduce(lambda x, y: x.join(y, ['id', 'date'], how='inner'), dfs_renamed).show()
+---+----------+-------+-------+-------+
| id| date|value_0|value_1|value_2|
+---+----------+-------+-------+-------+
| 1|2018-10-10| 3| 3| 3|
+---+----------+-------+-------+-------+
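
For the real 25 tables, the dfs list can be built programmatically rather than by hand. A minimal sketch, assuming the tables are registered in the Spark catalog under hypothetical names table_1 through table_25 (substitute your actual table names):

from functools import reduce

# hypothetical catalog names; replace with your actual 25 table names
table_names = [f'table_{i}' for i in range(1, 26)]
dfs = [spark.table(name) for name in table_names]

# rename each value column so the 25 columns stay distinct after the joins
dfs_renamed = [df.selectExpr('id', 'date', f'value as value_{i}') for i, df in enumerate(dfs)]

# fold the list into one table with successive inner joins on (id, date)
merged = reduce(lambda x, y: x.join(y, ['id', 'date'], how='inner'), dfs_renamed)

Note that the inner join keeps only the (id, date) pairs present in every table; switching how='inner' to how='outer' would keep every pair, with nulls where a table has no match.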