I have a function with a for loop that iterates over zipped lists of tables and columns to get the minimum and maximum values. For each combination, the output is a separate DataFrame/table rather than a single one. Is there a way to combine the results of the for loop into one final output inside the function?
from pyspark.sql import functions as f
from pyspark.sql.functions import col

def minmax(tables, cols):
    for table, column in zip(tables, cols):
        # One single-row DataFrame per (table, column) pair
        minmax = (
            spark.table(table)
            .where(col(column).isNotNull())
            .select(
                f.lit(table).alias("table"),
                f.lit(column).alias("col"),
                f.min(col(column)).alias("min"),
                f.max(col(column)).alias("max"),
            )
        )
        minmax.show()

tables = ["sales_123", "sales_REW"]
cols = ["costs", "price"]
minmax(tables, cols)
Output of the function:
+---------+-----+---+---+
| table| col|min|max|
+---------+-----+---+---+
|sales_123|costs| 0|400|
+---------+-----+---+---+
+----------+-----+---+---+
| table| col|min|max|
+----------+-----+---+---+
|sales_REW |price| 0|400|
+----------+-----+---+---+
Desired output:
+---------+-----+---+---+
| table| col|min|max|
+---------+-----+---+---+
|sales_123|costs| 0|400|
|sales_REW|price| 0|400|
+---------+-----+---+---+
Answer 0 (score: 1)
Collect all the DataFrames in a list and union them after the for loop:
from functools import reduce
from pyspark.sql import functions as f
from pyspark.sql import DataFrame
from pyspark.sql.functions import col

def minmax(tables, cols):
    dfs = []
    for table, column in zip(tables, cols):
        minmax = (
            spark.table(table)
            .where(col(column).isNotNull())
            .select(
                f.lit(table).alias("table"),
                f.lit(column).alias("col"),
                f.min(col(column)).alias("min"),
                f.max(col(column)).alias("max"),
            )
        )
        dfs.append(minmax)
    # Union all per-table results into one DataFrame
    return reduce(DataFrame.union, dfs)
Note that `union` matches columns by position, so the column order must be the same in all DataFrames involved (which is the case here); otherwise it can produce unexpected results.
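The reduce-over-a-list pattern used above can be illustrated without a Spark cluster. This is a minimal sketch where each "DataFrame" is just a Python list of row tuples and the stand-in for `DataFrame.union` is list concatenation; the table names and values mirror the example output:

```python
from functools import reduce

# Each element plays the role of one single-row DataFrame produced in the loop.
dfs = [
    [("sales_123", "costs", 0, 400)],
    [("sales_REW", "price", 0, 400)],
]

# Analogue of reduce(DataFrame.union, dfs): pairwise-combine left to right.
combined = reduce(lambda a, b: a + b, dfs)
print(combined)
# → [('sales_123', 'costs', 0, 400), ('sales_REW', 'price', 0, 400)]
```

Like `DataFrame.union`, this combination is purely positional: it appends rows without inspecting any column names.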