I started converting my Pandas implementation to PySpark, but I'm having trouble with some basic operations. So I have this table:
+-----+-----+----+
| Col1| Col2|Col3|
+-----+-----+----+
|    1|[1,3]|   0|
|   44|[2,0]|   1|
|   77|[1,5]|   7|
+-----+-----+----+
The output I want is:
+-----+-----+----+----+
| Col1| Col2|Col3|Col4|
+-----+-----+----+----+
|    1|[1,3]|   0|2.67|
|   44|[2,0]|   1|2.67|
|   77|[1,5]|   7|2.67|
+-----+-----+----+----+
How do I get there?
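In other words, Col4 is the larger of the two position-wise means: mean([1,2,1]) = 1.33 and mean([3,0,5]) = 2.67. A minimal sketch of that computation on the Pandas side, assuming Col2 holds plain Python lists (the frame name pdf is just for illustration):

import pandas as pd

# hypothetical frame mirroring the table above; Col2 is assumed to hold Python lists
pdf = pd.DataFrame({'Col1': [1, 44, 77], 'Col2': [[1, 3], [2, 0], [1, 5]], 'Col3': [0, 1, 7]})

# per-position means are [1.33, 2.67]; the max (2.67) is broadcast to every row
pdf['Col4'] = pd.DataFrame(pdf['Col2'].tolist()).mean().max()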
Answer (score: 1):
You can use greatest to get the maximum of the per-position averages of the array:
from pyspark.sql import functions as F, Window

# Col2 is a vector column; convert it to array<double> once so its elements can be indexed
to_array = F.udf(lambda v: [float(x) for x in v.toArray()], 'array<double>')

df2 = df.withColumn(
    'Col4',
    # average each array position over the whole frame, then take the largest of those averages
    F.greatest(*[F.avg(to_array('Col2')[i]).over(Window.orderBy()) for i in range(2)])
)
df2.show()
+----+------+----+------------------+
|Col1|  Col2|Col3|              Col4|
+----+------+----+------------------+
|   1|[1, 3]|   0|2.6666666666666665|
|  44|[2, 0]|   1|2.6666666666666665|
|  77|[1, 5]|   7|2.6666666666666665|
+----+------+----+------------------+
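On Spark 3.0+ you could also skip the UDF and use the built-in pyspark.ml.functions.vector_to_array; a sketch under the assumption that Col2 is a Vector column:

from pyspark.ml.functions import vector_to_array

# same idea as above, with the built-in vector-to-array conversion instead of a UDF
arr = vector_to_array(F.col('Col2'))
df2 = df.withColumn(
    'Col4',
    F.greatest(*[F.avg(arr[i]).over(Window.orderBy()) for i in range(2)])
)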
If you want the array size to be dynamic, you can do this:
# the largest array size across all rows, fetched to the driver
arr_size = df.select(F.max(F.size(to_array('Col2')))).head()[0]

df2 = df.withColumn(
    'Col4',
    F.greatest(*[F.avg(to_array('Col2')[i]).over(Window.orderBy()) for i in range(arr_size)])
)
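Since Col4 is the same scalar on every row, an equivalent that avoids the empty window entirely is to aggregate once and cross-join the result back; this is not part of the original answer, just a sketch reusing to_array and arr_size from above:

# compute the single greatest average once, then attach it to every row
max_avg = df.select(
    F.greatest(*[F.avg(to_array('Col2')[i]) for i in range(arr_size)]).alias('Col4')
)
df2 = df.crossJoin(max_avg)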