I'm trying to compute a cumulative product over the following dataframe:
| a | b |
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
I want another column "c" that holds the cumulative product of "b", partitioned by "a", so that the resulting dataframe looks like:
| a | b | c  |
| 1 | 1 | 1  |
| 1 | 2 | 2  |
| 1 | 3 | 6  |
| 1 | 4 | 24 |
Does anyone have a solution? Please share.
Answer 0 (score: 2)
Here is another way of doing it, without a user-defined function:
from pyspark.sql import Window
from pyspark.sql.functions import collect_list, expr

df = spark.createDataFrame([(1, 1), (1, 2), (1, 3), (1, 4), (1, 5)], ['a', 'b'])
# running frame: from the start of the partition up to the current row
wind = Window.partitionBy("a").orderBy("b").rangeBetween(Window.unboundedPreceding, Window.currentRow)
df2 = df.withColumn("foo", collect_list("b").over(wind))
df2.withColumn("foo2", expr("aggregate(foo, cast(1 as bigint), (acc, x) -> acc * x)")).show()
+---+---+---------------+----+
| a| b| foo|foo2|
+---+---+---------------+----+
| 1| 1| [1]| 1|
| 1| 2| [1, 2]| 2|
| 1| 3| [1, 2, 3]| 6|
| 1| 4| [1, 2, 3, 4]| 24|
| 1| 5|[1, 2, 3, 4, 5]| 120|
+---+---+---------------+----+
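On Spark 3.2 or later (an assumption; the answer does not state a version), the built-in product aggregate removes the need for collect_list entirely, at the cost of returning a double:

import pyspark.sql.functions as F

# product() is a built-in aggregate since Spark 3.2; over the running
# window it yields the cumulative product directly (as a double)
df.withColumn("c", F.product("b").over(wind)).show()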
If you don't care too much about precision, you can build a shorter version with log/exp:
import pyspark.sql.functions as psf

# exp(sum(log(b))) equals product(b), but goes through floating point
df.withColumn("foo", psf.exp(psf.sum(psf.log("b")).over(wind))).show()
+---+---+------------------+
| a| b| foo|
+---+---+------------------+
| 1| 1| 1.0|
| 1| 2| 2.0|
| 1| 3| 6.0|
| 1| 4|23.999999999999993|
| 1| 5|119.99999999999997|
+---+---+------------------+
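If integer results are still wanted despite the float round-trip, one option (a sketch, assuming all values of b are strictly positive, since log(0) is undefined, and the products fit in a bigint) is to round back:

# round the float product to the nearest integer and cast back
df.withColumn("foo", psf.round(psf.exp(psf.sum(psf.log("b")).over(wind))).cast("bigint")).show()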
Answer 1 (score: 1)
You have to define an ordering column; in your case I used column "b".
from pyspark.sql import functions as F, Window, types
from functools import reduce
from operator import mul

df = spark.createDataFrame([(1, 1), (1, 2), (1, 3), (1, 4), (1, 5)], ['a', 'b'])

order_column = 'b'
window = Window.orderBy(order_column)
expr = F.col('a') * F.col('b')
# UDF that multiplies together all the values collected so far
mul_udf = F.udf(lambda x: reduce(mul, x), types.IntegerType())
df = df.withColumn('c', mul_udf(F.collect_list(expr).over(window)))
df.show()
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 1| 1|
| 1| 2| 2|
| 1| 3| 6|
| 1| 4| 24|
| 1| 5|120|
+---+---+---+
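One caveat: Window.orderBy without partitionBy pulls every row into a single partition, and Spark warns about the performance hit. Since the question groups by a, a safer variant (a sketch; it collects b itself rather than a * b, which happen to coincide here because a is always 1) would be:

# partition by "a" so each group keeps its own running product
window = Window.partitionBy('a').orderBy(order_column)
df = df.withColumn('c', mul_udf(F.collect_list(F.col('b')).over(window)))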
Answer 2 (score: 0)
Your answer would be similar to this:
import pandas as pd

df = pd.DataFrame({'v': [1, 2, 3, 4, 5, 6]})
# cumprod returns the running product of the column
df['prod'] = df.v.cumprod()
   v  prod
0  1     1
1  2     2
2  3     6
3  4    24
4  5   120
5  6   720
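Since the original question partitions by a, the grouped pandas equivalent (a sketch using the question's columns) would be:

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 1, 1], 'b': [1, 2, 3, 4]})
# cumprod restarts the running product for each value of "a"
df['c'] = df.groupby('a')['b'].cumprod()
print(df)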