I want to calculate each row's percentage of the total and save it in a new column of a PySpark dataframe:

Time: 2018-09-01 14:53:29

Tags: python pyspark

The data should look like this:

product  total_spend  needed
a        10           10%
a        20           20%
a        30           30%
b        30           30%
b        10           10%

The code I used, which doesn't work:

df.withColumn('needed',df['total_spend']/F.sum(df['total_spend'])).show()

3 Answers:

Answer 0 (score: 3)

Spark doesn't work that way: you first have to collect the total back to the driver, and only then can you use it to compute the percentages. Sample code for that approach is below; there are other ways as well:

from pyspark.sql import functions as F

# Bring the grand total back to the driver as a plain Python value.
sum_spend = df.agg(F.sum(F.col("total_spend")).alias("sum_spend")).collect()[0][0]

# Scale each row by the collected total and append "%"; concat
# implicitly casts the numeric result to a string.
df.withColumn(
    "needed",
    F.concat((F.col("total_spend") * 100.0 / F.lit(sum_spend)), F.lit("%"))
).show()
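
One of the "other ways" alluded to above (a sketch, not part of the original answer) is an unpartitioned window, which makes the grand total available on every row without a round trip to the driver:

from pyspark.sql import Window
from pyspark.sql import functions as F

# A single window spanning the whole frame; Spark warns that no
# partition is defined, which is expected here.
w = Window.partitionBy()

df.withColumn(
    "needed",
    F.concat(F.col("total_spend") * 100.0 / F.sum("total_spend").over(w), F.lit("%"))
).show()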

Answer 1 (score: 0)

One possibility (Scala):

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

val df = sc.parallelize(Seq(
  ("A", "X", 2, 100), ("A", "X", 7, 100), ("B", "X", 10, 100),
  ("C", "X", 1, 100), ("D", "X", 50, 100), ("E", "X", 30, 100)
)).toDF("c1", "c2", "Val1", "Val2")

// Per-group sum divided by the grand total, using an empty window spec.
val df2 = df
  .groupBy("c1")
  .agg(sum("Val1").alias("sum"))
  .withColumn("fraction", col("sum") / sum("sum").over())

df2.show
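
Since the question is tagged pyspark, here is a rough Python translation of the above (a sketch, assuming an active SparkSession named spark):

from pyspark.sql import Window
from pyspark.sql import functions as F

df = spark.createDataFrame(
    [("A", "X", 2, 100), ("A", "X", 7, 100), ("B", "X", 10, 100),
     ("C", "X", 1, 100), ("D", "X", 50, 100), ("E", "X", 30, 100)],
    ["c1", "c2", "Val1", "Val2"],
)

# PySpark's Column.over() requires an explicit window spec, so use an
# unpartitioned window to get the grand total.
df2 = (
    df.groupBy("c1")
    .agg(F.sum("Val1").alias("sum"))
    .withColumn("fraction", F.col("sum") / F.sum("sum").over(Window.partitionBy()))
)

df2.show()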

Answer 2 (score: 0)

Please find the answer below (also Scala; like Answer 0, it pulls the grand total back to the driver, here via first):

df.withColumn("needed",concat(df.col("total_spend").multiply(100)/df.agg(sum(col("total_spend"))).first.mkString.toInt,lit("%"))).show()