Although questions about computing a weighted average in Spark have been asked before, in this question I'm asking about using Datasets/DataFrames rather than RDDs.
How do I compute a weighted average in Spark? I have two columns: a count and a previous average:
case class Stat(name: String, count: Int, average: Double)
val statset = spark.createDataset(Seq(Stat("NY", 1, 5.0),
                                      Stat("NY", 2, 1.5),
                                      Stat("LA", 12, 1.0),
                                      Stat("LA", 15, 3.0)))
I'd like to be able to compute the weighted average like this:
display(statset.groupBy($"name").agg(sum($"count").as("count"),
                                     weightedAverage($"count", $"average").as("average")))
You can get close with a UDF: collect each group's counts and averages into arrays, pack them into a struct, and do the weighted-average arithmetic in Scala:
val weightedAverage = udf(
  (row: Row) => {
    val counts = row.getAs[WrappedArray[Int]](0)
    val averages = row.getAs[WrappedArray[Double]](1)
    // Fold over the group's values, accumulating the total count and the weighted total
    val (count, total) = (counts zip averages).foldLeft((0, 0.0)) {
      case ((cumcount: Int, cumtotal: Double), (newcount: Int, newaverage: Double)) =>
        (cumcount + newcount, cumtotal + newcount * newaverage)
    }
    total / count // Tested by returning count here and then extracting. Got same result as sum.
  }
)
display(statset.groupBy($"name").agg(sum($"count").as("count"),
                                     weightedAverage(struct(collect_list($"count"),
                                                            collect_list($"average"))).as("average")))
(Thanks to the answers to Passing a list of tuples as a parameter to a spark udf in scala for help with writing this.)
Newbies: use these imports:
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import scala.collection.mutable.WrappedArray
Is there a way to do this with built-in column functions rather than a UDF? The UDF feels clunky, and if the numbers get large you have to convert the Ints to Longs.
Answer 0 (score: 2)
It looks like you can do it in two passes:
// First pass: collect the global count; second pass: divide the weighted sum by it
val totalCount = statset.select(sum($"count")).collect.head.getLong(0)
statset.select(lit(totalCount) as "count",
               sum($"average" * $"count" / lit(totalCount)) as "average").show
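If the extra pass is undesirable, here is a minimal single-pass sketch (assuming the same statset as above) that keeps the division inside one aggregation:

// One pass, no grouping; sum over an Int column already yields a Long
statset.agg(sum($"count").as("count"),
            (sum($"count" * $"average") / sum($"count")).as("average")).show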
Or, including the grouping from your question:
display(statset.groupBy($"name").agg(sum($"count").as("count"),
                                     sum($"count" * $"average").as("total"))
               .select($"name", $"count", ($"total" / $"count").as("average")))