I'm working on optimizing my Spark pipeline and trying to use a UDF together with an accumulator. I've gotten the accumulator working on its own, and wanted to see whether I'd get any speedup by using a UDF. But instead, when I wrap the accumulator in the UDF, it stays empty. Am I doing something specifically wrong? Is something going on with lazy execution such that, even with my .count, it still hasn't executed?
Input:
0,[0.11,0.22]
1,[0.22,0.33]
Output:
(0,0,0.11),(0,1,0.22),(1,0,0.22),(1,1,0.33)
Code:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}
import scala.collection.mutable

val accum = new MapAccumulator2d()
val session = SparkSession.builder().getOrCreate()
session.sparkContext.register(accum)

// Does not work - accumulator stays empty
val rowAccum = udf((itemId: Int, item: mutable.WrappedArray[Float]) => {
  val map = item
    .zipWithIndex
    .map(ff => ((itemId, ff._2), ff._1.toDouble))
    .toMap
  accum.add(map)
  itemId
})
dataFrame.select(rowAccum(col("itemId"), col("jaccardList"))).count

// Works
dataFrame.foreach(f => {
  val map = f.getAs[mutable.WrappedArray[Float]](1)
    .zipWithIndex
    .map(ff => ((f.getInt(0), ff._2), ff._1.toDouble))
    .toMap
  accum.add(map)
})

val list = accum.value.toList.map(f => (f._1._1, f._1._2, f._2))
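
For context, MapAccumulator2d is not shown in the question; here is a minimal sketch of what such a custom accumulator might look like, assuming it merges (itemId, index) -> value maps across tasks via AccumulatorV2 (the class name and summing-merge behavior are assumptions, not code from the question):

import org.apache.spark.util.AccumulatorV2
import scala.collection.mutable

// Hypothetical sketch: accumulates Map[(Int, Int), Double] partial results,
// summing values when the same (itemId, index) key arrives from multiple tasks.
class MapAccumulator2d
    extends AccumulatorV2[Map[(Int, Int), Double], Map[(Int, Int), Double]] {
  private val underlying = mutable.Map.empty[(Int, Int), Double]

  override def isZero: Boolean = underlying.isEmpty

  override def copy(): MapAccumulator2d = {
    val acc = new MapAccumulator2d
    acc.underlying ++= underlying
    acc
  }

  override def reset(): Unit = underlying.clear()

  // Fold one partial map into the running state
  override def add(v: Map[(Int, Int), Double]): Unit =
    v.foreach { case (k, d) => underlying(k) = underlying.getOrElse(k, 0.0) + d }

  override def merge(
      other: AccumulatorV2[Map[(Int, Int), Double], Map[(Int, Int), Double]]): Unit =
    add(other.value)

  override def value: Map[(Int, Int), Double] = underlying.toMap
}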
Answer (score: 1):
The only problem here appears to be using count to "trigger" the lazily-evaluated UDF: Spark is "smart" enough to realize that the select operation can't change the result of count, and therefore doesn't actually execute the UDF. Choosing a different action (e.g. collect) shows that the UDF does work and updates the accumulator.
Here's a (more concise) example:
// assumes spark-shell, where `sc` and `spark.implicits._` are already in scope
val accum = sc.longAccumulator
val rowAccum = udf((itemId: Int) => { accum.add(itemId); itemId })
val dataFrame = Seq(1,2,3,4,5).toDF("itemId")
dataFrame.select(rowAccum(col("itemId"))).count() // won't trigger UDF
println(s"RESULT: ${accum.value}") // prints 0
dataFrame.select(rowAccum(col("itemId"))).collect() // triggers UDF
println(s"RESULT: ${accum.value}") // prints 15
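
If you want count-like semantics while still forcing the UDF to run, one workaround (a sketch; the optimizer's exact pruning behavior can vary between Spark versions) is to go through the RDD API, which materializes the selected rows:

dataFrame.select(rowAccum(col("itemId"))).rdd.count() // rows are computed, so the UDF runs

Separately, note that Spark only guarantees exactly-once accumulator updates for accumulators used inside actions (as in the foreach variant from the question); updates made inside transformations like select may be applied more than once if tasks are retried or a stage is recomputed.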