How to find the sum / average of the SparseVector elements of a DataFrame in Spark / Scala?

Asked: 2018-04-04 17:37:33

Tags: scala spark-dataframe graphframes

I have the pageranks result of ParallelPersonalizedPageRank in GraphFrames. It is a DataFrame in which every element is a SparseVector, like this:

+---------------------------------------+
|           pageranks                   |
+---------------------------------------+
|(1887,[0, 1, 2,...][0.1, 0.2, 0.3, ...]|
|(1887,[0, 1, 2,...][0.2, 0.3, 0.4, ...]|
|(1887,[0, 1, 2,...][0.3, 0.4, 0.5, ...]|
|(1887,[0, 1, 2,...][0.4, 0.5, 0.6, ...]|
|(1887,[0, 1, 2,...][0.5, 0.6, 0.7, ...]|

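For context, the DataFrame above is produced by a call along these lines (a sketch only; the graph g, the source ids, and the parameter values are placeholders, not my actual setup):

import org.graphframes.GraphFrame

// Sketch only: g, the source ids, and the parameter values are placeholders.
val g: GraphFrame = ???
val ranks = g.parallelPersonalizedPageRank
  .resetProbability(0.15)
  .maxIter(10)
  .sourceIds(Array(0L, 1L, 2L))
  .run()

ranks.vertices.select("id", "pageranks").show(5)
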
What is the best way to add up all the elements of each SparseVector and produce the sum or the average? I suppose we could convert each SparseVector to a dense array with toArray and walk each array with two nested loops (a rough sketch of that idea follows the expected output below), ending up with something like:

+-----------+
|pageranks  |
+-----------+
|avg1       |
|avg2       |
|avg3       |
|avg4       |
|avg5       |
|...        |

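Here is roughly what that loop-based idea would look like (a sketch only; ranks stands for the GraphFrame with the pageranks column, and collecting everything to the driver obviously does not scale):

import org.apache.spark.ml.linalg.SparseVector

// Sketch of the collect-and-loop idea: pull all rows to the driver,
// densify each vector, and loop over its elements.
val avgs: Array[Double] = ranks.vertices
  .select("pageranks")
  .collect()                                   // outer loop: one pass per row
  .map { row =>
    val arr = row.getAs[SparseVector]("pageranks").toArray
    var sum = 0.0
    for (x <- arr) sum += x                    // inner loop over the elements
    sum / arr.length
  }
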
I am sure there must be a better way, but I could not find API documentation for SparseVector operations. Thanks!

1 Answer:

Answer 0: (score: 0)

I think I found a solution that neither collects (materializes) the results nor runs nested loops in Scala. Posting it here in case it is useful to someone else.

import org.apache.spark.ml.linalg.SparseVector
import spark.implicits._  // needed for the Dataset encoders and toDF below

// Convert each row's SparseVector into an Array[Double]
val ranksNursingArray = ranksNursing.vertices
  .orderBy("id")
  .select("pageranks")
  .map(r => r.getAs[SparseVector]("pageranks").toArray)

// Compute the average of each vector and keep it next to the original values
val ranksNursingAvg = ranksNursingArray
  .map(value => (value, value.sum / value.length))
  .toDF("pageranks", "pr-avg")

The final result looks like this:

+--------------------+--------------------+                                     
|           pageranks|              pr-avg|
+--------------------+--------------------+
|[1.52034575371428...|2.970332668789975E-5|
|[0.0, 0.0, 0.0, 0...|5.160299770346173E-6|
|[0.0, 0.0, 0.0, 0...|4.400537827779479E-6|
|[0.0, 0.0, 0.0, 0...|3.010621958524792...|
|[0.0, 0.0, 4.8987...|2.342424435412115E-5|
|[0.0, 0.0, 1.6895...|6.955151139681538E-6|
|[0.0, 0.0, 1.5669...| 5.47016001804886E-6|
|[0.0, 0.0, 0.0, 2...|2.303811469709906E-5|
|[0.0, 0.0, 0.0, 3...|1.985155979369427E-5|
|[0.0, 0.0, 0.0, 0...|1.411993797780601...|
+--------------------+--------------------+
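For what it's worth, a variant that stays entirely in the DataFrame API is also possible (a sketch, not part of the original answer; prAvg and ranksNursingAvg2 are names I made up). Since the unstored entries of a SparseVector are zero, summing the stored values and dividing by the full vector size already gives the average over all elements:

import org.apache.spark.ml.linalg.{SparseVector, Vector}
import org.apache.spark.sql.functions.{col, udf}

// Average over all elements of the vector; unstored entries of a
// SparseVector are zero, so only the stored values need to be summed.
val prAvg = udf { v: Vector =>
  v match {
    case sv: SparseVector => sv.values.sum / sv.size
    case dv               => dv.toArray.sum / dv.size
  }
}

val ranksNursingAvg2 = ranksNursing.vertices
  .orderBy("id")
  .withColumn("pr-avg", prAvg(col("pageranks")))

This avoids converting every vector to a dense array up front, which can matter when the vectors are large and mostly zero.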