Spark MLlib: apply a function to every element of a RowMatrix

Date: 2015-02-10 18:18:46

Tags: scala apache-spark apache-spark-mllib

I have a RowMatrix xw:

scala> xw
res109: org.apache.spark.mllib.linalg.distributed.RowMatrix = org.apache.spark.mllib.linalg.distributed.RowMatrix@8e74950

I want to apply a function to every element of the matrix:

f(x)=exp(-x*x)

The type of the matrix's elements can be seen here:

scala> xw.rows.first

res110: org.apache.spark.mllib.linalg.Vector = [0.008930720313311474,0.017169380001300985,-0.013414238595719104,0.02239106636801034,0.023009502628798143,0.02891937604244297,0.03378470969100948,0.03644030110678057,0.0031586143217048825,0.011230244437457062,0.00477455053405408,0.020251682490519785,-0.005429788421130285,0.011578489275815267,0.0019301805575977788,0.022513736483645713,0.009475039307158668,0.019457912132044935,0.019209006632742498,-0.029811133879879596]

My main problem is that I cannot call map on a Vector:

scala> xw.rows.map(row => row.map(e => breeze.numerics.exp(e)))
<console>:44: error: value map is not a member of org.apache.spark.mllib.linalg.Vector
              xw.rows.map(row => row.map(e => breeze.numerics.exp(e)))
                                     ^

scala> 

How can I solve this?

1 Answer:

Answer 0 (score: 6):

This assumes you know that you actually have a DenseVector (which seems to be the case here). You can call toArray on the vector, map over the resulting array, and then convert back to a DenseVector with Vectors.dense:

xw.rows.map{row => Vectors.dense(row.toArray.map{e => breeze.numerics.exp(e)})}
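
For reference, here is roughly what this looks like end-to-end with the imports it needs, using the question's f(x) = exp(-x*x) and wrapping the result back into a distributed matrix (a sketch: scala.math.exp is used in place of breeze.numerics.exp, which is equivalent for a Double, and the val names are purely illustrative):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// Apply f(x) = exp(-x*x) to every element of every row, then rebuild a RowMatrix.
val fRows = xw.rows.map { row =>
  Vectors.dense(row.toArray.map(x => math.exp(-x * x)))
}
val fMatrix = new RowMatrix(fRows)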

You could also do this on a SparseVector; it would be mathematically correct, but the conversion to an array could be extremely inefficient. Another option is to call row.copy and then use foreachActive, which makes sense for both dense and sparse vectors. However, copy may not be implemented for the particular Vector class you are using, and you cannot mutate the data if you do not know the vector's type. If you really need to support both sparse & dense vectors, I would do something like:

import org.apache.spark.mllib.linalg.{DenseVector, SparseVector, Vectors}

xw.rows.map{
  case denseVec: DenseVector =>
    Vectors.dense(denseVec.toArray.map{e => breeze.numerics.exp(e)})
  case sparseVec: SparseVector =>
    // we only need to update the values of the sparse vector -- the indices remain the same
    val newValues: Array[Double] = sparseVec.values.map{e => breeze.numerics.exp(e)}
    Vectors.sparse(sparseVec.size, sparseVec.indices, newValues)
}
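
If you need this transformation in more than one place, the same match can be factored into a small helper and applied to the rows; a sketch assuming the usual MLlib imports, where expElementwise is just an illustrative name introduced here:

import org.apache.spark.mllib.linalg.{DenseVector, SparseVector, Vector, Vectors}

// Element-wise exp over the stored values of an MLlib Vector,
// preserving the sparsity structure of SparseVectors.
def expElementwise(v: Vector): Vector = v match {
  case d: DenseVector  => Vectors.dense(d.values.map(e => breeze.numerics.exp(e)))
  case s: SparseVector => Vectors.sparse(s.size, s.indices, s.values.map(e => breeze.numerics.exp(e)))
}

val transformedRows = xw.rows.map(expElementwise)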