How to add two sparse vectors in Spark using Python

Time: 2015-10-07 00:29:42

Tags: python apache-spark sparse-matrix

I have searched everywhere but cannot find how to add two sparse vectors using Python. I want to add two sparse vectors like these:

(1048576, {110522: 0.6931, 521365: 1.0986, 697409: 1.0986, 725041: 0.6931, 749730: 0.6931, 962395: 0.6931})

(1048576, {4471: 1.0986, 725041: 0.6931, 850325: 1.0986, 962395: 0.6931})
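
These appear to be pyspark.mllib.linalg SparseVectors of size 1048576. A minimal sketch (assuming that representation) of the two vectors and the element-wise sum I am looking for:

from pyspark.mllib.linalg import Vectors

v1 = Vectors.sparse(1048576, {110522: 0.6931, 521365: 1.0986, 697409: 1.0986,
                              725041: 0.6931, 749730: 0.6931, 962395: 0.6931})
v2 = Vectors.sparse(1048576, {4471: 1.0986, 725041: 0.6931, 850325: 1.0986,
                              962395: 0.6931})

# Desired result: the union of the indices, with values summed where both
# vectors have an entry (725041 and 962395 become 0.6931 + 0.6931 = 1.3862)
# SparseVector(1048576, {4471: 1.0986, 110522: 0.6931, 521365: 1.0986,
#                        697409: 1.0986, 725041: 1.3862, 749730: 0.6931,
#                        850325: 1.0986, 962395: 1.3862})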

4 Answers:

Answer 0 (score: 4)

Something like this should work:

from pyspark.mllib.linalg import Vectors, SparseVector, DenseVector
import numpy as np

def add(v1, v2):
    """Add two sparse vectors
    >>> v1 = Vectors.sparse(3, {0: 1.0, 2: 1.0})
    >>> v2 = Vectors.sparse(3, {1: 1.0})
    >>> add(v1, v2)
    SparseVector(3, {0: 1.0, 1: 1.0, 2: 1.0})
    """
    assert isinstance(v1, SparseVector) and isinstance(v2, SparseVector)
    assert v1.size == v2.size 
    # Compute union of indices
    indices = set(v1.indices).union(set(v2.indices))
    # Not particularly efficient but we are limited by SPARK-10973
    # Create index: value dicts
    v1d = dict(zip(v1.indices, v1.values))
    v2d = dict(zip(v2.indices, v2.values))
    zero = np.float64(0)
    # Create dictionary index: (v1[index] + v2[index])
    values = {i: v1d.get(i, zero) + v2d.get(i, zero)
              for i in indices
              if v1d.get(i, zero) + v2d.get(i, zero) != zero}

    return Vectors.sparse(v1.size, values)
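
For example, assuming an RDD of equal-sized SparseVectors named vectors (a hypothetical name), the same function can be used to sum all of them:

# vectors: hypothetical RDD of equal-sized SparseVectors
total = vectors.reduce(add)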

If you prefer a single pass and don't care about introduced zeros, you can modify the above code like this:

from collections import defaultdict

def add(v1, v2):
    assert isinstance(v1, SparseVector) and isinstance(v2, SparseVector)
    assert v1.size == v2.size
    values = defaultdict(float) # Dictionary with default value 0.0
    # Add values from v1
    for i in range(v1.indices.size):
        values[v1.indices[i]] += v1.values[i]
    # Add values from v2
    for i in range(v2.indices.size):
        values[v2.indices[i]] += v2.values[i]
    return Vectors.sparse(v1.size, dict(values))
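
A small illustration (my own example) of the "introduced zeros" difference: when two entries cancel out, the single-pass version keeps an explicit zero entry, while the first version drops it:

v1 = Vectors.sparse(3, {0: 1.0})
v2 = Vectors.sparse(3, {0: -1.0})
add(v1, v2)
## first version:  SparseVector(3, {})
## second version: SparseVector(3, {0: 0.0})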

If you want, you can try monkey-patching SparseVector:

SparseVector.__add__ = add
v1 = Vectors.sparse(5, {0: 1.0, 2: 3.0})
v2 = Vectors.sparse(5, {0: -3.0, 2: -3.0, 4: 10})
v1 + v2
## SparseVector(5, {0: -2.0, 4: 10.0})

Alternatively, you should be able to use scipy.sparse:

from scipy.sparse import csc_matrix
from pyspark.mllib.regression import LabeledPoint

m1 = csc_matrix((
   v1.values,
   (v1.indices, [0] * v1.numNonzeros())),
   shape=(v1.size, 1))

m2 = csc_matrix((
   v2.values,
   (v2.indices, [0] * v2.numNonzeros())),
   shape=(v2.size, 1))

LabeledPoint(0, m1 + m2)

Answer 1 (score: 1)

I ran into the same problem, but I couldn't get the other solutions to finish in less than a few hours on a medium-sized dataset (~20M records, vector size = 10k).

So I took another, related approach that finishes in just a few minutes:

import numpy as np
from pyspark.mllib.linalg import Vectors

def to_sparse(v):
  # Keep only the non-zero entries of a dense (numpy) vector
  values = {i: e for i, e in enumerate(v) if e != 0}
  return Vectors.sparse(v.size, values)

# rdd.aggregate returns a single dense numpy array on the driver,
# so the conversion back to a sparse vector happens only once, at the end
summed = rdd.aggregate(
  np.zeros(vector_size),
  lambda acc, v: acc + v.toArray(),
  lambda acc1, acc2: acc1 + acc2
)
result = to_sparse(summed)

The basic idea is not to build a sparse vector at every step of the reduce, but only once at the end, and to let numpy do all the vector-addition work. Even with aggregateByKey, which requires shuffling dense vectors, it still takes only a few minutes.
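
A minimal sketch of that aggregateByKey variant (assuming a hypothetical pair RDD named keyed_rdd of (key, SparseVector) pairs and a known vector_size):

# keyed_rdd: hypothetical RDD of (key, SparseVector) pairs
summed_by_key = keyed_rdd.aggregateByKey(
  np.zeros(vector_size),                  # per-key dense accumulator
  lambda acc, v: acc + v.toArray(),       # fold each sparse vector into the accumulator
  lambda acc1, acc2: acc1 + acc2          # merge accumulators across partitions
).mapValues(to_sparse)                    # convert back to sparse once, at the end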

Answer 2 (score: 1)

All of the functions above add two sparse vectors of the same size. I was trying to combine sparse vectors of different lengths and found something similar to my requirement, in Java, here: How to combine or merge two sparse vectors in Spark using Java? So I wrote this function in Python, as follows:

import numpy as np
from pyspark.mllib.linalg import SparseVector

def combineSparseVectors(svs):
    """Concatenate a list of SparseVectors into one long SparseVector."""
    size = 0
    nonzeros = 0
    for sv in svs:
        size += sv.size
        nonzeros += len(sv.indices)
    if nonzeros == 0:
        return None
    indices = np.empty(nonzeros, dtype=np.int32)
    values = np.empty(nonzeros)
    pointer_D = 0   # write position in the output arrays
    totalPt_D = 0   # offset added to the indices of the current vector
    pointer_V = 0
    for sv in svs:
        for i in sv.indices:
            indices[pointer_D] = i + totalPt_D
            pointer_D += 1
        totalPt_D += sv.size
        for d in sv.values:
            values[pointer_V] = d
            pointer_V += 1
    return SparseVector(size, indices, values)
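
For example (my own toy vectors), concatenating a size-3 and a size-4 vector yields a size-7 vector with the second vector's indices shifted by 3:

v1 = Vectors.sparse(3, {0: 1.0, 2: 3.0})
v2 = Vectors.sparse(4, {1: 5.0})
combineSparseVectors([v1, v2])
## SparseVector(7, {0: 1.0, 2: 3.0, 4: 5.0})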

Answer 3 (score: 0)

The other answers go against Spark's programming model. Put more simply, just convert the pyspark.ml.linalg.SparseVector (urOldVec in the code below) into a scipy.sparse.csc_matrix object (i.e., a column vector) and then use the "+" operator:

import scipy.sparse as sps
urNewVec = sps.csc_matrix(urOldVec) 
urNewVec + urNewVec

As the documentation for pyspark.ml.linalg states, scipy.sparse vectors can be passed into pyspark.