pyspark: expanding a DenseVector into a tuple in an RDD

Date: 2016-09-18 15:33:32

Tags: python apache-spark pyspark rdd

I have the following RDD, where each record is a (bigint, vector) tuple:

myRDD.take(5)

[(1, DenseVector([9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432])),
 (1, DenseVector([9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432])),
 (0, DenseVector([5.0, 20.0, 0.3444, 0.3295, 54.3122, 4.0])),
 (1, DenseVector([9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432])),
 (1, DenseVector([9.2463, 2.0, 0.392, 0.3381, 162.6437, 7.9432]))]

How can I expand the dense vector so that its elements become part of the tuple? That is, I want the above to become:

[(1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (0, 5.0, 20.0, 0.3444, 0.3295, 54.3122, 4.0),
 (1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (1, 9.2463, 2.0, 0.392, 0.3381, 162.6437, 7.9432)]

Thanks!

1 Answer:

Answer 0 (score: 1)

Well, since pyspark.ml.linalg.DenseVector (and its mllib counterpart) is iterable (it provides the __len__ and __getitem__ methods), you can treat it like any other Python collection, for example:

from pyspark.ml.linalg import DenseVector

def as_tuple(kv):
    """
    >>> as_tuple((1, DenseVector([9.25, 1.0, 0.31, 0.31, 162.37])))
    (1, 9.25, 1.0, 0.31, 0.31, 162.37)
    """
    k, v = kv
    # Use *v.toArray() if you want to support SparseVector as well.
    return (k, *v)
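
Applied to the RDD from the question, a minimal usage sketch (it assumes only the as_tuple function above and the question's myRDD):

myRDD.map(as_tuple).take(5)

[(1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (0, 5.0, 20.0, 0.3444, 0.3295, 54.3122, 4.0),
 (1, 9.2463, 1.0, 0.392, 0.3381, 162.6437, 7.9432),
 (1, 9.2463, 2.0, 0.392, 0.3381, 162.6437, 7.9432)]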

For Python 2, replace:

(k, *v)

with:

from itertools import chain

tuple(chain([k], v))

or:

(k, ) + tuple(v)
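
Putting the Python 2 form together, a complete sketch of the same function (the as_tuple_py2 name is just for illustration):

from itertools import chain

def as_tuple_py2(kv):
    # (k, *v) is a SyntaxError on Python 2, so build the tuple
    # by chaining the key with the vector's elements instead.
    k, v = kv
    return tuple(chain([k], v))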

If you want to convert the values to Python (rather than NumPy) scalars, use:

v.toArray().tolist()

in place of v.
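
For example, a variant that returns plain Python floats (the as_tuple_plain name is just for illustration; going through toArray() also makes it work for SparseVector):

def as_tuple_plain(kv):
    k, v = kv
    # toArray() yields a NumPy array; tolist() converts its
    # elements to plain Python floats.
    return (k, *v.toArray().tolist())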