I have two RDDs, and I want to multiply elements between them element-wise.
Let's say I have the following RDDs (as an example):
a = ((1,[0.28,1,0.55]),(2,[0.28,1,0.55]),(3,[0.28,1,0.55]))
aRDD = sc.parallelize(a)
b = ((1,[0.28,0,0]),(2,[0,0,0]),(3,[0,1,0]))
bRDD = sc.parallelize(b)
As you can see, b is sparse, and I want to avoid multiplying zero values with the other values. I'm doing the following:
from pyspark.mllib.linalg import Vectors

def create_sparce_matrix(a_list):
    # Keep only the non-zero entries and build a SparseVector from them.
    length = len(a_list)
    index = [i for i, e in enumerate(a_list) if e != 0]
    value = [e for i, e in enumerate(a_list) if e != 0]
    return Vectors.sparse(length, index, value)

brdd = bRDD.map(lambda kv: (kv[0], create_sparce_matrix(kv[1])))
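As a quick sanity check, the conversion can be tested locally (a minimal sketch; the output shown in the comment is an assumption based on SparseVector's string form):

print(create_sparce_matrix([0.28, 0, 0]))
# expected: (3,[0],[0.28]) - only the non-zero entry at index 0 is stored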
And the multiplication:
combinedRDD = aRDD + brdd
result = combinedRDD.reduceByKey(lambda a, b: [c * d for c, d in zip(a, b)])
It seems that I can't multiply a SparseVector with a list inside the RDD. Is there a way to do this? Or, when one of the two RDDs has many zero values, is there another efficient way to multiply the elements?
Answer 0 (score: 1)
One way you can handle this is to convert aRDD to an RDD[DenseVector]:
from pyspark.mllib.linalg import SparseVector, DenseVector, Vectors
aRDD = sc.parallelize(a).mapValues(DenseVector)
bRDD = sc.parallelize(b).mapValues(create_sparce_matrix)
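To see what the conversion produces, the first element of each RDD can be inspected (a short sketch; the outputs in the comments are assumptions, hand-derived from the example data):

print(aRDD.first())  # (1, DenseVector([0.28, 1.0, 0.55]))
print(bRDD.first())  # (1, SparseVector(3, {0: 0.28}))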
and use basic NumPy operations:
def mul(x, y):
    assert isinstance(x, DenseVector)
    assert isinstance(y, SparseVector)
    assert x.size == y.size
    # Multiply only at the sparse vector's non-zero indices;
    # every other position would be zero anyway.
    return SparseVector(y.size, y.indices, x[y.indices] * y.values)
aRDD.join(bRDD).mapValues(lambda xy: mul(*xy))
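Collecting on the example data should yield one sparse product per key (a sketch; the values in the comments are hand-computed from the example above, not verified output):

result = aRDD.join(bRDD).mapValues(lambda xy: mul(*xy))
print(sorted(result.collect()))
# [(1, SparseVector(3, {0: 0.0784})),  # 0.28 * 0.28
#  (2, SparseVector(3, {})),           # b's second row is all zeros
#  (3, SparseVector(3, {1: 1.0}))]     # 1 * 1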