I have an IPython notebook with PySpark code that works fine on my machine, but when I try to run it on another machine it throws an error at this line (the rdd3 line):
rdd2 = sc.parallelize(list1)
rdd3 = rdd1.zip(rdd2).map(lambda ((x1,x2,x3,x4), y): (y,x2, x3, x4))
list = rdd3.collect()
The error I get is:
ValueError Traceback (most recent call last)
<ipython-input-7-9daab52fc089> in <module>()
---> 16 rdd3 = rdd1.zip(rdd2).map(lambda ((x1,x2,x3,x4), y): (y,x2, x3, x4))
/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in zip(self, other)
1960
1961 if self.getNumPartitions() != other.getNumPartitions():
-> 1962 raise ValueError("Can only zip with RDD which has the same number of partitions")
1963
1964 # There will be an Exception in JVM if there are different number
ValueError: Can only zip with RDD which has the same number of partitions

I don't understand why this error shows up on one machine but not on the other.
Answer 0 (score: 2)
zip is, generally speaking, a tricky operation. It requires both RDDs to have not only the same number of partitions, but also the same number of elements in each partition.
Apart from some special cases, this is guaranteed only when both RDDs share a common ancestor and there are no shuffles or operations that can change the number of elements (filter, flatMap) between that common ancestor and the current state. Typically, that means only map (1-to-1) transformations.
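Here is a minimal sketch of both failure modes, with made-up toy RDDs and assuming a live SparkContext sc. As a side note, sc.parallelize(list1) without an explicit numSlices uses sc.defaultParallelism, which depends on the machine's core count, so the partition counts in the question can easily differ between two machines:

# Same element count but different partition counts: zip fails immediately.
a = sc.parallelize(range(10), 2)
b = sc.parallelize(range(10), 5)
# a.zip(b)  # ValueError: Can only zip with RDD which has the same number of partitions

# Same partition count, but a filter changed the per-partition counts:
# the zipped RDD is created fine, yet the job fails once it actually runs.
c = sc.parallelize(range(10), 2)
d = sc.parallelize(range(10), 2).filter(lambda x: x % 3 != 0)
# c.zip(d).collect()  # fails: unequal number of items per partition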
If you know that the order is otherwise preserved but the number of partitions or the per-partition counts differ, you can realign the two RDDs by their indices and then zip them:
from operator import itemgetter

def custom_zip(rdd1, rdd2):
    # zipWithIndex produces (element, index) pairs; sort by the index.
    index = itemgetter(1)

    def prepare(rdd, npart):
        # Attach a global index, sort everything by that index into a
        # fixed number of partitions, then drop the index again.
        return (rdd.zipWithIndex()
                   .sortBy(index, numPartitions=npart)
                   .keys())

    # Target partition count: combined size of both inputs.
    npart = rdd1.getNumPartitions() + rdd2.getNumPartitions()
    return prepare(rdd1, npart).zip(prepare(rdd2, npart))
rdd1 = sc.parallelize(["a_{}".format(x) for x in range(20)], 5)
rdd2 = sc.parallelize(["b_{}".format(x) for x in range(20)], 10)
rdd1.zip(rdd2).take(5)
## ValueError Traceback (most recent call last)
## ...
## ValueError: Can only zip with RDD which has the same number of partitions
custom_zip(rdd1, rdd2).take(5)
## [('a_0', 'b_0'), ('a_1', 'b_1'), ('a_2', 'b_2'),
## ('a_3', 'b_3'), ('a_4', 'b_4')]
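As a design note, this realignment is not free: each prepare call sorts its whole input, which means a full shuffle per RDD, so custom_zip is considerably more expensive than a plain zip. If you control how both RDDs are built, a cheaper route (a hypothetical tweak to the question's code, valid only when rdd1 is also a parallelized collection with the same number of elements as list1) is to pin the partition count up front:

# Hypothetical fix for the question's code: create rdd2 with the same
# partition count as rdd1 instead of the machine-dependent default.
rdd2 = sc.parallelize(list1, rdd1.getNumPartitions())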
A Scala equivalent would look like this:
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

def prepare[T: ClassTag](rdd: RDD[T], n: Int) =
  rdd.zipWithIndex.sortBy(_._2, true, n).keys

def customZip[T: ClassTag, U: ClassTag](rdd1: RDD[T], rdd2: RDD[U]) = {
  val n = rdd1.partitions.size + rdd2.partitions.size
  prepare(rdd1, n).zip(prepare(rdd2, n))
}
val rdd1 = sc.parallelize((0 until 20).map(i => s"a_$i"), 5)
val rdd2 = sc.parallelize((0 until 20).map(i => s"b_$i"), 10)
rdd1.zip(rdd2)
// java.lang.IllegalArgumentException: Can't zip RDDs with unequal numbers of partitions
// at org.apache.spark.rdd.ZippedPartitionsBaseRDD.getPartitions(ZippedPartitionsRD
// ...
customZip(rdd1, rdd2).take(5)
// Array[(String, String)] =
// Array((a_0,b_0), (a_1,b_1), (a_2,b_2), (a_3,b_3), (a_4,b_4))