I am trying to understand how coalesce decides how to combine the initial partitions into the final partitions, and apparently "preferred locations" has something to do with it.
According to this question, Scala Spark has a function preferredLocations(split: Partition)
that can identify this. But I am not familiar with the Scala side of Spark. Is there a way to determine the preferred location for a given row or partition ID at the PySpark level?
Answer 0 (score: 1)
Yes, in theory this is possible. First, some example data that forces some form of preference (there may be a simpler example):
rdd1 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
rdd2 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
# Force caching so downstream plan has preferences
rdd1.cache().count()
rdd3 = rdd1.union(rdd2)
Now you can define a helper:
from pyspark import SparkContext

def prefered_locations(rdd):
    def to_py_generator(xs):
        """Convert Scala List to Python generator"""
        j_iter = xs.iterator()
        while j_iter.hasNext():
            yield j_iter.next()

    # Get JVM
    jvm = SparkContext._active_spark_context._jvm
    # Get Scala RDD
    srdd = jvm.org.apache.spark.api.java.JavaRDD.toRDD(rdd._jrdd)
    # Get partitions
    partitions = srdd.partitions()
    return {
        p.index(): list(to_py_generator(srdd.preferredLocations(p)))
        for p in partitions
    }
Applying it:
prefered_locations(rdd3)
# {0: ['...'],
# 1: ['...'],
# 2: ['...'],
# 3: ['...'],
# 4: [],
# 5: [],
# 6: [],
# 7: []}
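The to_py_generator bridge in the helper works for any Py4J proxy that follows Scala's iterator protocol (an iterator() method returning an object with hasNext() and next()). A minimal pure-Python sketch, using hypothetical stand-in classes instead of a real JVM object, shows the pattern without needing a running Spark cluster:

```python
class FakeScalaIterator:
    """Stand-in mimicking Scala's Iterator interface: hasNext() / next()."""
    def __init__(self, items):
        self._items = items
        self._pos = 0

    def hasNext(self):
        return self._pos < len(self._items)

    def next(self):
        item = self._items[self._pos]
        self._pos += 1
        return item


class FakeScalaList:
    """Stand-in for a Py4J-wrapped Scala collection (hypothetical, for illustration)."""
    def __init__(self, items):
        self._items = list(items)

    def iterator(self):
        return FakeScalaIterator(self._items)


def to_py_generator(xs):
    """Convert a Scala-style collection to a Python generator."""
    j_iter = xs.iterator()
    while j_iter.hasNext():
        yield j_iter.next()


print(list(to_py_generator(FakeScalaList(["host1", "host2"]))))
# ['host1', 'host2']
```

The same conversion is what turns the Scala Seq returned by srdd.preferredLocations(p) into a plain Python list in the dictionary comprehension above.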