I am trying, with Spark 2.2, to print the total number of elements in each partition of a DataFrame:

from pyspark.sql.functions import *
from pyspark.sql import SparkSession
def count_elements(splitIndex, iterator):
    n = sum(1 for _ in iterator)
    yield (splitIndex, n)
spark = SparkSession.builder.appName("tmp").getOrCreate()
num_parts = 3
df = spark.read.json("/tmp/tmp/gon_s.json").repartition(num_parts)
print("df has partitions."+ str(df.rdd.getNumPartitions()))
print("Elements across partitions is:" + str(df.rdd.mapPartitionsWithIndex(lambda ind, x: count_elements(ind, x)).take(3)))
The above code keeps failing with the following error:

  n = sum(1 for _ in iterator)
  File "/home/dev/wk/pyenv/py3/lib/python3.5/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 40, in _
    jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'

After removing the import below
from pyspark.sql.functions import *
the code works fine and prints:
skewed_large_df has partitions.3
The distribution of elements across partitions is:[(0, 1), (1, 2), (2, 2)]
What is causing this error, and how can I fix it?
Answer 0 (score: 4)
This is a great example of why you shouldn't use import *.
The line

from pyspark.sql.functions import *

brings all of the functions in the pyspark.sql.functions module into your namespace, including several that shadow Python builtins.
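You can see the shadowing directly with a minimal sketch like the one below, run in a plain Python session (it only needs pyspark to be importable, no running SparkContext):

import builtins

print(sum is builtins.sum)            # True: `sum` still resolves to the builtin

from pyspark.sql.functions import *   # rebinds `sum` (among others) in this namespace
import pyspark.sql.functions as f

print(sum is builtins.sum)            # False: the builtin is now shadowed
print(sum is f.sum)                   # True: `sum` is pyspark.sql.functions.sum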
The specific problem is in the count_elements function, on the line:

n = sum(1 for _ in iterator)
#   ^^^ - this is now pyspark.sql.functions.sum
You intended to call Python's builtin sum, but the import * shadowed it with pyspark.sql.functions.sum. That function tries to reach the JVM through the SparkContext, and inside the partition-level closure running on the workers there is no active SparkContext, which is exactly the 'NoneType' object has no attribute '_jvm' error in your traceback.
Do one of the following instead:
import pyspark.sql.functions as f
or
from pyspark.sql.functions import sum as sum_
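With the first option, the code from the question would look roughly like this (a sketch; only the import style changes, so the builtin sum is no longer shadowed):

import pyspark.sql.functions as f   # qualified import: nothing leaks into the global namespace
from pyspark.sql import SparkSession

def count_elements(splitIndex, iterator):
    n = sum(1 for _ in iterator)    # Python's builtin sum, as intended
    yield (splitIndex, n)

spark = SparkSession.builder.appName("tmp").getOrCreate()
num_parts = 3
df = spark.read.json("/tmp/tmp/gon_s.json").repartition(num_parts)
print("df has partitions." + str(df.rdd.getNumPartitions()))
print("Elements across partitions is:" +
      str(df.rdd.mapPartitionsWithIndex(count_elements).take(3)))

Here f is only used if you later call functions from pyspark.sql.functions; the point is that none of them shadow your builtins. Passing count_elements directly to mapPartitionsWithIndex also drops the redundant lambda wrapper from the original code.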