PySpark "'NoneType' object has no attribute '_jvm'" error

Date: 2018-03-25 21:54:47

Tags: python apache-spark pyspark apache-spark-sql

I'm using Spark 2.2 and trying to print the total number of elements in each partition of a DataFrame:
from pyspark.sql.functions import *
from pyspark.sql import SparkSession

def count_elements(splitIndex, iterator):
    n = sum(1 for _ in iterator)
    yield (splitIndex, n)

spark = SparkSession.builder.appName("tmp").getOrCreate()
num_parts = 3
df = spark.read.json("/tmp/tmp/gon_s.json").repartition(num_parts)
print("df has partitions."+ str(df.rdd.getNumPartitions()))
print("Elements across partitions is:" + str(df.rdd.mapPartitionsWithIndex(lambda ind, x: count_elements(ind, x)).take(3)))

The code above keeps failing with the following error:

  n = sum(1 for _ in iterator)
  File "/home/dev/wk/pyenv/py3/lib/python3.5/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 40, in _
    jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
After removing the import below:

from pyspark.sql.functions import *

the code works fine:

skewed_large_df has partitions.3
The distribution of elements across partitions is:[(0, 1), (1, 2), (2, 2)]

What causes this error, and how do I fix it?

1 Answer:

Answer 0 (score: 4)

This is a good example of why you shouldn't use import *.

from pyspark.sql.functions import *

brings every function in the pyspark.sql.functions module into your namespace, including several that shadow your Python builtins.

The specific problem is on this line inside the count_elements function:

n = sum(1 for _ in iterator)
#   ^^^ - this is now pyspark.sql.functions.sum

You intended to call the builtin sum, but import * masked it with pyspark.sql.functions.sum, which tries to build a Column expression on the driver's JVM (and fails inside a worker, where SparkContext._jvm is None).

Do one of the following instead:

import pyspark.sql.functions as f

from pyspark.sql.functions import sum as sum_
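The shadowing itself is plain Python and can be reproduced without Spark. The sketch below uses a hypothetical stand-in for pyspark.sql.functions.sum (it only mimics the name clash, not the real Column-building behavior) to show how the builtin gets masked and how the builtins module recovers it:

```python
import builtins

# Simulate what `from pyspark.sql.functions import *` does: rebind the
# name `sum` to a function that returns an expression-like object
# instead of adding numbers. This stand-in is hypothetical.
def sum(col):
    return "Column<sum({})>".format(col)

# The shadowed name no longer adds numbers:
shadowed = sum("x")          # returns the string "Column<sum(x)>"

# The builtin is still reachable explicitly via the builtins module:
n = builtins.sum(1 for _ in range(5))  # 5
```

Aliasing the import (import pyspark.sql.functions as f, or from pyspark.sql.functions import sum as sum_) avoids the clash in the first place, which is why both suggested fixes work.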