AttributeError: 'NoneType' object has no attribute 'setCallSite'

Date: 2018-05-30 13:32:25

Tags: python pyspark statistics apache-spark-sql correlation

In PySpark, I want to compute the correlation between two DataFrame vectors using the following code (importing pyspark and calling createDataFrame both work without any problem):

from pyspark.ml.linalg import Vectors
from pyspark.ml.stat import Correlation
import pyspark

spark = pyspark.sql.SparkSession.builder.master("local[*]").getOrCreate()

data = [(Vectors.sparse(4, [(0, 1.0), (3, -2.0)]),),
        (Vectors.dense([4.0, 5.0, 0.0, 3.0]),)]
df = spark.createDataFrame(data, ["features"])

r1 = Correlation.corr(df, "features").head()
print("Pearson correlation matrix:\n" + str(r1[0]))

However, I get an AttributeError (AttributeError: 'NoneType' object has no attribute 'setCallSite'):

AttributeError                            Traceback (most recent call last)
<ipython-input-136-d553c1ade793> in <module>()
      6 df = spark.createDataFrame(data, ["features"])
      7 
----> 8 r1 = Correlation.corr(df, "features").head()
      9 print("Pearson correlation matrix:\n" + str(r1[0]))

/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in head(self, n)
   1130         """
   1131         if n is None:
-> 1132             rs = self.head(1)
   1133             return rs[0] if rs else None
   1134         return self.take(n)

/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in head(self, n)
   1132             rs = self.head(1)
   1133             return rs[0] if rs else None
-> 1134         return self.take(n)
   1135 
   1136     @ignore_unicode_prefix

/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in take(self, num)
    502         [Row(age=2, name=u'Alice'), Row(age=5, name=u'Bob')]
    503         """
--> 504         return self.limit(num).collect()
    505 
    506     @since(1.3)

/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in collect(self)
    463         [Row(age=2, name=u'Alice'), Row(age=5, name=u'Bob')]
    464         """
--> 465         with SCCallSiteSync(self._sc) as css:
    466             port = self._jdf.collectToPython()
    467         return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))

/usr/local/lib/python3.6/dist-packages/pyspark/traceback_utils.py in __enter__(self)
     70     def __enter__(self):
     71         if SCCallSiteSync._spark_stack_depth == 0:
---> 72             self._context._jsc.setCallSite(self._call_site)
     73         SCCallSiteSync._spark_stack_depth += 1
     74 

AttributeError: 'NoneType' object has no attribute 'setCallSite'

Any solutions?

3 answers:

Answer 0: (score: 2)

I ran into the same error not only with Correlation.corr(...) on a DataFrame, but also when calling ldaModel.describeTopics().

It is most likely a SPARK bug.

When creating the result DataFrame, they forget to initialize the DataFrame::_sc._jsc member.

Every DataFrame normally has this member initialized with a proper JavaObject.
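The diagnosis above can be illustrated without Spark at all. The sketch below uses stand-in classes (FakeJsc, FakeDataFrame are hypothetical names, not real pyspark classes) to mimic how SCCallSiteSync calls `setCallSite` on the DataFrame's context: if that reference was never initialized and is None, the attribute access itself raises the exact error from the question.

```python
# Pure-Python mock of the failure: SCCallSiteSync.__enter__ effectively does
# self._context._jsc.setCallSite(...), so a None context reference raises
# AttributeError before any Spark work happens. Stand-in classes only.

class FakeJsc:
    """Stands in for the JVM gateway object pyspark keeps in _jsc."""
    def setCallSite(self, site):
        self.site = site

class FakeDataFrame:
    def __init__(self, sc):
        self._sc = sc  # the member Spark sometimes leaves uninitialized (the bug)

    def collect(self):
        # mirrors the line in traceback_utils.py that appears in the traceback
        self._sc.setCallSite("collect at <stdin>")
        return []

broken = FakeDataFrame(None)
try:
    broken.collect()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'setCallSite'

# Re-attaching a live handle (what the workarounds below do for real
# DataFrames) makes the same call succeed:
broken._sc = FakeJsc()
assert broken.collect() == []
```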

Answer 1: (score: 0)

There is an open issue for this:

https://issues.apache.org/jira/browse/SPARK-27335?jql=text%20~%20%22setcallsite%22

The poster suggests forcing the DataFrame's backend to sync with the Spark context:

df.sql_ctx.sparkSession._jsparkSession = spark._jsparkSession
df._sc = spark._sc

This worked for us; hopefully it works in other cases as well.
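The two workaround lines can be wrapped in a small helper. The sketch below is hypothetical (the `resync` name is mine, not from the JIRA issue) and is demonstrated with `SimpleNamespace` stand-ins so it runs without Spark; on a real cluster you would pass the actual DataFrame and the live SparkSession.

```python
# Hypothetical helper around the JIRA workaround: point the DataFrame's
# cached JVM-session and SparkContext handles at the live ones.
from types import SimpleNamespace

def resync(df, spark):
    """Re-attach df's stale backend handles to the live Spark session."""
    df.sql_ctx.sparkSession._jsparkSession = spark._jsparkSession
    df._sc = spark._sc
    return df

# Stand-ins: a "live" session and a DataFrame whose handles are stale/None.
spark = SimpleNamespace(_jsparkSession="live-jvm-session", _sc="live-context")
df = SimpleNamespace(
    sql_ctx=SimpleNamespace(sparkSession=SimpleNamespace(_jsparkSession=None)),
    _sc=None,
)

resync(df, spark)
assert df._sc == "live-context"
assert df.sql_ctx.sparkSession._jsparkSession == "live-jvm-session"
```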

Answer 2: (score: 0)

There are several possible causes of this AttributeError:

  1. You stopped your SparkContext before initializing one of the xContexts (where x could be SQL or Hive). For example:

    sc = SparkContext.getOrCreate(conf=conf)
    sc.stop()               # the SparkContext is stopped here ...
    spark = SQLContext(sc)  # ... yet still handed to SQLContext
    
  2. Your Spark session is out of sync with the cluster.

So simply restart your Jupyter notebook kernel, or restart the application (not just the Spark context), and it will work.
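Cause 1 above can also be shown without Spark. In the sketch below, the class names are stand-ins for pyspark's internals (not the real classes): stopping the context clears its JVM handle, so a dependent context built afterwards carries a dead reference, which is exactly what later blows up with a NoneType AttributeError.

```python
# Pure-Python mock of cause 1: creating a dependent context from an
# already-stopped SparkContext leaves a None JVM handle behind.
# FakeSparkContext / FakeSQLContext are illustrative stand-ins only.

class FakeSparkContext:
    def __init__(self):
        self._jsc = object()  # stands in for the live JVM gateway object
    def stop(self):
        self._jsc = None      # pyspark clears the handle on stop()

class FakeSQLContext:
    def __init__(self, sc):
        self._sc = sc         # keeps a reference to whatever context it got

# Wrong order (the bug from the answer's example):
sc = FakeSparkContext()
sc.stop()                     # stopped too early
sqlc = FakeSQLContext(sc)
assert sqlc._sc._jsc is None  # any later call through _jsc raises AttributeError

# Correct order: build dependent contexts first, stop the SparkContext last.
sc2 = FakeSparkContext()
sqlc2 = FakeSQLContext(sc2)
assert sqlc2._sc._jsc is not None
sc2.stop()
```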