I am using pyspark in a Jupyter notebook. Here is how Spark is set up:
import findspark
findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')
import pyspark
from pyspark.sql import *
sc = pyspark.sql.SparkSession.builder.master("yarn-client").config("spark.executor.memory", "2g").config('spark.driver.memory', '1g').config('spark.driver.cores', '4').enableHiveSupport().getOrCreate()
sqlContext = SQLContext(sc)
Then when I do:
spark_df = sqlContext.createDataFrame(df_in)
where df_in is a pandas DataFrame, I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-1db231ce21c9> in <module>()
----> 1 spark_df = sqlContext.createDataFrame(df_in)
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
297 Py4JJavaError: ...
298 """
--> 299 return self.sparkSession.createDataFrame(data, schema, samplingRatio)
300
301 @since(1.3)
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in createDataFrame(self, data, schema, samplingRatio)
520 rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
521 else:
--> 522 rdd, schema = self._createFromLocal(map(prepare, data), schema)
523 jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
524 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in _createFromLocal(self, data, schema)
400 # convert python objects to sql data
401 data = [schema.toInternal(row) for row in data]
--> 402 return self._sc.parallelize(data), schema
403
404 @since(2.0)
AttributeError: 'SparkSession' object has no attribute 'parallelize'
Does anyone know what I did wrong? Thanks!
Answer (score: 14)
SparkSession is not a replacement for SparkContext but an equivalent of SQLContext. Just use it the same way you used SQLContext:
spark.createDataFrame(...)
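Putting it together, here is a minimal sketch of the corrected setup (assuming the same configuration and pandas DataFrame df_in from the question): build the session once, name it spark rather than sc, and call createDataFrame on it directly.

from pyspark.sql import SparkSession

# The builder returns a SparkSession, not a SparkContext, so naming it
# `spark` avoids the confusion in the original snippet.
spark = (SparkSession.builder
         .master("yarn-client")
         .config("spark.executor.memory", "2g")
         .config("spark.driver.memory", "1g")
         .config("spark.driver.cores", "4")
         .enableHiveSupport()
         .getOrCreate())

# df_in is the pandas DataFrame from the question.
spark_df = spark.createDataFrame(df_in)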
If you have to access SparkContext, use the sparkContext attribute:
spark.sparkContext
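For example, parallelize (the method the traceback complains about) lives on the SparkContext, not on the session itself, so with the setup above it is reached like this (small illustrative sketch):

# `parallelize` is a SparkContext method, which is why calling it through a
# SparkSession raised the AttributeError in the question.
rdd = spark.sparkContext.parallelize([1, 2, 3])
print(rdd.collect())  # [1, 2, 3]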
And if you need a SQLContext for backwards compatibility, you can:
SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)
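As a rough usage sketch (again assuming df_in from the question), this wrapper behaves like the old SQLContext but delegates to the session, which is exactly what the traceback shows (return self.sparkSession.createDataFrame(...)):

from pyspark.sql import SQLContext

# Wrap the existing session so legacy code that expects a SQLContext keeps working.
sqlContext = SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)

# This now delegates to spark.createDataFrame under the hood.
spark_df = sqlContext.createDataFrame(df_in)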