I have just started working with Spark 2.0; until now I had been using Spark 1.6.1. Can someone help me set up a SparkSession using pyspark (Python)? I know the Scala examples available online are similar (here), but I was hoping for a walk-through directly in Python.
My specific case: I load Avro files from S3 in a Zeppelin Spark notebook, then build DataFrames and run various pyspark and SQL queries against them. All of my old queries use sqlContext. I know this is bad practice, but I started my notebook with
sqlContext = SparkSession.builder.enableHiveSupport().getOrCreate()
I can read in the Avro files with
mydata = sqlContext.read.format("com.databricks.spark.avro").load("s3:...
and build DataFrames without any problem. But once I start querying the DataFrames / temp tables, I keep getting "java.lang.NullPointerException" errors. I think this indicates a translation problem (e.g. old queries that worked in 1.6.1 but need to be adjusted for 2.0). The error occurs regardless of the query type. So I am assuming that
1.) the sqlContext alias is a bad idea
and
2.) I need to set up a SparkSession properly.
So if anyone could show me how this is done, or perhaps explain the differences they know of between the Spark versions, I would greatly appreciate it. Please let me know if I need to elaborate on this question, and I apologize if it is convoluted.
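For reference, the rest of my workflow looks roughly like this (the table name is just a placeholder):
mydata.registerTempTable("mydata")                        # old 1.6-style registration
results = sqlContext.sql("SELECT count(*) FROM mydata")   # this is where the NullPointerException shows up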
Answer 0 (score: 16)
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('abc').getOrCreate()
Now, to read in a .csv file you can use
df=spark.read.csv('filename.csv',header=True)
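Once the session exists, the SQL side of the question would look roughly like this (the view name is just an example):
df.createOrReplaceTempView("mytable")               # Spark 2.x replacement for registerTempTable
spark.sql("SELECT count(*) FROM mytable").show()    # run SQL through the SparkSession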
Answer 1 (score: 8)
From here: http://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html
You can create a Spark session with:
>>> from pyspark.conf import SparkConf
>>> from pyspark.sql import SparkSession
>>> SparkSession.builder.config(conf=SparkConf())
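A minimal, self-contained variant of the same idea (the app name and master below are just example values):
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf().setAppName("example-app").setMaster("local[*]")   # example values
spark = SparkSession.builder.config(conf=conf).getOrCreate()         # reuses an existing session if one is already running
print(spark.version)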
Answer 2 (score: 7)
As you can see in the Scala example, SparkSession is part of the sql module, and the same is true in Python. See the pyspark sql module documentation:
class pyspark.sql.SparkSession(sparkContext, jsparkSession=None)
The entry point to programming Spark with the Dataset and DataFrame API. A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files. To create a SparkSession, use the following builder pattern:
>>> spark = SparkSession.builder \
... .master("local") \
... .appName("Word Count") \
... .config("spark.some.config.option", "some-value") \
... .getOrCreate()
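Once created, this spark object takes over the roles that sqlContext played in 1.6; for example (the path and table name below are just placeholders):
spark.read.parquet("some/path")            # read files
spark.sql("SELECT * FROM sometable")       # run SQL over registered tables
spark.catalog.cacheTable("sometable")      # cache a table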
Answer 3 (score: 0)
spark = SparkSession.builder\
.master("local")\
.enableHiveSupport()\
.getOrCreate()
spark.conf.set("spark.executor.memory", '8g')
spark.conf.set('spark.executor.cores', '3')
spark.conf.set('spark.cores.max', '3')
spark.conf.set("spark.driver.memory",'8g')
sc = spark.sparkContext
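Note that resource settings such as spark.executor.memory, spark.executor.cores and spark.cores.max are fixed once the underlying SparkContext has started, so setting them afterwards with spark.conf.set generally has no effect. A sketch that passes the same values at build time instead (the values are only examples):
spark = SparkSession.builder \
    .master("local") \
    .config("spark.executor.memory", "8g") \
    .config("spark.executor.cores", "3") \
    .config("spark.cores.max", "3") \
    .config("spark.driver.memory", "8g") \
    .enableHiveSupport() \
    .getOrCreate()
sc = spark.sparkContext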
Answer 4 (score: -1)
Here is a useful Python SparkSession class I developed:
#!/bin/python
# -*- coding: utf-8 -*-
######################
# SparkSession class #
######################
class SparkSession:
    # - Notes:
    #   The main object is the Spark Context ('sc' object).
    #   All new Spark sessions ('spark' objects) share the same underlying Spark Context ('sc' object) inside the same JVM,
    #   but for each Spark Context the temporary tables and registered functions are isolated.
    #   You can't create a new Spark Context in another JVM by using 'sc = SparkContext(conf)',
    #   but it is possible to create several Spark Contexts in the same JVM by setting 'spark.driver.allowMultipleContexts' to true (not recommended).
    # - See:
    #   https://medium.com/@achilleus/spark-session-10d0d66d1d24
    #   https://stackoverflow.com/questions/47723761/how-many-sparksessions-can-a-single-application-have
    #   https://stackoverflow.com/questions/34879414/multiple-sparkcontext-detected-in-the-same-jvm
    #   https://stackoverflow.com/questions/39780792/how-to-build-a-sparksession-in-spark-2-0-using-pyspark
    #   https://stackoverflow.com/questions/47813646/sparkcontext-getorcreate-purpose?noredirect=1&lq=1

    from pyspark.sql import SparkSession  # available to the methods below as self.SparkSession

    spark = None    # The Spark Session
    sc = None       # The Spark Context
    scConf = None   # The Spark Context conf

    def _init(self):
        self.sc = self.spark.sparkContext
        self.scConf = self.sc.getConf()  # or self.scConf = self.spark.sparkContext._conf

    # Return the current Spark Session (singleton), otherwise create a new one
    def getOrCreateSparkSession(self, master=None, appName=None, config=None, enableHiveSupport=False):
        builder = self.SparkSession.builder
        if master is not None: builder = builder.master(master)
        if appName is not None: builder = builder.appName(appName)
        if config is not None: builder = builder.config(conf=config)  # 'config' is assumed to be a SparkConf
        if enableHiveSupport: builder = builder.enableHiveSupport()
        self.spark = builder.getOrCreate()
        self._init()
        return self.spark

    # Return the current Spark Context (singleton), otherwise create a new one via getOrCreateSparkSession()
    def getOrCreateSparkContext(self, master=None, appName=None, config=None, enableHiveSupport=False):
        self.getOrCreateSparkSession(master, appName, config, enableHiveSupport)
        return self.sc

    # Create a new Spark session from the current Spark session (with isolated SQL configurations).
    # The new Spark session shares the underlying SparkContext and cached data,
    # but the temporary tables and registered functions are isolated.
    def createNewSparkSession(self, currentSparkSession):
        self.spark = currentSparkSession.newSession()
        self._init()
        return self.spark

    def getSparkSession(self):
        return self.spark

    def getSparkSessionConf(self):
        return self.spark.conf

    def getSparkContext(self):
        return self.sc

    def getSparkContextConf(self):
        return self.scConf

    def getSparkContextConfAll(self):
        return self.scConf.getAll()

    def setSparkContextConfAll(self, properties):
        # Properties example: [('spark.executor.memory', '4g'), ('spark.app.name', 'Spark Updated Conf'), ('spark.executor.cores', '4'), ('spark.cores.max', '4')]
        self.scConf = self.scConf.setAll(properties)  # 'properties' is a list of (key, value) pairs

    # Stop (clear) the active SparkSession for the current thread.
    #def stopSparkSession(self):
    #    return self.spark.clearActiveSession()

    # Stop the underlying SparkContext.
    def stopSparkContext(self):
        self.spark.stop()  # or self.sc.stop()

    # Returns the active SparkSession for the current thread, returned by the builder.
    #def getActiveSparkSession(self):
    #    return self.spark.getActiveSession()

    # Returns the default SparkSession that is returned by the builder.
    #def getDefaultSession(self):
    #    return self.spark.getDefaultSession()
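A quick usage sketch for this helper class (the master, app name and view name are hypothetical values):
# Hypothetical usage of the helper class above
session = SparkSession()   # the helper class defined above, not pyspark.sql.SparkSession
spark = session.getOrCreateSparkSession(master="local[*]", appName="demo")
df = spark.range(10)                               # small demo DataFrame
df.createOrReplaceTempView("numbers")
spark.sql("SELECT count(*) AS n FROM numbers").show()
print(session.getSparkContextConfAll())            # inspect the underlying SparkContext configuration
session.stopSparkContext()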