The following SO question, How to run script in Pyspark and drop into IPython shell when done?, explains how to launch a pyspark script:
%run -d myscript.py
But how do we access the existing Spark context?

Just creating a new one does not work:
----> sc = SparkContext("local", 1)
ValueError: Cannot run multiple SparkContexts at once; existing
SparkContext(app=PySparkShell, master=local) created by <module> at
/Library/Python/2.7/site-packages/IPython/utils/py3compat.py:204
But trying to use an existing one... well, which existing one?
In [50]: for s in filter(lambda x: 'SparkContext' in repr(x[1]) and len(repr(x[1])) < 150, locals().iteritems()):
             print s

('SparkContext', <class 'pyspark.context.SparkContext'>)
i.e. there is no variable holding a SparkContext instance; the only match is the class itself.
Answer 0 (score: 35):
Import SparkContext:

from pyspark.context import SparkContext

then call the static method on SparkContext:
sc = SparkContext.getOrCreate()
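For example, if the script is run from inside an already-running pyspark shell, getOrCreate() hands back the shell's context instead of raising the ValueError above. A minimal sketch (the file name and printed property are illustrative):

# myscript.py -- minimal sketch, assuming it runs inside a pyspark/IPython
# shell where a SparkContext already exists (e.g. via %run myscript.py)
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()  # reuses the shell's existing context
print(sc.master)                 # e.g. 'local' -- the same master the shell reported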
Answer 1 (score: 2):
A standalone Python wordcount script: a reusable Spark context written with contextmanager:

"""SimpleApp.py"""
from contextlib import contextmanager
from pyspark import SparkContext
from pyspark import SparkConf

SPARK_MASTER = 'local'
SPARK_APP_NAME = 'Word Count'
SPARK_EXECUTOR_MEMORY = '200m'

@contextmanager
def spark_manager():
    conf = SparkConf().setMaster(SPARK_MASTER) \
                      .setAppName(SPARK_APP_NAME) \
                      .set("spark.executor.memory", SPARK_EXECUTOR_MEMORY)
    spark_context = SparkContext(conf=conf)
    try:
        yield spark_context
    finally:
        spark_context.stop()

with spark_manager() as context:
    File = "/home/ramisetty/sparkex/README.md"  # Should be some file on your system
    textFileRDD = context.textFile(File)
    wordCounts = textFileRDD.flatMap(lambda line: line.split()) \
                            .map(lambda word: (word, 1)) \
                            .reduceByKey(lambda a, b: a + b)
    wordCounts.saveAsTextFile("output")

print "WordCount - Done"
To submit:
/bin/spark-submit SimpleApp.py
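Because spark_manager() stops the context on exit, the same manager can back any number of short jobs. A hypothetical second job reusing it (the line count is illustrative):

# Hypothetical second job reusing spark_manager(); each `with` block
# creates a fresh SparkContext and stops it when the block exits.
with spark_manager() as context:
    lines = context.textFile("/home/ramisetty/sparkex/README.md")
    print "Line count:", lines.count()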
Answer 2 (score: 1):
If you have already created a SparkSession:
spark = SparkSession \
        .builder \
        .appName("StreamKafka_Test") \
        .getOrCreate()
then you can access the "existing" SparkContext like this:
sc = spark.sparkContext
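The context obtained this way is a full SparkContext, usable for plain RDD work alongside the session's DataFrame API. A minimal sketch (assumes Spark 2.x, where SparkSession exists; the sample data is illustrative):

# Minimal sketch: the SparkContext pulled from an existing SparkSession
# drives ordinary RDD operations.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StreamKafka_Test").getOrCreate()
sc = spark.sparkContext
print(sc.parallelize([1, 2, 3]).sum())  # 6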
Answer 3 (score: 0):
When you type pyspark in a terminal, Python automatically creates the Spark context sc for you.
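So inside the shell no construction is needed at all; a minimal sketch of what works straight at the prompt:

# At the pyspark prompt, `sc` is already bound to the shell's SparkContext:
sc.parallelize(range(10)).count()  # returns 10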