I am new to PySpark. I installed Anaconda on Ubuntu with "bash Anaconda2-4.0.0-Linux-x86_64.sh" and installed pyspark as well. Everything works fine in the terminal, but I want to work in Jupyter. I created a profile in my Ubuntu terminal as follows:
wanderer@wanderer-VirtualBox:~$ ipython profile create pyspark
[ProfileCreate] Generating default config file: u'/home/wanderer/.ipython/profile_pyspark/ipython_config.py'
[ProfileCreate] Generating default config file: u'/home/wanderer/.ipython/profile_pyspark/ipython_kernel_config.py'
wanderer@wanderer-VirtualBox:~$ export ANACONDA_ROOT=~/anaconda2
wanderer@wanderer-VirtualBox:~$ export PYSPARK_DRIVER_PYTHON=$ANACONDA_ROOT/bin/ipython
wanderer@wanderer-VirtualBox:~$ export PYSPARK_PYTHON=$ANACONDA_ROOT/bin/python
wanderer@wanderer-VirtualBox:~$ cd spark-1.5.2-bin-hadoop2.6/
wanderer@wanderer-VirtualBox:~/spark-1.5.2-bin-hadoop2.6$ PYTHON_OPTS="notebook" ./bin/pyspark
Python 2.7.11 |Anaconda 4.0.0 (64-bit)| (default, Dec 6 2015, 18:08:32)
Type "copyright", "credits" or "license" for more information.
IPython 4.1.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/24 15:27:42 INFO SparkContext: Running Spark version 1.5.2
16/04/24 15:27:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/24 15:27:53 INFO BlockManagerMasterEndpoint: Registering block manager localhost:33514 with 530.3 MB RAM, BlockManagerId(driver, localhost, 33514)
16/04/24 15:27:53 INFO BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/
Using Python version 2.7.11 (default, Dec 6 2015 18:08:32)
SparkContext available as sc, HiveContext available as sqlContext.
In [1]: sc
Out[1]: <pyspark.context.SparkContext at 0x7fc96cc6fd10>
In [2]: print sc.version
1.5.2
In [3]:
Below are the jupyter and ipython versions:
wanderer@wanderer-VirtualBox:~$ jupyter --version
4.1.0
wanderer@wanderer-VirtualBox:~$ ipython --version
4.1.2
I have tried to integrate the Jupyter notebook with pyspark, but every attempt failed. I want to work in Jupyter and have no idea how to wire the Jupyter notebook and pyspark together.
Can someone show how to integrate these components?
Answer 0 (score: 11)
Just run the command:
PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark
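Once the notebook opens, a quick way to confirm the integration worked is to poke at the SparkContext that the pyspark launcher injects. A minimal sanity check (note that sc is only predefined when the notebook was started through pyspark, not through a plain jupyter notebook):
# Run in a new notebook cell; sc is injected by the pyspark launcher.
print(sc.version)                                                    # e.g. 1.5.2
print(sc.parallelize([1, 2, 3, 4]).map(lambda x: x * x).collect())   # [1, 4, 9, 16]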
Answer 1 (score: 9)
Using nano or vim, add these two lines to pyspark:
PYSPARK_DRIVER_PYTHON="jupyter"
PYSPARK_DRIVER_PYTHON_OPTS="notebook"
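Note: if you put these two lines in a shell startup file such as ~/.bashrc rather than in the pyspark launcher script itself, they must be exported (export PYSPARK_DRIVER_PYTHON="jupyter" and export PYSPARK_DRIVER_PYTHON_OPTS="notebook") and the file reloaded with source ~/.bashrc before running pyspark again, so that the variables actually reach the pyspark child process.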
Answer 2 (score: 4)
Edit, October 2017
With Spark 2.2 and findspark this works well, and those env vars are not needed:
import findspark
findspark.init('/opt/spark')  # path to your Spark installation
import pyspark
sc = pyspark.SparkContext()
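As a quick check that the context created this way works (a minimal sketch; '/opt/spark' above stands in for wherever your Spark installation actually lives):
# Same session as above: findspark has already put pyspark on the path.
rdd = sc.parallelize(range(100))
print(rdd.sum())    # 4950
print(sc.version)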
OLD
The fastest way I found was to run:
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
pyspark
or the equivalent for jupyter. This should open an IPython notebook with pyspark enabled. You might also want to check out the Beaker notebook.