apache-spark - Error launching pyspark on Windows

Asked: 2017-03-26 21:17:40

Tags: python hadoop apache-spark pyspark py4j

I'm trying to experiment with MLlib on Windows using Python, so it seems I need Spark, which in turn needs Hadoop. I have already installed Anaconda2, which includes Python 2.7, numpy, and so on.

I have been following this recipe, which seems to get me most of the way to where I need to be, but I think I'm stuck on one last error:

Python 2.7.13 |Anaconda 4.3.1 (64-bit)| (default, Dec 19 2016, 13:29:36) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "C:\spark\bin\..\python\pyspark\shell.py", line 43, in <module>
    spark = SparkSession.builder\
  File "C:\spark\python\pyspark\sql\session.py", line 179, in getOrCreate
    session._jsparkSession.sessionState().conf().setConfString(key, value)
  File "C:\spark\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "C:\spark\python\pyspark\sql\utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"

Notably, there is no error in this output about winutils.exe not being found.

Also, the exception originates on the Java side and is surfaced through py4j, but the underlying Java stack trace has been lost inside the IllegalArgumentException.
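Since winutils.exe and the related environment variables are the usual suspects for this error, a quick sanity check of the environment may help narrow things down. The helper below is my own diagnostic sketch (not from the original post, and `check_spark_env` is a hypothetical name): it reports whether SPARK_HOME and HADOOP_HOME are set, and whether winutils.exe exists where HADOOP_HOME implies it should be.

```python
import os

# Hypothetical diagnostic helper (not part of Spark): report the
# environment variables a Spark-on-Windows setup typically needs,
# and whether winutils.exe exists under %HADOOP_HOME%\bin.
def check_spark_env(environ=os.environ):
    report = {var: environ.get(var) for var in ("SPARK_HOME", "HADOOP_HOME")}
    hadoop_home = report.get("HADOOP_HOME")
    if hadoop_home:
        winutils = os.path.join(hadoop_home, "bin", "winutils.exe")
        report["winutils.exe found"] = os.path.exists(winutils)
    return report

if __name__ == "__main__":
    for key, value in check_spark_env().items():
        print(key, "=", value)
```

If HADOOP_HOME is unset or winutils.exe is missing, that would be consistent with the HiveSessionState instantiation failing, even though the output above does not mention it explicitly.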

Any guidance appreciated!

Cheers

0 Answers:

There are no answers yet