When I try to interact with Spark through pyspark, I get the following error.
java.io.IOException: Cannot run program "/Users/jwayne/anaconda/envs/ds/bin/python2.7": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 14 more
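For context, error=2 is ENOENT from the underlying exec call, i.e. the path does not exist on whichever machine actually tries to launch it. The same failure is easy to reproduce in plain Python (a minimal sketch; the path is deliberately fake):

import subprocess
try:
    # exec of a nonexistent binary fails the same way the stack trace shows
    subprocess.check_call(['/no/such/python2.7', '-c', 'pass'])
except OSError as e:
    print(e)  # [Errno 2] No such file or directory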
My code looks like the following.
from pyspark.sql.types import Row
records = [Row(fname='john{}'.format(i), lname='doe{}'.format(i)) for i in range(10)]
rdd = sc.parallelize(records)
sdf = rdd.toDF()
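For what it is worth, my understanding is that toDF() here is just sugar for SQLContext.createDataFrame, which runs a job to infer the schema from the Row objects, so the equivalent explicit call should be (a sketch, assuming the sqlContext the pyspark shell provides):

sdf = sqlContext.createDataFrame(rdd)  # same schema-inference job as rdd.toDF()
sdf.show()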
Before I start pyspark, I type the following.
export PYSPARK_PYTHON="/Users/jwayne/anaconda/envs/ds/bin/python"
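As far as I know, Spark also honors a separate PYSPARK_DRIVER_PYTHON variable for the driver/shell interpreter; setting both would look like this (a sketch, in case the driver/worker distinction matters here; I have only set PYSPARK_PYTHON):

export PYSPARK_PYTHON="/Users/jwayne/anaconda/envs/ds/bin/python"
export PYSPARK_DRIVER_PYTHON="/Users/jwayne/anaconda/envs/ds/bin/python"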
Then I start pyspark as follows.
pyspark --master spark://master:7077
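To confirm the variable actually makes it into the shell, I can check both the environment and the interpreter from inside the pyspark REPL (a minimal sketch):

import os, sys
print(os.environ.get('PYSPARK_PYTHON'))  # should echo the conda path exported above
print(sys.executable)                    # the interpreter the driver is actually running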
If I type which python, I get the following output.
/Users/jwayne/anaconda/envs/ds/bin/python
Typing /usr/bin/env python or /usr/bin/env python2.7, I get the following output.
Python 2.7.13 |Anaconda 4.3.1 (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
I am using conda to manage my Python environments. Before doing anything, I make sure to activate the correct environment: source activate ds. If I type /Users/jwayne/anaconda/envs/ds/bin/python2.7 or /Users/jwayne/anaconda/envs/ds/bin/python, I get a Python REPL. Any ideas what I am doing wrong?
My Spark cluster (v1.6.1) does not use conda. There, which python returns /usr/bin/python, and python --version returns Python 2.6.6. Should I install conda on my Spark cluster as well? Looking at the stack trace, it seems this problem occurred before it ever hit the Spark cluster; it seems to have happened on the driver side. Yet, as far as I can tell, that file/path does exist.
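One way I can think of to test that hunch (a sketch, assuming the sc from the pyspark shell) is to ask each task where it runs; if this job dies with the same IOException, the broken path is being looked up on the executors rather than on the driver:

def where(_):
    # report which host and which Python interpreter ran this task
    import socket, sys
    return (socket.gethostname(), sys.executable)

print(sc.parallelize(range(4), 4).map(where).collect())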
Again: any ideas what I am doing wrong?