Error = 193, %1 is not a valid Win32 application, when piping to a Python script

Asked: 2018-12-23 10:07:17

Tags: python windows scala apache-spark rdd

I am new to Spark and am trying to figure out how the pipe method works. I have the following code in Scala:

// Local Spark context; "ppp" is the application name
val sc = new SparkContext("local[*]", "ppp", new SparkConf())
val distScript = "test.py"
val distScriptName = "test.py"
// Ship the script to every executor's working directory
sc.addFile(distScript)
val ipData = sc.parallelize(List("asd", "xyz", "zxcz", "sdfsfd", "Ssdfd", "Sdfsf"))
// Pipe each element through the script via stdin/stdout
val opData = ipData.pipe(SparkFiles.get(distScriptName)).collect()
opData.foreach(println)

test.py is:

#!/usr/bin/python
import sys
for line in sys.stdin:
    print line.upper()
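As written, test.py uses the Python 2 print statement, so it only runs under a Python 2 interpreter (consistent with its shebang). The per-line transformation it performs can be sketched in version-portable form; the helper function below is illustrative, not part of the original question:

```python
def upper_lines(lines):
    # The same transformation test.py applies to each element
    # Spark pipes in on stdin: uppercase the line.
    return [line.upper() for line in lines]

# Quick local check, no Spark required:
print(upper_lines(["asd", "xyz"]))  # → ['ASD', 'XYZ']
```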

I am using the IntelliJ IDE on Windows 7 x64. When I run the program, this error occurs:

18/12/23 12:47:03 ERROR Executor: Exception in task 2.0 in stage 0.0 (TID 2)
java.io.IOException: Cannot run program "C:\Users\Farzad\AppData\Local\Temp\
spark-8e107381-7bdb-45cc-9f47-0d6d924cf3d1\userFiles-141d306b-f07c-4a0c-8281-dc9a6c09624c
\test.py": CreateProcess error=193, %1 is not a valid Win32 application
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.spark.rdd.PipedRDD.compute(PipedRDD.scala:111)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: CreateProcess error=193, %1 is not a valid Win32 application
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 9 more

How can I fix this error so that test.py runs from the Scala code?
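For context: CreateProcess error=193 is Windows reporting that the file it was asked to launch is not an executable image. Unix-like systems honor the #!/usr/bin/python shebang, but Windows does not, so piping directly to a bare .py path fails. The usual workaround is to invoke the interpreter explicitly, e.g. ipData.pipe("python " + SparkFiles.get(distScriptName)). A minimal, Spark-free sketch of that invocation follows; the temp-file path and the use of sys.executable in place of the "python" command are illustrative assumptions:

```python
import os
import subprocess
import sys
import tempfile

# Write a copy of test.py to a temp directory for the demo.
script = os.path.join(tempfile.mkdtemp(), "test.py")
with open(script, "w") as f:
    f.write("import sys\n"
            "for line in sys.stdin:\n"
            "    sys.stdout.write(line.upper())\n")

# Launch the script *through the interpreter*, the way
# pipe("python " + SparkFiles.get(...)) would, feeding the
# elements on stdin. Passing the bare script path instead of
# [interpreter, script] is what reproduces error 193 on Windows.
out = subprocess.run([sys.executable, script],
                     input="asd\nxyz\n",
                     capture_output=True, text=True)
print(out.stdout)  # ASD and XYZ, one per line
```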

0 Answers:

No answers yet.