Error running the standalone app example in Scala using the Spark API

Date: 2014-02-27 13:09:08

Tags: scala apache-spark

I am unable to run the Spark example (https://spark.incubator.apache.org/docs/latest/quick-start.html#a-standalone-app-in-scala).

The sbt package command runs successfully, but the sbt run command fails with an error.
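
For reference, the project follows the layout from the linked quick-start guide: a SimpleApp.scala under src/main/scala and a simple.sbt at the project root. Roughly, the two files look like this (the Scala and Spark versions below are assumptions based on the 0.9.0-incubating release current at the time, and YOUR_SPARK_HOME is a placeholder for the local Spark installation):

SimpleApp.scala

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    // Any text file on the local filesystem will do; the quick-start uses Spark's README.
    val logFile = "YOUR_SPARK_HOME/README.md"
    // Local master, with the packaged jar listed so executors can fetch the application code.
    val sc = new SparkContext("local", "Simple App", "YOUR_SPARK_HOME",
      List("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

simple.sbt

name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.3"

libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"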

Output of sbt package

[info] Set current project to Simple Project (in build file:/home/raghuveer/Spark/)
[info] Updating {file:/home/raghuveer/Spark/}default-13c61e...
[info] Resolving com.codahale.metrics#metrics-graphite;3.0.0 ...
[info] Done updating.
[info] Compiling 1 Scala source to /home/raghuveer/Spark/target/scala-2.10/classes...
[info] Packaging /home/raghuveer/Spark/target/scala-2.10/simple-project_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 16 s, completed Feb 27, 2014 6:19:14 PM

Error from sbt run

ERROR executor.Executor: Exception in task ID 0 java.io.IOException: Server returned HTTP response code: 504 for URL:http://10.135.217.189:49650/jars/simple-project_2.10-1.0.jar
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1403)
at java.net.URL.openStream(URL.java:1031)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:253)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:345)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:343)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:343)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:194)
at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)

[error] (run-main) org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 1 times (most recent failure:
Exception failure: java.io.IOException: Server returned HTTP response code: 504 for URL: http://10.135.217.189:49650/jars/simple-project_2.10-1.0.jar)
org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 1 times (most recent failure:
Exception failure: java.io.IOException: Server returned HTTP response code: 504 for URL: http://10.135.217.189:49650/jars/simple-project_2.10-1.0.jar)

The trace is:

[trace] Stack trace suppressed: run last compile:run for the full output.
14/02/27 18:20:58 INFO network.ConnectionManager: Selector thread was interrupted!
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 36 s, completed Feb 27, 2014 6:20:58 PM
EDIT: I disconnected the network connection, and now the java.io.IOException: Server returned HTTP response code: 504 no longer appears; the job runs successfully and shows the output. But I cannot understand why it happens that way.
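
One plausible explanation: the executor downloads the application jar over plain HTTP from the driver's file server at the machine's network address (http://10.135.217.189:49650/... in the trace above), and 504 is a gateway-timeout status, which suggests an HTTP proxy configured for the JVM was intercepting that request. With the network disconnected, Spark falls back to the loopback address, so no proxy sits in the path and the fetch succeeds. If that is the cause, either exempting the machine's own address from proxying or pinning Spark to loopback (for example by exporting SPARK_LOCAL_IP=127.0.0.1 before sbt run) should make the job work with the network connected. A minimal sketch of the first option, assuming the proxy is set via the standard JVM properties; the address is the one reported in the stack trace:

// Hypothetical workaround: keep requests to the driver's own address off the HTTP proxy.
// http.nonProxyHosts is a standard JVM property; set it before the SparkContext is created.
System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1|10.135.217.189")

val sc = new SparkContext("local", "Simple App", "YOUR_SPARK_HOME",
  List("target/scala-2.10/simple-project_2.10-1.0.jar"))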

1 Answer:

Answer 0 (score: 1):

This post shares how to create a standalone Spark Streaming application and how to run a Spark application in the Scala IDE (Eclipse).

check this
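
The linked post is not reproduced here, but as a rough idea of the kind of standalone Spark Streaming application it describes, a minimal word-count skeleton against the 0.9-era API might look like the following (the socket endpoint and batch interval are illustrative placeholders, and the build would additionally need a spark-streaming dependency):

import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._

object StreamingWordCount {
  def main(args: Array[String]) {
    // Two local threads: one to receive from the socket, one to process batches.
    val ssc = new StreamingContext("local[2]", "StreamingWordCount", Seconds(5))
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}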