I have the Hortonworks Sandbox with Hadoop 2.2.0, and I installed the Apache Spark technology preview on the sandbox.
While I am able to run the Spark Java examples in local mode, I cannot run them in yarn-client mode.
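(For reference, the local-mode run that works is along these lines; the command mirrors the yarn-client one below:)

bin/spark-submit --class JavaWordCount --master local[2] \
  /home/train/Desktop/sparkwc3.jar /README.md /out1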
Here are the steps I followed:
In the Eclipse IDE, I created a Java project and, under the src directory, a JavaWordCount file containing the code from the example that ships with Apache Spark.
I then exported it as a jar via Eclipse -> Export -> JAR file and kept the jar file on my local system.
Then, in a terminal, I went to the Spark home directory and issued the following command:
[train@sandbox spark-1.2.0.2.2.0.0-82-bin-2.6.0.2.2.0.0-2041]$ bin/spark-submit --class JavaWordCount --master yarn-client --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 /home/train/Desktop/sparkwc3.jar /README.md /out1
My files are on HDFS.
I get the following error:
15/02/28 11:04:02 ERROR cluster.YarnClientClusterScheduler: Lost executor 2 on sandbox.hortonworks.com: remote Akka client disassociated
15/02/28 11:04:02 INFO scheduler.TaskSetManager: Re-queueing tasks for 2 from TaskSet 0.0
15/02/28 11:04:02 WARN scheduler.TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3, sandbox.hortonworks.com): ExecutorLostFailure (executor 2 lost)
15/02/28 11:04:02 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
15/02/28 11:04:02 INFO cluster.YarnClientClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/02/28 11:04:02 ERROR cluster.YarnClientSchedulerBackend: Asked to remove non-existent executor 2
15/02/28 11:04:02 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@sandbox.hortonworks.com:34111] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/02/28 11:04:02 INFO cluster.YarnClientClusterScheduler: Cancelling stage 0
15/02/28 11:04:02 INFO scheduler.DAGScheduler: Job 0 failed: collect at JavaWordCount.java:68, took 20.451136 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, sandbox.hortonworks.com): ExecutorLostFailure (executor 2 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[train@sandbox spark-1.2.0.2.2.0.0-82-bin-2.6.0.2.2.0.0-2041]$
Answer 0 (score: 0):
The executor has probably crashed. This page on the Spark site, http://spark.apache.org/docs/latest/running-on-yarn.html, discusses how to view the various logs to hopefully find the problem. You could also try the invocation flags and properties discussed there.
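(If log aggregation is enabled on the sandbox, the container logs, including the dead executor's stderr, can usually be pulled after the run with the yarn CLI; <application_id> below is a placeholder for the id that spark-submit prints:)

# Fetch stdout/stderr for every container of the application;
# the executor's own log usually shows why it exited
# (e.g. killed by YARN for exceeding memory limits).
yarn logs -applicationId <application_id>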
For example, what happens if you omit the --driver-memory 512m --executor-memory 512m --executor-cores 1 flags?
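(That is, something along these lines, letting Spark and YARN fall back to their defaults:)

bin/spark-submit --class JavaWordCount --master yarn-client \
  /home/train/Desktop/sparkwc3.jar /README.md /out1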
Finally, does the input path exist in HDFS, and do you have read permission on it? And do you have write permission on / so that /out1 can be created? (These shouldn't cause this particular error, though...)
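(A quick way to check, assuming you run these as the same user that submits the job:)

hdfs dfs -ls /README.md   # does the input exist, and is it readable?
hdfs dfs -ls -d /         # what are the permissions on the root directory?
hdfs dfs -ls /out1        # this should NOT exist yet; Spark will not overwrite it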