Running a Spark job on YARN with a remote SparkContext: "Yarn application has already ended"

Date: 2016-03-09 09:18:48

Tags: scala apache-spark yarn

I'm trying to launch a program that creates a SparkContext on YARN. This is my simple program:

import org.apache.spark.{SparkConf, SparkContext}

object Entry extends App {
  System.setProperty("SPARK_YARN_MODE", "true")

  val sparkConfig = new SparkConf()
    .setAppName("test-connection")
    .setMaster("yarn-client")

  val sparkContext = new SparkContext(sparkConfig)

  val numbersRDD = sparkContext.parallelize(List(1, 2, 3, 4, 5))

  println {
    s"result is ${numbersRDD.reduce(_ + _)}"
  }
}

build.sbt

scalaVersion := "2.10.5"

libraryDependencies ++= {
  val sparkV      = "1.6.0"

  Seq(
    "org.apache.spark" %% "spark-core" % sparkV,
    "org.apache.spark" %% "spark-yarn" % sparkV
  )
}

I'm using Google Cloud Dataproc and running this program via sbt run inside the master node.

These are the logs:

16/03/09 08:38:31 INFO YarnClientImpl: Submitted application application_1457497836188_0013 to ResourceManager at /0.0.0.0:8032
16/03/09 08:38:32 INFO Client: Application report for application_1457497836188_0013 (state: ACCEPTED)
16/03/09 08:38:32 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1457512711191
     final status: UNDEFINED
     tracking URL: http://recommendation-cluster-m:8088/proxy/application_1457497836188_0013/
     user: ibosz
16/03/09 08:38:33 INFO Client: Application report for application_1457497836188_0013 (state: ACCEPTED)
16/03/09 08:38:34 INFO Client: Application report for application_1457497836188_0013 (state: ACCEPTED)
16/03/09 08:38:35 INFO Client: Application report for application_1457497836188_0013 (state: FAILED)
16/03/09 08:38:35 INFO Client: 
     client token: N/A
     diagnostics: Application application_1457497836188_0013 failed 2 times due to AM Container for appattempt_1457497836188_0013_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://recommendation-cluster-m:8088/cluster/app/application_1457497836188_0013Then, click on links to logs of each attempt.
Diagnostics: java.io.FileNotFoundException: File file:/home/ibosz/.ivy2/cache/org.apache.spark/spark-yarn_2.10/jars/spark-yarn_2.10-1.6.0.jar does not exist
Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1457512711191
     final status: FAILED
     tracking URL: http://recommendation-cluster-m:8088/cluster/app/application_1457497836188_0013
     user: ibosz
16/03/09 08:38:35 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
    at Entry$delayedInit$body.apply(Entry.scala:13)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
    at scala.App$class.main(App.scala:71)
    at Entry$.main(Entry.scala:6)
    at Entry.main(Entry.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sbt.Run.invokeMain(Run.scala:67)
    at sbt.Run.run0(Run.scala:61)
    at sbt.Run.sbt$Run$$execute$1(Run.scala:51)
    at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55)
    at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
    at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
    at sbt.Logger$$anon$4.apply(Logger.scala:85)
    at sbt.TrapExit$App.run(TrapExit.scala:248)
    at java.lang.Thread.run(Thread.java:745)

It says

java.io.FileNotFoundException: File file:/home/ibosz/.ivy2/cache/org.apache.spark/spark-yarn_2.10/jars/spark-yarn_2.10-1.6.0.jar does not exist

but the file does exist. Running spark-shell --master yarn-client works fine. What's wrong with my code?

1 Answer:

Answer 0 (score: 1)

While there may be a way to coax sbt run into performing a proper yarn-client mode Spark submission, you probably just want to do this instead:

sbt package
spark-submit target/scala-2.10/*SNAPSHOT.jar
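
As a side note (not part of the original answer), spark-submit usually needs to be told the main class, and the Spark dependencies don't need to be bundled into the packaged jar because the cluster's own Spark installation supplies them at runtime. A minimal sketch of how the build might look, assuming the Entry object from the question and that the hard-coded setMaster call is removed so the --master flag on the command line takes effect; the "provided" scope and --class flag are typical choices, not something the answer prescribes:

// build.sbt sketch: mark the Spark artifacts as "provided" so sbt package produces a thin jar;
// the cluster's Spark installation supplies these classes at runtime.
scalaVersion := "2.10.5"

libraryDependencies ++= {
  val sparkV = "1.6.0"
  Seq(
    "org.apache.spark" %% "spark-core" % sparkV % "provided",
    "org.apache.spark" %% "spark-yarn" % sparkV % "provided"
  )
}

and the submission, naming the main class explicitly:

# package the thin jar and submit it in yarn-client mode
sbt package
spark-submit --master yarn-client --class Entry target/scala-2.10/*SNAPSHOT.jar

The "provided" scope keeps spark-core and spark-yarn on the compile classpath for sbt but out of the packaged jar, which matches what the cluster-supplied Spark expects.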

Essentially, the error you're hitting is that when the SparkContext is created, it asks YARN for a remote container to host the AppMaster process, which will live on one of your worker nodes. It passes along various aspects of the master node's local environment, including the sbt-specific copies of the Spark artifacts used in your build (under the ~/.ivy2/cache/ directory). The workers' environments won't match the environment in which you ran sbt run, which is why it fails.

Note that the spark-submit command itself is just a bash script whose purpose is to run a jar file with all the right environment variables and classpath configuration, so anything that makes sbt run work would essentially be replicating the logic of the spark-submit script, and probably in a non-portable way.

The upside of all this is that using spark-submit foo.jar keeps your invocation clean and portable. For example, once you want to productionize your job, you can use Dataproc's job-submission API on that same jar file just as you would use spark-submit: gcloud dataproc jobs submit spark --jar foo.jar <your_job_args>. You can even submit jobs through Dataproc's web GUI by first uploading the jar file to GCS and then specifying a gs:// path for the jar.
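
For illustration, the GCS-based flow might look roughly like this; the bucket name, cluster name, and foo.jar are placeholders, not values from the question:

# hypothetical: stage the jar in GCS, then submit it through the Dataproc jobs API
gsutil cp target/scala-2.10/*SNAPSHOT.jar gs://my-bucket/jars/foo.jar
gcloud dataproc jobs submit spark \
    --cluster recommendation-cluster \
    --jar gs://my-bucket/jars/foo.jar \
    -- <your_job_args>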

Likewise, the same jar and workflow carry over if you set up a local Spark simply by untarring a standard Spark binary distribution: you can keep building with sbt and submitting with spark-submit on that local setup in exactly the same way.
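
A sketch of that local variant, assuming a stock 1.6.0 binary distribution and, again, that the master is passed on the command line rather than hard-coded in the program:

# hypothetical local run: untar a standard Spark distribution and reuse the same jar
tar xzf spark-1.6.0-bin-hadoop2.6.tgz
./spark-1.6.0-bin-hadoop2.6/bin/spark-submit --master "local[*]" --class Entry target/scala-2.10/*SNAPSHOT.jar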