I am currently trying to run some Scala code with Apache Spark in yarn(-client) mode against a Cloudera cluster, but sbt run aborts with the following Java exception:
[error] (run-main-0) org.apache.spark.SparkException: YARN mode not available ?
org.apache.spark.SparkException: YARN mode not available ?
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1267)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:199)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:100)
at SimpleApp$.main(SimpleApp.scala:7)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.scheduler.cluster.YarnClientClusterScheduler
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1261)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:199)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:100)
at SimpleApp$.main(SimpleApp.scala:7)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
15/11/24 17:18:03 INFO network.ConnectionManager: Selector thread was interrupted!
[error] Total time: 38 s, completed 24-nov-2015 17:18:04
I believe the pre-built Apache Spark distribution is built with YARN support, because if I launch the application with spark-submit in yarn-client mode there is no Java exception anymore. However, YARN does not seem to allocate any resources: I get the same message every second: INFO Client: Application report for application_1448366262851_0022 (state: ACCEPTED). I suspect a configuration problem.
I googled that last message, but I cannot work out which YARN configuration I need to modify (or where) in order to run my program with Spark on YARN.
Context:
Scala test program:
UPDATE
Well, the sbt job was failing because hadoop-client.jar and spark-yarn.jar were not on the classpath when sbt packaged and ran the application.
Now sbt run asks for two environment variables, SPARK_YARN_APP_JAR and SPARK_JAR, and my build.sbt is configured as follows:
name := "File Searcher"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.1"
libraryDependencies += "org.apache.spark" %% "spark-yarn" % "0.9.1" % "runtime"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0" % "runtime"
libraryDependencies += "org.apache.hadoop" % "hadoop-yarn-client" % "2.6.0" % "runtime"
resolvers += "Maven Central" at "https://repo1.maven.org/maven2"
Is there a way to configure these variables "automatically"? I mean, I can set SPARK_JAR, since that jar ships with the Spark installation, but what about SPARK_YARN_APP_JAR? When I set the variables manually, I also noticed that the Spark engine ignores my custom configuration, even though I set the YARN_CONF_DIR variable. Is there a way to tell sbt to use my local Spark configuration?
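In the meantime I am experimenting with letting sbt export those variables itself on a forked run. This is only a sketch, assuming sbt 0.13's fork and envVars settings; the SPARK_JAR and SPARK_YARN_APP_JAR values reuse the paths from my code below, and the YARN_CONF_DIR path is just a placeholder for wherever the cluster's yarn-site.xml actually lives on my machine:

fork in run := true

envVars := Map(
  "SPARK_JAR"          -> "C:/spark/lib/spark-assembly-1.3.0-hadoop2.4.0.jar", // assembly shipped with the Spark install
  "SPARK_YARN_APP_JAR" -> "target/scala-2.10/file-searcher_2.10-1.0.jar",      // jar produced by sbt package
  "YARN_CONF_DIR"      -> "C:/hadoop/etc/hadoop"                               // placeholder: directory holding yarn-site.xml
)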
In case it helps, here is the current (ugly) code I am executing:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "src/data/sample.txt"
    // Constructor arguments: master, application name, Spark home/assembly location,
    // and the list of jars to ship to the executors.
    val sc = new SparkContext("yarn-client", "Simple App", "C:/spark/lib/spark-assembly-1.3.0-hadoop2.4.0.jar",
      List("target/scala-2.10/file-searcher_2.10-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    val numTHEs = logData.filter(line => line.contains("the")).count()
    println("Lines with the: %s".format(numTHEs))
  }
}
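For comparison, here is a sketch of the same program written against the SparkConf API instead of the positional constructor, assuming a Spark 1.x assembly (as the jar name suggests). If I understand the docs correctly, the spark.yarn.jar property is the configuration counterpart of the SPARK_JAR environment variable in that version; the paths simply reuse the ones above:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    // Describe master, app name, application jars, and the Spark assembly via a SparkConf.
    val conf = new SparkConf()
      .setMaster("yarn-client")
      .setAppName("Simple App")
      .setJars(Seq("target/scala-2.10/file-searcher_2.10-1.0.jar"))               // application jar shipped to executors
      .set("spark.yarn.jar", "C:/spark/lib/spark-assembly-1.3.0-hadoop2.4.0.jar") // Spark assembly used by the YARN containers
    val sc = new SparkContext(conf)
    val logData = sc.textFile("src/data/sample.txt", 2).cache()
    println("Lines with the: %s".format(logData.filter(_.contains("the")).count()))
    sc.stop()
  }
}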
Thanks,
Cheloute
Answer 0 (score: 0)
Well, I finally found my problem.
That's it. Everything else should work.