spark-submit with yarn-client fails

Date: 2015-05-12 09:30:41

Tags: apache-spark client yarn

I'm running a Spark program in yarn-client mode; Spark is set up on a YARN cluster. The submit script is:

./bin/spark-submit --class WordCountTest \
--master yarn-client \
--num-executors 1 \
--executor-cores 1 \
--queue root.hadoop \
/root/Desktop/test2.jar \
10

When I run it, I get the following exception:

15/05/12 17:42:01 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/12 17:42:01 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 17:42:02 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:02 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/12 17:42:02 INFO Remoting: Starting remoting
15/05/12 17:42:03 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@master:49338]
15/05/12 17:42:03 INFO util.Utils: Successfully started service 'sparkDriver' on port 49338.
15/05/12 17:42:03 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/12 17:42:03 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/12 17:42:03 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-57f5fb29-784d-4730-92b8-c2e8be97c038/blockmgr-752988bc-b2d0-42f7-891d-5d3edbb4526d
15/05/12 17:42:03 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/12 17:42:04 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-2f2a46eb-9259-4c6e-b9af-7159efb0b3e9/httpd-3c50fe1e-430e-4077-9cd0-58246e182d98
15/05/12 17:42:04 INFO spark.HttpServer: Starting HTTP Server
15/05/12 17:42:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:04 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41749
15/05/12 17:42:04 INFO util.Utils: Successfully started service 'HTTP file server' on port 41749.
15/05/12 17:42:04 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/12 17:42:05 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:05 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/12 17:42:05 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/12 17:42:05 INFO ui.SparkUI: Started SparkUI at http://master:4040
15/05/12 17:42:05 INFO spark.SparkContext: Added JAR file:/root/Desktop/test2.jar at http://192.168.147.201:41749/jars/test2.jar with timestamp 1431423725289
15/05/12 17:42:05 WARN cluster.YarnClientSchedulerBackend: NOTE: SPARK_WORKER_MEMORY is deprecated. Use SPARK_EXECUTOR_MEMORY or --executor-memory through spark-submit instead.
15/05/12 17:42:06 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.147.201:8032
15/05/12 17:42:06 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
15/05/12 17:42:06 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/12 17:42:06 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/12 17:42:06 INFO yarn.Client: Setting up container launch context for our AM
15/05/12 17:42:06 INFO yarn.Client: Preparing resources for our AM container
15/05/12 17:42:07 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:07 INFO yarn.Client: Uploading resource file:/usr/local/spark/spark-1.3.1-bin-hadoop2.5.0-cdh5.3.2/lib/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1431423592173_0003/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar
15/05/12 17:42:11 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/12 17:42:11 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:11 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:11 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/12 17:42:11 INFO impl.YarnClientImpl: Submitted application application_1431423592173_0003
15/05/12 17:42:12 INFO yarn.Client: Application report for application_1431423592173_0003 (state: FAILED)
15/05/12 17:42:12 INFO yarn.Client:
     client token: N/A
     diagnostics: Application application_1431423592173_0003 submitted by user root to unknown queue: root.hadoop
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.hadoop
     start time: 1431423731271
     final status: FAILED
     tracking URL: N/A
     user: root
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
    at WordCountTest$.main(WordCountTest.scala:14)
    at WordCountTest.main(WordCountTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

My code is very simple, as shown below:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object WordCountTest {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    val sparkConf = new SparkConf().setAppName("WordCountTest Prog")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc) // unused below, but harmless

    val file = sc.textFile("/data/test/pom.xml")
    val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    println(counts) // prints the RDD object, not its contents
    counts.saveAsTextFile("/data/test/pom_count.txt")
  }
}
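(Side note: println(counts) only prints the RDD's toString, not the word counts themselves. To actually see the counts on the driver, something like the following one-line sketch would work, bearing in mind that collect() pulls the entire result into driver memory:

counts.collect().foreach(println)
)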

I've been debugging this problem for two days. Help! Thanks.

2 Answers:

Answer 0 (score: 1)

Try changing the queue name to hadoop.
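The diagnostics line "submitted by user root to unknown queue: root.hadoop" means the ResourceManager has no queue registered under that name. Note that queue naming depends on which scheduler is configured: the Fair Scheduler accepts the full hierarchical name (root.hadoop), while the Capacity Scheduler expects just the leaf name (hadoop). As a quick sanity check, assuming a Hadoop 2.x client is on the PATH, the following should list the queues the ResourceManager actually knows about:

mapred queue -list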

Answer 1 (score: 0)

In my case, changing "--queue thequeue" to "--queue default" made it work.

When running:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --queue thequeue lib/spark-examples*.jar 10

I got the same error; simply changing "--queue thequeue" to "--queue default" fixed it.
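For reference, the working command would then look like:

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode cluster \
  --driver-memory 4g --executor-memory 2g \
  --executor-cores 1 --queue default \
  lib/spark-examples*.jar 10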