Apache Spark job fails to deploy on YARN

Date: 2015-07-29 18:19:40

Tags: apache-spark yarn hadoop2 apache-spark-sql

I'm trying to deploy a Spark job to my YARN cluster, but I'm hitting an exception I don't understand.

Here is the stack trace:

15/07/29 14:07:13 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
Exception in thread "Yarn application state monitor" org.apache.spark.SparkException: Error asking standalone scheduler to shut down executors
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:261)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:266)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:158)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1411)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1644)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$$anon$1.run(YarnClientSchedulerBackend.scala:139)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:257)
    ... 6 more
15/07/29 14:07:13 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down

Here is my configuration:

    SparkConf sparkConf = new SparkConf(true)
            .setAppName("SparkQueryApp")
            .setMaster("yarn-client") // "yarn-cluster" or "yarn-client"
            .set("es.nodes", "10.0.0.207")
            .set("es.nodes.discovery", "false")
            .set("es.cluster", "wp-es-reporting-prod")
            .set("es.scroll.size", "5000")
            .setJars(JavaSparkContext.jarOfClass(Demo.class))
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .set("spark.logConf", "true");

Any idea why?

0 Answers:

No answers yet.