Spark job container exited with exitCode: -1000

Date: 2017-10-05 05:48:02

Tags: apache-spark apache-spark-sql

I have been trying to run a sample job with Spark 2.0.0 in yarn cluster mode, and the job exits with exitCode: -1000 without any other clues. The same job runs fine in local mode.

Spark command:

spark-submit \
--conf "spark.yarn.stagingDir=/xyz/warehouse/spark" \
--queue xyz \
--class com.xyz.TestJob \
--master yarn \
--deploy-mode cluster \
--conf "spark.local.dir=/xyz/warehouse/tmp" \
/xyzpath/java-test-1.0-SNAPSHOT.jar $@

TestJob class:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class TestJob {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf();
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // Trivial action to verify the job actually runs on the cluster
        System.out.println(
                "Total count: " +
                        jsc.parallelize(Arrays.asList(1, 2, 3, 4)).count());
        jsc.stop();
    }
}
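
For reference, the same jar runs fine when submitted in local mode; a minimal local-mode submission (a sketch, assuming the same jar and main class as above) looks like:

spark-submit \
  --class com.xyz.TestJob \
  --master "local[2]" \
  /xyzpath/java-test-1.0-SNAPSHOT.jar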

Error log:

17/10/04 22:26:52 INFO Client: Application report for application_1506717704791_130756 (state: ACCEPTED)
17/10/04 22:26:52 INFO Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.xyz
         start time: 1507181210893
         final status: UNDEFINED
         tracking URL: http://xyzserver:8088/proxy/application_1506717704791_130756/
         user: xyz
17/10/04 22:26:53 INFO Client: Application report for application_1506717704791_130756 (state: ACCEPTED)
17/10/04 22:26:54 INFO Client: Application report for application_1506717704791_130756 (state: ACCEPTED)
17/10/04 22:26:55 INFO Client: Application report for application_1506717704791_130756 (state: ACCEPTED)
17/10/04 22:26:56 INFO Client: Application report for application_1506717704791_130756 (state: FAILED)
17/10/04 22:26:56 INFO Client:
         client token: N/A
         diagnostics: Application application_1506717704791_130756 failed 5 times due to AM Container for appattempt_1506717704791_130756_000005 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://xyzserver:8088/cluster/app/application_1506717704791_130756Then, click on links to logs of each attempt.
Diagnostics: Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.xyz
         start time: 1507181210893
         final status: FAILED
         tracking URL: http://xyzserver:8088/cluster/app/application_1506717704791_130756
         user: xyz
17/10/04 22:26:56 INFO Client: Deleted staging directory /xyz/spark/.sparkStaging/application_1506717704791_130756
Exception in thread "main" org.apache.spark.SparkException: Application application_1506717704791_130756 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1167)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213)

When I browse to the tracking page http://xyzserver:8088/cluster/app/application_1506717704791_130756, it does not exist.

The YARN application logs cannot be found either:

$yarn logs -applicationId application_1506717704791_130756 
/apps/yarn/logs/xyz/logs/application_1506717704791_130756 does not have any log files.

What could be the root cause of this error, and how can I get detailed error logs?

1 answer:

Answer 0 (score: 1):

After spending almost an entire day on this, I found the root cause: when I removed spark.yarn.stagingDir, the job started working. I am still not sure why Spark complains about it (see the note after the commands below).

Previous spark-submit:

spark-submit \
--conf "spark.yarn.stagingDir=/xyz/warehouse/spark" \
--queue xyz \
--class com.xyz.TestJob \
--master yarn \
--deploy-mode cluster \
--conf "spark.local.dir=/xyz/warehouse/tmp" \
/xyzpath/java-test-1.0-SNAPSHOT.jar $@

New:

spark-submit \
--queue xyz \
--class com.xyz.TestJob \
--master yarn \
--deploy-mode cluster \
--conf "spark.local.dir=/xyz/warehouse/tmp" \
/xyzpath/java-test-1.0-SNAPSHOT.jar $@
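
A note on the failure mode (my assumption; nothing in the logs above confirms it): exit code -1000 appears to correspond to YARN's ContainerExitStatus.INVALID, which is typically reported when the container fails during resource localization, i.e. before the application master's JVM ever starts. That would also explain why yarn logs finds no log files. If so, the custom staging directory was probably not readable or writable by the submitting user. A minimal sanity check, assuming the staging directory lives on HDFS and the job is submitted as user xyz:

hdfs dfs -ls /xyz/warehouse/spark             # does the directory exist, and who owns it?
hdfs dfs -touchz /xyz/warehouse/spark/.probe  # can the submitting user write to it? (.probe is a made-up file name)
hdfs dfs -rm /xyz/warehouse/spark/.probe      # clean up the probe file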