Unable to run a Spark job on a YARN cluster - Retrying connect to server

Date: 2015-07-27 14:37:14

Tags: java apache-spark yarn

I have set up my YARN cluster and my Spark cluster on the same machines, but now I need to run a Spark job on YARN in client mode.

Here is a sample of my job configuration:

SparkConf sparkConf = new SparkConf(true).setAppName("SparkQueryApp")
        .setMaster("yarn-client") // "yarn-cluster" or "yarn-client"
        .set("es.nodes", "10.0.0.207")
        .set("es.nodes.discovery", "false")
        .set("es.cluster", "wp-es-reporting-prod")
        .set("es.scroll.size", "5000")
        .setJars(JavaSparkContext.jarOfClass(Demo.class))
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.default.parallelism", String.valueOf(cpus * 2))
        .set("spark.executor.memory", "10g")
        .set("spark.num.executors", "40")
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.dynamicAllocation.minExecutors", "10")
        .set("spark.dynamicAllocation.maxExecutors", "50")
        .set("spark.logConf", "true");

This doesn't seem to work when I try to run my Spark job with java -jar spark-test-job.jar.
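
For reference, the more common way to launch a yarn-client job is through spark-submit, which puts the Hadoop configuration directory on the classpath. A rough sketch, where the HADOOP_CONF_DIR path and the main class name are assumptions:

# Sketch only: HADOOP_CONF_DIR must point at the directory containing
# yarn-site.xml and core-site.xml; the path and class name are placeholders.
export HADOOP_CONF_DIR=/etc/hadoop/conf
spark-submit \
  --master yarn-client \
  --class Demo \
  spark-test-job.jar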

I get this exception:

405472 [main] INFO  org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
406473 [main] INFO  org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
...
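
For context, 8032 is the ResourceManager's RPC port, and 0.0.0.0 is the fallback the YARN client uses when it cannot find yarn-site.xml on its classpath. One way to point the client at the right address from code is Spark's spark.hadoop.* passthrough; a minimal sketch, where <rm-host> is a placeholder for the real ResourceManager hostname:

// Sketch only: <rm-host> is a placeholder. Spark copies spark.hadoop.*
// properties into the underlying Hadoop Configuration, so these reach
// the YARN client.
sparkConf.set("spark.hadoop.yarn.resourcemanager.address", "<rm-host>:8032");
sparkConf.set("spark.hadoop.yarn.resourcemanager.scheduler.address", "<rm-host>:8030");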

Any help?

0 Answers:

No answers yet.