Submitting Spark2 with master yarn produces the error "A master URL must be set"

Date: 2018-03-14 23:05:06

Tags: apache-spark cloudera-cdh apache-kudu

I am getting the exception org.apache.spark.SparkException: A master URL must be set in your configuration

I am submitting with spark2-submit using the options --deploy-mode cluster and --master yarn. As I understand it, I should not be getting this exception with yarn as the master.

Submit script

export JAVA_HOME=/usr/java/jdk1.8.0_131/
spark2-submit --class com.example.myapp.ClusterEntry \
    --name "Hello World" \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 1g \
    --executor-memory 1g \
    --executor-cores 3 \
    --packages org.apache.kudu:kudu-spark2_2.11:1.4.0 \
    myapp.jar myconf.file

Exception

18/03/14 15:31:47 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, vm6.adcluster, executor 1): org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at com.example.myapp.dao.KuduSink.open(KuduSink.scala:18)
    at org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1.apply(ForeachSink.scala:50)
    at org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1.apply(ForeachSink.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

The cluster is a Cloudera cluster running Spark 2.2. I notice that the application's KuduSink appears in the exception's stack trace; perhaps the master URL error is coming from the KuduContext? However, I do not get this error when running the application locally for development.
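That theory is consistent with the trace: ForeachSink invokes KuduSink.open on the executors, and a SparkSession built there starts from an empty SparkConf, so in cluster mode no master URL is set. When running locally, the driver and executors share one JVM, so getOrCreate() finds the existing session and nothing fails. Below is a minimal sketch of the pattern the trace points at; the class layout and names are assumptions reconstructed from the stack trace, not the actual source.

import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.{ForeachWriter, Row, SparkSession}

// Hypothetical reconstruction; the real KuduSink.scala may differ.
class KuduSink(kuduMaster: String, table: String) extends ForeachWriter[Row] {

  override def open(partitionId: Long, version: Long): Boolean = {
    // open() runs on an executor, where no SparkSession exists, so
    // getOrCreate() tries to build a brand-new SparkContext from an
    // empty SparkConf and throws "A master URL must be set in your
    // configuration" -- matching KuduSink.scala:18 in the trace.
    val spark = SparkSession.builder().getOrCreate()
    val kuduContext = new KuduContext(kuduMaster, spark.sparkContext)
    true
  }

  override def process(row: Row): Unit = {
    // write the row to the Kudu table
  }

  override def close(errorOrNull: Throwable): Unit = ()
}

If that is what is happening, the SparkSession (or KuduContext) would need to be created once on the driver rather than inside open(), since an executor cannot host its own SparkContext.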

1 Answer:

Answer 0 (score: 0)

You are right: Spark on YARN does not require a master URL.
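Assuming nothing in the code pins a master, the driver-side entry point should leave it to spark2-submit, roughly like the sketch below (a minimal illustration; the real ClusterEntry surely does more):

import org.apache.spark.sql.SparkSession

object ClusterEntry {
  def main(args: Array[String]): Unit = {
    // No .master(...) here: spark2-submit --master yarn supplies it.
    // A leftover .master("local[*]") from development would override
    // the submit-time flag and break cluster runs.
    val spark = SparkSession.builder()
      .appName("Hello World")
      .getOrCreate()
    // ... rest of the job ...
  }
}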

Make sure SPARK_HOME, YARN_HOME, and HADOOP_HOME are configured correctly.

It sounds like you have two different versions of Spark in the same cluster: CDH parcels ship with Spark 1.6 by default, and I assume you installed Spark 2 via its custom service descriptor and configured the service correctly.

Make sure the configurations for spark-submit (Spark 1) and spark2-submit (Spark 2) do not overlap.

Make sure the client configuration has been deployed for the spark2 service.