Running Spark in cluster mode - uploaded jars not found

Asked: 2018-07-03 07:40:32

Tags: scala apache-spark

I am trying to run a Spark application written in Scala in cluster mode. Client mode works fine.

During execution, Spark uploads the jars to HDFS, but the application then fails with a FileNotFoundException when it tries to load those jars.

Here is the log of the upload step:

18/07/03 09:13:36 INFO Client: Uploading resource file:/application/lib/Ingestion-assembly-1.0.0.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/Ingestion-assembly-1.0.0.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/application/lib/exposition_2.10-1.0.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/exposition_2.10-1.0.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/application/lib/ingestion_2.10-1.0.0.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/ingestion_2.10-1.0.0.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/application/lib/preparation_2.10-1.0.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/preparation_2.10-1.0.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/datanucleus-api-jdo-3.2.6.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/datanucleus-rdbms-3.2.9.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/datanucleus-core-3.2.10.jar
18/07/03 09:13:36 INFO Client: Uploading resource file:/target/generated/ref_cli_20180625_150202/log4j-executor.properties -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/log4j-executor.properties
18/07/03 09:13:36 INFO Client: Uploading resource file:/target/generated/ref_cli_20180625_150202/environment.properties -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/environment.properties
18/07/03 09:13:36 INFO Client: Uploading resource file:/usr/hdp/current/spark-client/conf/hive-site.xml -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/hive-site.xml
18/07/03 09:13:36 INFO Client: Uploading resource file:/target/log/ref_cli_20180625_150202_4257/spark-399dc349-ac5b-424f-b929-ebeed5e36e36/__spark_conf__2546852781270769955.zip -> hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/__spark_conf__2546852781270769955.zip

Here is the log from when the error occurs:

18/07/03 09:13:53 INFO Client: Application report for application_1528888731166_2379 (state: ACCEPTED)
18/07/03 09:13:54 INFO Client: Application report for application_1528888731166_2379 (state: FAILED)
18/07/03 09:13:54 INFO Client:
         client token: N/A
         diagnostics: Application application_1528888731166_2379 failed 2 times due to AM Container for appattempt_1528888731166_2379_000002 exited with  exitCode: -1000
For more detailed output, check the application tracking page: http://cluster:8088/cluster/app/application_1528888731166_2379 Then click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/preparation_2.10-1.0.jar
java.io.FileNotFoundException: File does not exist: hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/preparation_2.10-1.0.jar
        at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1446)
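Exit code -1000 is YARN failing to localize a resource, i.e. a NodeManager could not fetch one of the staged files. A minimal diagnostic sketch, assuming shell access to a cluster edge node (the application id is taken from the logs above; note the .sparkStaging directory may already have been cleaned up once the application failed):

# Check whether the staged jar that YARN complains about is (still) present:
hdfs dfs -ls hdfs://server/user/elrudaille/.sparkStaging/application_1528888731166_2379/

# Pull the aggregated container logs for the failed attempts:
yarn logs -applicationId application_1528888731166_2379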

Here is the spark-submit call:

spark-submit \
  --master yarn \
  --files /target/generated/ref_cli_20180625_150202/log4j-executor.properties,/target/generated/ref_cli_20180625_150202/environment.properties,/usr/hdp/current/spark-client/conf/hive-site.xml \
  --deploy-mode cluster \
  --driver-memory 2g \
  --num-executors 1 \
  --executor-memory 3g \
  --executor-cores 1 \
  --conf spark.yarn.am.cores=1 \
  --conf spark.yarn.am.memory=1g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  --conf spark.yarn.am.memoryOverhead=512 \
  --conf spark.logConf=true \
  --conf spark.ui.enabled=false \
  --conf spark.sql.hive.metastore.version=1.2.1 \
  --conf spark.yarn.dist.files=/usr/hdp/current/spark-client/conf/hive-site.xml,/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
  --conf hive.metastore.schema.verification=true \
  --conf spark.driver.extraJavaOptions="-Duser.timezone=UTC -Dlog4j.configuration=file:log4j-executor.properties" \
  --conf spark.executor.extraJavaOptions="-Duser.timezone=UTC -Dlog4j.configuration=file:log4j-executor.properties" \
  --class ibp.adl.spark.Starter \
  --jars /application/lib/Ingestion-assembly-1.0.0.jar,/application/lib/exposition_2.10-1.0.jar,/application/lib/ingestion_2.10-1.0.0.jar,/application/lib/preparation_2.10-1.0.jar,/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
  /appli/osg/spark-runner-assembly-0.1-SNAPSHOT.jar \
  /target/generated/ref_cli_20180625_150202/environment.properties

The Scala code that creates the SparkContext is as follows:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val conf = new SparkConf().setAppName("OSG Hive on Spark")
val spark = new SparkContext(conf)
// Pin the FileSystem implementations on this context's Hadoop configuration
spark.hadoopConfiguration.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
spark.hadoopConfiguration.set("fs.file.impl", "org.apache.hadoop.fs.LocalFileSystem")
val hiveContext = new HiveContext(spark)
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hiveContext.setConf("hive.groupby.orderby.position.alias", "true")

Thanks for your help.

Kind regards, Rudy

0 Answers:

No answers yet