Spark and Hadoop errors

Asked: 2018-04-06 12:40:18

Tags: apache-spark hadoop

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hp/nm-local-dir/usercache/hp/filecache/28/__spark_libs__5301477595013800425.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hp/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/06 21:28:08 WARN SparkConf: spark.master yarn-cluster is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
java.io.FileNotFoundException: /home/hp/data/gTree.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.loadGenTree(kAnonymity_spark.java:50)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:391)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
java.io.FileNotFoundException: /home/hp/data/t1_resizingBy_10000.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.loadData(kAnonymity_spark.java:149)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:392)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
18/04/06 21:28:39 ERROR ApplicationMaster: User class threw exception: java.lang.NullPointerException
java.lang.NullPointerException
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.performAnonymity(kAnonymity_spark.java:365)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:394)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)

/home/hp/data/gTree.txt exists and has been added to the Hadoop file system, but I still get the errors above.

Java code in the jar file:

FileInputStream stream = new FileInputStream("/home/hp/data/gTree.txt");
InputStreamReader reader = new InputStreamReader(stream);
BufferedReader buffer = new BufferedReader(reader);

This is the section where the error occurs.
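The snippet above hard-codes a local path inside the parsing code. Since I don't have the full `loadGenTree` source, the following is only a sketch of one common refactor: accept an `InputStream` instead of a path, so the same parsing logic works whether the bytes come from the local disk or from an HDFS client. The class name and the temp-file caller in `main` are illustrative, not from the original code.

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GenTreeLoader {
    // Parse lines from any InputStream; the caller decides whether
    // the stream comes from the local file system or from HDFS.
    public static List<String> readLines(InputStream stream) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader buffer = new BufferedReader(new InputStreamReader(stream))) {
            String line;
            while ((line = buffer.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Local-file caller for illustration; on the cluster an HDFS
        // client (or SparkContext.textFile) would supply the data instead.
        Path tmp = Files.createTempFile("gTree", ".txt");
        Files.write(tmp, Arrays.asList("root", "child"));
        System.out.println(readLines(new FileInputStream(tmp.toFile())));
        Files.delete(tmp);
    }
}
```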

When running on a Hadoop YARN cluster, don't I need to set the file's path differently?

hp@master:~$ ls -al /home/hp/data/gTree.txt
-rw-rw-r-- 1 hp hp 419 11월 16 16:17 /home/hp/data/gTree.txt

hp@master:~$ hadoop fs -ls /home/hp/data/gTree.txt
-rw-r--r--   3 hp supergroup        419 2018-04-06 21:06 /home/hp/data/gTree.txt

1 Answer:

Answer 0 (score: 0)

The problem occurs because you are opening the file with a FileInputStream on the path /home/hp/data/gTree.txt, which reads from the local file system rather than from HDFS.

Since the Spark application code runs on the data nodes, each node tries to read this file from its own local file system, and hits the exception when the file isn't there.

Depending on your use case, you may have to refer to the file as hdfs://&lt;NN:port&gt;/&lt;File Name&gt;. Most likely you want SparkContext.textFile(). See this example.
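To see why the same path string behaves differently, it helps to look at it as a URI. A bare path like /home/hp/data/gTree.txt has no scheme, so a FileInputStream can only resolve it against the local file system of whichever node runs the code, while a fully qualified hdfs:// URI names the NameNode explicitly. A small pure-JDK illustration (the host and port below are hypothetical placeholders, not from the question):

```java
import java.net.URI;

public class PathSchemeDemo {
    public static void main(String[] args) {
        // A bare path has no scheme, so it is interpreted relative to
        // the local file system of the node executing the code.
        URI local = URI.create("/home/hp/data/gTree.txt");
        System.out.println("local scheme: " + local.getScheme());

        // A fully qualified HDFS URI names the NameNode explicitly
        // (host "master" and port 9000 are assumed placeholders).
        URI hdfs = URI.create("hdfs://master:9000/home/hp/data/gTree.txt");
        System.out.println("hdfs scheme: " + hdfs.getScheme());
        System.out.println("hdfs path: " + hdfs.getPath());
    }
}
```

Prints `local scheme: null` for the bare path and `hdfs scheme: hdfs` for the qualified one, which is why Hadoop-aware APIs such as SparkContext.textFile() can route the qualified form to HDFS while FileInputStream cannot.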