I installed Hadoop 2.6 on Ubuntu Linux 15.04 and it runs fine. However, when I run a sample MapReduce test program, it fails with the following error:
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:54310/user/hduser/input.
Please help. The full details of the error are below.
hduser@krishadoop:/usr/local/hadoop/sbin$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output
Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar
15/08/24 15:22:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/24 15:22:38 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/08/24 15:22:38 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/08/24 15:22:39 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/hduser1122930879/.staging/job_local1122930879_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:54310/user/hduser/input
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Answer 0 (score: 4)
If you can physically see this path (file) and still get the error, you are probably confusing the local filesystem with the Hadoop Distributed File System (HDFS). To run this MapReduce job, the file must be in HDFS; placing it only on the local filesystem will not work.
You can copy a file from the local filesystem into HDFS with this command:
hadoop fs -put <local_file_path> <HDFS_directory>
Confirm the copied file exists in HDFS with:
hadoop fs -ls <HDFS_path>
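Applied to the exact path from the error message above, the sequence might look like the following. This is a sketch, not a verified fix: it assumes HDFS is running, `hadoop` is on the PATH, and uses `LICENSE.txt` from the question's /usr/local/hadoop install purely as an example input file.

```shell
# Create the input directory in HDFS (the path the error message expects);
# -p also creates /user/hduser if it does not exist yet
hadoop fs -mkdir -p /user/hduser/input

# Copy a local text file into the HDFS input directory
hadoop fs -put /usr/local/hadoop/LICENSE.txt /user/hduser/input/

# Verify the file is visible in HDFS before re-running the wordcount job
hadoop fs -ls /user/hduser/input
```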
Answer 1 (score: 1)
You must create and upload your input before executing the Hadoop job. For example, if you need to upload a file input.txt, you should do the following:
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hduser/input
$HADOOP_HOME/bin/hdfs dfs -copyFromLocal $HADOOP_HOME/input.txt /user/hduser/input/input.txt
The first line creates the directory and the second uploads the input file into HDFS (the Hadoop filesystem).
Answer 2 (score: 1)
You need to start Pig in local mode rather than against the cluster:
pig -x local
Answer 3 (score: 0)
The program cannot find the HDFS input path; it is searching the local filesystem instead of Hadoop's DFS. The problem goes away once the program can locate the HDFS location, which means making it pick up the HDFS settings from Hadoop's configuration files: add the cluster configuration resources to the program's Configuration object.
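The code block for this answer was lost in the original post. A minimal sketch of what such a snippet typically looks like is below; the configuration file paths are assumptions based on the question's /usr/local/hadoop installation, and the class name is purely illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class HdfsConfigExample {
    public static Configuration loadClusterConfig() {
        // Load the cluster configuration explicitly so input/output paths
        // resolve against HDFS (fs.defaultFS) rather than the local file:/// scheme.
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));
        return conf;
    }
}
```

Passing this Configuration to the Job makes relative input paths like `input` resolve under the HDFS home directory instead of the local working directory.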
Answer 4 (score: 0)
hadoop jar jarFileName.jar className /input_dir /outputdir
Correct: the paths above are absolute HDFS paths.
hadoop jar jarFileName.jar className input_dir outputdir
Wrong: these are relative paths, which HDFS resolves against the user's home directory (/user/<username>).
Answer 5 (score: -2)
If you find /bin/bash: /bin/java: No such file or directory in the logs, set JAVA_HOME in etc/hadoop/hadoop-env.sh.
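For example, hadoop-env.sh might be edited as below. The JDK path shown is an assumption for an Ubuntu OpenJDK 7 install; use the path from your own system (e.g. the directory reported by `readlink -f $(which java)`, minus the trailing /bin/java).

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, replace the ${JAVA_HOME}
# placeholder with an explicit JDK path so non-interactive shells
# spawned by Hadoop can locate java:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
```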