I am running the Hadoop wordcount example in a single-node setup on Ubuntu 12.04 inside VMware. I run the example like this:
hadoop@master:~/hadoop$ hadoop jar hadoop-examples-1.0.4.jar wordcount
/home/hadoop/gutenberg/ /home/hadoop/gutenberg-output
My input files are in:
/home/hadoop/gutenberg
and the output location is:
/home/hadoop/gutenberg-output
When I run the wordcount program, I get the following error:
13/04/18 06:02:10 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:54310/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201304180554_0001
13/04/18 06:02:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

hadoop@master:~/hadoop$ bin/stop-all.sh
Warning: $HADOOP_HOME is deprecated.
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
hadoop@master:~/hadoop$
Answer 0 (score: 9)
Delete the output directory that already exists, or output to a different one.
(I am somewhat curious what other interpretations of that error message you considered.)
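For example, either of these would work against the paths from the question (a sketch using Hadoop 1.x shell syntax, matching the hadoop-examples-1.0.4.jar used above; the `-output-2` directory name is an arbitrary fresh name, and the commands assume a running cluster):

```shell
# Option 1: remove the existing output directory, then re-run the job
hadoop fs -rmr /home/hadoop/gutenberg-output
hadoop jar hadoop-examples-1.0.4.jar wordcount /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output

# Option 2: keep the old results and write to a fresh directory instead
hadoop jar hadoop-examples-1.0.4.jar wordcount /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output-2
```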
Answer 1 (score: 2)
As Dave (and the exception) said, your output directory already exists. You need to either output to a different directory or remove the existing one first, using:
hadoop fs -rmr /home/hadoop/gutenberg-output
Answer 2 (score: 2)
If you have created your own .jar and are trying to run it, take note:
To run your job, you have to write something like this:
hadoop jar <jar-path> <package-path> <input-in-hdfs-path> <output-in-hdfs-path>
But if you look closely at your driver code, you will see that you have set args[0] as the input and args[1] as the output ... I'll show you:
FileInputFormat.addInputPath(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
However, hadoop takes args[0] as <package-path> instead of <input-in-hdfs-path>, and args[1] as <input-in-hdfs-path> instead of <output-in-hdfs-path>.
So, to make it work, you should use:
FileInputFormat.addInputPath(conf, new Path(args[1]));
FileOutputFormat.setOutputPath(conf, new Path(args[2]));
With args[1] and args[2], it will pick up the right paths! :)
Hope it helps. Cheers.
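The shift this answer describes can be illustrated with plain shell positional parameters, without Hadoop (a hypothetical sketch; my.pkg.WordCount, /input and /output are made-up names standing in for <package-path>, <input-in-hdfs-path> and <output-in-hdfs-path>):

```shell
# Simulate the argument list the driver sees when the launcher passes the
# class-name token through as the first argument.
# Hypothetical command: hadoop jar wc.jar my.pkg.WordCount /input /output
set -- my.pkg.WordCount /input /output

# The data paths then sit at positions 2 and 3, not 1 and 2:
echo "class=$1 input=$2 output=$3"
```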
Answer 3 (score: 1)
Check whether there is a 'tmp' folder:
hadoop fs -ls /
If you see the output folder or 'tmp', delete both (making sure no active jobs are running):
hadoop fs -rmr /tmp