Hadoop complains that a rename cannot overwrite a non-empty destination directory

Posted: 2015-06-16 01:17:54

Tags: hadoop hdfs

I am following Rasesh Mori's instructions to install Hadoop on a multinode cluster, and have gotten to the point where jps shows the various nodes are up and running. I can copy files into HDFS; I did so with

    $HADOOP_HOME/bin/hdfs dfs -put ~/in /in

and then tried to run the wordcount example program on it with

    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /in /out

but I get the error

15/06/16 00:59:53 INFO mapreduce.Job: Task Id : attempt_1434414924941_0004_m_000000_0, Status : FAILED
Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
java.io.IOException: Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
    at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:716)
    at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:228)
    at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:659)
    at org.apache.hadoop.fs.FileContext.rename(FileContext.java:909)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

How can I fix this?

2 Answers:

Answer 0 (score: 4)

This is a bug in Hadoop 2.6.0. It has been marked as fixed, but it still occurs occasionally (see: https://issues.apache.org/jira/browse/YARN-2624).

Clearing out the appcache directory and restarting the YARN daemons will most likely resolve the problem; a sketch of those steps is below.
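A minimal sketch of that cleanup, assuming the nm-local-dir location shown in the question's error message ($HADOOP_HOME/nm-local-dir) and the standard sbin start/stop scripts; adjust the path if yarn.nodemanager.local-dirs points somewhere else, and repeat on every NodeManager host:

    # stop the YARN daemons (ResourceManager and NodeManagers)
    $HADOOP_HOME/sbin/stop-yarn.sh

    # remove the cached per-application data; path assumed from the error message
    rm -rf $HADOOP_HOME/nm-local-dir/usercache/*/appcache/*

    # bring YARN back up
    $HADOOP_HOME/sbin/start-yarn.sh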

Answer 1 (score: 0)

I ran into the same error with the /hadoop/yarn/local/usercache/hue/filecache/ directory. Running sudo rm -rf /hadoop/yarn/local/usercache/hue/filecache/* resolved it for me.