NameNode stops working after Hadoop restart

Date: 2014-03-09 17:21:07

Tags: hadoop

I have a server with Hadoop installed.

I wanted to change some configuration (regarding mapreduce.map.output.compress), so I edited the configuration file and restarted Hadoop with:

stop-all.sh
start-all.sh

After that, I could no longer use it, because it was stuck in safe mode:

The reported blocks is only 0 but the threshold is 0.9990 and the total blocks 11313. Safe mode will be turned off automatically

Note that the number of reported blocks is 0, and it is not increasing at all.
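The message means the NameNode is waiting for DataNodes to report their blocks before it will leave safe mode on its own. For the figures in that log line, a quick sketch of the arithmetic (threshold of 0.9990 applied to 11313 total blocks) shows how many reported blocks it was actually waiting for:

```shell
# Safe mode lifts only once reported/total >= threshold.
# Compute the minimum reported-block count for the values
# quoted in the log message above.
total=11313
threshold=0.9990
needed=$(awk -v t="$total" -v th="$threshold" \
    'BEGIN { n = t * th; printf "%d", (n == int(n)) ? n : int(n) + 1 }')
echo "$needed"   # minimum blocks that must be reported
```

Since 0 reported blocks is nowhere near that count, the real question is why no DataNode is reporting anything at all.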

So I forced it to leave safe mode:

bin/hadoop dfsadmin -safemode leave

Now I get errors like this:

2014-03-09 18:16:40,586 [Thread-1] ERROR org.apache.hadoop.hdfs.DFSClient - Failed to close file /tmp/temp-39739076/tmp2073328134/GQL.jar
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/temp-39739076/tmp2073328134/GQL.jar could only be replicated to 0 nodes, instead of 1

In case it helps, my hdfs-site.xml is:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hduser/hadoop/name/data</value>
    </property>
</configuration>
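One thing worth noting about this file: it sets dfs.name.dir but not dfs.data.dir, so DataNode block storage falls back to the default under hadoop.tmp.dir, which is typically beneath /tmp and may be wiped on reboot. That is a common reason blocks vanish and the "replicated to 0 nodes" error appears. A hedged fix is to pin the data directory explicitly; the path below is hypothetical and should be adjusted to a persistent location:

```xml
<property>
    <name>dfs.data.dir</name>
    <!-- Hypothetical path: choose any directory that survives reboots -->
    <value>/home/hduser/hadoop/data</value>
</property>
```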

1 Answer:

Answer 0 (score: 1)

I have run into this problem many times. Whenever you get an error saying x could only be replicated to 0 nodes, instead of 1, the following steps should resolve it:

1. Stop all Hadoop services with: stop-all.sh
2. Delete the dfs/name and dfs/data directories
3. Format the NameNode with: hadoop namenode -format
4. Start Hadoop again with: start-all.sh
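The steps above can be sketched as a script. Be warned that formatting the NameNode and deleting dfs/data erases everything in HDFS, so this is only sensible on a disposable single-node setup. The sketch below prints the commands rather than running them; the name-dir path is the one from the question's hdfs-site.xml, while the data-dir path is hypothetical (the question's config never sets dfs.data.dir, so the real location is under the hadoop.tmp.dir default):

```shell
# Dry-run of the recovery steps: echo instead of execute, because
# step 3 destroys all HDFS metadata and data.
recovery_steps() {
    echo "stop-all.sh"
    echo "rm -rf /home/hduser/hadoop/name/data"   # dfs.name.dir from the question
    echo "rm -rf /tmp/hadoop-hduser/dfs/data"     # hypothetical default data dir
    echo "hadoop namenode -format"
    echo "start-all.sh"
}
recovery_steps
```

To actually perform the recovery, run each printed command by hand after double-checking the paths against your own configuration.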