jobtracker.info could only be replicated to 0 nodes, instead of 1

Time: 2015-06-03 03:33:02

Tags: java hadoop hdfs

I hit a Hadoop error on startup. Here is the JobTracker log:

2015-06-03 09:38:26,106 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-hadooptest/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2091)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:795)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3779)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3639)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2842)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3082)

The log continues:

 
2015-06-03 09:38:26,107 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
2015-06-03 09:38:26,107 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-hadooptest/mapred/system/jobtracker.info" - Aborting...
2015-06-03 09:38:26,107 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://172.18.11.9:9000/tmp/hadoop-hadooptest/mapred/system/jobtracker.info failed!
2015-06-03 09:38:26,107 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2015-06-03 09:38:26,130 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
There is enough disk space, and I have turned off the firewall. Here is the result:

[hadooptest@hw009 logs]$ chkconfig iptables --list
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
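
For reference, "could only be replicated to 0 nodes" usually means the NameNode cannot find any live DataNodes to place the block on, rather than a disk or firewall problem. On a Hadoop 1.x cluster the following stock commands can confirm whether any DataNodes have actually registered with the NameNode (shown here only as a suggested check, not output from this cluster):

    jps                                # on each node: NameNode/DataNode (and later JobTracker/TaskTracker) should appear
    bin/hadoop dfsadmin -report        # number of live DataNodes and their reported capacity
    bin/hadoop dfsadmin -safemode get  # whether the NameNode is still in safe mode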

Why is this happening, and how can I fix it? Thanks a lot.

I tried several approaches, but none of them solved the problem.

1 Answer:

Answer 0: (score: 0)

I have solved the problem. I used to start the Hadoop cluster with the command bin/start-all.sh. Now I run the command bin/start-dfs.sh first, and then run the command bin/start-mapred.sh about 5 minutes later.
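
A minimal sketch of that revised startup sequence is below. The dfsadmin safe-mode wait is an optional substitute for the fixed 5-minute delay and is not something the original answer used; it simply blocks until the NameNode reports it has left safe mode.

    bin/start-dfs.sh                       # start the NameNode, DataNodes, and SecondaryNameNode first
    bin/hadoop dfsadmin -safemode wait     # block until HDFS leaves safe mode (instead of sleeping 5 minutes)
    bin/start-mapred.sh                    # then start the JobTracker and TaskTrackers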

I suspect my server is simply too old, so HDFS takes a long time to come up. Once HDFS is actually running, starting the MapReduce daemons works without any problems.