ERROR namenode.FSNamesystem: FSNamesystem initialization failed

Asked: 2012-01-07 23:18:32

Tags: hadoop hdfs

I am running hadoop in pseudo-distributed mode on an ubuntu VM. I recently decided to increase the memory and number of cores available to my VM, and that seems to have completely messed up hdfs. First, it was in safe mode, which I manually released using:

hadoop dfsadmin -safemode leave
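(As an aside, before forcing the NameNode out of safe mode it is usually worth checking its status first; these `dfsadmin` subcommands exist in Hadoop 0.20:)

```shell
# Report whether the NameNode is currently in safe mode
hadoop dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
hadoop dfsadmin -safemode wait
```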

Then I ran:

hadoop fsck -blocks

Almost every block was corrupt or missing. So I figured, since this is just for my own learning, I deleted everything in "/user/msknapp" and everything in "/var/lib/hadoop-0.20/cache/mapred/mapred/.settings". That made the block errors go away. Then I tried:
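(For reference, `fsck` can also show which files own the bad blocks, so cleanup can be targeted rather than wholesale; these flags are part of the Hadoop 0.20 `fsck` tool:)

```shell
# Show per-file status, the blocks of each file, and their datanode locations
hadoop fsck / -files -blocks -locations

# Move corrupted files to /lost+found, or delete them outright
hadoop fsck / -move
hadoop fsck / -delete
```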

hadoop fs -put myfile myfile

and got (abridged):

    12/01/07 15:05:29 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
    at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
12/01/07 15:05:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/01/07 15:05:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/msknapp/myfile" - Aborting...
put: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
12/01/07 15:05:29 ERROR hdfs.DFSClient: Exception closing file /user/msknapp/myfile : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
    at ...

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
    at ...

So I tried stopping and restarting the namenode and datanode. No luck:

hadoop namenode

    12/01/07 15:13:47 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
    at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
    at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)
12/01/07 15:13:47 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
    at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
    at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)

12/01/07 15:13:47 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
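(For completeness, the usual way to bounce the HDFS daemons with a stock 0.20 tarball install is via the bundled scripts; packaged installs such as CDH may use init/service scripts instead:)

```shell
# Stop and restart just the HDFS daemons
stop-dfs.sh
start-dfs.sh

# Or restart individual daemons
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode
```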

Would somebody please help me? I have been trying to fix this for hours.

2 Answers:

Answer 0 (score: 6):

Go to wherever you have configured hdfs, delete everything there, format the namenode, and you are good to go. This usually happens when you do not shut down your cluster properly!
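A sketch of those steps, using the storage path that appears in the question's stack trace (adjust to your own `dfs.name.dir` / `dfs.data.dir`; note this wipes all HDFS data):

```shell
# Stop all daemons first
stop-all.sh

# Remove the corrupt NameNode/DataNode storage directories
# (path taken from the stack trace above; check your dfs.name.dir / dfs.data.dir)
rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs/*

# Re-create an empty filesystem image, then bring HDFS back up
hadoop namenode -format
start-dfs.sh
```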

Answer 1 (score: 0):

The following error means that the fsimage file lacks the required permissions:

namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)

so grant permissions on the fsimage file:

$ chmod -R 777 fsimage
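(A narrower alternative to a world-writable 777 is to hand the name directory back to the user the NameNode runs as; the `hdfs:hadoop` user and group here are an assumption based on a typical packaged hadoop-0.20 install, so check which user actually runs your NameNode:)

```shell
# Restore ownership of the NameNode storage directory (assumed user/group)
sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs/name
```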