ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint

Time: 2014-02-12 15:58:36

Tags: hadoop

I am using Hadoop 2.2.0 in a cluster setup and I repeatedly get the following error. The exception is raised on the name node olympus, in the file /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-secondarynamenode-olympus.log, for example:

2014-02-12 16:19:59,013 INFO org.mortbay.log: Started SelectChannelConnector@olympus:50090
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Web server init done
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary Web-server up at: olympus:50090
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period   :3600 secs (60 min)
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger    :1000000 txns
2014-02-12 16:20:59,161 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -47 namespaceID = 291272852 cTime = 0 ; clusterId = CID-e3e4ac32-7384-4a1f-9dce-882a6e2f4bd4 ; blockpoolId = BP-166254569-192.168.92.21-1392217748925.
Expecting respectively: -47; 431978717; 0; CID-85b65e19-4030-445b-af8e-5933e75a6e5a; BP-1963497814-192.168.92.21-1392217083597.
    at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:133)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:519)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:380)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:346)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:342)
    at java.lang.Thread.run(Thread.java:744)
2014-02-12 16:21:59,183 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -47 namespaceID = 291272852 cTime = 0 ; clusterId = CID-e3e4ac32-7384-4a1f-9dce-882a6e2f4bd4 ; blockpoolId = BP-166254569-192.168.92.21-1392217748925.
Expecting respectively: -47; 431978717; 0; CID-85b65e19-4030-445b-af8e-5933e75a6e5a; BP-1963497814-192.168.92.21-1392217083597.
    at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:133)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:519)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:380)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:346)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:342)
    at java.lang.Thread.run(Thread.java:744)

Can anyone suggest what is wrong here?

2 Answers:

Answer 0 (score: 9):

I ran into the same error, and it went away once I deleted the [hadoop temp directory]/dfs/namesecondary directory. For me, [hadoop temp directory] is the value of hadoop.tmp.dir in core-site.xml.
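
A rough sketch of that cleanup, assuming the install root visible in the log path above (/opt/dev/hadoop/2.2.0), its default etc/hadoop config directory, and a purely hypothetical hadoop.tmp.dir of /opt/dev/hadoop/tmp:

    # show the configured hadoop.tmp.dir (the path below is only a placeholder)
    grep -A1 'hadoop.tmp.dir' /opt/dev/hadoop/2.2.0/etc/hadoop/core-site.xml
    # remove the secondary namenode's stale checkpoint directory
    rm -rf /opt/dev/hadoop/tmp/dfs/namesecondary

The secondary namenode should rebuild dfs/namesecondary from the active namenode on its next checkpoint, so the stale namespaceID/clusterID values that trigger "Inconsistent checkpoint fields" disappear after the next doCheckpoint run.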

Answer 1 (score: 2):

We need to stop the Hadoop services first, then delete the secondary namenode's tmp directory (hadoop.tmp.dir gives the path to the secondary namenode's data directory). After that, start the services again and the issue will be resolved.
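
A minimal command sequence for that procedure, assuming the same install root and the same hypothetical /opt/dev/hadoop/tmp value for hadoop.tmp.dir as in the sketch above (run on the node hosting the secondary namenode):

    # stop the HDFS daemons (namenode, datanodes, secondary namenode)
    /opt/dev/hadoop/2.2.0/sbin/stop-dfs.sh
    # delete the secondary namenode checkpoint data under hadoop.tmp.dir
    rm -rf /opt/dev/hadoop/tmp/dfs/namesecondary
    # start HDFS again; the secondary namenode checkpoints with the current IDs
    /opt/dev/hadoop/2.2.0/sbin/start-dfs.sh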