Hadoop SecondaryNameNode problem

Date: 2015-02-04 21:20:13

Tags: apache hadoop

Running Hadoop version 1.2.1 on Ubuntu VMs, four VMs in total:

1. hadoop-nn (NameNode)
2. hadoop-snn (SecondaryNameNode)
3. hadoop-dn01 (DataNode 1)
4. hadoop-dn02 (DataNode 2)

All daemons are started with start-all.sh.
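For reference, in Hadoop 1.x the SecondaryNameNode host is selected by the conf/masters file (despite its name, it lists the SNN hosts, not the NameNode). A minimal sketch of the two topology files for a cluster like this; the exact contents below are an assumption for illustration, not copied from the poster's setup:

    # conf/masters -- hosts that run the SecondaryNameNode
    hadoop-snn

    # conf/slaves -- hosts that run the DataNode/TaskTracker daemons;
    # start-all.sh starts daemons on these over SSH
    hadoop-dn01
    hadoop-dn02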

I do not see the edit events reaching the SecondaryNameNode, which means the fsimage on the secondary is not being updated. The log file on the SecondaryNameNode shows the following error.

2015-02-04 13:16:12,083 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 50
2015-02-04 13:16:12,086 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2015-02-04 13:16:12,087 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /tmp/hadoop-hadoop/dfs/namesecondary/current/edits, reached end of edit log. Number of transactions found: 8. Bytes read: 740
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /tmp/hadoop-hadoop/dfs/namesecondary/current/edits of size 740 edits # 8 loaded in 0 seconds.
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2015-02-04 13:16:12,128 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=740, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,128 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 740, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,130 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /tmp/hadoop-hadoop/dfs/namesecondary/current/fsimage of size 5124 bytes saved in 0 seconds.
2015-02-04 13:16:12,229 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,230 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,485 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL hadoop-nn:50070putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f3f8acb7a9
2015-02-04 13:16:12,485 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://hadoop-nn:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f3f8acb7a9
2015-02-04 13:16:12,489 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2015-02-04 13:16:12,490 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://hadoop-nn:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f3f8acb7a9
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1624)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:177)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:462)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:525)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:396)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:360)
        at java.lang.Thread.run(Thread.java:745)
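Note that the checkpoint itself completes: the edits are replayed and the merged fsimage is written locally under /tmp/hadoop-hadoop/dfs/namesecondary/current/. What fails is the upload back to the NameNode, and the posted URL carries machine=0.0.0.0, meaning the SecondaryNameNode is advertising its wildcard bind address instead of its hostname, so the NameNode has no usable address to connect back to and fetch the merged image. As a quick sanity check (assuming curl is installed on the SNN host), the same getimage servlet can be probed directly:

    # From hadoop-snn: fetch the current image from the NameNode's
    # getimage servlet; if this fails, the two hosts cannot exchange
    # image files at all.
    curl -o /tmp/fsimage.test "http://hadoop-nn:50070/getimage?getimage=1"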

1 Answer:

Answer 0 (score: 1):

Adding the following property to hdfs-site.xml fixed the problem:

    <property>
      <name>dfs.secondary.http.address</name>
      <value>hadoop-snn:50090</value>
    </property>

With this set, the SecondaryNameNode advertises hadoop-snn:50090 instead of the wildcard 0.0.0.0, so the NameNode can connect back to it and fetch the merged fsimage.
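A minimal way to apply and verify this, assuming the default checkpoint settings: edit hdfs-site.xml on the SecondaryNameNode host, restart the daemons, and watch the SNN log for the next checkpoint, which by default runs every fs.checkpoint.period = 3600 seconds:

    # restart all daemons from the NameNode host
    stop-all.sh
    start-all.sh

    # on hadoop-snn, a successful cycle ends with a line like
    # "Checkpoint done. New Image Size: ..." and refreshes the files
    # under /tmp/hadoop-hadoop/dfs/namesecondary/current/
    tail -f $HADOOP_HOME/logs/hadoop-*-secondarynamenode-*.log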