I want to set up a pseudo-distributed Hadoop cluster on my Ubuntu machine, but I cannot start the namenode (other daemons, such as the jobtracker, start fine). My startup commands are:
./hadoop namenode -format
./start-all.sh
I checked the namenode log located at logs/hadoop-mongodb-namenode-mongodb.log:
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,799 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-12-25 13:44:39,809 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2013-12-25 13:44:39,812 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
2013-12-25 13:44:39,847 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-12-25 13:44:39,878 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-12-25 13:44:39,884 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-12-25 13:44:39,888 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-12-25 13:44:39,889 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mongodb cause:java.net.BindException: Address already in use
2013-12-25 13:44:39,889 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:701)
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-12-25 13:44:39,909 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:174)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2013-12-25 13:44:39,910 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mongodb/192.168.10.2
************************************************************/
That is the error message. Clearly something is wrong with the port! Here are my conf files: core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.name.dir</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.data.dir</value>
  </property>
</configuration>
No matter which other port I change it to before restarting hadoop, the error persists! Can anyone help me?
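One way to confirm the conflict before touching any config is to probe the two ports the namenode needs, per the log above (a sketch; 9000 for RPC, 50070 for the web UI):

```shell
# Probe the two ports the namenode binds (values taken from the log above).
# A successful TCP connect means some process already holds the port.
for port in 9000 50070; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: in use"
  else
    echo "port ${port}: free"
  fi
done
```

`netstat -tlnp | grep 50070` (or `lsof -i :50070`) would additionally show which process holds the port.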
Answer 0 (score: 2)
Try removing the HDFS data directory, and rather than formatting the namenode before starting HDFS, start HDFS first and check the jps output. If everything looks fine, then try formatting the namenode and check again. If the problem persists, post the log details.
P.S.: Do not kill the processes. Use stop-all.sh, or whatever the appropriate command is, to stop hadoop.
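A dry-run sketch of that order of operations (the data-directory path is taken from the hdfs-site.xml above; swap the `echo` wrapper for direct execution on the real machine):

```shell
# Dry run: print each step instead of executing it.
# On the real machine, change run() to execute its arguments: run() { "$@"; }
run() { echo "+ $*"; }

run stop-all.sh                                   # stop cleanly; never kill -9
run rm -rf /var/hadoop/hadoop-1.2.1/dfs.data.dir  # remove the HDFS data directory
run start-all.sh                                  # start first, WITHOUT formatting
run jps                                           # confirm NameNode/DataNode are listed
run hadoop namenode -format                       # only if the jps output looks healthy
```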
Answer 1 (score: 0)
The datanode on one of the slave machines in my cluster was throwing a similar port-bind exception:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException: Address already in use
I noticed that the datanode's default web-interface port, 50075, was already bound to another application:
[ap2]-> netstat -an | grep -i 50075
tcp 0 0 10.0.1.1:45674 10.0.1.1:50075 ESTABLISHED
tcp 0 0 10.0.1.1:50075 10.0.1.1:45674 ESTABLISHED
[ap2]->
I changed the datanode web interface in conf/hdfs-site.xml:
<property>
<name>dfs.datanode.http.address</name>
<value>10.0.1.1:50080</value>
<description>Datanode http port</description>
</property>
This resolved the issue. Similarly, you can change the default address and port that the web interface listens on by setting dfs.http.address in conf/hadoop-site.xml, e.g. to localhost:9090, but make sure the port is available.
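For example (an assumed snippet mirroring the property format above; pick any free port):

```xml
<property>
  <name>dfs.http.address</name>
  <value>localhost:9090</value>
  <description>NameNode web UI address and port</description>
</property>
```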