I am attempting a multi-node Hadoop installation on Azure servers. On running the start-dfs.sh command, my secondarynamenode and datanode start, but the namenode fails to start with the following error:
2016-02-10 10:52:59,321 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2016-02-10 10:52:59,322 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-02-10 10:52:59,324 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-02-10 10:52:59,346 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [testhadooptest:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:425)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:344)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:408)
... 13 more
2016-02-10 10:52:59,349 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-02-10 10:52:59,356 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at testhadooptest/public_ip
************************************************************/
My /etc/hosts:
127.0.0.1 localhost
public_ip testhadooptest
public_ip tesdatanode
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
But when I comment out the line public_ip testhadooptest in /etc/hosts, the namenode starts. Why this strange behaviour? I checked link and link but could not find a solution. Please help.
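For context, a quick way to see why the bind fails is to compare what the hostname resolves to against the addresses actually assigned to the local network interfaces. A minimal diagnostic sketch using standard Linux tools (testhadooptest is the hostname from the logs above):

getent hosts testhadooptest   # address the NameNode RPC server will try to bind to
ip addr show                  # addresses actually configured on local interfaces

If the resolved address does not appear on any local interface, the bind() call fails with exactly the "Cannot assign requested address" seen in the stack trace.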
Hadoop configuration files:
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://testhadooptest:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>
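A side note on the configuration above: the fs.default.name value is what determines the address and port the NameNode RPC server binds to, which is why the BindException names testhadooptest:9000. Also, fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS; both keys still work in 2.7.1, but the equivalent modern form would be:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://testhadooptest:9000</value>
</property>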
Hadoop version 2.7.1
Thanks in advance :)
Answer 0 (score: 0)
The problem was with the Azure servers: they do not allow binding to the PUBLIC VIRTUAL IP (VIP) ADDRESS, only to the INTERNAL IP ADDRESS. So I created a virtual network and added all 3 nodes to it, and also replaced public_ip with internal_ip in /etc/hosts on every machine.
New /etc/hosts:
127.0.0.1 localhost
internal_ip testhadooptest
internal_ip tesdatanode
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
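A possible alternative (not tested here, offered as a sketch) is to keep the hostname mapping for clients but force the RPC server to bind to the wildcard address via dfs.namenode.rpc-bind-host, a property documented in hdfs-default.xml for Hadoop 2.7.x, set in hdfs-site.xml:

<property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
</property>

Either way, after restarting you can verify the listener with netstat -tlnp | grep 9000 and confirm it is bound to the internal (or wildcard) address rather than the public VIP.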