Hadoop: Unable to start NameNode

Date: 2019-05-21 23:36:25

Tags: java hadoop hdfs namenode

I am setting up a Hadoop cluster with 1 master node and 2 slave nodes. When I run jps, the NameNode daemon is not running on the master, but the DataNode and Secondary NameNode daemons are running (on the slaves and the master, respectively).

Error

java.net.BindException: Problem binding to [master-1320-2:9000] java.net.BindException: Cannot assign requested address
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736)
        at org.apache.hadoop.ipc.Server.bind(Server.java:562)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1038)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2810)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:960)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:802)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:457)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:545)
        ... 13 more
2019-05-21 22:39:06,481 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Problem binding to [master-1320-2:9000] java.net.BindException: Cannot assign requested address

Configuration files

/etc/hosts

127.0.0.1 localhost
ip master-1320-2
ip hdfs-slave1-1320
ip hdfs-slave2-1320

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
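A "Cannot assign requested address" bind failure usually means the address that master-1320-2 resolves to is not assigned to any interface on the master machine. The sketch below illustrates the check the OS performs when the NameNode's RPC server binds; the can_bind helper is illustrative (not part of Hadoop), and master-1320-2 is the hostname from the question, so substitute the real one when running this:

```python
# Minimal sketch of the check behind "Cannot assign requested address":
# a TCP socket can only bind to an address owned by a local interface.
# The can_bind() helper is illustrative, not a Hadoop API.
import socket

def can_bind(host, port=0):
    """True if this machine can bind a TCP socket to the address 'host' resolves to."""
    addr = socket.gethostbyname(host)      # consults /etc/hosts first
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, port))               # the same call the NameNode makes on port 9000
        return True
    except OSError:                        # EADDRNOTAVAIL -> the NameNode's error
        return False
    finally:
        s.close()

print(can_bind("localhost"))   # loopback is always local -> True
print(can_bind("192.0.2.1"))   # TEST-NET address, not assigned here -> False
```

If can_bind("master-1320-2") is False on the master, the entry for master-1320-2 in /etc/hosts points at an address the machine does not own (for example, a public IP on a cloud VM whose interface actually carries the private IP).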

core-site.xml

<configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master-ip:9000</value>
        </property>
    </configuration>

hdfs-site.xml

<configuration>
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///home/ubuntu/data/</value>
    </property>

    <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:///home/ubuntu/data/</value>
    </property>

    <property>
            <name>dfs.replication</name>
            <value>1</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
            <name>yarn.acl.enable</name>
            <value>0</value>
    </property>

    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>master-ip</value>
    </property>

    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1536</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>1536</value>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>master-ip:54311</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>512</value>
    </property>

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>256</value>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>256</value>
    </property>
</configuration>

masters

master-ip

workers

slave1-ip
slave2-ip

Context

  • If I run netstat -tulpn | grep ':9000', nothing is listening on that port
  • I tried deleting the /tmp folder and reformatting the NameNode (as Google results suggested), but it does not work
  • Strangely, the NameNode does run if I modify /etc/hosts like this (note the third string on the first line):
127.0.0.1 localhost master-1320-2
ip master-1320-2
ip hdfs-slave1-1320
ip hdfs-slave2-1320

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

The NameNode runs, but I think this does something wrong (I am completely new to networking), such as creating an alias for localhost or something similar. Moreover, downstream operations (checked, for example, with hdfs dfsadmin -report) do not work.
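A likely reason the workaround "works" yet breaks hdfs dfsadmin -report: with master-1320-2 aliased to 127.0.0.1, the NameNode binds port 9000 on the loopback interface, which remote DataNodes cannot reach. A small sketch of that effect (the loopback_only_demo function is illustrative; a free port is chosen automatically):

```python
# Sketch: a listener bound to 127.0.0.1 (what the /etc/hosts workaround
# effectively does to the NameNode) accepts loopback connections only.
import socket

def loopback_only_demo():
    """Bind a server on loopback only and show that a loopback client connects."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # loopback only; 0 -> OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    # A client on the same machine reaches it over loopback; DataNodes
    # dialing the master's LAN address would be refused instead, since
    # nothing listens on that interface -- hence the empty dfsadmin report.
    try:
        client = socket.create_connection(("127.0.0.1", port), timeout=1)
        client.close()
        return True
    except OSError:
        return False
    finally:
        server.close()

print(loopback_only_demo())   # -> True
```

This is why the usual advice is to make the master hostname resolve to the machine's real LAN address (not 127.0.0.1), so the NameNode binds an interface the slaves can reach.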

0 Answers:

There are no answers yet