Hadoop: getting "Container launch failed" error

Date: 2016-01-18 05:02:42

Tags: java hadoop

I have just set up a multi-node Hadoop cluster with one namenode machine and two slave nodes. However, whenever I run a MapReduce job, I keep getting this error:

    Container launch failed for container_1453020503065_0030_01_000009:
    java.lang.IllegalArgumentException: java.net.UnknownHostException: HOME

Here HOME and shubhranshu-OptiPlex-9020 are the hostnames of the slave machines. I have already put their IP addresses and hostnames in the /etc/hosts file. My /etc/hosts file looks like this:

10.0.3.107  HadoopMaster
10.0.3.108  HadoopSlave1
10.0.3.109  HadoopSlave2
127.0.0.1       localhost amrit
#127.0.1.1      amrit
10.0.3.107      amrit
10.0.3.108      HOME
10.0.3.109      shubhranshu-OptiPlex-9020
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
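
As a quick diagnostic, here is a minimal Java sketch (the class name ResolveCheck and the exact list of hostnames are my own, chosen for illustration) that prints whether each cluster hostname resolves on the node it is run from; a hostname that fails here hits the same UnknownHostException the container launch reports.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Hypothetical helper: checks which cluster hostnames resolve on this node.
    public class ResolveCheck {
        public static void main(String[] args) {
            // Hostnames taken from the /etc/hosts listing above.
            String[] hosts = {"HadoopMaster", "HadoopSlave1", "HadoopSlave2",
                              "HOME", "shubhranshu-OptiPlex-9020"};
            for (String host : hosts) {
                try {
                    InetAddress addr = InetAddress.getByName(host);
                    System.out.println(host + " -> " + addr.getHostAddress());
                } catch (UnknownHostException e) {
                    // Same exception type as in the container launch error.
                    System.out.println(host + " -> UNRESOLVED: " + e.getMessage());
                }
            }
        }
    }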

Please let me know if I need to add anything else. Thanks!

1 Answer:

Answer 0 (score: 0)

Modify the /etc/hosts file on the master machine (10.0.3.107) as follows:

    127.0.0.1       localhost 
    10.0.3.107      HadoopMaster amrit
    10.0.3.108      HadoopSlave1
    10.0.3.109      HadoopSlave2

Also modify /etc/hosts on the 10.0.3.108 machine as follows:

    127.0.0.1       localhost 
    10.0.3.107      HadoopMaster 
    10.0.3.108      HadoopSlave1 HOME
    10.0.3.109      HadoopSlave2

And modify /etc/hosts on the 10.0.3.109 machine as follows:

    127.0.0.1       localhost 
    10.0.3.107      HadoopMaster 
    10.0.3.108      HadoopSlave1
    10.0.3.109      HadoopSlave2 shubhranshu-OptiPlex-9020
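
After editing the files on all three machines, a small sketch along these lines (the class name LocalHostCheck is hypothetical) can be run on each node to confirm that the node's own hostname now resolves to its 10.0.3.x address instead of failing or falling back to a loopback address:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Hypothetical check: resolve this machine's own hostname, as Hadoop
    // daemons do when they register with the cluster.
    public class LocalHostCheck {
        public static void main(String[] args) {
            try {
                InetAddress local = InetAddress.getLocalHost();
                System.out.println("hostname: " + local.getHostName());
                System.out.println("address:  " + local.getHostAddress());
            } catch (UnknownHostException e) {
                System.err.println("local hostname does not resolve: " + e.getMessage());
            }
        }
    }

If the address printed is 127.0.0.1 or 127.0.1.1 rather than the machine's LAN address, the node may still advertise an unreachable host to the cluster, which is why the layout above keeps the loopback line limited to plain localhost.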