Hadoop NameNode fails to start

Time: 2015-06-11 05:09:37

Tags: hadoop cluster-computing

I am currently trying to run Hadoop 2.6.0 on Amazon EC2 instances (a multi-node cluster). I launched two Ubuntu 14.04 instances: one is the master and the other is the slave. Here is my configuration:

Master

-core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoopmaster:9000</value>
        </property>
</configuration>

-hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs/namenode</value>
        </property>
</configuration>

-yarn-site.xml

<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoopmaster:8025</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoopmaster:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoopmaster:8050</value>
</property>

-mapred-site.xml

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>hadoopmaster:54311</value>
        </property>
</configuration>

-masters

hadoopmaster

-slaves

hadoopslave1

Slave

-hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs/datanode</value>
        </property>
</configuration>

The other files are the same as on the master.

When I run hdfs namenode -format, it looks fine and exits with status 0. When I run start-all.sh, it gives the following output:

ubuntu@hadoopmaster:~/hadoop-2.6.0$ sbin/start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoopmaster]
hadoopmaster: starting namenode, logging to /home/ubuntu/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-hadoopmaster.out
hadoopslave1: starting datanode, logging to /home/ubuntu/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-hadoopslave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/ubuntu/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-hadoopmaster.out
starting yarn daemons
starting resourcemanager, logging to /home/ubuntu/hadoop-2.6.0/logs/yarn-ubuntu-resourcemanager-hadoopmaster.out
hadoopslave1: starting nodemanager, logging to /home/ubuntu/hadoop-2.6.0/logs/yarn-ubuntu-nodemanager-hadoopslave1.out

That looks fine, with no errors reported. However, when I run jps on the master, it gives the following output:

ubuntu@hadoopmaster:~/hadoop-2.6.0$ jps
3640 ResourceManager
3501 SecondaryNameNode
3701 Jps

The NameNode is missing! When I run jps on the slave, I get the following:

ubuntu@hadoopslave1:~/hadoop-2.6.0$ jps
1686 DataNode
1870 Jps
1817 NodeManager

Here is the NameNode's log file:

2015-06-11 04:16:18,987 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopmaster/54.172.40.127
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

The .out file:

ulimit -a for user ubuntu
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13357
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 13357
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

I have repeated this several times and gotten the same result: the NameNode is always missing. Can anyone give me some advice on this problem? Thank you very much!

2 Answers:

Answer 0 (score: 1)

I guess you have already found a solution by now, but this is for anyone else who runs into the same problem (like me).

First shut down your Hadoop cluster with stop-yarn.sh and stop-dfs.sh (in that order). Then all you have to do is go to the Hadoop temporary directory. If you have not configured it yourself, it will be at /usr/local/hadoop/tmp/.

Otherwise, look it up via the hadoop.tmp.dir property in core-site.xml. Then, inside that directory, just type:

rm -rf *

Now start the cluster again and, voilà, the NameNode is up.
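
For reference, here is a rough sketch of the whole procedure above as shell commands. It assumes the default temporary directory /usr/local/hadoop/tmp/ and that Hadoop's sbin scripts are on the PATH; adjust the path to your own hadoop.tmp.dir setting. The reformat step is an assumption on my part, for the case where the NameNode metadata also lives under the cleared directory.

# Stop YARN first, then HDFS
stop-yarn.sh
stop-dfs.sh

# Clear the Hadoop temporary directory (use your own hadoop.tmp.dir if you configured one)
cd /usr/local/hadoop/tmp/
rm -rf *

# Assumption: only needed if the NameNode metadata was stored under the cleared directory
hdfs namenode -format

# Bring the cluster back up
start-dfs.sh
start-yarn.sh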

Answer 1 (score: 0)

Have you checked the permissions of the folders under the path file:/home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs/?
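
If it helps, a quick way to inspect and, if necessary, fix the ownership of those directories could look like the following; the ubuntu:ubuntu owner is an assumption based on the home directory shown in the question.

# Check who owns the HDFS directories
ls -ld /home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs/namenode

# If they belong to another user (e.g. root), hand them back to the user that runs Hadoop (assumed here to be ubuntu)
sudo chown -R ubuntu:ubuntu /home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs
chmod -R 755 /home/ubuntu/hadoop-2.6.0/hadoop_data/hdfs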