Hadoop: NameNode, DataNode and SecondaryNameNode are not running

Asked: 2014-02-28 13:37:16

Tags: ubuntu hadoop

I am trying to install Hadoop 2.2.0 as a single-node cluster on my computer using this tutorial: http://codesfusion.blogspot.gr/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1. I followed every instruction step by step, and I get the same problem every time: NameNode, DataNode and SecondaryNameNode are not running. This is the output I see when I run start-dfs.sh, start-yarn.sh and jps:

hduser@victor-OEM:/usr/local/hadoop/sbin$ start-dfs.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-victor-OEM.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-victor-OEM.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 62:ec:99:e3:ce:2d:f8:79:1f:f8:9a:2a:25:9d:17:95.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-victor-OEM.out
hduser@victor-OEM:/usr/local/hadoop/sbin$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-victor-OEM.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-victor-OEM.out
hduser@victor-OEM:/usr/local/hadoop/sbin$ jps
10684 NodeManager
10745 Jps
10455 ResourceManager

5 Answers:

Answer 0 (score: 2):

Some versions of the codefusion tutorial (for example this one) drop the xml tags inside code blocks, so that:

#add this to foo.txt   
<bizz>bar</bizz>

becomes:

#add this to foo.txt
bar

Including the xml tags in the configuration fixed the problem.
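As a minimal sketch of what the tag-complete configuration should look like, here is a core-site.xml with the default filesystem URI set (the hdfs://localhost:9000 value is the one commonly used in single-node tutorials and is illustrative; the "Incorrect configuration: namenode address ... is not configured" error typically appears when this property is missing or its `<name>`/`<value>` tags were stripped):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: note that every property keeps its
     <property>, <name> and <value> tags intact -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```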

Answer 1 (score: 2):

You can try the following link: Leraning hadoop. It is for 0.23.9, but it also works with 2.2.0.

Answer 2 (score: 1):

Disable IPv6 in hadoop-env.sh:

export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

Answer 3 (score: 0):

I ran into the same problem.

I solved it by disabling the firewall.

Just use this command:

sudo ufw disable

Answer 4 (score: -5):

I tried the following steps:

  1. ssh-keygen -t rsa -P ""

  2. cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

  3. After that, open a new terminal and start the Hadoop cluster. This solved my problem.