ResourceManager disappears when running the MapReduce example

Date: 2013-12-12 20:45:22

Tags: hadoop hdfs

I have 3 VMs: 1 master and 2 slaves. Whenever I run the two commands start-dfs.sh and start-yarn.sh on my master VM, the logs come up completely clean, with no errors, and jps shows all the daemons running fine. But whenever I start hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5 and then run jps, my ResourceManager has disappeared. When I restart YARN, this message appears:

slave1: nodemanager running as process 6011. Stop it first.

After the example MR job runs, the console gives me:

INFO ipc.Client: Retrying connect to server: ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8/50.50.1.9:8040. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

Any suggestions are welcome; I've been trying to figure this out all day.
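One quick check for this kind of "Retrying connect" loop is whether anything is actually listening on the ResourceManager address from the log. A minimal sketch, using the host and port reported in the message above and bash's built-in /dev/tcp so no extra tools are needed:

```shell
# Host/port copied from the "Retrying connect to server" message above.
rm_host=50.50.1.9
rm_port=8040

# Attempt a TCP connection; success means something is listening there.
if timeout 2 bash -c "echo > /dev/tcp/$rm_host/$rm_port" 2>/dev/null; then
  echo "port $rm_port on $rm_host is open"
else
  echo "port $rm_port on $rm_host is closed or unreachable"
fi
```

If the port is already closed right after start-yarn.sh, the ResourceManager died during startup, and its log under $HADOOP_HOME/logs is the place to look for the real error.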

From my MASTER VM:

/etc/hosts

127.0.0.1 localhost
50.50.1.9 ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8
50.50.1.8 slave1
50.50.1.4 slave2

core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ubuntu/hadoop-2.2.0/tmp</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/namenode</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/datanode</value>
</property>

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8025</value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8030</value>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8040</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8141</value>
</property>
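One thing worth double-checking in the yarn-site.xml above: in Hadoop 2.2.0 the auxiliary-service name may only contain letters, digits, and underscores, so the form documented for this release uses an underscore rather than a dot. A NodeManager given the dotted name can fail at startup. The commonly documented form is:

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```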

My slaves look like this:

core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ubuntu/hadoop-2.2.0/tmp</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/namenode</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/datanode</value>
</property>

yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8025</value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8030</value>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8040</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8141</value>
</property>

1 Answer:

Answer 0 (score: 0)

Since the master was trying to connect to itself (ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8/50.50.1.9:8040), I realized I had forgotten to run cat .ssh/id_rsa.pub >> .ssh/authorized_keys on it. Once I did, it worked.
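For reference, a minimal sketch of the passwordless-SSH setup the answer describes, for the master logging into itself (this assumes the default OpenSSH layout under ~/.ssh; note the file name is authorized_keys, with an underscore):

```shell
# Generate an RSA key pair if one does not already exist (empty passphrase).
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q

# Authorize the public key for logins to this same machine,
# skipping the append if the key is already present.
touch ~/.ssh/authorized_keys
grep -qxF "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys \
  || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# sshd ignores keys in a group- or world-readable file, so tighten permissions.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Verify: "ssh <master-hostname> true" should now succeed without a password,
# which is what start-yarn.sh needs when it ssh-es into the local hostname.
```

This matters because the start-dfs.sh/start-yarn.sh scripts use SSH to launch daemons on every host listed in the slaves file, including the master itself.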