I am trying to deploy a 3-node Hadoop cluster in my test environment:
I configured the files carrying the master properties on my namenode, and the files carrying the slave properties on my datanodes.
/etc/hosts:
127.0.0.1 localhost
172.30.10.64 master
172.30.10.62 slave2
172.30.10.72 slave1
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
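To rule out name-resolution problems, one check worth running on every node (a sketch, assuming getent is available on these machines) is that the three names resolve to the addresses above, and in particular that master does not resolve to a loopback address on the master itself:
$ getent hosts master slave1 slave2
# expected: 172.30.10.64 master / 172.30.10.72 slave1 / 172.30.10.62 slave2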
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
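(As a side note: fs.default.name still works in Hadoop 2.7 but is a deprecated alias; with the current key the same setting would look like this:)
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>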
yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
</configuration>
My slaves file:
slave1
slave2
My masters file:
master
On the slave nodes I added the files that differ from the master's configuration.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
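The datanode directory also has to exist and be writable by the Hadoop user; a sketch of preparing it on each slave (assuming the same hduser account as below):
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
$ sudo chown -R hduser /usr/local/hadoop_tmp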
I start the cluster from /usr/local/hadoop/sbin:
./start-dfs.sh && ./start-yarn.sh
This is what I get:
hduser@master:/usr/local/hadoop/sbin$ ./start-dfs.sh && ./start-yarn.sh
18/03/14 10:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
hduser@master's password:
master: starting namenode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-namenode-master.out
hduser@slave2's password: hduser@slave1's password:
slave2: starting datanode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-datanode-slave2.out
So I opened the log file on slave2:
2018-03-14 10:46:05,494 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:06,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:07,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
Things I have tried so far, without success:
hdfs namenode -format
sudo ufw status
-> inactive
I'm a bit lost because everything seems fine, and I don't understand why I can't get my Hadoop cluster started.
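Since the datanodes keep retrying master:9000, one more check that might narrow this down (a sketch, assuming netcat and netstat are installed) is whether the NameNode RPC port is reachable from the slaves at all, and whether it is listening on 172.30.10.64 rather than only on 127.0.0.1 on the master:
# from slave1 or slave2
$ nc -zv master 9000
# on the master
$ netstat -tlnp | grep 9000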
Answer 0 (score: 1):
I may have found the answer:
I regenerated the SSH keys from the master node and copied them to the slave nodes. It seems to work now.
#Generate a ssh key for hduser
$ ssh-keygen -t rsa -P ""
#Authorize the key to enable password less ssh
$ cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys
$ chmod 600 /home/hduser/.ssh/authorized_keys
#Copy this key to slave1 to enable password less ssh and slave2 too
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
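To confirm the fix, a couple of quick checks (a sketch; it assumes HDFS has been restarted with stop-dfs.sh and start-dfs.sh afterwards):
# password-less login should now work without any prompt
$ ssh slave1 exit
$ ssh slave2 exit
# both datanodes should now appear in the report
$ hdfs dfsadmin -report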