Unable to run datanode in multi-node Hadoop cluster setup, need suggestions

Asked: 2017-05-14 02:48:59

Tags: hadoop

I am trying to set up a multi-node Hadoop cluster, but the datanode fails to start, and I need advice. The details are below; apart from this there is no other configuration. So far I have only one datanode and one namenode.

NAMENODE setup -
core-site.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

hdfs-site.xml

<property>
  <name>dfs.name.dir</name>
  <value>/data/namenode</value>
 </property>




DATANODE setup -

core-site.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

hdfs-site.xml

<property>
  <name>dfs.data.dir</name>
  <value>/data/datanode</value>
 </property>

The namenode runs fine, but when I try to run the datanode on the other machine, whose IP is 192.168.1.8, it fails and the log says:

2017-05-13 21:26:27,744 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-05-13 21:26:27,862 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-05-13 21:26:32,908 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:34,979 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:36,041 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:37,093 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:38,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:39,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

and the datanode dies.

Is there anything else to set up? Let me know if any other details are required. Are there any other files to change? I am using CentOS 7 for the environment. I have also formatted the namenode more than 2-3 times, and the permissions are correct. It appears to be purely a connectivity issue, yet when I scp from master to slave (namenode to datanode) it works fine.

Please suggest any other setup needed to make this succeed!

1 Answer:

Answer 0 (score: 0)

There is a typo in the configured property name: the 'a' is missing in fs.defult.name (vs. fs.default.name).
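Because Hadoop silently ignores unrecognized property names, the misspelled key means the datanode never picks up the namenode address from the config. A corrected core-site.xml for both machines, using the same namenode address given in the question, would look like this (a sketch; the surrounding `<configuration>` wrapper is assumed, as the snippets in the question omit it):

```xml
<configuration>
  <property>
    <!-- correct spelling: "default", not "defult" -->
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.7:9000</value>
  </property>
</configuration>
```

After fixing the property name on both nodes, restart HDFS so the datanode re-reads the configuration.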