Hadoop: slave IP is incorrect

Date: 2017-01-29 14:19:36

Tags: hadoop

1. Hosts configuration (/etc/hosts):

 127.0.0.1          localhost  
 192.168.1.3        master  
 172.16.226.129     slave1
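
A common pitfall with DataNode registration is an inconsistent hosts file between the nodes: each machine should resolve both hostnames to the addresses the other node actually uses, and a node's own hostname should not be mapped to 127.0.0.1. A sketch (addresses taken from the question; whether this matches both machines' files is an assumption):

```
# /etc/hosts — the same entries on BOTH master and slave1 (sketch)
127.0.0.1          localhost      # do NOT also map "master" or "slave1" here
192.168.1.3        master
172.16.226.129     slave1
```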

2. slaves file:

slave1

3. jps output:

zqj@master:/usr/local/nodetmp$ jps
5377 Jps
4950 SecondaryNameNode
4728 NameNode
5119 ResourceManager

zqj@slave1:/usr/local/hadooptmp$ jps
2514 NodeManager
2409 DataNode
2639 Jps

4. hadoop dfsadmin -report:

zqj@master:/usr/local/nodetmp$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 22588977152 (21.04 GB)
Present Capacity: 16719790080 (15.57 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.1.3:50010 (master)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 22588977152 (21.04 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5869187072 (5.47 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used%: 0.00%
DFS Remaining%: 74.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 30 17:29:01 CST 2017

The DataNode is listed the same way in the NameNode web UI at localhost:50070.
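
The symptom is visible in the report above: the `Name:` line shows the master's IP (192.168.1.3) while `Hostname:` shows slave1. A minimal sketch that pulls the (IP, hostname) pairs out of `hdfs dfsadmin -report` output, so a mismatch like this can be spotted programmatically (the parsing logic is my own, not part of Hadoop):

```python
import re

def parse_datanodes(report: str):
    """Extract (ip, hostname) pairs from `hdfs dfsadmin -report` text.

    If the IP on the `Name:` line does not belong to the host on the
    following `Hostname:` line, the DataNode registered with the wrong
    address (e.g. due to NAT or a bad /etc/hosts entry).
    """
    nodes = []
    ip = None
    for line in report.splitlines():
        m = re.match(r"Name:\s*([\d.]+):\d+", line.strip())
        if m:
            ip = m.group(1)
            continue
        m = re.match(r"Hostname:\s*(\S+)", line.strip())
        if m and ip is not None:
            nodes.append((ip, m.group(1)))
            ip = None
    return nodes

sample = """Name: 192.168.1.3:50010 (master)
Hostname: slave1"""
print(parse_datanodes(sample))  # [('192.168.1.3', 'slave1')]
```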

I want to know why the IP is incorrect when the NameNode runs on a physical machine and the DataNode runs in a virtual machine. Thanks!

When I use a virtual machine as the NameNode, everything works and the IP is correct. Is it necessary to configure a gateway or IP in VMware?

1 Answer:

Answer 0 (score: 0)

Put your slaves file on the master node (not on the slave nodes). I assume the hosts configuration is on the master node as well. This should solve the problem.
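
A minimal sketch of the steps the answer describes, run on the master (the `HADOOP_HOME` path is an assumption; adjust it to your installation):

```shell
# The slaves file is read only by the node that runs start-dfs.sh,
# i.e. the master — the workers never read it themselves.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"   # assumed install path
SLAVES="$HADOOP_HOME/etc/hadoop/slaves"
echo "slaves file expected at: $SLAVES"
# Typical contents: one worker hostname per line, e.g.
#   slave1
# After editing, restart HDFS so the NameNode re-reads the worker list:
#   "$HADOOP_HOME/sbin/stop-dfs.sh" && "$HADOOP_HOME/sbin/start-dfs.sh"
```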