I have installed Hadoop 3.1.1 in pseudo-distributed mode. When I try to access the Hadoop web interfaces, the NameNode (server's public IP:9870) and JobHistoryServer (public IP:19888) UIs open fine, but the DataNode (public IP:9864) and ResourceManager (public IP:8088) UIs cannot be reached.
However, when I run the jps command, the DataNode and ResourceManager are still running, and there are no unusual error messages in the log files.
I would like to know what the problem is.
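For reference, a quick way to check on the server whether those two web ports are bound to an externally reachable interface or only to loopback (a rough sketch, assuming a Linux host with ss and curl installed; <public IP> is a placeholder):

# Show which local address the DataNode (9864) and ResourceManager (8088) web ports listen on.
# An entry such as 127.0.0.1:9864 or 127.0.1.1:8088 means the UI is only reachable from the server itself.
ss -tlnp | grep -E ':(9864|8088) '

# Compare a request over loopback with one over the public address (replace <public IP>).
curl -sI http://localhost:9864/ | head -n 1
curl -sI http://<public IP>:9864/ | head -n 1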
masters:
localhost
slaves:
localhost
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>(Hadoop Home Dir)/hdata/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>(Hadoop Home Dir)/hdata/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:9864</value>
  </property>
</configuration>
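The last property above pins the DataNode web UI to localhost:9864, i.e. the loopback interface. The effective value the DataNode actually loads can be checked with (a sketch, assuming the Hadoop binaries are on PATH):

hdfs getconf -confKey dfs.datanode.http.address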
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>(Hadoop Home Dir)/hdata</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:8089</value>
  </property>
</configuration>
ResourceManager log:
2018-09-23 17:09:07,192 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = ubuntu-1cpu-40gb_ssd-2gb_ram-2tb_bw/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1
Answer (score: 1):
I modified the configuration files as shown below, and now all six processes and the web interfaces are running fine.
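For reference, the six daemons in this pseudo-distributed setup would show up in jps roughly like this (PIDs are placeholders):

12001 NameNode
12002 DataNode
12003 SecondaryNameNode
12004 ResourceManager
12005 NodeManager
12006 JobHistoryServer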
/etc/hosts (local PC / WSL):
127.0.0.1 localhost
(Server's external IP) (Server's hostname)
The 127.0.1.1 entry needs to be removed.
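To confirm the change took effect, hostname resolution can be checked on the server (a sketch, assuming getent is available); both commands should now report the external IP rather than the 127.0.1.1 address that the ResourceManager STARTUP_MSG above was resolving to:

getent hosts $(hostname)
hostname -i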
masters:
(Server's external IP)
slaves:
(Server's external IP)
workers:
(Server's external IP)
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>(Hadoop Home Dir)/hdata/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>(Hadoop Home Dir)/hdata/dfs/datanode</value>
  </property>
</configuration>
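Note that the dfs.datanode.http.address override from the original hdfs-site.xml is gone here, so the DataNode web UI falls back to its default bind address (0.0.0.0:9864 in Hadoop 3.x, as far as I know) and accepts connections on all interfaces. A quick check from another machine (the IP is a placeholder):

curl -sI http://<Server's external IP>:9864/ | head -n 1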
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>(Hadoop Home Dir)/hdata</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
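After changing the files, the daemons have to be restarted for the new settings to take effect. A rough sequence (assuming a Hadoop 3.x install with $HADOOP_HOME/sbin on PATH):

# Restart YARN and HDFS so the new configuration is picked up.
stop-yarn.sh && stop-dfs.sh
start-dfs.sh && start-yarn.sh

# Restart the JobHistoryServer (Hadoop 3.x command form).
mapred --daemon stop historyserver
mapred --daemon start historyserver

# Confirm all six daemons are up, then re-check the web UIs from outside.
jps
curl -sI http://<Server's external IP>:9864/ | head -n 1
curl -sI http://<Server's external IP>:8088/ | head -n 1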