When I try to start Hadoop on the master node, I get the following output, and the namenode does not start.
[hduser@dellnode1 ~]$ start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-dellnode1.library.out
dellnode1.library: datanode running as process 5123. Stop it first.
dellnode3.library: datanode running as process 4072. Stop it first.
dellnode2.library: datanode running as process 4670. Stop it first.
dellnode1.library: secondarynamenode running as process 5234. Stop it first.
[hduser@dellnode1 ~]$ jps
5696 Jps
5123 DataNode
5234 SecondaryNameNode
Answer 0 (score: 20)
"Stop it first."
First call stop-all.sh
Type jps
Call start-all.sh (or start-dfs.sh and start-mapred.sh)
Type jps (if the namenode does not show up, type "hadoop namenode" and check for errors)
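The jps check in the steps above can be scripted. This is a minimal sketch; it inspects captured `jps` output (hard-coded here to mirror the question's output, where NameNode is missing) rather than running a live cluster:

```shell
# Sample jps output matching the question; on a real cluster,
# replace with: jps_output=$(jps)
jps_output="5696 Jps
5123 DataNode
5234 SecondaryNameNode"

# The leading space and the end-of-line anchor prevent a false
# match on the SecondaryNameNode line.
if echo "$jps_output" | grep -q " NameNode$"; then
  echo "NameNode is running"
else
  echo "NameNode is NOT running; run 'hadoop namenode' in the foreground and check for errors"
fi
```

With the sample output above, the script reports that the NameNode is not running, which is exactly the situation in the question.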
Answer 1 (score: 4)
On newer versions of Hadoop, running "stop-all.sh" is deprecated. You should use:
stop-dfs.sh
and
stop-yarn.sh
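A full restart with the non-deprecated scripts can be sketched as below. This assumes `$HADOOP_HOME/sbin` is on your PATH; the guard lets the script no-op cleanly on a machine without Hadoop installed:

```shell
#!/bin/sh
# Stop, then restart, HDFS and YARN using the per-daemon scripts
# that replace the deprecated stop-all.sh/start-all.sh.
for script in stop-dfs.sh stop-yarn.sh start-dfs.sh start-yarn.sh; do
  if command -v "$script" >/dev/null 2>&1; then
    "$script"
  else
    echo "skipping $script (not on PATH)"
  fi
done
```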
Answer 2 (score: 1)
Today, while executing a Pig script, I ran into the same error mentioned in the question:
[training@localhost bin]$ stop-all.sh
So the answer is:
[training@localhost bin]$ start-all.sh
Then type:
jps
The problem will be solved. Now you can run the Pig script with MapReduce!
Answer 3 (score: 0)
On a Mac (if installed using Homebrew), where 3.0.0 is the Hadoop version. On Linux, change the installation path accordingly (only the /usr/local/Cellar/ part will change).
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh
For power users, it is better to write this as an alias at the end of ~/.bashrc, or ~/.zshrc if you are a zsh user. Then, every time you want to stop Hadoop and all related processes, just type hstop at the command line.
alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"
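Installing the alias can be sketched as follows. The Cellar path assumes the Homebrew layout from this answer; adjust both the rc file and the path to your own setup:

```shell
# Target rc file; use "$HOME/.zshrc" instead if you are a zsh user.
rcfile="$HOME/.bashrc"

# Append the alias (path assumes a Homebrew install of Hadoop 3.0.0).
echo 'alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"' >> "$rcfile"

# Open a new shell (or 'source' the rc file), then stop everything with: hstop
```

Note that the alias only takes effect in shells started after the rc file is updated, or after sourcing it in the current session.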