namenode, datanode not listed by jps

Date: 2015-04-28 05:13:06

Tags: ubuntu hadoop hdfs

Environment: Ubuntu 14.04, Hadoop 2.6

After I run start-all.sh, jps does not list DataNode in the terminal:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: Datanode process not running in Hadoop

I tried the best solution:

  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series)

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even NameNode is missing. Please help me.

NameNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

DataNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0

UPDATE

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager
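
The repeated "Operation not permitted" and "Permission denied" messages above all point at /usr/local/hadoop/logs not being writable by the user coda. A minimal check, assuming coda is the user that should run the daemons:

# The chown/mv failures above suggest the log directory is owned by someone else
ls -ld /usr/local/hadoop/logs

# Hand it back to the running user (assumption: single-user setup under user coda)
sudo chown -R coda:coda /usr/local/hadoop/logs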

6 Answers:

Answer 0 (score: 5)


FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

This error is probably due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.


FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

This error is probably due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or the folder does not exist. To fix it, follow one of the options below:

Option I:

If you do not have the folder /usr/local/hadoop_store/hdfs, create it and give it permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Change hadoopuser and hadoopgroup to your Hadoop username and Hadoop group name, respectively. Now try starting the Hadoop processes. If the problem persists, try Option II.

Option II:

Remove the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Now start the Hadoop processes. It should work.
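
For reference, on a healthy pseudo-distributed setup jps should list all five daemons, roughly like this (the PIDs will differ on your machine):

>jps
4866 NameNode
5012 DataNode
5234 SecondaryNameNode
5400 ResourceManager
5570 NodeManager
5721 Jps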


NOTE: If the error persists, post the new logs.

UPDATE

If you have not created the hadoop user and group yet, create them as follows:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

Now change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:

sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store

Switch your user to hadoop:

su - hadoop

Enter your hadoop user's password. Your terminal prompt should now look like:

hadoop@ubuntu:$

Now, type:

$HADOOP_HOME/bin/start-all.sh

or

sh /usr/local/hadoop/bin/start-all.sh
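
Note that start-all.sh is deprecated in Hadoop 2.x, as the warning in the terminal output above says; the equivalent explicit commands are:

$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh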

Answer 1 (score: 3)

I faced a similar issue: jps was not showing the DataNode.

Removing the contents of the hdfs folder and changing the folder permissions worked for me:

sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs    
hadoop namenode -format
start-all.sh
jps
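
Keep in mind that hadoop namenode -format erases all HDFS metadata, so any data already in the cluster is lost; on a fresh single-node install that is usually acceptable. To confirm the storage directories came back with usable permissions (paths taken from the answers above):

ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode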

Answer 2 (score: 0)

One thing to remember when setting up the SSH permissions:

ssh-keygen -t rsa -P ""

The above command should be entered on the namenode only. The generated public key should then be added to all the datanodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub

After that, the SSH permissions are set, and no password is needed when starting dfs.
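
ssh-copy-id needs a target host; a quick end-to-end check, with user@datanode1 as a hypothetical datanode:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@datanode1
ssh user@datanode1 exit   # should log in without asking for a password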

Answer 3 (score: 0)

Faced the same problem: the NameNode service was not shown in the jps output. Solution: it was a permission problem with the directory /usr/local/hadoop_store/hdfs. Just change the permissions, format the namenode, and restart Hadoop:

$ sudo chmod -R 755 /usr/local/hadoop_store/hdfs

$ hadoop namenode -format

$ start-all.sh

$ jps
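
Once the daemons are up, a quick way to confirm that the DataNode actually registered with the NameNode is the standard report command:

hdfs dfsadmin -report   # lists live datanodes and their capacity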

Answer 4 (score: 0)

The solution is to first stop your namenode. Then go to your /usr/local/hadoop directory and format the namenode:

bin/hdfs namenode -format

Then delete the hdfs and tmp directories from your home directory and recreate them:

mkdir ~/tmp
mkdir ~/hdfs
chmod 750 ~/hdfs

Go to the hadoop directory and start Hadoop:

sbin/start-dfs.sh

It will show the DataNode.
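
This only helps if ~/hdfs and ~/tmp are really the paths your site files point at; a quick check (standard property names, config location assumed from a default 2.x layout):

grep -A1 'hadoop.tmp.dir' /usr/local/hadoop/etc/hadoop/core-site.xml
grep -A1 'dir' /usr/local/hadoop/etc/hadoop/hdfs-site.xml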

Answer 5 (score: 0)

For this, you need to give permissions to your hdfs folder, then run the following commands:

  1. Create a group: sudo addgroup hadoop
  2. Add your user to it: sudo usermod -a -G hadoop "ur_user" (you can check the current user with the who command)
  3. Now change the owner of hadoop_store directly: sudo chown -R "ur_user":"ur_group" /usr/local/hadoop_store
  4. Then format the namenode again: hdfs namenode -format

Then start all the services and check the result: type jps, and the daemons should now be running.