No NameNode, DataNode, or Secondary NameNode to stop

Time: 2015-11-18 05:32:44

Tags: hadoop mapreduce hdfs

I installed Hadoop on my Ubuntu 12.04 machine by following the steps in the link below.

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php

Everything installed successfully, but when I run start-all.sh, only some of the services end up running.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager

Stopping the services by running the script stop-all.sh:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password: 
localhost: no namenode to stop
hduse@localhost's password: 
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop

My configuration files

  1. Edit the bashrc file

    vi ~/.bashrc
    
    #HADOOP VARIABLES START
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_INSTALL=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export PATH=$PATH:$HADOOP_INSTALL/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
    #HADOOP VARIABLES END
    
  2. hdfs-site.xml

    vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
    
    <configuration>
     <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
      </description>
     </property>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
     </property>
     <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
     </property>
    </configuration>
    
  3. hadoop-env.sh

    vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
    
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
    
    for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
      if [ "$HADOOP_CLASSPATH" ]; then
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      else
        export HADOOP_CLASSPATH=$f
      fi
    done
    
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
    export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
    
    export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
    
    export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
    export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
    
    # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
    export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
    export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
    
    export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
    export HADOOP_PID_DIR=${HADOOP_PID_DIR}
    export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
    
    # A string representing this instance of hadoop. $USER by default.
    export HADOOP_IDENT_STRING=$USER
    
  4. core-site.xml

    vi /usr/local/hadoop/etc/hadoop/core-site.xml
    <configuration>
     <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>
    
     <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system.  A URI whose
      scheme and authority determine the FileSystem implementation.  The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class.  The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
     </property>
    </configuration>
    
  5. mapred-site.xml

    vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
    <configuration>
     <property>
      <name>mapred.job.tracker</name>
      <value>localhost:54311</value>
      <description>The host and port that the MapReduce job tracker runs
      at.  If "local", then jobs are run in-process as a single map
      and reduce task.
      </description>
     </property>
    </configuration>
    

    $ javac -version

    javac 1.8.0_66
    

    $ java -version

    java version "1.8.0_66"  
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
    
  6. I am new to Hadoop and cannot track down the problem. Where can I find the log files of the JobTracker and NameNode so I can trace the services?

4 Answers:

Answer 0: (Score: 3)

If it is not an ssh problem, try the following:

  1. Delete everything in the temporary directory with rm -Rf /app/hadoop/tmp and format the namenode with bin/hadoop namenode -format. Start the namenode and datanode with bin/start-dfs.sh. Type jps on the command line to check whether the nodes are running.

  2. Check whether the hduser has permission to write to the hadoop_store/hdfs/namenode and datanode directories with ls -ld <directory>.

  3. You can change the permissions with sudo chmod +777 /hadoop_store/hdfs/namenode/ (a combined sketch of these steps follows below).
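
Put together, a rough shell sketch of these steps (a sketch only: the paths for hadoop.tmp.dir and the HDFS storage directories are taken from the question's config files, and although the answer mentions bin/start-dfs.sh, on Hadoop 2.x installs the script usually lives under sbin, so adjust to your layout):

    # run as the user that starts the Hadoop daemons (hduse in this question)
    sudo rm -Rf /app/hadoop/tmp                        # step 1: clear the temporary directory
    /usr/local/hadoop/bin/hadoop namenode -format      # step 1: re-format the namenode (erases HDFS metadata)
    /usr/local/hadoop/sbin/start-dfs.sh                # step 1: start the namenode and datanode
    jps                                                # step 1: check which daemons are running

    # steps 2-3: inspect and, if needed, open up permissions on the HDFS directories
    ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode
    sudo chmod +777 /usr/local/hadoop_store/hdfs/namenode/
    sudo chmod +777 /usr/local/hadoop_store/hdfs/datanode/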

Answer 1: (Score: 1)

If you look closely at the output of the start-all.sh command, you can easily see the log file paths. Each service writes to its own log file once it tries to start:

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out
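
To see why a daemon died, look at the corresponding files in /usr/local/hadoop/logs. For example (a sketch; the .log file names are assumed to match the .out names shown above):

    cd /usr/local/hadoop/logs
    ls -lt | head        # most recently written log files first
    tail -n 100 hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.log
    tail -n 100 hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.log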

Answer 2: (Score: 0)

You have to set up passwordless ssh authentication. The hduse user should be able to ssh into localhost without entering a password.
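
A minimal sketch of setting that up (run as the hduse user; assumes the openssh client and server are installed):

    # generate an RSA key pair with an empty passphrase (skip if one already exists)
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # authorize the key for logins to this machine
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # verify: both should log in without prompting for a password
    ssh localhost exit
    ssh 0.0.0.0 exit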

Answer 3: (Score: 0)

The namenode is not showing

After running the $ jps command, the namenode does not show up, although the datanode was created. To solve this, we can follow the steps given below.

This works for a Hadoop 2.7.6 setup.

Step 1: (stop Hadoop)

/usr/local/hadoop/sbin$ stop-dfs.sh

Step 2: (delete the tmp folder)

/usr/local/hadoop/sbin$ sudo rm -rf /app/hadoop/tmp/

Step 3: (create a new tmp directory)

/usr/local/hadoop/sbin$ sudo mkdir -p /app/hadoop/tmp

/usr/local/hadoop/sbin$ sudo chown hduser:hadoop /app/hadoop/tmp

/usr/local/hadoop/sbin$ chmod 750 /app/hadoop/tmp

Step 4: (format the namenode)

/usr/local/hadoop/sbin$ hdfs namenode -format

Step 5: (start dfs)

/usr/local/hadoop/sbin$ start-all.sh

/usr/local/hadoop/sbin$ jps

The namenode is now showing