hadoop namenode, datanode, secondarynamenode not starting

Date: 2015-02-11 12:48:53

Tags: hadoop

I just downloaded the hadoop-0.20 tar and extracted it. I set JAVA_HOME and HADOOP_HOME, and modified core-site.xml, hdfs-site.xml, and mapred-site.xml.

I started the services. Running jps shows:

  jps

  Jps
  JobTracker
  TaskTracker

I checked the logs. They say:

 2015-02-11 18:07:52,278 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

 /************************************************************
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.20.0
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
 ************************************************************/
  2015-02-11 18:07:52,341 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:175)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:955)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)

  2015-02-11 18:07:52,346 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
  ************************************************************/

What am I doing wrong?

My conf files are as follows:

core-site.xml

 <configuration>
  <property>
   <name>fs.defaultFS</name>
   <value>hdfs://localhost:8020</value>
  </property>
 </configuration>

hdfs-site.xml

 <configuration>
  <property>
   <name>dfs.replication</name>
   <value>1</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed. -->
  <property>
   <name>dfs.safemode.extension</name>
   <value>0</value>
  </property>
  <property>
   <name>dfs.safemode.min.datanodes</name>
   <value>1</value>
  </property>
  <property>
   <name>hadoop.tmp.dir</name>
   <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
   <name>dfs.namenode.checkpoint.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
 </configuration>

mapred-site.xml

  <configuration>
   <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
   </property>
  </configuration>

Any ideas?

This is what I see on the console when I run start-dfs.sh:
 localhost: starting secondarynamenode, logging to /root/hadoop/hadoop-0.20.0/bin/../logs/hadoop-root-secondarynamenode-hostname.out
 localhost: Exception in thread "main" java.lang.NullPointerException
 localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)

2 Answers:

Answer 0 (score: 2):

I guess you have not set up your Hadoop cluster correctly. Please follow these steps:

Step 1: Start by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change the Hadoop home to your own path):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
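
After saving the file, reload it and confirm the variables are visible in your shell (a quick sanity check, assuming the paths above):

 source $HOME/.bashrc
 echo $HADOOP_HOME    # should print /usr/local/hadoop
 echo $JAVA_HOME      # should print /usr/lib/jvm/java-6-sun
 hadoop version       # should print the Hadoop version banner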

Step 2: Edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
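
If you are not sure where your JDK actually lives, one way to locate it (assuming java is on your PATH and GNU readlink is available):

 readlink -f $(which java)
 # prints e.g. /usr/lib/jvm/java-6-sun/jre/bin/java;
 # JAVA_HOME is then the directory above jre/bin, here /usr/lib/jvm/java-6-sun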

Step 3: Now create a directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
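
You can verify the ownership and mode before moving on:

 ls -ld /app/hadoop/tmp
 # expect something like: drwxr-x--- ... hduser hadoop ... /app/hadoop/tmp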

Step 4: Modify core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
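
Note that in 0.20 most HDFS storage paths default to subdirectories of hadoop.tmp.dir, so with the setting above the NameNode metadata and DataNode blocks will live under /app/hadoop/tmp unless you override dfs.name.dir and dfs.data.dir. After you format HDFS in the last step, you should see something like:

 ls /app/hadoop/tmp/dfs
 # name   (NameNode metadata, created by 'hadoop namenode -format')
 # data   (DataNode blocks, created when the DataNode first starts)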

Step 5: Modify mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>

Step 6: Modify hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Finally, format your HDFS (you need to do this the first time you set up a Hadoop cluster):

 $ /usr/local/hadoop/bin/hadoop namenode -format
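
Once the format succeeds, start the daemons and check that everything came up (a minimal check for the single-node setup above; jps also lists itself, and the order varies):

 /usr/local/hadoop/bin/start-all.sh
 jps
 # expected: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker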

Hope this helps.

Answer 1 (score: 1):

I haven't used version 0.20.0, but are you sure the key in core-site.xml should be fs.defaultFS? In core-default.xml it seems to be named fs.default.name.
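
If that is the cause, the fix is simply to use the old key. A sketch of the corrected core-site.xml, assuming the same host and port as in the question:

 <configuration>
  <property>
   <!-- 0.20.x reads fs.default.name; the newer fs.defaultFS key is
        ignored by this version, which leaves the NameNode address
        unset and would explain the NullPointerException in
        NetUtils.createSocketAddr. -->
   <name>fs.default.name</name>
   <value>hdfs://localhost:8020</value>
  </property>
 </configuration>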