How to start the datanode in Hadoop 0.23.9

Date: 2013-12-08 19:17:30

Tags: hadoop mapreduce hdfs yarn

I can't seem to get Hadoop to start properly. I am using Hadoop 0.23.9:

[msknapp@localhost sbin]$ hadoop namenode -format
...
[msknapp@localhost sbin]$ ./start-dfs.sh 
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-secondarynamenode-localhost.localdomain.out
[msknapp@localhost sbin]$ ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-nodemanager-localhost.localdomain.out
[msknapp@localhost sbin]$ cd /var/local/stock/data
[msknapp@localhost data]$ hadoop fs -ls /
[msknapp@localhost data]$ hadoop fs -mkdir /stock
[msknapp@localhost data]$ ls
companies.csv  raw  slf_series.txt
[msknapp@localhost data]$ hadoop fs -put companies.csv /stock/companies.csv 
13/12/08 11:10:40 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1180)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:414)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)

    at org.apache.hadoop.ipc.Client.call(Client.java:1094)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1130)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1006)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:458)
put: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
13/12/08 11:10:40 ERROR hdfs.DFSClient: Failed to close file /stock/companies.csv._COPYING_
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1180)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:414)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)

    at org.apache.hadoop.ipc.Client.call(Client.java:1094)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1130)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1006)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:458)

This is my core-site.xml:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
</property>

And my hdfs-site.xml:

<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>

And mapred-site.xml:

    <property>
            <name>mapred.job.tracker</name>
            <value>localhost:8021</value>
    </property>

I have gone through all the documentation I have and I can't figure out how to start Hadoop correctly. I can't find any documentation online for hadoop-0.23.9. My Hadoop book was written for 0.22. The online documentation is for 2.1.1, which, incidentally, I can't get to work either.

Can someone please tell me how to get Hadoop to start correctly?

3 Answers:

Answer 0 (score: 2)

Specify a port for fs.default.name, like:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
</property>

After that, create a tmp directory for HDFS:

sudo mkdir -p /app/hadoop/tmp
sudo chown $USER /app/hadoop/tmp

And add to core-site.xml:

<property>
   <name>hadoop.tmp.dir</name>
   <value>/app/hadoop/tmp</value>
   <description>A base for other temporary directories.</description>
</property>

Make sure you restart the cluster:

$HADOOP_HOME/bin/stop-all.sh
$HADOOP_HOME/bin/start-all.sh
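
After the restart, it is worth checking which daemons actually came up before retrying the upload. A quick sanity check (assuming jps from the JDK is on your PATH):

jps                       # should list NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
hadoop dfsadmin -report   # the live datanode count should now be 1 instead of 0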

Answer 1 (score: 1)

Try removing all the data while Hadoop is stopped:

$HADOOP_HOME/bin/hadoop datanode -format

Or manually delete the contents of:

/app/hadoop/tmp/dfs/data/

Then start Hadoop again:

$HADOOP_HOME/bin/start-all.sh
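
If the datanode still does not stay up, its log usually says why (a common cause is a namespace ID mismatch after reformatting the namenode). The exact file name follows the pattern shown during startup; on this machine it would be something like:

# inspect the datanode log for the reason it refused to start
tail -n 50 /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-datanode-localhost.localdomain.log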

Answer 2 (score: 1)

The key problem in your configuration is the following:

java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Make sure your HDFS-specific configuration has at least the following minimal values:

hdfs-site.xml: As referenced in the XML below, the /tmp/hdfs23/namenode and /tmp/hdfs23/datanode folders must already exist (see the directory-creation sketch after the XML). You can configure any other folder as the HDFS root and then place the namenode and datanode folders inside it.

<configuration>
        <property>
           <name>dfs.replication</name>
           <value>1</value>
        </property>
        <property>
           <name>dfs.namenode.name.dir</name>
           <value>file:///tmp/hdfs23/namenode</value>
        </property>
        <property>
            <name>fs.checkpoint.dir</name>
            <value>file:///tmp/hdfs23/secnamenode</value>
        </property>
        <property>
             <name>fs.checkpoint.edits.dir</name>
             <value>file:///tmp/hdfs23/secnamenode</value>
        </property>
        <property>
           <name>dfs.datanode.data.dir</name>
           <value>file:///tmp/hdfs23/datanode</value>
        </property>
</configuration>
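
For example, the directories referenced above could be created up front (a minimal sketch; adjust the paths if you chose a different HDFS root):

mkdir -p /tmp/hdfs23/namenode      # dfs.namenode.name.dir
mkdir -p /tmp/hdfs23/datanode      # dfs.datanode.data.dir
mkdir -p /tmp/hdfs23/secnamenode   # fs.checkpoint.dir and fs.checkpoint.edits.dir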

core-site.xml:

<configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
     <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hdfs</value>
     </property>
</configuration>

Then you need to format your namenode, as you did before:

$ hadoop namenode -format

After that you can start HDFS as follows:

[Hadoop023_ROOT]/sbin/start-dfs.sh
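
Once start-dfs.sh has run, one way to confirm the datanode registered before retrying the upload (the file and paths below are just the ones from the question):

hadoop dfsadmin -report                             # report should now show 1 live datanode
hadoop fs -mkdir /stock
hadoop fs -put companies.csv /stock/companies.csv   # should no longer fail with "0 datanode(s) running"
hadoop fs -ls /stock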