Cloudera: exception when uploading a file to HDFS

Date: 2015-10-18 15:12:31

Tags: java apache hadoop hdfs cloudera

I am using Mac OS X Yosemite and the VM cloudera-quickstart-vm-5.4.2-0-virtualbox. When I run `hdfs dfs -put testfile.txt` to put a text file into HDFS, I get a DataStreamer exception; the output is identical every time I retry the command. The main problem seems to be that the number of datanodes available is 0. I am copying the full error message below and would like to know how to solve this.

    WARN hdfs.DFSClient: DataStreamer Exception
    org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/cloudera/testfile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3286)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:667)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1544)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
    put: File /user/cloudera/testfile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

1 Answer:

Answer 0 (score: 1)
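
The key symptom is `There are 0 datanode(s) running`: no DataNode is registered with the NameNode, so HDFS has nowhere to place the block. Before rebuilding, you can confirm that the DataNode daemon really is down (a quick check; the service name assumes the standard CDH packaging used on the quickstart VM):

    # The quickstart VM manages HDFS daemons through init.d scripts named hadoop-*.
    sudo service hadoop-hdfs-datanode status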

**1.** Stop the Hadoop services as described in Stopping Services:

    for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
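
To confirm everything is down before cleaning up, you can list any Hadoop daemons that are still running (a sanity check using plain Unix tooling):

    # No NameNode or DataNode java processes should remain once the services stop.
    ps -ef | grep -iE '(namenode|datanode)' | grep -v grep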

**2.** Remove all files from /var/lib/hadoop-hdfs/cache/:

    sudo rm -r /var/lib/hadoop-hdfs/cache/

**3.** Format the NameNode:

    sudo -u hdfs hdfs namenode -format

Note: answer the prompt with a capital Y.

Note: data is lost during the format process.
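
If you want to skip the interactive prompt, `hdfs namenode -format` in Hadoop 2.x also accepts a `-force` flag (use with care; it still erases all HDFS metadata):

    # -force re-formats without asking for confirmation.
    sudo -u hdfs hdfs namenode -format -force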

**4.** Start the Hadoop services:

    for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
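
Once the services are back up, verify that a DataNode has registered with the NameNode; the original error reported `0 datanode(s)`, so the report below should now show at least one live node:

    # Prints cluster capacity and the list of live datanodes.
    sudo -u hdfs hdfs dfsadmin -report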

**5.** Make sure your system is not running low on disk space. Warnings about insufficient disk space in the log files also point to this problem.
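
For example, a quick look at free space on the filesystem backing the HDFS data directories (plain Linux tooling, nothing Hadoop-specific):

    # The quickstart VM keeps HDFS data under /var/lib/hadoop-hdfs.
    df -h /var/lib/hadoop-hdfs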

**6.** Create the /tmp directory:

Remove the old /tmp if it exists:
    $ sudo -u hdfs hadoop fs -rm -r /tmp

Create a new /tmp directory and set permissions:

    $ sudo -u hdfs hadoop fs -mkdir /tmp 
    $ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
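
You can verify the permissions took effect; `/tmp` should be listed with mode `drwxrwxrwt` (the trailing `t` is the sticky bit set by 1777):

    $ sudo -u hdfs hadoop fs -ls /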

**7.** Create the user directory:

    $ sudo -u hdfs hadoop fs -mkdir /user/<user>
    $ sudo -u hdfs hadoop fs -chown <user> /user/<user>

where `<user>` is the Linux username.
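
After these steps the original upload should go through. A minimal end-to-end check, reusing the file name from the question (a relative path resolves under /user/cloudera):

    $ hdfs dfs -put testfile.txt
    $ hdfs dfs -ls testfile.txt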