Cannot write to HDFS from Java: File ... could only be written to 0 of the 1 minReplication nodes

Date: 2018-11-07 03:03:04

Tags: java hadoop hdfs

I am trying to write to HDFS from Java.

HDFS itself works fine; I can upload a file manually:

sudo -u hdfs hdfs dfs -put file /tmp

According to the HDFS web UI (http://nn_ip:9870), all datanodes are healthy.

But when I try the same thing from Java, I get an exception:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File ... could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2103)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:287)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:865)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1337)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)
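In this message, "3 node(s) are excluded" usually means the client failed to open a data-transfer connection to the datanodes, even though the namenode reports them as live. A minimal sketch for checking TCP reachability from the client machine; the hostnames `dn1`/`dn2`/`dn3` and the Hadoop 3 default data-transfer port 9866 (`dfs.datanode.address`) are placeholders for this cluster's actual values:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Covers connection refused, timeouts, and unresolvable hostnames.
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder datanode hostnames; substitute the real ones.
        for (String dn : new String[] {"dn1", "dn2", "dn3"}) {
            System.out.println(dn + ":9866 reachable: " + reachable(dn, 9866, 2000));
        }
    }
}
```

If any datanode's data-transfer port is unreachable from the machine running the Java client, the namenode will keep excluding that node from the write pipeline.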

The Java code does create the HDFS path and the file, but the file is empty:

    import java.net.URI;

    import org.apache.commons.io.IOUtils;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            String hdfsuri = "hdfs://namenodeserver:8020";

            String path = "/user/hdfs/example1/hdfs/";
            String fileName = "hello.csv";
            String fileContent = "hello;world";

            // ====== Init HDFS File System Object
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", hdfsuri);
            conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
            conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());

            System.setProperty("HADOOP_USER_NAME", "hdfs");
            System.setProperty("hadoop.home.dir", "/");

            FileSystem fs = FileSystem.get(URI.create(hdfsuri), conf);

            // Create a path
            Path hdfswritepath = new Path(path + fileName);
            // Init output stream
            FSDataOutputStream outputStream = fs.create(hdfswritepath);
            // Classical output stream usage
            outputStream.writeBytes(fileContent);
            outputStream.close();

            // Create a path
            Path hdfsreadpath = new Path(path + fileName);
            // Init input stream
            FSDataInputStream inputStream = fs.open(hdfsreadpath);
            // Classical input stream usage
            String out = IOUtils.toString(inputStream, "UTF-8");
            inputStream.close();
            fs.close();
        }
    }
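One pattern that matches these symptoms (shell `put` from inside the cluster works, remote Java client fails) is the namenode handing back datanode addresses the client cannot reach, e.g. internal IPs on a cloud or containerized cluster. A client-side configuration sketch, under the assumption that the datanodes' hostnames are resolvable from the client machine:

```java
// Client-side only: ask the namenode to return datanode hostnames
// instead of IP addresses when building the write pipeline.
// Assumption: the datanode hostnames resolve from this client machine.
conf.set("dfs.client.use.datanode.hostname", "true");
```

This is a configuration fragment for the `Configuration` object above, not a guaranteed fix; whether it applies depends on this cluster's network layout.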

How can I fix this?

I have seen questions like this one, where people suggest formatting the namenode. But why would formatting the namenode help? What problem does that step solve? I have reinstalled HDFS several times, and the problem persists.

Thanks for your help.

0 answers:

No answers yet.