File is created in HDFS, but no content can be written to it

Asked: 2019-07-16 03:51:09

Tags: hadoop hdfs hadoop2 webhdfs

  • I installed HDP 3.0.1 in VMware.
  • The DataNode and NameNode are running.
  • Uploading files to HDFS from the Ambari UI or the terminal works fine.

When I try to write data:

    // Point the client at the NameNode's RPC endpoint
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");

    FileSystem fs = FileSystem.get(conf);
    // Create the target file in HDFS and copy a local file into it;
    // copyBytes(in, out, conf) closes both streams when it finishes
    OutputStream os = fs.create(new Path("hdfs://172.16.68.131:8020/tmp/write.txt"));
    InputStream is = new BufferedInputStream(new FileInputStream("/home/vq/hadoop/test.txt"));
    IOUtils.copyBytes(is, os, conf);

Log:

19/07/15 22:40:31 WARN hdfs.DataStreamer: Abandoning BP-1419118625-172.17.0.2-1543512323726:blk_1073760904_20134
19/07/15 22:40:31 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]
19/07/15 22:40:32 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/write.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

The file is created in HDFS, but it is empty.

The same thing happens when reading data:

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream inputStream = fs.open(new Path("hdfs://172.16.68.131:8020/tmp/ui.txt"));
    // available() succeeds: it is based on the file length the NameNode reports
    System.out.println(inputStream.available());
    // but actually reading the bytes requires contacting a DataNode, which times out
    byte[] bs = new byte[inputStream.available()];

I can read the number of available bytes, but I cannot read the file's contents.

Log:

19/07/15 22:33:33 WARN hdfs.DFSClient: Failed to connect to /172.18.0.2:50010 for file /tmp/ui.txt for block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.18.0.2:50010]
19/07/15 22:33:33 WARN hdfs.DFSClient: No live nodes contain block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 after checking nodes = [DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]], ignoredNodes = null
19/07/15 22:33:33 INFO hdfs.DFSClient: Could not obtain BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 from any node:  No live nodes contain current block Block locations: DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK] Dead nodes:  DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]. Will get new block locations from namenode and retry...
19/07/15 22:33:33 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 6717.521796266041 msec

I have tried many of the answers I found online, but without success.
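For example, one commonly suggested client-side setting looks like the sketch below. Whether it applies to my setup is an assumption on my part: the logs show the client trying to reach the DataNode at 172.18.0.2, while the NameNode is at 172.16.68.131, which suggests the DataNode address may be internal to the VM/container:

```java
// Sketch only: dfs.client.use.datanode.hostname asks the HDFS client to
// connect to DataNodes by hostname instead of the IP address the NameNode
// reports. This can matter when the reported IP (here 172.18.0.2) is not
// routable from the client machine. The hostname must then resolve on the
// client, e.g. via an /etc/hosts entry (an assumption about this setup).
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
conf.setBoolean("dfs.client.use.datanode.hostname", true);
```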

0 Answers:

No answers yet.