I am running into a strange problem when writing small files to Hadoop. Below is a sample program:
public void writeFile(Configuration conf, String message, String filename) throws Exception {
    FSDataOutputStream fsDataOutputStream = null;
    DistributedFileSystem fs = null;
    try {
        fs = (DistributedFileSystem) FileSystem.get(URI.create(properties.getHadoop().getRawLocation()), conf);
        Path hdfswritepath = new Path(properties.getHadoop().getRawLocation() + "/" + filename + ".json");
        fsDataOutputStream = fs.create(hdfswritepath);
        fsDataOutputStream.write(message.getBytes());
        // hsync() must be called before close(); a closed stream rejects it
        fsDataOutputStream.hsync();
        fsDataOutputStream.close();
    } catch (IllegalArgumentException | IOException e) {
        System.out.println("Got Exception");
        e.printStackTrace();
        throw e;
    } finally {
        if (fs != null) { // guard against an NPE when FileSystem.get itself failed
            fs.close();
        }
        System.out.println("clean up done");
    }
}
The above code creates an empty file at the Hadoop location. Here is what I have tried.
Only a 0-byte file gets created.
I get the following exception for it:
09:12:02,129 INFO [org.apache.hadoop.hdfs.DFSClient] (Thread-118) Exception in createBlockOutputStream: java.net.ConnectException: Connection timed out: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
Answer 0 (score: 1)
I was able to resolve this issue by setting:
conf.set("dfs.client.use.datanode.hostname", "true");
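
For context: the ConnectException is thrown from createBlockOutputStream, meaning the client can reach the NameNode (the file entry gets created, which is why a 0-byte file appears) but cannot open a socket to a DataNode to write the actual block. By default the NameNode hands back DataNode IP addresses, which may not be routable from a client outside the cluster network; dfs.client.use.datanode.hostname makes the client connect by hostname instead. A minimal sketch of where the property has to go, i.e. on the Configuration before FileSystem.get is called (the NameNode URI, path, and payload below are placeholders, not from the original post):

import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to DataNodes by hostname instead of the (possibly
        // unroutable) IP addresses the NameNode reports.
        conf.set("dfs.client.use.datanode.hostname", "true");

        // Placeholder URI; replace with your cluster's NameNode address.
        URI uri = URI.create("hdfs://namenode:8020/data/raw");
        try (FileSystem fs = FileSystem.get(uri, conf);
             FSDataOutputStream out = fs.create(new Path("/data/raw/sample.json"))) {
            out.write("{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8));
            out.hsync(); // persist to the DataNodes before close
        }
    }
}

If the DataNode hostnames are still not resolvable from the client machine, they typically also need entries in the client's /etc/hosts (or DNS) pointing at reachable addresses; the property only changes which name the client dials, not how that name is resolved.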