I created a new CSV file in HDFS from Java, and I am trying to append data to that CSV file, but the append fails with this error:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.1.25:9866,DS-b6d8a63b-357d-4d39-9f27-1ab76b8b6ccc,DISK]], original=[Dat
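This error comes from the HDFS client's write-pipeline recovery: when a datanode in the pipeline fails during an append, the client tries to replace it with another live datanode and aborts the write if none is available, so it is typical of clusters with only one or two datanodes. As a quick check that enough datanodes are live, something like the following should work (a minimal sketch, assuming the same namenode address as in the question; DistributedFileSystem.getDataNodeStats is the standard HDFS client API for this):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class LiveDatanodeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.25:9000"), conf);
        // getDataNodeStats is specific to the HDFS FileSystem implementation
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        DatanodeInfo[] live = dfs.getDataNodeStats(HdfsConstants.DatanodeReportType.LIVE);
        System.out.println("Live datanodes: " + live.length);
        for (DatanodeInfo dn : live) {
            System.out.println("  " + dn.getXferAddr());
        }
        fs.close();
    }
}

If only one datanode is live, appends that trigger pipeline recovery cannot find a replacement node, which matches the error above.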
The CSV file is created and uploaded to HDFS from the Java code below, but I am not able to append data to that existing file. However, a CSV that was newly uploaded through the UI could be appended to with the same Java code. Please help me resolve this issue.

Below is the code:
// Imports used below (IgniteTable, _select, and the ignite field are the
// poster's own helper around Ignite SQL):
// import java.io.*;
// import java.net.URI;
// import java.util.List;
// import org.apache.hadoop.conf.Configuration;
// import org.apache.hadoop.fs.*;
// import org.apache.hadoop.io.IOUtils;
// import com.opencsv.CSVWriter;

private void appendFileToFile(String fileName) throws Exception {
    long testTime1 = System.currentTimeMillis();
    String hdfsHostDetails = "hdfs://192.168.1.25:9000";
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.support.append", true);
    FileSystem fs = FileSystem.get(URI.create(hdfsHostDetails), conf);
    String targetfilepath = hdfsHostDetails + "/" + fileName;

    int count = 0;
    int limit = 10000;
    while (count < 2) {
        // fetch the next window of rows from Ignite
        int offset = count * limit;
        IgniteTable table = new IgniteTable(ignite, "nok_customer_demand");
        String query = "SELECT * FROM nok_customer_demand OFFSET " + offset + " ROWS FETCH NEXT " + limit + " ROWS ONLY";
        List<List<?>> lists = table._select(query);
        System.out.println(":::::::::::::::::: Data Ready for iteration ::::::::::::::" + count);

        // write this batch to a fresh local CSV file on each iteration
        File file = new File("/home/tejatest1" + count + ".csv");
        FileWriter outputfile = new FileWriter(file);
        CSVWriter writer = new CSVWriter(outputfile);
        for (List<?> eachlist : lists) {
            String[] eachRowAsString = new String[eachlist.size()];
            int i = 0;
            for (Object eachcol : eachlist) {
                eachRowAsString[i] = String.valueOf(eachcol);
                i++;
            }
            writer.writeNext(eachRowAsString);
        }
        // flush and close the CSV before reading it back
        writer.close();
        outputfile.close();

        // on each iteration create the HDFS file or append the data to it
        InputStream in = new BufferedInputStream(new FileInputStream(file));
        FSDataOutputStream out;
        if (!fs.exists(new Path(targetfilepath))) {
            out = fs.create(new Path(targetfilepath));
        } else {
            out = fs.append(new Path(targetfilepath));
        }
        IOUtils.copyBytes(in, out, 4096, false);
        out.close();
        in.close();
        lists.clear();
        file.delete();
        count++;
    }

    long testTime2 = System.currentTimeMillis();
    System.out.println("-----total time taken for data fetch for all records in table using limit and offset:-------" + (testTime2 - testTime1) + " ms");
    fs.close();
}
Answer 0 (score: 0)
I resolved this issue with the following configuration:
Configuration conf = new Configuration();
conf.set("fs.defaultFS",hdfsHostDetails);
conf.setInt("dfs.replication",1);
conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable",false);
conf.setBoolean("dfs.support.append", true);
FileSystem fs = FileSystem.get(URI.create(hdfsHostDetails), conf);
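Setting dfs.client.block.write.replace-datanode-on-failure.enable to false tells the client to keep writing with the datanodes it already has instead of failing when no replacement can be found, which fits a one- or two-datanode cluster (together with dfs.replication=1). Note that on Hadoop 2.x and later, append is always supported, so the dfs.support.append flag should have no effect. On a larger cluster it is usually safer to leave replacement enabled and only relax it on failure, roughly like this (a sketch using the standard HDFS client properties; adjust to your cluster):

Configuration conf = new Configuration();
conf.set("fs.defaultFS", hdfsHostDetails);
// keep pipeline recovery enabled, but if no replacement datanode can be
// found, continue the write with the remaining nodes instead of failing
conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
FileSystem fs = FileSystem.get(URI.create(hdfsHostDetails), conf);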