I have a file containing document indexes and publication dates:
0,2012-05-26T00:00:00Z
1,2012-05-26T00:00:00Z
5,2010-06-26T00:00:00Z
10,2014-05-26T00:00:00Z
and a second text file containing terms, their frequencies, and the index of the doc each belongs to:
was,15,1
kill,10,1
tunisia,5,5
peace,1,0
I have this method for matching the two files, so that I can produce a third file of this form:
was,15,2012-05-26T00:00:00Z
kill,10,2012-05-26T00:00:00Z
tunisia,5,2010-06-26T00:00:00Z
peace,1,2012-05-26T00:00:00Z
I tested the method on small test files and it works fine, but my real file is 1 TB, so the program has been running for 4 days and is still going. Could you help me optimize it, or suggest another approach?
public void matchingDateTerme(String pathToDateFich, String pathTotermeFich) {
    try {
        BufferedReader inTerme = new BufferedReader(new FileReader(pathTotermeFich));
        BufferedReader inDate = new BufferedReader(new FileReader(pathToDateFich));
        String lineTerme, lineDate;
        String idFich, idFichDate, dateterm, key;
        Hashtable<String, String> table = new Hashtable<String, String>();
        String[] tokens, dates;
        Enumeration ID = null;
        File tempFile = new File(pathTotermeFich.replace("fichierTermes", "fichierTermes_final"));
        FileWriter fileWriter = new FileWriter(tempFile);
        BufferedWriter writer = new BufferedWriter(fileWriter);
        // read the date file
        while ((lineDate = inDate.readLine()) != null) {
            dates = lineDate.split(", ");
            idFichDate = dates[0].toLowerCase();
            dateterm = dates[1];
            table.put(idFichDate, dateterm);
        }
        while ((lineTerme = inTerme.readLine()) != null) {
            tokens = lineTerme.split(", ");
            idFich = tokens[2].toLowerCase();
            String terme = tokens[0];
            String freq = tokens[1];
            // scan the hashtable keys
            ID = table.keys();
            while (ID.hasMoreElements()) {
                key = (String) ID.nextElement();
                if (key.equalsIgnoreCase(idFich)) {
                    String line = terme + ", " + freq + ", " + table.get(key);
                    System.out.println("Line: " + line);
                    writer.write(line);
                    writer.newLine();
                }
            }
        }
        writer.close();
        inTerme.close();
        inDate.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Answer 0 (score: 1)
You are not using the Hashtable for what it is: an object that maps keys to values. Iterating over the keys is pointless and expensive; just use the get method:
String date = table.get(idFich);
if (date != null) {
    String line = terme + ", " + freq + ", " + date;
    System.out.println("Line: " + line);
    writer.write(line);
    writer.newLine();
}
As VGR said in the comments, using an unsynchronized HashMap would be faster. More information here.
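Putting that advice together, here is a minimal sketch of the per-line lookup as a small helper (the class and method names are illustrative, and it splits on a plain comma as in the sample data, whereas the question's code splits on ", "):

```java
import java.util.HashMap;
import java.util.Map;

public class TermDateJoin {
    // Join one term line ("term,freq,docId") against a docId -> date map.
    // Returns null when the doc id has no known date.
    static String joinLine(String termLine, Map<String, String> docDates) {
        String[] tokens = termLine.split(",");
        String date = docDates.get(tokens[2].trim().toLowerCase());
        if (date == null) {
            return null;
        }
        return tokens[0] + "," + tokens[1] + "," + date;
    }

    public static void main(String[] args) {
        Map<String, String> docDates = new HashMap<>();
        docDates.put("1", "2012-05-26T00:00:00Z");
        docDates.put("5", "2010-06-26T00:00:00Z");
        System.out.println(joinLine("was,15,1", docDates));
        System.out.println(joinLine("tunisia,5,5", docDates));
    }
}
```

Each term line now costs one O(1) map lookup instead of a full scan of the keys, which is the difference between O(n) and O(n·m) over the whole file.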
Answer 1 (score: 0)
A few points to consider.
Given file1:
0,2012-05-26T00:00:00Z
1,2012-05-26T00:00:00Z
5,2010-06-26T00:00:00Z
10,2014-05-26T00:00:00Z
and file2:
was,15,1
kill,10,1
tunisia,5,5
peace,1,0
here is an awk-based solution using the updated input:
awk -F',' 'FNR==NR{a[$1]=$2;next}{if(a[$3]==""){a[$3]=0}; print $1,",",$2,",",a[$3]} ' file1 file2
输出:
was , 15 , 2012-05-26T00:00:00Z
kill , 10 , 2012-05-26T00:00:00Z
tunisia , 5 , 2010-06-26T00:00:00Z
peace , 1 , 2012-05-26T00:00:00Z
This answer helped me derive the above solution.
Answer 2 (score: 0)
You should use a divide-and-conquer approach (https://en.wikipedia.org/wiki/Divide_and_conquer_algorithms) with the following pseudo-algorithm:
If A and B are your two large files:

    Open files A(1..n) for writing
    Open file A for reading
    for line in file A
        let modulo = key % n
        write line to file A(modulo)

    Open files B(1..n) for writing
    Open file B for reading
    for line in file B
        let modulo = key % n
        write line to file B(modulo)

    for i = 1..n
        Open file R(i) for writing
        Open files A(i) and B(i)
        merge those files into R(i) using key matching as you do now

    Open file R for writing
    for i = 1..n
        append R(i) to R
Try n = 1024: if your keys are uniformly distributed, you will end up matching files of about 1 GB each.
You will need free disk space (three times the size of A + B if you do not clean up the intermediate files).
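The partitioning step of the pseudo-algorithm above can be sketched in Java roughly as follows (class name, file names, and the plain-comma separator are illustrative assumptions; the key field index would be 0 for the date file and 2 for the term file):

```java
import java.io.*;

public class Partitioner {
    // Split a CSV-like input file into n bucket files by (doc id % n),
    // so matching lines from both inputs land in the same bucket index.
    static void partition(String inPath, String outPrefix, int n, int keyField)
            throws IOException {
        BufferedWriter[] buckets = new BufferedWriter[n];
        for (int i = 0; i < n; i++) {
            buckets[i] = new BufferedWriter(new FileWriter(outPrefix + i));
        }
        try (BufferedReader in = new BufferedReader(new FileReader(inPath))) {
            String line;
            while ((line = in.readLine()) != null) {
                long key = Long.parseLong(line.split(",")[keyField].trim());
                int bucket = (int) (key % n);
                buckets[bucket].write(line);
                buckets[bucket].newLine();
            }
        }
        for (BufferedWriter w : buckets) {
            w.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Tiny demo with n = 2 buckets on the sample date file.
        try (PrintWriter pw = new PrintWriter("dates.txt")) {
            pw.println("0,2012-05-26T00:00:00Z");
            pw.println("5,2010-06-26T00:00:00Z");
        }
        partition("dates.txt", "dates_bucket_", 2, 0);
    }
}
```

After partitioning both files this way, bucket i of the date file only needs to be joined against bucket i of the term file, so each merge fits comfortably in memory.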