I have written some code in Lucene that first indexes XML documents and then finds the number of unique terms in the index.
Say there are n (no. of) unique terms.
I want to generate a matrix of dimension n x n, where
m[i][j] = (co_occurrence value of terms (i, j)) / (occurrence value of term j)
co_occurrence value of terms (i, j) = no. of documents in which both the ith term and the jth term occur; occurrence value of term j = no. of documents in which term j occurs.
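To make the formula concrete (toy numbers, not taken from any real index): if terms i and j co-occur in 3 documents and term j occurs in 10 documents, then m[i][j] = 3/10 = 0.3:

```java
public class MatrixCell {
    public static void main(String[] args) {
        int coOccurrence = 3;  // no. of documents containing both term i and term j
        int occurrence = 10;   // no. of documents containing term j
        double mij = (double) coOccurrence / occurrence;  // cast avoids integer division
        System.out.println(mij); // prints 0.3
    }
}
```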
My code works correctly, but it is not efficient: for a large no. of documents, with more than 2000 terms, it takes more than 10 minutes.
Here is my code for finding the co_occurrence value -
int cooccurrence(IndexReader reader, String term_one, String term_two) throws IOException {
    int common_doc_no = 0, finaldocno_one = 0, finaldocno_two = 0;
    int termdocid_one[] = new int[6000];  // termdocid_one[d] == 1 iff doc d contains term_one
    int termdocid_two[] = new int[6000];  // termdocid_two[d] == 1 iff doc d contains term_two
    int first_docids[] = new int[6000];   // distinct doc ids that contain term_one
    int second_docids[] = new int[6000];  // distinct doc ids that contain term_two
    int k = 0;
    // Scan every field for term_one and record the doc ids it occurs in.
    for (java.util.Iterator<String> it = reader.getFieldNames(
            FieldOption.ALL).iterator(); it.hasNext();) {
        String fieldname = it.next();
        TermDocs t = reader.termDocs(new Term(fieldname, term_one));
        while (t.next()) {
            int x = t.doc();
            if (termdocid_one[x] != 1) {
                finaldocno_one++;
                first_docids[k] = x;
                k++;
            }
            termdocid_one[x] = 1;
        }
    }
    /*
     * System.out.println("value of finaldoc_one - " + finaldocno_one);
     * for (int i = 0; i < finaldocno_one; i++) { System.out.println("" + first_docids[i]); }
     */
    k = 0;
    // Scan every field for term_two and record the doc ids it occurs in.
    for (java.util.Iterator<String> it = reader.getFieldNames(
            FieldOption.ALL).iterator(); it.hasNext();) {
        String fieldname = it.next();
        TermDocs t = reader.termDocs(new Term(fieldname, term_two));
        while (t.next()) {
            int x = t.doc();
            if (termdocid_two[x] != 1) {
                finaldocno_two++;
                second_docids[k] = x;
                k++;
            }
            termdocid_two[x] = 1;
        }
    }
    /*
     * System.out.println("value of finaldoc_two - " + finaldocno_two);
     * for (int i = 0; i < finaldocno_two; i++) { System.out.println("" + second_docids[i]); }
     */
    // Walk one doc-id list and count the ids that are also flagged for the
    // other term. (Note: this picks the *longer* list; walking the shorter
    // one would give the same count with fewer iterations.)
    int max;
    int search = 0;
    if (finaldocno_one > finaldocno_two) {
        max = finaldocno_one;
        search = 1;
    } else {
        max = finaldocno_two;
        search = 2;
    }
    if (search == 1) {
        for (int i = 0; i < max; i++) {
            if (termdocid_two[first_docids[i]] == 1)
                common_doc_no++;
        }
    } else if (search == 2) {
        for (int i = 0; i < max; i++) {
            if (termdocid_one[second_docids[i]] == 1)
                common_doc_no++;
        }
    }
    return common_doc_no;
}
The knowledge-matrix computation code -
void knowledge_matrix(double matrix[][], IndexReader reader, double avg_matrix[][]) throws IOException {
    ArrayList<String> unique_terms_array = new ArrayList<>();
    int totallength = unique_term_count(reader, unique_terms_array);
    int co_occur_matrix[][] = new int[totallength + 3][totallength + 3];
    double rowsum = 0;
    for (int i = 1; i <= totallength; i++) {
        rowsum = 0;
        for (int j = 1; j <= totallength; j++) {
            int co_occurence;
            // Note: this occurrence count is recomputed for every (i, j) pair.
            int occurence = docno_single_term(reader,
                    unique_terms_array.get(j - 1));
            if (i > j) {
                // Lower triangle: reuse the value mirrored from the upper triangle.
                co_occurence = co_occur_matrix[i][j];
            } else {
                co_occurence = cooccurrence(reader,
                        unique_terms_array.get(i - 1),
                        unique_terms_array.get(j - 1));
                co_occur_matrix[i][j] = co_occurence;
                co_occur_matrix[j][i] = co_occurence;
            }
            matrix[i][j] = (float) co_occurence / (float) occurence;
            rowsum += matrix[i][j];
            if (i > 1) {
                avg_matrix[i - 1][j] = matrix[i - 1][j] - matrix[i - 1][0];
            }
        }
        matrix[i][0] = rowsum / totallength;  // row average kept in column 0
    }
    // The last row of avg_matrix is filled in after the main loop.
    for (int j = 1; j <= totallength; j++) {
        avg_matrix[totallength][j] = matrix[totallength][j]
                - matrix[totallength][0];
    }
}
Could someone please suggest a more efficient way to implement this?
Answer 0 (score: 0)
I think you can put the lookups for term_one and term_two into a single for
loop, and use two hash sets to store the doc ids you find. Then use termOneSet.retainAll(termTwoSet)
to get the documents that contain both term_one and term_two.
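A minimal sketch of that idea using plain Java collections (the toy corpus and the fetchDocIds helper are hypothetical stand-ins for the per-field TermDocs loops in the question; the cache is an extra suggestion so that each term's doc-id set is built only once across the whole n x n matrix, rather than once per matrix cell):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CooccurrenceSketch {

    // Cache: term -> set of doc ids containing it, built at most once per term.
    static final Map<String, Set<Integer>> cache = new HashMap<>();

    // Hypothetical stand-in for "collect all doc ids containing this term"
    // (in the real code this would be the TermDocs loop over all fields).
    static Set<Integer> fetchDocIds(String term, Map<Integer, Set<String>> corpus) {
        return cache.computeIfAbsent(term, t -> {
            Set<Integer> ids = new HashSet<>();
            for (Map.Entry<Integer, Set<String>> doc : corpus.entrySet())
                if (doc.getValue().contains(t))
                    ids.add(doc.getKey());
            return ids;
        });
    }

    // Each term is looked up once; retainAll intersects the two doc-id sets.
    static int cooccurrence(String a, String b, Map<Integer, Set<String>> corpus) {
        // Copy one set first so retainAll does not mutate the cached set.
        Set<Integer> common = new HashSet<>(fetchDocIds(a, corpus));
        common.retainAll(fetchDocIds(b, corpus));
        return common.size();
    }

    // Toy corpus: doc id -> set of terms in that document.
    static Map<Integer, Set<String>> toyCorpus() {
        Map<Integer, Set<String>> corpus = new HashMap<>();
        corpus.put(0, new HashSet<>(Arrays.asList("lucene", "index")));
        corpus.put(1, new HashSet<>(Arrays.asList("lucene", "xml")));
        corpus.put(2, new HashSet<>(Arrays.asList("xml", "index")));
        return corpus;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> corpus = toyCorpus();
        System.out.println(cooccurrence("lucene", "xml", corpus)); // doc 1 only -> 1
    }
}
```

With the cache in place, each of the n posting lists is scanned once instead of O(n^2) times, which is usually the dominant cost in the original code; the remaining work is n(n+1)/2 set intersections.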