I am new to the Lucene world and don't have much working knowledge of the subject. I need to extract document term vectors, and I found the following code online: How to extract Document Term Vector in Lucene 3.5.0.
/**
 * Sums the term frequency vector of each document into a single term frequency map
 * @param indexReader the index reader, the document numbers are specific to this reader
 * @param docNumbers document numbers to retrieve frequency vectors from
 * @param fieldNames field names to retrieve frequency vectors from
 * @param stopWords terms to ignore
 * @return a map of each term to its frequency
 * @throws IOException
 */
private Map<String, Integer> getTermFrequencyMap(IndexReader indexReader, List<Integer> docNumbers,
                                                 String[] fieldNames, Set<String> stopWords)
        throws IOException {
    Map<String, Integer> totalTfv = new HashMap<String, Integer>(1024);
    for (Integer docNum : docNumbers) {
        for (String fieldName : fieldNames) {
            TermFreqVector tfv = indexReader.getTermFreqVector(docNum, fieldName);
            if (tfv == null) {
                // ignore fields that have no stored term vector
                continue;
            }
            String[] terms = tfv.getTerms();
            int termCount = terms.length;
            int[] freqs = tfv.getTermFrequencies();
            for (int t = 0; t < termCount; t++) {
                String term = terms[t];
                int freq = freqs[t];
                // filter out single-letter words and stop words
                if (StringUtils.length(term) < 2 || stopWords.contains(term)) {
                    continue;
                }
                Integer totalFreq = totalTfv.get(term);
                totalFreq = (totalFreq == null) ? freq : freq + totalFreq;
                totalTfv.put(term, totalFreq);
            }
        }
    }
    return totalTfv;
}
I have already created an index in the following directory.
String indexDir = "C:\\Lucene\\Output\\";
Directory dir = FSDirectory.open(new File(indexDir));
IndexReader reader = IndexReader.open(dir);
My problem is that I don't know how to get the doc ids (List<Integer> docNumbers) that the above function requires. I have tried a few things, such as
TermDocs docs = reader.termDocs();
but it doesn't work.
Answer 0 (score: 2)
Lucene assigns document ids starting from zero, and maxDoc() is the upper bound, so you can simply loop over all ids, skipping deleted documents (Lucene marks them as deleted when you call deleteDocument):
for (int docNum = 0; docNum < reader.maxDoc(); docNum++) {
    if (reader.isDeleted(docNum)) {
        continue;
    }
    TermFreqVector tfv = reader.getTermFreqVector(docNum, "fieldName");
    ...
}
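Putting this together with the helper from the question, a minimal sketch of collecting all live document numbers and passing them to getTermFrequencyMap could look like the snippet below. The field name "contents" and the empty stop-word set are placeholders for illustration, not names taken from your index.

import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// ...
Directory dir = FSDirectory.open(new File("C:\\Lucene\\Output\\"));
IndexReader reader = IndexReader.open(dir);

// Collect the ids of all non-deleted documents in the index
List<Integer> docNumbers = new ArrayList<Integer>();
for (int docNum = 0; docNum < reader.maxDoc(); docNum++) {
    if (!reader.isDeleted(docNum)) {
        docNumbers.add(docNum);
    }
}

// "contents" and the empty stop-word set are placeholder values
Set<String> stopWords = Collections.emptySet();
Map<String, Integer> tfMap =
        getTermFrequencyMap(reader, docNumbers, new String[] {"contents"}, stopWords);

reader.close();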
For this to work, you must enable term vectors at indexing time; see Field.TermVector.
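For reference, here is a rough sketch of what enabling term vectors looks like at indexing time in Lucene 3.x; the field name "contents", the sample text, and the analyzer choice are assumptions for illustration only.

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// ...
Directory dir = FSDirectory.open(new File("C:\\Lucene\\Output\\"));
IndexWriterConfig config =
        new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35));
IndexWriter writer = new IndexWriter(dir, config);

Document doc = new Document();
// Field.TermVector.YES stores the term vector so getTermFreqVector() can return it later;
// use Field.TermVector.WITH_POSITIONS_OFFSETS if you also need positions and offsets.
doc.add(new Field("contents", "some text to index", Field.Store.YES,
        Field.Index.ANALYZED, Field.TermVector.YES));
writer.addDocument(doc);
writer.close();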