In my Webmethods application I need to implement a search feature, which I have built with Lucene. However, when I search for a file whose title ends in something other than a letter, the search returns no results, for example: doc1.txt or new$.txt.
In the code below, when I print queryCmbd, it prints Search Results>>>>>>>>>>>> title:"doc1 txt" (contents:doc1 contents:txt). When I search for a string like doc.txt, it prints Search Results>>>>>>>>>>>> title:"doc.txt" contents:doc.txt. What do I need to do so that strings like doc1.txt and new$.txt are parsed correctly?
public java.util.ArrayList<DocNames> searchIndex(String querystr,
        String path, StandardAnalyzer analyzer) {
    String FIELD_CONTENTS = "contents";
    String FIELD_TITLE = "title";
    String queryStringCmbd = null;
    queryStringCmbd = new String();
    String queryFinal = new String(querystr.replaceAll(" ", " AND "));
    queryStringCmbd = FIELD_TITLE + ":\"" + queryFinal + "\" OR "
            + queryFinal;
    try {
        FSDirectory directory = FSDirectory.open(new File(path));
        Query q = new QueryParser(Version.LUCENE_36, FIELD_CONTENTS,
                analyzer).parse(querystr);
        Query queryCmbd = new QueryParser(Version.LUCENE_36,
                FIELD_CONTENTS, analyzer).parse(queryStringCmbd);
        int hitsPerPage = 10;
        IndexReader indexReader = IndexReader.open(directory);
        IndexSearcher indexSearcher = new IndexSearcher(indexReader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(
                hitsPerPage, true);
        indexSearcher.search(queryCmbd, collector);
        ScoreDoc[] hits = collector.topDocs().scoreDocs;
        System.out.println("Search Results>>>>>>>>>>>>" + queryCmbd);
        docNames = new ArrayList<DocNames>();
        for (int i = 0; i < hits.length; ++i) {
            int docId = hits[i].doc;
            Document d = indexSearcher.doc(docId);
            DocNames doc = new DocNames();
            doc.setIndex(i + 1);
            doc.setDocName(d.get("title"));
            doc.setDocPath(d.get("path"));
            if (!(d.get("path").contains("indexDirectory"))) {
                docNames.add(doc);
            }
        }
        indexReader.flush();
        indexReader.close();
        indexSearcher.close();
        return docNames;
    } catch (CorruptIndexException e) {
        closeIndex(analyzer);
        e.printStackTrace();
        return null;
    } catch (IOException e) {
        closeIndex(analyzer);
        e.printStackTrace();
        return null;
    } catch (ParseException e) {
        closeIndex(analyzer);
        e.printStackTrace();
        return null;
    }
}
Answer 0 (score: 2)
Your problem comes from the fact that you are using StandardAnalyzer. If you read its javadoc, it tells you that it uses StandardTokenizer to split the text into tokens. This means a term like doc1.txt will be split into doc1 and txt.
If you want to match the whole text, you need to use KeywordAnalyzer, both for indexing and for searching. The following code shows the difference: with StandardAnalyzer the tokens are {"doc1", "txt"}, whereas with KeywordAnalyzer the only token is doc1.txt.
String foo = "foo:doc1.txt";

// StandardAnalyzer runs the input through StandardTokenizer and splits it
StandardAnalyzer sa = new StandardAnalyzer(Version.LUCENE_34);
TokenStream tokenStream = sa.tokenStream("foo", new StringReader(foo));
while (tokenStream.incrementToken()) {
    System.out.println(tokenStream.getAttribute(TermAttribute.class).term());
}

System.out.println("-------------");

// KeywordAnalyzer emits the entire input as a single token
KeywordAnalyzer ka = new KeywordAnalyzer();
TokenStream tokenStream2 = ka.tokenStream("foo", new StringReader(foo));
while (tokenStream2.incrementToken()) {
    System.out.println(tokenStream2.getAttribute(TermAttribute.class).term());
}
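
If the contents field should still be tokenized normally while the title field is matched exactly, one possible approach (not part of the answer above, just a minimal sketch that assumes the "title" and "contents" field names from the question; the class name is only for illustration) is to wrap the two analyzers in a PerFieldAnalyzerWrapper and use that single wrapper both when indexing and when parsing queries:

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class TitleSearchSketch {

    // One analyzer that treats "title" as a single keyword and
    // everything else (e.g. "contents") as normal full text.
    static PerFieldAnalyzerWrapper buildAnalyzer() {
        PerFieldAnalyzerWrapper wrapper =
                new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_36));
        wrapper.addAnalyzer("title", new KeywordAnalyzer());
        return wrapper;
    }

    public static void main(String[] args) throws ParseException {
        PerFieldAnalyzerWrapper analyzer = buildAnalyzer();

        // The same analyzer must also be passed to the IndexWriter
        // (e.g. via IndexWriterConfig) when the documents are indexed;
        // otherwise the stored title terms are still split by
        // StandardAnalyzer and the exact title query below will not match.
        Query q = new QueryParser(Version.LUCENE_36, "contents", analyzer)
                .parse("title:\"doc1.txt\" OR doc1.txt");

        // Prints something like: title:doc1.txt (contents:doc1 contents:txt)
        System.out.println(q);
    }
}

The key point is that the wrapper used at query time must be the same one used when the index was built; if the titles were originally indexed with StandardAnalyzer, the documents have to be re-indexed before the exact title term exists in the index.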