Lucene - SimpleAnalyzer - how to get the matched word?

Date: 2012-03-15 07:26:00

Tags: search lucene full-text-search analyzer

With the approach below I am unable to get the offset of the match, or the matched word itself. Any help would be appreciated.

   ...
   Analyzer analyzer = new SimpleAnalyzer();
   MemoryIndex index = new MemoryIndex();

   QueryParser parser = new QueryParser(Version.LUCENE_30, "content", analyzer);

   float score = index.search(parser.parse("+content:" + target));

   if(score > 0.0f)
        System.out.println("How to know matched word?");

1 Answer:

Answer 0 (score: 2)

Here is a complete in-memory indexing and search example. I just wrote it for myself and it works perfectly. I understand that you need to keep the index in memory, but why do you need MemoryIndex for that? Simply use a RAMDirectory instead: your index will be stored in memory, so when you run a search the index is loaded straight from the RAMDirectory (i.e. from memory).

    // NOTE: `text` (the document body), `word` (the search term), `concordances`,
    // `processor`, and `size` come from the surrounding code of this example.
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_34);
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_34, analyzer);
    RAMDirectory directory = new RAMDirectory();
    try {
        // Index a single document, storing term vectors with offsets so that we can
        // later look up exactly where each matched term occurs in the content.
        IndexWriter indexWriter = new IndexWriter(directory, config);
        Document doc = new Document();
        doc.add(new Field("content", text, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_OFFSETS));
        indexWriter.addDocument(doc);
        indexWriter.optimize();
        indexWriter.close();

        QueryParser parser = new QueryParser(Version.LUCENE_34, "content", analyzer);
        IndexSearcher searcher = new IndexSearcher(directory, true);
        IndexReader reader = IndexReader.open(directory, true);

        Query query = parser.parse(word);
        TopScoreDocCollector collector = TopScoreDocCollector.create(10000, true);
        searcher.search(query, collector);
        ScoreDoc[] hits = collector.topDocs().scoreDocs;
        if (hits != null && hits.length > 0) {
            for (ScoreDoc hit : hits) {
                int docId = hit.doc;
                Document hitDoc = searcher.doc(docId);

                // The term vector holds the analyzed terms of the "content" field
                // together with their character offsets in the original text.
                TermFreqVector termFreqVector = reader.getTermFreqVector(docId, "content");
                TermPositionVector termPositionVector = (TermPositionVector) termFreqVector;
                // indexOf() expects the analyzed form of the term (StandardAnalyzer lowercases).
                int termIndex = termFreqVector.indexOf(word);
                if (termIndex == -1) {
                    continue; // the term does not occur in this document's term vector
                }
                TermVectorOffsetInfo[] termVectorOffsetInfos = termPositionVector.getOffsets(termIndex);

                // Each offset tells us where the matched word starts and ends in the stored content.
                for (TermVectorOffsetInfo termVectorOffsetInfo : termVectorOffsetInfos) {
                    concordances.add(processor.processConcordance(hitDoc.get("content"), word, termVectorOffsetInfo.getStartOffset(), size));
                }
            }
        }

        reader.close();
        searcher.close();
        analyzer.close();
        directory.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
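
If you only need to see which words from the query actually matched (rather than building concordances from term vectors), a shorter route is Lucene's contrib Highlighter. The snippet below is a sketch of mine, not part of the original answer; it assumes Lucene 3.x with the lucene-highlighter jar on the classpath and is meant to slot into the loop over `hits` above, reusing the `analyzer`, `query`, and `hitDoc` variables from the example:

    import org.apache.lucene.search.highlight.Highlighter;
    import org.apache.lucene.search.highlight.QueryScorer;
    import org.apache.lucene.search.highlight.SimpleHTMLFormatter;

    ...
    // Wrap every matched term in [ ] so the matched words are easy to spot.
    QueryScorer scorer = new QueryScorer(query, "content");
    Highlighter highlighter = new Highlighter(new SimpleHTMLFormatter("[", "]"), scorer);

    // Re-analyzes the stored text and marks the terms that the query matched;
    // returns null if nothing in this document could be highlighted.
    String fragment = highlighter.getBestFragment(analyzer, "content", hitDoc.get("content"));
    System.out.println(fragment); // e.g. "the quick [brown] fox" for the query "brown"

This way you never have to guess the analyzed form of the term yourself: the Highlighter runs the same analyzer over the stored text and marks whatever the query matched.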