Lucene: wildcard does not match a digit after a dot

Date: 2018-11-08 10:39:03

Tags: java lucene

I recently upgraded from Lucene 3 to Lucene 6, and in v6 I found that the wildcard ? no longer matches a digit after a dot. Here is an example:

String to match: a.1a

Query: a.?a

In this example, the query matches the string in Lucene 3, but not in Lucene 6. On the other hand, the query a* matches in both Lucene 3 and 6. Testing suggests that this difference in behavior only occurs when the dot is followed by a digit. Incidentally, I am using the StandardAnalyzer in both Lucene 3 and 6.

Does anyone know what is going on here? How can I restore the Lucene 3 behavior, or adapt my Lucene 6 query so that it is equivalent to the Lucene 3 query?

Update

Here is the requested Lucene 6.6 code snippet:

public List<ResultDocument> search(String queryString)
        throws SearchException, CheckedOutOfMemoryError {
    stopped = false;

    QueryWrapper queryWrapper = createQuery(queryString);
    Query query = queryWrapper.query;
    boolean isPhraseQuery = queryWrapper.isPhraseQuery;

    readLock.lock();
    try {
        checkIndexesExist();

        DelegatingCollector collector = new DelegatingCollector() {
            @Override
            public void collect(int doc) throws IOException {
                leafDelegate.collect(doc);
                if (stopped) {
                    throw new StoppedSearcherException();
                }
            }
        };
        collector.setDelegate(TopScoreDocCollector.create(MAX_RESULTS, null));
        try {
            luceneSearcher.search(query, collector);
        }
        catch (StoppedSearcherException e) {
            // Search was stopped deliberately; keep the results collected so far.
        }
        ScoreDoc[] scoreDocs = ((TopScoreDocCollector)collector.getDelegate()).topDocs().scoreDocs;

        ResultDocument[] results = new ResultDocument[scoreDocs.length];
        for (int i = 0; i < scoreDocs.length; i++) {
            Document doc = luceneSearcher.doc(scoreDocs[i].doc);
            float score = scoreDocs[i].score;
            LuceneIndex index = indexes.get(((DecoratedMultiReader) luceneSearcher.getIndexReader()).decoratedReaderIndex(i));
            IndexingConfig config = index.getConfig();
            results[i] = new ResultDocument(
                doc, score, query, isPhraseQuery, config, fileFactory,
                outlookMailFactory);
        }
        return Arrays.asList(results);
    }
    catch (IllegalArgumentException e) {
        throw wrapEmptyIndexException(e);
    }
    catch (IOException e) {
        throw new SearchException(e.getMessage());
    }
    catch (OutOfMemoryError e) {
        throw new CheckedOutOfMemoryError(e);
    }
    finally {
        readLock.unlock();
    }
}

More code:

private static QueryWrapper createQuery(String queryString)
        throws SearchException {
    PhraseDetectingQueryParser queryParser = new PhraseDetectingQueryParser(
        Fields.CONTENT.key(), IndexRegistry.getAnalyzer());
    queryParser.setAllowLeadingWildcard(true);
    RewriteMethod rewriteMethod = MultiTermQuery.SCORING_BOOLEAN_REWRITE;
    queryParser.setMultiTermRewriteMethod(rewriteMethod);

    try {
        Query query = queryParser.parse(queryString);
        boolean isPhraseQuery = queryParser.isPhraseQuery();
        return new QueryWrapper(query, isPhraseQuery);
    }
    catch (IllegalArgumentException e) {
        throw new SearchException(e.getMessage());
    }
    catch (ParseException e) {
        throw new SearchException(e.getMessage());
    }
}

private static final class QueryWrapper {
    public final Query query;
    public final boolean isPhraseQuery;

    private QueryWrapper(Query query, boolean isPhraseQuery) {
        this.query = query;
        this.isPhraseQuery = isPhraseQuery;
    }
}

More code:

public final class PhraseDetectingQueryParser extends QueryParser {

    /*
     * This class is used for determining whether the parsed query is supported
     * by the fast-vector highlighter. The latter only supports queries that are
     * a combination of TermQuery, PhraseQuery and/or BooleanQuery.
     */

    private boolean isPhraseQuery = true;

    public PhraseDetectingQueryParser(  String defaultField,
                                        Analyzer analyzer) {
        super(defaultField, analyzer);
    }

    public boolean isPhraseQuery() {
        return isPhraseQuery;
    }

    @Override
    protected Query newFuzzyQuery(  Term term,
                                    float minimumSimilarity,
                                    int prefixLength) {
        isPhraseQuery = false;
        return super.newFuzzyQuery(term, minimumSimilarity, prefixLength);
    }

    @Override
    protected Query newMatchAllDocsQuery() {
        isPhraseQuery = false;
        return super.newMatchAllDocsQuery();
    }

    @Override
    protected Query newPrefixQuery(Term prefix) {
        isPhraseQuery = false;
        return super.newPrefixQuery(prefix);
    }

    @Override
    protected Query newWildcardQuery(org.apache.lucene.index.Term t) {
        isPhraseQuery = false;
        return super.newWildcardQuery(t);
    }

}

1 answer:

Answer 0 (score: 0)

The StandardAnalyzer splits the input into terms at that period (unless the characters on both sides of it are both letters, or both digits). So your string gets split into two terms: "a" and "1a".
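As a rough illustration (this is not Lucene's actual implementation, which follows the full UAX#29 word-break rules), the splitting rule described above can be approximated with a regular expression: a period only stays inside a token when the characters immediately before and after it are both letters or both digits. The class and method names here are made up for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenizeSketch {
    // Approximation only: a '.' joins two alphanumeric runs solely when the
    // characters on both sides of it are both letters or both digits.
    private static final Pattern TOKEN = Pattern.compile(
        "[A-Za-z0-9]+(?:(?:(?<=[A-Za-z])\\.(?=[A-Za-z])|(?<=[0-9])\\.(?=[0-9]))[A-Za-z0-9]+)*");

    static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(input);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("a.1a")); // [a, 1a] -- letter/digit mix: split at the dot
        System.out.println(tokenize("a.ba")); // [a.ba]  -- letters on both sides: kept whole
        System.out.println(tokenize("3.14")); // [3.14]  -- digits on both sides: kept whole
    }
}
```

This reproduces the behavior the answer describes: "a.1a" becomes the two terms "a" and "1a", while "a.ba" and "3.14" each remain a single term.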

Since you are using a wildcard query, no analysis is performed on the query side, so the query is not tokenized, and no term in the index matches it. If you search for "1a", without wildcards or anything else, the document should be found.
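To see why the wildcard query then matches nothing, note that wildcard matching runs against the individual indexed terms, not the original text. In a sketch (translating the Lucene wildcard pattern a.?a into an equivalent regex purely for illustration; ? matches exactly one character, and the dot in the query is a literal character):

```java
public class WildcardSketch {
    public static void main(String[] args) {
        // Lucene wildcard a.?a, rewritten as a regex: literal 'a', literal '.',
        // any single character, literal 'a'.
        String regex = "a\\..a";

        // The index contains the terms "a" and "1a", not the raw string "a.1a":
        System.out.println("a".matches(regex));    // false
        System.out.println("1a".matches(regex));   // false

        // Only the untokenized original string would match the pattern:
        System.out.println("a.1a".matches(regex)); // true
    }
}
```

Neither indexed term matches the pattern, which is why Lucene 6 returns no hit even though the raw string itself would match.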