How to search a name containing the keyword "with" using Lucene / Hibernate Search?

Time: 2017-04-17 12:42:34

Tags: java hibernate lucene hibernate-search

The name of the person to be searched is "Suleman Kumar With", where "With" is the last name. The search works for all other names, but not for this English keyword.

Here is how I create the Lucene index:

@Fields({
    @Field(index = Index.YES, store = Store.NO),
    @Field(name = "LastName_Sort", index = Index.YES, analyzer = @Analyzer(definition = "sortAnalyzer")) })
@Column(name = "LASTNAME", length = 50)
public String getLastName() {
    return lastName;
}

The sortAnalyzer has the following configuration:

@AnalyzerDef(name = "sortAnalyzer",
    tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
            @Parameter(name = "pattern", value = "('-&\\.,\\(\\))"),
            @Parameter(name = "replacement", value = " "),
            @Parameter(name = "replace", value = "all")
        }),
        @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
            @Parameter(name = "pattern", value = "([^0-9\\p{L} ])"),
            @Parameter(name = "replacement", value = ""),
            @Parameter(name = "replace", value = "all")
        })
    }
)

When searching on the last name together with the primary key (ID), I get a token mismatch error.
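The post does not show the search code itself; for context, a Hibernate Search keyword query on the indexed last-name field usually looks something like the sketch below (the Person entity, field names, and session handling are assumptions, not taken from the question):

import java.util.List;

import org.apache.lucene.search.Query;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

public class PersonSearchExample {

    // Hypothetical lookup; Person and its field names are assumptions.
    public List<?> findByLastName(Session session, String lastName) {
        FullTextSession fullTextSession = Search.getFullTextSession(session);

        QueryBuilder qb = fullTextSession.getSearchFactory()
                .buildQueryBuilder().forEntity(Person.class).get();

        // Keyword query on the analyzed last-name field. The query-time
        // analyzer must produce the same tokens as the index-time analyzer,
        // otherwise a term such as "with" will not match. A clause on the
        // primary key can be combined via qb.bool().must(...).
        Query luceneQuery = qb.keyword()
                .onField("LastName_Sort")
                .matching(lastName)
                .createQuery();

        return fullTextSession
                .createFullTextQuery(luceneQuery, Person.class)
                .list();
    }
}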

2 answers:

Answer 0 (score: 1)

I achieved this by using my own custom Analyzer:

public class IgnoreStopWordsAnalyzer extends StopwordAnalyzerBase {

    public IgnoreStopWordsAnalyzer() {
        // Passing null gives the analyzer an empty stop-word set
        super(Version.LUCENE_36, null);
    }

    @Override
    protected ReusableAnalyzerBase.TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final StandardTokenizer src = new StandardTokenizer(Version.LUCENE_36, reader);
        TokenStream tok = new StandardFilter(Version.LUCENE_36, src);
        tok = new LowerCaseFilter(Version.LUCENE_36, tok);
        // The stop-word set is empty, so tokens such as "with" are kept
        tok = new StopFilter(Version.LUCENE_36, tok, this.stopwords);
        return new ReusableAnalyzerBase.TokenStreamComponents(src, tok);
    }
}

With this analyzer applied to the field, the stop-word filtering is effectively a no-op (the stop-word set is empty), so a last name like "With" is indexed and can be matched.
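The answer does not show the mapping itself; as a sketch, the custom analyzer can be attached to the field directly (this mirrors the @Field mapping from the question and assumes Hibernate Search 4.x annotations):

// Sketch: attach the custom analyzer to the last-name field
@Field(index = Index.YES, store = Store.NO,
       analyzer = @Analyzer(impl = IgnoreStopWordsAnalyzer.class))
@Column(name = "LASTNAME", length = 50)
public String getLastName() {
    return lastName;
}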

Answer 1 (score: 1)

For Hibernate Search version 5 you can use the following custom analyzer:

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;

public class IgnoreStopWordsAnalyzer extends StopwordAnalyzerBase {

    public IgnoreStopWordsAnalyzer() {
        // null is treated as an empty stop-word set by StopwordAnalyzerBase
        super(null);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        final Tokenizer source = new StandardTokenizer();
        TokenStream tokenStream = new StandardFilter(source);
        tokenStream = new LowerCaseFilter(tokenStream);
        // Empty stop-word set: tokens such as "with" are not removed
        tokenStream = new StopFilter(tokenStream, this.stopwords);
        return new TokenStreamComponents(source, tokenStream);
    }

}
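
Not part of the original answer, but the class above can be attached to a field in the same way as in the previous answer (via @Analyzer(impl = IgnoreStopWordsAnalyzer.class)), and a quick way to confirm it keeps stop words is to print the tokens it produces for the problematic name (the field name and sample text below are arbitrary):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerCheck {

    public static void main(String[] args) throws Exception {
        try (Analyzer analyzer = new IgnoreStopWordsAnalyzer();
             TokenStream stream = analyzer.tokenStream("lastName", "Suleman Kumar With")) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                // Expected output: suleman, kumar, with
                System.out.println(term.toString());
            }
            stream.end();
        }
    }
}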