Lucene 4.2 analyzer when indexing fields

Time: 2013-04-30 23:52:31

Tags: lucene indexing field analyzer

I am trying to index a set of documents with Lucene 4.2. I created a custom analyzer that does not tokenize but does lowercase the terms, using the following code:

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.core.KeywordTokenizer;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.util.Version;

    public class NoTokenAnalyzer extends Analyzer {

        public Version matchVersion;

        public NoTokenAnalyzer(Version matchVersion) {
            this.matchVersion = matchVersion;
        }

        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            // Emit the whole field value as a single token, then lowercase it.
            final KeywordTokenizer source = new KeywordTokenizer(reader);
            TokenStream result = new LowerCaseFilter(matchVersion, source);
            return new TokenStreamComponents(source, result);
        }
    }
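
To check what the analyzer actually emits (and whether lowercasing happens at all), the token stream can be consumed directly. This is a minimal sketch, not part of the original question, assuming the NoTokenAnalyzer above; the sample text is made up:

    import java.io.StringReader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class AnalyzerCheck {
        public static void main(String[] args) throws Exception {
            Analyzer analyzer = new NoTokenAnalyzer(Version.LUCENE_42);
            TokenStream ts = analyzer.tokenStream("f1", new StringReader("Some Field VALUE"));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                // Expected output: a single token "some field value"
                System.out.println(term.toString());
            }
            ts.end();
            ts.close();
        }
    }

If the single lowercased token shows up here, the analyzer itself behaves as intended and the issue lies elsewhere.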

I build the index with this analyzer (inspired by the example code in the Lucene documentation):

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.IndexWriterConfig.OpenMode;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public static void IndexFile(Analyzer analyzer) throws IOException {
        boolean create = true;

        String directoryPath = "path";
        File folderToIndex = new File(directoryPath);
        File[] filesToIndex = folderToIndex.listFiles();

        Directory directory = FSDirectory.open(new File("index path"));

        IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42, analyzer);

        if (create) {
            // Create a new index in the directory, removing any
            // previously indexed documents:
            iwc.setOpenMode(OpenMode.CREATE);
        } else {
            // Add new documents to an existing index:
            iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
        }

        IndexWriter writer = new IndexWriter(directory, iwc);

        for (final File singleFile : filesToIndex) {
            // process files in the directory and extract strings to index
            // ..........
            String field1;
            String field2;

            // index fields
            Document doc = new Document();

            Field f1Field = new Field("f1", field1, TextField.TYPE_STORED);
            doc.add(f1Field);
            doc.add(new Field("f2", field2, TextField.TYPE_STORED));

            // the document still has to be handed to the writer,
            // otherwise nothing gets indexed at all:
            writer.addDocument(doc);
        }
        writer.close();
    }

The problem is that the indexed fields are not tokenized, but they are not lowercased either; in other words, the analyzer does not seem to be applied during indexing. I cannot figure out what is wrong. How can I get the analyzer to work?
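
One thing worth noting: stored field values are kept verbatim and never run through the analyzer, so looking at a document's stored fields (for example via IndexReader.document() or a tool like Luke) always shows the original, non-lowercased text. To see whether the analyzer was applied, the terms in the inverted index have to be inspected instead. A minimal sketch, not part of the original post, assuming the index directory used above:

    import java.io.File;

    import org.apache.lucene.index.AtomicReaderContext;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.Terms;
    import org.apache.lucene.index.TermsEnum;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.BytesRef;

    public class DumpTerms {
        public static void main(String[] args) throws Exception {
            DirectoryReader reader = DirectoryReader.open(FSDirectory.open(new File("index path")));
            for (AtomicReaderContext ctx : reader.leaves()) {
                // terms actually indexed for field "f1" in this segment
                Terms terms = ctx.reader().terms("f1");
                if (terms == null) {
                    continue;
                }
                TermsEnum te = terms.iterator(null);
                BytesRef term;
                while ((term = te.next()) != null) {
                    // With the analyzer applied, each term is the whole field value, lowercased.
                    System.out.println(term.utf8ToString());
                }
            }
            reader.close();
        }
    }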

1 Answer:

Answer 0 (score: 1)

The code works correctly. So it is indeed possible to create a custom analyzer this way in Lucene 4.2 and use it for both indexing and searching.
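
For the searching side mentioned in the answer, here is a sketch that is not from the original answer and relies on the assumptions above: since each field value is indexed as one lowercased token, an exact-value lookup can be done with a TermQuery on the lowercased string (the classic QueryParser first splits the query text on whitespace, so it is a poor fit for this analyzer). The sample value is made up.

    import java.io.File;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.FSDirectory;

    public class SearchExample {
        public static void main(String[] args) throws Exception {
            DirectoryReader reader = DirectoryReader.open(FSDirectory.open(new File("index path")));
            IndexSearcher searcher = new IndexSearcher(reader);

            // The query term must match the indexed token, i.e. the whole field
            // value in lowercase (String.toLowerCase() approximates what
            // LowerCaseFilter produced at index time).
            Query query = new TermQuery(new Term("f1", "Some Field VALUE".toLowerCase()));

            TopDocs hits = searcher.search(query, 10);
            for (ScoreDoc sd : hits.scoreDocs) {
                // Stored values come back unchanged, in their original case.
                System.out.println(searcher.doc(sd.doc).get("f1"));
            }
            reader.close();
        }
    }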