Reloading a Lucene suggester index

Date: 2015-08-31 10:06:35

Tags: lucene

How do I store and reload a Lucene suggester index?

This is how the suggester index is built:

import java.nio.file.Path
import org.apache.lucene.analysis.standard.StandardAnalyzer
import org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggester
import org.apache.lucene.store.FSDirectory
import org.apache.lucene.util.BytesRef

def buildAutoCompleteIndex(path: Path, data: List[Map[String, Any]])
  : BlendedInfixSuggester = {
    val directory = FSDirectory.open(path)
    val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())

    // Build against an empty iterator first, then add the entries one by one
    autoComplete.build(new EntityIteratorStub())

    data.foreach { d =>
      autoComplete.add(d("text").asInstanceOf[BytesRef],
        d("contexts").asInstanceOf[java.util.Set[BytesRef]],  // add() expects a java.util.Set
        d("weight").asInstanceOf[Long],
        d("payload").asInstanceOf[BytesRef])
    }
    autoComplete.refresh()

    autoComplete
  }
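
The EntityIteratorStub used above is not shown in the post. A minimal sketch of what such an empty InputIterator could look like (the class body below is an assumption for illustration, not the original class):

import org.apache.lucene.search.suggest.InputIterator
import org.apache.lucene.util.BytesRef

// Hypothetical stand-in for EntityIteratorStub: an InputIterator that yields no
// entries, so build() only initialises an empty index that is then filled via add().
class EntityIteratorStub extends InputIterator {
  override def next(): BytesRef = null                    // no entries to iterate
  override def weight(): Long = 0L
  override def payload(): BytesRef = null
  override def hasPayloads(): Boolean = false
  override def contexts(): java.util.Set[BytesRef] = null
  override def hasContexts(): Boolean = false
}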

However, if I try to check whether the index already exists, for example after a server restart, I get a "suggester was not built" exception.

import scala.util.control.NonFatal

def checkIfIndexExists(path: Path): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())

  try {
    // exception occurs here ->
    if (autoComplete.lookup("a", 1, true, false).size > 0) autoComplete
    else null
  } catch {
    case NonFatal(e) =>
      println("Index does not exist, recreating at " + path)
      null
  }
}

EDIT ==========================

I found this in Lucene's AnalyzingInfixSuggester:

  @Override
  public boolean store(DataOutput in) throws IOException {
    return false;
  }

  @Override
  public boolean load(DataInput out) throws IOException {
    return false;
  }

Does this mean that storing and reloading a suggester index cannot be done at all?
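
For illustration (a sketch assuming a suggester built as above; this snippet is not from the original post): because those overrides return false, persisting this suggester through the generic Lookup store API is a no-op, so persistence has to go through the suggester's own index directory instead.

import java.io.FileOutputStream
import org.apache.lucene.store.OutputStreamDataOutput

// store() is a no-op for AnalyzingInfixSuggester and its subclasses such as
// BlendedInfixSuggester: nothing is written and the call simply returns false.
val out = new OutputStreamDataOutput(new FileOutputStream("suggest.bin"))
val stored = autoComplete.store(out)   // stored == false
out.close()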

1 answer:

Answer 0 (score: 0)

Using commit solved the problem. Calling commit() after the entries have been added persists the suggester's underlying index to the directory, so a suggester constructed over the same directory after a restart can answer lookups without being rebuilt.

def checkIfIndexExists(path: Path): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)

  // If the directory already holds a committed suggester index, the constructor
  // opens it and getCount succeeds; otherwise the call throws and we rebuild.
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())

  try {
    if (autoComplete.getCount > 0) autoComplete
    else null
  } catch {
    case NonFatal(e) => null
  }
}


def buildAutoCompleteIndex(path: Path, data: List[Map[String, Any]])
  : BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)

  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())

  // Just build against an empty stub iterator to get started with
  autoComplete.build(new EntityIteratorStub())

  data.foreach { d =>
    autoComplete.add(d("text").asInstanceOf[BytesRef],
      d("contexts").asInstanceOf[java.util.Set[BytesRef]],  // add() expects a java.util.Set
      d("weight").asInstanceOf[Long],
      d("payload").asInstanceOf[BytesRef])
  }
  autoComplete.refresh()

  // commit() persists the underlying index to the directory so that it
  // survives a restart and can be reopened by checkIfIndexExists above.
  autoComplete.commit()

  autoComplete
}
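
A possible way to wire the two functions together at server startup (illustrative only; indexPath, its value, and loadSuggestionData are assumed names, not part of the original answer): reuse the committed index when it is already on disk, otherwise rebuild it.

val indexPath: Path = java.nio.file.Paths.get("/var/data/suggest-index")  // assumed location

val suggester = Option(checkIfIndexExists(indexPath))
  .getOrElse(buildAutoCompleteIndex(indexPath, loadSuggestionData()))     // loadSuggestionData is hypothetical

// Lookups now work against either the reloaded or the freshly built index.
val results = suggester.lookup("coff", 5, true, false)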