OpenNLP model builder addon does not continue

Time: 2017-10-26 09:02:37

Tags: java machine-learning opennlp named-entity-recognition

I am using the OpenNLP model builder addon to create a better NER model. Following this post, I used the code posted by markg:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.Span;
// DefaultModelBuilderUtil ships with the OpenNLP model builder addon;
// its package depends on the addon version on your classpath

public class ModelBuilderAddonUse {

  private static List<String> getSentencesFromSomewhere() throws Exception 
  {
      List<String> list = new ArrayList<String>();
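      // note: FileReader reads the raw bytes of this .docx (a zipped format) as
      // characters; it does not extract the document text, so a plain .txt file
      // is what this loop actually expects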
      BufferedReader reader = new BufferedReader(new FileReader("D:\\Work\\workspaces\\default\\UpdateModel\\documentrequirements.docx"));
      String line;
      while ((line = reader.readLine()) != null) 
      {
          list.add(line);
      }
      reader.close();
      return list;

    }

  public static void main(String[] args) throws Exception {
    /**
     * establish a file to put sentences in
     */
    File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");

    /**
     * establish a file to put your NER hits in (the ones you want to keep based
     * on prob)
     */
    File knownEntities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\knownentities.txt");

    /**
     * establish a BLACKLIST file to put your bad NER hits in (also can be based
     * on prob)
     */
    File blacklistedentities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\blentities.txt");

    /**
     * establish a file to write your annotated sentences to
     */
    File annotatedSentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\annotatedSentences.txt");

    /**
     * establish a file to write your model to
     */
    File theModel = new File("D:\\Work\\workspaces\\default\\UpdateModel\\nl-ner-person.bin");


//------------create a bunch of file writers to write your results and sentences to a file

    FileWriter sentenceWriter = new FileWriter(sentences, true);
    FileWriter blacklistWriter = new FileWriter(blacklistedentities, true);
    FileWriter knownEntityWriter = new FileWriter(knownEntities, true);

//set some thresholds to decide where to write hits, you don't have to use these at all...
    double keeperThresh = .95;
    double blacklistThresh = .7;


    /**
     * Load your model as normal
     */
    TokenNameFinderModel personModel = new TokenNameFinderModel(new File("D:\\Work\\workspaces\\default\\UpdateModel\\nl-ner-person.bin"));
    NameFinderME personFinder = new NameFinderME(personModel);
    /**
     * do your normal NER on the sentences you have
     */
   for (String s : getSentencesFromSomewhere()) {
      sentenceWriter.write(s.trim() + "\n");
      sentenceWriter.flush();

      String[] tokens = s.split(" ");//better to use a tokenizer really
      Span[] find = personFinder.find(tokens);
      double[] probs = personFinder.probs();
      String[] names = Span.spansToStrings(find, tokens);
      for (int i = 0; i < names.length; i++) {
        //YOU PROBABLY HAVE BETTER HEURISTICS THAN THIS TO MAKE SURE YOU GET GOOD HITS OUT OF THE DEFAULT MODEL
        if (probs[i] > keeperThresh) {
          knownEntityWriter.write(names[i].trim() + "\n");
        }
        if (probs[i] < blacklistThresh) {
          blacklistWriter.write(names[i].trim() + "\n");
        }
      }
      personFinder.clearAdaptiveData();
      blacklistWriter.flush();
      knownEntityWriter.flush();
    }
    //flush and close all the writers
    knownEntityWriter.flush();
    knownEntityWriter.close();
    sentenceWriter.flush();
    sentenceWriter.close();
    blacklistWriter.flush();
    blacklistWriter.close();

    /**
     * THIS IS WHERE THE ADDON IS GOING TO USE THE FILES (AS IS) TO CREATE A NEW MODEL. YOU SHOULD NOT HAVE TO RUN THE FIRST PART AGAIN AFTER THIS RUNS, JUST NOW PLAY WITH THE
     * KNOWN ENTITIES AND BLACKLIST FILES AND RUN THE METHOD BELOW AGAIN UNTIL YOU GET SOME DECENT RESULTS (A DECENT MODEL OUT OF IT).
     */
    DefaultModelBuilderUtil.generateModel(sentences, knownEntities, blacklistedentities, theModel, annotatedSentences, "person", 3);


  }
}
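The `//better to use a tokenizer really` comment in the loop above is worth acting on: splitting on single spaces feeds malformed tokens (punctuation glued to names) into the name finder, which depresses the probabilities the thresholds rely on. A minimal sketch of the swap, assuming only opennlp-tools on the classpath (the sentence string is made up for illustration):

import opennlp.tools.tokenize.SimpleTokenizer;

public class TokenizeExample {
  public static void main(String[] args) {
    // SimpleTokenizer splits on character-class boundaries, so trailing
    // punctuation becomes its own token instead of sticking to a name
    String s = "Jan de Vries sprak gisteren met Maria Jansen.";
    String[] tokens = SimpleTokenizer.INSTANCE.tokenize(s);
    for (String token : tokens) {
      System.out.println(token);
    }
  }
}

A trained TokenizerME with a language-specific tokenizer model would be better still; SimpleTokenizer just avoids depending on an extra model file.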

It does run, but my output stops at:

    annotated sentences: 1862
    knowns: 58
    Building Model using 1862 annotations
    reading training data...

But in the example from the post, it should continue further:

    Indexing events using cutoff of 5
    Computing event counts...  done. 561755 events
    Indexing...  done.
    Sorting and merging events... done. Reduced 561755 events to 127362.
    Done indexing.
    Incorporating indexed data for training...  
    done.
    Number of Event Tokens: 127362
        Number of Outcomes: 3
      Number of Predicates: 106490
    ...done.

Can anyone help me fix this problem so that I can generate a model? I have searched a lot but could not find any good documentation about it. Really appreciated, thanks.

1 Answer:

Answer 0 (score: 0)

Correct the path to the training data file, like this:

File sentences = new File("D:/Work/workspaces/default/UpdateModel/sentences.text");

instead of

File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");

Update:

This works by adding the files to the project folder. Try it like this -

File sentences = new File("src/training/resources/CreateModel/sentences.txt");

Check my repository on GitHub for reference.

This should help.
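If the build still stops at `reading training data...`, a quick way to confirm that the relative path actually resolves is to print the absolute path before calling generateModel. A small sanity-check sketch (file name taken from the answer above; PathCheck is just an illustrative class name):

import java.io.File;

public class PathCheck {
  public static void main(String[] args) {
    // relative paths resolve against the JVM's working directory, which in an
    // IDE is usually the project root; printing the absolute path shows where
    // Java is actually looking
    File sentences = new File("src/training/resources/CreateModel/sentences.txt");
    System.out.println(sentences.getAbsolutePath() + " exists: " + sentences.exists());
  }
}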