I am trying to use Stanford NLP and keep getting the error "Error while loading a tagger model (probably missing model file)".
I don't know what is going wrong. I have installed the main jar via Maven:
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.1</version>
</dependency>
I also tried to install the model files via Maven:
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.1</version>
    <classifier>models</classifier>
</dependency>
But the dependency still fails to resolve (the package keeps showing up in red in the IDE):
So I downloaded a model jar from here and added it via File - Project Structure - Add Library - Compile - OK:
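To see whether either of these attempts actually put the models jar on the classpath, a quick resource lookup can tell. This is a minimal sketch: the class name `ModelCheck` is mine, and the resource path is where the English POS model lives inside the 3.9.1 models jar.

```java
// Checks whether the stanford-corenlp models jar is visible on the classpath.
// If getResource() returns null, the jar was never added, and the tagger
// load will fail with "probably missing model file".
public class ModelCheck {
    // English POS model as packaged in the CoreNLP 3.9.1 models jar
    static final String TAGGER_RESOURCE =
        "edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger";

    static boolean modelsOnClasspath() {
        return ModelCheck.class.getClassLoader().getResource(TAGGER_RESOURCE) != null;
    }

    public static void main(String[] args) {
        if (modelsOnClasspath()) {
            System.out.println("models jar found on classpath");
        } else {
            System.out.println("models jar NOT on classpath -- the tagger load will fail");
        }
    }
}
```

If this prints "NOT on classpath", the problem is the dependency/library setup rather than the code itself.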
However, when I run my test class, it does not work, and the error above appears:
package com.util;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.util.Properties;

public class TextEater {
    public static void main(String[] args) {
        Properties props = new Properties();
        // set the list of annotators to run
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,depparse,coref,kbp,quote");
        // set a property for an annotator, in this case the coref annotator is being set to use the neural algorithm
        props.setProperty("coref.algorithm", "neural");
        // build pipeline
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        // create a document object
        CoreDocument document = new CoreDocument("Hello how are you");
        // annotate the document
        pipeline.annotate(document);
        // examples
        // second token of the document ("Hello how are you" has only four tokens,
        // so an index like 10 would throw IndexOutOfBoundsException)
        CoreLabel token = document.tokens().get(1);
        System.out.println("Example: token");
        System.out.println(token);
        System.out.println();
    }
}
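As a debugging step, it can help to cut the annotator list down to just what the POS tagger needs, which isolates the model-loading failure from all the other annotators. The sketch below only builds the `Properties`; `MinimalPos` is a hypothetical helper class, and the next step would be passing `minimalProps()` to `new StanfordCoreNLP(...)`, which either reproduces the tagger error or confirms the models jar is visible.

```java
import java.util.Properties;

// Minimal annotator configuration for isolating the tagger-model error:
// "pos" depends only on "tokenize" and "ssplit", so any failure with these
// properties points directly at the POS model file being missing.
public class MinimalPos {
    static Properties minimalProps() {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos");
        return props;
    }

    public static void main(String[] args) {
        // Next step (requires the CoreNLP jars on the classpath):
        //   StanfordCoreNLP pipeline = new StanfordCoreNLP(minimalProps());
        System.out.println(minimalProps().getProperty("annotators"));
    }
}
```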
Why does manually importing the model library not seem to work? How can I fix this? Any help is appreciated.