I want to do something very simple: given a string that contains pronouns, I want to resolve them.
For example, I want to turn the sentence "Mary has a little lamb. She is very cute." into "Mary has a little lamb. Mary is very cute." I have tried using Stanford CoreNLP, but I cannot seem to get the parser started. I imported all of the included jars into the project using Eclipse, and I have allocated 3 GB to the JVM (-Xmx3g).
The error is rather baffling:
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
I do not understand where that L comes from; I think it is the root of my problem... which is strange. I tried looking through the source files, but I could not find any reference to it there.
Code:
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefGraphAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.IntTuple;
import edu.stanford.nlp.util.Pair;
import edu.stanford.nlp.util.Timing;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
public class Coref {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
        Properties props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // read some text in the text variable
        String text = "Mary has a little lamb. She is very cute."; // Add your text here!

        // create an empty Annotation just with the given text
        Annotation document = new Annotation(text);

        // run all Annotators on this text
        pipeline.annotate(document);

        // these are all the sentences in this document
        // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            // a CoreLabel is a CoreMap with additional token-specific methods
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);
            }

            // this is the parse tree of the current sentence
            Tree tree = sentence.get(TreeAnnotation.class);
            System.out.println(tree);

            // this is the Stanford dependency graph of the current sentence
            SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
        }

        // This is the coreference link graph
        // Each chain stores a set of mentions that link to each other,
        // along with a method for getting the most representative mention
        // Both sentence and token offsets start at 1!
        Map<Integer, CorefChain> graph =
                document.get(CorefChainAnnotation.class);
        System.out.println(graph);
    }
}
Full stack trace:
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Loading POS Model [edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger] ... Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
done [2.2 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
Adding annotator parse
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
    at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
    at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:186)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:178)
    at Coref.main(Coref.java:41)
Answer (score: 9):
Yes, the L is just an odd bit of Sun notation dating back to Java 1.0: in the JVM's internal type descriptors, Lsome/class/Name; denotes an object (reference) type, so the error message is simply spelling out the method's signature.
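For readers puzzled by the notation, here is a small stand-alone sketch (the class and method names are made up purely for illustration) showing the JVM itself producing this kind of descriptor:

import java.lang.reflect.Method;

public class DescriptorDemo {
    // A stand-in method with the same parameter shape as
    // LexicalizedParser.loadModel(String, String...)
    public static Object loadModelLike(String path, String... flags) {
        return null;
    }

    public static void main(String[] args) throws Exception {
        // The JVM names a String[] type "[Ljava.lang.String;":
        // "[" marks an array and "L...;" wraps an object (reference) type.
        System.out.println(String[].class.getName());

        // The message in the question uses the same notation for the whole method:
        // (Ljava/lang/String;[Ljava/lang/String;) are the parameter types and
        // Ledu/stanford/nlp/parser/lexparser/LexicalizedParser; is the return type.
        Method m = DescriptorDemo.class.getMethod("loadModelLike",
                String.class, String[].class);
        System.out.println(m);
    }
}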
LexicalizedParser.loadModel(String, String ...)
is a new method that was added to the parser, but it is not being found at runtime. I suspect this means that another, older version of the parser is on your classpath and is being picked up instead.
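If you want to confirm which jar the class is actually coming from before changing anything, a quick check (my own suggestion, not part of the original answer) is to print its code source:

import edu.stanford.nlp.parser.lexparser.LexicalizedParser;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the jar (or directory) that LexicalizedParser was loaded from.
        // If this is not the parser jar that ships with your stanford-corenlp
        // download, an older copy on the classpath is shadowing it.
        System.out.println(LexicalizedParser.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}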
Try this: in a shell outside of any IDE, run the following commands (adjust the path to stanford-corenlp as appropriate, and change the : to ; if you are on Windows):
javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref
The parser loads and your code runs correctly for me; it just needs a few print statements added so you can see what it did :-).
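As a follow-up to what the question actually asks for (turning "She" back into "Mary"), here is a rough sketch of how the coreference chains could be used once the pipeline runs. This is only an illustration using the dcoref CorefChain API from the same CoreNLP era as the question: the class name PronounReplacer is mine, and it merely reports which pronominal mentions could be swapped for each chain's representative mention rather than rebuilding the text.

import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.util.CoreMap;
import java.util.List;
import java.util.Map;

public class PronounReplacer {
    // Walks the coreference chains of an already-annotated document and reports,
    // for every pronominal mention, the representative mention it could be
    // replaced with ("She" -> "Mary" in the question's example).
    public static void report(Annotation document) {
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);
        Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
        for (CorefChain chain : graph.values()) {
            String representative = chain.getRepresentativeMention().mentionSpan;
            for (CorefChain.CorefMention mention : chain.getMentionsInTextualOrder()) {
                // sentNum and startIndex in the chains are 1-based
                List<CoreLabel> tokens =
                        sentences.get(mention.sentNum - 1).get(TokensAnnotation.class);
                String pos = tokens.get(mention.startIndex - 1)
                        .get(PartOfSpeechAnnotation.class);
                if (pos.startsWith("PRP")) {  // personal and possessive pronouns
                    System.out.println("Sentence " + mention.sentNum + ": replace \""
                            + mention.mentionSpan + "\" with \"" + representative + "\"");
                }
            }
        }
    }
}

It can be called right after pipeline.annotate(document) in the Coref class above, for example as PronounReplacer.report(document);.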