I'm trying to run the Stanford CoreNLP package to get coreference resolution.
This is the command for running coref:
java -cp <jars_in_corenlp> -Xmx8g edu.stanford.nlp.dcoref.SieveCoreferenceSystem -props <properties file>
I ran it like this -
java - cp "*" -Xmx2g edu.stanford.nlp.dcoref.SieveCoreferenceSystem -props annotators = pos, lemma, ner, parse dcoref.postprocessing = true dcoref.maxdist = -1 -file input.txt
java - cp "*" -Xmx2g edu.stanford.nlp.dcoref.SieveCoreferenceSystem -props annotators = pos, lemma, ner, parse dcoref.postprocessing = true dcoref.maxdist = -1 input.txt
which gives the error -
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
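The JVM error above is most likely caused by the space in `- cp`, which makes `java` treat `-` and `cp` as two separate arguments. Separately, `-props` expects the path to a properties file rather than inline `key = value` pairs on the command line. A minimal sketch of such a file, using only the property names already present in the commands above (the filename `coref.properties` is my assumption):

```
# coref.properties (hypothetical filename) -- properties copied from the commands above
annotators = pos, lemma, ner, parse
dcoref.postprocessing = true
dcoref.maxdist = -1
```

With that file in place, the invocation would be `java -cp "*" -Xmx2g edu.stanford.nlp.dcoref.SieveCoreferenceSystem -props coref.properties`.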
The following way works, but it loads all the jar files, which takes extra time; I want to run it with the shortest possible execution time.
java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse,dcoref -file input.txt
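If startup time is the concern, one option (an assumption on my part, not verified against your setup) is to list only the jars the pipeline actually needs on the classpath instead of using `"*"`. The exact jar names depend on the CoreNLP version downloaded; the names below are illustrative for a 3.x distribution:

```
# Jar names are illustrative -- check the actual filenames in your CoreNLP download.
# On Windows, use ';' instead of ':' as the classpath separator.
java -cp "stanford-corenlp-3.9.2.jar:stanford-corenlp-3.9.2-models.jar" -Xmx2g \
  edu.stanford.nlp.pipeline.StanfordCoreNLP \
  -annotators tokenize,ssplit,pos,lemma,ner,parse,dcoref -file input.txt
```

Note that much of the startup cost usually comes from loading the models themselves (taggers, parser, NER), so trimming the classpath may only help modestly.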