I am running the CoreNLP server for the German models with the command below. The models were downloaded as jars and are on the classpath, but the server does not output German tags or parses; it only loads the English models:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -props ./german.prop
Contents of german.prop:
annotators = tokenize, ssplit, pos, depparse, parse
tokenize.language = de
pos.model = edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger
ner.model = edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz
ner.applyNumericClassifiers = false
ner.useSUTime = false
parse.model = edu/stanford/nlp/models/lexparser/germanFactored.ser.gz
depparse.model = edu/stanford/nlp/models/parser/nndep/UD_German.gz
Client command:
wget --post-data 'Meine Mutter ist aus Wuppertal' 'localhost:9000/?properties={"tokenize.whitespace":"true","annotators":"tokenize,ssplit,pos,depparse,parse","outputFormat":"text","tokenize.language":"de",
"pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger",
"depparse.model":"edu/stanford/nlp/models/parser/nndep/UD_German.gz",
"parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz"
}' -O -
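One common pitfall with this kind of request is that the JSON in the `properties` query parameter is sent without percent-encoding, so braces, quotes, and spaces can mangle the query string and leave the server falling back to its (English) defaults. A minimal sketch of building the same request with Python's standard library, which encodes the properties correctly — the server URL, text, and model paths are taken from the question above, and this assumes a CoreNLP server is already running on port 9000:

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

# German pipeline properties from the question; the paths assume the German
# model jars are on the server's classpath.
props = {
    "annotators": "tokenize,ssplit,pos,depparse,parse",
    "outputFormat": "text",
    "tokenize.language": "de",
    "pos.model": "edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger",
    "depparse.model": "edu/stanford/nlp/models/parser/nndep/UD_German.gz",
    "parse.model": "edu/stanford/nlp/models/lexparser/germanFactored.ser.gz",
}

# Percent-encode the JSON so braces, quotes, and spaces survive the query string.
url = "http://localhost:9000/?properties=" + quote(json.dumps(props))

def annotate(text: str) -> str:
    """POST raw text to the CoreNLP server and return the response body."""
    req = Request(url, data=text.encode("utf-8"))
    with urlopen(req) as resp:
        return resp.read().decode("utf-8")

# annotate("Meine Mutter ist aus Wuppertal")  # requires a running server
```

The same encoding can be done on the shell side by passing the JSON through `urlencode` or by letting a client library build the query string, rather than pasting raw JSON into the URL.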
I get the following incorrect output:
{"dep":"dep","governor":4,"governorGloss":"aus","dependent":5,"dependentGloss":"Wuppertal"}],"openie":[{"subject":"Wuppertal","subjectSpan":[4,5],"relation":"is ist aus of","relationSpan":[2,4],"object":"Meine Mutter","objectSpan":[0,2]}],"tokens":[{"index":1,"word":"Meine","originalText":"Meine","lemma":"Meine","characterOffsetBegin":1,"characterOffsetEnd":6,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":2,"word":"Mutter","originalText":"Mutter","lemma":"Mutter","characterOffsetBegin":7,"characterOffsetEnd":13,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":3,"word":"ist","originalText":"ist","lemma":"ist","characterOffsetBegin":14,"characterOffsetEnd":17,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":4,"word":"aus","originalText":"aus","lemma":"aus","characterOffsetBegin":18,"characterOffsetEnd":21,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":5,"word":"Wuppertal","originalText":"Wuppertal","lemma":"Wuppertal","characterOffsetBegin":22,"characterOffsetEnd":31,"pos":"NNP","ner":"LOCATI…
In the server log I see that it loads the English models, even though it listed the German models at startup:
pos.model=edu/stanford/nlp/models/pos-tagger/ge...
parse.model=edu/stanford/nlp/models/lexparser/ger...
tokenize.language=de
depparse.model=edu/stanford/nlp/models/parser/nndep/...
annotators=tokenize, ssplit, pos, depparse, parse
Starting server on port 9000 with timeout of 5000 milliseconds.
StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[/203.:61563] API call w/annotators tokenize,ssplit,pos,depparse
Die Katze liegt auf der Matte.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.5 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
PreComputed 100000, Elapsed Time: 1.396 (s)
The following question about the same error with the French models points to the same issue, but even after following it, the problem in the server case remains unsolved. I can get correct German output without the server, using only the edu.stanford.nlp.pipeline.StanfordCoreNLP command; the server command, edu.stanford.nlp.pipeline.StanfordCoreNLPServer, defaults to English:
French dependency parsing using CoreNLP
Answer (score: 1)
There were some issues with running foreign languages on the server.
It should work if you use the latest version of the code, available from our GitHub site: https://github.com/stanfordnlp/CoreNLP
That page includes instructions for building a jar from the latest code.
I ran this command on some sample German text, and it appears to work correctly:
wget --post-data '<sample german text>' 'localhost:9000/?properties={"pipelineLanguage":"german","annotators":"tokenize,ssplit,pos,ner,parse", "parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz","tokenize.language":"de","pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger", "ner.model":"edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz", "ner.applyNumericClassifiers":"false", "ner.useSUTime":"false"}' -O -
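A quick way to check which model actually answered a request is to inspect the `pos` fields in the returned JSON: the German HGC tagger emits STTS tags (NE, VAFIN, APPR, ...), while the English defaults emit Penn Treebank tags (NNP, VBZ, IN, ...). A small sketch of that check — the token fragment is adapted from the question's (incorrect, English-tagged) output, and the STTS tag list is a heuristic subset, not an exhaustive tagset:

```python
import json

# STTS tags that do not occur in the Penn Treebank tagset; seeing any of
# them means the German model produced the tagging.
STTS_ONLY = {"NE", "VAFIN", "VVFIN", "APPR", "ART", "ADJA", "PPOSAT", "KON"}

def tagged_in_german(tokens):
    """Heuristic: True if at least one POS tag is STTS-specific."""
    return any(t["pos"] in STTS_ONLY for t in tokens)

# Token fragment adapted from the question's English-tagged output:
english_tagged = json.loads(
    '[{"word":"Meine","pos":"NNP"},{"word":"Mutter","pos":"NNP"},'
    '{"word":"ist","pos":"NN"},{"word":"aus","pos":"NN"},'
    '{"word":"Wuppertal","pos":"NNP"}]'
)
print(tagged_in_german(english_tagged))  # False: the English defaults answered
```

If the German models loaded correctly, a sentence like the one in the question should instead come back with tags such as VAFIN for "ist" and NE for "Wuppertal".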
I should note that the neural-network German dependency parser is completely broken at the moment; we are working on a fix, so for now you should use only the German settings I specified in that command.
More information on the server is available here: http://stanfordnlp.github.io/CoreNLP/corenlp-server.html