Why is the Stanford parser used through NLTK not parsing my sentence correctly?

Asked: 2016-01-23 20:52:03

Tags: python parsing nlp nltk stanford-nlp

I am using the Stanford parser with NLTK in Python, and I followed Stanford Parser and NLTK to set up the Stanford NLP libraries.

from nltk.parse.stanford import StanfordParser
from nltk.parse.stanford import StanfordDependencyParser

# Constituency and dependency parsers, both pointed at the English PCFG model
parser     = StanfordParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
dep_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

one = "John sees Bill"

# Constituency parse: print each tree and show it in the GUI
parsed_Sentence = parser.raw_parse(one)
for line in parsed_Sentence:
    print(line)
    line.draw()

# Dependency parse: convert each DependencyGraph to a tree, then print and draw it
parsed_Sentence = [parse.tree() for parse in dep_parser.raw_parse(one)]
print(parsed_Sentence)

for line in parsed_Sentence:
    print(line)
    line.draw()

I am getting an incorrect parse tree and dependency tree, as in the examples below: the parser treats 'sees' as a noun rather than a verb.

[Image: example parse tree]
[Image: example dependency tree]
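To see exactly which part-of-speech tag the parser assigns to 'sees', the tags can be read straight off the returned objects. This is only a sketch, assuming the parser and dep_parser objects above were constructed successfully:

# Sketch: inspect the POS tags produced by both parsers for the same sentence
for tree in parser.raw_parse(one):
    print(tree.pos())            # list of (word, tag) pairs, e.g. whether 'sees' comes out as NN or VBZ

for graph in dep_parser.raw_parse(one):
    for governor, relation, dependent in graph.triples():
        print((governor, relation, dependent))   # ((word, tag), relation, (word, tag))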

What should I do? When I change the sentence slightly, it parses completely correctly; the correct output for such a sentence can be viewed here: correct output of parse tree.

An example of the correct output is shown below:

[Image: correctly parsed tree]

[Image: correct dependency parse tree]

1 Answer:

Answer 0 (score: 6):

Once again, no model is perfect (see Python NLTK pos_tag not returning the correct part-of-speech tag) ;P
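(As a quick point of comparison with the linked question, the off-the-shelf NLTK tagger can be run on the same sentence. This is just a sketch, and the result depends on which tagger model is installed.)

import nltk

# Requires the NLTK tokenizer and tagger models to be downloaded via nltk.download()
tokens = nltk.word_tokenize("John sees Bill")
print(nltk.pos_tag(tokens))   # a different model, so it may tag 'sees' differently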

You can try a "more accurate" parser with the NeuralDependencyParser instead.

First set the parser up properly with the right environment variables (see Stanford Parser and NLTK and https://gist.github.com/alvations/e1df0ba227e542955a8a), then:

>>> from nltk.internals import find_jars_within_path
>>> from nltk.parse.stanford import StanfordNeuralDependencyParser
>>> parser = StanfordNeuralDependencyParser(model_path="edu/stanford/nlp/models/parser/nndep/english_UD.gz")
>>> stanford_dir = parser._classpath[0].rpartition('/')[0]
>>> slf4j_jar = stanford_dir + '/slf4j-api.jar'
>>> parser._classpath = list(parser._classpath) + [slf4j_jar]
>>> parser.java_options = '-mx5000m'
>>> sent = "John sees Bill"
>>> [parse.tree() for parse in parser.raw_parse(sent)]
[Tree('sees', ['John', 'Bill'])]

Note that the NeuralDependencyParser only produces dependency trees.

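For reference, the environment variables mentioned above can also be set from inside Python before the parser is constructed. The snippet below is only a sketch following the linked setup guide; the paths are placeholders for wherever the Stanford jars, models, and Java live on your machine.

>>> import os
>>> # Placeholder paths: point these at the unpacked Stanford distributions
>>> os.environ['CLASSPATH'] = '/path/to/stanford-parser-full:/path/to/stanford-corenlp-full'
>>> os.environ['STANFORD_MODELS'] = '/path/to/stanford-parser-full:/path/to/stanford-corenlp-full'
>>> os.environ['JAVAHOME'] = '/path/to/java'  # directory containing the java binary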