I am using the Stanford parser with NLTK in Python, and followed Stanford Parser and NLTK to set up the Stanford NLP libraries.
from nltk.parse.stanford import StanfordParser
from nltk.parse.stanford import StanfordDependencyParser

parser = StanfordParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
dep_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

one = "John sees Bill"

# Constituency parse
parsed_Sentence = parser.raw_parse(one)
# GUI
for line in parsed_Sentence:
    print line
    line.draw()

# Dependency parse, converted to trees
parsed_Sentence = [parse.tree() for parse in dep_parser.raw_parse(one)]
print parsed_Sentence
# GUI
for line in parsed_Sentence:
    print line
    line.draw()
I get wrong parse and dependency trees, as in the example below: it treats 'sees' as a noun instead of a verb.
What should I do? When I change the sentence it parses perfectly fine, e.g. (one = 'John meets Bill'). The correct output for that sentence can be seen here: correct output of parse tree.
Answer (score: 6):
Once again, no model is perfect (see Python NLTK pos_tag not returning the correct part-of-speech tag) ;P
You could try a "more accurate" parser by using the NeuralDependencyParser.
First set the parser up properly with the correct environment variables (see Stanford Parser and NLTK and https://gist.github.com/alvations/e1df0ba227e542955a8a), then:
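A minimal sketch of that environment setup (the unpack location below is an assumption; point it at wherever you extracted the Stanford CoreNLP distribution):

import os

# Assumed unpack location of the Stanford CoreNLP distribution -- adjust to your machine
stanford_dir = os.path.expanduser('~/stanford-corenlp-full-2015-12-09')

# nltk.parse.stanford locates the jars through CLASSPATH / STANFORD_MODELS,
# so put every jar from the distribution on the classpath before creating the parser
os.environ['CLASSPATH'] = os.pathsep.join(
    os.path.join(stanford_dir, name)
    for name in os.listdir(stanford_dir)
    if name.endswith('.jar'))
os.environ['STANFORD_MODELS'] = stanford_dir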
>>> from nltk.internals import find_jars_within_path
>>> from nltk.parse.stanford import StanfordNeuralDependencyParser
>>> parser = StanfordNeuralDependencyParser(model_path="edu/stanford/nlp/models/parser/nndep/english_UD.gz")
>>> # directory containing the CoreNLP jars, taken from the jar already on the classpath
>>> stanford_dir = parser._classpath[0].rpartition('/')[0]
>>> # the neural parser also needs the slf4j logging jar on the classpath
>>> slf4j_jar = stanford_dir + '/slf4j-api.jar'
>>> parser._classpath = list(parser._classpath) + [slf4j_jar]
>>> # give the JVM enough memory to load the neural model
>>> parser.java_options = '-mx5000m'
>>> sent = "John sees Bill"
>>> [parse.tree() for parse in parser.raw_parse(sent)]
[Tree('sees', ['John', 'Bill'])]
Note that the NeuralDependencyParser only produces dependency trees, as in the output above.
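For completeness, a small sketch of how to look at the labelled dependencies themselves rather than the bare tree, using the DependencyGraph objects that raw_parse() yields (it assumes the parser object set up above):

>>> for dep_graph in parser.raw_parse("John sees Bill"):
...     for triple in dep_graph.triples():   # ((governor, tag), relation, (dependent, tag))
...         print(triple)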