How to parse multiple sentences from a text file with the Stanford dependency parser?

Asked: 2015-12-13 09:14:11

Tags: parsing python-3.x nltk stanford-nlp triples

I have a text file with many lines and I want to parse all of its sentences. It looks like all the sentences are read in, but only the first one gets parsed, and I can't see where I'm going wrong.

import nltk
from nltk.parse.stanford import StanfordDependencyParser

dependency_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

txtfile = open('sample.txt', encoding="latin-1")
s = txtfile.read()
print(s)

result = dependency_parser.raw_parse(s)
for i in result:
    print(list(i.triples()))

But it only gives me the parse of the first sentence, not the others. Any help?

'i like this computer'
'The great Buddha, the .....'
'My Ashford experience .... great experience.'


[[(('i', 'VBZ'), 'nsubj', ("'", 'POS')), (('i', 'VBZ'), 'nmod', ('computer', 'NN')), (('computer', 'NN'), 'case', ('like', 'IN')), (('computer', 'NN'), 'det', ('this', 'DT')), (('computer', 'NN'), 'case', ("'", 'POS'))]]

2 Answers:

Answer 0 (score: 1):

You have to split the text first. Right now you are parsing the literal text you posted, quotation marks and all. This part of the parse result makes that obvious: ("'", 'POS')

To do that, it looks like you could run ast.literal_eval on each line. Note that apostrophes (in words like "don't") will break that formatting, and you would have to handle them yourself, e.g. with something like line = line[1:-1]:

import ast
from nltk.parse.stanford import StanfordDependencyParser

dependency_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

with open('sample.txt', encoding="latin-1") as f:
    # each line of the file is a quoted string, so literal_eval strips the quotes
    lines = [ast.literal_eval(line) for line in f.readlines()]

# parse each sentence separately and collect the results
parsed_lines = [dependency_parser.raw_parse(line) for line in lines]

# now parsed_lines should contain the parsed lines from the file
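
Each call to raw_parse returns an iterator of DependencyGraph objects, so a minimal sketch of printing the triples for every parsed sentence (assuming the parsed_lines list built above) might look like:

# walk over the collected parses and print the dependency triples of each one
for parsed in parsed_lines:
    for graph in parsed:
        print(list(graph.triples()))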

Answer 1 (score: 0):

Try:

from nltk.parse.stanford import StanfordDependencyParser

dependency_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

with open('sample.txt', encoding="latin-1") as fin:
    sents = fin.readlines()

# raw_parse_sents yields one iterator of DependencyGraphs per input sentence
result = dependency_parser.raw_parse_sents(sents)
for parses in result:
    for parse in parses:
        print(list(parse.triples()))

Do check the docstrings or the demo code in the repository for examples; they're usually very helpful.
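
If the file contains running prose rather than one sentence per line, readlines() won't split it into sentences. One possible approach (a sketch, assuming NLTK's sent_tokenize and its punkt model are available) is to tokenize the raw text into sentences first and then hand them to raw_parse_sents:

from nltk import sent_tokenize
from nltk.parse.stanford import StanfordDependencyParser

dependency_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

with open('sample.txt', encoding="latin-1") as fin:
    text = fin.read()

# split the raw text into sentences, then parse them as a batch
sents = sent_tokenize(text)
for parses in dependency_parser.raw_parse_sents(sents):
    for parse in parses:
        print(list(parse.triples()))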