How do I get a parsed NLP Tree object from a bracketed parse string with nltk or spacy?

Asked: 2018-03-19 19:44:57

Tags: parsing nlp nltk stanford-nlp spacy

I have the sentence "You could say that they regularly catch a shower, which adds to their exhilaration and joie de vivre." and I can't manage to produce an NLP parse tree for it like the following example:

(ROOT (S (NP (PRP You)) (VP (MD could) (VP (VB say) (SBAR (IN that) (S (NP (PRP they)) (ADVP (RB regularly)) (VP (VB catch) (NP (NP (DT a) (NN shower)) (, ,) (SBAR (WHNP (WDT which)) (S (VP (VBZ adds) (PP (TO to) (NP (NP (PRP$ their) (NN exhilaration)) (CC and) (NP (FW joie) (FW de) (FW vivre))))))))))))) (. .)))

I would like to replicate the solution from this question https://stackoverflow.com/a/39320379, but I have a plain string sentence instead of an NLP Tree object.

By the way, I'm using Python 3.

2 Answers:

Answer 0 (score: 0):

Use the Tree.fromstring() method:

>>> from nltk import Tree
>>> parse = Tree.fromstring('(ROOT (S (NP (PRP You)) (VP (MD could) (VP (VB say) (SBAR (IN that) (S (NP (PRP they)) (ADVP (RB regularly)) (VP (VB catch) (NP (NP (DT a) (NN shower)) (, ,) (SBAR (WHNP (WDT which)) (S (VP (VBZ adds) (PP (TO to) (NP (NP (PRP$ their) (NN exhilaration)) (CC and) (NP (FW joie) (FW de) (FW vivre))))))))))))) (. .)))')

>>> parse
Tree('ROOT', [Tree('S', [Tree('NP', [Tree('PRP', ['You'])]), Tree('VP', [Tree('MD', ['could']), Tree('VP', [Tree('VB', ['say']), Tree('SBAR', [Tree('IN', ['that']), Tree('S', [Tree('NP', [Tree('PRP', ['they'])]), Tree('ADVP', [Tree('RB', ['regularly'])]), Tree('VP', [Tree('VB', ['catch']), Tree('NP', [Tree('NP', [Tree('DT', ['a']), Tree('NN', ['shower'])]), Tree(',', [',']), Tree('SBAR', [Tree('WHNP', [Tree('WDT', ['which'])]), Tree('S', [Tree('VP', [Tree('VBZ', ['adds']), Tree('PP', [Tree('TO', ['to']), Tree('NP', [Tree('NP', [Tree('PRP$', ['their']), Tree('NN', ['exhilaration'])]), Tree('CC', ['and']), Tree('NP', [Tree('FW', ['joie']), Tree('FW', ['de']), Tree('FW', ['vivre'])])])])])])])])])])])])]), Tree('.', ['.'])])])

>>> parse.pretty_print()
                                                       ROOT                                                             
                                                        |                                                                
                                                        S                                                               
  ______________________________________________________|_____________________________________________________________   
 |         VP                                                                                                         | 
 |     ____|___                                                                                                       |  
 |    |        VP                                                                                                     | 
 |    |     ___|____                                                                                                  |  
 |    |    |       SBAR                                                                                               | 
 |    |    |    ____|_______                                                                                          |  
 |    |    |   |            S                                                                                         | 
 |    |    |   |     _______|____________                                                                             |  
 |    |    |   |    |       |            VP                                                                           | 
 |    |    |   |    |       |        ____|______________                                                              |  
 |    |    |   |    |       |       |                   NP                                                            | 
 |    |    |   |    |       |       |         __________|__________                                                   |  
 |    |    |   |    |       |       |        |          |         SBAR                                                | 
 |    |    |   |    |       |       |        |          |      ____|____                                              |  
 |    |    |   |    |       |       |        |          |     |         S                                             | 
 |    |    |   |    |       |       |        |          |     |         |                                             |  
 |    |    |   |    |       |       |        |          |     |         VP                                            | 
 |    |    |   |    |       |       |        |          |     |     ____|____                                         |  
 |    |    |   |    |       |       |        |          |     |    |         PP                                       | 
 |    |    |   |    |       |       |        |          |     |    |     ____|_____________________                   |  
 |    |    |   |    |       |       |        |          |     |    |    |                          NP                 | 
 |    |    |   |    |       |       |        |          |     |    |    |          ________________|________          |  
 NP   |    |   |    NP     ADVP     |        NP         |    WHNP  |    |         NP               |        NP        | 
 |    |    |   |    |       |       |     ___|____      |     |    |    |     ____|_______         |    ____|____     |  
PRP   MD   VB  IN  PRP      RB      VB   DT       NN    ,    WDT  VBZ   TO  PRP$          NN       CC  FW   FW   FW   . 
 |    |    |   |    |       |       |    |        |     |     |    |    |    |            |        |   |    |    |    |  
You could say that they regularly catch  a      shower  ,   which adds  to their     exhilaration and joie  de vivre  . 
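From there, the usual nltk.Tree methods work on the resulting object. A quick sketch (reusing the parse object built above, with outputs shortened by slicing):

>>> parse.label()
'ROOT'
>>> parse.leaves()[:5]
['You', 'could', 'say', 'that', 'they']
>>> [' '.join(np.leaves()) for np in parse.subtrees(lambda t: t.label() == 'NP')][:2]
['You', 'they']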

Answer 1 (score: 0):

I'm going to assume there is a good reason why you need the dependency parse tree in that format. spaCy does a great job here: it uses a CNN (convolutional neural network) to produce its parses, is production-ready, and is very fast. You can run the following to see for yourself (and then read the spaCy documentation):

import spacy

nlp = spacy.load('en')

text = 'You could say that they regularly catch a shower , which adds to their exhilaration and joie de vivre.'

for token in nlp(text):
    print(token.dep_, end='\t')        # dependency label
    print(token.idx, end='\t')         # character offset of the token
    print(token.text, end='\t')        # token text
    print(token.tag_, end='\t')        # fine-grained POS tag
    print(token.head.text, end='\t')   # text of the token's head
    print(token.head.tag_, end='\t')   # POS tag of the head
    print(token.head.idx, end='\t')    # character offset of the head
    print(' '.join([w.text for w in token.subtree]), end='\t')   # the full subtree under this token
    print(' '.join([w.text for w in token.children]))            # the token's direct children
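If it helps, here is a rough sketch of one way to walk that dependency tree and print it with indentation, recursing from each sentence root over token.children (self-contained, same model and text as above):

import spacy

nlp = spacy.load('en')
doc = nlp('You could say that they regularly catch a shower , which adds to their exhilaration and joie de vivre.')

def print_tree(token, indent=0):
    # print the token with its dependency label and POS tag, then recurse into its children
    print('  ' * indent + '{} ({}/{})'.format(token.text, token.dep_, token.tag_))
    for child in token.children:
        print_tree(child, indent + 1)

for sent in doc.sents:
    # sent.root is the head of the dependency tree for that sentence
    print_tree(sent.root)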

Now you could write a routine that walks this tree and prints it out in whatever format you like (the sketch above shows the indices and one way to traverse the parse). Another thing you could do is somehow extract a CFG and then use NLTK to do the parsing and display it in the format you want. Here is an example from the NLTK book (modified to work with Python 3):

import nltk
from nltk import CFG

grammar = CFG.fromstring("""
  S -> NP VP
  VP -> V NP | V NP PP
  V -> "saw" | "ate"
  NP -> "John" | "Mary" | "Bob" | Det N | Det N PP
  Det -> "a" | "an" | "the" | "my"
  N -> "dog" | "cat" | "cookie" | "park"
  PP -> P NP
  P -> "in" | "on" | "by" | "with"
  """)

text = 'Mary saw Bob'

sent = text.split()
rd_parser = nltk.RecursiveDescentParser(grammar)
for p in rd_parser.parse(sent):
    print(p)
# (S (NP Mary) (VP (V saw) (NP Bob)))

However, as you can see, you have to define the CFG yourself (so if you try your original sentence in place of the example, it will complain about tokens that are not covered by the grammar).
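If you already have a bracketed parse like the one in the question, one way to avoid writing the grammar by hand is to read the productions off that tree with Tree.productions() and build a CFG from them. This is only a sketch: the induced grammar only covers the words that appear in that one tree.

from nltk import Tree, Nonterminal, CFG, ChartParser

bracketed = '(ROOT (S (NP (PRP You)) (VP (MD could) (VP (VB say) (SBAR (IN that) (S (NP (PRP they)) (ADVP (RB regularly)) (VP (VB catch) (NP (NP (DT a) (NN shower)) (, ,) (SBAR (WHNP (WDT which)) (S (VP (VBZ adds) (PP (TO to) (NP (NP (PRP$ their) (NN exhilaration)) (CC and) (NP (FW joie) (FW de) (FW vivre))))))))))))) (. .)))'

tree = Tree.fromstring(bracketed)
grammar = CFG(Nonterminal('ROOT'), tree.productions())  # grammar induced from the single parse tree

parser = ChartParser(grammar)
tokens = 'You could say that they regularly catch a shower , which adds to their exhilaration and joie de vivre .'.split()
for p in parser.parse(tokens):
    print(p)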

It seems that the easiest way to get the format you want is to use Stanford's NLP parser. Taken from this SO question (sorry, I haven't tested it):

from nltk.parse.stanford import StanfordParser  # requires the Stanford parser jars to be installed and on the classpath

parser = StanfordParser(model_path='edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz')
parsed = parser.raw_parse('Jack payed up to 5% more for each unit')
for line in parsed:
    print(line, end=' ') # This will print all in one line, as desired

I haven't tested this because I didn't have the time to install the Stanford parser, which can be a somewhat tedious process (compared to installing a Python module), that is, assuming you are looking for a Python solution.
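A possibly less painful route (also untested here) is NLTK's CoreNLPParser, which talks to a locally running CoreNLP server instead of wrapping the jars directly. Roughly:

# assumes a CoreNLP server is already running locally, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
from nltk.parse.corenlp import CoreNLPParser

parser = CoreNLPParser(url='http://localhost:9000')
tree, = parser.raw_parse('You could say that they regularly catch a shower, '
                         'which adds to their exhilaration and joie de vivre.')
tree.pretty_print()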

I hope this helps, and I'm sorry it isn't a direct answer.