I have generated a grammar from the ATIS grammar, and now I want to add some rules of my own, in particular new terminals for my sentences. Can this be done?
import nltk
grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
I want to add more terminals to grammar.
Answer (score: 5)
In short: yes, but it will be painful. It is easier to rewrite your CFG as a text file, using atis.cfg as the base, and then read the new CFG text file back in. That is easier than reassigning each new terminal to the correct non-terminals object by object.
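For example, here is a minimal sketch of that route. The two extra rules, singapore -> 'singapore' and NOUN_NP -> singapore, are only placeholders that anticipate the worked example below, and resetting _start mirrors the string round-trip at the end of this answer:

import nltk

# Take the stock ATIS grammar, append our own rules as plain text and
# build a brand-new grammar from the result. str(grammar) is a header
# line followed by one production per line, so the header is dropped.
original = nltk.data.load('grammars/large_grammars/atis.cfg')
rules = str(original).split('\n')[1:]
rules += ["singapore -> 'singapore'", "NOUN_NP -> singapore"]

new_grammar = nltk.grammar.CFG.fromstring('\n'.join(rules))
new_grammar._start = original.start()  # keep SIGMA as the start symbol

parser = nltk.ChartParser(new_grammar)
sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
for tree in parser.parse(sent):
    print tree
    break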
The long answer follows.
First, let's look at a CFG grammar in NLTK and what it contains:
>>> import nltk
>>> g = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> dir(g)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', '__weakref__', '_all_unary_are_lexical', '_calculate_grammar_forms', '_calculate_indexes', '_calculate_leftcorners', '_categories', '_empty_index', '_immediate_leftcorner_categories', '_immediate_leftcorner_words', '_is_lexical', '_is_nonlexical', '_leftcorner_parents', '_leftcorner_words', '_leftcorners', '_lexical_index', '_lhs_index', '_max_len', '_min_len', '_productions', '_rhs_index', '_start', 'check_coverage', 'fromstring', 'is_binarised', 'is_chomsky_normal_form', 'is_flexible_chomsky_normal_form', 'is_leftcorner', 'is_lexical', 'is_nonempty', 'is_nonlexical', 'leftcorner_parents', 'leftcorners', 'max_len', 'min_len', 'productions', 'start', 'unicode_repr']
For details, see https://github.com/nltk/nltk/blob/develop/nltk/grammar.py#L421
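Most of these are internal attributes; for this answer the useful public accessors are start() and productions(). A quick check, continuing the session above:

>>> g.start()            # the grammar's start symbol, the root of the parses further down
SIGMA
>>> g.productions()[-1]  # every rule is a Production object
zero -> 'zero'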
The terminals and non-terminals live in Production objects, see https://github.com/nltk/nltk/blob/develop/nltk/grammar.py#L236, i.e.:

A grammar production. Each production maps a single symbol on the "left-hand side" to a sequence of symbols on the "right-hand side". (In the case of context-free productions, the left-hand side must be a Nonterminal, and the right-hand side is a sequence of terminals and Nonterminals.) "terminals" can be any immutable hashable object that is not a Nonterminal. Typically, terminals are strings representing words, such as "dog" or "under".
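For illustration, a Production is built from a Nonterminal left-hand side and a right-hand-side sequence (CITY here is a made-up symbol, not part of the ATIS grammar):

>>> toy = nltk.grammar.Production(nltk.grammar.Nonterminal('CITY'), ['boston'])
>>> toy
CITY -> 'boston'
>>> toy.lhs(), toy.rhs()
(CITY, ('boston',))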
Let's see how the grammar stores its productions:

>>> type(g._productions)
<type 'list'>
>>> g._productions[-1]
zero -> 'zero'
>>> type(g._productions[-1])
<class 'nltk.grammar.Production'>

So it seems we can create nltk.grammar.Production objects and append them to grammar._productions. But before modifying anything, let's try parsing with the original grammar:

>>> import nltk
>>> original_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> original_parser = nltk.ChartParser(original_grammar)
>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'detroit', '.']
>>> for i in original_parser.parse(sent):
...     print i
...     break
...
(SIGMA
  (IMPR_VB
    (VERB_VB (show show))
    (NP_PPO
      (pt_pron_ppo me)
      (NAPPOS_NP (NOUN_NP (northwest northwest))))
    (NP_NNS (NOUN_NNS (pt207 flights)) (PREP_IN (to to)))
    (AVPNP_NP (NOUN_NP (detroit detroit)))
    (pt_char_per .)))
The original grammar does not have the terminal singapore:

>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
>>> for i in original_parser.parse(sent):
...     print i
...     break
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
return iter(self.parse_all(sent))
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
chart = self.chart_parse(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
self._grammar.check_coverage(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
"input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".
Before we try to add singapore to the grammar, let's see how detroit is stored in the grammar:

>>> original_grammar._rhs_index['detroit']
[detroit -> 'detroit']
>>> type(original_grammar._rhs_index['detroit'])
<type 'list'>
>>> type(original_grammar._rhs_index['detroit'][0])
<class 'nltk.grammar.Production'>
>>> original_grammar._rhs_index['detroit'][0]._lhs
detroit
>>> original_grammar._rhs_index['detroit'][0]._rhs
(u'detroit',)
>>> type(original_grammar._rhs_index['detroit'][0]._lhs)
<class 'nltk.grammar.Nonterminal'>
>>> type(original_grammar._rhs_index['detroit'][0]._rhs)
<type 'tuple'>
>>> original_grammar._rhs_index[original_grammar._rhs_index['detroit'][0]._lhs]
[NOUN_NP -> detroit, NOUN_NP -> detroit minneapolis toronto]
Now we can try to recreate the same Production objects for singapore:

# First let's create Non-terminal for singapore.
>>> nltk.grammar.Nonterminal('singapore')
singapore
>>> lhs = nltk.grammar.Nonterminal('singapore')
>>> rhs = [u'singapore']
# Now we can create the Production for singapore.
>>> singapore_production = nltk.grammar.Production(lhs, rhs)
# Now let's try to add this Production the grammar's list of production
>>> new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> new_grammar._productions.append(singapore_production)
But this alone still won't work: appending the terminal production does not link it up with the rest of the CFG, so singapore still cannot be parsed:
>>> new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> new_grammar._productions.append(singapore_production)
>>> new_parser = nltk.ChartParser(new_grammar)
>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
>>> new_parser.parse(sent)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
return iter(self.parse_all(sent))
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
chart = self.chart_parse(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
self._grammar.check_coverage(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
"input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".
From the lookup below (the same one we did for detroit earlier), we know that singapore should behave like detroit, and detroit leads to the left-hand side (LHS) NOUN_NP:

>>> original_grammar._rhs_index[original_grammar._rhs_index['detroit'][0]._lhs]
[NOUN_NP -> detroit, NOUN_NP -> detroit minneapolis toronto]
So what we need to do is add another production whose LHS is the NOUN_NP non-terminal and whose RHS is our singapore non-terminal, i.e. NOUN_NP -> singapore. To recap, so far we have:

>>> lhs = nltk.grammar.Nonterminal('singapore')
>>> rhs = [u'singapore']
>>> singapore_production = nltk.grammar.Production(lhs, rhs)
>>> new_grammar._productions.append(singapore_production)
Now let's add the new production for NOUN_NP -> singapore:

lhs2 = nltk.grammar.Nonterminal('NOUN_NP')
new_grammar._productions.append(nltk.grammar.Production(lhs2, [lhs]))
Now we should expect our parser to work:
sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
print new_grammar.productions()[2091]
print new_grammar.productions()[-1]
new_parser = nltk.ChartParser(new_grammar)
for i in new_parser.parse(sent):
print i
[OUT]:
Traceback (most recent call last):
File "test.py", line 31, in <module>
for i in new_parser.parse(sent):
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
return iter(self.parse_all(sent))
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
chart = self.chart_parse(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
self._grammar.check_coverage(tokens)
File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
"input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".
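Why is singapore still not covered even though we appended both productions? A purely illustrative check, poking at the private lexical index that check_coverage consults (the attribute shows up in the dir() listing near the top), suggests the indexes are built once, when the grammar object is constructed:

# The lexical index is built once, inside the CFG constructor, so rules
# appended to _productions afterwards never make it into the index.
print 'detroit' in new_grammar._lexical_index     # True: indexed when the grammar was loaded
print 'singapore' in new_grammar._lexical_index   # False: appended after the index was built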
So the grammar built at load time still does not recognize the new terminal and non-terminal we appended afterwards. Let's try a hack: dump the new grammar to a string and create a newer grammar from that output string:
import nltk
lhs = nltk.grammar.Nonterminal('singapore')
rhs = [u'singapore']
singapore_production = nltk.grammar.Production(lhs, rhs)
new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
new_grammar._productions.append(singapore_production)
lhs2 = nltk.grammar.Nonterminal('NOUN_NP')
new_grammar._productions.append(nltk.grammar.Production(lhs2, [lhs]))
# Create newer grammar from new_grammar's string
newer_grammar = nltk.grammar.CFG.fromstring(str(new_grammar).split('\n')[1:])
# Reassign new_grammar's start symbol to newer_grammar !!!
newer_grammar._start = new_grammar.start()
newer_grammar
sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
print newer_grammar.productions()[2091]
print newer_grammar.productions()[-1]
newer_parser = nltk.ChartParser(newer_grammar)
for i in newer_parser.parse(sent):
print i
break
[OUT]:
(SIGMA
  (IMPR_VB
    (VERB_VB (show show))
    (NP_PPO
      (pt_pron_ppo me)
      (NAPPOS_NP (NOUN_NP (northwest northwest))))
    (NP_NNS (NOUN_NNS (pt207 flights)) (PREP_IN (to to)))
    (AVPNP_NP (NOUN_NP (singapore singapore)))
    (pt_char_per .)))
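As the short answer said, the less painful long-term fix is to keep the extra rules in the grammar text itself. A sketch of saving the patched grammar so it can simply be reloaded later (my_atis.cfg is a made-up filename):

# Write the patched grammar back out as plain CFG text, dropping the
# "Grammar with N productions" header line that str() prepends.
with open('my_atis.cfg', 'w') as fout:
    fout.write('\n'.join(str(newer_grammar).split('\n')[1:]))

# Later it can be re-read directly (reset _start again if the first
# listed production is not SIGMA, as above).
reloaded = nltk.grammar.CFG.fromstring(open('my_atis.cfg').read())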