import nltk
from nltk import word_tokenize, pos_tag
text = "John Rose Center is very beautiful place and i want to go there with Barbara Palvin. Also there are stores like Adidas ,Nike ,Reebok Center."
tagged_text = pos_tag(text.split())
grammar = "NP:{<NNP>+}"
cp = nltk.RegexpParser(grammar)
result = cp.parse(tagged_text)
print(result)
Output:
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin./NNP)
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas/NNP ,Nike/NNP ,Reebok/NNP Center./NNP))
The grammar I use for chunking matches only NNP tags, but when a comma is stuck to a word (e.g. ",Nike"), those words still end up in the same chunk. I want my chunks to look like this:
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin./NNP)
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas,/NNP)
  (NP Nike,/NNP)
  (NP Reebok/NNP Center./NNP))
What should I write in "grammar =", or can I edit the output into the form above? As you can see, I am parsing only proper nouns for my named-entity project. Please help me.
Answer (score: 2)
Use word_tokenize(string) instead of string.split(). The problem is not your grammar: split() leaves the commas and periods glued to the words, so ",Nike" gets tagged as a single NNP token. word_tokenize separates the punctuation into its own tokens, and your existing grammar then chunks the proper nouns correctly:
>>> import nltk
>>> from nltk.chunk.util import tagstr2tree
>>> from nltk import word_tokenize, pos_tag
>>> text = "John Rose Center is very beautiful place and i want to go there with Barbara Palvin. Also there are stores like Adidas ,Nike ,Reebok Center."
>>> tagged_text = pos_tag(word_tokenize(text))
>>>
>>> grammar = "NP:{<NNP>+}"
>>>
>>> cp = nltk.RegexpParser(grammar)
>>> result = cp.parse(tagged_text)
>>>
>>> print(result)
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin/NNP)
  ./.
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas/NNP)
  ,/,
  (NP Nike/NNP)
  ,/,
  (NP Reebok/NNP Center/NNP)
  ./.)
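Since the goal is a named-entity project, a follow-up step you will likely want is pulling the NP chunks out of the tree as plain strings. The sketch below is an assumption-laden illustration, not part of the original answer: the tagged token list is hard-coded from the output above so the example runs without the POS-tagger models, and only nltk.RegexpParser and the Tree API are used.

```python
# Minimal sketch: extract NP chunks from a RegexpParser tree as strings.
# The tagged tokens are copied from the tagger output shown above.
import nltk

tagged = [
    ("John", "NNP"), ("Rose", "NNP"), ("Center", "NNP"),
    ("is", "VBZ"), ("very", "RB"), ("beautiful", "JJ"), ("place", "NN"),
    ("and", "CC"), ("i", "NN"), ("want", "VBP"), ("to", "TO"), ("go", "VB"),
    ("there", "RB"), ("with", "IN"),
    ("Barbara", "NNP"), ("Palvin", "NNP"), (".", "."),
    ("Also", "RB"), ("there", "EX"), ("are", "VBP"), ("stores", "NNS"),
    ("like", "IN"), ("Adidas", "NNP"), (",", ","), ("Nike", "NNP"),
    (",", ","), ("Reebok", "NNP"), ("Center", "NNP"), (".", "."),
]

cp = nltk.RegexpParser("NP: {<NNP>+}")
tree = cp.parse(tagged)

# Walk the tree and join the words of every NP subtree.
chunks = [
    " ".join(word for word, tag in subtree.leaves())
    for subtree in tree.subtrees()
    if subtree.label() == "NP"
]
print(chunks)
# ['John Rose Center', 'Barbara Palvin', 'Adidas', 'Nike', 'Reebok Center']
```

Because the punctuation is tokenized separately, each comma breaks the run of NNP tags and the `{<NNP>+}` rule starts a new chunk, which is exactly the segmentation you asked for.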