I am learning natural language processing with NLTK. I came across some code that uses PunktSentenceTokenizer, and I cannot understand its actual purpose in that code. The code is:
import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")
custom_sent_tokenizer = PunktSentenceTokenizer(train_text) #A
tokenized = custom_sent_tokenizer.tokenize(sample_text) #B
def process_content():
    try:
        for i in tokenized[:5]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            print(tagged)
    except Exception as e:
        print(str(e))

process_content()
So why do we use PunktSentenceTokenizer, and what is happening in the lines marked A and B? I mean, there is a training text and a separate sample text, but why are two data sets needed just to get the part-of-speech tags? The lines marked A and B are the ones I cannot understand.
Answer 0 (score: 23)
PunktSentenceTokenizer is the abstract class for the default sentence tokenizer provided in NLTK, i.e. sent_tokenize(). It is an implementation of Unsupervised Multilingual Sentence Boundary Detection (Kiss and Strunk, 2005). See https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L79
Given a paragraph with multiple sentences, e.g.:
>>> from nltk.corpus import state_union
>>> train_text = state_union.raw("2005-GWBush.txt").split('\n')
>>> train_text[11]
u'Two weeks ago, I stood on the steps of this Capitol and renewed the commitment of our nation to the guiding ideal of liberty for all. This evening I will set forth policies to advance that ideal at home and around the world. '
You can use sent_tokenize():
>>> from nltk.tokenize import sent_tokenize
>>> sent_tokenize(train_text[11])
[u'Two weeks ago, I stood on the steps of this Capitol and renewed the commitment of our nation to the guiding ideal of liberty for all.', u'This evening I will set forth policies to advance that ideal at home and around the world. ']
>>> for sent in sent_tokenize(train_text[11]):
...     print(sent)
...     print('--------')
...
Two weeks ago, I stood on the steps of this Capitol and renewed the commitment of our nation to the guiding ideal of liberty for all.
--------
This evening I will set forth policies to advance that ideal at home and around the world.
--------
sent_tokenize() uses a pre-trained model from nltk_data/tokenizers/punkt/english.pickle. You can also specify other languages; the list of languages with pre-trained models available in NLTK is:
alvas@ubi:~/nltk_data/tokenizers/punkt$ ls
czech.pickle finnish.pickle norwegian.pickle slovene.pickle
danish.pickle french.pickle polish.pickle spanish.pickle
dutch.pickle german.pickle portuguese.pickle swedish.pickle
english.pickle greek.pickle PY3 turkish.pickle
estonian.pickle italian.pickle README
For text in another language, do the following:
>>> german_text = u"Die Orgellandschaft Südniedersachsen umfasst das Gebiet der Landkreise Goslar, Göttingen, Hameln-Pyrmont, Hildesheim, Holzminden, Northeim und Osterode am Harz sowie die Stadt Salzgitter. Über 70 historische Orgeln vom 17. bis 19. Jahrhundert sind in der südniedersächsischen Orgellandschaft vollständig oder in Teilen erhalten. "
>>> for sent in sent_tokenize(german_text, language='german'):
...     print(sent)
...     print('---------')
...
Die Orgellandschaft Südniedersachsen umfasst das Gebiet der Landkreise Goslar, Göttingen, Hameln-Pyrmont, Hildesheim, Holzminden, Northeim und Osterode am Harz sowie die Stadt Salzgitter.
---------
Über 70 historische Orgeln vom 17. bis 19. Jahrhundert sind in der südniedersächsischen Orgellandschaft vollständig oder in Teilen erhalten.
---------
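A pre-trained model can also be loaded directly from nltk_data with nltk.data.load(); a minimal sketch, assuming the punkt models have already been downloaded via nltk.download('punkt') and using a short illustrative German string:

import nltk.data

# Load the pre-trained, language-specific Punkt model straight from nltk_data.
german_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')

text = u"Über 70 historische Orgeln sind erhalten. Sie stammen vom 17. bis 19. Jahrhundert."
for sent in german_tokenizer.tokenize(text):
    print(sent)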
To train your own Punkt model, see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/punkt.py and training data format for nltk punkt.
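A minimal sketch of such training with the incremental PunktTrainer API (the one-shot PunktSentenceTokenizer(train_text) constructor from the question is the shorter equivalent; the tiny training string here is only a placeholder, a real corpus should be much larger):

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

# Placeholder training text; in practice feed in a large raw corpus.
training_corpus = "Dr. Smith went to Washington. He arrived at 3 p.m. and met Mr. Jones."

trainer = PunktTrainer()
trainer.train(training_corpus, finalize=False)  # may be called repeatedly with more text
trainer.finalize_training()

custom_tokenizer = PunktSentenceTokenizer(trainer.get_params())
print(custom_tokenizer.tokenize("Another text to split. It has two sentences."))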
Answer 1 (score: 13)
PunktSentenceTokenizer is a sentence boundary detection algorithm that must be trained before it can be used [1]. NLTK already includes a pre-trained version of PunktSentenceTokenizer.
So if you initialize the tokenizer without any arguments, it defaults to the pre-trained version:
In [1]: import nltk
In [2]: tokenizer = nltk.tokenize.punkt.PunktSentenceTokenizer()
In [3]: txt = """ This is one sentence. This is another sentence."""
In [4]: tokenizer.tokenize(txt)
Out[4]: [' This is one sentence.', 'This is another sentence.']
You can also provide your own training data to train the tokenizer before using it. The Punkt tokenizer uses an unsupervised algorithm, which means you simply train it with regular text.
custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
For most cases the pre-trained version works perfectly well, so you can simply initialize the tokenizer without providing any arguments.
So "what does all this have to do with POS tagging"? The NLTK POS tagger works on tokenized sentences, so you need to break your text into sentences and word tokens before you can POS-tag it; a minimal sketch of that pipeline follows.
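The sketch below chains the pre-trained tokenizers and tagger; the example sentence is just illustrative:

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

text = "Two weeks ago, I stood on the steps of this Capitol. This evening I will set forth policies."
for sentence in sent_tokenize(text):    # sentence boundary detection (Punkt)
    tokens = word_tokenize(sentence)    # split the sentence into word tokens
    print(nltk.pos_tag(tokens))         # part-of-speech tag the tokens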
[1] Kiss and Strunk, "Unsupervised Multilingual Sentence Boundary Detection"
Answer 2 (score: 1)
You can refer to the link below to understand the usage of PunktSentenceTokenizer. It vividly explains why PunktSentenceTokenizer is used rather than sent_tokenize() with regard to your case.
Answer 3 (score: 0)
def process_content(corpus):
    tokenized = PunktSentenceTokenizer().tokenize(corpus)
    try:
        for sent in tokenized:
            words = nltk.word_tokenize(sent)
            tagged = nltk.pos_tag(words)
            print(tagged)
    except Exception as e:
        print(str(e))

process_content(train_text)
Even without training it on any other text data, it works the same as the pre-trained version.