I'm using NLTK to analyze some classic texts, and I'm running into trouble tokenizing the text sentence by sentence. For example, here's a snippet from Moby Dick:
import nltk
sent_tokenize = nltk.data.load('tokenizers/punkt/english.pickle')
'''
(Chapter 16)
A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but
that's a rather cold and clammy reception in the winter time, ain't it, Mrs. Hussey?"
'''
sample = 'A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but that\'s a rather cold and clammy reception in the winter time, ain\'t it, Mrs. Hussey?"'
print("\n-----\n".join(sent_tokenize.tokenize(sample)))
'''
OUTPUT
"A clam for supper?
-----
a cold clam; is THAT what you mean, Mrs.
-----
Hussey?
-----
" says I, "but that\'s a rather cold and clammy reception in the winter time, ain\'t it, Mrs.
-----
Hussey?
-----
"
'''
I don't expect perfection here, given that Melville's syntax is a bit archaic, but NLTK ought to be able to handle terminal double quotes and titles like "Mrs." Since the tokenizer is the product of an unsupervised training algorithm, though, I can't figure out how to tinker with it.
Does anyone have a recommendation for a better sentence tokenizer? I'd prefer a simple heuristic I can hack rather than having to train my own parser.
Answer 0 (score: 44)
You need to provide the tokenizer with a list of abbreviations, like so:
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
punkt_param = PunktParameters()
punkt_param.abbrev_types = set(['dr', 'vs', 'mr', 'mrs', 'prof', 'inc'])
sentence_splitter = PunktSentenceTokenizer(punkt_param)
text = "is THAT what you mean, Mrs. Hussey?"
sentences = sentence_splitter.tokenize(text)
sentences is now:
['is THAT what you mean, Mrs. Hussey?']
Update: This does not work if the last word of a sentence has an apostrophe or a quotation mark attached to it (like Hussey?'). A quick-and-dirty workaround is to put a space between the sentence-ending punctuation (.!?) and the quotation mark that follows it:
text = text.replace('?"', '? "').replace('!"', '! "').replace('."', '. "')
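Putting the two pieces together on the Moby Dick sample, the sketch below (hypothetical variable names, assuming only the PunktSentenceTokenizer / PunktParameters API used in this answer) combines the abbreviation list with the quote-spacing workaround:

```python
from nltk.tokenize.punkt import PunktParameters, PunktSentenceTokenizer

# Build a tokenizer that knows common honorifics are abbreviations.
punkt_param = PunktParameters()
punkt_param.abbrev_types = set(['mr', 'mrs', 'dr', 'prof'])
splitter = PunktSentenceTokenizer(punkt_param)

sample = ('"A clam for supper? a cold clam; is THAT what you mean, '
          'Mrs. Hussey?" says I.')

# Quick-and-dirty workaround: separate closing quotes from the
# sentence-ending punctuation so the splitter can see the boundary.
spaced = sample.replace('?"', '? "').replace('!"', '! "').replace('."', '. "')

sentences = splitter.tokenize(spaced)
for s in sentences:
    print(s)
```

Because mrs is registered as an abbreviation, no boundary is inserted after Mrs., while the spaced quotes still let the splitter break at ? and !.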
Answer 1 (score: 28)
You can modify NLTK's pre-trained English sentence tokenizer to recognize more abbreviations by adding them to the set _params.abbrev_types. For example:
extra_abbreviations = ['dr', 'vs', 'mr', 'mrs', 'prof', 'inc', 'i.e']
sentence_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)
Note that the abbreviations must be specified without the final period, but must include any internal periods, as in 'i.e' above. For details on the other tokenizer parameters, see the relevant documentation.
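If the pre-trained english.pickle model is not at hand, the same abbreviation-set idea can be tried on a bare PunktSentenceTokenizer; this is a minimal sketch under that assumption (the pre-trained model above will generally segment real prose better):

```python
from nltk.tokenize.punkt import PunktParameters, PunktSentenceTokenizer

# Internal periods are kept, the final period is dropped: 'i.e', not 'i.e.'.
params = PunktParameters()
params.abbrev_types = set(['i.e', 'e.g', 'mrs'])
tokenizer = PunktSentenceTokenizer(params)

text = ('The tokenizer handles abbreviations, i.e. shortened forms, '
        'correctly. This is the next sentence.')
sentences = tokenizer.tokenize(text)
```

With 'i.e' in the abbreviation set, the period in "i.e." is not treated as a sentence boundary, while the ordinary period after "correctly" still is.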
Answer 2 (score: 7)
You can tell the PunktSentenceTokenizer.tokenize method to include the "terminal" double quote with the rest of the sentence by setting the realign_boundaries parameter to True. See the code below for an example.
I don't know of a clean way to prevent text like Mrs. Hussey from being split into two sentences. However, here is a hack which:
mangles Mrs. Hussey into Mrs._Hussey,
then splits the text with sent_tokenize.tokenize,
then unmangles Mrs._Hussey back into Mrs. Hussey.
I wish I knew a better way, but this may work in a pinch.
import nltk
import re
import functools
mangle = functools.partial(re.sub, r'([MD]rs?[.]) ([A-Z])', r'\1_\2')
unmangle = functools.partial(re.sub, r'([MD]rs?[.])_([A-Z])', r'\1 \2')
sent_tokenize = nltk.data.load('tokenizers/punkt/english.pickle')
sample = '''"A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but that\'s a rather cold and clammy reception in the winter time, ain\'t it, Mrs. Hussey?"'''
sample = mangle(sample)
sentences = [unmangle(sent) for sent in sent_tokenize.tokenize(
    sample, realign_boundaries=True)]
print("\n-----\n".join(sentences))
This yields:
"A clam for supper?
-----
a cold clam; is THAT what you mean, Mrs. Hussey?"
-----
says I, "but that's a rather cold and clammy reception in the winter time, ain't it, Mrs. Hussey?"
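The mangle/unmangle pair can be sanity-checked on its own, independent of the tokenizer; a minimal sketch:

```python
import functools
import re

# [MD]rs?[.] matches 'Mr.', 'Mrs.', 'Dr.', and 'Drs.'; the space before a
# capitalized surname is swapped for an underscore, and back again.
mangle = functools.partial(re.sub, r'([MD]rs?[.]) ([A-Z])', r'\1_\2')
unmangle = functools.partial(re.sub, r'([MD]rs?[.])_([A-Z])', r'\1 \2')

original = "ain't it, Mrs. Hussey?"
mangled = mangle(original)    # "ain't it, Mrs._Hussey?"
restored = unmangle(mangled)
```

One caveat: if the text ever contains a literal form like Mrs._Name already, the round trip is no longer exact.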
Answer 3 (score: 2)
So I ran into a similar problem and tried out vpekar's solution above.
Maybe mine is some sort of edge case, but I observed the same behavior after applying the replacements. However, when I tried replacing the punctuation with the quotation marks placed before them, I got the output I was looking for. Presumably, strict adherence to MLA style matters less here than keeping the original quote as a single sentence.
To be more clear:
text = text.replace('?"', '"?').replace('!"', '"!').replace('."', '".')
If MLA style matters, though, you can always go back and reverse these changes wherever necessary.
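As a sketch (with hypothetical helper names), the swap and its reversal can be wrapped in a pair of functions:

```python
def quotes_inside(text):
    # Move closing quotes before the sentence-ending punctuation
    # so the tokenizer keeps each quote as a single sentence.
    return text.replace('?"', '"?').replace('!"', '"!').replace('."', '".')

def quotes_outside(text):
    # Reverse the swap to restore MLA-style punctuation.
    return text.replace('"?', '?"').replace('"!', '!"').replace('".', '."')

line = 'ain\'t it, Mrs. Hussey?"'
swapped = quotes_inside(line)
```

Note the reversal is only exact when the original text never contains the swapped forms (like "? or ".) to begin with.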