I'm using nltk to generate n-grams from sentences by first removing the given stop words. The problem is that nltk.pos_tag() is extremely slow, taking about 0.6 seconds per sentence on my CPU (Intel i7).
Output:
['The first time I went, and was completely taken by the live jazz band and atmosphere, I ordered the Lobster Cobb Salad.']
0.620481014252
["It's simply the best meal in NYC."]
0.640982151031
['You cannot go wrong at the Red Eye Grill.']
0.644664049149
Code:
for sentence in source:
    nltk_ngrams = None
    if stop_words is not None:
        start = time.time()
        sentence_pos = nltk.pos_tag(word_tokenize(sentence))
        print time.time() - start
        filtered_words = [word for (word, pos) in sentence_pos if pos not in stop_words]
    else:
        filtered_words = ngrams(sentence.split(), n)
Is it really supposed to be this slow, or am I doing something wrong here?
Answer 0 (score: 7)
Use pos_tag_sents to tag multiple sentences:
>>> import time
>>> from nltk.corpus import brown
>>> from nltk import pos_tag
>>> from nltk import pos_tag_sents
>>> sents = brown.sents()[:10]
>>> start = time.time(); pos_tag(sents[0]); print time.time() - start
0.934092998505
>>> start = time.time(); [pos_tag(s) for s in sents]; print time.time() - start
9.5061340332
>>> start = time.time(); pos_tag_sents(sents); print time.time() - start
0.939551115036
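Applied to the loop in the question, this might look like the sketch below (an illustration rather than the answerer's exact code, assuming source and stop_words are defined as in the question):

import time
from nltk import pos_tag_sents, word_tokenize

start = time.time()
# Tokenize every sentence up front, then tag them all in a single call,
# so the tagger model is loaded only once instead of once per sentence.
tokenized = [word_tokenize(sentence) for sentence in source]
tagged_sentences = pos_tag_sents(tokenized)
print(time.time() - start)

# Apply the stop-word filter to each tagged sentence.
filtered = [[word for (word, pos) in sentence_pos if pos not in stop_words]
            for sentence_pos in tagged_sentences]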
Answer 1 (score: 5)
nltk's pos_tag is defined as:
from nltk.tag.perceptron import PerceptronTagger

def pos_tag(tokens, tagset=None):
    tagger = PerceptronTagger()
    return _pos_tag(tokens, tagset, tagger)
So every call to pos_tag instantiates the perceptron tagger from scratch, which costs a lot of computation time. You can avoid this overhead by creating the tagger once and calling tagger.tag directly:
from nltk.tag.perceptron import PerceptronTagger
tagger = PerceptronTagger()
sentence_pos = tagger.tag(word_tokenize(sentence))
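As a minimal sketch, the question's loop with the tagger hoisted out could look like this (source, stop_words and n are assumed to be defined as in the question):

from nltk import word_tokenize
from nltk.util import ngrams
from nltk.tag.perceptron import PerceptronTagger

# Load the averaged-perceptron model once, before the loop.
tagger = PerceptronTagger()

for sentence in source:
    if stop_words is not None:
        # Reuse the already-loaded tagger instead of calling nltk.pos_tag each time.
        sentence_pos = tagger.tag(word_tokenize(sentence))
        filtered_words = [word for (word, pos) in sentence_pos if pos not in stop_words]
    else:
        filtered_words = ngrams(sentence.split(), n)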
Answer 2 (score: 0)
If you are looking for another POS tagger with fast performance in Python, you may want to try RDRPOSTagger. For example, on English POS tagging, its single-threaded Python implementation tags about 8K words/second on a Core 2 Duo 2.4 GHz machine, and you can get even faster tagging speed simply by using the multi-threaded mode. RDRPOSTagger achieves very competitive accuracy compared with state-of-the-art taggers and now provides pre-trained models for 40 languages. See the experimental results in this paper.