NLTK tokenization - a faster way?

Date: 2017-01-28 16:26:42

Tags: python time-complexity nltk tokenize frequency

I have a method that takes a String parameter and uses NLTK to break the String down into sentences, then into words. Afterwards it converts each word to lowercase, and finally creates a frequency dictionary of the words.

import nltk
from collections import Counter

def freq(string):
    f = Counter()
    sentence_list = nltk.tokenize.sent_tokenize(string)
    for sentence in sentence_list:
        words = nltk.word_tokenize(sentence)
        words = [word.lower() for word in words]
        for word in words:
            f[word] += 1
    return f

I am supposed to optimize the above code further to speed up the preprocessing time, and I am unsure how to do so. The return value should obviously be exactly identical to the above, so I am expected to use nltk even though it is not explicitly required.

Is there any way to speed up the code above? Thanks.

3 Answers:

Answer 0 (score: 11)

If you only want a flat list of tokens, note that word_tokenize implicitly calls sent_tokenize; see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L98

_treebank_word_tokenize = TreebankWordTokenizer().tokenize
def word_tokenize(text, language='english'):
    """
    Return a tokenized copy of *text*,
    using NLTK's recommended word tokenizer
    (currently :class:`.TreebankWordTokenizer`
    along with :class:`.PunktSentenceTokenizer`
    for the specified language).
    :param text: text to split into sentences
    :param language: the model name in the Punkt corpus
    """
    return [token for sent in sent_tokenize(text, language)
            for token in _treebank_word_tokenize(sent)]

Using the Brown corpus as an example, with Counter(word_tokenize(string_corpus)):

>>> import time
>>> from collections import Counter
>>> from nltk.corpus import brown
>>> from nltk import sent_tokenize, word_tokenize
>>> string_corpus = brown.raw() # Plaintext, str type.
>>> start = time.time(); fdist = Counter(word_tokenize(string_corpus)); end = time.time() - start
>>> end
12.662328958511353
>>> fdist.most_common(5)
[(u',', 116672), (u'/', 89031), (u'the/at', 62288), (u'.', 60646), (u'./', 48812)]
>>> sum(fdist.values())
1423314
It took ~12 seconds on my machine (without saving the tokenized corpus) for roughly 1.4 million words, with these specs:

alvas@ubi:~$ cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 69
model name  : Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
stepping    : 1
microcode   : 0x17
cpu MHz     : 1600.027
cache size  : 3072 KB
physical id : 0
siblings    : 4
core id     : 0
cpu cores   : 2

$ cat /proc/meminfo
MemTotal:       12004468 kB

Saving the tokenized corpus first, with tokenized_corpus = [word_tokenize(sent) for sent in sent_tokenize(string_corpus)], and then using Counter(chain(*tokenized_corpus)):

>>> from itertools import chain
>>> start = time.time(); tokenized_corpus = [word_tokenize(sent) for sent in sent_tokenize(string_corpus)]; fdist = Counter(chain(*tokenized_corpus)); end = time.time() - start
>>> end
16.421464920043945

Using ToktokTokenizer():

>>> from collections import Counter
>>> import time
>>> from itertools import chain
>>> from nltk.corpus import brown
>>> from nltk import sent_tokenize, word_tokenize
>>> from nltk.tokenize import ToktokTokenizer
>>> toktok = ToktokTokenizer()
>>> string_corpus = brown.raw()

>>> start = time.time(); tokenized_corpus = [toktok.tokenize(sent) for sent in sent_tokenize(string_corpus)]; fdist = Counter(chain(*tokenized_corpus)); end = time.time() - start 
>>> end
10.00472116470337

Using MosesTokenizer():

>>> from nltk.tokenize.moses import MosesTokenizer
>>> moses = MosesTokenizer()
>>> start = time.time(); tokenized_corpus = [moses.tokenize(sent) for sent in sent_tokenize(string_corpus)]; fdist = Counter(chain(*tokenized_corpus)); end = time.time() - start 
>>> end
30.783339023590088
>>> start = time.time(); tokenized_corpus = [moses.tokenize(sent) for sent in sent_tokenize(string_corpus)]; fdist = Counter(chain(*tokenized_corpus)); end = time.time() - start 
>>> end
30.559681177139282

Why use MosesTokenizer?

It is implemented in such a way that the tokens can be reversed back into a string, i.e. it can "detokenize".

>>> from nltk.tokenize.moses import MosesTokenizer, MosesDetokenizer
>>> t, d = MosesTokenizer(), MosesDetokenizer()
>>> sent = "This ain't funny. It's actually hillarious, yet double Ls. | [] < > [ ] & You're gonna shake it off? Don't?"
>>> expected_tokens = [u'This', u'ain', u'&apos;t', u'funny.', u'It', u'&apos;s', u'actually', u'hillarious', u',', u'yet', u'double', u'Ls.', u'&#124;', u'&#91;', u'&#93;', u'&lt;', u'&gt;', u'&#91;', u'&#93;', u'&amp;', u'You', u'&apos;re', u'gonna', u'shake', u'it', u'off', u'?', u'Don', u'&apos;t', u'?']
>>> expected_detokens = "This ain't funny. It's actually hillarious, yet double Ls. | [] < > [] & You're gonna shake it off? Don't?"
>>> tokens = t.tokenize(sent)
>>> tokens == expected_tokens
True
>>> detokens = d.detokenize(tokens)
>>> " ".join(detokens) == expected_detokens
True

Using ReppTokenizer():

>>> from nltk.tokenize.repp import ReppTokenizer
>>> repp = ReppTokenizer('/home/alvas/repp')
>>> start = time.time(); sentences = sent_tokenize(string_corpus); tokenized_corpus = repp.tokenize_sents(sentences); fdist = Counter(chain(*tokenized_corpus)); end = time.time() - start
>>> end
76.44129395484924

Why use ReppTokenizer?

It returns the offsets of the tokens within the original string.

>>> sents = ['Tokenization is widely regarded as a solved problem due to the high accuracy that rulebased tokenizers achieve.' ,
... 'But rule-based tokenizers are hard to maintain and their rules language specific.' ,
... 'We evaluated our method on three languages and obtained error rates of 0.27% (English), 0.35% (Dutch) and 0.76% (Italian) for our best models.'
... ]
>>> tokenizer = ReppTokenizer('/home/alvas/repp/') # doctest: +SKIP
>>> for sent in sents:                             # doctest: +SKIP
...     tokenizer.tokenize(sent)                   # doctest: +SKIP
... 
(u'Tokenization', u'is', u'widely', u'regarded', u'as', u'a', u'solved', u'problem', u'due', u'to', u'the', u'high', u'accuracy', u'that', u'rulebased', u'tokenizers', u'achieve', u'.')
(u'But', u'rule-based', u'tokenizers', u'are', u'hard', u'to', u'maintain', u'and', u'their', u'rules', u'language', u'specific', u'.')
(u'We', u'evaluated', u'our', u'method', u'on', u'three', u'languages', u'and', u'obtained', u'error', u'rates', u'of', u'0.27', u'%', u'(', u'English', u')', u',', u'0.35', u'%', u'(', u'Dutch', u')', u'and', u'0.76', u'%', u'(', u'Italian', u')', u'for', u'our', u'best', u'models', u'.')
>>> for sent in tokenizer.tokenize_sents(sents): 
...     print sent                               
... 
(u'Tokenization', u'is', u'widely', u'regarded', u'as', u'a', u'solved', u'problem', u'due', u'to', u'the', u'high', u'accuracy', u'that', u'rulebased', u'tokenizers', u'achieve', u'.')
(u'But', u'rule-based', u'tokenizers', u'are', u'hard', u'to', u'maintain', u'and', u'their', u'rules', u'language', u'specific', u'.')
(u'We', u'evaluated', u'our', u'method', u'on', u'three', u'languages', u'and', u'obtained', u'error', u'rates', u'of', u'0.27', u'%', u'(', u'English', u')', u',', u'0.35', u'%', u'(', u'Dutch', u')', u'and', u'0.76', u'%', u'(', u'Italian', u')', u'for', u'our', u'best', u'models', u'.')
>>> for sent in tokenizer.tokenize_sents(sents, keep_token_positions=True): 
...     print sent
... 
[(u'Tokenization', 0, 12), (u'is', 13, 15), (u'widely', 16, 22), (u'regarded', 23, 31), (u'as', 32, 34), (u'a', 35, 36), (u'solved', 37, 43), (u'problem', 44, 51), (u'due', 52, 55), (u'to', 56, 58), (u'the', 59, 62), (u'high', 63, 67), (u'accuracy', 68, 76), (u'that', 77, 81), (u'rulebased', 82, 91), (u'tokenizers', 92, 102), (u'achieve', 103, 110), (u'.', 110, 111)]
[(u'But', 0, 3), (u'rule-based', 4, 14), (u'tokenizers', 15, 25), (u'are', 26, 29), (u'hard', 30, 34), (u'to', 35, 37), (u'maintain', 38, 46), (u'and', 47, 50), (u'their', 51, 56), (u'rules', 57, 62), (u'language', 63, 71), (u'specific', 72, 80), (u'.', 80, 81)]
[(u'We', 0, 2), (u'evaluated', 3, 12), (u'our', 13, 16), (u'method', 17, 23), (u'on', 24, 26), (u'three', 27, 32), (u'languages', 33, 42), (u'and', 43, 46), (u'obtained', 47, 55), (u'error', 56, 61), (u'rates', 62, 67), (u'of', 68, 70), (u'0.27', 71, 75), (u'%', 75, 76), (u'(', 77, 78), (u'English', 78, 85), (u')', 85, 86), (u',', 86, 87), (u'0.35', 88, 92), (u'%', 92, 93), (u'(', 94, 95), (u'Dutch', 95, 100), (u')', 100, 101), (u'and', 102, 105), (u'0.76', 106, 110), (u'%', 110, 111), (u'(', 112, 113), (u'Italian', 113, 120), (u')', 120, 121), (u'for', 122, 125), (u'our', 126, 129), (u'best', 130, 134), (u'models', 135, 141), (u'.', 141, 142)]

TL;DR

Advantages of the different tokenizers:

  • word_tokenize() implicitly calls sent_tokenize()
  • ToktokTokenizer() is the fastest
  • MosesTokenizer() is able to detokenize text
  • ReppTokenizer() is able to provide token offsets

Q: Is there a fast tokenizer that can detokenize, provide token offsets, and also do sentence tokenization, all within NLTK?

A: I don't think so; try gensim or spacy.
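
As a rough illustration of the spaCy route (my own sketch, not part of the original answer; it assumes a spaCy release where spacy.blank is available):

import spacy
from collections import Counter

nlp = spacy.blank("en")   # tokenizer-only pipeline, no statistical models required
doc = nlp(u"This ain't funny. It's actually hilarious.")

fdist = Counter(tok.text.lower() for tok in doc)                         # token frequencies
offsets = [(tok.text, tok.idx, tok.idx + len(tok.text)) for tok in doc]  # character offsets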

Answer 1 (score: 5)

Unnecessary list creation is evil

Your code is implicitly creating a lot of potentially very long list instances that don't need to be there, for example:

words = [word.lower() for word in words]

Using the list comprehension ([...]) syntax creates a list of length n for the n tokens found in your input, but all you want to do is get the frequency of each token, not actually store them:

f[word] += 1

Therefore, you should use a generator instead:

words = (word.lower() for word in words)

Likewise, nltk.tokenize.sent_tokenize and nltk.tokenize.word_tokenize both appear to produce lists as output, which is again unnecessary; try to use a lower-level function such as nltk.tokenize.api.StringTokenizer.span_tokenize, which merely generates an iterator yielding token offsets into your input stream, i.e. pairs of indices into the input string representing each token.

A better solution

Here is an example that does not use any intermediate lists:

def freq(string):
    '''
    @param string: The string to get token counts for. Note that this should already have been normalized if you wish it to be so.
    @return: A new Counter instance representing the frequency of each token found in the input string.
    '''
    spans = nltk.tokenize.WhitespaceTokenizer().span_tokenize(string)   
    # Yield the relevant slice of the input string representing each individual token in the sequence
    tokens = (string[begin : end] for (begin, end) in spans)
    return Counter(tokens)
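
Since this version leaves normalization to the caller (unlike the original freq(), which lowercased every token), you can keep the output comparable by lowercasing the input first. A small usage sketch:

>>> text = "The cat sat. The cat slept."
>>> freq(text.lower()).most_common(2)
[('the', 2), ('cat', 2)]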

Disclaimer: I haven't profiled this, so it is possible that e.g. the NLTK folks have made word_tokenize blazingly fast while neglecting span_tokenize; always profile for your application.
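
One rough way to do that on your own data is to time both versions with timeit; this is only a sketch, where freq_original stands for the question's implementation (renamed here), freq is the span-based version above, and sample_text is a made-up stand-in for your corpus:

import timeit

sample_text = "This is a sentence. Here is another one. " * 1000  # stand-in corpus

list_based = timeit.timeit(lambda: freq_original(sample_text), number=10)
span_based = timeit.timeit(lambda: freq(sample_text), number=10)
print("list-based: %.2fs  span-based: %.2fs" % (list_based, span_based))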

TL;DR

Don't use lists when generators suffice: every time you create a list just to throw it away after using it once, God kills a kitten.
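
A quick way to see the difference (numbers vary by platform; this is just an illustrative sketch):

import sys

as_list = [n * n for n in range(10 ** 6)]   # materializes a million elements up front
as_gen = (n * n for n in range(10 ** 6))    # yields lazily, stores almost nothing

print(sys.getsizeof(as_list))   # on the order of megabytes
print(sys.getsizeof(as_gen))    # ~100 bytes, regardless of length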

Answer 2 (score: 0)

In addition to the tokenizers above, wordpunct_tokenize also did the job for me. It works especially well for text-similarity tasks; I replaced jieba.lcut(s) with this function and got better speed with the same accuracy.

>>> from nltk.tokenize import wordpunct_tokenize
>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me two of them.\n\nThanks.'''
>>> wordpunct_tokenize(s)
['Good', 'muffins', 'cost', '$', '3', '.', '88', 'in', 'New', 'York', '.',
 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

Link to the documentation.
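
Tying this back to the original question, a frequency counter built on wordpunct_tokenize could look roughly like this (my sketch, not part of the answer):

from collections import Counter
from nltk.tokenize import wordpunct_tokenize

def freq(string):
    # wordpunct_tokenize is purely regex-based (\w+|[^\w\s]+), so it skips the
    # sentence-splitting step and tends to be faster than word_tokenize.
    return Counter(word.lower() for word in wordpunct_tokenize(string))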