Generating char n-grams of size 2 to 4 in Python

Date: 2018-05-28 16:37:13

Tags: python nlp nltk n-gram

I want to generate char n-grams of size 2 to 4. This is what I have so far:

from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much']

for i in range(len(sentence)):
    for n in range(2, 4):  # note: this yields n = 2 and 3 only; range(2, 5) would give 2 to 4
        n_grams = ngrams(sentence[i].split(), n)
        for grams in n_grams:
            print(grams)

This gives me:

('i', 'have')
('have', 'an')
('an', 'apple')
('i', 'have', 'an')
('have', 'an', 'apple')
('i', 'like')
('like', 'apples')
('apples', 'so')
('so', 'much')
('i', 'like', 'apples')
('like', 'apples', 'so')
('apples', 'so', 'much')

What is the best way to do this? I have a very large amount of input data, and my solution uses nested for loops, so the complexity is fairly high and the algorithm takes a long time to finish.

2 Answers:

Answer 0 (score: 1):

Assuming you mean word n-grams rather than char n-grams: I'm not sure whether there can be duplicated sentences in your data, but you can try a set of the input sentences, and maybe a list comprehension. First, your original loop approach as a baseline:

%%timeit
from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
n_grams = []
for i in range(len(sentence)):
    for n in range(2, 4):
        for item in ngrams(sentence[i].split(), n):
            n_grams.append(item)

Result:

1000 loops, best of 3: 228 µs per loop

Just using a list comprehension, it gets some improvement:

%%timeit
from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
n_grams = [item for sent in sentence for n in range(2, 4) for item in ngrams(sent.split(), n)]

Result:

1000 loops, best of 3: 214 µs per loop

The other way is to use a set plus the list comprehension:

%%timeit
from nltk import ngrams
sentences = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
# use of set
sentence = set(sentences)
n_grams = [item for sent in sentence for n in range(2, 4) for item in ngrams(sent.split(), n)]

So if there are many duplicated sentences in there, it might help.
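
Note that set() keeps only the unique sentences, so duplicate counts are lost. If you need n-gram frequencies weighted by how often each sentence occurred, a minimal variation (an addition, not part of the original answer) is to count the distinct sentences first:

from collections import Counter
from nltk import ngrams

sent_counts = Counter(sentences)           # distinct sentence -> frequency
n_gram_counts = Counter()
for sent, freq in sent_counts.items():     # each distinct sentence is processed once
    for n in range(2, 4):
        for gram in ngrams(sent.split(), n):
            n_gram_counts[gram] += freq    # weight each n-gram by sentence frequency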

Answer 1 (score: 1):

>>> from nltk import everygrams
>>> from collections import Counter

>>> sents = ['i have an apple', 'i like apples so much']

# For character ngrams, use the string directly as 
# the input to `ngrams` or `everygrams`

# If you like to keep the keys as tuple of characters.
>>> Counter(everygrams(sents[0], 1, 4))
Counter({('a',): 3, (' ',): 3, ('e',): 2, ('p',): 2, (' ', 'a'): 2, ('n',): 1, ('v', 'e'): 1, (' ', 'a', 'n'): 1, ('v', 'e', ' '): 1, (' ', 'h', 'a'): 1, ('l', 'e'): 1, ('n', ' '): 1, ('p', 'p', 'l', 'e'): 1, ('e', ' ', 'a'): 1, ('a', 'v', 'e'): 1, ('p', 'l'): 1, ('a', 'v', 'e', ' '): 1, ('a', 'v'): 1, (' ', 'a', 'p'): 1, (' ', 'a', 'p', 'p'): 1, ('h', 'a'): 1, ('i', ' ', 'h', 'a'): 1, ('i',): 1, ('i', ' ', 'h'): 1, ('v', 'e', ' ', 'a'): 1, ('p', 'p', 'l'): 1, ('e', ' '): 1, ('p', 'p'): 1, (' ', 'a', 'n', ' '): 1, ('n', ' ', 'a', 'p'): 1, (' ', 'h', 'a', 'v'): 1, ('a', 'p', 'p', 'l'): 1, ('a', 'n', ' '): 1, (' ', 'h'): 1, ('n', ' ', 'a'): 1, ('a', 'n', ' ', 'a'): 1, ('a', 'p', 'p'): 1, ('h', 'a', 'v'): 1, ('a', 'n'): 1, ('v',): 1, ('h', 'a', 'v', 'e'): 1, ('h',): 1, ('a', 'p'): 1, ('i', ' '): 1, ('p', 'l', 'e'): 1, ('l',): 1, ('e', ' ', 'a', 'n'): 1})

# If you like the keys to be just the string.
>>> Counter(map(''.join,everygrams(sents[0], 1, 4)))
Counter({' ': 3, 'a': 3, ' a': 2, 'e': 2, 'p': 2, 'ppl': 1, 've': 1, ' h': 1, 'i ha': 1, 'an': 1, 'ap': 1, 'have': 1, 'av': 1, 'ave': 1, 'pp': 1, 'le': 1, 'n ap': 1, ' app': 1, ' an': 1, ' ap': 1, 'appl': 1, 'i h': 1, 'app': 1, 'pl': 1, 'an ': 1, 'pple': 1, 'e ': 1, 'e a': 1, 'ple': 1, 'e an': 1, 'i ': 1, 'ha': 1, 'n a': 1, 've a': 1, ' an ': 1, 'i': 1, 'h': 1, 'ave ': 1, 'l': 1, 'n': 1, 'an a': 1, ' hav': 1, 'n ': 1, 've ': 1, 'v': 1, ' ha': 1, 'hav': 1})


# If you want word ngrams:

>>> Counter(map(' '.join,everygrams(sents[0].split(), 1, 4)))
Counter({'have an': 1, 'apple': 1, 'i': 1, 'i have an': 1, 'i have an apple': 1, 'an': 1, 'have': 1, 'have an apple': 1, 'i have': 1, 'an apple': 1})

# Or using word_tokenize
>>> from nltk import word_tokenize
>>> Counter(map(' '.join,everygrams(word_tokenize(sents[0]), 1, 4)))
Counter({'have an': 1, 'apple': 1, 'i': 1, 'i have an': 1, 'i have an apple': 1, 'an': 1, 'have': 1, 'have an apple': 1, 'i have': 1, 'an apple': 1})
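
Since the question asked for sizes 2 to 4 specifically, everygrams takes those bounds directly as its min/max length arguments (this example is an addition, using the same API as above):

>>> Counter(map(' '.join, everygrams(word_tokenize(sents[0]), 2, 4)))
Counter({'i have': 1, 'have an': 1, 'an apple': 1, 'i have an': 1, 'have an apple': 1, 'i have an apple': 1})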

If speed is a concern, see Fast n-gram calculation.

A complexity of O(MN) is natural when you have M sentences and N n-gram orders to iterate over. Even everygrams iterates through the n-gram orders one by one.
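
For reference, n-grams can also be built without NLTK by zipping shifted slices of the sequence; a minimal dependency-free sketch (not from either answer) for character n-grams:

def char_ngrams(text, min_n=2, max_n=4):
    # zip n shifted views of the string to slide a window of width n
    for n in range(min_n, max_n + 1):
        yield from zip(*(text[i:] for i in range(n)))

grams = [''.join(g) for g in char_ngrams('apple')]
# ['ap', 'pp', 'pl', 'le', 'app', 'ppl', 'ple', 'appl', 'pple']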

I'm sure there are more efficient ways to compute n-grams, but I suspect you will run into memory problems before speed problems when it comes to n-grams at scale. In that case, I would suggest https://github.com/kpu/kenlm