Frequency of ngrams (strings) in a tokenized text

Date: 2018-04-03 01:02:28

Tags: string python-3.x list nltk n-gram

I have a list of unique ngrams (a list called ngramlist) and an ngram-tokenized text (a list called ngrams). I want to construct a new vector, freqlist, where each element of freqlist is the fraction of the elements of ngrams that are equal to the corresponding element of ngramlist. I wrote the following code, which gives the correct output, but I am wondering whether there is a way to optimize it:

freqlist = [
    sum(int(ngram == ngram_candidate)
        for ngram_candidate in ngrams) / len(ngrams)
    for ngram in ngramlist
]

I imagine there is a function in nltk or elsewhere that does this faster, but I am not sure which one.

Thanks!

Edit: ngrams is made from the output of nltk.util.ngrams, and ngramlist is just a list made from the set of all found ngrams.

Edit2:

Here is reproducible code to test the freqlist line (the rest of the code is not really something I am worried about):

from nltk.util import ngrams
import wikipedia
import nltk
import pandas as pd

articles = ['New York City','Moscow','Beijing']
tokenizer = nltk.tokenize.TreebankWordTokenizer()

data = {'article': [], 'treebank_tokenizer': []}
for article in articles:
    data['article'].append(wikipedia.page(article).content)
    data['treebank_tokenizer'].append(tokenizer.tokenize(data['article'][-1]))

df = pd.DataFrame(data)

df['ngrams-3'] = df['treebank_tokenizer'].map(
    lambda x: [' '.join(t) for t in ngrams(x, 3)])

ngramlist = list(set(
    [trigram for sublist in df['ngrams-3'].tolist() for trigram in sublist]))

df['freqlist'] = df['ngrams-3'].map(
    lambda ngrams_: [sum(int(ngram == ngram_candidate)
                         for ngram_candidate in ngrams_) / len(ngrams_)
                     for ngram in ngramlist])

2 Answers:

Answer 0 (score: 3)

You can optimize this by precomputing some quantities and using a Counter. This will be especially useful if ngramlist contains most of the elements found in ngrams.

freqlist = [
    sum(int(ngram == ngram_candidate)
        for ngram_candidate in ngrams) / len(ngrams)
    for ngram in ngramlist
]

You don't need to iterate over ngrams every time you check an ngram. A single pass over ngrams beforehand makes this algorithm O(n) instead of the O(n²) it is now. Keep in mind that shorter code is not necessarily better or more efficient code:

from collections import Counter
...

counter = Counter(ngrams)
size = len(ngrams)
freqlist = [counter.get(ngram, 0) / size for ngram in ngramlist]
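
As a side note, nltk ships its own counting class: FreqDist is a Counter subclass whose freq() method already returns the count divided by the total number of samples, so the normalization comes for free. A minimal sketch of the same idea, assuming the ngrams and ngramlist from the question:

from nltk import FreqDist

fdist = FreqDist(ngrams)  # one O(n) counting pass, just like Counter
freqlist = [fdist.freq(ngram) for ngram in ngramlist]  # freq() divides by N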

To use the Counter version properly inside your map, you have to write a def function instead of a lambda:

def count_ngrams(ngrams):
    counter = Counter(ngrams)
    size = len(ngrams)
    freqlist = [counter.get(ngram, 0) / size for ngram in ngramlist]
    return freqlist

df['freqlist'] = df['ngrams-3'].map(count_ngrams)
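
A quick sanity check on a toy list (hypothetical values, just to confirm this matches the original quadratic version):

ngramlist = ['a b c', 'b c d', 'x y z']
tokens = ['a b c', 'b c d', 'a b c', 'c d e']

counter = Counter(tokens)
size = len(tokens)
print([counter.get(ngram, 0) / size for ngram in ngramlist])
# [0.5, 0.25, 0.0] -- identical to the nested-loop version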

Answer 1 (score: 2)

First of all, don't pollute imported functions by overwriting them and using them as variables. Keep ngrams as the name of the function and use something else for the variable:

import time
from functools import partial
from itertools import chain
from collections import Counter

import wikipedia

import pandas as pd

from nltk import word_tokenize
from nltk.util import ngrams

Next, the steps before the line you asked about in your original question may be somewhat inefficient as well. You can clean them up, make them easier to read, and time them like this:

# Downloading the articles.
titles = ['New York City','Moscow','Beijing']
start = time.time()
df = pd.DataFrame({'article':[wikipedia.page(title).content for title in titles]})
end = time.time()
print('Downloading wikipedia articles took', end-start, 'seconds')

Then:

# Tokenizing the articles
start = time.time()
df['tokens'] = df['article'].apply(word_tokenize)
end = time.time()
print('Tokenizing articles took', end-start, 'seconds')

Then:

# Extracting trigrams.
trigrams = partial(ngrams, n=3)
start = time.time()
# There's no need to flatten them to strings, you could just use list()
df['trigrams'] = df['tokens'].apply(lambda x: list(trigrams(x)))
end = time.time()
print('Extracting trigrams took', end-start, 'seconds')
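
For reference, trigrams here yields plain tuples rather than joined strings (a toy illustration):

list(trigrams(['New', 'York', 'City', 'is', 'big']))
# [('New', 'York', 'City'), ('York', 'City', 'is'), ('City', 'is', 'big')]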

And finally, on to the last line:

# Instead of a set, we use a Counter here because 
# we can use an intersection between Counter objects later.
# see https://stackoverflow.com/questions/44012479/intersection-of-two-counters
all_trigrams = Counter(chain(*df['trigrams']))

# More often than not, you don't need to keep all the 
# zeros in the vectors (aka dense vector), 
# you could actually get the non-zero sparse vectors 
# as a dict as such
df['trigrams_count'] = df['trigrams'].apply(lambda x: Counter(x) & all_trigrams)

# Now to normalize the count, simply do:
def featurize(list_of_ngrams):
    nonzero_features = Counter(list_of_ngrams) & all_trigrams
    total = len(list_of_ngrams)
    return {ng:count/total for ng, count in nonzero_features.items()}

df['trigrams_count_normalize'] = df['trigrams'].apply(featurize)
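
To make the Counter intersection in featurize concrete, here is a toy run (hypothetical trigrams; featurize and Counter as defined above):

toy = [('a', 'b', 'c'), ('b', 'c', 'd'), ('a', 'b', 'c')]
all_trigrams = Counter(toy)   # pretend this was built from all articles
print(featurize(toy))
# {('a', 'b', 'c'): 0.6666666666666666, ('b', 'c', 'd'): 0.3333333333333333}

Since all_trigrams is built from every article, its counts are always at least the per-article counts, so the intersection only filters keys; it never clips the counts.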