I have a set of text documents and want to count the bigrams across all of them.
First, I create a list in which each element is itself a list of the words of one particular document:
print(doc_clean)
# [['This', 'is', 'the', 'first', 'doc'], ['And', 'this', 'is', 'the', 'second'], ..]
Then I extract the bigrams from the documents and store them in a list:
bigrams = []
for doc in doc_clean:
    bigrams.extend([(doc[i-1], doc[i])
                    for i in range(1, len(doc))])
print(bigrams)
# [('This', 'is'), ('is', 'the'), ..]
Now I want to count the frequency of each unique bigram:
bigrams_freq = [(b, bigrams.count(b))
                for b in set(bigrams)]
In principle this approach works, but it is much too slow. The bigrams list is quite large, with about 50 million entries in total and roughly 300,000 unique bigrams, and on my laptop the current approach takes far too long to run.
Thanks for your help!
Answer 0: (score: 2)
You could try the following approach:
from collections import Counter
from nltk import word_tokenize
from nltk.util import ngrams
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
doc_1 = 'Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter'
doc_2 = 'Convolutional Neural Networks take advantage of the fact that the input consists of images and they constrain the architecture in a more sensible way.'
doc_3 = 'In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons arranged in 3 dimensions: width, height, depth.'
docs = [doc_1, doc_2, doc_3]
# Join all documents into a single lowercase string
docs = (' '.join(filter(None, docs))).lower()
tokens = word_tokenize(docs)
# Drop stopwords, keep alphabetic tokens only, and lemmatize
tokens = [t for t in tokens if t not in stop_words]
word_l = WordNetLemmatizer()
tokens = [word_l.lemmatize(t) for t in tokens if t.isalpha()]
# Build the bigrams and count them in a single pass
bi_grams = list(ngrams(tokens, 2))
counter = Counter(bi_grams)
counter.most_common(5)
Out[82]:
[(('neural', 'network'), 4),
 (('convolutional', 'neural'), 2),
 (('network', 'similar'), 1),
 (('similar', 'ordinary'), 1),
 (('ordinary', 'neural'), 1)]
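The main speed-up here comes from collections.Counter, which tallies all bigrams in a single pass instead of calling bigrams.count(b) once per unique bigram (each call scans the whole list). If the documents are already tokenized as in doc_clean from the question, Counter can also be applied directly, without the NLTK preprocessing. A minimal sketch, assuming doc_clean is available:

from collections import Counter

# Assumes doc_clean is the list of tokenized documents from the question.
bigram_counts = Counter()
for doc in doc_clean:
    # zip pairs each word with its successor, yielding this document's bigrams.
    bigram_counts.update(zip(doc, doc[1:]))

# Same pairs as bigrams_freq, but computed in one pass over the data.
print(bigram_counts.most_common(5))

Counting per document this way also avoids the spurious bigrams that joining all documents into one string creates across document boundaries.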