I want to count the occurrences of all bigrams (pairs of adjacent words) in a file using Python. I am dealing with very large files, so I am looking for an efficient way to do it. I tried using the count method with the regex "\w+\s\w+" on the file contents, but it did not prove efficient.
For example, suppose I want to count the number of bigrams from a file a.txt, which has the following content:
"the quick person did not realize his speed and the quick person bumped "
For the above file, the bigram set and their counts would be:
(the, quick) = 2
(quick, person) = 2
(person, did) = 1
(did, not) = 1
(not, realize) = 1
(realize, his) = 1
(his, speed) = 1
(speed, and) = 1
(and, the) = 1
(person, bumped) = 1
I came across an example of Python's Counter object, used to count unigrams (single words), which also uses a regex approach. The example is as follows:
>>> # Find the ten most common words in a.txt
>>> import re
>>> from collections import Counter
>>> words = re.findall(r'\w+', open('a.txt').read())
>>> Counter(words).most_common(10)
The output of the above code is:
[('the', 2), ('quick', 2), ('person', 2), ('did', 1), ('not', 1),
('realize', 1), ('his', 1), ('speed', 1), ('and', 1), ('bumped', 1)]
I was wondering whether the Counter object can be used to get counts of bigrams. Any approach other than a Counter object or regex would also be appreciated.
Answer 0 (score: 43)
Some itertools magic:
>>> import re
>>> from collections import Counter
>>> from itertools import islice
>>> words = re.findall(r"\w+",
...     "the quick person did not realize his speed and the quick person bumped")
>>> print(Counter(zip(words, islice(words, 1, None))))
Output:
Counter({('the', 'quick'): 2, ('quick', 'person'): 2, ('person', 'did'): 1,
('did', 'not'): 1, ('not', 'realize'): 1, ('and', 'the'): 1,
('speed', 'and'): 1, ('person', 'bumped'): 1, ('his', 'speed'): 1,
('realize', 'his'): 1})
Bonus
Get the frequency of any n-gram:
from itertools import tee, islice

def ngrams(lst, n):
    tlst = lst
    while True:
        a, b = tee(tlst)
        l = tuple(islice(a, n))
        if len(l) == n:
            yield l
            next(b)  # advance the window by one word
            tlst = b
        else:
            break
>>> Counter(ngrams(words, 3))
Output:
Counter({('the', 'quick', 'person'): 2, ('and', 'the', 'quick'): 1,
('realize', 'his', 'speed'): 1, ('his', 'speed', 'and'): 1,
('person', 'did', 'not'): 1, ('quick', 'person', 'did'): 1,
('quick', 'person', 'bumped'): 1, ('did', 'not', 'realize'): 1,
('speed', 'and', 'the'): 1, ('not', 'realize', 'his'): 1})
This also works for lazy iterables and generators. So you could write a generator that reads a file line by line, yields the words, and passes them to ngrams to be consumed lazily, without reading the whole file into memory, as the sketch below shows.
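A minimal sketch of that idea, feeding the ngrams generator defined above (the helper name words_from_file and the file name a.txt are placeholders, not part of the original answer):

import re
from collections import Counter

def words_from_file(path):
    # Yield words one at a time, one line of the file at a time,
    # so the whole file never has to be held in memory.
    with open(path) as f:
        for line in f:
            yield from re.findall(r"\w+", line)

# tee() accepts any iterable, so the ngrams() generator above can
# consume the word stream lazily.
bigram_counts = Counter(ngrams(words_from_file("a.txt"), 2))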
Answer 1 (score: 8)
How about zip()?
import re
from collections import Counter
words = re.findall(r'\w+', open('a.txt').read())
print(Counter(zip(words, words[1:])))
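Note that words[1:] copies the entire word list and open('a.txt').read() loads the whole file at once, so while this is the most concise variant, the islice- and pairwise-based answers here are more memory-friendly for very large files.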
Answer 2 (score: 1)
With the upcoming Python 3.10 release, the new pairwise function provides a way to slide through pairs of consecutive elements, so that your use case simply becomes the snippet below (the details of the intermediate results follow it):
from itertools import pairwise
import re
from collections import Counter
text = "the quick person did not realize his speed and the quick person bumped "
Counter(pairwise(re.findall(r'\w+', text)))
# Counter({('the', 'quick'): 2, ('quick', 'person'): 2, ('person', 'did'): 1, ('did', 'not'): 1, ('not', 'realize'): 1, ('realize', 'his'): 1, ('his', 'speed'): 1, ('speed', 'and'): 1, ('and', 'the'): 1, ('person', 'bumped'): 1})
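Details of the intermediate results, as a sketch building on the variables above (pairwise returns a lazy iterator, so list() is used here only for display):

words = re.findall(r'\w+', text)
# ['the', 'quick', 'person', 'did', 'not', 'realize', 'his', 'speed',
#  'and', 'the', 'quick', 'person', 'bumped']

list(pairwise(words))
# [('the', 'quick'), ('quick', 'person'), ('person', 'did'), ('did', 'not'),
#  ('not', 'realize'), ('realize', 'his'), ('his', 'speed'), ('speed', 'and'),
#  ('and', 'the'), ('the', 'quick'), ('quick', 'person'), ('person', 'bumped')]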
Answer 3 (score: 0)
It has been a long time since this question was asked and successfully answered. I benefited from the answers here while creating my own solution, and I would like to share it:
import regex  # third-party module, installed with: pip install regex
bigrams_tst = regex.findall(r"\b\w+\s\w+", open(myfile).read(), overlapped=True)
This will give all the bigrams that are not interrupted by punctuation, as the sketch below demonstrates.
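A short sketch of how this combines with Counter on the sample sentence (assuming the third-party regex module is installed):

import regex
from collections import Counter

text = "the quick person did not realize his speed and the quick person bumped "
# overlapped=True returns overlapping matches, so every adjacent word
# pair is captured rather than every other one.
bigrams = regex.findall(r"\b\w+\s\w+", text, overlapped=True)
print(Counter(bigrams))
# Counter({'the quick': 2, 'quick person': 2, 'person did': 1, ...})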
Answer 4 (score: 0)
For any n_gram, you can simply use Counter, e.g.:
from collections import Counter
from nltk.util import ngrams
text = "the quick person did not realize his speed and the quick person bumped "
n_gram = 2
Counter(ngrams(text.split(), n_gram))
>>>
Counter({('and', 'the'): 1,
('did', 'not'): 1,
('his', 'speed'): 1,
('not', 'realize'): 1,
('person', 'bumped'): 1,
('person', 'did'): 1,
('quick', 'person'): 2,
('realize', 'his'): 1,
('speed', 'and'): 1,
('the', 'quick'): 2})
For 3-grams, just change n_gram to 3:
n_gram = 3
Counter(ngrams(text.split(), n_gram))
>>>
Counter({('and', 'the', 'quick'): 1,
('did', 'not', 'realize'): 1,
('his', 'speed', 'and'): 1,
('not', 'realize', 'his'): 1,
('person', 'did', 'not'): 1,
('quick', 'person', 'bumped'): 1,
('quick', 'person', 'did'): 1,
('realize', 'his', 'speed'): 1,
('speed', 'and', 'the'): 1,
('the', 'quick', 'person'): 2})
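Since the question is about very large files, it may be worth noting that nltk.util.ngrams consumes its input lazily and returns a generator, so it can be fed a word stream instead of a full list. A minimal sketch (the helper name words_from_file and the file name a.txt are placeholders):

from collections import Counter
from nltk.util import ngrams

def words_from_file(path):
    # Yield whitespace-separated tokens line by line instead of
    # reading the whole file into memory at once.
    with open(path) as f:
        for line in f:
            yield from line.split()

bigram_counts = Counter(ngrams(words_from_file("a.txt"), 2))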
Answer 5 (score: 0)
CountVectorizer from scikit-learn (pip install scikit-learn) can be used to generate bigrams (or, more generally, any n-grams).
Example (tested with Python 3.6.7 and scikit-learn 0.24.2):
import sklearn.feature_extraction.text
ngram_size = 2
train_set = ['the quick person did not realize his speed and the quick person bumped']
vectorizer = sklearn.feature_extraction.text.CountVectorizer(ngram_range=(ngram_size,ngram_size))
vectorizer.fit(train_set) # build ngram dictionary
ngram = vectorizer.transform(train_set) # get ngram
print('ngram: {0}\n'.format(ngram))
print('ngram.shape: {0}'.format(ngram.shape))
print('vectorizer.vocabulary_: {0}'.format(vectorizer.vocabulary_))
Output:
>>> print('ngram: {0}\n'.format(ngram)) # Shows the bi-gram count
ngram: (0, 0) 1
(0, 1) 1
(0, 2) 1
(0, 3) 1
(0, 4) 1
(0, 5) 1
(0, 6) 2
(0, 7) 1
(0, 8) 1
(0, 9) 2
>>> print('ngram.shape: {0}'.format(ngram.shape))
ngram.shape: (1, 10)
>>> print('vectorizer.vocabulary_: {0}'.format(vectorizer.vocabulary_))
vectorizer.vocabulary_: {'the quick': 9, 'quick person': 6, 'person did': 5, 'did not': 1,
'not realize': 3, 'realize his': 7, 'his speed': 2, 'speed and': 8, 'and the': 0,
'person bumped': 4}
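To read that sparse matrix back as a plain bigram-to-count mapping, one option (a sketch building on the variables above) is to index the matrix with the column positions stored in vocabulary_:

# ngram is a 1 x vocabulary-size sparse matrix, and vocabulary_ maps
# each bigram string to its column index, so ngram[0, idx] is its count.
counts = {bigram: int(ngram[0, idx])
          for bigram, idx in vectorizer.vocabulary_.items()}
print(counts)
# {'the quick': 2, 'quick person': 2, 'person did': 1, 'did not': 1, ...}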