How do I get word frequencies in a corpus using Scikit-Learn's CountVectorizer?

Asked: 2014-12-15 16:20:58

Tags: python scikit-learn

I'm trying to compute simple word frequencies with scikit-learn's CountVectorizer.

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

texts=["dog cat fish","dog cat cat","fish bird","bird"]
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)

print(cv.vocabulary_)
{'bird': 0, 'cat': 1, 'dog': 2, 'fish': 3}

I expected it to return {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}.

5 Answers:

Answer 0 (score: 28)

In this example, cv.vocabulary_ is a dictionary whose keys are the words (features) that were found and whose values are their indices, which is why they are 0, 1, 2, 3. It's just bad luck that the indices happen to look like your counts :)

You need to work with the cv_fit object to get the counts.

from sklearn.feature_extraction.text import CountVectorizer

texts=["dog cat fish","dog cat cat","fish bird", 'bird']
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)

print(cv.get_feature_names())
print(cv_fit.toarray())
#['bird', 'cat', 'dog', 'fish']
#[[0 1 1 1]
# [0 2 1 0]
# [1 0 0 1]
# [1 0 0 0]]

Each row of the array corresponds to one of your original documents (strings), each column is a feature (word), and each element is the count of that word in that document. You can see that if you sum each column, you get the numbers you expected:

print(cv_fit.toarray().sum(axis=0))
#[2 3 2 2]

Honestly, unless you have a specific reason to use scikit-learn, I'd suggest using collections.Counter or something from NLTK, since it's simpler.
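
For comparison, here is a minimal Counter sketch (my own illustration, not part of the original answer; it assumes plain whitespace tokenization, which happens to match the toy texts above but not CountVectorizer's default token pattern):

from collections import Counter

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]
counts = Counter()
for text in texts:
    counts.update(text.split())  # naive whitespace split

print(dict(counts))
# {'dog': 2, 'cat': 3, 'fish': 2, 'bird': 2}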

Answer 1 (score: 6)

cv_fit.toarray().sum(axis=0) definitely gives the correct result, but it is much faster to perform the sum on the sparse matrix and only then convert it to an array:

np.asarray(cv_fit.sum(axis=0))
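
Note that cv_fit.sum(axis=0) returns a 1 x n_features matrix rather than a flat array, so you will usually want to flatten it before pairing it with the feature names. A minimal sketch reusing the cv and cv_fit objects defined above (on recent scikit-learn versions you may need get_feature_names_out() instead of get_feature_names()):

import numpy as np

counts = np.asarray(cv_fit.sum(axis=0)).ravel()  # flatten the 1 x n_features result
print(dict(zip(cv.get_feature_names(), counts)))
# {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}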

Answer 2 (score: 2)

We'll use zip to build a dictionary from the list of words and the list of their counts.

import pandas as pd
import numpy as np    
from sklearn.feature_extraction.text import CountVectorizer

texts=["dog cat fish","dog cat cat","fish bird","bird"]    

cv = CountVectorizer()   
cv_fit=cv.fit_transform(texts)    
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)    

print(word_list)
['bird', 'cat', 'dog', 'fish']
print(count_list)
[2 3 2 2]
print(dict(zip(word_list, count_list)))
{'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}

Answer 3 (score: 2)

Combining everyone else's views and some of my own :) Here is what I have for you

from collections import Counter
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

text='''Note that if you use RegexpTokenizer option, you lose 
natural language features special to word_tokenize 
like splitting apart contractions. You can naively 
split on the regex \w+ without any need for the NLTK.
'''

# tokenize
raw = ' '.join(word_tokenize(text.lower()))

tokenizer = RegexpTokenizer(r'[A-Za-z]{2,}')
words = tokenizer.tokenize(raw)

# remove stopwords
stop_words = set(stopwords.words('english'))
words = [word for word in words if word not in stop_words]

# count word frequency, sort and return just 20
counter = Counter()
counter.update(words)
most_common = counter.most_common(20)
most_common

# Output (full)

[('note', 1),
 ('use', 1),
 ('regexptokenizer', 1),
 ('option', 1),
 ('lose', 1),
 ('natural', 1),
 ('language', 1),
 ('features', 1),
 ('special', 1),
 ('word', 1),
 ('tokenize', 1),
 ('like', 1),
 ('splitting', 1),
 ('apart', 1),
 ('contractions', 1),
 ('naively', 1),
 ('split', 1),
 ('regex', 1),
 ('without', 1),
 ('need', 1)]

In terms of efficiency you can do better than this, but if you're not too worried about that, this code is as good as any.
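
As one possible simplification (my own illustrative sketch, not part of the original answer): the word_tokenize pass can be dropped and RegexpTokenizer applied directly to the lowercased text, since the regex already strips punctuation:

from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer

text = "Note that if you use the RegexpTokenizer option, you lose features like splitting apart contractions."

tokenizer = RegexpTokenizer(r'[A-Za-z]{2,}')
stop_words = set(stopwords.words('english'))

# tokenize, drop stopwords, and count in a single pass
counter = Counter(w for w in tokenizer.tokenize(text.lower()) if w not in stop_words)
print(counter.most_common(20))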

Answer 4 (score: 0)

Combining @YASH-GUPTA's answer for readable results and @pieterbons' for RAM efficiency, with one adjustment: a couple of brackets need to be added. Working code:

import numpy as np    
from sklearn.feature_extraction.text import CountVectorizer

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]    

cv = CountVectorizer()   
cv_fit = cv.fit_transform(texts)    
word_list = cv.get_feature_names()

# Added [0] here to get a 1d-array for iteration by the zip function. 
count_list = np.asarray(cv_fit.sum(axis=0))[0]

print(dict(zip(word_list, count_list)))
# Output: {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}