Counting word occurrences for my own vocabulary in Python with CountVectorizer

Time: 2018-04-02 21:16:55

Tags: python countvectorizer

Doc1: ['And that was the fallacy. Once I was free to talk with staff members']

Doc2: ['In the new, stripped-down, every-job-counts business climate, these human']

Doc3: ['Another reality makes emotional intelligence ever more crucial']

Doc4: ['The globalization of the workforce puts a particular premium on emotional']

Doc5: ['As business changes, so do the traits needed to excel. Data tracking']

Here is a sample of my vocabulary:

my_vocabulary = ['was the fallacy', 'free to', 'stripped-down', 'ever more', 'of the workforce', 'the traits needed']

The point is that every entry in my vocabulary is a bigram or trigram. My vocabulary includes all possible bigrams and trigrams in my document set; I am only showing a sample here. Given the application, this is how my vocabulary has to be. I am trying to use CountVectorizer as follows:

from sklearn.feature_extraction.text import CountVectorizer
doc_set = [Doc1, Doc2, Doc3, Doc4, Doc5]
vectorizer = CountVectorizer( vocabulary=my_vocabulary)
tf = vectorizer.fit_transform(doc_set) 

I expect to get something like this:

print tf:
(0, 126)    1
(0, 6804)   1
(0, 5619)   1
(0, 5019)   2
(0, 5012)   1
(0, 999)    1
(0, 996)    1
(0, 4756)   4

where the first column is the document ID, the second is the ID of a word in the vocabulary, and the third is that word's count in that document. But tf is empty. I know that, at the end of the day, I could write code that loops over all the words in the vocabulary, counts the occurrences, and builds the matrix, but can I use CountVectorizer for this input and save time? Am I doing something wrong here? If CountVectorizer is not the right approach, any suggestions would be appreciated.

1 answer:

Answer 0 (score: 0)

You can build a vocabulary of all possible bigrams and trigrams by specifying the ngram_range parameter of CountVectorizer. After fit_transform, you can inspect the vocabulary and the frequencies with the get_feature_names() and toarray() methods; the latter returns a frequency matrix with one row per document. More information: http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction

from sklearn.feature_extraction.text import CountVectorizer

Doc1 = 'And that was the fallacy. Once I was free to talk with staff members'
Doc2 = 'In the new, stripped-down, every-job-counts business climate, these human'
Doc3 = 'Another reality makes emotional intelligence ever more crucial'
Doc4 = 'The globalization of the workforce puts a particular premium on emotional'
Doc5 = 'As business changes, so do the traits needed to excel. Data tracking'
doc_set = [Doc1, Doc2, Doc3, Doc4, Doc5]

vectorizer = CountVectorizer(ngram_range=(2, 3))
tf = vectorizer.fit_transform(doc_set)
vectorizer.vocabulary_          # mapping of each n-gram to its column index
vectorizer.get_feature_names()  # the n-grams, in column order
tf.toarray()                    # dense document-term frequency matrix

As for what you were trying to do, it works if you fit the CountVectorizer on the vocabulary and then transform the documents.

my_vocabulary = ['was the fallacy', 'more crucial', 'particular premium', 'to excel', 'data tracking', 'another reality']

vectorizer = CountVectorizer(ngram_range=(2, 3))
vectorizer.fit_transform(my_vocabulary)
tf = vectorizer.transform(doc_set)

vectorizer.vocabulary_
Out[26]: 
{'another reality': 0,
 'data tracking': 1,
 'more crucial': 2,
 'particular premium': 3,
 'the fallacy': 4,
 'to excel': 5,
 'was the': 6,
 'was the fallacy': 7}

tf.toarray()
Out[25]: 
array([[0, 0, 0, 0, 1, 0, 1, 1],
       [0, 0, 0, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 1, 0, 0]], dtype=int64)