I want to do vectorization with scikit-learn on a list of lists. I went down the path of reading in their training texts, and I ended up with something like this:
corpus = [["this is spam, 'SPAM'"], ["this is ham, 'HAM'"], ["this is nothing, 'NOTHING'"]]

from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer(analyzer='word')
vect_representation = vect.fit_transform(corpus)
print(vect_representation.toarray())
and I get the following error:
return lambda x: strip_accents(x.lower())
AttributeError: 'list' object has no attribute 'lower'
There is also the issue of the label at the end of each document — how should I handle those labels in order to do proper classification?
Answer 0 (score: 13)
For everyone who finds this in the future, this is what solved my problem:
corpus = [["this is spam, 'SPAM'"], ["this is ham, 'HAM'"], ["this is nothing, 'NOTHING'"]]

from sklearn.feature_extraction.text import CountVectorizer

# splited_labels_from_corpus holds the documents once the labels have been split off
bag_of_words = CountVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(splited_labels_from_corpus)
And this is the output when I call the .toarray() function:
[[0 0 1]
[1 0 0]
[0 1 0]]
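For reference, a self-contained version of this approach can be run end to end. The construction of splited_labels_from_corpus below is my own assumption (the answer never shows it): each inner list is kept as the "document", so the identity tokenizer returns its elements as the tokens.

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [["this is spam, 'SPAM'"], ["this is ham, 'HAM'"], ["this is nothing, 'NOTHING'"]]

# Assumed reconstruction: pass the inner lists through unchanged, so each
# full string becomes a single token under the identity tokenizer.
splited_labels_from_corpus = corpus

# tokenizer=lambda doc: doc skips string tokenization; lowercase=False skips
# the .lower() call that raised the AttributeError on lists.
vectorizer = CountVectorizer(tokenizer=lambda doc: doc, lowercase=False)
bag_of_words = vectorizer.fit_transform(splited_labels_from_corpus)

print(bag_of_words.toarray())
# [[0 0 1]
#  [1 0 0]
#  [0 1 0]]
```

The columns are the vocabulary entries in alphabetical order ('HAM', 'NOTHING', 'SPAM' strings), which is why each row is a one-hot vector.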
Thank you, guys.
Answer 1 (score: 2)
First, you should separate the labels from the texts. If you want to use CountVectorizer, you have to transform your texts one by one:
corpus = [["this is spam, 'SPAM'"], ["this is ham, 'HAM'"], ["this is nothing, 'NOTHING'"]]

from sklearn.feature_extraction.text import CountVectorizer

# ... split the labels from the texts
vect = CountVectorizer(analyzer='word')
vect_representation = map(vect.fit_transform, corpus)
...
As another option, you can use TfidfVectorizer directly on the list.
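A sketch of that alternative, assuming the labels are first split off each string (the "text, 'LABEL'" format and the rpartition-based split are my own assumptions; the answer leaves this step out):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [["this is spam, 'SPAM'"], ["this is ham, 'HAM'"], ["this is nothing, 'NOTHING'"]]

# Split each document into text and label (assumed format: "text, 'LABEL'").
texts, labels = [], []
for [doc] in corpus:
    text, _, label = doc.rpartition(", ")
    texts.append(text)
    labels.append(label.strip("'"))

# TfidfVectorizer accepts the flat list of strings directly.
vectorizer = TfidfVectorizer(analyzer='word')
X = vectorizer.fit_transform(texts)

print(labels)   # ['SPAM', 'HAM', 'NOTHING']
print(X.shape)  # (3, 5) -- vocabulary: ham, is, nothing, spam, this
```

The labels list can then be passed as y to any scikit-learn classifier, with X as the feature matrix.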