Can someone explain to me why CountVectorizer raises this error when I try to fit_transform any short word? Even with stop_words=None I still get the same error. Here is the code:
from sklearn.feature_extraction.text import CountVectorizer
text = ['don\'t know when I shall return to the continuation of my scientific work. At the moment I can do absolutely nothing with it, and limit myself to the most necessary duty of my lectures; how much happier I would be to be scientifically active, if only I had the necessary mental freshness.']
cv = CountVectorizer(stop_words=None).fit(text)
and it works as expected. But if I then try to fit_transform another text
cv.fit_transform(['q'])
the following error is raised:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-acbd560df1a2> in <module>()
----> 1 cv.fit_transform(['q'])
~/.local/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in fit_transform(self, raw_documents, y)
867
868 vocabulary, X = self._count_vocab(raw_documents,
--> 869 self.fixed_vocabulary_)
870
871 if self.binary:
~/.local/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in _count_vocab(self, raw_documents, fixed_vocab)
809 vocabulary = dict(vocabulary)
810 if not vocabulary:
--> 811 raise ValueError("empty vocabulary; perhaps the documents only"
812 " contain stop words")
813
ValueError: empty vocabulary; perhaps the documents only contain stop words
I have read a few threads about this error, since it seems to be one that CountVectorizer raises quite often, but everything I found covers the case where the text really does contain only stop words. I can't figure out what my problem is, so I would be very grateful for any help!
Answer 0 (score: 3)
By default, CountVectorizer is created as
CountVectorizer(token_pattern='(?u)\\b\\w\\w+\\b')
which tokenizes only words (tokens) of two or more characters.
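To see why a single-character document fails, you can test the pattern directly with Python's re module (a quick sketch, not part of the original answer):

import re

# The default token_pattern requires at least two word characters,
# so a one-character document like 'q' yields no tokens at all,
# which is what triggers the "empty vocabulary" error.
print(re.findall(r'(?u)\b\w\w+\b', 'q'))  # []
print(re.findall(r'(?u)\b\w+\b', 'q'))    # ['q']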
You can change this default behavior:
vect = CountVectorizer(token_pattern='(?u)\\b\\w+\\b')
Test:
In [29]: vect.fit_transform(['q'])
Out[29]:
<1x1 sparse matrix of type '<class 'numpy.int64'>'
with 1 stored elements in Compressed Sparse Row format>
In [30]: vect.get_feature_names()
Out[30]: ['q']
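As a side note, cv.fit_transform(['q']) in the question re-fits the vectorizer and discards the vocabulary learned from the long text. If the intent was to reuse that vocabulary, calling transform instead avoids the error entirely (a minimal sketch under that assumption):

from sklearn.feature_extraction.text import CountVectorizer

text = ["don't know when I shall return to the continuation of my scientific work."]
cv = CountVectorizer(stop_words=None).fit(text)

# transform() keeps the vocabulary learned by fit(), so no re-fitting
# happens and no "empty vocabulary" error is raised; 'q' is simply an
# out-of-vocabulary token and maps to an all-zero row.
row = cv.transform(['q'])
print(row.toarray().sum())  # 0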