I'm trying to use Python's TfidfVectorizer to transform a text corpus. However, when I try to fit_transform it, I get ValueError: empty vocabulary; perhaps the documents only contain stop words.
In [69]: TfidfVectorizer().fit_transform(smallcorp)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-69-ac16344f3129> in <module>()
----> 1 TfidfVectorizer().fit_transform(smallcorp)
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
1217 vectors : array, [n_samples, n_features]
1218 """
-> 1219 X = super(TfidfVectorizer, self).fit_transform(raw_documents)
1220 self._tfidf.fit(X)
1221 # X is already a transformed view of raw_documents so
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
778 max_features = self.max_features
779
--> 780 vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
781 X = X.tocsc()
782
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
725 vocabulary = dict(vocabulary)
726 if not vocabulary:
--> 727 raise ValueError("empty vocabulary; perhaps the documents only"
728 " contain stop words")
729
ValueError: empty vocabulary; perhaps the documents only contain stop words
I read the SO question Problems using a custom vocabulary for TfidfVectorizer scikit-learn and tried ogrisel's suggestion of using TfidfVectorizer(**params).build_analyzer()(dataset2) to check the results of the text-analysis step, and that seems to work as expected. Snippet below:
In [68]: TfidfVectorizer().build_analyzer()(smallcorp)
Out[68]:
[u'due',
u'to',
u'lack',
u'of',
u'personal',
u'biggest',
u'education',
u'and',
u'husband',
u'to',
Is there something else I am doing wrong? The corpus I'm feeding it is just one giant long string broken up by newlines.
Thanks!
Answer 0 (score: 17)
I guess it is because you just have one string. When you pass a bare string to fit_transform, the vectorizer iterates over it character by character, treating each character as a document, and the default tokenizer (which only keeps tokens of two or more characters) then finds nothing, hence the empty vocabulary. Try splitting it into a list of strings, like:
In [51]: smallcorp
Out[51]: 'Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\nAnd sadly even Theology:\nTaken fierce pains, from end to end.\nNow here I am, a fool for sure!\nNo wiser than I was before:'
In [52]: tf = TfidfVectorizer()
In [53]: tf.fit_transform(smallcorp.split('\n'))
Out[53]:
<6x28 sparse matrix of type '<type 'numpy.float64'>'
with 31 stored elements in Compressed Sparse Row format>
Answer 1 (score: 3)
In version 0.12 we set the minimum document frequency to 2, which means that only words that occur at least twice will be considered. For your example to work, you need to set min_df=1. Since 0.13, this is the default setting.
So I guess you are using 0.12, right?
Answer 2 (score: 0)
If you insist on having only one string, you can also pass the single string wrapped in a tuple. Instead of:
smallcorp = "your text"
put it in a tuple:
In [22]: smallcorp = ("your text",)
In [23]: tf.fit_transform(smallcorp)
Out[23]:
<1x2 sparse matrix of type '<type 'numpy.float64'>'
with 2 stored elements in Compressed Sparse Row format>
Answer 3 (score: 0)
I ran into a similar error when running a TF-IDF Python 3 script over a large corpus. Some small files (apparently) lacked keywords, triggering the error message.
I tried several solutions to no avail (e.g., padding my filtered list with dummy strings when len(filtered) == 0). The simplest solution was to add a try: ... except ... continue expression.
from sklearn.feature_extraction.text import CountVectorizer

pattern = "(?u)\\b[\\w-]+\\b"
cv = CountVectorizer(token_pattern=pattern)

# Inside a loop over documents; filtered is the token list for one document.
filtered = [w for w in filtered if w not in my_stopwords and not w.isdigit()]

# Without the guard below, cv.fit(filtered) raised:
#   File "tfidf-sklearn.py", line 1675, in tfidf
#     cv.fit(filtered)
#   File "/home/victoria/venv/py37/lib/python3.7/site-packages/sklearn/feature_extraction/text.py", line 1024, in fit
#     self.fit_transform(raw_documents)
#   ...
#   ValueError: empty vocabulary; perhaps the documents only contain stop words

# Did not help (https://stackoverflow.com/a/20933883/1904943):
# if len(filtered) == 0:
#     filtered = ['xxx', 'yyy', 'zzz']

# Solution:
try:
    cv.fit(filtered)
    cv.fit_transform(filtered)
    doc_freq_term_matrix = cv.transform(filtered)
except ValueError:
    continue  # skip documents with an empty vocabulary
Answer 4 (score: 0)
I had the same problem. Converting a list of int(nums) into a list of str(nums) did not help. But converting to:
['d' + str(num) for num in nums]  # where 'd' is an arbitrary letter marking that we now work with strings
did help.
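A minimal sketch of that idea (the numbers and the 'd' prefix here are illustrative): the default token_pattern, r"(?u)\b\w\w+\b", only keeps tokens of at least two word characters, so bare single digits are dropped and yield an empty vocabulary, while the prefixed strings survive.
from sklearn.feature_extraction.text import TfidfVectorizer

nums = [2, 3, 5, 8]                    # numeric "documents"
docs = ['d' + str(n) for n in nums]    # -> ['d2', 'd3', 'd5', 'd8']

# 'd2' etc. are two characters long, so they pass the default
# token_pattern; a bare '2' would be dropped.
X = TfidfVectorizer().fit_transform(docs)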