TfidfVectorizer process shows an error

Date: 2017-03-23 09:40:10

Tags: nlp vectorization tf-idf dtmf

I am working on the analysis of a non-English corpus and am facing several problems, one of which involves the tfidf_vectorizer. After importing the relevant packages, I ran the following code to get results:

contents = [open("D:\test.txt", encoding='utf8').read()]
#define vectorizer parameters
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
                                 min_df=0.2, stop_words=stopwords,
                                 use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(3,3))

%time tfidf_matrix = tfidf_vectorizer.fit_transform(contents) 

print(tfidf_matrix.shape)

After running the above code, I received the following error message:

ValueError                                Traceback (most recent call last)
<ipython-input-144-bbcec8b8c065> in <module>()
      5                                  use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(3,3))
      6 
----> 7 get_ipython().magic('time tfidf_matrix = tfidf_vectorizer.fit_transform(contents) #fit the vectorizer to synopses')
      8 
      9 print(tfidf_matrix.shape)

C:\Users\mazhar\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py in magic(self, arg_s)
   2156         magic_name, _, magic_arg_s = arg_s.partition(' ')
   2157         magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2158         return self.run_line_magic(magic_name, magic_arg_s)
   2159 
   2160     #-------------------------------------------------------------------------

C:\Users\mazhar\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py in run_line_magic(self, magic_name, line)
   2077                 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
   2078             with self.builtin_trap:
-> 2079                 result = fn(*args,**kwargs)
   2080             return result
   2081 

<decorator-gen-60> in time(self, line, cell, local_ns)

C:\Users\mazhar\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
    186     # but it's overkill for just that one bit of state.
    187     def magic_deco(arg):
--> 188         call = lambda f, *a, **k: f(*a, **k)
    189 
    190         if callable(arg):

C:\Users\mazhar\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns)
   1178         else:
   1179             st = clock2()
-> 1180             exec(code, glob, local_ns)
   1181             end = clock2()
   1182             out = None

<timed exec> in <module>()

C:\Users\mazhar\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
   1303             Tf-idf-weighted document-term matrix.
   1304         """
-> 1305         X = super(TfidfVectorizer, self).fit_transform(raw_documents)
   1306         self._tfidf.fit(X)
   1307         # X is already a transformed view of raw_documents so

C:\Users\mazhar\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
    836                                                        max_doc_count,
    837                                                        min_doc_count,
--> 838                                                        max_features)
    839 
    840             self.vocabulary_ = vocabulary

C:\Users\mazhar\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in _limit_features(self, X, vocabulary, high, low, limit)
    731         kept_indices = np.where(mask)[0]
    732         if len(kept_indices) == 0:
--> 733             raise ValueError("After pruning, no terms remain. Try a lower"
    734                              " min_df or a higher max_df.")
    735         return X[:, kept_indices], removed_terms

ValueError: After pruning, no terms remain. Try a lower min_df or a higher max_df.

If I change the min and max values, the error is

1 Answer:

Answer 0 (score: 1)

Assuming your tokenizer works as expected, I see two problems with your code. First, TfidfVectorizer expects a list of strings, whereas you effectively supply a single string, i.e. a single document. Second, min_df=0.2 is very high: it requires a term to appear in 20% of all documents, which is very unlikely for trigram features.
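To see why df-based pruning empties the vocabulary here, consider this rough sketch of the idea (an illustration only, not sklearn's actual implementation): document frequency is computed across documents, so with very few documents the min_df/max_df thresholds are easy to violate, and with a single document every term has df = 1.0.

```python
# Rough illustration (not sklearn's actual code) of df-based vocabulary
# pruning: a term is kept only if its document frequency (fraction of
# documents it appears in) lies within [min_df, max_df].
def prune_vocabulary(docs, min_df, max_df):
    n = len(docs)
    term_sets = [set(doc.split()) for doc in docs]
    vocab = set().union(*term_sets)
    kept = []
    for term in sorted(vocab):
        df = sum(term in s for s in term_sets) / n
        if min_df <= df <= max_df:
            kept.append(term)
    return kept

docs = ["a b c", "a b d", "a e f"]
print(prune_vocabulary(docs, min_df=0.2, max_df=0.8))
# -> ['b', 'c', 'd', 'e', 'f']  ("a" has df = 1.0 > max_df)

print(prune_vocabulary(["one single document"], min_df=0.2, max_df=0.8))
# -> []  with one document every term has df = 1.0, so nothing survives
```

With a one-element corpus, no choice of thresholds below 1.0 can keep any term, which is exactly the "After pruning, no terms remain" situation.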

The following works for me:

from sklearn.feature_extraction.text import TfidfVectorizer
with open("README.md") as infile:
    contents = infile.readlines() # Note: readlines() instead of read()

tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
                                   min_df=2, use_idf=True, ngram_range=(3,3))
# note: minimum of 2 occurrences, rather than 0.2 (20% of all documents)

tfidf_matrix = tfidf_vectorizer.fit_transform(contents) 

print(tfidf_matrix.shape)

Output: (155, 28)
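If the corpus really is a single file, another option besides readlines() is to split it into several "documents" yourself so that the min_df/max_df thresholds have something to count over. A minimal sketch, assuming blank-line-separated paragraphs as the document boundary:

```python
# Split one text into multiple "documents" by blank-line-separated
# paragraphs (an assumption; any sensible unit works), so that
# document-frequency thresholds become meaningful.
text = "first paragraph here\n\nsecond paragraph here\n\nthird paragraph here"
documents = [p for p in text.split("\n\n") if p.strip()]
print(len(documents))  # -> 3
```

The resulting list can then be passed to fit_transform in place of contents.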