I'm trying to fit a TfidfVectorizer on a list of video-game reviews, but for some reason I'm getting an error.
Here is my code:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_features=50000, use_idf=True, ngram_range=(1, 3),
                                   preprocessor=data_preprocessor.preprocess_tokenized_review)
print(train_set_x[0])
%time tfidf_matrix = tfidf_vectorizer.fit_transform(train_set_x)
Here is the output, followed by the error message:
I haven't gotten around to playing the campaign but the multiplayer is solid and pretty fun. Includes Zero Dark Thirty pack, an Online Pass, and the all powerful Battlefield 4 Beta access.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<timed exec> in <module>()
~/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in fit_transform(self, raw_documents, y)
1379 Tf-idf-weighted document-term matrix.
1380 """
-> 1381 X = super(TfidfVectorizer, self).fit_transform(raw_documents)
1382 self._tfidf.fit(X)
1383 # X is already a transformed view of raw_documents so
~/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in fit_transform(self, raw_documents, y)
867
868 vocabulary, X = self._count_vocab(raw_documents,
--> 869 self.fixed_vocabulary_)
870
871 if self.binary:
~/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in _count_vocab(self, raw_documents, fixed_vocab)
790 for doc in raw_documents:
791 feature_counter = {}
--> 792 for feature in analyze(doc):
793 try:
794 feature_idx = vocabulary[feature]
~/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in <lambda>(doc)
264
265 return lambda doc: self._word_ngrams(
--> 266 tokenize(preprocess(self.decode(doc))), stop_words)
267
268 else:
~/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in <lambda>(doc)
239 return self.tokenizer
240 token_pattern = re.compile(self.token_pattern)
--> 241 return lambda doc: token_pattern.findall(doc)
242
243 def get_stop_words(self):
TypeError: expected string or bytes-like object
Note that the first part of the output is one of the reviews from my video-game dataset. If anyone knows what is going on, I would greatly appreciate it. Thanks in advance!
Answer 0 (score: 0)
I believe the problem is caused by the data_preprocessor.preprocess_tokenized_review function (which you haven't shared).
Proof (using the default preprocessor=None):
In [19]: from sklearn.feature_extraction.text import TfidfVectorizer
In [20]: X = ["I haven't gotten around to playing the campaign but the multiplayer is solid and pretty fun. Includes Zero Dark Thirty pack, an Online Pass, and the all powerful Battlefield 4 Beta access."]
In [21]: tfidf_vectorizer = TfidfVectorizer(max_features=50000, use_idf=True, ngram_range=(1,3))
In [22]: r = tfidf_vectorizer.fit_transform(X)
In [25]: r
Out[25]:
<1x84 sparse matrix of type '<class 'numpy.float64'>'
with 84 stored elements in Compressed Sparse Row format>
So when we don't pass anything for the preprocessor parameter, it works just fine.
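A guess at the root cause, since preprocess_tokenized_review wasn't shared: judging by its name, it probably returns a list of tokens, but scikit-learn's preprocessor hook must return a string (the tokenizer then calls token_pattern.findall on it, which raises exactly this TypeError on a list). The sketch below reproduces the error with a hypothetical bad_preprocessor and shows one fix: join the tokens back into a string (alternatively, pass your function as tokenizer= instead of preprocessor=).

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the multiplayer is solid and pretty fun"]

# Hypothetical stand-in for preprocess_tokenized_review: returns a
# token list instead of a string, which is a common mistake.
def bad_preprocessor(doc):
    return doc.lower().split()  # list, not str

try:
    TfidfVectorizer(preprocessor=bad_preprocessor).fit_transform(docs)
except TypeError as e:
    err_msg = str(e)
    print(err_msg)  # expected string or bytes-like object ...

# Fix: the preprocessor must return a string, so join the tokens.
def good_preprocessor(doc):
    return " ".join(doc.lower().split())

r = TfidfVectorizer(preprocessor=good_preprocessor).fit_transform(docs)
print(r.shape)  # one row per document, one column per term
```

If your preprocessing genuinely produces tokens, passing it as tokenizer=... (and leaving preprocessor alone) is the cleaner option, since that hook is expected to return a list.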