Does TF-IDF with char_wb ignore a custom preprocessor?

Date: 2019-01-22 14:50:01

Tags: python scikit-learn tfidfvectorizer

I have:

import nltk
from nltk.stem.snowball import GermanStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

def my_tokenizer(doc):
    stemmer = GermanStemmer()
    return [stemmer.stem(t.lower()) for t in nltk.word_tokenize(doc)
            if t.lower() not in my_stop_words]

text = "hallo df sdfd"
singleTFIDF = TfidfVectorizer(analyzer='char_wb', ngram_range=(4, 6),
                              preprocessor=my_tokenizer,
                              max_features=50).fit([str(text)])

From the documentation it is clear that a custom tokenizer only applies when analyzer='word'.

I get:

Traceback (most recent call last):
  File "TfidF.py", line 95, in <module>
    singleTFIDF = TfidfVectorizer(analyzer='char_wb', ngram_range=(4,6),preprocessor=my_tokenizer, max_features=50).fit([str(text)])
  File "C:\Users\chris1\Anaconda3\envs\master\lib\site-packages\sklearn\feature_extraction\text.py", line 185, in _char_wb_ngrams
    text_document = self._white_spaces.sub(" ", text_document)
TypeError: expected string or bytes-like object

1 answer:

Answer 0 (score: 2)

You have to join the words and return a single string. Try this:

return ' '.join([stemmer.stem(t.lower()) for t in nltk.word_tokenize(doc)
                 if t.lower() not in my_stop_words])
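To illustrate the point, here is a minimal self-contained sketch of a preprocessor that returns a joined string, as the answer describes. The nltk tokenizer and GermanStemmer from the question are replaced by a plain whitespace split so the example needs only scikit-learn; `my_preprocessor` and `my_stop_words` are illustrative names, not part of the original post.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

my_stop_words = {"und", "der"}  # placeholder stop-word set

def my_preprocessor(doc):
    # The crucial part: join the tokens back into ONE string,
    # because char_wb expects a string, not a list of tokens.
    # (nltk.word_tokenize / stemming are stood in for by a plain split.)
    return ' '.join(t.lower() for t in doc.split()
                    if t.lower() not in my_stop_words)

text = "hallo df sdfd"
vec = TfidfVectorizer(analyzer='char_wb', ngram_range=(4, 6),
                      preprocessor=my_preprocessor, max_features=50)
matrix = vec.fit_transform([text])  # no TypeError now
```

Since the preprocessor now returns a string, `_char_wb_ngrams` can apply its whitespace regex to it and the fit succeeds.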