I have this code for computing text similarity with tf-idf:
from sklearn.feature_extraction.text import TfidfVectorizer

# doc1 and doc2 are plain strings
documents = [doc1, doc2]
tfidf = TfidfVectorizer().fit_transform(documents)
# rows of tfidf are L2-normalized, so this product is the cosine similarity matrix
pairwise_similarity = tfidf * tfidf.T
print(pairwise_similarity.A)
The problem is that this code takes plain strings as input, while I want to prepare the documents by removing stop words, stemming, and tokenizing them, so each input document would be a list of tokens rather than a string. If I call documents = [doc1, doc2] with tokenized documents, I get this error:
Traceback (most recent call last):
  File "C:\Users\tasos\Desktop\my thesis\beta\similarity.py", line 18, in <module>
    tfidf = TfidfVectorizer().fit_transform(documents)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 1219, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 780, in fit_transform
    vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 715, in _count_vocab
    for feature in analyze(doc):
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 229, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 195, in <lambda>
    return lambda x: strip_accents(x.lower())
AttributeError: 'list' object has no attribute 'lower'
Is there a way to change the code so that it accepts lists of tokens, or should I join the tokenized documents back into strings?
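For reference, by tokenized documents I mean input of roughly this shape (the sample tokens are made up):

# each document is already a list of stemmed, stop-word-filtered tokens
doc1 = ['human', 'machin', 'interfac', 'comput', 'applic']
doc2 = ['survey', 'user', 'opinion', 'comput', 'system', 'respons']
documents = [doc1, doc2]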
Answer 0 (score: 5):
Try skipping the lowercasing preprocessing and supplying your own no-op tokenizer:
# the documents are already token lists, so pass each one through unchanged
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)
You should also check the other constructor parameters, such as stop_words, to avoid duplicating work already done in your own preprocessing.
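For completeness, here is a minimal end-to-end sketch of this approach; the NLTK-based preprocessing and the sample sentences are my own illustrative assumptions, not part of your original code:

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer

# requires the NLTK 'punkt' and 'stopwords' data packages
stemmer = PorterStemmer()
stop = set(stopwords.words('english'))

def preprocess(text):
    # tokenize, drop stop words and punctuation, then stem; returns a token list
    return [stemmer.stem(tok) for tok in word_tokenize(text.lower())
            if tok.isalpha() and tok not in stop]

raw = ["The cat sat on the mat.", "A cat was sitting on the mat."]  # sample data
documents = [preprocess(d) for d in raw]

# identity tokenizer: each document is already a list of tokens
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)
pairwise_similarity = tfidf * tfidf.T
print(pairwise_similarity.A)

Since stemming and stop-word removal happen in preprocess, leaving stop_words=None in TfidfVectorizer avoids filtering the tokens twice.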