I am trying to implement the technique described in an information retrieval paper, in which documents are converted into vectors and their cosine similarities are then computed, as explained here: http://blog.christianperone.com/2013/09/machine-learning-cosine-similarity-for-vector-space-models-part-iii/
In the example we have:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
documents = (
"The sky is blue",
"The sun is bright",
"The sun in the sky is bright",
"We can see the shining sun, the bright sun"
)
tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)
cosine_similarity(tfidf_matrix[0:1], tfidf_matrix)
However, from time to time I receive a new document. Is there a way to compute the cosine similarity for this new document without recreating the documents tuple and tfidf_matrix?
Answer 0 (score: 0)
Yes, you can do it like this:
new_docs = [
    "This is new doc 1",
    "This is new doc 2",
]
# Use transform(), not predict(): TfidfVectorizer has no predict method.
# transform() maps the new documents into the vocabulary already learned
# during fit_transform(), so the original matrix does not need to be rebuilt.
new_tfidf_matrix = tfidf_vectorizer.transform(new_docs)
cosine_similarity(new_tfidf_matrix, tfidf_matrix)
If you expect the new documents to contain vocabulary that did not appear in the training set, you should consider retraining the vectorizer with tfidf_vectorizer.fit(all_docs); words unseen at fit time are otherwise ignored by transform().
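For reference, here is a minimal end-to-end sketch of the whole workflow, using the question's original four documents plus two illustrative new documents (the new document strings are my own placeholders): fit the vectorizer once, then score incoming documents with transform() without refitting.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = (
    "The sky is blue",
    "The sun is bright",
    "The sun in the sky is bright",
    "We can see the shining sun, the bright sun",
)

# Fit once on the existing corpus.
tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)

# New documents are projected into the existing vocabulary space;
# any words not seen during fitting are simply ignored.
new_docs = ["The sun is shining", "A completely unrelated sentence"]
new_tfidf_matrix = tfidf_vectorizer.transform(new_docs)

# One row of similarities per new document, one column per original document.
sims = cosine_similarity(new_tfidf_matrix, tfidf_matrix)
print(sims.shape)  # (2, 4)
```

Note that transform() keeps the column space fixed, which is exactly why the similarities stay comparable with the original matrix; refitting would change the vocabulary and IDF weights.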