I am trying to find the cosine similarity of two documents, represented as follows:

d1: [(0,1), (3,2), (6,1)]
d2: [(1,1), (3,1), (5,4), (6,2)]

where each document is a topic-weight vector: in each tuple, the first element is the topic and the second is its weight.

I am not sure how to compute cosine similarity under this weighting scheme. Is there a module/package in Python that would let me do something like this?
Answer 0 (score: 1)
At a quick glance, there does not seem to be a ready-made function that accepts input in that form. You have two options, depending on the problem, the size of the arrays, and other factors. You can convert each of the two topic-weight vectors into a sparse scipy vector and then use sklearn's cosine_similarity (http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html), or you can write your own cosine_similarity. The way I would do the latter is to first convert each vector to a dict (for faster lookups).
import math

# convert a list of (topic, weight) tuples into a topic -> weight dict
def vect_to_topic_weight(vector):
    return {topic: weight for topic, weight in vector}

# Euclidean norm of a topic-weight dict
def norm(vector):
    return math.sqrt(sum(w ** 2 for w in vector.values()))

# dot product of two topic-weight dicts (missing topics count as 0)
def dot(a, b):
    return sum(w * b.get(k, 0) for k, w in a.items())

# returns the cosine similarity, with inputs as topic-weight dicts
def cosine_similarity(a, b):
    return dot(a, b) / (norm(a) * norm(b))
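The helpers above can be exercised on the two vectors from the question; a minimal self-contained sketch (the function bodies repeated so it runs on its own):

```python
import math

def vect_to_topic_weight(vector):
    return {topic: weight for topic, weight in vector}

def norm(vector):
    return math.sqrt(sum(w ** 2 for w in vector.values()))

def dot(a, b):
    # missing topics contribute 0 to the dot product
    return sum(w * b.get(k, 0) for k, w in a.items())

def cosine_similarity(a, b):
    return dot(a, b) / (norm(a) * norm(b))

# the two documents from the question, as topic -> weight dicts
a = vect_to_topic_weight([(0, 1), (3, 2), (6, 1)])
b = vect_to_topic_weight([(1, 1), (3, 1), (5, 4), (6, 2)])

print(cosine_similarity(a, b))  # dot = 4, norms = sqrt(6) * sqrt(22) -> ~0.348155
```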
Answer 1 (score: 1)
Yes, there are packages in Python, e.g. scikit-learn's cosine similarity (documentation here). Below I give you a manual way:
import numpy as np

d1 = dict([(0, 1), (3, 2), (6, 1)])
d2 = dict([(1, 1), (3, 1), (5, 4), (6, 2)])

l = max(list(d1) + list(d2)) + 1  ## number of topics observed

v1 = np.zeros((l,))
for i in range(l):
    if i in d1:
        v1[i] = d1[i]

v2 = np.zeros((l,))
for i in range(l):
    if i in d2:
        v2[i] = d2[i]

## now v1 and v2 are 1-d np arrays representing your docs
v1 = v1 / np.sqrt(np.dot(v1, v1))  ## normalize
v2 = v2 / np.sqrt(np.dot(v2, v2))  ## normalize

cos_sim = np.dot(v1, v2)  ## should get 0.348155...
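The scikit-learn route mentioned in both answers can be sketched as follows; this is an illustration assuming scipy and scikit-learn are installed, with `to_sparse` being a hypothetical helper name:

```python
# Sketch: build 1 x n_topics sparse rows from (topic, weight) tuples,
# then hand them to sklearn's cosine_similarity.
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import cosine_similarity

def to_sparse(vector, n_topics):
    # hypothetical helper: one sparse row, columns = topics, values = weights
    topics = [t for t, _ in vector]
    weights = [w for _, w in vector]
    return csr_matrix((weights, ([0] * len(vector), topics)),
                      shape=(1, n_topics))

d1 = [(0, 1), (3, 2), (6, 1)]
d2 = [(1, 1), (3, 1), (5, 4), (6, 2)]
n = max(t for t, _ in d1 + d2) + 1  # number of topics observed

s1 = to_sparse(d1, n)
s2 = to_sparse(d2, n)
sim = cosine_similarity(s1, s2)[0, 0]  # ~0.348155
print(sim)
```

cosine_similarity returns a pairwise matrix (here 1 x 1), which is why the `[0, 0]` indexing is needed.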
Answer 2 (score: 0)
A very simple idea is to create the weight vectors and then use scipy.spatial.distance.cosine to compute the cosine distance (which equals 1 − similarity):
In [1]: from scipy.spatial.distance import cosine

In [2]: import numpy as np

In [3]: d1 = [(0, 1), (3, 2), (6, 1)]

In [4]: d2 = [(1, 1), (3, 1), (5, 4), (6, 2)]

In [5]: N = 7  # number of topics (largest topic index + 1)

In [6]: def get_weights(d):
   ...:     w = [0.] * N
   ...:     for i, weight in d:
   ...:         w[i] = weight
   ...:     return np.array(w)
   ...:

In [7]: w1 = get_weights(d1)

In [8]: w2 = get_weights(d2)

In [9]: 1 - cosine(w1, w2)
Out[9]: 0.3481553119113957