gensim: custom similarity measure

Date: 2016-06-27 12:45:42

Tags: python, time, similarity, gensim

Using gensim, I want to compute similarities within a list of documents. The library handles the amount of data I have very well. The documents are all reduced to timestamps, and I have a function time_similarity to compare them. gensim, however, uses cosine similarity.
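For concreteness, what I mean by time_similarity is something along the lines of the sketch below (the exponential decay and the scale parameter are only illustrative assumptions, not my actual function):

import math

def time_similarity(ts1, ts2, scale=3600.0):
    # Illustrative sketch: similarity decays with the gap between two
    # Unix timestamps; `scale` (in seconds) is an assumed tuning knob.
    return math.exp(-abs(ts1 - ts2) / scale)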

I would like to know whether anyone has run into this before, or has a different solution.

1 Answer:

Answer 0 (score: 1):

This can be done by subclassing the interface SimilarityABC. I did not find any documentation for it, but it looks like it has already been done to define Word Mover Distance similarity. Here is a generic way to do it; you can make it more efficient by specializing it to the similarity measure you care about.

import numpy
from gensim import interfaces

class CustomSimilarity(interfaces.SimilarityABC):

    def __init__(self, corpus, custom_similarity, num_best=None, chunksize=256):
        self.corpus = corpus
        self.custom_similarity = custom_similarity
        self.num_best = num_best
        self.chunksize = chunksize
        self.normalize = False  # documents are plain token lists, not gensim vectors, so skip query normalization

    def get_similarities(self, query):
        """
        **Do not use this function directly; use the self[query] syntax instead.**
        """
        if isinstance(query, numpy.ndarray):
            # Convert document indexes to actual documents.
            query = [self.corpus[i] for i in query]
        if not isinstance(query[0], list):
            # A single query document was passed; wrap it so it is handled as a batch of one.
            query = [query]
        n_queries = len(query)
        result = []
        for qidx in range(n_queries):
            qresult = [self.custom_similarity(document, query[qidx]) for document in self.corpus]
            qresult = numpy.array(qresult)
            result.append(qresult)
        if len(result) == 1:
            # Only one query.
            result = result[0]
        else:
            result = numpy.array(result)
        return result

To implement a custom similarity:

def overlap_sim(doc1, doc2):
    # similarity defined by the number of common words
    return len(set(doc1) & set(doc2))

corpus = [['cat', 'dog'], ['cat', 'bird'], ['dog']]
cs = CustomSimilarity(corpus, overlap_sim, num_best=2)
print(cs[['bird', 'cat', 'frog']])

This outputs [(1, 2.0), (0, 1.0)]: with num_best=2 you get the two most similar corpus documents as (index, score) pairs, since ['cat', 'bird'] shares two words with the query and ['cat', 'dog'] shares one.
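As a usage note, if num_best is left as None, the self[query] syntax should return the raw similarity array produced by get_similarities rather than clipped (index, score) pairs, because that clipping is done by SimilarityABC.__getitem__ only when num_best is set. For example:

cs_all = CustomSimilarity(corpus, overlap_sim)   # num_best left as None
print(cs_all[['bird', 'cat', 'frog']])           # full similarity array, roughly [1, 2, 0]

To connect this back to the timestamp use case in the question: one hypothetical way to plug in a time_similarity like the one sketched above is to wrap each timestamp in a one-element list (so it passes the list checks in get_similarities) and unwrap it inside the callback. The corpus values and the wrapper below are assumptions for illustration:

# Each "document" is a one-element list holding a Unix timestamp.
time_corpus = [[1466979600], [1466983200], [1467070000]]

def wrapped_time_similarity(doc1, doc2):
    # Unwrap the single timestamps before delegating to time_similarity.
    return time_similarity(doc1[0], doc2[0])

tsim = CustomSimilarity(time_corpus, wrapped_time_similarity, num_best=2)
print(tsim[[1466980000]])  # the two nearest timestamps as (index, score) pairs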