Is there a more efficient way to encode and contract sparse tensors?

Asked: 2019-08-23 14:31:14

Tags: python algorithm optimization tensor

I have a sparse rank-3 tensor beta that I need to repeatedly contract against a rank-2 tensor alpha. I am currently encoding the nonzero tensor elements as a flat list, which is easy to iterate over, and performing the contraction in the sparse_mult function in the example code below. Note that I am aware numpy may already do many optimizations like this behind the scenes, so none of this is needed in the particular case below; it is just a toy example illustrating the algorithm I have in mind.

Is this a good optimization, or is there a better way to encode and contract a sparse tensor? Assume memory usage is not an issue; all we care about is making the contraction fast.
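
For reference, the contraction being computed is result[i] = sum over j, k of beta[i,j,k] * alpha[j,k], which is also what np.tensordot(beta, alpha) returns with its default axes=2.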

import numpy as np


class TensorElement: # basic element of my nonzero tensor element list
    def __init__(self, inds, val):
        self.val = val
        self.i = inds[0]
        self.j = inds[1]
        self.k = inds[2]

def sparse_mult(sparse, matrix, N):
    result = np.zeros(N)
    for s in sparse:
        result[s.i] += s.val * matrix[s.j, s.k] # a single linear pass over nonzero elements to perform the whole contraction
    return result

def slow_mult(tensor, matrix, N): #the naive implementation, for comparison
    result = np.zeros(N)
    for i in range(N):
        for j in range(N):
            for k in range(N):
                result[i] += tensor[i,j,k]*matrix[j,k]
    return result

def main():
    N = 5
    beta = np.zeros((N,N,N))
    alpha = np.random.random((N,N))
    indices = np.unique(np.random.randint(0, N, size=(N,3)), axis=0) #random index triples for the nonzero elements of beta, deduplicated so the sparse list does not double-count a repeated triple

    for i in indices: #make a sparse tensor and keep a list of indices of nonzero elements
        beta[i[0],i[1],i[2]] = np.random.random()

    sparse_beta = []
    for i in indices:
        sparse_beta.append(TensorElement(i, beta[i[0],i[1],i[2]]))


    print(np.tensordot(beta, alpha))
    print(sparse_mult(sparse_beta, alpha, N))
    print(slow_mult(beta, alpha, N))


if __name__=='__main__':
    main()
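
For comparison, here is a minimal sketch of an alternative, array-based encoding of the same nonzero data (the function name sparse_mult_vectorized and the idx/vals layout are my own, not part of the code above): the (i, j, k) triples are kept in one integer array and the values in one float array, and the whole contraction is done with vectorized NumPy operations instead of a Python loop over objects.

import numpy as np

def sparse_mult_vectorized(idx, vals, matrix, N):
    # idx:  (M, 3) integer array, row m holds the (i, j, k) position of the m-th nonzero
    # vals: (M,) array of the corresponding nonzero values
    # each nonzero contributes vals[m] * matrix[j, k] to result[i];
    # np.bincount sums those contributions grouped by the i index
    contributions = vals * matrix[idx[:, 1], idx[:, 2]]
    return np.bincount(idx[:, 0], weights=contributions, minlength=N)

With the arrays built in main above, this would be called as sparse_mult_vectorized(indices, beta[indices[:,0], indices[:,1], indices[:,2]], alpha, N) and should give the same vector as the other three versions.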

0 Answers:

There are no answers yet.