So I have two matrices, A and B, and I want to compute their min-plus product as defined here: Min-plus matrix multiplication. For this I implemented the following:
import numpy as np

def min_plus_product(A, B):
    B = np.transpose(B)
    Y = np.zeros((len(B), len(A)))
    for i in range(len(B)):
        # Broadcasting row i of B^T over A and taking the row-wise min
        # yields column i of the min-plus product.
        Y[i] = (A + B[i]).min(1)
    return np.transpose(Y)
This works fine, but it is slow for large matrices. Is there a way to make it faster? I have heard that implementing it in C or using a GPU might be good options.
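For reference, the quantity being computed is C[i, j] = min_k (A[i, k] + B[k, j]). Below is a minimal sketch (not part of the original question) of a direct triple-loop version of that definition, plus a sanity check against the vectorized function above; the helper name min_plus_reference is mine, purely for illustration.

import numpy as np

def min_plus_reference(A, B):
    # Direct implementation of C[i, j] = min_k (A[i, k] + B[k, j]).
    n, m = A.shape[0], B.shape[1]
    C = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# Tiny sanity check against the vectorized version.
A = np.random.random((4, 5))
B = np.random.random((5, 3))
print(np.allclose(min_plus_product(A, B), min_plus_reference(A, B)))  # expect True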
Answer 0 (score: 2)
Here is an algorithm that can save a bit when the middle dimension is large enough and the entries are uniformly distributed. It exploits the fact that the smallest sum usually comes from two small terms.
import numpy as np

def min_plus_product(A, B):
    B = np.transpose(B)
    Y = np.zeros((len(B), len(A)))
    for i in range(len(B)):
        Y[i] = (A + B[i]).min(1)
    return np.transpose(Y)

def min_plus_product_opt(A, B, chop=None):
    if chop is None:
        # not sure this is optimal
        chop = int(np.ceil(np.sqrt(A.shape[1])))
    B = np.transpose(B)
    Amin = A.min(1)
    Y = np.zeros((len(B), len(A)))
    for i in range(len(B)):
        # Process the summation index in chunks of size `chop`,
        # visiting the smallest entries of B[i] first.
        o = np.argsort(B[i])
        Y[i] = (A[:, o[:chop]] + B[i, o[:chop]]).min(1)
        if chop < len(o):
            # Rows whose current minimum could still be beaten by a later chunk.
            idx = np.where(Amin + B[i, o[chop]] < Y[i])[0]
            for j in range(chop, len(o), chop):
                if len(idx) == 0:
                    break
                x, y = np.ix_(idx, o[j : j + chop])
                slmin = (A[x, y] + B[i, o[j : j + chop]]).min(1)
                slmin = np.minimum(Y[i, idx], slmin)
                Y[i, idx] = slmin
                if j + chop < len(o):
                    # Prune rows that no later chunk can improve; the guard avoids
                    # indexing past the end of o on the final chunk.
                    nidx = np.where(Amin[idx] + B[i, o[j + chop]] < Y[i, idx])[0]
                    idx = idx[nidx]
    return np.transpose(Y)
A = np.random.random(size=(1000,1000))
B = np.random.random(size=(1000,2000))
print(np.allclose(min_plus_product(A,B), min_plus_product_opt(A,B)))
import time
t = time.time();min_plus_product(A,B);print('naive {}sec'.format(time.time()-t))
t = time.time();min_plus_product_opt(A,B);print('opt {}sec'.format(time.time()-t))
Sample output:
True
naive 7.794037580490112sec
opt 1.65810227394104sec
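The payoff depends on the chunk size chop (how many sorted columns are processed per pass); as the comment in the code notes, the default square-root heuristic is not necessarily optimal, so it may be worth sweeping a few values for your matrix sizes. A minimal, purely illustrative sweep (the chop values are arbitrary):

import time

for chop in (16, 32, 64, 128, 256):
    t = time.time()
    min_plus_product_opt(A, B, chop=chop)
    print('chop={}: {:.2f}sec'.format(chop, time.time() - t))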
Answer 1 (score: 0)
One possible easy route is to use numba.
from numba import njit
import numpy as np

@njit
def min_plus_product(A, B):
    # Plain triple loop over C[i, j] = min_k (A[i, k] + B[k, j]);
    # numba compiles it to machine code in nopython mode.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minimum = A[i, 0] + B[0, j]
            for k in range(1, n):
                minimum = min(A[i, k] + B[k, j], minimum)
            C[i, j] = minimum
    return C
Timings on 1000x1000 A, B matrices are:

1 loop, best of 3: 4.28 s per loop for the original code
1 loop, best of 3: 2.32 s per loop for the numba code
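As a usage note (not part of the original answer): with numba, the first call also pays the JIT compilation cost, so it is usually worth warming the function up once before timing it. A minimal sketch, assuming square random inputs:

import time
import numpy as np

A = np.random.random((1000, 1000))
B = np.random.random((1000, 1000))

min_plus_product(A, B)  # warm-up call triggers compilation

t = time.time()
min_plus_product(A, B)
print('numba {:.2f}sec'.format(time.time() - t))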