I have two matrices of type float: A, of size 7000x100000, and B, of size 100000x20.
When I multiply them, my code consumes all of my RAM, even though the output is small.
Is there a way to make this more memory-efficient?
I tried this from the Matlab help page, but it did not help.
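For scale, here is a rough footprint calculation of my own (not part of the question; I assume "float" means single precision): the operands are already multi-gigabyte, while the 7000x20 result is tiny, so the inputs, not the output, dominate RAM.
bytes_A = 7000*100000*4;   % A as single: ~2.8e9 bytes (~2.6 GiB); as double it would be ~5.2 GiB
bytes_B = 100000*20*4;     % B: ~8 MB
bytes_C = 7000*20*4;       % the 7000x20 result: only ~0.5 MB
fprintf('A: %.2f GiB, B: %.1f MiB, C: %.2f MiB\n', bytes_A/2^30, bytes_B/2^20, bytes_C/2^20);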
Answer 0 (score: 0)
You can try splitting the computation into blocks:
By rows of A:
row_blk = 1000;                               % rows of A processed per block
C = zeros(size(A,1), size(B,2), class(A));
f = 1;
while f <= size(A,1)
    t = min(f + row_blk - 1, size(A,1));      % last block may be shorter
    C(f:t,:) = A(f:t,:) * B;                  % one row block of A at a time
    f = t + 1;
end
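A quick sanity check of the blocked loop against the direct product, on small matrices (the sizes and row_blk here are mine and purely illustrative):
Atest = rand(70, 500, 'single');  Btest = rand(500, 20, 'single');
row_blk = 16;                                 % deliberately does not divide 70, to exercise the short last block
Ctest = zeros(size(Atest,1), size(Btest,2), class(Atest));
f = 1;
while f <= size(Atest,1)
    t = min(f + row_blk - 1, size(Atest,1));
    Ctest(f:t,:) = Atest(f:t,:) * Btest;
    f = t + 1;
end
norm(Ctest - Atest*Btest, 'fro')              % should be on the order of single-precision rounding error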
By columns of A:
col_blk = 10000;                              % columns of A (= rows of B) per block
C = zeros(size(A,1), size(B,2), class(A));
f = 1;
while f <= size(A,2)
    t = min(f + col_blk - 1, size(A,2));
    C = C + A(:,f:t) * B(f:t,:);              % accumulate partial products
    f = t + 1;
end
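Blocking only saves memory if A does not have to be fully resident in the first place. In MATLAB, if A is stored in a MAT-file saved with -v7.3, the column blocks can be read lazily with matfile; the following is a sketch under that assumption (the file name 'A.mat' and the variable name A inside it are illustrative):
m = matfile('A.mat');                         % partial reads require a -v7.3 MAT-file
[nA, kA] = size(m, 'A');                      % dimensions without loading the data
col_blk = 10000;
C = zeros(nA, size(B,2), 'single');
f = 1;
while f <= kA
    t = min(f + col_blk - 1, kA);
    C = C + m.A(:, f:t) * B(f:t, :);          % only one column block of A in memory at a time
    f = t + 1;
end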
Answer 1 (score: 0)
I ran this in GNU Octave. The memory consumption of this code:
a= rand(7000,100000);
b = rand(100000,20);
is:
Absolute running time: 10.05 sec, cpu time: 9.97 sec, memory peak: 5375 Mb
and the memory consumption of this code:
a= rand(7000,100000);
b = rand(100000,20);
c = a * b;
is:
Absolute running time: 14.26 sec, cpu time: 14.19 sec, memory peak: 5376 Mb
So no significant difference is observed!
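For reference, a quick calculation of my own (not part of the answer) explains why the two peaks are almost identical: rand produces doubles, so a alone accounts for essentially the whole peak, and the 7000x20 result adds almost nothing.
bytes_a = 7000*100000*8;   % a is double: ~5.6e9 bytes ≈ 5341 MiB, roughly the observed ~5375 MB peak
bytes_c = 7000*20*8;       % the result c: ~1.1 MiB
fprintf('a: %.0f MiB, c: %.2f MiB\n', bytes_a/2^20, bytes_c/2^20);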