I've investigated the GPU's performance against itself and against the CPU for various matrix sizes, and found that, contrary to what most of the GPU literature suggests, the GPU's computational advantage diminishes with array size. Code, results, and hardware specs are shown below. Notable observations:
- Splitting the Xga matrix into four pieces doubled the vectorized speed (see the sketch below).
- Who is the culprit for the under-utilized GPU: my code, MATLAB, or the hardware configuration? And how do I find out and fix it?
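For reference, the four-way split was along these lines (a minimal sketch; the chunk boundaries, and the assumption that m is divisible by 4, are illustrative):

%% Four-way split: run vecHammer on each quarter of the rows, then stitch
mQ = m/4;
outParts = cell(4,1);
for p = 1:4
    rows = (p-1)*mQ+1 : p*mQ;                         % row range of this chunk
    outParts{p} = vecHammer(Xga(rows,:), cga, K, mQ);
end
out = vertcat(outParts{:});                           % same [m,1] result as one big call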
%% CODE: centroid indexing in K-means algorithm
% size(X) = [16000, 3]
% size(centroids) = [K, 3]
% Xga = gpuArray(single(X)); cga = gpuArray(single(centroids));
% Speed ratio reported as t2/t1 when t2 > t1, otherwise as t1/t2 (always >= 1)
%% TIMING
f1 = @() fasterFunction(...);  % (gpu)timeit needs a function handle, not a call result
f2 = @() slowerFunction(...);
t1 = gputimeit(f1)             % OR timeit(f1) for non-GPU arrays
t2 = timeit(f2)                % OR gputimeit(f2) for GPU arrays
%% FUNCTIONS
function out = vecHammer(X, c, K, m)
% Fully vectorized: expand c to 1-by-3-by-K, square the coordinate differences,
% sum over dim 2, reshape to m-by-K squared distances, pick the nearest centroid.
d = reshape(sum((X - permute(c, [3 2 1])).^2, 2), m, K);
[~, out] = min(d, [], 2);
end
function out = forvecHammer(X, c, m)
% Loop over points; vectorize only each point's distance to all K centroids.
out = zeros(m,1);
for j = 1:m
    [~, out(j)] = min(sum((X(j,:)' - c').^2));  % 3-by-K differences, summed down dim 1
end
end
function out = forforHammer(X, c, m, K)
% Fully scalar double loop: one squared distance per point/centroid pair.
out = zeros(m,1); idxtemp = zeros(K,1);
for i = 1:m
    for j = 1:K
        idxtemp(j) = sum((X(i,:) - c(j,:)).^2, 2);
    end
    [~, out(i)] = min(idxtemp);
end
end
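For concreteness, a filled-in version of the timing harness might look like this, run with the functions above on the path (K = 5 and the random data are illustrative stand-ins):

%% EXAMPLE: time the GPU-vectorized version against the CPU loop
m = 16000; K = 5;
X = rand(m,3); centroids = rand(K,3);
Xga = gpuArray(single(X)); cga = gpuArray(single(centroids));
f1 = @() vecHammer(Xga, cga, K, m);
f2 = @() forvecHammer(X, centroids, m);
t1 = gputimeit(f1)   % gputimeit synchronizes with the GPU around the timed calls
t2 = timeit(f2)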
Answer 0 (score: 0):
The probable answer is that the data is simply too small; there is only so much work to parallelize. My GPU pulls ahead by just a few percentage points on a gigabyte-scale dataset, and this one is barely 10 MB.
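One way to test this hypothesis is to sweep the row count and watch how the CPU/GPU timing ratio evolves; a minimal sketch (the sizes and K are illustrative):

%% Sweep problem size; ratio > 1 means the GPU is winning
K = 5; sizes = [1e3 1e4 1e5 1e6];
ratio = zeros(size(sizes));
for s = 1:numel(sizes)
    m = sizes(s);
    X = rand(m,3); c = rand(K,3);
    Xga = gpuArray(single(X)); cga = gpuArray(single(c));
    tCPU = timeit(@() vecHammer(X, c, K, m));
    tGPU = gputimeit(@() vecHammer(Xga, cga, K, m));
    ratio(s) = tCPU / tGPU;   % how many times faster the GPU is
end
disp([sizes(:), ratio(:)])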