I added OpenACC directives to my red-black Gauss-Seidel solver for the Laplace equation (a simple heated-plate problem), but the GPU-accelerated code is no faster than the CPU version, even for large problems.

I also wrote a CUDA version, which is much faster than both (about 2 seconds for 512x512, versus 25 for the CPU and OpenACC versions).

Can anyone think of a reason for this difference? I realize CUDA offers the greatest potential speed, but OpenACC should still beat the CPU for larger problems (as a Jacobi solver for the same problem demonstrates here).

Here is the relevant code (the full working source is here):
#pragma acc data copyin(aP[0:size], aW[0:size], aE[0:size], aS[0:size], aN[0:size], b[0:size]) copy(temp_red[0:size_temp], temp_black[0:size_temp])
// red-black Gauss-Seidel with SOR iteration loop
for (iter = 1; iter <= it_max; ++iter) {
  Real norm_L2 = 0.0;

  // update red cells
  #pragma omp parallel for shared(aP, aW, aE, aS, aN, temp_black, temp_red) \
      reduction(+:norm_L2)
  #pragma acc kernels present(aP[0:size], aW[0:size], aE[0:size], aS[0:size], aN[0:size], b[0:size], temp_red[0:size_temp], temp_black[0:size_temp])
  #pragma acc loop independent gang vector(4)
  for (int col = 1; col < NUM + 1; ++col) {
    #pragma acc loop independent gang vector(64)
    for (int row = 1; row < (NUM / 2) + 1; ++row) {
      int ind_red = col * ((NUM / 2) + 2) + row;            // local (red) index
      int ind = 2 * row - (col % 2) - 1 + NUM * (col - 1);  // global index

      #pragma acc cache(aP[ind], b[ind], aW[ind], aE[ind], aS[ind], aN[ind])
      Real res = b[ind] + (aW[ind] * temp_black[row + (col - 1) * ((NUM / 2) + 2)]
                         + aE[ind] * temp_black[row + (col + 1) * ((NUM / 2) + 2)]
                         + aS[ind] * temp_black[row - (col % 2) + col * ((NUM / 2) + 2)]
                         + aN[ind] * temp_black[row + ((col + 1) % 2) + col * ((NUM / 2) + 2)]);

      Real temp_old = temp_red[ind_red];
      temp_red[ind_red] = temp_old * (1.0 - omega) + omega * (res / aP[ind]);

      // calculate residual
      res = temp_red[ind_red] - temp_old;
      norm_L2 += (res * res);
    } // end for row
  } // end for col

  // update black cells
  #pragma omp parallel for shared(aP, aW, aE, aS, aN, temp_black, temp_red) \
      reduction(+:norm_L2)
  #pragma acc kernels present(aP[0:size], aW[0:size], aE[0:size], aS[0:size], aN[0:size], b[0:size], temp_red[0:size_temp], temp_black[0:size_temp])
  #pragma acc loop independent gang vector(4)
  for (int col = 1; col < NUM + 1; ++col) {
    #pragma acc loop independent gang vector(64)
    for (int row = 1; row < (NUM / 2) + 1; ++row) {
      int ind_black = col * ((NUM / 2) + 2) + row;                // local (black) index
      int ind = 2 * row - ((col + 1) % 2) - 1 + NUM * (col - 1);  // global index

      #pragma acc cache(aP[ind], b[ind], aW[ind], aE[ind], aS[ind], aN[ind])
      Real res = b[ind] + (aW[ind] * temp_red[row + (col - 1) * ((NUM / 2) + 2)]
                         + aE[ind] * temp_red[row + (col + 1) * ((NUM / 2) + 2)]
                         + aS[ind] * temp_red[row - ((col + 1) % 2) + col * ((NUM / 2) + 2)]
                         + aN[ind] * temp_red[row + (col % 2) + col * ((NUM / 2) + 2)]);

      Real temp_old = temp_black[ind_black];
      temp_black[ind_black] = temp_old * (1.0 - omega) + omega * (res / aP[ind]);

      // calculate residual
      res = temp_black[ind_black] - temp_old;
      norm_L2 += (res * res);
    } // end for row
  } // end for col

  // calculate residual
  norm_L2 = sqrt(norm_L2 / ((Real)size));
  if (iter % 100 == 0) printf("%5d, %0.6f\n", iter, norm_L2);

  // if tolerance has been reached, end SOR iterations
  if (norm_L2 < tol) {
    break;
  }
}
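(As a quick sanity check on the red/black index arithmetic above, not part of the original code: the two global-index formulas should together visit every cell of the full NUM x NUM grid exactly once. The `NUM` value and helper function below are illustrative.)

```c
#include <string.h>

#define NUM 8  /* grid side length; any even value works for this check */

/* Verify that the red and black global-index formulas from the solver
 * together cover every global cell 0 .. NUM*NUM-1 exactly once. */
static int covers_grid_once(void)
{
    int count[NUM * NUM];
    memset(count, 0, sizeof count);

    for (int col = 1; col < NUM + 1; ++col) {
        for (int row = 1; row < (NUM / 2) + 1; ++row) {
            int ind_red   = 2 * row - (col % 2) - 1 + NUM * (col - 1);
            int ind_black = 2 * row - ((col + 1) % 2) - 1 + NUM * (col - 1);
            ++count[ind_red];
            ++count[ind_black];
        }
    }
    for (int i = 0; i < NUM * NUM; ++i)
        if (count[i] != 1)
            return 0;  /* a cell was missed or visited twice */
    return 1;
}
```

Running this check confirms the interleaving is a valid red-black partition, so the slowdown is not an indexing bug.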
Answer 0 (score: 2)
Well, I found a partial solution that significantly reduces the time for smaller problems.

If I insert the lines:
acc_init(acc_device_nvidia);
acc_set_device_num(0, acc_device_nvidia);
before I start the timer, in order to activate and set up the GPU, the time for the 512x512 problem drops to 9.8 seconds, and to 42 for 1024x1024. Increasing the problem size further shows just how fast OpenACC can be, even compared with running on four CPU cores.

With this change, the OpenACC code is about 2x slower than the CUDA code, and the gap shrinks as the problem size grows (to roughly 1.2x).
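(A minimal sketch of that fix: warm up the device once before any timing begins, so the one-time device-creation cost is not charged to the solver. The guards and the helper name are illustrative; they let the same file build with or without an OpenACC compiler.)

```c
#ifdef _OPENACC
#include <openacc.h>
#endif

/* Initialize the accelerator before starting any timer.  Returns 1
 * when an OpenACC runtime set up a device, 0 when the code was built
 * without OpenACC support (the #ifdef guards skip the calls). */
static int warm_up_gpu(void)
{
#ifdef _OPENACC
    acc_init(acc_device_nvidia);            /* create the device context */
    acc_set_device_num(0, acc_device_nvidia); /* pin to GPU 0 */
    return 1;
#else
    return 0;
#endif
}
```

Call `warm_up_gpu()` once at program start, then start the clock around the SOR loop; otherwise the first kernel launch silently absorbs several seconds of context setup.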
Answer 1 (score: 0)
I downloaded your full code, compiled it, and ran it! It never stopped running, and the statement

if (iter % 100 == 0) printf("%5d, %0.6f\n", iter, norm_L2);

printed:
100,nan
200,nan
...
I changed all variables of type Real to type float, and the result was:
100,0.000654
200,0.000370
...
8800,0.000002
8900,0.000002
9000,0.000001
9100,0.000001
9200,0.000001
9300,0.000001
9400,0.000001
9500,0.000001
9600,0.000001
9700,0.000001
CPU
Iterations: 9796
Total time: 5.594017 s
With NUM = 1024, the result was:

Iterations: 27271
Total time: 25.949905 s
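(For reference, codes like this usually pick the working precision with a compile-time typedef. The `DOUBLE` macro below is an assumption for illustration, not necessarily what the linked source uses; the point is that switching `Real` from double to float was what turned the nan run into the converging run shown above.)

```c
/* Select the working precision at compile time.  "Real" matches the
 * type name used in the question's code; the DOUBLE macro is
 * illustrative. */
#ifdef DOUBLE
typedef double Real;
#else
typedef float Real;   /* the change that made the run above converge */
#endif
```

If the double build produces nan while the float build converges, the likely culprit is a build or data-transfer mismatch (e.g. device arrays sized or copied for the wrong element width) rather than the math itself.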