Can someone help me convert a nested for-loop into a CUDA kernel? Here is the function I am trying to convert into a CUDA kernel:
// Convolution on Host
void conv(int* A, int* B, int* out) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            out[i + j] += A[i] * B[j];
}
I have tried very hard to parallelize this code. Here is my attempt:
__global__ void conv_Kernel(int* A, int* B, int* out) {
    int i = blockIdx.x;
    int j = threadIdx.x;
    __shared__ int temp[N];
    __syncthreads();
    temp[i + j] = A[i] * B[j];
    __syncthreads();
    int sum = 0;
    for (int k = 0; k < N; k++)
        sum += temp[k];
    out[i + j] = sum;
}
Answer (score: 5)
Your CPU conv function appears to be doing this (for example, with N = 4):
A0B0 A0B1 A0B2 A0B3 + ^
A1B0 A1B1 A1B2 A1B3 + N
A2B0 A2B1 A2B2 A2B3 + rows
A3B0 A3B1 A3B2 A3B3 = v
------------------------------------------
out0 out1 out2 out3 out4 out5 out6
<- (2*N)-1 columns ->
Your convolution is (to me) distinguished by the fact that it convolves two signals of equal length. Since the GPU likes to work on "large" problems, this implies N should be large. However, one immediate problem with your conv_Kernel realization is that it implies the block dimension will be used to index into A, and the thread dimension will be used to index into B. But the thread dimension (threadIdx.x) is limited to 512 or 1024 on current CUDA GPUs. That would relegate us to solving only fairly small problems.
There are various other problems with your realization as well. One problem is that the allocated shared memory is not large enough to fit the i+j index range (which can go from 0 to 2*(N-1)). That is trivial to fix, of course, but the more serious issue is that I don't see a way to map your arithmetic onto anything resembling the desired pattern above. After spending a little while thinking about your kernel, I discarded it.
The convolution problem has a great deal of research associated with it, and it can be optimized in various ways for massively parallel architectures such as GPUs. Therefore I will focus on two very simple realizations that immediately suggest themselves based on the diagram above.
The first realization simply re-creates the diagram above. We create an intermediate temp array to store all the individual AxBy products, computing and storing those products in conv_Kernel. We then launch a second kernel (sum_Kernel), which simply sums the columns of the temp array to produce the various out values. The first kernel requires N threads, which successively compute each row of the diagram above in a slanting fashion as we iterate through N for-loop iterations, one per row. The second kernel requires (2*N)-1 threads, one for each column/out value.
My second realization (conv_Kernel2) dispenses with the need for a temp array. It simply assigns one thread to each column/out value, iterates through the N rows computing the necessary products row by row, and sums those products "on the fly". The sum result is then stored directly in the out array.
Considering only the calculations, and not the time required for data movement/initialization, the GPU realizations begin to be faster than the naive single-threaded CPU realization at around N = 512 on a K20x GPU, which is what I happened to be using. The second realization is also commended by the fact that the only data movement required is for A, B, and the result. The first realization additionally requires the temp array to be allocated and initialized to all zeroes. The size of the temp array is proportional to N*N, so the second realization also has the benefit that it does not require this temporary storage.
Here is a fully worked test case, running and timing the CPU realization you provided as well as the two slightly different GPU realizations that I created:
$ cat t617.cu
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
#define N 4096
#define RG 10
#define USECPSEC 1000000ULL
#define nTPB 256
void conv(int* A, int* B, int* out) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            out[i + j] += A[i] * B[j];
}

unsigned long long dtime_usec(unsigned long long prev){
    timeval tv1;
    gettimeofday(&tv1,0);
    return ((tv1.tv_sec * USECPSEC)+tv1.tv_usec) - prev;
}
__global__ void conv_Kernel(int* A, int *B, int* temp) {
    int idx = threadIdx.x+blockDim.x*blockIdx.x;
    if (idx < N){
        int my_B = B[idx];
        for (int i = 0; i < N; i++)
            temp[idx + (i*2*N) + i] = my_B * A[i];
    }
}

__global__ void sum_Kernel(int *temp, int *out){
    int idx = threadIdx.x+blockDim.x*blockIdx.x;
    if (idx < (2*N)-1){
        int my_sum = 0;
        for (int i = 0; i < N; i++) my_sum += temp[idx + (i*2*N)];
        out[idx] = my_sum;}
}

__global__ void conv_Kernel2(int *A, int *B, int *out){
    int idx = threadIdx.x+blockDim.x*blockIdx.x;
    if (idx < (2*N)-1){
        int my_sum = 0;
        for (int i = 0; i < N; i++)
            if (((idx < N) && (i <= idx)) || ((idx >= N) && (i > (idx-N)))) my_sum += A[i]*B[idx-i];
        out[idx] = my_sum;
    }
}
int main(){
    int *h_A, *d_A, *h_result, *d_result, *result, *h_B, *d_B, *A, *B, *d_temp;
    B = (int *)malloc(N*sizeof(int));
    A = (int *)malloc(N*sizeof(int));
    h_A = (int *)malloc(N*sizeof(int));
    h_B = (int *)malloc(N*sizeof(int));
    h_result = (int *)malloc(2*N*sizeof(int));
    result = (int *)malloc(2*N*sizeof(int));
    cudaMalloc(&d_B, N*sizeof(int));
    cudaMalloc(&d_A, N*sizeof(int));
    cudaMalloc(&d_result, 2*N*sizeof(int));
    cudaMalloc(&d_temp, 2*N*N*sizeof(int));
    for (int i=0; i < N; i++){
        A[i] = rand()%RG;
        B[i] = rand()%RG;
        h_A[i] = A[i];
        h_B[i] = B[i];}
    for (int i=0; i < 2*N; i++){
        result[i] = 0;
        h_result[i] = 0;}
    unsigned long long cpu_time = dtime_usec(0);
    conv(A, B, result);
    cpu_time = dtime_usec(cpu_time);
    cudaMemcpy(d_A, h_A, N*sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, N*sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_result, 0, 2*N*sizeof(int));
    cudaMemset(d_temp, 0, 2*N*N*sizeof(int));
    unsigned long long gpu_time = dtime_usec(0);
    conv_Kernel<<<(N+nTPB-1)/nTPB,nTPB>>>(d_A, d_B, d_temp);
    sum_Kernel<<<((2*(N-1))+nTPB-1)/nTPB, nTPB>>>(d_temp, d_result);
    cudaDeviceSynchronize();
    gpu_time = dtime_usec(gpu_time);
    cudaMemcpy(h_result, d_result, 2*N*sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 2*N; i++) if (result[i] != h_result[i]) {printf("mismatch at %d, cpu: %d, gpu %d\n", i, result[i], h_result[i]); return 1;}
    printf("Finished. Results match. cpu time: %lluus, gpu time: %lluus\n", cpu_time, gpu_time);
    cudaMemset(d_result, 0, 2*N*sizeof(int)); // just for error checking; kernel2 requires no initialization of the result
    gpu_time = dtime_usec(0);
    conv_Kernel2<<<((2*(N-1))+nTPB-1)/nTPB,nTPB>>>(d_A, d_B, d_result);
    cudaDeviceSynchronize();
    gpu_time = dtime_usec(gpu_time);
    cudaMemcpy(h_result, d_result, 2*N*sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 2*N; i++) if (result[i] != h_result[i]) {printf("mismatch2 at %d, cpu: %d, gpu %d\n", i, result[i], h_result[i]); return 1;}
    printf("Finished. Results match. cpu time: %lluus, gpu2 time: %lluus\n", cpu_time, gpu_time);
    return 0;
}
$ nvcc -arch=sm_35 -o t617 t617.cu
$ ./t617
Finished. Results match. cpu time: 69059us, gpu time: 3204us
Finished. Results match. cpu time: 69059us, gpu2 time: 1883us
$ nvcc -arch=sm_35 -O3 -o t617 t617.cu
$ ./t617
Finished. Results match. cpu time: 13750us, gpu time: 3214us
Finished. Results match. cpu time: 13750us, gpu2 time: 1886us
$
(Note that even just using the -O3 parameter makes a significant difference in CPU code execution.)
As I mentioned, I would consider both of my examples to be quite "naive" as GPU code goes (for example, neither uses shared memory), but they may give you some ideas for getting started.
For brevity, I have dispensed with CUDA error checking. However, any time you are having trouble with CUDA code, I suggest you perform proper cuda error checking. In the case of your conv_Kernel, I believe it would have indicated some errors (if you had tried to run it). As a quick test, you can also always run any CUDA code with cuda-memcheck to see if any API errors are occurring.
EDIT: I tried a simple shared-memory version of conv_Kernel2, but it was not any faster. I believe the reason for this is that these data sets (at N = 4096, A and B are 16 Kbytes each, out is approximately 32 Kbytes) are small enough to fit easily in the GPU L2 cache, with no thrashing.
However, for newer architectures (cc 3.5 and newer), the CUDA compiler can sometimes make additional optimizations if the read-only input data is properly identified. Therefore, if I change the conv_Kernel2 definition to:
__global__ void conv_Kernel2(const int * __restrict__ A, const int * __restrict__ B, int *out){
then I witness slightly improved execution times; in my case:
$ ./t617
Finished. Results match. cpu time: 13792us, gpu time: 3209us
Finished. Results match. cpu time: 13792us, gpu2 time: 1626us
$
I created a modified version of the code which does the following:

- N is specified on the command line
- only the cpu conv and the gpu conv_Kernel2 are included
- the time cost to move data to/from the GPU is included in the GPU timing measurement
- a typedef ... mytype; is provided so that the code can easily be recompiled to test behavior with various data types

Here is the modified code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
// RG*RG*MAXN must fit within mytype
#define MAXN 100000
#define RG 10
#define USECPSEC 1000000ULL
#define nTPB 256
typedef double mytype;
void conv(const mytype *A, const mytype *B, mytype* out, int N) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            out[i + j] += A[i] * B[j];
}

unsigned long long dtime_usec(unsigned long long prev){
    timeval tv1;
    gettimeofday(&tv1,0);
    return ((tv1.tv_sec * USECPSEC)+tv1.tv_usec) - prev;
}

__global__ void conv_Kernel2(const mytype * __restrict__ A, const mytype * __restrict__ B, mytype *out, const int N){
    int idx = threadIdx.x+blockDim.x*blockIdx.x;
    if (idx < (2*N)-1){
        mytype my_sum = 0;
        for (int i = 0; i < N; i++)
            if (((idx < N) && (i <= idx)) || ((idx >= N) && (i > (idx-N)))) my_sum += A[i]*B[idx-i];
        out[idx] = my_sum;
    }
}
int main(int argc, char *argv[]){
    mytype *h_A, *d_A, *h_result, *d_result, *result, *h_B, *d_B, *A, *B;
    if (argc != 2) {printf("must specify N on the command line\n"); return 1;}
    int my_N = atoi(argv[1]);
    if ((my_N < 1) || (my_N > MAXN)) {printf("N out of range\n"); return 1;}
    B = (mytype *)malloc(my_N*sizeof(mytype));
    A = (mytype *)malloc(my_N*sizeof(mytype));
    h_A = (mytype *)malloc(my_N*sizeof(mytype));
    h_B = (mytype *)malloc(my_N*sizeof(mytype));
    h_result = (mytype *)malloc(2*my_N*sizeof(mytype));
    result = (mytype *)malloc(2*my_N*sizeof(mytype));
    cudaMalloc(&d_B, my_N*sizeof(mytype));
    cudaMalloc(&d_A, my_N*sizeof(mytype));
    cudaMalloc(&d_result, 2*my_N*sizeof(mytype));
    for (int i=0; i < my_N; i++){
        A[i] = rand()%RG;
        B[i] = rand()%RG;
        h_A[i] = A[i];
        h_B[i] = B[i];}
    for (int i=0; i < 2*my_N; i++){
        result[i] = 0;
        h_result[i] = 0;}
    unsigned long long cpu_time = dtime_usec(0);
    conv(A, B, result, my_N);
    cpu_time = dtime_usec(cpu_time);
    cudaMemset(d_result, 0, 2*my_N*sizeof(mytype));
    unsigned long long gpu_time = dtime_usec(0);
    cudaMemcpy(d_A, h_A, my_N*sizeof(mytype), cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, my_N*sizeof(mytype), cudaMemcpyHostToDevice);
    conv_Kernel2<<<((2*(my_N-1))+nTPB-1)/nTPB,nTPB>>>(d_A, d_B, d_result, my_N);
    cudaDeviceSynchronize();
    cudaMemcpy(h_result, d_result, 2*my_N*sizeof(mytype), cudaMemcpyDeviceToHost);
    gpu_time = dtime_usec(gpu_time);
    for (int i = 0; i < 2*my_N; i++) if (result[i] != h_result[i]) {printf("mismatch2 at %d, cpu: %f, gpu %f\n", i, (double)result[i], (double)h_result[i]); return 1;}
    printf("Finished. Results match. cpu time: %lluus, gpu time: %lluus\n", cpu_time, gpu_time);
    printf("cpu/gpu = %f\n", cpu_time/(float)gpu_time);
    return 0;
}