Non-square matrix multiplication with CUDA

Time: 2016-09-01 06:38:45

Tags: c matrix cuda

I started working with CUDA in the last few days. Writing a program that multiplies two matrices of size N x N was no problem; in the kernel function I used this code:

    // Each thread computes one element of the N x N result matrix c.
    for (int i = 0; i < width; i++) {
        sum += a[row * width + i] * b[i * width + col];
    }
    c[row * width + col] = sum;

How do I have to design the kernel function to multiply a matrix of size 1 x N by a matrix of size N x M?
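Written out, the product of a 1 x N row vector and an N x M matrix is a 1 x M row vector, and each output element is an independent dot product:

    c[j] = a[0] * b[0][j] + a[1] * b[1][j] + ... + a[N-1] * b[N-1][j],   j = 0, ..., M-1

so each of the M output values can be computed independently of the others.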

1 answer:

Answer 0 (score: -2)

I have now found a solution to the problem:

    #include <stdio.h>
    #include <iostream>

    using namespace std;

    // One thread per output element: thread tid computes the dot product
    // of the 1 x N vector a with column tid of the N x M matrix b.
    __global__
    void kernel(float *a, float *b, float *c, int N, int M) {
        int tid = threadIdx.x + blockIdx.x * blockDim.x;
        float sum = 0;
        if (tid < M) {
            for (int i = 0; i < N; i++)
                sum += a[i] * b[(i * M) + tid];
            c[tid] = sum;
        }
    }

    int main(void) {

        float *dev_a, *dev_b, *dev_c;

        const int N = 16;
        const int M = 12;

        float a[N];
        float b[N][M];
        float c[M];

        for (int i = 0; i < N; i++) {
            a[i] = 1.0;
        }

        for (int i = 0; i < N; i++) {
            for (int e = 0; e < M; e++) {
                b[i][e] = 1.0;
            }
        }

        cudaMalloc((void**) &dev_a, sizeof(float) * N);
        cudaMalloc((void**) &dev_b, sizeof(float) * N * M);
        cudaMalloc((void**) &dev_c, sizeof(float) * M);

        cudaMemcpy(dev_a, a, sizeof(float) * N, cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, b, sizeof(float) * N * M, cudaMemcpyHostToDevice);

        // One thread per output element, rounded up to whole blocks of 256.
        kernel<<<M / 256 + 1, 256>>>(dev_a, dev_b, dev_c, N, M);

        cudaMemcpy(c, dev_c, sizeof(float) * M, cudaMemcpyDeviceToHost);

        cudaFree(dev_a);
        cudaFree(dev_b);
        cudaFree(dev_c);

        for (int i = 0; i < M; i++) {
            cout << c[i] << endl;
        }

        return 0;
    }
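Since every element of a and b is initialized to 1.0, each of the M values printed at the end should be 16 (the sum of N ones), which is a quick way to check that the kernel and the index calculation b[(i * M) + tid] are correct.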

But I still have one more question: for performance reasons, does it make sense to split the work of the for loop across several kernels?
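For comparison, here is a minimal sketch of how the inner loop itself could be parallelized within a single kernel instead of being split across several launches, using one block per output column and a shared-memory tree reduction. The kernel name kernel_reduce and the block size of 128 below are arbitrary choices, and the block size is assumed to be a power of two:

    // Hypothetical alternative: one block per output column. The threads of a
    // block cooperatively compute one dot product via a shared-memory reduction.
    // Assumes blockDim.x is a power of two.
    __global__
    void kernel_reduce(const float *a, const float *b, float *c, int N, int M) {
        extern __shared__ float partial[];   // one float per thread
        int col = blockIdx.x;                // output column handled by this block
        float sum = 0.0f;

        // Each thread accumulates a strided share of the N products.
        for (int i = threadIdx.x; i < N; i += blockDim.x)
            sum += a[i] * b[i * M + col];
        partial[threadIdx.x] = sum;
        __syncthreads();

        // Tree reduction in shared memory.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride)
                partial[threadIdx.x] += partial[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0)
            c[col] = partial[0];
    }

    // Launch: one block per column, 128 threads per block,
    // dynamic shared memory sized to one float per thread.
    // kernel_reduce<<<M, 128, 128 * sizeof(float)>>>(dev_a, dev_b, dev_c, N, M);

With N as small as 16 the extra synchronization is unlikely to pay off, but the pattern shows that the reduction can stay inside one kernel launch rather than being spread over several.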