Can I run a CUDA device function without parallelization, or call it as part of a kernel?

Asked: 2018-02-02 17:45:27

Tags: cuda

I have a program that loads an image onto a CUDA device, analyzes it with cuFFT and some custom stuff, and updates a single number on the device, which the host then queries as needed. The analysis is mostly parallelized, but the last step sums everything up (using thrust::reduce) for a couple of final calculations that aren't parallel.

Once everything has been reduced, there's nothing left to parallelize, but I can't figure out how to just run a device function without calling it as its own tiny kernel with `<<<1, 1>>>`. That seems like a hack. Is there a better way to do this? Maybe a way to tell the parallelized kernel "just do these last lines once the parallel part is finished"?

I feel like this must have been asked before, but I can't find it. Maybe I just don't know what to search for.

Code snippet below; I hope I haven't removed anything relevant:

float *d_phs_deltas;        // Allocated using cudaMalloc (data is on device)
__device__ float d_Z;   

static __global__ void getDists(const cufftComplex* data, const bool* valid, float* phs_deltas)
{
    const int i = blockIdx.x*blockDim.x + threadIdx.x;

    // Do stuff with the line indicated by index i
    // ...

    // Save result into array, gets reduced to single number in setDist
    phs_deltas[i] = phs_delta;
}

static __global__ void setDist(const cufftComplex* data, const bool* valid, const float* phs_deltas)
{
    // Final step; does it need to be its own kernel if it only runs once??
    d_Z += phs2dst * thrust::reduce(thrust::device, phs_deltas, phs_deltas + d_y);

    // Save some other stuff to refer to next frame
    // ...
}

void fftExec(unsigned __int32 *host_data)
    {
        // Copy image to device, do FFT, etc
        // ...

        // Last parallel analysis step, sets d_phs_deltas
        getDists<<<out_blocks, N_THREADS>>>(d_result, d_valid, d_phs_deltas);

        // Should this be a serial part at the end of getDists somehow?
        setDist<<<1, 1>>>(d_result, d_valid, d_phs_deltas);
    }

// d_Z is copied out only on request
void getZ(float *Z) { cudaMemcpyFromSymbol(Z, d_Z, sizeof(float)); }

Thanks!

1 Answer:

Answer 0 (score: 1)

There is no way to run a device function directly without launching a kernel. As pointed out in comments, there is a working example in the Programming Guide which shows how to use memory fence functions and an atomically incremented counter to signal that a given block is the last block:

__device__ unsigned int count = 0; 

__global__ void sum(const float* array, unsigned int N, volatile float* result) 
{
    __shared__ bool isLastBlockDone; 

    float partialSum = calculatePartialSum(array, N); 

    if (threadIdx.x == 0) {     
        result[blockIdx.x] = partialSum; 

        // Thread 0 makes sure that the incrementation 
        // of the "count" variable is only performed after 
        // the partial sum has been written to global memory. 
        __threadfence(); 

        // Thread 0 signals that it is done. 
        unsigned int value = atomicInc(&count, gridDim.x); 

        // Thread 0 determines if its block is the last 
        // block to be done. 
        isLastBlockDone = (value == (gridDim.x - 1)); 
    }

    // Synchronize to make sure that each thread reads 
    // the correct value of isLastBlockDone. 
    __syncthreads(); 

    if (isLastBlockDone) { 
        // The last block sums the partial sums 
        // stored in result[0 .. gridDim.x-1] 
        float totalSum = calculateTotalSum(result); 
        if (threadIdx.x == 0) { 
            // Thread 0 of last block stores the total sum 
            // to global memory and resets the count 
            // variable, so that the next kernel call 
            // works properly. 
            result[0] = totalSum; 
            count = 0; 
        } 
    } 
}
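Applied to the code in the question, the same pattern would let the serial tail live at the end of the parallel kernel instead of in a separate `<<<1, 1>>>` launch. A minimal sketch, assuming the element count `n` and the scale factor `phs2dst` are passed in as kernel parameters (both are implicit in the original snippet), and with the per-line computation elided as in the question:

```cuda
__device__ float d_Z;                     // as in the question
__device__ unsigned int blocksDone = 0;   // last-block-done counter

static __global__ void getDistsWithTail(const cufftComplex* data,
                                        const bool* valid,
                                        float* phs_deltas,
                                        unsigned int n,
                                        float phs2dst)
{
    __shared__ bool isLastBlockDone;

    const unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float phs_delta = 0.0f;  // ... computed from data/valid as in getDists ...
        phs_deltas[i] = phs_delta;
    }

    if (threadIdx.x == 0) {
        __threadfence();         // make this block's writes visible first
        unsigned int value = atomicInc(&blocksDone, gridDim.x);
        isLastBlockDone = (value == gridDim.x - 1);
    }
    __syncthreads();

    // Serial tail: executed exactly once, by thread 0 of the last
    // block to finish
    if (isLastBlockDone && threadIdx.x == 0) {
        float sum = 0.0f;
        for (unsigned int j = 0; j < n; ++j)
            sum += phs_deltas[j];
        d_Z += phs2dst * sum;
        blocksDone = 0;          // reset so the next frame works
    }
}
```

Note that, unlike thrust::reduce, the final sum here is a plain serial loop in one thread; for large arrays, having each block write a partial sum first (as in the Programming Guide example) keeps the tail short.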

I would recommend benchmarking both ways and choosing which is faster. On most platforms kernel launch latency is only a few microseconds, so a short running kernel to finish an action after a long running kernel can be the most efficient way to get this done.
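The comparison could be timed with CUDA events, for example (a sketch reusing the launch parameters and kernel names from the question; error checking omitted):

```cuda
// Time the two-kernel approach over many repetitions so that the
// per-launch overhead is visible above timer resolution.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
for (int rep = 0; rep < 100; ++rep) {
    getDists<<<out_blocks, N_THREADS>>>(d_result, d_valid, d_phs_deltas);
    setDist<<<1, 1>>>(d_result, d_valid, d_phs_deltas);
}
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);  // total milliseconds for 100 reps
```

Timing the fused single-kernel variant the same way and comparing `ms / 100` per frame would show whether the extra `<<<1, 1>>>` launch actually costs anything measurable in this workload.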