Running a single thread on the GPU with CUDA: why is the GPU load so high?

Asked: 2015-06-30 02:22:40

Tags: cuda gpu

Here is my GPU information:

Device 0: "GeForce GT 440"
  CUDA Driver Version / Runtime Version          7.0 / 7.0
  CUDA Capability Major/Minor version number:    2.1
  Total amount of global memory:                 1536 MBytes (1610612736 bytes)
  ( 3) Multiprocessors, ( 48) CUDA Cores/MP:     144 CUDA Cores
  GPU Max Clock rate:                            1189 MHz (1.19 GHz)
  Memory Clock rate:                             800 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 393216 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:

The CUDA code is very simple:

#include <cstdio>

__global__ void kernel(float *d_data)
{
    *d_data = -1;
    *d_data = 1 / (*d_data);
    *d_data = (*d_data) / (*d_data);
}

int main()
{
    float *d_data;
    cudaMalloc(&d_data, sizeof(float));
    while (1)                         // infinite loop: the lines below are never reached
        kernel<<<1, 1>>>(d_data);
    float data;
    cudaMemcpy(&data, d_data, sizeof(float), cudaMemcpyDeviceToHost); // was sizeof(int)
    printf("%f\n", data);
    return 0;
}
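As written, the `while (1)` loop never exits, so the `cudaMemcpy` and `printf` are unreachable. For reference, here is a minimal sketch of a bounded variant that actually reads the result back (the launch count of 1000 is an arbitrary illustrative choice, not from the original question):

```cuda
#include <cstdio>

__global__ void kernel(float *d_data)
{
    *d_data = -1;
    *d_data = 1 / (*d_data);
    *d_data = (*d_data) / (*d_data);
}

int main()
{
    float *d_data;
    cudaMalloc(&d_data, sizeof(float));
    for (int i = 0; i < 1000; ++i)   // bounded loop instead of while(1)
        kernel<<<1, 1>>>(d_data);
    cudaDeviceSynchronize();         // wait for all queued launches before copying back
    float data;
    cudaMemcpy(&data, d_data, sizeof(float), cudaMemcpyDeviceToHost);
    printf("%f\n", data);            // (-1)/(-1) computed on the device -> 1.000000
    cudaFree(d_data);
    return 0;
}
```

With a bounded loop the GPU is only busy while the launches drain, so a tool like GPU-Z would show high load only during that window.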

Then I ran the code, and GPU-Z reported a GPU load of 99%!

GPU-Z:http://www.techpowerup.com/gpuz/

Am I missing something? How should I understand "GPU load"?

1 answer:

Answer 0 (score: 1)

GPU "load" is simply a measure of the fraction of a measurement interval during which the GPU was busy: busy time divided by the total interval.

So if your program runs for 1.0 seconds and the kernel takes 0.8 seconds of that to run, the GPU load over that interval is 80%. GPU-Z updates this number periodically, so if your kernels are running throughout an entire update period, it will show roughly 100% busy.

With your code as given, kernels are being launched back-to-back in an infinite loop, so the GPU load should be close to 100%. What the kernel is doing doesn't matter: if a kernel is running, the GPU is busy, and that is how load is measured.