Where does the global memory replay overhead come from?

Date: 2013-06-25 23:09:34

Tags: memory cuda overhead replay coalescing

Running the code below, which writes 1 GB to global memory, under the NVIDIA Visual Profiler, I get:
- 100% store efficiency
- 69.4% (128.6 GB/s) DRAM utilization
- 18.3% total replay overhead
- 18.3% global memory replay overhead
The memory writes should be coalesced and there is no divergence in the kernel, so the question is: where does the global memory replay overhead come from? I am running this on Ubuntu 13.04 with nvidia-cuda-toolkit version 5.0.35-4ubuntu1.

#include <cuda.h>
#include <unistd.h>
#include <getopt.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdint.h>
#include <ctype.h>
#include <sched.h>
#include <assert.h>

static void
HandleError( cudaError_t err, const char *file, int line )
{
    if (err != cudaSuccess) {
        printf( "%s in %s at line %d\n", cudaGetErrorString(err), file, line);
        exit( EXIT_FAILURE );
    }
}
#define HANDLE_ERROR(err) (HandleError(err, __FILE__, __LINE__))

// Global memory writes
__global__ void
kernel_write(uint32_t *start, uint32_t entries)
{
    uint32_t tid = threadIdx.x + blockIdx.x*blockDim.x;

    while (tid < entries) {
        start[tid] = tid;
        tid += blockDim.x*gridDim.x;
    }
}

int main(int argc, char *argv[])
{
    uint32_t *gpu_mem;               // Memory pointer
    uint32_t n_blocks  = 256;        // Blocks per grid
    uint32_t n_threads = 192;        // Threads per block
    uint32_t n_bytes   = 1073741824; // Transfer size (1 GB)
    float elapsedTime;               // Elapsed write time

    // Allocate 1 GB of memory on the device
    HANDLE_ERROR( cudaMalloc((void **)&gpu_mem, n_bytes) );

    // Create events
    cudaEvent_t start, stop;
    HANDLE_ERROR( cudaEventCreate(&start) );
    HANDLE_ERROR( cudaEventCreate(&stop) );

    // Write to global memory
    HANDLE_ERROR( cudaEventRecord(start, 0) );
    kernel_write<<<n_blocks, n_threads>>>(gpu_mem, n_bytes/4);
    HANDLE_ERROR( cudaGetLastError() );
    HANDLE_ERROR( cudaEventRecord(stop, 0) );
    HANDLE_ERROR( cudaEventSynchronize(stop) );
    HANDLE_ERROR( cudaEventElapsedTime(&elapsedTime, start, stop) );

    // Report exchange time
    printf("#Delay(ms)  BW(GB/s)\n");
    printf("%10.6f  %10.6f\n", elapsedTime, 1e-6*n_bytes/elapsedTime);

    // Destroy events
    HANDLE_ERROR( cudaEventDestroy(start) );
    HANDLE_ERROR( cudaEventDestroy(stop) );

    // Free memory
    HANDLE_ERROR( cudaFree(gpu_mem) );

    return 0;
}
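
For completeness, this is roughly how I compile and launch it (my own command line, not part of the profiler session above; the file name app.cu and the -arch=sm_20 flag for the GTX 580 are assumptions):

$ nvcc -arch=sm_20 -O2 -o app app.cu
$ ./app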

1 Answer:

Answer 0 (score: 1)

The nvprof profiler and the API profiler give different results:

$ nvprof --events gst_request ./app
======== NVPROF is profiling app...
======== Command: app
#Delay(ms)  BW(GB/s)
 13.345920   80.454690
======== Profiling result:
          Invocations       Avg       Min       Max  Event Name
Device 0
    Kernel: kernel_write(unsigned int*, unsigned int)
                    1   8388608   8388608   8388608  gst_request

$ nvprof --events global_store_transaction ./app
======== NVPROF is profiling app...
======== Command: app
#Delay(ms)  BW(GB/s)
  9.469216  113.392892
======== Profiling result:
          Invocations       Avg       Min       Max  Event Name
Device 0
    Kernel: kernel_write(unsigned int*, unsigned int)
                    1   8257560   8257560   8257560  global_store_transaction

My impression was that global_store_transaction cannot be lower than gst_request. What is going on here? I cannot request both events in the same command, so I have to run two separate commands. Could that be the problem?
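
As a sanity check of my own (an assumption about how the counter is accumulated, not something taken from the profiler documentation), the expected number of warp-level store requests for fully coalesced 4-byte writes can be worked out directly, and it matches the 8388608 that nvprof reports for gst_request:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Back-of-the-envelope check: how many warp-level store requests should
       a fully coalesced 1 GB write of uint32_t values produce? */
    uint64_t n_bytes   = 1073741824ULL;   /* size written by kernel_write   */
    uint64_t entries   = n_bytes / 4;     /* 268435456 uint32_t elements    */
    uint64_t warp_size = 32;              /* threads per warp               */

    /* The grid-stride loop writes each element exactly once, and a fully
       coalesced warp covers 32 consecutive uint32_t values per request. */
    uint64_t expected = entries / warp_size;

    printf("expected gst_request = %llu\n",
           (unsigned long long)expected);  /* prints 8388608 */
    return 0;
}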

The strange thing is that the API profiler shows different results, with perfect coalescing. Here is the output; I had to run it twice to get the proper counters:

$ cat config.txt
inst_issued
inst_executed
gst_request

$ COMPUTE_PROFILE=1 COMPUTE_PROFILE_CSV=1 COMPUTE_PROFILE_LOG=log.csv COMPUTE_PROFILE_CONFIG=config.txt ./app

$ cat log.csv
# CUDA_PROFILE_LOG_VERSION 2.0
# CUDA_DEVICE 0 GeForce GTX 580
# CUDA_CONTEXT 1
# CUDA_PROFILE_CSV 1
# TIMESTAMPFACTOR fffff67eaca946b8
method,gputime,cputime,occupancy,inst_issued,inst_executed,gst_request,gld_request
_Z12kernel_writePjj,7771.776,7806.000,1.000,4737053,3900426,557058,0

$ cat config2.txt
global_store_transaction

$ COMPUTE_PROFILE=1 COMPUTE_PROFILE_CSV=1 COMPUTE_PROFILE_LOG=log2.csv COMPUTE_PROFILE_CONFIG=config2.txt ./app

$ cat log2.csv
# CUDA_PROFILE_LOG_VERSION 2.0
# CUDA_DEVICE 0 GeForce GTX 580
# CUDA_CONTEXT 1
# CUDA_PROFILE_CSV 1
# TIMESTAMPFACTOR fffff67eea92d0e8
method,gputime,cputime,occupancy,global_store_transaction
_Z12kernel_writePjj,7807.584,7831.000,1.000,557058

Here gst_request and global_store_transaction are exactly the same, which indicates perfect coalescing. Which one is correct (nvprof or the API profiler)? And why does the NVIDIA Visual Profiler say that I have non-coalesced writes? There are still significant instruction replays, and I have no idea where they come from :(
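
If I understand the metric correctly (my assumption is that the Visual Profiler derives its replay overhead as (inst_issued - inst_executed) / inst_issued), then the API profiler counters above give roughly the same number as the 18.3% it reports:

#include <stdio.h>

int main(void)
{
    /* Counters taken from the API profiler log above. */
    double inst_issued   = 4737053.0;
    double inst_executed = 3900426.0;

    /* Assumed metric definition: replay overhead is the fraction of issued
       instructions that were replays, i.e. (issued - executed) / issued. */
    double replay = (inst_issued - inst_executed) / inst_issued;

    printf("replay overhead = %.1f%%\n", 100.0 * replay);  /* about 17.7% */
    return 0;
}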

Any ideas? I don't think it is a hardware malfunction, since I have two boards in the same machine and both show the same behavior.