Strange result when computing memory bandwidth from nvprof output

Time: 2017-09-05 22:11:18

Tags: c++ cuda

How do I compute GPU memory bandwidth given:

  1. The size of the data sample (GB).
  2. The kernel execution time (from nvprof output).
  3. GPU: GTX 1050 Ti
    CUDA: 8.0
    OS: Windows 10
    IDE: Visual Studio 2015

Normally I would use this formula: bandwidth [GB/s] = data_size [GB] / average_time [s]

But when I apply that formula to the get_mem_kernel() kernel, I get a result that looks wrong: 441.93 [GB/s]
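
For reference, this is how that figure falls out of the formula, using the 100 MB buffer and the 226.28us average time that nvprof reports for get_mem_kernel below (a minimal host-side sketch, with the numbers hard-coded from this particular run):

    #include <iostream>
    
    int main()
    {
        // values taken from the program output and the nvprof summary below
        const double data_size_bytes = 100000000.0; // 12,500,000 doubles * 8 bytes
        const double average_time_s  = 226.28e-6;   // average get_mem_kernel time
    
        // bandwidth [GB/s] = data_size [GB] / average_time [s]
        const double bandwidth = (data_size_bytes / 1e9) / average_time_s;
        std::cout << bandwidth << " GB/s" << std::endl; // ~441.9 GB/s, far above the 112 GB/s spec
        return 0;
    }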

I think this result is wrong, because the technical specification of the GTX 1050 Ti lists a global memory bandwidth of 112 [GB/s].

Where is my mistake, or is there something else I don't understand?

Example code:

    // cpp libs:
    #include <iostream>
    #include <sstream>
    #include <fstream>
    #include <iomanip>
    #include <stdexcept>
    
    // cuda libs:
    #include <cuda_runtime.h>
    #include <device_launch_parameters.h>
    
    #define ERROR_CHECK(CHECK_) if (CHECK_ != cudaError_t::cudaSuccess) { std::cout << "cuda error" << std::endl; throw std::runtime_error("cuda error"); }
    
    using data_type = double;
    
    template <typename T> constexpr __forceinline__
    T div_s(T dividend, T divisor)
    {
        using P = double;
        return static_cast <T> (static_cast <P> (dividend + divisor - 1) / static_cast <P> (divisor));
    }
    
    __global__
    void set_mem_kernel(const unsigned int size, data_type * const in_data)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < size)
        {
            in_data[idx] = static_cast <data_type> (idx);
        }
    }
    
    __global__
    void get_mem_kernel(const unsigned int size, data_type * const in_data)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        data_type val = 0;
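        // note: the value loaded into val below is never written back or otherwise used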
        if (idx < size)
        {
            val = in_data[idx];
        }
    }
    
    struct quit_program
    {
    public:
        ~quit_program()
        {
            try
            {
                ERROR_CHECK(cudaDeviceReset());
            }
            catch (...) {}
        }
    } quit;
    
    int main()
    {
        unsigned int size = 12500000; // 100 mb;
        size_t       byte = size * sizeof(data_type);
    
        dim3 threads (256, 1, 1);
        dim3 blocks  (div_s(size, threads.x), 1, 1);
    
        std::cout << size << std::endl;
        std::cout << byte << std::endl;
        std::cout << std::endl;
    
        std::cout << threads.x << std::endl;
        std::cout << blocks.x  << std::endl;
        std::cout << std::endl;
    
        // data:
        data_type * d_data = nullptr;
        ERROR_CHECK(cudaMalloc(&d_data, byte));
    
        for (int i = 0; i < 20000; i++)
        {
            set_mem_kernel <<<blocks, threads>>> (size, d_data);
            ERROR_CHECK(cudaDeviceSynchronize());
            ERROR_CHECK(cudaGetLastError());
    
            get_mem_kernel <<<blocks, threads>>> (size, d_data);
            ERROR_CHECK(cudaDeviceSynchronize());
            ERROR_CHECK(cudaGetLastError());
        }
    
        // Exit:
        ERROR_CHECK(cudaFree(d_data));
        ERROR_CHECK(cudaDeviceReset());
        return EXIT_SUCCESS;
    }
    

nvprof results:

    D:\Dev\visual_studio\nevada_test_site\x64\Release>nvprof ./cuda_test.exe
    12500000
    100000000
    
    256
    48829
    
    ==10508== NVPROF is profiling process 10508, command: ./cuda_test.exe
    ==10508== Warning: Unified Memory Profiling is not supported on the current configuration because a pair of devices without peer-to-peer support is detected on this multi-GPU setup. When peer mappings are not available, system falls back to using zero-copy memory. It can cause kernels, which access unified memory, to run slower. More details can be found at: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-managed-memory
    ==10508== Profiling application: ./cuda_test.exe
    ==10508== Profiling result:
    Time(%)      Time     Calls       Avg       Min       Max  Name
     81.12%  19.4508s     20000  972.54us  971.22us  978.32us  set_mem_kernel(unsigned int, double*)
     18.88%  4.52568s     20000  226.28us  224.45us  271.14us  get_mem_kernel(unsigned int, double*)
    
    ==10508== API calls:
    Time(%)      Time     Calls       Avg       Min       Max  Name
     97.53%  26.8907s     40000  672.27us  247.98us  1.7566ms  cudaDeviceSynchronize
      1.61%  443.32ms     40000  11.082us  5.8340us  183.43us  cudaLaunch
      0.51%  141.10ms         1  141.10ms  141.10ms  141.10ms  cudaMalloc
      0.16%  43.648ms         1  43.648ms  43.648ms  43.648ms  cudaDeviceReset
      0.08%  22.182ms     80000     277ns       0ns  121.07us  cudaSetupArgument
      0.06%  15.437ms     40000     385ns       0ns  24.433us  cudaGetLastError
      0.05%  12.929ms     40000     323ns       0ns  57.253us  cudaConfigureCall
      0.00%  1.1932ms        91  13.112us       0ns  734.09us  cuDeviceGetAttribute
      0.00%  762.17us         1  762.17us  762.17us  762.17us  cudaFree
      0.00%  359.93us         1  359.93us  359.93us  359.93us  cuDeviceGetName
      0.00%  8.3880us         1  8.3880us  8.3880us  8.3880us  cuDeviceTotalMem
      0.00%  2.5520us         3     850ns     364ns  1.8230us  cuDeviceGetCount
      0.00%  1.8240us         3     608ns     365ns  1.0940us  cuDeviceGet
    

CUDA Samples\v8.0\1_Utilities\bandwidthTest results:

    [CUDA Bandwidth Test] - Starting...
    Running on...
    
     Device 0: GeForce GTX 1050 Ti
     Quick Mode
    
     Host to Device Bandwidth, 1 Device(s)
     PINNED Memory Transfers
       Transfer Size (Bytes)        Bandwidth(MB/s)
       33554432                     11038.4
    
     Device to Host Bandwidth, 1 Device(s)
     PINNED Memory Transfers
       Transfer Size (Bytes)        Bandwidth(MB/s)
       33554432                     11469.6
    
     Device to Device Bandwidth, 1 Device(s)
     PINNED Memory Transfers
       Transfer Size (Bytes)        Bandwidth(MB/s)
       33554432                     95214.0
    
    Result = PASS
    
    NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
    

1 answer:

Answer 0: (score: 0)

The compiler was optimizing away the memory reads. Robert Crovella pointed this out. Thank you for the help - I would never have guessed it.

In detail:
The compiler was optimizing away the unused val variable and, by extension, the memory reads themselves.
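
In other words, the loads in get_mem_kernel are dead code, so its 226.28us timing does not measure memory traffic at all. For comparison, set_mem_kernel moves the same 100 MB in 972.54us, which works out to roughly 103 GB/s and fits within the 112 GB/s specification.

One common way to keep the read from being optimized away is to make the loaded value contribute to an observable result, for example through a conditional store that never fires for this data but that the compiler cannot prove away at compile time. A minimal sketch (out_data is a hypothetical extra output buffer, not part of the original code, and the launch in main would need to pass it):

    __global__
    void get_mem_kernel(const unsigned int size, data_type * const in_data, data_type * const out_data)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < size)
        {
            data_type val = in_data[idx];
            // set_mem_kernel never writes negative values, so this store never
            // executes, but its possibility forces the load to actually happen
            if (val < 0) { out_data[idx] = val; }
        }
    }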