CUDA: cufftExecR2C - unnecessary memory copies

Date: 2018-07-01 12:22:10

Tags: cuda gpu cufft

I have been experimenting with CUDA and observed that data is copied from host to device when I call

cufftExecR2C(plan, src, dst);

I don't understand this, since my src pointer is a valid handle to the device memory I want to transform. Before calling cufftExecR2C(...), I initialized the arguments as follows:

float* src;
cudaMalloc((void**)&src, image_rows * image_cols * sizeof(float));
cudaMemcpy(src, image.data(), image_rows * image_cols * sizeof(float), cudaMemcpyHostToDevice);

cufftComplex* dst;
cudaMalloc((void**)&dst, image_rows * (image_cols/2+1) * sizeof(cufftComplex));

cufftHandle plan;
cufftPlan2d(&plan, image_rows, image_cols, CUFFT_R2C);

Launching the NVIDIA profiler (nvprof) - considering only the FFT - I get the following result:

...
cudaProfilerStart();
cufftExecR2C(plan, src, dst);
cudaProfilerStop();
...

[nvprof screenshot: the GPU timeline shows three host-to-device memcpy calls around the cufftExecR2C launch]

I would like to avoid these three unnecessary host-to-device copy calls. I don't see why CUDA performs these additional copies (and in particular why host-to-device - the data is already in device memory).

The program is executed with CUDA 8.0 on a GeForce GT 540M.

Thanks!

1 Answer:

Answer 0 (score: 1)

However seriously you maintain the claim that cuFFT performs unnecessary data transfers during the execution of cufftExecR2C, it can be shown that this is not the case.

Consider the following example, stitched together from the code snippets you supplied in your question:

#include "cufft.h"
#include "cuda_profiler_api.h"
#include <random>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <functional>

int main()
{
  const int image_rows = 1600, image_cols = 2048;

  std::random_device rnd_device;
  std::mt19937 mersenne_engine {rnd_device()};
  std::uniform_real_distribution<float> dist {0.0, 255.0};

  auto gen = [&dist, &mersenne_engine](){
                 return dist(mersenne_engine);
             };

  std::vector<float> image(image_rows * image_cols);
  std::generate(std::begin(image), std::end(image), gen);

  float* src;
  cudaMalloc((&src),  image_rows * image_cols  * sizeof(float) );
  cudaMemcpy(src, &image[0],  image_rows * image_cols  * sizeof(float)  , cudaMemcpyHostToDevice);
  cufftComplex* dst;
  cudaMalloc((void**)&dst    , image_rows * (image_cols/2+1) * sizeof(cufftComplex) );

  cufftHandle plan;
  cufftPlan2d(&plan, image_rows, image_cols, CUFFT_R2C);

  cudaProfilerStart();
  cufftExecR2C(plan, src, dst);
  cudaProfilerStop();

  return 0;
}

I have substituted an array of random values for your image. Now let's compile and profile it:

$ nvcc -std=c++11 -o unecessary unecessary.cu -lcufft
$ nvprof ./unecessary
==10314== NVPROF is profiling process 10314, command: ./unecessary
==10314== Profiling application: ./unecessary
==10314== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   74.39%  2.2136ms         1  2.2136ms  2.2136ms  2.2136ms  [CUDA memcpy HtoD]
                    6.66%  198.30us         1  198.30us  198.30us  198.30us  void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>)
                    6.50%  193.47us         1  193.47us  193.47us  193.47us  void spRadix0025B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=64, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>)
                    6.25%  185.98us         1  185.98us  185.98us  185.98us  void spVector1024C::kernelMem<unsigned int, float, fftDirection_t=-1, unsigned int=2, unsigned int=5, LUT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_t, unsigned int, float>)
                    6.20%  184.38us         1  184.38us  184.38us  184.38us  __nv_static_45__32_spRealComplex_compute_70_cpp1_ii_1f28721c__ZN13spRealComplex24postprocessC2C_kernelMemIjfL9fftAxii_t3EEEvP7ComplexIT0_EPKS4_T_15coordDivisors_tIS8_E7coord_tIS8_ESC_S8_S3_10callback_t

[API calls removed for brevity]

It looks like you were right! There is a huge memcpy in the GPU summary statistics!

So let's profile again, this time correctly:

$ nvprof --profile-from-start off ./unecessary
==11674== NVPROF is profiling process 11674, command: ./unecessary
==11674== Profiling application: ./unecessary
==11674== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   25.96%  196.28us         1  196.28us  196.28us  196.28us  void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>)
                   25.25%  190.91us         1  190.91us  190.91us  190.91us  void spRadix0025B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=64, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>)
                   24.65%  186.39us         1  186.39us  186.39us  186.39us  void spVector1024C::kernelMem<unsigned int, float, fftDirection_t=-1, unsigned int=2, unsigned int=5, LUT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_t, unsigned int, float>)
                   24.15%  182.59us         1  182.59us  182.59us  182.59us  __nv_static_45__32_spRealComplex_compute_70_cpp1_ii_1f28721c__ZN13spRealComplex24postprocessC2C_kernelMemIjfL9fftAxii_t3EEEvP7ComplexIT0_EPKS4_T_15coordDivisors_tIS8_E7coord_tIS8_ESC_S8_S3_10callback_t

[API calls again removed for brevity]

The memcpy is gone. All the profiler now reports are the four kernel launches associated with the execution of the transform. There are no memory transfers. The transfer reported in the original profiler output is the host-to-device transfer performed at the start of the program, and it has nothing to do with the cuFFT call. It was included because nvprof defaults to profiling from the beginning of program execution, so the initial cudaProfilerStart call had no effect: profiling was already underway. You can read about the correct way to profile code in the toolchain documentation here.
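Putting the two halves of the recipe side by side (this is just a restatement of what the example above already does, not new API usage): the cudaProfilerStart/cudaProfilerStop pair brackets the region of interest in the code,

cudaProfilerStart();              // profiling becomes active here
cufftExecR2C(plan, src, dst);     // only the transform is captured
cudaProfilerStop();               // profiling is deactivated again

and the command-line flag tells nvprof to stay idle until that start call is reached:

$ nvprof --profile-from-start off ./unecessary

Without the flag, the start call is a no-op, which is exactly what produced the misleading first set of results.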

In the absence of the promised MCVE, I will offer my own hypothesis - you are not using the profiler correctly, and the transfers you reported are transfers that actually occur elsewhere in your code and are included in the profiler output, but are completely unrelated to the operation of cuFFT.
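As a follow-up suggestion (mine, not something the original answer ran): one way to test that hypothesis is nvprof's GPU trace mode, which lists every individual memory transfer and kernel launch with timestamps and sizes, making it easy to match each reported memcpy to the call site that actually triggered it:

$ nvprof --profile-from-start off --print-gpu-trace ./unecessary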