My image size is 1920 x 1080. I am doing the H2D transfer, the processing and the D2H transfer using three CUDA streams, with each stream responsible for 1/3 of the total data. By understanding the concepts of SMs, SPs and warps I was able to optimise the block size and the number of threads per block. If only simple computations have to be done in the kernel, the code runs satisfactorily (it takes 2 ms). The simple-computation code below reads the R, G and B values from the source image and then writes those values back into the same source image:
ptr_source[numChannels* (iw*y + x) + 0] = ptr_source[numChannels* (iw*y + x) + 0];
ptr_source[numChannels* (iw*y + x) + 1] = ptr_source[numChannels* (iw*y + x) + 1];
ptr_source[numChannels* (iw*y + x) + 2] = ptr_source[numChannels* (iw*y + x) + 2];
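For reference, here is a minimal sketch of the three-stream split described above; the kernel body, block size and buffer names are placeholders and assumptions, not my actual code:
#include <cuda_runtime.h>
// Hypothetical sketch: split a 1920 x 1080, 3-channel image into three horizontal
// strips, one CUDA stream per strip, so H2D copy, kernel and D2H copy of
// different strips can overlap.
__global__ void processStrip(unsigned char *ptr, int stripRows, int iw, int numChannels)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < iw && y < stripRows)
    {
        int idx = numChannels * (iw * y + x);
        // placeholder work: the same identity copy as the simple code above
        ptr[idx + 0] = ptr[idx + 0];
        ptr[idx + 1] = ptr[idx + 1];
        ptr[idx + 2] = ptr[idx + 2];
    }
}
int main()
{
    const int iw = 1920, ih = 1080, numChannels = 3, nStreams = 3;
    const int stripRows = ih / nStreams;
    const size_t stripBytes = (size_t)iw * stripRows * numChannels;
    unsigned char *h_image, *d_image;
    cudaMallocHost((void **)&h_image, stripBytes * nStreams); // pinned, so async copies can overlap
    cudaMalloc((void **)&d_image, stripBytes * nStreams);
    cudaStream_t streams[nStreams];
    for (int i = 0; i < nStreams; ++i)
        cudaStreamCreate(&streams[i]);
    dim3 block(32, 8); // placeholder block size
    dim3 grid((iw + block.x - 1) / block.x, (stripRows + block.y - 1) / block.y);
    for (int i = 0; i < nStreams; ++i)
    {
        size_t off = (size_t)i * stripBytes;
        cudaMemcpyAsync(d_image + off, h_image + off, stripBytes, cudaMemcpyHostToDevice, streams[i]);
        processStrip<<<grid, block, 0, streams[i]>>>(d_image + off, stripRows, iw, numChannels);
        cudaMemcpyAsync(h_image + off, d_image + off, stripBytes, cudaMemcpyDeviceToHost, streams[i]);
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < nStreams; ++i)
        cudaStreamDestroy(streams[i]);
    cudaFreeHost(h_image);
    cudaFree(d_image);
    return 0;
}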
But I have to perform more computations, which are independent of all other threads, and the computation time increases by 6 ms, which is far too much for my application. I have already tried declaring the most commonly used constant values in constant memory. The code for these computations is shown below. In that code I again read the R, G and B values, then compute new R, G and B values by multiplying the old ones with some constants, and finally put the new R, G and B values back into the same source image at the corresponding locations.
__constant__ int iw = 1080;
__constant__ int ih = 1920;
__constant__ int numChannels = 3;

__global__ void cudaKernel(unsigned char *ptr_source, int numCudaStreams)
{
    // Calculate our pixel's location
    int x = (blockIdx.x * blockDim.x) + threadIdx.x;
    int y = (blockIdx.y * blockDim.y) + threadIdx.y;

    // Operate only if we are in the correct boundaries
    if (x >= 0 && x < iw && y >= 0 && y < ih / numCudaStreams)
    {
        const int index_b = numChannels * (iw * y + x) + 0;
        const int index_g = numChannels * (iw * y + x) + 1;
        const int index_r = numChannels * (iw * y + x) + 2;

        // GET VALUES: get the R, G and B values from the source image
        unsigned char b_val = ptr_source[index_b];
        unsigned char g_val = ptr_source[index_g];
        unsigned char r_val = ptr_source[index_r];

        float float_r_val = ((1.574090) * (float)r_val + (0.088825) * (float)g_val + (-0.1909) * (float)b_val);
        float float_g_val = ((-0.344198) * (float)r_val + (1.579802) * (float)g_val + (-1.677604) * (float)b_val);
        float float_b_val = ((-1.012951) * (float)r_val + (-1.781485) * (float)g_val + (2.404436) * (float)b_val);

        unsigned char dst_r_val = (float_r_val > 255.0f) ? 255 : static_cast<unsigned char>(float_r_val);
        unsigned char dst_g_val = (float_g_val > 255.0f) ? 255 : static_cast<unsigned char>(float_g_val);
        unsigned char dst_b_val = (float_b_val > 255.0f) ? 255 : static_cast<unsigned char>(float_b_val);

        // PUT VALUES: put the newly calculated values of R, G and B
        ptr_source[index_b] = dst_b_val;
        ptr_source[index_g] = dst_g_val;
        ptr_source[index_r] = dst_r_val;
    }
}
Question: I think that transferring the image segment (i.e. ptr_src) to shared memory would help, but I am very confused about how to do it. I mean, the scope of shared memory is only one block, so how do I manage the transfer of an image segment into shared memory?
PS: My GPU is a Quadro K2000, compute capability 3.0, 2 SMs with 192 SPs each.
Answer 0 (score: 2)
I will add this code without too much comment for now:
const int iw = 1080;
const int ih = 1920;
const int numChannels = 3;

__global__ void cudaKernel3(unsigned char *ptr_source, int n)
{
    // Grid-stride loop: each thread processes many pixels
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;

    uchar3 * p = reinterpret_cast<uchar3 *>(ptr_source);
    for (; idx < n; idx += stride) {
        uchar3 vin = p[idx];            // one 3-byte load per pixel
        unsigned char b_val = vin.x;
        unsigned char g_val = vin.y;
        unsigned char r_val = vin.z;

        float float_r_val = ((1.574090f) * (float)r_val + (0.088825f) * (float)g_val + (-0.1909f) * (float)b_val);
        float float_g_val = ((-0.344198f) * (float)r_val + (1.579802f) * (float)g_val + (-1.677604f) * (float)b_val);
        float float_b_val = ((-1.012951f) * (float)r_val + (-1.781485f) * (float)g_val + (2.404436f) * (float)b_val);

        // Write back in the same B,G,R byte order the input uses
        uchar3 vout;
        vout.x = (unsigned char)fminf(255.f, float_b_val);
        vout.y = (unsigned char)fminf(255.f, float_g_val);
        vout.z = (unsigned char)fminf(255.f, float_r_val);
        p[idx] = vout;
    }
}
// Original kernel with a bit of template magic to conditionally correct
// accidental double precision arithmetic removed for brevity
int main()
{
    const size_t sz = iw * ih * numChannels;
    typedef unsigned char uchar;

    // Fill a host image with a repeating 0..128 ramp
    uchar * image = new uchar[sz];
    uchar v = 0;
    for (size_t i = 0; i < sz; i++) {
        image[i] = v;
        v = (v >= 128) ? 0 : (uchar)(v + 1);
    }

    uchar * image_;
    cudaMalloc((void **)&image_, sz);
    cudaMemcpy(image_, image, sz, cudaMemcpyHostToDevice);

    dim3 blocksz(32, 32);
    dim3 gridsz(1 + iw / blocksz.x, 1 + ih / blocksz.y);

    cudaKernel<1><<<gridsz, blocksz>>>(image_, 1);
    cudaDeviceSynchronize();

    cudaMemcpy(image_, image, sz, cudaMemcpyHostToDevice);
    cudaKernel<0><<<gridsz, blocksz>>>(image_, 1);
    cudaDeviceSynchronize();

    cudaMemcpy(image_, image, sz, cudaMemcpyHostToDevice);
    cudaKernel3<<<16, 512>>>(image_, iw * ih);
    cudaDeviceSynchronize();

    cudaDeviceReset();
    return 0;
}
The idea here is to have only as many threads as can actually be resident on the device and let them process the whole image, with each thread emitting multiple outputs. Block scheduling in CUDA is very cheap, but it is not free, and neither are the index calculations and all the other "setup" code a thread needs before it does any useful work. So the idea is simply to amortise those costs over many outputs. Because your image is just linear memory and the operation you perform on each entry is completely independent, there is no point in using a 2D grid and 2D indexing; it is just extra setup code that slows things down. You will also see the use of a vector type (uchar3), which can improve memory throughput by reducing the number of memory transactions per pixel.
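If you do not want to hard-code the <<<16, 512>>> launch used above, a sketch of sizing the grid with the occupancy API (available since CUDA 6.5) might look like the fragment below; the block size of 512 is just an assumption, and image_, iw and ih refer to the names in the listing above:
// Pick a grid size for the grid-stride kernel that roughly fills the GPU
int blockSize = 512;                        // assumed block size
int device = 0, numSm = 0, blocksPerSm = 0;
cudaGetDevice(&device);
cudaDeviceGetAttribute(&numSm, cudaDevAttrMultiProcessorCount, device);
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSm, cudaKernel3, blockSize, 0);
int gridSize = blocksPerSm * numSm;         // stays small on a 2-SM Quadro K2000
cudaKernel3<<<gridSize, blockSize>>>(image_, iw * ih);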
Also note that on a GPU with double precision capability, double precision constants will be compiled and generate 64-bit floating point arithmetic. There is a 2x to 12x performance penalty for double precision versus single precision, depending on your GPU. When I compile the kernel you posted and look at the PTX that the CUDA 7 release compiler emits for the sm_30 architecture (the same as your GPU), I see this in the pixel-calculation code:
cvt.f64.f32 %fd1, %f4;
mul.f64 %fd2, %fd1, 0d3FF92F78FEEF5EC8;
ld.global.u8 %rs9, [%rd1+1];
cvt.rn.f32.u16 %f5, %rs9;
cvt.f64.f32 %fd3, %f5;
fma.rn.f64 %fd4, %fd3, 0d3FB6BD3C36113405, %fd2;
ld.global.u8 %rs10, [%rd1];
cvt.rn.f32.u16 %f6, %rs10;
cvt.f64.f32 %fd5, %f6;
fma.rn.f64 %fd6, %fd5, 0dBFC86F694467381D, %fd4;
cvt.rn.f32.f64 %f1, %fd6;
mul.f64 %fd7, %fd1, 0dBFD607570C564F98;
fma.rn.f64 %fd8, %fd3, 0d3FF946DE76427C7C, %fd7;
fma.rn.f64 %fd9, %fd5, 0dBFFAD7774ABA3876, %fd8;
cvt.rn.f32.f64 %f2, %fd9;
mul.f64 %fd10, %fd1, 0dBFF0350C1B97353B;
fma.rn.f64 %fd11, %fd3, 0dBFFC80F66A550870, %fd10;
fma.rn.f64 %fd12, %fd5, 0d40033C48F10A99B7, %fd11;
cvt.rn.f32.f64 %f3, %fd12;
Note that everything is promoted to 64-bit floating point, all the multiplications are done in 64 bits with the floating point constants in IEEE 754 double format, and the results are then demoted back to 32 bits. This is a real performance cost and you should be careful to avoid it by properly defining your floating point constants as single precision.
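For illustration, a minimal fragment showing the difference; r_val here stands for the unsigned char read in the kernel above:
// Fragment for illustration only
__device__ float scale_double(unsigned char r_val)
{
    // 1.574090 is a double literal, so the compiler emits cvt.f64 / mul.f64 / cvt.f32 as in the PTX above
    return 1.574090 * (float)r_val;
}
__device__ float scale_single(unsigned char r_val)
{
    // 1.574090f keeps the whole expression in 32-bit arithmetic (a single-precision multiply)
    return 1.574090f * (float)r_val;
}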
When run on a GT 620M (a 2-SM Fermi mobile part, running on battery), we get the following profile data from nvprof:
Time(%) Time Calls Avg Min Max Name
39.44% 17.213ms 1 17.213ms 17.213ms 17.213ms void cudaKernel<int=1>(unsigned char*, int)
35.02% 15.284ms 3 5.0947ms 5.0290ms 5.2022ms [CUDA memcpy HtoD]
18.51% 8.0770ms 1 8.0770ms 8.0770ms 8.0770ms void cudaKernel<int=0>(unsigned char*, int)
7.03% 3.0662ms 1 3.0662ms 3.0662ms 3.0662ms cudaKernel3(unsigned char*, int)
==5504== API calls:
Time(%) Time Calls Avg Min Max Name
95.37% 1.01433s 1 1.01433s 1.01433s 1.01433s cudaMalloc
3.17% 33.672ms 3 11.224ms 4.8036ms 19.039ms cudaDeviceSynchronize
1.29% 13.706ms 3 4.5687ms 4.5423ms 4.5924ms cudaMemcpy
0.12% 1.2560ms 83 15.132us 427ns 541.81us cuDeviceGetAttribute
0.03% 329.28us 3 109.76us 91.086us 139.41us cudaLaunch
0.02% 209.54us 1 209.54us 209.54us 209.54us cuDeviceGetName
0.00% 23.520us 1 23.520us 23.520us 23.520us cuDeviceTotalMem
0.00% 13.685us 3 4.5610us 2.9930us 7.6980us cudaConfigureCall
0.00% 9.4090us 6 1.5680us 428ns 3.4210us cudaSetupArgument
0.00% 5.1320us 2 2.5660us 2.5660us 2.5660us cuDeviceGetCount
0.00% 2.5660us 2 1.2830us 1.2830us 1.2830us cuDeviceGet
And when run on something bigger (a GTX 670 Kepler device with 7 SMX):
==9442== NVPROF is profiling process 9442, command: ./a.out
==9442== Profiling application: ./a.out
==9442== Profiling result:
Time(%) Time Calls Avg Min Max Name
65.68% 2.6976ms 3 899.19us 784.56us 1.0829ms [CUDA memcpy HtoD]
20.84% 856.05us 1 856.05us 856.05us 856.05us void cudaKernel<int=1>(unsigned char*, int)
7.90% 324.64us 1 324.64us 324.64us 324.64us void cudaKernel<int=0>(unsigned char*, int)
5.58% 229.12us 1 229.12us 229.12us 229.12us cudaKernel3(unsigned char*, int)
==9442== API calls:
Time(%) Time Calls Avg Min Max Name
55.88% 45.443ms 1 45.443ms 45.443ms 45.443ms cudaMalloc
38.16% 31.038ms 1 31.038ms 31.038ms 31.038ms cudaDeviceReset
3.55% 2.8842ms 3 961.40us 812.99us 1.1982ms cudaMemcpy
1.92% 1.5652ms 3 521.72us 294.16us 882.27us cudaDeviceSynchronize
0.32% 262.49us 83 3.1620us 150ns 110.94us cuDeviceGetAttribute
0.09% 74.253us 3 24.751us 15.575us 41.784us cudaLaunch
0.03% 22.568us 1 22.568us 22.568us 22.568us cuDeviceTotalMem
0.03% 20.815us 1 20.815us 20.815us 20.815us cuDeviceGetName
0.01% 7.3900us 6 1.2310us 200ns 5.3890us cudaSetupArgument
0.00% 3.6510us 2 1.8250us 674ns 2.9770us cuDeviceGetCount
0.00% 3.1440us 3 1.0480us 516ns 1.9410us cudaConfigureCall
0.00% 2.1600us 2 1.0800us 985ns 1.1750us cuDeviceGet
So, believe it or not, just by fixing elementary mistakes and using sensible design patterns you can get a large speedup, on both the smaller and the larger device.
Answer 1 (score: 1)
Shared memory will not help in your case; your memory accesses are not coalesced.
You can try the following: replacing your unsigned char *ptr_source with a uchar3 * should help your threads access contiguous data in the array. uchar3 simply means: 3 contiguous unsigned chars.
Since threads in the same warp execute the same instruction at the same time, you will have this kind of access pattern.
Suppose you try to access the memory at address 0x3F0000:
thread 1 copies data at : 0x3F0000 then 0x3F0001 then 0x3F0002
thread 2 copies data at : 0x3F0003 then 0x3F0004 then 0x3F0005
In each of those loads the addresses touched by adjacent threads (0x3F0000, 0x3F0003, ...) are not contiguous, so you will get poor performance accessing the data this way.
With uchar3:
thread 1 : 0x3F0000 to 0x3F0002
thread 2 : 0x3F0003 to 0x3F0005
As each thread now copies contiguous data, the memory controller can copy it quickly.
You can also replace:
(float_r_val > 255.0f) ? 255 : static_cast<unsigned char>(float_r_val);
with
float_r_val = fmin(255.0f, float_r_val);
That should give you a kernel like this:
__global__ void cudaKernel(uchar3 *ptr_source, int numCudaStreams)
{
    // Calculate our pixel's location
    int x = (blockIdx.x * blockDim.x) + threadIdx.x;
    int y = (blockIdx.y * blockDim.y) + threadIdx.y;

    // Operate only if we are in the correct boundaries
    if (x >= 0 && x < iw && y >= 0 && y < ih / numCudaStreams)
    {
        const int index = (iw * y + x);

        // One 3-byte read per pixel; val.x = B, val.y = G, val.z = R, as in the original layout
        uchar3 val = ptr_source[index];

        float float_r_val = ((1.574090f) * (float)val.z + (0.088825f) * (float)val.y + (-0.1909f) * (float)val.x);
        float float_g_val = ((-0.344198f) * (float)val.z + (1.579802f) * (float)val.y + (-1.677604f) * (float)val.x);
        float float_b_val = ((-1.012951f) * (float)val.z + (-1.781485f) * (float)val.y + (2.404436f) * (float)val.x);

        // Write back in the same B,G,R order
        ptr_source[index] = make_uchar3( (unsigned char)fminf(255.0f, float_b_val),
                                         (unsigned char)fminf(255.0f, float_g_val),
                                         (unsigned char)fminf(255.0f, float_r_val) );
    }
}
I hope these changes will improve the performance.