Why can't my data fit into a CUDA texture object?

Asked: 2019-01-22 14:21:07

Tags: c++ cuda

I'm trying to fill a CUDA texture object with some data, but the call to cudaCreateTextureObject fails with the following error (edit: on both a GTX 1080TI and an RTX 2080TI):

GPU ERROR! 'invalid argument' (err code 11)

If I put less data into the texture it works fine, so my guess is that my calculation of how much data I can fit into a texture is off.

My thought process is as follows (working code follows below):

My data comes in the form of (76, 76) images, where each pixel is a float. What I would like to do is store a column of images in one texture object. As I understand it, cudaMallocPitch is the way to do this.

When computing the number of images I can store in one texture, I use the following formula to determine how much space a single image requires:

GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float)

The first parameter should be the memory pitch of a GTX 1080TI card (512 bytes). The number of bytes I can store in a 1D texture is 2^27, per the documentation. When I divide the latter by the former I get 862.3, which I assume is the number of images I can store in one texture object. However, when I try to store more than 855 images in the buffer, the program crashes with the above error.

Here is the code:

The following main function (a) sets up all relevant parameters, (b) allocates the memory using cudaMallocPitch, and (c) configures and creates the CUDA texture object:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <cassert>

#define GTX_1080TI_MEM_PITCH   512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27

//=====================================================================[ util ]

// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit( EXIT_FAILURE );
    }
}

// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
    cudaError cu_err = cudaGetLastError();
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit(EXIT_FAILURE);
    }
}

//=====================================================================[ main ]

int main(){

    // setup
    unsigned int img_dim_x = 76;
    unsigned int img_dim_y = 76;
    unsigned int img_num   = 856;  // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?

    unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
    unsigned int img_num_per_tex  = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;

    fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
    fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );

    // allocate pitched memory
    size_t img_tex_pitch;
    float* d_img_tex_data;

    CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );

    assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
    fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n", 
        img_num*img_dim_x*img_dim_y*sizeof(float), 
        img_num*img_tex_pitch*img_dim_y*sizeof(float), 
        img_tex_pitch,
        GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
        GTX_1080TI_1DTEX_WIDTH );

    // generic resource descriptor
    cudaResourceDesc res_desc;
    memset(&res_desc, 0, sizeof(res_desc));
    res_desc.resType = cudaResourceTypePitch2D;
    res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
    res_desc.res.pitch2D.devPtr = d_img_tex_data;
    res_desc.res.pitch2D.width  = img_dim_x;
    res_desc.res.pitch2D.height = img_dim_y*img_num;
    res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;

    // texture descriptor
    cudaTextureDesc tex_desc;
    memset(&tex_desc, 0, sizeof(tex_desc));
    tex_desc.addressMode[0] = cudaAddressModeClamp;
    tex_desc.addressMode[1] = cudaAddressModeClamp;
    tex_desc.filterMode     = cudaFilterModeLinear;  // for linear interpolation (NOTE: this breaks normal integer indexing!)
    tex_desc.readMode       = cudaReadModeElementType;
    tex_desc.normalizedCoords = false;  // we want to index using [0;img_dim] rather than [0;1]              

    // make sure there are no lingering errors
    GPU_ERR_CHK();
    fprintf(stderr, "No CUDA error until now..\n");

    // create texture object
    cudaTextureObject_t img_tex_obj;
    CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );

    fprintf(stderr, "bluppi\n");
}

This should crash in the call to cudaCreateTextureObject. However, if the img_num parameter (at the start of main) is changed from 856 to 855, the code should execute successfully. (edit: The expected behavior would be for the code to run with a value of 862 but to fail with 863, since the latter would actually require more bytes than the documented buffer size provides.)

Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 1)

Since you are using a 2D texture here, the number of bytes that can be stored in a 1D texture (the "width") is not relevant.

2D textures may have different characteristics depending on the type of memory that backs the texture. Two examples are linear memory and CUDA arrays. You have chosen to use a linear-memory backing (that provided by cudaMalloc* operations other than cudaMallocArray).

The primary problem you are running into is the maximum texture height. To find out what it is, we can refer to Table 14 in the programming guide, which lists:

Maximum width and height for a 2D texture reference bound to linear memory: 65000 x 65000

You are exceeding the 65000 limit in the height dimension when going from 855 to 856 images, each 76 rows tall: 856 * 76 = 65056, while 855 * 76 = 64980.

"But wait," you say, "that Table 14 entry refers to a texture reference, and I am using a texture object."

You are correct, and Table 14 does not explicitly list the corresponding limit for texture objects. In that case, we must refer to the device properties readable from the device at runtime with cudaGetDeviceProperties(). Reviewing the data available there, we see the following readable item:

maxTexture2DLinear[3] contains the maximum 2D texture dimensions for 2D textures bound to pitch linear memory.

(I suspect the 3 is a typo, but no matter, we only need the first 2 values.)

Those are the values we want to abide by. If we modify your code to respect that limit, there is no problem:

$ cat t382.cu
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <cassert>

#define GTX_1080TI_MEM_PITCH   512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27

//=====================================================================[ util ]

// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit( EXIT_FAILURE );
    }
}

// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
    cudaError cu_err = cudaGetLastError();
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit(EXIT_FAILURE);
    }
}

//=====================================================================[ main ]

int main(){

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    size_t max2Dtexturelinearwidth = prop.maxTexture2DLinear[0];  // texture x dimension
    size_t max2Dtexturelinearheight = prop.maxTexture2DLinear[1]; // texture y dimension
    fprintf( stderr, "maximum 2D linear texture dimensions (width,height): %lu,%lu\n", max2Dtexturelinearwidth, max2Dtexturelinearheight);



    // setup
    unsigned int img_dim_x = 76;
    unsigned int img_dim_y = 76;
    //unsigned int img_num   = 856;  // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?
    unsigned int img_num = max2Dtexturelinearheight/img_dim_y;
    fprintf( stderr, "maximum number of images per texture: %u\n", img_num);

    unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
    unsigned int img_num_per_tex  = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;

    fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
    fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );

    // allocate pitched memory
    size_t img_tex_pitch;
    float* d_img_tex_data;

    CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );

    assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
    fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n",
        img_num*img_dim_x*img_dim_y*sizeof(float),
        img_num*img_tex_pitch*img_dim_y*sizeof(float),
        img_tex_pitch,
        GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
        GTX_1080TI_1DTEX_WIDTH );

    // generic resource descriptor
    cudaResourceDesc res_desc;
    memset(&res_desc, 0, sizeof(res_desc));
    res_desc.resType = cudaResourceTypePitch2D;
    res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
    res_desc.res.pitch2D.devPtr = d_img_tex_data;
    res_desc.res.pitch2D.width  = img_dim_x;
    res_desc.res.pitch2D.height = img_dim_y*img_num;
    res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;

    // texture descriptor
    cudaTextureDesc tex_desc;
    memset(&tex_desc, 0, sizeof(tex_desc));
    tex_desc.addressMode[0] = cudaAddressModeClamp;
    tex_desc.addressMode[1] = cudaAddressModeClamp;
    tex_desc.filterMode     = cudaFilterModeLinear;  // for linear interpolation (NOTE: this breaks normal integer indexing!)
    tex_desc.readMode       = cudaReadModeElementType;
    tex_desc.normalizedCoords = false;  // we want to index using [0;img_dim] rather than [0;1]

    // make sure there are no lingering errors
    GPU_ERR_CHK();
    fprintf(stderr, "No CUDA error until now..\n");

    // create texture object
    cudaTextureObject_t img_tex_obj;
    CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );

    fprintf(stderr, "bluppi\n");
}
$ nvcc -o t382 t382.cu
$ cuda-memcheck ./t382
========= CUDA-MEMCHECK
maximum 2D linear texture dimensions (width,height): 131072,65000
maximum number of images per texture: 855
We should be able to stuff 862 images into one texture.
We use 855 (more than 855 leads to a crash).
Asking for 19753920 bytes allocates 133079040 bytes using pitch 512. Available: 1138688/134217728
No CUDA error until now..
bluppi
========= ERROR SUMMARY: 0 errors
$
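Not part of the original post, but for completeness: once created, the texture object would be sampled in a kernel with tex2D. The following is only a hedged sketch (the kernel name and the out/img_idx parameters are hypothetical); note that with cudaFilterModeLinear and unnormalized coordinates, texel centers sit at half-integer coordinates, which is why the +0.5f offsets appear and why the code's comment says linear filtering "breaks normal integer indexing":

```cuda
// Sketch: read pixel (x, y) of image img_idx from the vertically stacked texture.
// img_dim_x / img_dim_y are assumed to match the host-side setup above.
__global__ void sample_image( cudaTextureObject_t img_tex_obj,
                              float* out,
                              unsigned int img_idx,
                              unsigned int img_dim_x,
                              unsigned int img_dim_y )
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if( x >= img_dim_x || y >= img_dim_y ) return;

    // Image img_idx starts at row img_idx * img_dim_y; +0.5f targets the texel center.
    float val = tex2D<float>( img_tex_obj,
                              x + 0.5f,
                              img_idx * img_dim_y + y + 0.5f );
    out[ y * img_dim_x + x ] = val;
}
```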