CUDA, Qt Creator, and Mac

Time: 2015-11-23 15:59:53

Tags: c++ macos qt cuda qt-creator

I'm having a hard time getting CUDA to work with Qt Creator.

I'm fairly certain the problem is that my .pro file doesn't contain the right information. I've posted my current .pro file, then my .cu file (DT_GPU.cu), and then the errors.

I've tried lots of combinations of .pro files taken from Linux and Windows setups, but nothing quite works. Also, I've never seen a Mac/CUDA .pro file, so this could be a useful resource for anyone in the future who wants to get all three working together.

Thanks in advance for any help.

The .pro file:

CUDA_SOURCES += ../../Source/DT_GPU/DT_GPU.cu

CUDA_DIR = "/Developer/NVIDIA/CUDA-7.5"


SYSTEM_TYPE = 64            # '32' or '64', depending on your system
CUDA_ARCH = sm_21           # Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math


# include paths
INCLUDEPATH += $$CUDA_DIR/include

# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/

CUDA_OBJECTS_DIR = ./


# Add the necessary libraries
CUDA_LIBS = -lcublas_device \
    -lcublas_static \
    -lcudadevrt \
    -lcudart_static \
    -lcufft_static \
    -lcufftw_static \
    -lculibos \
    -lcurand_static \
    -lcusolver_static \
    -lcusparse_static \
    -lnppc_static \
    -lnppi_static \
    -lnpps_static

# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
LIBS += $$join(CUDA_LIBS,'.so ', '', '.so')
#LIBS += $$CUDA_LIBS

# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
    # Debug mode
    cuda_d.input = CUDA_SOURCES
    cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda_d.commands = $$CUDA_DIR/bin/nvcc -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda_d.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
    # Release mode
    cuda.input = CUDA_SOURCES
    cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda.commands = $$CUDA_DIR/bin/nvcc $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda
}

DT_GPU.cu

#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>

__global__ void zero_GPU(double *l_p_array_gpu)
{
    int i = threadIdx.x;
    printf("  %i: Hello World!\n", i);
    l_p_array_gpu[i] = 0.;
}

void zero(double *l_p_array, int a_numElements)
{
    double *l_p_array_gpu;

    int size = a_numElements * int(sizeof(double));

    cudaMalloc((void**) &l_p_array_gpu, size);

    cudaMemcpy(l_p_array_gpu, l_p_array, size, cudaMemcpyHostToDevice);

    zero_GPU<<<size,1>>>(l_p_array_gpu);

    cudaMemcpy(l_p_array, l_p_array_gpu, size, cudaMemcpyDeviceToHost);

    cudaFree(l_p_array_gpu);
}

Warnings:

Makefile:848: warning: overriding commands for target `DT_GPU_cuda.o'
Makefile:792: warning: ignoring old commands for target `DT_GPU_cuda.o'
Makefile:848: warning: overriding commands for target `DT_GPU_cuda.o'
Makefile:792: warning: ignoring old commands for target `DT_GPU_cuda.o'

Errors:

In file included from ../SimplexSphereSource.cpp:8:
../../../Source/DT_GPU/DT_GPU.cu:75:19: error: expected expression
        zero_GPU<<<size,1>>>(l_p_array_gpu);
                  ^
../../../Source/DT_GPU/DT_GPU.cu:75:28: error: expected expression
        zero_GPU<<<size,1>>>(l_p_array_gpu);
                           ^
2 errors generated.
make: *** [SimplexSphereSource.o] Error 1
16:47:18: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project SimplexSphereSource (kit: Desktop Qt 5.4.0 clang 64bit)
When executing step "Make"

1 Answer:

Answer 0 (score: 1)

I managed to get your example running with just a few small modifications to your .pro file. If you or anyone else is still interested in a larger C++/CUDA/Qt example for Mac and Linux, check out this answer I posted a few months ago. Your particular situation (or at least what you've provided of it) doesn't require all the extra Qt frameworks and GUI setup, so the .pro file stays pretty simple.

If you haven't already done so, you should make sure you have the latest CUDA Mac drivers and check that some of the basic CUDA samples compile and run (a minimal stand-alone check is also sketched just after this list). I'm currently using:

  • OSX version 10.10.5
  • Qt 5.5.0
  • NVCC v7.5.17
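
As a quick stand-alone sanity check before qmake gets involved at all, something like the following (a minimal sketch of my own, not one of the shipped CUDA samples) can be compiled directly with nvcc and should list your GPU:

// check_cuda.cu -- compile with: nvcc check_cuda.cu -o check_cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess)
    {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < deviceCount; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}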

I added a main method to the DP_GPU.cu file you provided, and successfully ran the program using your .pro file with a few changes:

#CUDA_SOURCES += ../../Source/DT_GPU/DT_GPU.cu
CUDA_SOURCES += DT_GPU.cu # <-- same dir for this small example

CUDA_DIR = "/Developer/NVIDIA/CUDA-7.5"


SYSTEM_TYPE = 64            # '32' or '64', depending on your system
CUDA_ARCH = sm_21           # (tested with sm_30 on my comp) Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math


# include paths
INCLUDEPATH += $$CUDA_DIR/include

# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/

CUDA_OBJECTS_DIR = ./


# Add the necessary libraries
CUDA_LIBS = -lcudart # <-- changed this

# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
#LIBS += $$join(CUDA_LIBS,'.so ', '', '.so') <-- didn't need this
LIBS += $$CUDA_LIBS # <-- needed this


# SPECIFY THE R PATH FOR NVCC (this caused me a lot of trouble before)
QMAKE_LFLAGS += -Wl,-rpath,$$CUDA_DIR/lib # <-- added this
NVCCFLAGS = -Xlinker -rpath,$$CUDA_DIR/lib # <-- and this

# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
    # Debug mode
    cuda_d.input = CUDA_SOURCES
    cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda_d.commands = $$CUDA_DIR/bin/nvcc -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda_d.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
    # Release mode
    cuda.input = CUDA_SOURCES
    cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
    cuda.commands = $$CUDA_DIR/bin/nvcc $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
    cuda.dependency_type = TYPE_C
    QMAKE_EXTRA_COMPILERS += cuda
}

The DP_GPU.cu file with a main function and some small changes:

#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
#include <stdio.h> // <-- added for 'printf'


__global__ void zero_GPU(double *l_p_array_gpu)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // <-- in case you use more blocks
    printf("  %i: Hello World!\n", i);
    l_p_array_gpu[i] = 0.;
}


void zero(double *l_p_array, int a_numElements)
{
    double *l_p_array_gpu;

    int size = a_numElements * int(sizeof(double));

    cudaMalloc((void**) &l_p_array_gpu, size);

    cudaMemcpy(l_p_array_gpu, l_p_array, size, cudaMemcpyHostToDevice);

    // use one block with a_numElements threads
    zero_GPU<<<1, a_numElements>>>(l_p_array_gpu);

    cudaMemcpy(l_p_array, l_p_array_gpu, size, cudaMemcpyDeviceToHost);

    cudaFree(l_p_array_gpu);
}

// added a main function to run the program
int main(void)
{
    // host variables
    const int a_numElements = 5;
    double l_p_array[a_numElements];

    // run cuda function
    zero(l_p_array, a_numElements);

    // Print l_p_array
    printf("l_p_array: { ");
    for (int i = 0; i < a_numElements; ++i)
    {
        printf("%.2f ", l_p_array[i]);
    }
    printf("}\n");

    return 0;
}

Output:

  0: Hello World!
  1: Hello World!
  2: Hello World!
  3: Hello World!
  4: Hello World!
l_p_array: { 0.00 0.00 0.00 0.00 0.00 }

Once you have this working, be sure to spend some time going over basic CUDA syntax and examples before you get too far along; otherwise debugging will be a real headache. And since I'm here anyway, I thought I'd also point out that the CUDA kernel launch syntax is
kernel_function<<<block_size, thread_size>>>(args)
so your kernel call zero_GPU<<<size,1>>>(l_p_array_gpu) actually creates a bunch of blocks with a single thread each, when you really want the opposite.
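
To make the difference concrete, here is a minimal side-by-side sketch (assuming a_numElements is small enough to fit in one block; note too that the original call passed size, which is a byte count, not an element count):

// original launch: `size` blocks of 1 thread each (and `size` is a byte count)
zero_GPU<<<size, 1>>>(l_p_array_gpu);

// intended launch: 1 block of a_numElements threads
zero_GPU<<<1, a_numElements>>>(l_p_array_gpu);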

The following functions, taken from the CUDA samples, help work out the number of threads and blocks needed for a given number of elements:

typedef unsigned int uint;

inline uint iDivUp(uint a, uint b)
{
    return (a % b != 0) ? (a / b + 1) : (a / b);
}

// compute grid and thread block size for a given number of elements
inline void computeGridSize(uint n, uint blockSize, uint &numBlocks, uint &numThreads)
{
    numThreads = min(blockSize, n);
    numBlocks = iDivUp(n, numThreads);
}

You can add these at the top of your .cu file, or in a helper header file, and use them to launch your kernel functions properly. If you wanted to use them in your DP_GPU.cu file, you would just need to add:

// desired thread count (may change if there aren't enough elements)
dim3 threads(64);
// default block count (will also change based on number of elements)
dim3 blocks(1);
computeGridSize(a_numElements, threads.x, blocks.x, threads.x);

// run kernel
zero_GPU<<<blocks, threads>>>(l_p_array_gpu);
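
One thing to watch out for with this setup: when a_numElements isn't a multiple of the block size, the grid ends up with a few extra threads past the end of the array, so the kernel should guard against out-of-range indices. A minimal sketch, assuming the element count is passed in as an extra kernel argument:

__global__ void zero_GPU(double *l_p_array_gpu, int a_numElements)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < a_numElements) // skip the padding threads in the last block
    {
        printf("  %i: Hello World!\n", i);
        l_p_array_gpu[i] = 0.;
    }
}

// and the launch becomes:
// zero_GPU<<<blocks, threads>>>(l_p_array_gpu, a_numElements);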

Anyway, this got a bit off-topic, but I hope it helps! Cheers!