I am new to CUDA and GPGPU. I am trying to check a property of a large quantity of numbers (larger than 32 bits), and I would like to try doing this on a Windows 7 64-bit machine equipped with an nVidia GTX 1080:
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1080"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8192 MBytes (8589934592 bytes)
(20) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1734 MHz (1.73 GHz)
Memory Clock rate: 5005 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
When I run the following code, the value of "sum" is nonsense (28, 20, etc.), even though I can see threadId going from 0 to 4095:
#include <cuda.h>
#include <cuda_runtime.h>
#include "device_launch_parameters.h"
#include "stdio.h"
__global__ void Simple(unsigned long long int *sum)
{
    unsigned long long int blockId = blockIdx.x + blockIdx.y * gridDim.x
                                   + gridDim.x * gridDim.y * blockIdx.z;
    unsigned long long int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
                                    + (threadIdx.z * (blockDim.x * blockDim.y))
                                    + (threadIdx.y * blockDim.x)
                                    + threadIdx.x;

    printf("threadId = %llu.\n", threadId);

    // Check threadId for property. Possibly introduce a grid stride for loop to give each thread a range to check.
    sum[0]++;
}
int main(int argc, char **argv)
{
    unsigned long long int sum[] = { 0 };
    unsigned long long int *dev_sum;

    cudaMalloc((void**)&dev_sum, sizeof(unsigned long long int));
    cudaMemcpy(dev_sum, sum, sizeof(unsigned long long int), cudaMemcpyHostToDevice);

    dim3 grid(2, 1, 1);
    dim3 block(1024, 1, 1);

    printf("--------- Start kernel ---------\n\n");
    Simple <<< grid, block >>> (dev_sum);
    cudaDeviceSynchronize();

    cudaMemcpy(sum, dev_sum, sizeof(unsigned long long int), cudaMemcpyDeviceToHost);
    printf("sum = %llu.\n", sum[0]);

    cudaFree(dev_sum);
    getchar();
    return 0;
}
How can I modify this kernel call, by adding a grid-stride loop, so that the maximum number of threads (with my setup) runs over the range of numbers from 0 to 10^12?
dim3 grid(2, 1, 1);
dim3 block(1024, 1, 1);
Simple <<< grid, block >>> (dev_sum);
Answer 0 (score: 2)
All of the threads are incrementing the same location in memory, which causes a race condition. That is why the result is incorrect. You should use an atomic add to make it correct (CUDA provides a function for this: atomicAdd).
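
A minimal sketch of what that could look like, combined with the grid-stride loop the question asks about. Assumptions not in the original: a 1D launch configuration, an extra "limit" kernel parameter supplied from the host, and a placeholder where the real property test would go:

__global__ void Simple(unsigned long long int *sum, unsigned long long int limit)
{
    // Total number of threads in the 1D grid; this is the stride of the loop.
    unsigned long long int stride = (unsigned long long int)gridDim.x * blockDim.x;
    // Globally unique starting index for this thread.
    unsigned long long int start = (unsigned long long int)blockIdx.x * blockDim.x + threadIdx.x;

    for (unsigned long long int n = start; n < limit; n += stride)
    {
        // Placeholder: test n for the property of interest here.
        // atomicAdd serializes the concurrent updates to *sum, so the count is correct.
        atomicAdd(sum, 1ULL);
    }
}

// Host side (same launch configuration, hypothetical limit of 10^12):
// Simple <<< grid, block >>> (dev_sum, 1000000000000ULL);

With 2048 threads each thread would visit roughly 5 * 10^8 numbers, so the printf from the original kernel is omitted, and an atomicAdd on every iteration is expensive; accumulating into a per-thread local counter and doing a single atomicAdd after the loop is the usual refinement. Also note that the deviceQuery output above reports a run time limit on kernels (WDDM), so a single launch covering the full range may be killed by the display watchdog unless the work is split into smaller launches.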