I have a dedicated compute GPU in my machine (it is not used for display). Its properties are:
Device 0: "Tesla C2050"
CUDA Driver Version / Runtime Version 6.0 / 6.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 2688 MBytes (2818244608 bytes)
(14) Multiprocessors, ( 32) CUDA Cores/MP: 448 CUDA Cores
GPU Clock rate: 1147 MHz (1.15 GHz)
Memory Clock rate: 1500 MHz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Enabled
Device supports Unified Addressing (UVA): Yes
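The listing above appears to be the output of the deviceQuery CUDA sample; as a minimal sketch, a few of the same fields can also be read programmatically with cudaGetDeviceProperties:

#include <stdio.h>
#include <cuda_runtime.h>

int main( void ) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   /* device 0, the Tesla C2050 */
    printf("Name: %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Global memory: %zu bytes\n", prop.totalGlobalMem);
    printf("Multiprocessors: %d\n", prop.multiProcessorCount);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    return 0;
}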
I am trying to run the following simple program on it (it copies an array to the device):
#include <cuda.h>
#include <curand_kernel.h>

#define N 252000

int main( void ) {
    int a[N];
    int *dev_a;

    cudaSetDevice(0);
    cudaMalloc( (void**)&dev_a, N * sizeof(int) );

    for (long i=0; i<N; i++) {
        a[i] = 1;
    }

    cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice ); //**Crashes here**

    cudaFree( dev_a );
    cudaDeviceReset();
    return 0;
}
If N = 251000, the program works. But if N = 252000, the program crashes at cudaMemcpy(). Any idea why this happens?
Answer 0 (score: 5)
Congratulations, you have just discovered the limit on stack size:
int a[N];
Allocate the host array dynamically instead:
int *a = (int *)malloc(N*sizeof(int));
This allocates from the heap instead of the stack. With 4-byte ints, 252000 elements is roughly 1 MB, which is on the order of the default stack size on some platforms (for example, about 1 MB on Windows), so the automatic array a[N] no longer fits. If you search SO you will find many questions explaining stack versus heap allocation and the limits involved.
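For completeness, here is a minimal sketch of the corrected program, with the host array moved to the heap; the error checks are an addition for illustration and not part of the original answer:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

#define N 252000

int main( void ) {
    /* Heap allocation: not subject to the (much smaller) stack limit */
    int *a = (int *)malloc(N * sizeof(int));
    if (a == NULL) {
        fprintf(stderr, "host malloc failed\n");
        return 1;
    }

    for (long i = 0; i < N; i++) {
        a[i] = 1;
    }

    int *dev_a = NULL;
    cudaSetDevice(0);
    cudaError_t err = cudaMalloc( (void**)&dev_a, N * sizeof(int) );
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        free(a);
        return 1;
    }

    err = cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice );
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(err));
    }

    cudaFree( dev_a );
    free(a);
    cudaDeviceReset();
    return 0;
}

Page-locked host memory via cudaMallocHost/cudaFreeHost would also avoid the stack and can speed up host-to-device copies, but plain malloc is enough to fix the crash here.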