CUDA triple Riemann sum

Asked: 2012-09-17 17:15:49

Tags: cuda

I am trying to compute a triple Riemann sum via CUDA. I am trying to use multi-dimensional grid iterators for my sum iterators to avoid nested loops. I am using a compute capability 2.0 Tesla card, so I am unable to use nested kernels.

I do not seem to be getting the full 0 -> N iterations that I need for each of the x, y, z variables.

__global__ void test(){
    // Global 3D index of this thread: one (x, y, z) sample per thread.
    uint xIteration = blockDim.x * blockIdx.x + threadIdx.x;
    uint yIteration = blockDim.y * blockIdx.y + threadIdx.y;
    uint zIteration = blockDim.z * blockIdx.z + threadIdx.z;
    printf("x: %d * %d + %d = %d\n y: %d * %d + %d = %d\n z: %d * %d + %d = %d\n",
           blockDim.x, blockIdx.x, threadIdx.x, xIteration,
           blockDim.y, blockIdx.y, threadIdx.y, yIteration,
           blockDim.z, blockIdx.z, threadIdx.z, zIteration);
}

---- Called by ----
int totalIterations = 128; // N value for single sum (i = 0; i < N)
dim3 threadsPerBlock(8,8,8);
dim3 blocksPerGrid((totalIterations + threadsPerBlock.x - 1) / threadsPerBlock.x, 
                   (totalIterations + threadsPerBlock.y - 1) / threadsPerBlock.y, 
                   (totalIterations + threadsPerBlock.z - 1) / threadsPerBlock.z);
test<<<blocksPerGrid, threadsPerBlock>>>();

---- Output ----

x   y   z
...
7 4 0
7 4 1
7 4 2
7 4 3
7 4 4
7 4 5
7 4 6
7 4 7
7 5 0
7 5 1
7 5 2
7 5 3
7 5 4
7 5 5
7 5 6
7 5 7
7 6 0
7 6 1
7 6 2
7 6 3
7 6 4
7 6 5
7 6 6
7 6 7
7 7 0
7 7 1
7 7 2
7 7 3
7 7 4
7 7 5
7 7 6
7 7 7
...

The output is truncated; I am now getting every permutation with 0 <= x, y, z <= 7, but I need 0 <= x, y, z <= 127 when totalIterations is 128. For example, in this execution 40 <= z <= 49, when it should be 0 <= z <= 127. My understanding of multi-dimensional grids may be wrong, but for the Riemann sum each iterator, x, y and z, must take values from 0 to 127.
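As a sanity check on the launch arithmetic alone (a standalone host-side sketch, separate from the kernel): with totalIterations = 128 and 8 threads per block in each dimension, the ceil-division above should give 16 blocks per dimension, so blockDim.x * blockIdx.x + threadIdx.x should reach 127:

#include <cstdio>

int main() {
    int totalIterations = 128;
    int threadsPerDim = 8;                                                    // threadsPerBlock.{x,y,z}
    int blocksPerDim = (totalIterations + threadsPerDim - 1) / threadsPerDim; // same ceil-division as the launch code
    int maxIndex = blocksPerDim * threadsPerDim - 1;                          // largest blockDim * blockIdx + threadIdx
    printf("blocks per dim: %d, max index per dim: %d\n", blocksPerDim, maxIndex); // expect 16 and 127
    return 0;
}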

If I make totalIterations > 128, e.g. 1024, the program dies with cudaError code 6, which I understand to be the launch timer expiring. The kernel does nothing but print, so I do not understand why it would time out. Running it on a secondary device seems to solve the problem for the moment. We are using one of the Teslas to run X, but a GeForce is in the mail to become the new display device and free up both Teslas for computation.

The printf(...) will be replaced with an evaluation of the function to be summed.

The idea is to replace the serial version of
for (int i = 0...)
    for (int j = 0 ..)
       for (int k = 0...)

Also, I am not sure how to store the function values, since creating a potentially huge (millions x millions x millions) 3D array and then reducing it does not seem memory efficient; somehow the function values should instead be accumulated into some kind of shared variable.
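One way to avoid building a huge 3D array is to have each thread evaluate its own sample and fold it into a single running total: the block reduces its values in shared memory, then issues one atomicAdd per block into a global accumulator. The sketch below assumes a placeholder integrand f, the domain [0,1]^3, and the same 8x8x8 block shape; none of that comes from the code above, so adapt as needed. Single-precision atomicAdd requires compute capability 2.0, which the C2050 has.

#include <stdio.h>

// Placeholder integrand -- replace with the real function to be summed.
__device__ float f(float x, float y, float z) {
    return x * y * z;
}

// Each thread evaluates one sample; the block reduces its 8*8*8 = 512 values
// in shared memory, then one atomicAdd per block accumulates into *sum.
__global__ void riemann(float *sum, int n, float dx) {
    __shared__ float partial[512];
    int x = blockDim.x * blockIdx.x + threadIdx.x;
    int y = blockDim.y * blockIdx.y + threadIdx.y;
    int z = blockDim.z * blockIdx.z + threadIdx.z;
    int t = (threadIdx.z * blockDim.y + threadIdx.y) * blockDim.x + threadIdx.x;

    float v = 0.0f;
    if (x < n && y < n && z < n)
        v = f(x * dx, y * dx, z * dx) * dx * dx * dx;   // sample value * cell volume
    partial[t] = v;
    __syncthreads();

    // Tree reduction over the 512 threads of the block.
    for (int s = 256; s > 0; s >>= 1) {
        if (t < s) partial[t] += partial[t + s];
        __syncthreads();
    }
    if (t == 0) atomicAdd(sum, partial[0]);
}

int main() {
    int n = 128;
    float dx = 1.0f / n;             // integrate over [0,1]^3 as an example
    float *dSum, hSum = 0.0f;
    cudaMalloc(&dSum, sizeof(float));
    cudaMemset(dSum, 0, sizeof(float));
    dim3 threadsPerBlock(8, 8, 8);
    dim3 blocksPerGrid((n + 7) / 8, (n + 7) / 8, (n + 7) / 8);
    riemann<<<blocksPerGrid, threadsPerBlock>>>(dSum, n, dx);
    cudaMemcpy(&hSum, dSum, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f (exact integral of x*y*z over [0,1]^3 is 0.125)\n", hSum);
    cudaFree(dSum);
    return 0;
}

For double precision you would need a compare-and-swap loop instead of the plain atomicAdd (native double atomicAdd is not available on these cards), or write one partial sum per block to global memory and reduce those in a second, tiny kernel.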

---- Device info (we have 2 of these cards; the output is identical for both) ----

Device 1: "Tesla C2050"
  CUDA Driver Version / Runtime Version          5.0 / 5.0
  CUDA Capability Major/Minor version number:    2.0
  Total amount of global memory:                 2687 MBytes (2817982464 bytes)
  (14) Multiprocessors x ( 32) CUDA Cores/MP:    448 CUDA Cores
  GPU Clock rate:                                1147 MHz (1.15 GHz)
  Memory Clock rate:                             1500 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 786432 bytes
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and execution:                 Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                Yes
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           132 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

1 Answer:

Answer 0 (score: 1):

I think, as has already been mentioned, that using printf in device code to verify that the threads touched every element of your (x, y, z) array is unwise for large values of x, y, z.

I created the following based on your code to demonstrate that the threads touch every (x, y, z) element:

#include <stdio.h>
#define DATAVAL 1
#define cudaCheckErrors(msg) \
    do { \
        cudaError_t __err = cudaGetLastError(); \
        if (__err != cudaSuccess) { \
            fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
                msg, cudaGetErrorString(__err), \
                __FILE__, __LINE__); \
            fprintf(stderr, "*** FAILED - ABORTING\n"); \
            exit(1); \
        } \
    } while (0)

__global__ void test(int *data, int dim){
  uint xIteration = blockDim.x * blockIdx.x + threadIdx.x;
  uint yIteration = blockDim.y * blockIdx.y + threadIdx.y;
  uint zIteration = blockDim.z * blockIdx.z + threadIdx.z;

  // Each thread marks its own (x, y, z) cell in the flattened 3D array.
  data[((((zIteration*dim)+yIteration)*dim)+xIteration)]=DATAVAL;
}

int main(){
  int *testdata;
  int *result;
  int totalIterations = 128; // N value for single sum (i = 0; i < N)
  int testsize = totalIterations*totalIterations*totalIterations;
  dim3 threadsPerBlock(8,8,8);
  dim3 blocksPerGrid((totalIterations + threadsPerBlock.x - 1) / threadsPerBlock.x,
                     (totalIterations + threadsPerBlock.y - 1) / threadsPerBlock.y,
                     (totalIterations + threadsPerBlock.z - 1) / threadsPerBlock.z);
  cudaMalloc(&testdata, testsize*sizeof(int));
  cudaCheckErrors("cudaMalloc fail");
  cudaMemset(testdata, 0, testsize*sizeof(int));
  cudaCheckErrors("cudaMemset fail");
  result=(int *)malloc(testsize*sizeof(int));
  if (result == 0) {printf("malloc fail \n"); return 1;}
  memset(result, 0, testsize*sizeof(int));
  test<<<blocksPerGrid, threadsPerBlock>>>(testdata, totalIterations);
  cudaDeviceSynchronize();
  cudaCheckErrors("Kernel launch failure");
  cudaMemcpy(result, testdata, testsize*sizeof(int), cudaMemcpyDeviceToHost);
  cudaCheckErrors("cudaMemcpy failure");

  for (unsigned i=0; i<testsize; i++)
    if (result[i] != DATAVAL) {printf("fail! \n"); return 1;}

  printf("Success \n");
  return 0;

}
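If you want to try this, note that it has to be built for the Fermi cards, e.g. something like nvcc -arch=sm_20 -o test test.cu; a grid with more than one block in the z dimension is only supported on compute capability 2.0+ devices, and, as far as I know, the code must be compiled for an sm_2x target or the launch will fail.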