Max reduction with CUDA Fortran

Time: 2015-02-16 17:33:21

Tags: cuda fortran reduction

I am trying to do reductions in CUDA Fortran; what I have done so far is to perform the reduction in two steps (see the CUDA kernels below).

In the first kernel I do some simple computations and declare a shared array for a block of threads to store the values of abs(a - anew); once the threads are synchronized, I compute the maximum of this shared array and store it in an intermediate array of dimension gridDim%x * gridDim%y.

In the second kernel, I read this array (in a single block of threads) and try to compute its maximum.

Here is the whole code:

module commons
   integer, parameter :: dp=kind(1.d0)
   integer, parameter :: nx=1024, ny=1024
   integer, parameter :: block_dimx=16, block_dimy=32
end module commons

module kernels
  use commons
contains
  attributes(global) subroutine kernel_gpu_reduce(a, anew, error, nxi, nyi)
    implicit none

    integer, value, intent(in) :: nxi, nyi
    real(dp), dimension(nxi,nyi), intent(in) :: a
    real(dp), dimension(nxi,nyi), intent(inout) :: anew
    real(dp), dimension(nxi/block_dimx+1,nyi/block_dimy+1), intent(inout) :: error
    real(dp), shared, dimension(block_dimx,block_dimy) :: err_sh
    integer :: i, j, k, tx, ty

    i = (blockIdx%x - 1)*blockDim%x + threadIdx%x
    j = (blockIdx%y - 1)*blockDim%y + threadIdx%y
    tx = threadIdx%x
    ty = threadIdx%y

    if (i > 1 .and. i < nxi .and. j > 1 .and. j < nyi) then
       anew(i,j) = 0.25d0*(a(i-1,j) + a(i+1,j) &
                       & + a(i,j-1) + a(i,j+1))
       err_sh(tx,ty) = abs(anew(i,j) - a(i,j))
    endif
    call syncthreads()

    error(blockIdx%x,blockIdx%y) = maxval(err_sh)

  end subroutine kernel_gpu_reduce

  attributes(global) subroutine max_reduce(local_error, error, nxi, nyi)
    implicit none

    integer, value, intent(in) :: nxi, nyi
    real(dp), dimension(nxi,nyi), intent(in) :: local_error
    real(dp), intent(out) :: error
    real(dp), shared, dimension(nxi) :: shared_error
    integer :: tx, i

    tx = threadIdx%x

    shared_error(tx) = 0.d0
    if (tx >=1 .and. tx <= nxi) shared_error(tx) = maxval(local_error(tx,:))
    call syncthreads()

    error = maxval(shared_error)

  end subroutine max_reduce
end module kernels

program laplace
  use cudafor
  use kernels
  use commons
  implicit none

  real(dp), allocatable, dimension(:,:) :: a, anew
  real(dp) :: error=1.d0
  real(dp), device, allocatable, dimension(:,:) :: adev, adevnew
  real(dp), device, allocatable, dimension(:,:) :: edev
  real(dp), allocatable, dimension(:,:) :: ehost
  real(dp), device :: error_dev
  integer    :: i
  integer    :: num_device, h_status, ierrSync, ierrAsync
  type(dim3) :: dimGrid, dimBlock

  num_device = 0
  h_status   = cudaSetDevice(num_device)

  dimGrid  = dim3(nx/block_dimx+1, ny/block_dimy+1, 1)
  dimBlock = dim3(block_dimx, block_dimy, 1)

  allocate(a(nx,ny), anew(nx,ny))
  allocate(adev(nx,ny), adevnew(nx,ny))
  allocate(edev(dimGrid%x,dimGrid%y), ehost(dimGrid%x,dimGrid%y))

  do i = 1, nx
     a(i,:) = 1.d0
     anew(i,:) = 1.d0
  enddo

  adev    = a
  adevnew = anew

  call kernel_gpu_reduce<<<dimGrid, dimBlock>>>(adev, adevnew, edev, nx, ny)

  ierrSync = cudaGetLastError()
  ierrAsync = cudaDeviceSynchronize()
  if (ierrSync /= cudaSuccess) write(*,*) &
     & 'Sync kernel error - 1st kernel:', cudaGetErrorString(ierrSync)
  if (ierrAsync /= cudaSuccess) write(*,*) &
     & 'Async kernel error - 1st kernel:', cudaGetErrorString(ierrAsync)

  call max_reduce<<<1, dimGrid%x>>>(edev, error_dev, dimGrid%x, dimGrid%y)

  ierrSync = cudaGetLastError()
  ierrAsync = cudaDeviceSynchronize()
  if (ierrSync /= cudaSuccess) write(*,*) &
     & 'Sync kernel error - 2nd kernel:', cudaGetErrorString(ierrSync)
  if (ierrAsync /= cudaSuccess) write(*,*) &
     & 'Async kernel error - 2nd kernel:', cudaGetErrorString(ierrAsync)

  error = error_dev
  print*, 'error from kernel: ', error
  ehost = edev
  error = maxval(ehost)
  print*, 'error from host: ', error

  deallocate(a, anew, adev, adevnew, edev, ehost)

end program laplace

I first ran into problems because of the kernel configuration of the second kernel (<<<1, dimGrid>>>). I modified the code following Robert's answer. Now I get a memory access error:

 Async kernel error - 2nd kernel:
 an illegal memory access was encountered                                                                                        
0: copyout Memcpy (host=0x666bf0, dev=0x4203e20000, size=8) FAILED: 77(an illegal memory access was encountered)

Moreover, if I run it with cuda-memcheck:

========= Invalid __shared__ write of size 8
=========     at 0x00000060 in kernels_max_reduce_
=========     by thread (1,0,0) in block (0,0,0)
=========     Address 0x00000008 is out of bounds
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/libcuda.so (cuLaunchKernel + 0x2c5) [0x14ad95]
for every thread.

The code is compiled with PGI Fortran 14.9 and CUDA 6.5, on a Tesla K20 card (compute capability 3.5). I compile it with:

pgfortran -Mcuda -ta:nvidia,cc35 laplace.f90 -o laplace

1 Answer:

Answer 0 (score: 3):

You can do proper cuda error checking in CUDA Fortran. You should do so in your code.
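
As a rough sketch, that pattern looks like the following (essentially what the edited question code already does after each launch; my_kernel, grid, block and args are placeholder names, not part of the original code):

! check a kernel launch; assumes use cudafor and integer :: ierrSync, ierrAsync
call my_kernel<<<grid, block>>>(args)
ierrSync  = cudaGetLastError()         ! catches launch-configuration errors
ierrAsync = cudaDeviceSynchronize()    ! catches errors raised during kernel execution
if (ierrSync  /= cudaSuccess) write(*,*) 'Sync error:  ', cudaGetErrorString(ierrSync)
if (ierrAsync /= cudaSuccess) write(*,*) 'Async error: ', cudaGetErrorString(ierrAsync)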

One problem is that you are trying to launch too many threads (per block) in your second kernel:

call max_reduce<<<1, dimGrid>>>(edev, error_dev, dimGrid%x, dimGrid%y)
                     ^^^^^^^

That dimGrid parameter was previously computed as:

dimGrid  = dim3(nx/block_dimx+1, ny/block_dimy+1, 1);

Substituting in the actual values, we have:

dimGrid = dim3(1024/16 + 1, 1024/32 +1);

i.e.

dimGrid = dim3(65,33);

But it is not allowed to request 65*33 = 2145 threads per block. The maximum is either 512 or 1024, depending on the device architecture target you are compiling for.
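
As a sketch, that limit can also be queried at runtime with cudaGetDeviceProperties (the prop and istat names here are placeholders, not from the original code):

type(cudaDeviceProp) :: prop
integer :: istat
istat = cudaGetDeviceProperties(prop, 0)            ! device 0, as in the question
write(*,*) 'maxThreadsPerBlock = ', prop%maxThreadsPerBlock   ! 1024 on a cc 3.5 device such as the K20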

Because of this error, your second kernel was not running at all.