There is no specific subroutine for the generic 'mpi_send'

Date: 2019-02-05 19:00:17

Tags: fortran mpi

subroutine collect(rank, nprocs, n_local, n_global, u_initial_local)

use mpi

implicit none

integer*8                           :: i_local_low, i_local_high
integer*8                           :: i_global_low, i_global_high
integer*8                           :: i_local, i_global
integer*8                           :: n_local, n_global
real*8                              :: u_initial_local(n_local)
real*8, dimension(:), allocatable   :: u_global
integer                             :: procs
integer*8                           :: n_local_procs

! Data declarations for MPI
integer     :: ierr ! error signal variable, Standard value - 0
integer     :: rank ! process ID (pid) / Number
integer     :: nprocs ! number of processors

! MPI send/ receive arguments
integer                             :: buffer(2)
integer, parameter                  :: collect1 = 10
integer, parameter                  :: collect2 = 20


! status variable - tells the status of send/ received calls
! Needed for receive subroutine
integer, dimension(MPI_STATUS_SIZE) :: status1

i_global_low  = (rank       *(n_global-1))/nprocs
i_global_high = ((rank+1)   *(n_global-1))/nprocs

if (rank > 0) then
    i_global_low = i_global_low - 1
end if

i_local_low = 0
i_local_high = i_global_high - i_global_low

if (rank == 0) then
    allocate(u_global(1:n_global))

    do i_local = i_local_low, i_local_high
        i_global = i_global_low + i_local - i_local_low
        u_global(i_global) = u_initial_local(i_local)
    end do

    do procs = 1,nprocs-1

        call MPI_RECV(buffer, 2, MPI_INTEGER, procs, collect1, MPI_COMM_WORLD, status1, ierr)

        i_global_low = buffer(1)
        n_local_procs = buffer(2)

        call MPI_RECV(u_global(i_global_low+1), n_local_procs, MPI_DOUBLE_PRECISION, procs, collect2, MPI_COMM_WORLD, status1, ierr)        
    end do

    print *, u_global

else

    buffer(1) = i_global_low
    buffer(2) = n_local


    call MPI_SEND(buffer, 2, MPI_INTEGER, 0, collect1, MPI_COMM_WORLD, ierr)


    call MPI_SEND(u_initial_local, n_local, MPI_DOUBLE_PRECISION, 0, collect2, MPI_COMM_WORLD, ierr)
end if

return
end subroutine collect

I get errors on the MPI_SEND and MPI_RECV calls that use the collect2 tag: "There is no specific subroutine for the generic 'mpi_recv' at (1)", where (1) points at the end of the call (after ierr). The MPI_SEND with the collect2 tag sends an array and the matching MPI_RECV receives it. This does not happen for the collect1 tag.

1 Answer:

Answer 0 (score: 2)

Your n_local is integer*8, but the count argument of MPI_SEND/MPI_RECV must be a default integer, otherwise the generic interface from "use mpi" cannot resolve to a specific subroutine.
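A minimal sketch of the fix in the sender branch, assuming the value of n_local actually fits into a default integer (the helper variable n_local_int is mine, not from the answer):

integer :: n_local_int

! The generic MPI_SEND interface from "use mpi" expects a default
! INTEGER count, so narrow the 8-byte value before the call.
n_local_int = int(n_local)
call MPI_SEND(u_initial_local, n_local_int, MPI_DOUBLE_PRECISION, 0, collect2, MPI_COMM_WORLD, ierr)

Note that n_local_procs on the receiving side is also integer*8, so the MPI_RECV count has the same problem and needs the same treatment.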

There are many posts about the problems of large arrays (more than maxint elements) with MPI (such as How to debug Fortran 90 compile error "There is no specific subroutine for the generic 'foo' at (1)"?). If you genuinely hit the problem that n_local is too large for a default integer, you can use a derived datatype (for example one built with MPI_Type_contiguous) to reduce the number of elements passed to the MPI procedures so that the count fits into a 4-byte integer.
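A sketch of that workaround, assuming a chunk size of 1024 and that n_local is a multiple of it (both assumptions are mine, not from the answer):

integer :: chunk

! Describe a block of 1024 doubles as a single derived datatype.
call MPI_TYPE_CONTIGUOUS(1024, MPI_DOUBLE_PRECISION, chunk, ierr)
call MPI_TYPE_COMMIT(chunk, ierr)

! The element count shrinks by a factor of 1024, so it fits a 4-byte integer.
call MPI_SEND(u_initial_local, int(n_local/1024), chunk, 0, collect2, MPI_COMM_WORLD, ierr)

call MPI_TYPE_FREE(chunk, ierr)

The receiver would build and commit the same datatype and use the matching count in MPI_RECV.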