Sending a variable 2D array to a variable number of nodes

Asked: 2015-09-30 14:05:45

Tags: arrays fortran mpi

I have to deal with the following situation:

given N MPI nodes and a 2D real array of dimensions [N_ROWS, N_COLS],

I have to partition it in order to speed up the computation, giving each node a sub-section of the 2D array and exploiting the number of nodes.

Since Fortran stores data in memory column-major, indexing the array with the fastest-varying variable first, each [:, i] column of the array is logically separate from the others.
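
To make the memory layout concrete, here is a tiny standalone sketch (illustrative only, not my actual code; the array and its sizes are made up) showing that each a(:, j) column is one contiguous run of N_ROWS reals:

program column_layout_sketch
    implicit none
    integer, parameter :: N_ROWS = 4, N_COLS = 3
    real :: a(N_ROWS, N_COLS)
    integer :: i, j

    ! a(i,j) and a(i+1,j) are adjacent in memory, while a(i,j) and
    ! a(i,j+1) are N_ROWS elements apart: each column is one contiguous block.
    do j = 1, N_COLS
        do i = 1, N_ROWS
            a(i, j) = real(i + (j - 1) * N_ROWS)
        end do
    end do

    ! Each column a(:, j) can therefore be treated (and sent) as an
    ! independent, contiguous series of N_ROWS reals.
    do j = 1, N_COLS
        print *, "column", j, ":", a(:, j)
    end do
end program column_layout_sketch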

I have looked around at very instructive questions such as Sending 2D arrays in Fortran with MPI_Gather.

I have arrived at the idea of using mpi_scatterv and mpi_gatherv, but I am stuck because, within the constraints of the problem, there is no guarantee that each MPI node receives the same amount of data, or, in pseudocode:

#Number_of_MPI_nodes != N_ROWS*N_COLS

I wanted to use vectors because each "column" has its own "independent" series of data; by "independent" I mean that I have to perform some operations on the data belonging to a column without affecting the other columns.

Obviously, given the inequality above, some MPI nodes will have a different number of "columns" to analyze.
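
As a minimal sketch of the split I have in mind (standalone and illustrative only; NPROCS stands in for the MPI size, and the real code below does the same thing per rank), the base number of columns per rank is N_COLS / NPROCS, with the remainder spread over the first ranks:

program column_split_sketch
    implicit none
    integer, parameter :: N_COLS = 100, NPROCS = 7   ! NPROCS is illustrative
    integer :: cols_on_rank(NPROCS)
    integer :: base, oddment, r

    base = N_COLS / NPROCS                 ! every rank gets at least this many columns
    oddment = N_COLS - base * NPROCS       ! leftover columns
    cols_on_rank = base
    do r = 1, oddment                      ! the first "oddment" ranks get one extra column
        cols_on_rank(r) = cols_on_rank(r) + 1
    end do
    print *, "columns per rank:", cols_on_rank    ! sums to N_COLS
end program column_split_sketch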

After doing some math, I found that I need to use mpi_gatherv to gather the data back.

I will update the question with a working example in a few hours!

Thanks a lot, everyone!

CODE:

program main

use mpi

implicit none

integer:: N_COLS=100, N_ROWS=200
integer:: i, j
integer:: ID_mpi, COM_mpi, ERROR_mpi
integer:: master = 0, SIZE_mpi=0 
integer:: to_each_cpu=0, to_each_cpu_oddment=0
integer:: sub_matrix_size=0 
integer:: nans=0, infs=0, array_split =0, my_type=0

integer ,dimension(:), allocatable :: elem_to_each_cpu
integer ,dimension(:), allocatable :: displacements
integer,parameter:: seed = 12345

character*160:: message

real :: tot_sum = 0.0

real ,dimension(:,:), allocatable:: Data_Matrix
real ,dimension(:,:), allocatable:: sub_split_Data_Matrix

call srand(seed)    
call MPI_INIT(ERROR_mpi)
COM_mpi = MPI_COMM_WORLD
call MPI_COMM_RANK(COM_mpi,ID_mpi,ERROR_mpi)
call MPI_COMM_SIZE(COM_mpi,SIZE_mpi,ERROR_mpi)

!! allocation Data_Matrix 
i = 1; j = 1
if (ID_mpi .eq. master) then
    i = N_ROWS; j = N_COLS
end if
allocate(Data_Matrix(i, j))

do j = 1, N_COLS
    do i = 1, N_ROWS
        Data_Matrix(i, j) = rand()
        tot_sum = tot_sum + Data_Matrix(i, j)
    enddo
enddo

write(message,*) "N_COLS:",N_COLS, "N_ROWS:", N_ROWS, " TOTAL_SUM:", tot_sum
write(*,*) message

!! SINCE THERE ARE NO RESTRICTIONS ON MPI NUMBER OR CPUS OR 
!! SIZE OR Data_Matrix I NEED TO DO THIS 
to_each_cpu =N_COLS / SIZE_mpi
to_each_cpu_oddment = N_COLS -( to_each_cpu * SIZE_mpi )

allocate(elem_to_each_cpu(SIZE_mpi))
elem_to_each_cpu = to_each_cpu
allocate(displacements(SIZE_mpi))
displacements = 0

!! I CHOOSE TO SPLIT THE DATA IN THIS WAY
if (ID_mpi .eq. master) then

    write(message,*) "N_COLS:",N_COLS, "mpisize:", SIZE_mpi, "to_each_cpu\oddment:", to_each_cpu, " \ ", to_each_cpu_oddment
    write(*,*) message

    j=1
    do i = 1 , to_each_cpu_oddment
        elem_to_each_cpu(j) = elem_to_each_cpu(j) + 1
        j = j + 1
        if(j .gt. SIZE_mpi) j = 1
    enddo

    do j = 2, SIZE_mpi
        displacements(j) = elem_to_each_cpu(j-1) + displacements(j-1)
    enddo

    do i = 1 , SIZE_mpi
        write(message,*)i, " to_each_cpu:", &
        elem_to_each_cpu(i), " sub_split_buff_displ:",displacements(i), "=",elem_to_each_cpu(i)+displacements(i)
        write(*,*) message
    enddo

end if

call MPI_BCAST(elem_to_each_cpu, SIZE_mpi, MPI_INT, 0, COM_mpi, ERROR_mpi)
call MPI_BCAST(displacements, SIZE_mpi, MPI_INT, 0, COM_mpi, ERROR_mpi)

allocate( sub_split_Data_Matrix(N_ROWS,elem_to_each_cpu(ID_mpi+1)) )

call MPI_TYPE_VECTOR(N_COLS,N_ROWS,N_ROWS,MPI_FLOAT,my_type,ERROR_mpi) 
call MPI_TYPE_COMMIT(my_type, ERROR_mpi) 

sub_split_Data_Matrix=0
sub_matrix_size = N_ROWS*elem_to_each_cpu(ID_mpi+1)

call MPI_scatterv( Data_Matrix,elem_to_each_cpu,displacements,&
    MPI_FLOAT, sub_split_Data_Matrix, sub_matrix_size ,MPI_FLOAT, &
    0, COM_mpi, ERROR_mpi)

!!! DOING SOME MATH ON SCATTERED MATRIX 

call MPI_gatherv(&
    sub_split_Data_Matrix, sub_matrix_size,MPI_FLOAT ,&
    Data_Matrix, elem_to_each_cpu, displacements, &
    MPI_FLOAT, 0, COM_mpi, ERROR_mpi)

!!! DOING SOME MATH ON GATHERED MATRIX 
tot_sum = 0.0
do j = 1, N_COLS
    do i = 1, N_ROWS
        tot_sum = tot_sum + Data_Matrix(i, j)
    enddo
enddo

write(message,*) "N_COLS:",N_COLS, "N_ROWS:", N_ROWS, " TOTAL_SUM:", tot_sum
write(*,*) message


deallocate(Data_Matrix)

if (ID_mpi .eq. master) then
    deallocate(elem_to_each_cpu )
    deallocate(displacements )
endif

deallocate(sub_split_Data_Matrix)

end 

RESULTS:

An error occurred in MPI_Gatherv on communicator MPI_COMM_WORLD

Invalid memory reference

QUESTION:

Can you help me find the error? Or better, can you comment on whether the approach I used is appropriate?

Thanks a lot!

1 Answer:

Answer 0 (score: 1)

I looked at your code and made some modifications to fix it:

  • Unimportant: a few stylistic/cosmetic touches here and there to (from my point of view, arguably) improve readability. Sorry if you don't like them.
  • There is no need for process 0 to be the only one computing the lengths and displacements for the MPI_Scatterv() / MPI_Gatherv() calls. All processes should compute them, since they all have the data needed to do so. Moreover, this spares you two MPI_Bcast() calls, which is good.
  • The lengths were computed in a strange way. I suspect the computation was wrong, but I am not sure, because it was so convoluted that I simply rewrote it.
  • The main issue was a mix-up between the vector type and the scalar type: your lengths and displacements were computed for your vector type, but you called MPI_Scatterv() / MPI_Gatherv() with a scalar type. Moreover, for Fortran this scalar type is MPI_REAL, not MPI_FLOAT. In the code posted below, I computed the lengths and displacements for MPI_REAL, but if you prefer, you can divide them all by N_ROWS and use the result of MPI_Type_contiguous( N_ROWS, MPI_REAL, my_type ) instead of MPI_REAL in the scatter/gather calls, and you will get the same result (a minimal sketch of this variant follows this list).
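
To illustrate the variant described in the last point, here is a minimal, self-contained sketch (my own illustration, not the asker's code; names such as col_type, counts and displs are made up). The counts and displacements are expressed in whole columns, and a contiguous column type is used on both sides of the scatter/gather:

program scatter_columns_sketch
    use mpi
    implicit none

    integer, parameter :: N_ROWS = 200, N_COLS = 100, master = 0
    integer :: rank, nproc, ierr, col_type, i
    integer, allocatable :: counts(:), displs(:)
    real, allocatable :: full(:,:), part(:,:)

    call MPI_Init( ierr )
    call MPI_Comm_rank( MPI_COMM_WORLD, rank, ierr )
    call MPI_Comm_size( MPI_COMM_WORLD, nproc, ierr )

    ! One element of col_type is a whole column of N_ROWS reals.
    call MPI_Type_contiguous( N_ROWS, MPI_REAL, col_type, ierr )
    call MPI_Type_commit( col_type, ierr )

    ! Counts and displacements are expressed in columns, not in reals.
    allocate( counts(nproc), displs(nproc) )
    counts = N_COLS / nproc
    do i = 1, N_COLS - ( N_COLS / nproc ) * nproc
        counts(i) = counts(i) + 1       ! spread the remainder over the first ranks
    end do
    displs(1) = 0
    do i = 2, nproc
        displs(i) = displs(i-1) + counts(i-1)
    end do

    if ( rank == master ) then
        allocate( full(N_ROWS, N_COLS) )
        call random_number( full )
    else
        allocate( full(0, 0) )          ! dummy send buffer, ignored on non-root ranks
    end if
    allocate( part(N_ROWS, counts(rank+1)) )

    ! Scatter whole columns: each rank receives counts(rank+1) columns.
    call MPI_Scatterv( full, counts, displs, col_type, &
                       part, counts(rank+1), col_type, &
                       master, MPI_COMM_WORLD, ierr )

    ! ... operate on the local columns here ...

    ! Gather the columns back into place on the master.
    call MPI_Gatherv( part, counts(rank+1), col_type, &
                      full, counts, displs, col_type, &
                      master, MPI_COMM_WORLD, ierr )

    call MPI_Type_free( col_type, ierr )
    call MPI_Finalize( ierr )
end program scatter_columns_sketch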

Here is the modified code; with these modifications, it works as expected:

program main
    use mpi
    implicit none

    integer, parameter :: N_COLS=100, N_ROWS=200, master=0
    integer :: i, j
    integer :: ID_mpi,SIZE_mpi, COM_mpi, ERROR_mpi, my_type
    integer :: to_each_cpu, to_each_cpu_oddment, sub_matrix_size 
    integer, allocatable :: elem_to_each_cpu(:), displacements(:)
    real :: tot_sum = 0.0
    real, allocatable :: Data_Matrix(:,:), sub_split_Data_Matrix(:,:)

    call MPI_Init( ERROR_mpi )
    COM_mpi = MPI_COMM_WORLD
    call MPI_Comm_rank( COM_mpi, ID_mpi, ERROR_mpi )
    call MPI_Comm_size( COM_mpi, SIZE_mpi, ERROR_mpi )

    !! allocation Data_Matrix 
    if ( ID_mpi == master ) then
        allocate( Data_Matrix( N_ROWS, N_COLS ) )
        call random_number( Data_Matrix )
        do j = 1, N_COLS
            do i = 1, N_ROWS
                tot_sum = tot_sum + Data_Matrix(i, j)
            enddo
        enddo
        print *, "N_COLS:", N_COLS, "N_ROWS:", N_ROWS, " TOTAL_SUM:", tot_sum
    end if

    !! SINCE THERE ARE NO RESTRICTIONS ON MPI NUMBER OR CPUS OR 
    !! SIZE OR Data_Matrix I NEED TO DO THIS 
    to_each_cpu = N_COLS / SIZE_mpi
    to_each_cpu_oddment = N_COLS - ( to_each_cpu * SIZE_mpi )

    allocate( elem_to_each_cpu(SIZE_mpi) )
    elem_to_each_cpu = to_each_cpu * N_ROWS
    allocate( displacements(SIZE_mpi) )
    displacements = 0

    !! I CHOOSE TO SPLIT THE DATA IN THIS WAY
    if ( ID_mpi == master ) then
        print *, "N_COLS:", N_COLS, "mpisize:", SIZE_mpi, "to_each_cpu\oddment:", to_each_cpu, " \ ", to_each_cpu_oddment
    end if 

    do i = 1, to_each_cpu_oddment
       elem_to_each_cpu(i) = elem_to_each_cpu(i) + N_ROWS
    enddo

    do i = 1, SIZE_mpi-1
        displacements(i+1) = displacements(i) + elem_to_each_cpu(i)
    enddo

    if ( ID_mpi == master ) then
        do i = 1, SIZE_mpi
            print *, i, " to_each_cpu:", &
                elem_to_each_cpu(i), " sub_split_buff_displ:", displacements(i), &
                "=", elem_to_each_cpu(i) + displacements(i)
        enddo
    end if

    allocate( sub_split_Data_Matrix(N_ROWS, elem_to_each_cpu(ID_mpi+1)/N_ROWS) )

    sub_split_Data_Matrix = 0
    sub_matrix_size = elem_to_each_cpu(ID_mpi+1)

    call MPI_scatterv( Data_Matrix, elem_to_each_cpu ,displacements, MPI_REAL, &
                       sub_split_Data_Matrix, sub_matrix_size, MPI_REAL, &
                       master, COM_mpi, ERROR_mpi )

    !!! DOING SOME MATH ON SCATTERED MATRIX 

    call MPI_gatherv( sub_split_Data_Matrix, sub_matrix_size, MPI_REAL, &
                      Data_Matrix, elem_to_each_cpu, displacements, MPI_REAL, &
                      master, COM_mpi, ERROR_mpi )

    !!! DOING SOME MATH ON GATHERED MATRIX 
    if ( ID_mpi == master ) then
        tot_sum = 0.0
        do j = 1, N_COLS
            do i = 1, N_ROWS
                tot_sum = tot_sum + Data_Matrix(i, j)
            enddo
        enddo

        print *, "N_COLS:", N_COLS, "N_ROWS:", N_ROWS, " TOTAL_SUM:", tot_sum
        deallocate( Data_Matrix )
    endif

    deallocate( elem_to_each_cpu )
    deallocate( displacements )
    deallocate( sub_split_Data_Matrix )

end program main