The background of this problem lies in computational fields such as computational fluid dynamics (CFD), where certain critical regions often need a finer mesh while the background mesh can stay coarser. Examples are adaptively refined meshes used to track shock waves, and nested domains in meteorology.
A Cartesian topology is used, and the domain decomposition is shown in the sketch below. In this case 4 * 2 = 8 processors are used; a single number denotes the rank of a processor, and (x, y) denotes its topological coordinates.
Suppose the mesh is refined in the region owned by ranks 2, 3, 4 and 5 (in the middle), with a local refinement ratio R = D_coarse / D_fine = 2. Since the mesh there is finer, the time marching should be refined as well: the refined region computes the time levels t, t + 1/2 * dt and t + dt, while the global region only computes t and t + dt. This calls for a smaller communicator that includes only the middle ranks for the extra computation. A sketch of the global ranks + coordinates and the corresponding local ranks (in red) is shown below:
However, I ran into some errors when implementing this scheme. The (incomplete) Fortran code segment is shown below:
integer :: global_comm, local_comm ! global and local communicators
integer :: global_rank, local_rank ! global and local ranks
integer :: global_grp, local_grp ! global and local groups
integer :: ranks(4) ! ranks in the refined region
integer :: dim ! dimension
integer :: left(-2:2), right(-2:2) ! ranks of neighbouring processors in 2 directions
integer :: ierr ! error code
ranks=[2,3,4,5]
!---- Make global communicator and its topological relationship
call mpi_init(ierr)
call mpi_cart_create(MPI_COMM_WORLD, 2, [4,2], [.false., .false.], .true., global_comm, ierr)
call mpi_comm_rank(global_comm, global_rank, ierr)
do dim=1, 2
call mpi_cart_shift(global_comm, dim-1, 1, left(-dim), right(dim), ierr)
end do
!---- make local communicator and its topological relationship
! Here I use group and create communicator
! create global group
call mpi_comm_group(MPI_COMM_WORLD, global_grp, ierr)
! extract 4 ranks from global group to make a local group
call mpi_group_incl(global_grp, 4, ranks, local_grp, ierr)
! make new communicator based on local group
call mpi_comm_create(MPI_COMM_WORLD, local_grp, local_comm, ierr)
! make topology for local communicator
call mpi_cart_create(global_comm, 2, [2,2], [.false., .false.], .true., local_comm, ierr)
! **** get rank for local communicator
call mpi_comm_rank(local_comm, local_rank, ierr)
! Do the same thing to make topological relationship as before in local communicator.
...
When I run the program, the problem comes from the '**** get rank for local communicator' step. My idea is to build two communicators, a global one and a local one, with the local communicator embedded in the global one, and then set up the corresponding topology in each of them separately. I am not sure whether my concept is wrong or there is some syntax error. Any suggestions would be greatly appreciated.
The error message is:
*** An error occurred in MPI_Comm_rank
*** reported by process [817692673,4]
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_COMM: invalid communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
Answer (score: 2)
You are creating a 2x2 Cartesian topology from the global communicator, which contains eight ranks. Therefore, on four of them the value of local_comm returned by MPI_Cart_create is MPI_COMM_NULL. Calling MPI_Comm_rank on the null communicator results in the error.
If I understand your logic correctly, you should do something like:
if (local_comm /= MPI_COMM_NULL) then
   ! make topology for local communicator
   call mpi_cart_create(local_comm, 2, [2,2], [.false., .false.], .true., &
                        local_cart_comm, ierr)
   ! **** get rank for local communicator
   call mpi_comm_rank(local_cart_comm, local_rank, ierr)
   ...
end if
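For completeness, here is a minimal self-contained sketch that puts the pieces together. It assumes 8 ranks, a non-periodic 4x2 global grid, and that the refined block is owned by ranks 2, 3, 4 and 5 of the (possibly reordered) global communicator; the program and variable names are illustrative. Launch with something like `mpirun -np 8 ./a.out`:

```fortran
program refined_region
   use mpi
   implicit none
   integer :: global_comm, local_comm, local_cart_comm
   integer :: global_grp, local_grp
   integer :: global_rank, local_rank
   integer :: ranks(4)
   integer :: ierr

   call mpi_init(ierr)

   ! Global 4x2 Cartesian topology over all eight ranks
   call mpi_cart_create(MPI_COMM_WORLD, 2, [4, 2], [.false., .false.], &
                        .true., global_comm, ierr)
   call mpi_comm_rank(global_comm, global_rank, ierr)

   ! Subgroup of the four ranks that own the refined region.
   ! Note: the group is taken from global_comm, not MPI_COMM_WORLD,
   ! so the rank numbers refer to the reordered Cartesian communicator.
   ranks = [2, 3, 4, 5]
   call mpi_comm_group(global_comm, global_grp, ierr)
   call mpi_group_incl(global_grp, 4, ranks, local_grp, ierr)
   call mpi_comm_create(global_comm, local_grp, local_comm, ierr)

   ! Only the four included ranks receive a valid local_comm; all
   ! other ranks get MPI_COMM_NULL and must skip the local operations.
   if (local_comm /= MPI_COMM_NULL) then
      ! 2x2 Cartesian topology inside the refined region
      call mpi_cart_create(local_comm, 2, [2, 2], [.false., .false.], &
                           .true., local_cart_comm, ierr)
      call mpi_comm_rank(local_cart_comm, local_rank, ierr)
      print '(a,i0,a,i0)', 'global rank ', global_rank, &
            ' -> local rank ', local_rank
   end if

   call mpi_group_free(local_grp, ierr)
   call mpi_group_free(global_grp, ierr)
   call mpi_finalize(ierr)
end program refined_region
```

The key point is that every collective creation call (MPI_Comm_create, MPI_Cart_create) may hand some ranks MPI_COMM_NULL, so any subsequent call on the resulting communicator must be guarded by a null check.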