Segmentation fault due to MPI_comm_size

Posted: 2018-06-30 08:11:27

Tags: c fortran mpi

I have a Fortran code that is designed to run with the default communicator MPI_COMM_WORLD, but I intend to run it on only a few processors. I have another code that uses MPI_comm_split to obtain a second communicator MyComm. It is an integer, and when I print it I get the value 3. Now I call a C function from the Fortran code to get the rank and size corresponding to MyComm, but I am running into several problems here.

  1. In Fortran, when I print MyComm, its value is 3, but when I print it inside the C function its value is 17278324. I also printed the value of MPI_COMM_WORLD, which was about 1140850688. I do not know what these values mean or why the value of MyComm changes.

  2. My code compiles and builds the executable fine, but when I run it I get a segmentation fault. I used gdb to debug the code, and the process terminates at the following line

Program terminated with signal 11, Segmentation fault.

#0  0x00007fe5e8f6248c in PMPI_Comm_size (comm=0x107a574, size=0x13c4ba0) at pcomm_size.c:62
62      *size = ompi_comm_size((ompi_communicator_t*)comm);

I noticed that MPI_comm_rank returns the rank corresponding to MyComm, but the problem occurs only with MPI_comm_size; there is no such problem with MPI_COMM_WORLD. So I do not understand what is causing it. I checked the inputs but found no clue. Here is my C code,

#include <stdio.h>
#include "utils_sub_names.h"
#include <mpi.h>
#define MAX_MSGTAG 1000
int flag_msgtag=0;
MPI_Request mpi_msgtags[MAX_MSGTAG];

char *ibuff;
int ipos,nbuff;

MPI_Comm MyComm;
void par_init_fortran (MPI_Fint *MyComm_r, MPI_Fint *machnum, MPI_Fint *machsize)
{
 MPI_Fint comm_in;
 comm_in = *MyComm_r;
 MyComm = MPI_Comm_f2c(comm_in);   /* convert the Fortran integer handle to a C handle */
 printf("my comm is %d \n", MyComm);

  MPI_Comm_rank(MyComm, machnum);   /* rank within MyComm */
  printf("my machnum is %d \n ", *machnum);
  MPI_Comm_size(MyComm, machsize);  /* size of MyComm */
  printf("my machsize is %d \n ", *machsize);
}

Edit:

I want to declare MyComm as a global communicator for all the functions in the C code. But I do not understand why my communicator is still invalid. Note that the MPI routines are initialized and finalized only in Fortran; I would prefer not to initialize them again in C. I am using the following Fortran code.

      implicit none
      include 'mpif.h'
      integer :: MyColor, MyCOMM, MyError, MyKey, Nnodes
      integer :: MyRank, pelast
      CALL mpi_init (MyError)
      CALL mpi_comm_size (MPI_COMM_WORLD, Nnodes, MyError)
      CALL mpi_comm_rank (MPI_COMM_WORLD, MyRank, MyError)
      MyColor=1
      MyKey=0

      CALL mpi_comm_split (MPI_COMM_WORLD, MyColor, MyKey, MyComm, MyError)
      CALL ramcpl (MyComm)
      CALL mpi_barrier (MPI_COMM_WORLD, MyError)
      CALL MCTWorld_clean ()
      CALL mpi_finalize (MyError)

My subroutine ramcpl is located elsewhere

subroutine ramcpl (MyComm_r)
implicit none
integer :: MyComm_r, ierr
.
.
.
CALL par_init_fortran (MyComm_r, my_mpi_num,nmachs);
End Subroutine ramcpl

The command line and the output are

    mpirun -np 4 ./ramcplM ramcpl.in

       Model Coupling: 

[localhost:31472] *** Process received signal ***
[localhost:31473] *** Process received signal ***
[localhost:31472] Signal: Segmentation fault (11)
[localhost:31472] Signal code: Address not mapped (1)
[localhost:31472] Failing at address: (nil)
[localhost:31473] Signal: Segmentation fault (11)
[localhost:31473] Signal code: Address not mapped (1)
[localhost:31473] Failing at address: (nil)
[localhost:31472] [ 0] /lib64/libpthread.so.0() [0x3120c0f7e0]
[localhost:31472] [ 1] ./ramcplM(par_init_fortran_+0x122) [0x842db2]
[localhost:31472] [ 2] ./ramcplM(__rams_MOD_rams_cpl+0x7a0) [0x8428c0]
[localhost:31472] [ 3] ./ramcplM(MAIN__+0xea6) [0x461086]
[localhost:31472] [ 4] ./ramcplM(main+0x2a) [0xc3eefa]
[localhost:31472] [ 5] /lib64/libc.so.6(__libc_start_main+0xfd) [0x312081ed1d]
[localhost:31472] [ 6] ./ramcplM() [0x45e2d9]
[localhost:31472] *** End of error message ***
[localhost:31473] [ 0] /lib64/libpthread.so.0() [0x3120c0f7e0]
[localhost:31473] [ 1] ./ramcplM(par_init_fortran_+0x122) [0x842db2]
[localhost:31473] [ 2] ./ramcplM(__rammain_MOD_ramcpl+0x7a0) [0x8428c0]
[localhost:31473] [ 3] ./ramcplM(MAIN__+0xea6) [0x461086]
[localhost:31473] [ 4] ./ramcplM(main+0x2a) [0xc3eefa]
[localhost:31473] [ 5] /lib64/libc.so.6(__libc_start_main+0xfd) [0x312081ed1d]
[localhost:31473] [ 6] ./ramcplM() [0x45e2d9]
[localhost:31473] *** End of error message ***

1 Answer:

Answer 0 (score: 6):

The handles in Fortran and C are not compatible. Use MPI_Comm_f2c (https://linux.die.net/man/3/mpi_comm_f2c) and the related conversion functions. Pass the communicator between Fortran and C as an integer, not as an MPI_Comm.
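
For illustration, here is a minimal sketch of the pattern this answer describes, assuming the Fortran side passes the handle by reference as a plain integer (MPI_Fint). The function and argument names mirror the question, and the trailing underscore is just one common Fortran name-mangling convention, so they may need adjusting for your compiler:

#include <stdio.h>
#include <mpi.h>

/* Receive the Fortran communicator handle as an MPI_Fint and convert it
 * with MPI_Comm_f2c before using it with the C API. */
void par_init_fortran_(MPI_Fint *MyComm_r, MPI_Fint *machnum, MPI_Fint *machsize)
{
    MPI_Comm comm = MPI_Comm_f2c(*MyComm_r);  /* Fortran integer -> C handle */
    int rank, size;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    printf("rank %d of %d in MyComm\n", rank, size);

    *machnum  = (MPI_Fint) rank;   /* hand the results back to the Fortran caller */
    *machsize = (MPI_Fint) size;
}

On the Fortran side the handle stays an ordinary integer, exactly as in the question's `CALL par_init_fortran (MyComm_r, my_mpi_num, nmachs)`; only the C side performs the conversion.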