No OpenFabrics connection

Date: 2018-07-09 08:53:25

Tags: python fortran mpi openmpi f2py

I am trying to run an ocean simulation using the fortran-mpi component of pyOM2. For this, I have compiled pyOM with Python 3 and f2py. This is all part of the Veros project.
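As background on that build step, here is a minimal, self-contained sketch of how f2py turns a Fortran routine into an importable Python extension. The toy add subroutine and the module name demo are placeholders of my own, not part of pyOM2; the real pyOM2/Veros build compiles the full model sources and, for the fortran-mpi component, additionally needs MPI-aware compilation.

    # Minimal sketch (not the actual pyOM2 build): numpy.f2py compiles a
    # Fortran source string into an importable Python extension module.
    # The module name "demo" and the toy add() subroutine are placeholders.
    import numpy.f2py

    fortran_src = """
    subroutine add(a, b, c)
        double precision, intent(in)  :: a, b
        double precision, intent(out) :: c
        c = a + b
    end subroutine add
    """

    # extension=".f90" selects free-form Fortran; the compiled module is written
    # to the current working directory and can then be imported like any module.
    numpy.f2py.compile(fortran_src, modulename="demo", verbose=False, extension=".f90")

    import demo
    print(demo.add(1.0, 2.0))  # prints 3.0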

When I run the benchmarks here using fortran, everything works fine. When I run them using fortran-mpi on my machine with an AMD A10-7850K APU with Radeon(TM) R7 Graphics (according to /proc/cpuinfo), they also work fine.

Now I am trying to run the same files and configuration on a machine with an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz. There, I get the following MPI error:

    running benchmark isoneutral_benchmark.py
      current size: 980
        fortran-mpi ... failed
    --------------------------------------------------------------------------
    No OpenFabrics connection schemes reported that they were able to be
    used on a specific port.  As such, the openib BTL (OpenFabrics
    support) will be disabled for this port.

      Local host:           scigpu01
      Local device:         mlx5_3
      Local port:           1
      CPCs attempted:       udcm
    --------------------------------------------------------------------------
    testing mpi routines
    --------------------------------------------------------------------------
    MPI_ABORT was invoked on rank 73 in communicator MPI_COMM_WORLD
    with errorcode 99.

    NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
    You may or may not see output from other processes, depending on
    exactly when Open MPI kills them.
    --------------------------------------------------------------------------
    ERROR: domain decompositon impossible in j-direction
    choose other number of PEs in j-direction
    global pe # 73 : in pe_decomposition
    global pe # 73 aborting ...
    # From ERROR down it repeats 10-12 times
    [scigpu01:58598] 79 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / no cpcs for port
    [scigpu01:58598] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
    [scigpu01:58598] 16 more processes have sent help message help-mpi-api.txt / mpi-abort

I have already tried various settings of environment variables, such as OMPI_MCA_btl=^openib,sm,self, but to no avail.
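For reference, this is roughly how that kind of MCA override can be applied when launching the benchmark; the process count and the direct mpirun invocation of isoneutral_benchmark.py are illustrative assumptions, not my exact command line:

    # Illustrative only: excluding BTLs via Open MPI's MCA environment variable
    # before launching the benchmark under mpirun. The process count (-np 4) and
    # the direct invocation of isoneutral_benchmark.py are assumptions.
    import os
    import subprocess

    env = os.environ.copy()
    env["OMPI_MCA_btl"] = "^openib,sm,self"   # one of the values I tried
    # command-line equivalent: mpirun --mca btl ^openib,sm,self ...

    subprocess.run(
        ["mpirun", "-np", "4", "python", "isoneutral_benchmark.py"],
        env=env,
        check=True,
    )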

Is there some magic command I can run to make this work on my Intel machine?

Thanks

0 Answers:

There are no answers yet.