Python mpi4py script segmentation faults on certain nodes

Date: 2019-07-13 21:44:55

Tags: python distributed-computing openmpi mpi4py

A simple Python MPI script crashes with a segmentation fault on specific nodes of a cluster.

The body of the script is:

import mpi4py
mpi4py.rc.threads = False   # disable MPI thread support before MPI is initialized
from mpi4py import MPI      # importing MPI initializes the MPI runtime

comm = MPI.COMM_WORLD
name = MPI.Get_processor_name()

print("hello world")
print("name:", name, "my rank is", comm.rank)

Before the script runs on a single node, I have tried loading all of the modules in the batch file, but that does not work. The sbatch file looks like this:

#!/bin/bash
#SBATCH --ntasks=256
#SBATCH --mem-per-cpu=150mb
#SBATCH -J jobname
#SBATCH --time=11:00:00
#SBATCH --mail-type=ALL
#SBATCH --mail-user=abc@xyz.com

module load python/3.6.4        
module load gcc             
module load openmpi         
module load mpi4py

echo $SLURM_NODELIST
echo $SLURM_NTASKS
echo $SLURM_JOBID
echo $SLURM_SUBMIT_DIR
export OPENBLAS_NUM_THREADS=1

time mpirun --verbose -np $SLURM_NTASKS python3 testmpi.py
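To check whether every node actually resolves the same Python, mpi4py, and Open MPI, one could launch a small diagnostic with the same mpirun line in place of testmpi.py. This is only a sketch (the file name checkenv.py is hypothetical); it prints the environment before mpi4py initializes MPI, so the information still appears even if MPI_Init later segfaults:

# checkenv.py -- hypothetical per-rank environment check, a sketch only
import os
import socket
import sys

import mpi4py   # importing the package does NOT initialize MPI yet

# Flush immediately so these lines survive a later crash during MPI_Init.
print("host:", socket.gethostname(),
      "python:", sys.executable,
      "mpi4py:", mpi4py.__file__, flush=True)
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", ""), flush=True)

from mpi4py import MPI   # this triggers MPI_Init (where the segfault occurs)
print("rank", MPI.COMM_WORLD.rank, "sees",
      MPI.Get_library_version().strip(), flush=True)

Comparing the output from a failing node with that from a working node would show whether they pick up different interpreters or MPI installations.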

The first few lines of the output look like this (the actual node name has been replaced with NODENAME, and INSTITUTE is a placeholder for the institution where I work):

[NODENAME:24753] *** Process received signal ***
[NODENAME:24753] Signal: Segmentation fault (11)
[NODENAME:24753] Signal code: Address not mapped (1)
[NODENAME:24753] Failing at address: 0x7f68a835a008
[NODENAME:24753] [ 0] /lib64/libpthread.so.0(+0xf7e0) [0x7f68a7f197e0]
[NODENAME:24753] [ 1] /usr/INSTITUTE/gcc/9.1-pkgs/openmpi-4.0.1/lib/pmix/mca_gds_ds21.so(pmix_gds_ds21_lock_init+0x124) [0x7f689d41c184]
[NODENAME:24753] [ 2] /usr/INSTITUTE/gcc/9.1-pkgs/openmpi-4.0.1/lib/libmca_common_dstore.so.1(pmix_common_dstor_init+0x983) [0x7f689d20ae43]

My guess is that the modules are not being loaded on those nodes.
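One way to test that guess is to compare, per rank, the MPI that mpi4py was built against with the MPI library resolved at run time; a mismatch between the loaded openmpi module and the mpi4py module would show up here. This is just a sketch, not part of the original post:

# Sketch: compare build-time and run-time MPI information per rank.
import mpi4py
print("built with:", mpi4py.get_config(), flush=True)   # compiler/MPI used to build mpi4py

from mpi4py import MPI
vmaj, vmin = MPI.Get_version()                          # MPI standard version at run time
print("rank", MPI.COMM_WORLD.rank,
      "on", MPI.Get_processor_name(),
      "MPI", f"{vmaj}.{vmin}",
      MPI.Get_library_version().strip(), flush=True)

On the working nodes the reported library should match the openmpi module loaded in the sbatch file; on the failing nodes only the first line may appear before the crash.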

0 Answers:

No answers yet