Testing MPI on a cluster

Time: 2010-01-31 02:40:23

Tags: cluster-computing hpc openmpi pbs torque

I am learning OpenMPI on a cluster, and this is my first example. I expected the output to show responses from different nodes, but they all report from the same node, node062. I am wondering why this happens, and how to actually get reports from different nodes, to show that MPI really is distributing processes to different nodes. Thanks and regards!

ex1.c

/* test of MPI */
#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
  char idstr[32];
  char buff[128];
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int numprocs, myid, i, namelen;
  MPI_Status stat;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Get_processor_name(processor_name, &namelen);

  if (myid == 0)
  {
    printf("WE have %d processors\n", numprocs);
    for (i = 1; i < numprocs; i++)
    {
      sprintf(buff, "Hello %d", i);
      MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    for (i = 1; i < numprocs; i++)
    {
      MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  }
  else
  {
    MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, " Processor %d at node %s ", myid, processor_name);
    strcat(buff, idstr);
    strcat(buff, "reporting for duty\n");
    MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }
  MPI_Finalize();
  return 0;
}

ex1.pbs

#!/bin/sh  
#  
#This is an example script example.sh  
#  
#These commands set up the Grid Environment for your job:  
#PBS -N ex1  
#PBS -l nodes=10:ppn=1,walltime=1:10:00  
#PBS -q dque    

# export OMP_NUM_THREADS=4  

 mpirun -np 10 /home/tim/courses/MPI/examples/ex1  

Compile and run:

[tim@user1 examples]$ mpicc ./ex1.c -o ex1   
[tim@user1 examples]$ qsub ex1.pbs  
35540.mgt  
[tim@user1 examples]$ nano ex1.o35540  
----------------------------------------  
Begin PBS Prologue Sat Jan 30 21:28:03 EST 2010 1264904883  
Job ID:         35540.mgt  
Username:       tim  
Group:          Brown  
Nodes:          node062 node063 node169 node170 node171 node172 node174 node175  
node176 node177  
End PBS Prologue Sat Jan 30 21:28:03 EST 2010 1264904883  
----------------------------------------  
WE have 10 processors  
Hello 1 Processor 1 at node node062 reporting for duty  
Hello 2 Processor 2 at node node062 reporting for duty        
Hello 3 Processor 3 at node node062 reporting for duty        
Hello 4 Processor 4 at node node062 reporting for duty        
Hello 5 Processor 5 at node node062 reporting for duty        
Hello 6 Processor 6 at node node062 reporting for duty        
Hello 7 Processor 7 at node node062 reporting for duty        
Hello 8 Processor 8 at node node062 reporting for duty        
Hello 9 Processor 9 at node node062 reporting for duty  

----------------------------------------  
Begin PBS Epilogue Sat Jan 30 21:28:11 EST 2010 1264904891  
Job ID:         35540.mgt  
Username:       tim  
Group:          Brown  
Job Name:       ex1  
Session:        15533  
Limits:         neednodes=10:ppn=1,nodes=10:ppn=1,walltime=01:10:00  
Resources:      cput=00:00:00,mem=420kb,vmem=8216kb,walltime=00:00:03  
Queue:          dque  
Account:  
Nodes:  node062 node063 node169 node170 node171 node172 node174 node175 node176  
node177  
Killing leftovers...  

End PBS Epilogue Sat Jan 30 21:28:11 EST 2010 1264904891  
----------------------------------------

Update:

I want to run several background jobs in a single PBS script so that the jobs can run simultaneously. For example, in the example above I added another call to run ex1 and changed both runs to run in the background in ex1.pbs:
#!/bin/sh  
#  
#This is an example script example.sh  
#  
#These commands set up the Grid Environment for your job:  
#PBS -N ex1  
#PBS -l nodes=10:ppn=1,walltime=1:10:00  
#PBS -q dque 

echo "The first job starts!"  
mpirun -np 5 --machinefile /home/tim/courses/MPI/examples/machinefile /home/tim/courses/MPI/examples/ex1 &  
echo "The first job ends!"  
echo "The second job starts!"  
mpirun -np 5 --machinefile /home/tim/courses/MPI/examples/machinefile /home/tim/courses/MPI/examples/ex1 &  
echo "The second job ends!" 

(1) After qsub'ing this script with the previously compiled executable ex1, the results look fine:

The first job starts!  
The first job ends!  
The second job starts!  
The second job ends!  
WE have 5 processors  
WE have 5 processors  
Hello 1 Processor 1 at node node063 reporting for duty        
Hello 2 Processor 2 at node node169 reporting for duty        
Hello 3 Processor 3 at node node170 reporting for duty        
Hello 1 Processor 1 at node node063 reporting for duty        
Hello 4 Processor 4 at node node171 reporting for duty        
Hello 2 Processor 2 at node node169 reporting for duty        
Hello 3 Processor 3 at node node170 reporting for duty        
Hello 4 Processor 4 at node node171 reporting for duty  

(2) However, I think ex1 runs too quickly, so the two background jobs probably don't overlap much in running time, which will not be the case when I apply the same approach to my real project. So I added sleep(30) to ex1.c to extend ex1's running time, so that the two jobs running ex1 in the background are running simultaneously almost the whole time.

/* test of MPI */
#include "mpi.h"
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  char idstr[32];
  char buff[128];
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int numprocs, myid, i, namelen;
  MPI_Status stat;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Get_processor_name(processor_name, &namelen);

  if (myid == 0)
  {
    printf("WE have %d processors\n", numprocs);
    for (i = 1; i < numprocs; i++)
    {
      sprintf(buff, "Hello %d", i);
      MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    for (i = 1; i < numprocs; i++)
    {
      MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  }
  else
  {
    MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, " Processor %d at node %s ", myid, processor_name);
    strcat(buff, idstr);
    strcat(buff, "reporting for duty\n");
    MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }

  sleep(30); /* newly added to extend the running time */
  MPI_Finalize();
  return 0;
}

But after recompiling and qsub'ing, the results do not look right: processes were aborted. In ex1.o35571:

The first job starts!  
The first job ends!  
The second job starts!  
The second job ends!  
WE have 5 processors  
WE have 5 processors  
Hello 1 Processor 1 at node node063 reporting for duty  
Hello 2 Processor 2 at node node169 reporting for duty  
Hello 3 Processor 3 at node node170 reporting for duty  
Hello 4 Processor 4 at node node171 reporting for duty  
Hello 1 Processor 1 at node node063 reporting for duty  
Hello 2 Processor 2 at node node169 reporting for duty  
Hello 3 Processor 3 at node node170 reporting for duty  
Hello 4 Processor 4 at node node171 reporting for duty  
4 additional processes aborted (not shown)  
4 additional processes aborted (not shown)  
In ex1.e35571:

mpirun: killing job...  
mpirun noticed that job rank 0 with PID 25376 on node node062 exited on signal 15 (Terminated).  
mpirun: killing job...  
mpirun noticed that job rank 0 with PID 25377 on node node062 exited on signal 15 (Terminated).  

I wonder why there are aborted processes. How do I run background jobs correctly in a PBS script?

4 Answers:

Answer 0 (score: 3)

A couple of things: you need to tell MPI where to launch its processes. Assuming you are using MPICH, look at the mpiexec help section for the machine file (or equivalent) option. Unless a machine file is provided, it will run everything on one host.
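As an illustration (the node names are taken from the job output above, not prescribed by any answer), a machine file is just a plain-text list of hostnames, one per line; Open MPI additionally accepts an optional slot count per line:

```
node062
node063 slots=1
node169 slots=1
```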

PBS automatically creates a node file. Its name is stored in the PBS_NODEFILE environment variable, which is available inside the PBS command file. Try the following:

mpiexec -machinefile $PBS_NODEFILE ...

If you are using MPICH2, you also have to boot your MPI runtime with mpdboot. I don't remember the details of the command; you will have to read the man page. Remember to create the secret file, or mpdboot will fail.

I read your post again: you are using Open MPI. You still have to supply a machine file to the mpiexec command, but you don't have to mess with mpdboot.
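A minimal sketch of the question's job script adjusted along these lines (paths, queue name, and node counts are taken from the question; this is illustrative and untested):

```shell
#!/bin/sh
#PBS -N ex1
#PBS -l nodes=10:ppn=1,walltime=1:10:00
#PBS -q dque

# PBS writes one hostname per allocated slot into $PBS_NODEFILE
cd $PBS_O_WORKDIR
mpirun -np 10 -machinefile $PBS_NODEFILE ./ex1
```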

Answer 1 (score: 2)

By default, PBS (I assume Torque) allocates nodes in exclusive mode, so that there is only one job per node. It is a bit different if you have multiple processors: then you get one process per CPU. PBS allocation can be changed to time-shared mode; look at the man page for qmgr. Long story short, most likely you will not have overlapping nodes in the node file, since the node file is created when resources become available, not at submission time.

The purpose of PBS is resource control, most commonly time and node allocation (automatic).

Commands in the PBS file are executed sequentially. You can put processes into the background, but that might defeat the purpose of resource allocation; then again, I don't know your exact workflow. I have used background processes in a PBS script to copy data before the main program runs in parallel, using &. A PBS script is really just a shell script.
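This also points at what likely went wrong in the update above: the script backgrounds both mpirun commands and then immediately reaches its end, so PBS considers the job finished and kills the still-running processes with SIGTERM (the "exited on signal 15" in ex1.e35571). The usual fix is a `wait` after the background commands. A minimal sketch of that control flow, with `sleep`/`echo` standing in for the real mpirun calls:

```shell
#!/bin/sh
# Stand-ins for the two "mpirun ... &" lines from the update:
(sleep 1; echo "first job finished")  &
(sleep 1; echo "second job finished") &

# Without this line the script (and hence the PBS job) would end
# here, and the scheduler would kill the still-running children.
wait

echo "script exits only after both background jobs are done"
```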

You can assume that PBS knows nothing about the inner workings of your script. You can certainly run multiple processes/threads from the script; if you do, it is up to you and your operating system to allocate cores/processors in a balanced way. If you are using a multithreaded program, the most likely approach is to run one MPI process per node and then spawn OpenMP threads.
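To illustrate that last point, a hybrid sketch (not from the original post; it assumes an MPI compiler wrapper with OpenMP support, e.g. `mpicc -fopenmp hybrid.c -o hybrid`) runs one MPI rank per node and lets each rank spawn OpenMP threads, which is what the commented-out `OMP_NUM_THREADS` line in the question's script hints at:

```c
/* Hybrid MPI + OpenMP sketch: one rank per node, threads within it. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* OMP_NUM_THREADS (commented out in the question's PBS script)
     * controls how many threads each rank spawns here. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Launched with one process per node (e.g. `mpirun -np <nodes> -machinefile $PBS_NODEFILE ./hybrid`), each node's cores are then used by the OpenMP threads rather than by extra MPI ranks.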

Let me know if you need clarification.

Answer 2 (score: 1)

As a diagnostic, insert these statements immediately after your call to MPI_Get_processor_name:

printf("Hello, world.  I am %d of %d on %s\n", myid, numprocs, processor_name);
fflush(stdout); 

If all processes return the same node ID, that would tell me that you don't quite understand what is going on with the job-management system and the cluster; perhaps PBS is putting all 10 processes on a single node despite what you have told it (does the node have 10 cores?).

If this produces different results, that would suggest a problem with your code, although it looks OK to me.

Answer 3 (score: 0)

There is a bug in your code unrelated to MPICH: you reuse i in two loops.

for(i=1;i<numprocs;i++)  
  {  
    sprintf(buff, "Hello %d", i);  
    MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD); }  
    for(i=1;i<numprocs;i++)  

The second for loop will mess things up.