OpenMPI Reduce with MINLOC

Asked: 2011-11-23 21:47:19

Tags: c++ parallel-processing mpi

I'm currently working on some MPI code for a graph theory problem in which a number of nodes can each contain an answer and the length of that answer. To get everything back to the master node I'm doing an MPI_Gather of the answers, and I'm attempting to use the MPI_MINLOC operation to determine which node has the shortest solution. Right now the datatype that stores the length and the node ID is defined as follows (per examples shown on numerous sites, such as http://www.open-mpi.org/doc/v1.4/man3/MPI_Reduce.3.php):

struct minType
{
    float len;   /* length of this node's solution */
    int   index; /* rank of the node that holds it */
};

On each node I initialize the local copy of this struct as follows:

int commRank;
MPI_Comm_rank (MPI_COMM_WORLD, &commRank);
minType solutionLen;
solutionLen.len = 1e37;       /* sentinel "no solution yet" value */
solutionLen.index = commRank;

At the end of execution I have an MPI_Gather call that successfully pulls back all of the solutions (I've printed them out from memory to verify them), as well as the call:

MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);

My understanding is that the arguments are supposed to be (see the annotated call after this list):

  1. The source of the data
  2. The destination for the result (only meaningful on the designated root node)
  3. The number of items each node sends
  4. The datatype (MPI_FLOAT_INT appears to be defined per the link above)
  5. The operation (MPI_MINLOC also appears to be defined)
  6. The ID of the root node in the specified communicator
  7. The communicator to wait on.
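
In other words, I believe the call breaks down as (same call as above, with each argument annotated):

MPI_Reduce (&solutionLen,    /* 1: the source of the data */
            &solutionLen,    /* 2: the destination for the result (root only) */
            1,               /* 3: the number of items each node sends */
            MPI_FLOAT_INT,   /* 4: the predefined (float, int) pair type */
            MPI_MINLOC,      /* 5: the predefined minimum-location operation */
            0,               /* 6: the root node's ID */
            MPI_COMM_WORLD); /* 7: the communicator */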

When my code makes it to the reduce operation, I get this error:

    [compute-2-19.local:9754] *** An error occurred in MPI_Reduce
    [compute-2-19.local:9754] *** on communicator MPI_COMM_WORLD
    [compute-2-19.local:9754] *** MPI_ERR_ARG: invalid argument of some other kind
    [compute-2-19.local:9754] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
    --------------------------------------------------------------------------
    mpirun has exited due to process rank 0 with PID 9754 on
    node compute-2-19.local exiting improperly. There are two reasons this could occur:
    
    1. this process did not call "init" before exiting, but others in
    the job did. This can cause a job to hang indefinitely while it waits
    for all processes to call "init". By rule, if one process calls "init",
    then ALL processes must call "init" prior to termination.
    
    2. this process called "init", but exited without calling "finalize".
    By rule, all processes that call "init" MUST call "finalize" prior to
    exiting or it will be considered an "abnormal termination"
    
    This may have caused other processes in the application to be
    terminated by signals sent by mpirun (as reported here).
    --------------------------------------------------------------------------
    

I'll freely admit to being completely stumped by this. In case it matters, I'm compiling with OpenMPI 1.5.3 (built with gcc 4.4) on a Rocks cluster based on CentOS 5.5.

1 Answer:

Answer 0 (score: 4):

I don't think you're allowed to use the same buffer for the input and the output (the first two arguments). The man page says:

    When the communicator is an intracommunicator, you can perform a
    reduce operation in place (the output buffer is used as the input
    buffer). Use the variable MPI_IN_PLACE as the value of the root
    process sendbuf. In this case, the input data is taken at the root
    from the receive buffer, where it will be replaced by the output
    data.
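
In other words, either give the reduction a result buffer of its own, or pass MPI_IN_PLACE on the root. Here is a minimal sketch of both variants (globalMin is a hypothetical name for the result buffer; commRank is the rank variable from the question):

minType globalMin; /* hypothetical: a result buffer distinct from solutionLen */

/* Variant 1: separate send and receive buffers on every rank */
MPI_Reduce (&solutionLen, &globalMin, 1, MPI_FLOAT_INT,
            MPI_MINLOC, 0, MPI_COMM_WORLD);

/* Variant 2: in-place reduction on the root */
if (commRank == 0) {
    /* Root: input is taken from the receive buffer and replaced by the result */
    MPI_Reduce (MPI_IN_PLACE, &solutionLen, 1, MPI_FLOAT_INT,
                MPI_MINLOC, 0, MPI_COMM_WORLD);
} else {
    /* Non-root ranks: the receive-buffer argument is ignored, so NULL is fine */
    MPI_Reduce (&solutionLen, NULL, 1, MPI_FLOAT_INT,
                MPI_MINLOC, 0, MPI_COMM_WORLD);
}

Either way, the important point is that sendbuf and recvbuf never alias the same memory on the root.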