Deadlock in MPI_Reduce() when running on multiple nodes

Asked: 2012-10-09 01:16:29

Tags: c++ parallel-processing mpi distributed-computing

I have a problem with my MPI code: it hangs when run across multiple nodes, but completes successfully on a single node. I don't know how to debug this. Can anyone help me track down the problem?

The program is compiled and run with:

mpicc -o string strin.cpp
mpirun -np 4 -npernode 2 -hostfile hosts ./string 12 0.1 0.9 10 2

My code:

#include <iostream>
#include <vector>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main ( int argc, char **argv )
{

    float *y, *yold;
    float *v, *vold;
    int nprocs, myid;
    FILE *f = NULL;
    MPI_Status   status;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];


    //  const int NUM_MASSES = 1000;
    //  const float Ktension = 0.1;
    //  const float Kdamping = 0.9;
    //  const float duration = 10.0;

#if 0
    if ( argc != 5 ) {
        std::cout << "usage: " << argv[0] << " NUM_MASSES durationInSecs Ktension Kdamping\n";
        return 2;
    }
#endif

    int NUM_MASSES  = atoi ( argv[1] );
    float duration = atof ( argv[2] );
    float Ktension = atof ( argv[3] );
    float Kdamping = atof ( argv[4] );
    const int PICKUP_POS = NUM_MASSES / 7;    // change this for diff harmonics
    const int OVERSAMPLING = 16;  // run sim at this multiple of audio sampling rate

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Get_processor_name(processor_name, &namelen);

    // open output file
    if (myid  == 0) {
        f = fopen ( "rstring.raw", "wb" );
        if (!f) {
            std::cout << "can't open output file\n";
            return 1;
        }
    }

    // allocate displacement and velocity arrays
    y = new float[NUM_MASSES];
    yold = new float[NUM_MASSES];
    v = new float[NUM_MASSES];

    // initialize displacements (pluck it!) and velocities
    for (int i = 0; i < NUM_MASSES; i++ ) {
        v[i]  = 0.0f;
        yold[i] = y[i] = 0.0f;
        if (i == NUM_MASSES/2 )
            yold[i] = 1.0; // impulse at string center
    }

    // Broadcast data
    //MPI_Bcast(y, NUM_MASSES, MPI_FLOAT, 0, MPI_COMM_WORLD);
    //MPI_Bcast(yold, NUM_MASSES, MPI_FLOAT, 0, MPI_COMM_WORLD);
    //MPI_Bcast(v, NUM_MASSES, MPI_FLOAT, 0, MPI_COMM_WORLD);

    //int numIters = duration * 44100 * OVERSAMPLING; 
    int numIters = atoi( argv[5] );
    for ( int t = 0; t < numIters; t++ ) {
        // for each mass element
        float sum = 0;
        float gsum = 0;
        int i_start;
        int i_end ;

        i_start = myid * (NUM_MASSES/nprocs);
        i_end = i_start + (NUM_MASSES/nprocs);

        for ( int i = i_start; i < i_end; i++ ) {
            if ( i == 0 || i == NUM_MASSES-1 ) {
                // the string's endpoints are fixed, so leave them at zero
            } else {
                float accel = Ktension * (yold[i+1] + yold[i-1] - 2*yold[i]);
                v[i] += accel;
                v[i] *= Kdamping;
                y[i] = yold[i] + v[i];
                sum += y[i];
            }
        }

        MPI_Reduce(&sum, &gsum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

        // swap the current and previous displacement buffers
        float *tmp = y;
        y = yold;
        yold = tmp;

        if (myid == 0) {
            //printf("%f\n", gsum);
            if ( t % OVERSAMPLING == 0 ) {
                fwrite ( &gsum, sizeof(float), 1, f );
            }
        }
    }
    if (myid  == 0) {
        fclose ( f );
    }
    MPI_Finalize();
}

3 Answers:

Answer 0 (score: 1)

If it is an option for you, try running your application inside a parallel debugger such as Totalview.

Otherwise, once the program hangs, you can attach a freely available serial debugger such as GDB to the processes one at a time to see where the potential problem is.
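For example (a hypothetical session; the PID is a placeholder and will differ on your machine), while the job is hung you can log in to one of the compute nodes and attach to a rank:

pgrep -f ./string             # list the PIDs of the hung MPI ranks
gdb -p 12345                  # attach to one of them (placeholder PID)
(gdb) thread apply all bt     # backtraces show where each thread is blocked
(gdb) detach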

Answer 1 (score: 1)

My guess is that you are trying to receive a message that no node ever sent. If every node tries to receive first, which node would be doing the sending?

You could restructure the program along the lines of if id == 0 send(msg) else receive(&msg), and try working with timeouts.
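A minimal sketch of that pattern, assuming two ranks exchanging a single int (the tag, message value, and five-second deadline are illustrative). MPI_Recv itself has no timeout, so the usual workaround is a nonblocking receive polled against the wall clock:

#include <cstdio>
#include "mpi.h"

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int id;
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    int msg = 0;
    if (id == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (id == 1) {
        MPI_Request req;
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        double deadline = MPI_Wtime() + 5.0;  // illustrative 5-second "timeout"
        int done = 0;
        while (!done && MPI_Wtime() < deadline)
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (done) {
            printf("rank 1 received %d\n", msg);
        } else {
            fprintf(stderr, "rank 1: nothing received in 5 s, likely a hang\n");
            MPI_Cancel(&req);                 // give up on the pending receive
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
    }
    MPI_Finalize();
    return 0;
}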

Write down on paper how the program works and how the nodes interact with each other, and you will see where the problem lies.

Answer 2 (score: 0)

I finally found the answer on the OpenMPI mailing list. I think the problem came from the way my hosts are set up.

It seems that when running on multiple nodes, the TCP BTL gets confused by virtual interfaces (vmnet?). I restricted which interfaces it uses with the "--mca btl_tcp_if_include eth0" parameter, and that solved my problem.
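Applied to the run command from the question, that looks like this (eth0 should be whichever physical interface actually connects your nodes):

mpirun -np 4 -npernode 2 -hostfile hosts --mca btl_tcp_if_include eth0 ./string 12 0.1 0.9 10 2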