MPI deadlock with collective functions

Asked: 2017-05-05 14:28:27

Tags: c++ mpi

I'm writing a program in C++ with the MPI library. Only one node keeps working and then the program deadlocks! I don't use any send or receive operations, only two collective functions (MPI_Allreduce and MPI_Bcast). If a deadlock means some node is waiting for another node to send or receive something, I really don't understand what could be causing it here.

void ParaStochSimulator::first_reacsimulator() {
    SimulateSingleRun();
}

double ParaStochSimulator::deterMinTau() {
    //calculate minimum tau for this process
    l_nLocalMinTau = calc_tau(); //min tau for each node
    MPI_Allreduce(&l_nLocalMinTau, &l_nGlobalMinTau, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);    
    //min tau for all nodes
    //check if I have the min value
    if (l_nLocalMinTau <= l_nGlobalMinTau && m_nCurrentTime < m_nOutputEndPoint) {
        FireTransition(m_nMinTransPos);
        CalculateAllHazardValues(); 
    }
    return l_nGlobalMinTau;
}

void ParaStochSimulator::SimulateSingleRun() {
    //prepare a run
    PrepareRun();
    while ((m_nCurrentTime < m_nOutputEndPoint) && IsSimulationRunning()) {
        deterMinTau();
        if (mnprocess_id == 0) { //master
            SimulateSingleStep();
            std::cout << "current time:*****" << m_nCurrentTime << std::endl;
            broad_casting(m_nMinTransPos);
            MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
            //std::cout << "size of mani place :" << l_nMinplacesPos.size() << std::endl;
        }
    }
    MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
    PostProcessRun();
}

1 Answer:

Answer (score: 1)

While your "master" process is executing its MPI_Bcast, all the other processes are still running your loop: they enter deterMinTau again and execute MPI_Allreduce.

That is a deadlock: the master node is waiting for all nodes to take part in the broadcast, while all the other nodes are waiting for the master to take part in the reduce.
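
To see the mismatch in isolation, here is a minimal self-contained sketch (not from your program; the names are made up for illustration) in which rank 0 enters a broadcast while every other rank enters a reduce. Run with two or more processes, it hangs exactly the way your loop does:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank, global = 0.0;
    if (rank == 0) {
        //rank 0 blocks here, waiting for every rank to join the broadcast...
        MPI_Bcast(&local, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    } else {
        //...while every other rank blocks here, waiting for rank 0 to join the reduce
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
    }
    std::printf("rank %d: never reached with more than one process\n", rank);
    MPI_Finalize();
    return 0;
}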

I believe what you are looking for is:

void ParaStochSimulator::SimulateSingleRun() {
    //prepare a run
    PrepareRun();
    while ((m_nCurrentTime < m_nOutputEndPoint) && IsSimulationRunning()) {
        //All the nodes reduce tau at the same time
        deterMinTau();
        if (mnprocess_id == 0) { //master
            SimulateSingleStep();
            std::cout << "current time:*****" << m_nCurrentTime << std::endl;
            broad_casting(m_nMinTransPos);
            //Removed the master-only broadcast here
        }
        //All the nodes broadcast at every loop iteration
        MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    PostProcessRun();
}
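
Note that the MPI_Bcast that followed the loop in your original code is gone as well. MPI collectives are matched purely by the order in which the ranks of a communicator call them, so every rank must issue the same sequence of collective calls; with the broadcast now executed by all ranks on every iteration, the extra one after the loop no longer serves a purpose.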