Synchronous MST with MPI

Time: 2015-12-02 00:17:22

Tags: c++ mpi distributed minimum-spanning-tree

I am trying to build a distributed synchronous minimum spanning tree using the Gallagher-Humblet-Spira algorithm as described in the book Distributed Computing: Principles, Algorithms, and Systems.

I started about a week ago, and I am still trying to turn the pseudocode described in the book into C++/MPI code. Can someone give me a good reference, or code that I can use as a guide?

This is what I have done so far:

#include <mpi.h>
#include <cstdio>
#include <vector>
#include <cmath>
#include <cfloat>

static const int SEARCH_MWOE_TAG    = 1;
static const int EXAMINE_TAG        = 2;
static const int MAX_NODES          = 4;

//used in the graph structure
struct Edge {
    int    target;
    double weight;

    Edge(){
        target = 0;
        weight = 0.0;
    }

    Edge(int t, double w) {
        target = t;
        weight = w;
    }
};

//the minimum weight outgoing edge, used for messages
struct Mwoe_type {

    int leader;
    int remote_id;
    int local_id;
    double weight;

    Mwoe_type(){
        leader = remote_id = local_id = 0;
        weight = DBL_MAX;
    }

    Mwoe_type(int le, int lo, int re, double we) {
        leader    = le;
        local_id  = lo;
        remote_id = re;
        weight    = we;
    }

    bool operator< (const Mwoe_type& other) const {
        return weight < other.weight;
    }
};

using adj_list = std::vector< Edge >;



/*
    Creates a new MPI datatype to use for send/receive.
 */
void commit_edge_type(MPI_Datatype *newType);

/* 
 Sends an EXAMINE message along unmarked (i.e., non-tree) edges
 to determine if the other end of the edge is in the same component.

 Returns the request vector used to wait for the MWOE replies.
*/
std::vector< MPI_Request > examine(adj_list &, int &);

/*
 broadcast SEARCH_MWOE(leader) along marked edges of tree
 */
void search_mwoe(adj_list &, int &);

//convergecast reply
void reply_mwoe(Mwoe_type &, int target);


void makeUnmarked( int id_process, adj_list &unmarked_edges ){
    //a small test graph
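    //(for reference, the expected MST of this graph is edges 1-2 (w=2),
    // 2-3 (w=3) and 0-2 (w=5), total weight 10)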
    switch (id_process) {
        case 0:
            unmarked_edges.push_back( Edge( 1, 9.0 ) );
            unmarked_edges.push_back( Edge( 2, 5.0  ) );
            break;

        case 1:
            unmarked_edges.push_back( Edge( 0, 9.0 ) );
            unmarked_edges.push_back( Edge( 2, 2.0  ) );
            unmarked_edges.push_back( Edge( 3, 6.0  ) );
            break;

        case 2:
            unmarked_edges.push_back( Edge( 0, 5.0 ) );
            unmarked_edges.push_back( Edge( 1, 2.0  ) );
            unmarked_edges.push_back( Edge( 3, 3.0  ) );
            break;

        case 3:
            unmarked_edges.push_back( Edge( 1, 6.0 ) );
            unmarked_edges.push_back( Edge( 2, 3.0  ) );
            break;

        default:
            break;
    }

}

int main(int argc, char** argv) {

    MPI_Init( &argc, &argv);
    MPI_Request request;
    MPI_Status status;

    MPI_Datatype MPI_mwoe;
    commit_edge_type(&MPI_mwoe);

    int id_process;
    MPI_Comm_rank( MPI_COMM_WORLD, &id_process );

    adj_list marked_edges  ;
    adj_list unmarked_edges;

    //every node begins with itself on the marked edges list
    marked_edges.push_back( Edge(id_process, 0.0) );

    //a simple example
    makeUnmarked( id_process, unmarked_edges );


    int leader = id_process;

    int nodes_on_mst = 1;
    //while not all nodes are on the MST
    while ( nodes_on_mst != MAX_NODES ) {

        //1) [the root always initiates with a broadcast]
        if( leader == id_process ) {
            search_mwoe( marked_edges , leader);
        }

        //synchronize and read messages ... how do I do this?
        //MPI_Barrier(MPI_COMM_WORLD);
        //MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
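        //maybe something like this (an untested guess on my part): block
        //until any message arrives, receive its payload, and let the tag
        //checks below dispatch on status.MPI_TAG (as written above,
        //status.MPI_TAG is read uninitialized):
        //
        //  MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        //  int incoming_leader;
        //  MPI_Recv(&incoming_leader, 1, MPI_INT, status.MPI_SOURCE,
        //           status.MPI_TAG, MPI_COMM_WORLD, &status);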

        //received a search message and is not the leader ... propagate the broadcast
        if( leader != id_process && status.MPI_TAG == SEARCH_MWOE_TAG ){
            search_mwoe( marked_edges , leader);
        }

        //2)
        std::vector< MPI_Request > request_vector;
        Mwoe_type minimum;
        if (status.MPI_TAG == SEARCH_MWOE_TAG) {

            //2) A [on receipt of a search message, send an EXAMINE on unmarked edges]
            request_vector = examine( unmarked_edges, leader );

            //2) B [Pick the minimum outgoing edge]
            for (auto &e : unmarked_edges) {
                Mwoe_type local( leader, id_process, e.target, e.weight );
                if( local < minimum )
                    minimum = local;
            }
        }

        //3) [Convergecast] to be continued ...
        if( status.MPI_TAG == EXAMINE_TAG ) {

            if( marked_edges.size() == 1 ) { //leaf node
                //send the MWOE to the node that sent the EXAMINE message
            }

            else {
                //use request_vector to wait for the reply to each EXAMINE sent and continue the convergecast
                for ( auto& r: request_vector ) {
                    MPI_Wait(&r, &status);
                }
            }

        }

        //broadcast ADD_MWOE ... to be continued
        if( id_process == leader ) {
            ++nodes_on_mst;
        }

    }


    MPI_Finalize();
}

//this will be used to send the minimum weight outgoing edge
void commit_edge_type(MPI_Datatype *newType){

    int      lengths[2] = { 3, 1 }; //3 ints (leader, remote_id, local_id), then 1 double (weight)
    MPI_Aint offsets[2];
    MPI_Datatype types[2] = { MPI_INT, MPI_DOUBLE };

    MPI_Aint base_address, member_address;

    Mwoe_type obj;
    MPI_Get_address( &obj, &base_address );

    MPI_Get_address( &obj.leader, &member_address );
    offsets[0] = member_address - base_address;

    MPI_Get_address( &obj.weight, &member_address );
    offsets[1] = member_address - base_address;

    MPI_Type_create_struct( 2, lengths, offsets, types, newType );
    MPI_Type_commit(newType);
}

std::vector< MPI_Request > examine(adj_list &unmarked_edges, int &leader){
    std::vector< MPI_Request > request_vector;
    for (auto &e : unmarked_edges) {
        MPI_Request request;
        MPI_Isend( &leader, 1, MPI_INT, e.target , EXAMINE_TAG , MPI_COMM_WORLD, &request );
        request_vector.push_back(request);
    }
    return request_vector;
}

void reply_mwoe(Mwoe_type &, int target){
    //nothing here for now
}

void search_mwoe(adj_list &marked_edges, int &leader){
    for (auto &e : marked_edges) {
        MPI_Request request;
        MPI_Isend( &leader, 1, MPI_INT, e.target , SEARCH_MWOE_TAG , MPI_COMM_WORLD, &request );
        MPI_Request_free(&request); //fire and forget: nothing ever waits on these sends
    }
}
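
For the convergecast that is still missing (step 3), I am considering something along the lines of the sketch below. It is untested: it invents a REPLY_MWOE_TAG constant, a parent variable, and a collect_replies helper that do not exist in my code yet, and the committed MPI_mwoe datatype would have to be passed in (or made global):

//untested sketch: a leaf (or a node whose whole subtree has replied) sends
//its current minimum to its parent using the committed datatype;
//REPLY_MWOE_TAG would be a new tag constant
void reply_mwoe(Mwoe_type &minimum, int parent, MPI_Datatype mpi_mwoe){
    MPI_Send( &minimum, 1, mpi_mwoe, parent, REPLY_MWOE_TAG, MPI_COMM_WORLD );
}

//untested sketch: an internal node folds its children's replies into its
//own minimum before forwarding it towards the leader
void collect_replies(Mwoe_type &minimum, int num_children, MPI_Datatype mpi_mwoe){
    for (int i = 0; i < num_children; ++i) {
        Mwoe_type child_reply;
        MPI_Recv( &child_reply, 1, mpi_mwoe, MPI_ANY_SOURCE, REPLY_MWOE_TAG,
                  MPI_COMM_WORLD, MPI_STATUS_IGNORE );
        if( child_reply < minimum )
            minimum = child_reply;
    }
}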

Here is the pseudocode of the algorithm:

1. if leader = i then
    broadcast SEARCH_MWOE(leader) along marked edges of tree (Section 5.5.5).

2. On receiving a SEARCH_MWOE(leader) message that was broadcast on marked edges:
    (a) Each process i (including leader) sends an EXAMINE message along unmarked (i.e., non-tree) edges to determine if the other end of the edge is in the same component (i.e., whether its leader is the same).
    (b) From among all incident edges at i, for which the other end belongs to a different component, process i picks its incident MWOE(localID, remoteID).

3. The leaf nodes in the MST within the component initiate the convergecast (Section 5.5.5) using REPLY_MWOEs, informing their parent of their MWOE(localID, remoteID). All the nodes participate in this convergecast.

4. if leader = i then
    await convergecast replies along marked edges.
    Select the minimum MWOE(localID, remoteID) from all the replies.
    broadcast ADD_MWOE(localID, remoteID) along marked edges of tree (Section 5.5.5).
    // To ask process localID to mark the (localID, remoteID) edge,
    // i.e., include it in the MST of the component.

5. if an MWOE edge gets marked by both the components on which it is incident then
    (a) Define new_leader as the process with the larger ID on which that MWOE is incident (i.e., the process whose ID is max(localID, remoteID)).
    (b) new_leader identifies itself as the leader for the next round.
    (c) new_leader broadcasts NEW_LEADER in the newly formed component along the marked edges (Section 5.5.5), announcing itself as the leader for the next round.
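
If I read step 5 correctly, both endpoints of the merged MWOE can compute the new leader locally, since each already knows the (localID, remoteID) pair, so no extra election messages are needed for 5(a)-(b). An untested sketch of how that could look in my code (needs <algorithm> for std::max; minimum holds the chosen MWOE):

//step 5 (a)-(b): both endpoints deterministically compute the same new leader
int new_leader = std::max( minimum.local_id, minimum.remote_id );
if( id_process == new_leader ) {
    leader = id_process;
    //step 5 (c): broadcast NEW_LEADER along marked edges, reusing the same
    //pattern as search_mwoe but with a NEW_LEADER tag
}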
