MPI: Load-Balancing Algorithm (Master-Slave Model)

Date: 2016-05-15 11:56:54

Tags: algorithm performance mpi load-balancing

I am parallelizing a loop over [0, max] with MPI. I would like a master process (say process 0) to initially split the loop into small sets of n tasks (n iterations each), and then progressively hand a set of x tasks to a process as soon as it has finished its previous set. In other words, I want to implement a load-balancing algorithm using MPI blocking and/or non-blocking send/receive, but I do not know how to proceed. Also, is there a way to determine the optimal size of a set of tasks (the n parameter) as a function of max and x?

Thank you very much for your help.

1 Answer:

Answer 0 (score: 0)

I finally found the following code from here; it is basically a skeleton for dynamic load balancing based on the MPI master/slave model, which is exactly what I was looking for. However, I still do not see how to optimally divide the initial set of work.

#include <mpi.h>

#define WORKTAG 1
#define DIETAG  2

static void master(void);
static void slave(void);

int main(int argc, char *argv[])
{
    int myrank;

    MPI_Init(&argc, &argv);                  /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* process rank, 0 thru N-1 */

    if (myrank == 0) {
        master();
    } else {
        slave();
    }

    MPI_Finalize();                          /* clean up MPI */
    return 0;
}

static void master(void)
{
    int        ntasks, rank, work;
    double     result;
    MPI_Status status;

    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);  /* #processes in application */

    /*
     * Seed the slaves: send one work request to every worker process.
     */
    for (rank = 1; rank < ntasks; ++rank) {
        work = /* get_next_work_request */;
        MPI_Send(&work,          /* message buffer */
                 1,              /* one data item */
                 MPI_INT,        /* data item is an integer */
                 rank,           /* destination process rank */
                 WORKTAG,        /* user-chosen message tag */
                 MPI_COMM_WORLD);
    }

    /*
     * Receive a result from any slave and dispatch a new work
     * request until the work requests have been exhausted.
     */
    work = /* get_next_work_request */;
    while (/* valid new work request */) {
        MPI_Recv(&result,        /* message buffer */
                 1,              /* one data item */
                 MPI_DOUBLE,     /* of type double real */
                 MPI_ANY_SOURCE, /* receive from any sender */
                 MPI_ANY_TAG,    /* any type of message */
                 MPI_COMM_WORLD,
                 &status);       /* info about the received message */
        MPI_Send(&work, 1, MPI_INT, status.MPI_SOURCE,
                 WORKTAG, MPI_COMM_WORLD);
        work = /* get_next_work_request */;
    }

    /*
     * Receive the results for the outstanding work requests.
     */
    for (rank = 1; rank < ntasks; ++rank) {
        MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    }

    /*
     * Tell all the slaves to exit by sending an empty message with DIETAG.
     */
    for (rank = 1; rank < ntasks; ++rank) {
        MPI_Send(NULL, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
    }
}

static void slave(void)
{
    double     result;
    int        work;
    MPI_Status status;

    for (;;) {
        MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);

        /*
         * Check the tag of the received message; DIETAG means stop.
         */
        if (status.MPI_TAG == DIETAG) {
            return;
        }

        result = /* do the work */;
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
}
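
For what it is worth, here is a minimal sketch of how this skeleton might be adapted to the loop in the question, where each work item is a contiguous chunk of x iterations of [0, max) rather than a single request. Everything specific here is my own assumption: the values of max and x, the range encoding, and the do_iterations stand-in are placeholders for the real computation, not part of the original skeleton.

/* Sketch only: each work item is a half-open iteration range [begin, end). */
/* The names max, x and do_iterations are assumptions, not part of the skeleton. */
#include <stdio.h>
#include <mpi.h>

#define WORKTAG 1
#define DIETAG  2

/* Stand-in for the real loop body; here it just sums the iteration indices. */
static double do_iterations(long begin, long end)
{
    double acc = 0.0;
    for (long i = begin; i < end; ++i)
        acc += (double)i;            /* replace with the actual computation */
    return acc;
}

static void master(long max, long x, int ntasks)
{
    double total = 0.0, partial;
    long   range[2];                 /* current chunk: [range[0], range[1]) */
    long   next = 0;                 /* first iteration not yet handed out */
    int    active = 0;               /* workers currently holding a chunk */
    MPI_Status status;

    /* Seed each worker with one chunk of at most x iterations. */
    for (int rank = 1; rank < ntasks && next < max; ++rank) {
        range[0] = next;
        range[1] = (next + x < max) ? next + x : max;
        next = range[1];
        MPI_Send(range, 2, MPI_LONG, rank, WORKTAG, MPI_COMM_WORLD);
        ++active;
    }

    /* Whenever a worker reports a result, hand it the next chunk. */
    while (next < max) {
        MPI_Recv(&partial, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        total += partial;
        range[0] = next;
        range[1] = (next + x < max) ? next + x : max;
        next = range[1];
        MPI_Send(range, 2, MPI_LONG, status.MPI_SOURCE, WORKTAG, MPI_COMM_WORLD);
    }

    /* Drain the outstanding results, then tell every worker to stop. */
    for (; active > 0; --active) {
        MPI_Recv(&partial, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        total += partial;
    }
    for (int rank = 1; rank < ntasks; ++rank)
        MPI_Send(NULL, 0, MPI_LONG, rank, DIETAG, MPI_COMM_WORLD);

    printf("total = %f\n", total);
}

static void slave(void)
{
    long range[2];
    double result;
    MPI_Status status;

    for (;;) {
        MPI_Recv(range, 2, MPI_LONG, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == DIETAG)
            return;
        result = do_iterations(range[0], range[1]);
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
}

int main(int argc, char *argv[])
{
    int  myrank, ntasks;
    long max = 1000000;              /* loop bound [0, max): assumption */
    long x   = 10000;                /* chunk size: assumption, needs tuning */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    if (ntasks < 2) {                /* the scheme needs at least one worker */
        if (myrank == 0)
            fprintf(stderr, "need at least 2 MPI processes\n");
        MPI_Finalize();
        return 1;
    }

    if (myrank == 0)
        master(max, x, ntasks);
    else
        slave();

    MPI_Finalize();
    return 0;
}

As for the chunk size, I am not aware of a closed-form optimum: smaller chunks balance the load better while larger chunks reduce messaging overhead, so a common starting point is max divided by a small multiple of the number of workers, refined by measuring a few values.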