MPI on Windows (returning data from the slave tasks to the master task)

Date: 2018-04-11 03:27:53

Tags: c mpi multitasking

I am learning how to use MPI.

All I am trying to do right now is send and receive data between the master task and the slave tasks. Sending data from the master task to the slave tasks works fine. (I tested it by having each slave print the data it receives from the master.)

Receiving data in the master task from the slave tasks does not seem to work.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MASTER_TAG 0    
#define WORKTAG 1
#define DIETAG 2

#define N 4
#define ARRAY_SIZE N*N

int getMin(int a, int b);
void print1D(int *arr, int n);
int* allocateIntegerArray(int n);
void master();
void slave();

int *inputArray = NULL;
int *outputArray = NULL;

int main(int argc, char **argv)
{
    int myrank = 0;

    MPI_Init(&argc, &argv); // initialize MPI

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank); // process rank, 0 thru N-1

    if (myrank == 0)
        master();
    else
        slave();

    MPI_Finalize(); // cleanup MPI

    return 0;
}

void master()
{
    int numOfTasks;
    int rank = 0;
    int work = 0;
    int startIndex = 0;
    int dataPerTask = 0;
    int *tempArray = NULL;
    int i = 0;
    int k = 0;

    MPI_Status status;

    MPI_Comm_size( MPI_COMM_WORLD, &numOfTasks); // #processes in application

    // calculate the amount of data that each task will receive
    dataPerTask = ARRAY_SIZE;

    // create the array that will hold the data that will be transferred back and forth between the master and slave tasks
    tempArray = allocateIntegerArray( dataPerTask );

    for (i = 1; i < numOfTasks; ++i)
    {
        MPI_Send(&dataPerTask, 1, MPI_INT, i, WORKTAG, MPI_COMM_WORLD); // send the size of the data chunk to the slave task
        MPI_Send(tempArray, dataPerTask, MPI_INT, i, WORKTAG, MPI_COMM_WORLD); // send the actual chunk of data to the slave task

        print1D(tempArray, dataPerTask);
        MPI_Recv(tempArray, dataPerTask, MPI_INT, MPI_ANY_TAG, WORKTAG, MPI_COMM_WORLD, &status); // receive results from slave task
        print1D(tempArray, dataPerTask);
    }

    // tell all the slaves to exit
    for (i = 1; i < numOfTasks; ++i)
        MPI_Send(0, 0, MPI_INT, i, DIETAG, MPI_COMM_WORLD);

    free(tempArray);
}

void slave()
{
    MPI_Status status;
    int *in = NULL; // input array
    int *out = NULL; // output array
    int dataPerTask = 0;

    for (;;)
    {
        MPI_Recv(&dataPerTask, 1, MPI_INT, MASTER_TAG, MPI_ANY_TAG, MPI_COMM_WORLD, &status); // get the number of integers in the incoming array

        if (status.MPI_TAG == DIETAG) // check the tag of the received message (if the master task sent the DIETAG, then the slave must stop processing and return)
            return;

        in = allocateIntegerArray(dataPerTask); // array 'in' holds the data received from the master task
        out = allocateIntegerArray(dataPerTask); // array 'out' holds the data that is returned to the master task

        MPI_Recv(in, dataPerTask, MPI_INT, MASTER_TAG, MPI_ANY_TAG, MPI_COMM_WORLD, &status); // get the actual data from the master task

        out[0] = 1; // modify the data in some way
        MPI_Send(out, dataPerTask, MPI_INT, 0, WORKTAG, MPI_COMM_WORLD);
    }

    free(in);
    free(out);
}

void print1D(int *arr, int n)
{
    int i = 0;

    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);

    printf("\n");
}

int* allocateIntegerArray(int n)
{
    int i = 0;

    if (n <= 0)
        return NULL;

    int *arr = (int*) malloc(sizeof(int)*n*n);
    memset(arr, 0, sizeof(int)*n*n);

    return arr;
}

1 Answer:

Answer (score: 3):

The root cause is that your program calls MPI_Recv(..., source = MPI_ANY_TAG, ...).

MPI_ANY_TAG is not a valid source. You should use a correct source (for example, i in master() and 0 in slave()), or MPI_ANY_SOURCE.
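Concretely, using the variable names from the question's code, a sketch of the corrected calls (a fragment, not a complete program):

```c
/* master(): receive the result from slave i
   (source = i, tag = WORKTAG) */
MPI_Recv(tempArray, dataPerTask, MPI_INT, i, WORKTAG,
         MPI_COMM_WORLD, &status);

/* slave(): receive from the master (source = 0) with any tag,
   then check status.MPI_TAG for DIETAG as before */
MPI_Recv(&dataPerTask, 1, MPI_INT, 0, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
```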

FWIW, do not take the following for granted, since it might change in the future:

  • In Open MPI, MPI_ANY_SOURCE and MPI_ANY_TAG have the same value, which is why I initially thought the code worked for me.
  • In MPICH, MPI_ANY_TAG has a different value than MPI_ANY_SOURCE, which is why the received message contains only zeros in your environment.