I am trying to use MPI_Comm_spawn and scatter to implement the following scenario:
1- The master spawns 2 worker processes.
2- It scatters an array to those spawned processes.
3- The spawned processes receive the scattered array and then send it back.
4- The master receives the sorted parts of the array.
I would like to know how to do step 2. So far I have tried send and receive, which work perfectly, but I want to do it with the scatter function.
EDIT: This is what I want to do in the master code; I have left out the slave part, where I receive the scattered array.
/*Master Here*/
MPI_Comm_spawn(slave, MPI_ARGV_NULL, 2, MPI_INFO_NULL,0, MPI_COMM_WORLD, &inter_comm, array_of_errcodes);
printf("MASTER Sending a message to slaves \n");
MPI_Send(message, 50, MPI_CHAR, 0, tag, inter_comm);
MPI_Scatter(array, 10, MPI_INT, &array_r, 10, MPI_INT, MPI_ROOT, inter_comm);
Thanks.
Answer 0 (score: 3)
master.c
#include "mpi.h"
int main(int argc, char *argv[])
{
int n_spawns = 2;
MPI_Comm intercomm;
MPI_Init(&argc, &argv);
MPI_Comm_spawn("worker_program", MPI_ARGV_NULL, n_spawns, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
int sendbuf[2] = {3, 5};
int recvbuf; // redundant for master.
MPI_Scatter(sendbuf, 1, MPI_INT, &recvbuf, 1, MPI_INT, MPI_ROOT, intercomm);
MPI_Finalize();
return 0;
}
worker.c
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
MPI_Init(&argc, &argv);
MPI_Comm intercomm;
MPI_Comm_get_parent(&intercomm);
int sendbuf[2]; // redundant for worker.
int recvbuf;
MPI_Scatter(sendbuf, 1, MPI_INT, &recvbuf, 1, MPI_INT, 0, intercomm);
printf("recvbuf = %d\n", recvbuf);
MPI_Finalize();
return 0;
}
Command line
mpicc master.c -o master_program
mpicc worker.c -o worker_program
mpirun -n 1 master_program
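To cover steps 3 and 4 of the question (each worker sending its part back and the master collecting the results), one option is MPI_Gather on the same intercommunicator, mirroring the scatter above. Below is a minimal sketch of the calls that could be added after each MPI_Scatter; the names gathered and result are placeholders, not part of the programs above.
/* In master.c, after MPI_Scatter: collect one value from each worker.
   The master's send arguments are ignored because it passes MPI_ROOT. */
int gathered[2];
MPI_Gather(NULL, 0, MPI_INT, gathered, 1, MPI_INT, MPI_ROOT, intercomm);
/* In worker.c, after MPI_Scatter: each worker returns its processed value.
   The worker's receive arguments are ignored because it is not in the root group. */
int result = recvbuf; /* e.g. the worker's sorted piece */
MPI_Gather(&result, 1, MPI_INT, NULL, 0, MPI_INT, 0, intercomm);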