Accessing data from MPI_Irecv()

Date: 2015-04-12 17:06:02

Tags: c arrays mpi

I would like to know why I cannot access the data received by my MPI receive calls. I have an array of 100 elements that I want to split across 8 processes. Since 100/8 does not give chunks of equal length, I compute the chunk boundaries manually and send them to each process individually. Each process then performs an operation on its chunk of the array, reshuffling it, and returns its reshuffled part, which I then combine back into the original array. The program works fine until I have to gather the results from the processes. Specifically, I want to access the array that has just been returned by a slave process:
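To make the uneven split concrete, here is a small standalone sketch of the boundary arithmetic described above (it uses the same formula as the full code below; the printout is purely for illustration):

#include <stdio.h>

int main(void)
{
  int n = 100, numProcs = 8, i;
  for (i = 0; i < numProcs; i++) {
    /* ranks with index below n % numProcs get one extra element */
    int start = (n / numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
    int end   = start + (n / numProcs) + ((n % numProcs) > i) - 1;
    printf("rank %d: indices %d..%d (%d elements)\n", i, start, end, end - start + 1);
  }
  /* prints 13-element chunks for ranks 0-3 and 12-element chunks for ranks 4-7 */
  return 0;
}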

for (i=1; i<numProcs; i++) {
  MPI_Irecv (&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
  MPI_Irecv (&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
  MPI_Irecv (chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

  // how do I access chunk here, take the part from msgsA[i] to msgsB[i], and assign it to a part of a different array??

}
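To spell out what the comment in that loop is after: a buffer passed to MPI_Irecv() is not guaranteed to contain the message until the corresponding request has completed (via MPI_Wait, MPI_Test, or MPI_Waitall). A minimal sketch of one way the loop could look; the destination array result and the separate request arrays recv_reqB and recv_reqC are assumptions for illustration, not part of the original code:

for (i = 1; i < numProcs; i++) {
  /* result, recv_reqB and recv_reqC are assumed to be declared elsewhere */
  MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag,   MPI_COMM_WORLD, &recv_req[i]);
  MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_reqB[i]);
  MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_reqC[i]);

  /* the buffers must not be read before the requests complete */
  MPI_Wait(&recv_req[i],  MPI_STATUS_IGNORE);
  MPI_Wait(&recv_reqB[i], MPI_STATUS_IGNORE);
  MPI_Wait(&recv_reqC[i], MPI_STATUS_IGNORE);

  /* now msgsA[i], msgsB[i] and chunk are valid: copy the slice out */
  for (j = msgsA[i]; j <= msgsB[i]; j++)
    result[j] = chunk[j];
}

Note that with MPI_ANY_SOURCE the three messages received in one iteration are not guaranteed to come from the same rank; receiving the indices first, reading the sender from the returned MPI_Status, and then receiving the chunk from that specific source would be more robust.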

The full code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#define MAXPROCS 8 /* max number of processes */

int main(int argc, char *argv[])
{
  int i, j, n=100, numProcs, myid, tag=55, msgsA[MAXPROCS], msgsB[MAXPROCS], myStart, myEnd;
  double *chunk = malloc(n*sizeof(double));
  double *K1 = malloc(n*sizeof(double));
  MPI_Request send_req[MAXPROCS], recv_req[MAXPROCS];
  MPI_Status status[MAXPROCS];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if(myid==0) {
    /* split the array into pieces and send the starting and finishing indices to the slave processes */
    for (i=1; i<numProcs; i++) {
      myStart = (n / numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
      myEnd = myStart + (n / numProcs) + ((n % numProcs) > i) - 1;
      if(myEnd>n) myEnd=n;
      MPI_Isend(&myStart, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
      MPI_Isend(&myEnd, 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &send_req[i]);
    }
    /* starting and finishing indices for the master process */
    myStart = (n / numProcs) * myid + ((n % numProcs) < myid ? (n % numProcs) : myid);
    myEnd = myStart + (n / numProcs) + ((n % numProcs) > myid) - 1;

    for (i=1; i<numProcs; i++) {
      MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
      MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
      MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

      // --- access the chunk array here, take the part from msgsA[i] to msgsB[i] and assign it to a part of a different array
    }
    // calculate a function on fragments of K1 (returns void)

    /* wait until all chunks have been collected */
    MPI_Waitall(numProcs-1, &recv_req[1], &status[1]);
  }
  else {
    /* receive the chunk boundaries sent by the master (needed to match the Isend calls above) */
    MPI_Recv(&myStart, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status[0]);
    MPI_Recv(&myEnd, 1, MPI_INT, 0, tag+1, MPI_COMM_WORLD, &status[0]);

    // calculate a function on fragments of K1 (returns void)

    /* send the boundaries and the processed data back to the master */
    MPI_Send(&myStart, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
    MPI_Send(&myEnd, 1, MPI_INT, 0, tag+1, MPI_COMM_WORLD);
    MPI_Isend(K1, n, MPI_DOUBLE, 0, tag+2, MPI_COMM_WORLD, &send_req[0]);
    MPI_Wait(&send_req[0], &status[0]);
  }

  free(chunk);
  free(K1);
  MPI_Finalize();
  return 0;
}
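For reference, such a program would typically be built and launched with the MPI compiler wrapper and launcher (the source and executable names here are placeholders):

mpicc -o gather_chunks main.c
mpirun -np 8 ./gather_chunks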

1 Answer:

Answer 0 (score: 0)

I think I found the solution. The cause of the problem was MPI_Irecv(): with a non-blocking receive, the data in the chunk buffer is not yet available when I try to access it. So the solution seems to be simply:

MPI_Status status[MAXPROCS];

for (i=1; i<numProcs; i++) {
  MPI_Irecv (&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
  MPI_Irecv (&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
  MPI_Recv (chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &status[i]);

  // do whatever I need on the chunk[j] variables
}
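Two caveats about this version: msgsA[i] and msgsB[i] are still posted with MPI_Irecv, so their values are not guaranteed to be valid until those requests complete, and posting both receives into the same recv_req[i] overwrites the first request handle so it can never be waited on. A minimal sketch of one way to handle both, assuming separate request arrays recv_reqA and recv_reqB and a destination array result (names chosen here for illustration only):

for (i = 1; i < numProcs; i++) {
  MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag,   MPI_COMM_WORLD, &recv_reqA[i]);
  MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_reqB[i]);
  MPI_Recv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &status[i]);

  /* complete the index receives before using msgsA[i] / msgsB[i] */
  MPI_Wait(&recv_reqA[i], MPI_STATUS_IGNORE);
  MPI_Wait(&recv_reqB[i], MPI_STATUS_IGNORE);

  /* copy the received slice into the result array */
  for (j = msgsA[i]; j <= msgsB[i]; j++)
    result[j] = chunk[j];
}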