I'm trying to compute the derivative matrix in parallel. I have the whole heat-equation solver working with two processes, but now I'm trying to figure out how to send the bottom row to rec2 on the next rank and the top row to rec1 on the previous rank. I tried giving each operation its own slot in the request array, but nothing works; for some reason I'm told that, with Isend, two different receives end up with the same request.
Any suggestions to fix this, or to help me understand it better, would be great.
double** change = alloc(sizeX, sizeY);
double* rec1;
double* rec2;
int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Request req[4];
if (rank != 0)
{
    rec1 = calloc(sizeY, sizeof(double));
}
if (rank != size-1)
{
    rec2 = calloc(sizeY, sizeof(double));
}
if (rank == 0)
{
    MPI_Irecv(rec2, sizeY, MPI_DOUBLE, rank+1, 1244, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(u[sizeX-1], sizeY, MPI_DOUBLE, rank+1, 1244, MPI_COMM_WORLD, &req[0]);
}
else if (rank == size-1)
{
    MPI_Irecv(rec1, sizeY, MPI_DOUBLE, rank-1, 1244, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(u[0], sizeY, MPI_DOUBLE, rank-1, 1244, MPI_COMM_WORLD, &req[3]);
}
else if (rank != 0 && rank != size-1)
{
    MPI_Irecv(rec1, sizeY, MPI_DOUBLE, rank-1, 1234, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(u[0], sizeY, MPI_DOUBLE, rank-1, 1234, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(rec2, sizeY, MPI_DOUBLE, rank+1, 1234, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(u[sizeX-1], sizeY, MPI_DOUBLE, rank+1, 1234, MPI_COMM_WORLD, &req[3]);
}
// setting elements of most of the points
int xStart = 1;
int xBound = sizeX-1;
for (int x = xStart; x < xBound; x++)
{
    for (int y = 1; y < sizeY-1; y++)
    {
        change[x][y] = fpp(u[x-1][y], u[x][y], u[x+1][y], dx)
                     + fpp(u[x][y-1], u[x][y], u[x][y+1], dx);
    }
}
MPI_Waitall(size+1, req, MPI_STATUSES_IGNORE);
Answer (score: 1)
Look at the number of requests and at which index they start. Ranks 0 and size-1 post 2 messages each, while the interior ranks post 4, yet MPI_Waitall() is called with size+1 requests on every rank. Moreover, rank size-1 never initializes req[0] and req[1], so MPI_Waitall() waits on garbage. (Note also that the boundary ranks use tag 1244 while the interior ranks use 1234, so those sends and receives can never match.)
Use @JonathanDursi's reqcnt trick: count the requests as you post them, and wait on exactly that many:
int nbreq = 0;
if (rank != 0 && size > 1) {
    /* exchange with the previous rank: receive into rec1, send the top row */
    MPI_Irecv(rec1, sizeY, MPI_DOUBLE, rank-1, 1244, MPI_COMM_WORLD, &req[nbreq]);
    nbreq++;
    MPI_Isend(u[0], sizeY, MPI_DOUBLE, rank-1, 1244, MPI_COMM_WORLD, &req[nbreq]);
    nbreq++;
}
if (rank != size-1) {
    /* exchange with the next rank: receive into rec2, send the bottom row */
    MPI_Irecv(rec2, sizeY, MPI_DOUBLE, rank+1, 1244, MPI_COMM_WORLD, &req[nbreq]);
    nbreq++;
    MPI_Isend(u[sizeX-1], sizeY, MPI_DOUBLE, rank+1, 1244, MPI_COMM_WORLD, &req[nbreq]);
    nbreq++;
}
...
MPI_Waitall(nbreq, req, MPI_STATUSES_IGNORE);