MPI_Isend / MPI_Irecv only execute in the first iteration of a for loop. What is stopping them from executing in subsequent iterations?

Time: 2019-01-26 12:38:24

Tags: c parallel-processing mpi

I am writing a program in which information held in an array is passed between processes. In the code below I use a for loop to repeatedly send information to, and receive information from, neighbouring processes. When I run the program on 5 cores or on 2 cores, all of the print statements execute as expected in the first iteration, but after that no further print statements appear. The program does not exit with an error message; it just hangs. Any ideas?

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  /*MPI Specific Variables*/
  int my_size, my_rank, up, down;
  MPI_Request reqU, reqD, sreqU, sreqD;
  MPI_Status rUstatus, rDstatus, sUstatus, sDstatus;



  /*Other Variables*/
  int max_iter = 10;
  int grid_size = 1000;
  int slice;
  int x,y,j;


  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &my_size);

  /*Determining neighbours*/
  if (my_rank != 0) /*if statements used to stop the highest and lowest ranks having neighbours outside the 0 to my_size-1 range of ranks*/
    {
      up = my_rank-1;
    }
  else
    {
      up = 0;
    }

  if(my_rank != my_size-1)
    {
      down = my_rank+1;
    }
  else
    {
      down = my_size-1;
    }

  /*cross-check: my_size is assumed to be a factor of grid_size, otherwise slices are unevenly sized and that case is not handled*/
  if (grid_size%my_size != 0)
    {
      printf("ERROR - number of procs =  %d, this is not a factor of grid_size %d\n", my_size, grid_size);
      exit(0);
    }

  /*Set Up Distributed Data Approach*/
  slice = grid_size/my_size;
  printf("slice = %d\n", slice);




  double phi[slice+2][grid_size]; /*extra 2 rows to allow for halo data*/

  for (y=0; y < slice+2; y++)
    {
      for (x=0; x < grid_size; x++)
        {
          phi[y][x] = 0.0;
        }
    }



  for (j=0; j<max_iter +1; j++)
    {
      if (my_rank > 0)
        {
          printf("1. myrank =%d\n",my_rank);
          /*send top most strip up one node to be received as bottom halo*/
          MPI_Isend(&phi[1][0], grid_size, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &sreqU);
          printf("2. myrank =%d\n",my_rank);
          /*recv top halo from up one node*/
          MPI_Irecv(&phi[slice + 1][0], grid_size, MPI_DOUBLE, down, 2, MPI_COMM_WORLD, &reqU);
          printf("3. myrank =%d\n",my_rank);
        }    

      if (my_rank < my_size -1)
        {
         printf("4. myrank =%d\n",my_rank);
          /*recv top halo from down one node*/
         MPI_Irecv(&phi[0][0], grid_size, MPI_DOUBLE, up, 1, MPI_COMM_WORLD, &reqD);
         printf("5. myrank =%d\n",my_rank);
         /*send bottom most strip down one node to be received as top halo*/
         MPI_Isend(&phi[slice][0], grid_size, MPI_DOUBLE, up, 2, MPI_COMM_WORLD, &sreqD);
         printf("6. myrank =%d\n",my_rank);
        }


      if (my_rank>0)
        {
          printf("7. myrank =%d\n",my_rank);
          /*Wait for send to down one rank to complete*/
          MPI_Wait(&sreqU, &sUstatus);
          printf("8. myrank =%d\n",my_rank);
          /*Wait for receive from up one rank to complete*/
          MPI_Wait(&reqU, &rUstatus);
          printf("9. myrank =%d\n",my_rank);
        }

      if (my_rank < my_size-1)
        {
          printf("10. myrank =%d\n",my_rank);
          /*Wait for send down one rank to complete*/
          MPI_Wait(&sreqD, &sDstatus);
          printf("11. myrank =%d\n",my_rank);
          /*Wait for receive from down one rank to complete*/
          MPI_Wait(&reqD, &rDstatus);
          printf("12. myrank =%d\n",my_rank);
        }
  }

  printf("l\n");
  MPI_Finalize();

  return 0;
}

1 Answer:

Answer 0 (score: 1):

This has nothing to do with the iterations; the remaining problem is the computation of up and down. It is reversed: where the code needs up, down is used, and vice versa. It did not show up in your previous code because MPI_PROC_NULL simply skips those communications.
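
As a minimal sketch of that fix, assuming the buffers, tags, guards, and MPI_Wait calls from the question stay unchanged, the block guarded by my_rank > 0 should talk to up and the block guarded by my_rank < my_size-1 should talk to down; only the destination/source rank in the four communication calls changes:

      if (my_rank > 0)
        {
          /*this block only runs when an up neighbour exists, so communicate with up*/
          MPI_Isend(&phi[1][0], grid_size, MPI_DOUBLE, up, 1, MPI_COMM_WORLD, &sreqU);
          MPI_Irecv(&phi[slice + 1][0], grid_size, MPI_DOUBLE, up, 2, MPI_COMM_WORLD, &reqU);
        }

      if (my_rank < my_size - 1)
        {
          /*this block only runs when a down neighbour exists, so communicate with down*/
          MPI_Irecv(&phi[0][0], grid_size, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqD);
          MPI_Isend(&phi[slice][0], grid_size, MPI_DOUBLE, down, 2, MPI_COMM_WORLD, &sreqD);
        }

With this pairing, every MPI_Isend with tag 1 has a matching MPI_Irecv with tag 1 on the neighbouring rank (and likewise for tag 2), so all four MPI_Wait calls can complete in every iteration. Which halo row (phi[0] or phi[slice + 1]) should end up holding which neighbour's strip depends on the update step, which is not shown in the question, so that still needs checking. Alternatively, as mentioned above, you can go back to setting up and down to MPI_PROC_NULL at the boundary ranks; sends and receives involving MPI_PROC_NULL complete immediately, so the if guards around the communication calls can then be dropped.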