Generating the first 20 numbers with MPI

Date: 2018-04-25 23:54:25

Tags: c++ mpi

Below is the code I wrote to generate the first 20 numbers, starting from 0, as an exercise in learning MPI:

#include <mpi.h>
#include <stdio.h>

int i = 0;
void test(int edge_count){
   while(i < edge_count){
    printf("Edge count %d\n",i);
    i++;
   }
}

int main(int argc, char** argv) {
 int edge_count = 20;
  // int *p = &i;
  // Initialize the MPI environment. The two arguments to MPI Init are not
  // currently used by MPI implementations, but are there in case future
  // implementations might need the arguments.
  MPI_Init(NULL, NULL);

  // Get the number of processes
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Get the rank of the process
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  // Print off a hello world message
  printf("Hello world from processor %s, rank %d out of %d processors\n",
         processor_name, world_rank, world_size);
  test(edge_count);
  printf("The value of i is %d \n",i);

  // Finalize the MPI environment. No more MPI calls can be made after this
  MPI_Finalize();
}

My output was:

Hello world from processor ENG401651, rank 0 out of 2 processors
Edge count 0
Edge count 1
Edge count 2
Edge count 3
Edge count 4
Edge count 5
Edge count 6
Edge count 7
Edge count 8
Edge count 9
Edge count 10
Edge count 11
Edge count 12
Edge count 13
Edge count 14
Edge count 15
Edge count 16
Edge count 17
Edge count 18
Edge count 19
The value of i is 20 
Hello world from processor ENG401651, rank 1 out of 2 processors
Edge count 0
Edge count 1
Edge count 2
Edge count 3
Edge count 4
Edge count 5
Edge count 6
Edge count 7
Edge count 8
Edge count 9
Edge count 10
Edge count 11
Edge count 12
Edge count 13
Edge count 14
Edge count 15
Edge count 16
Edge count 17
Edge count 18
Edge count 19
The value of i is 20 

The command I used to run it was:

mpirun -np 2 execFile

I expected the processes to communicate and generate the numbers from 0 to 19 between them, but it looks like each process generates its own copy of the numbers independently.

What am I doing wrong? I am new to MPI and cannot figure out the reason behind this.

1 Answer:

Answer 0 (score: 1)

Computers only do what you tell them to. This is true not just of MPI, but of any kind of programming.

Where in your program do you explicitly tell the processes to divide the work between themselves? The thing is: you don't. And it does not happen automatically.

The modified version of your code below shows how to use world_size and world_rank so that each process independently calculates its share of the work.

To better show the benefit of the parallelism, I've added a thread sleep to simulate the time the work would take in a real implementation.

#include <mpi.h>
#include <stdio.h>
#include <chrono>
#include <thread>

void test(int start, int end){
  for(int i=start;i<end;i++){
    printf("Edge count %d\n",i);
    //Simulates complicated, time-consuming work
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
  }
}

int main(int argc, char** argv) {
  int edge_count = 20;
  // Initialize the MPI environment. The two arguments to MPI Init are not
  // currently used by MPI implementations, but are there in case future
  // implementations might need the arguments.
  MPI_Init(NULL, NULL);

  // Get the number of processes
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Get the rank of the process
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  // Print off a hello world message
  printf("Hello world from processor %s, rank %d out of %d processors\n",
         processor_name, world_rank, world_size);

  const int interval   = edge_count/world_size;
  const int iter_start = world_rank*interval;
  const int iter_end   = (world_rank+1)*interval;

  test(iter_start, iter_end);

  // Finalize the MPI environment. No more MPI calls can be made after this
  MPI_Finalize();
}