MPI_Send and MPI_Recv - updating a single array's values

Time: 2014-05-01 01:44:27

Tags: c++ arrays multithreading process mpi

I am writing an MPI version of an array update, where I update a single array from multiple processes. Here is my code:

#include <mpi.h>
#include <iostream>
#include <cstdio>
#include <cstdlib>

uint n_sigm;
int *suma_sigm;
int my_first_i = 0;
int my_last_i = 0;

using namespace std;

int main(int argc, char *argv[])
{
    int rank, size, i;

    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    n_sigm = 40;
    int allocatedTask = n_sigm / size;
    suma_sigm = (int *)malloc(sizeof(int) * n_sigm);
    if (size < 2)
    {
        printf("Please run with two processes.\n"); fflush(stdout);
        MPI_Finalize();
        return 0;
    }
    if (rank != 0)
    {
        my_first_i = rank * allocatedTask;
        my_last_i = my_first_i + allocatedTask;
        cout << rank << " is rank and " << my_first_i << " is first and " << my_last_i << " is my last " << endl;

        for (i = my_first_i; i < my_last_i; i++)
        {
            suma_sigm[i] = rand() % n_sigm;
            cout << "value at " << i << " is : " << suma_sigm[i] << endl;
        }
        MPI_Send(suma_sigm, allocatedTask, MPI_INT, 0, 123, MPI_COMM_WORLD);
    }
    else
    {
        for (i = 0; i < allocatedTask; i++)
        {   // process 0 executing its array
            suma_sigm[i] = rand() % n_sigm;
        }
        MPI_Send(suma_sigm, allocatedTask, MPI_INT, 0, 123, MPI_COMM_WORLD);
        for (i = 0; i < n_sigm; i++)
        {
            suma_sigm[i] = 0;
        }
        for (int q = 0; q < size; q++)
        {
            MPI_Recv(suma_sigm, allocatedTask, MPI_INT, q, 123, MPI_COMM_WORLD, &status);
            cout << " Process_" << q << " :";
            int start = q * allocatedTask;
            int last = start + allocatedTask;
            for (int h = start; h < last; h++)
            {
                cout << "value2 at " << h << " is : " << suma_sigm[h] << endl;
            }
            cout << endl;
        }
        fflush(stdout);
    }
    free(suma_sigm);
    MPI_Finalize();
    return 0;
}

As you can see, I generate values for the array "suma_sigm" on all ranks and then send them; before sending, the values print fine. However, the received values show up as zero for every process except process 0. Only process zero's values make it through and are used successfully after the receive.

1 answer:

Answer 0 (score: 2):

The task you are trying to solve is handled more easily with MPI_Gather (a short sketch follows the quoted description below).

Documentation: http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-1.1/node69.htm#Node69

Each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order.
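
A minimal sketch of how this could look with MPI_Gather, reusing the names from your code (suma_sigm, n_sigm, allocatedTask) and assuming n_sigm is divisible by the number of processes:

#include <mpi.h>
#include <cstdlib>
#include <iostream>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n_sigm = 40;
    const int allocatedTask = n_sigm / size;   // per-process chunk size

    // Each rank fills only its own chunk.
    int *chunk = (int *)malloc(sizeof(int) * allocatedTask);
    for (int i = 0; i < allocatedTask; i++)
        chunk[i] = rand() % n_sigm;

    // The full array is only needed on the root.
    int *suma_sigm = NULL;
    if (rank == 0)
        suma_sigm = (int *)malloc(sizeof(int) * n_sigm);

    // Gather all chunks on rank 0; they are stored in rank order,
    // i.e. rank q's data lands at suma_sigm[q * allocatedTask].
    MPI_Gather(chunk, allocatedTask, MPI_INT,
               suma_sigm, allocatedTask, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < n_sigm; i++)
            std::cout << "value at " << i << " is : " << suma_sigm[i] << std::endl;
        free(suma_sigm);
    }
    free(chunk);
    MPI_Finalize();
    return 0;
}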

The documentation also shows the equivalent MPI_Send/MPI_Recv usage, which resembles your code, but note the offset +i*recvcount*extent in the MPI_Recv call (a sketch applying this offset to your loops follows the example link below):

The outcome is as if each of the n processes in the group (including the root process) had executed a call to

 MPI_Send(sendbuf, sendcount, sendtype, root, ...),

and the root had executed n calls to

 MPI_Recv(recvbuf + i·recvcount·extent(recvtype), recvcount, recvtype, i, ...),

Example: http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-1.1/node70.htm
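
For completeness, here is a sketch (a fragment, not a complete program) of how that offset idea maps onto the variables from your code; both the send and the receive need to address the chunk that belongs to a given rank:

// Non-root ranks: send the chunk that was actually filled, i.e. the part
// starting at my_first_i, instead of the (uninitialized) start of the array.
MPI_Send(suma_sigm + my_first_i, allocatedTask, MPI_INT, 0, 123, MPI_COMM_WORLD);

// Rank 0: receive rank q's chunk into its own slot instead of overwriting
// the start of the array on every iteration. Rank 0's own values are already
// in place, so the loop can start at q = 1 and no send-to-self is needed.
for (int q = 1; q < size; q++)
{
    MPI_Recv(suma_sigm + q * allocatedTask, allocatedTask, MPI_INT,
             q, 123, MPI_COMM_WORLD, &status);
}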

[Image: Idea of MPI_Gather]