From MPI_Scatter to MPI_Scatterv

Date: 2013-12-01 11:47:05

Tags: c++ mpi

Here is part of the code:

#include<mpi.h>
#include<stdio.h>
#include<stdlib.h> // needed for malloc() and rand()
int main(int argc, char *argv[])
{
int numtask;
double *Matr_Init,*Matr_Fin;
int i, myrank, root=0;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Comm_size(MPI_COMM_WORLD, &numtask);
double Rows[numtask];

int sendcount[numtask],reccount[numtask],source,displs[numtask];
//The root process allocates the memory and initializes the initial matrix
if(myrank==root)
{
    for (i = 0; i<numtask; i++){
        sendcount[i]=(i+2)*numtask;
        reccount[i]=sendcount[i];
        //if(i==0)
        displs[i]=numtask;
        //else
        //displs[i]=sendcount[i]+displs[i-1];
    }
    Matr_Init=(double*)malloc(numtask*numtask*sizeof(double));
    Matr_Fin=(double*)malloc(numtask*numtask*sizeof(double));
    for(int i=0;i<numtask*numtask;i++)
      Matr_Init[i]=rand()/1000000000.0;
    printf("Tipar datele initiale\n");
    for(int i=0;i<numtask;i++)
    {
        printf("\n");
        for(int j=0;j<numtask;j++)
           printf("Matr_Init[%d,%d]=%5.2f ",i,j,Matr_Init[i*numtask+j]);
    }
    printf("\n");
    MPI_Barrier(MPI_COMM_WORLD);
}
else MPI_Barrier(MPI_COMM_WORLD);

MPI_Scatterv(Matr_Init, sendcount, displs, MPI_DOUBLE, Rows, numtask, MPI_DOUBLE, root, MPI_COMM_WORLD);

printf("\n");
printf("Resultatele f-tiei MPI_Scatter pentru procesul cu rankul %d \n", myrank);
for (i=0; i<sendcount[myrank]; ++i) 
    printf("Rows[%d]=%5.2f ",i, Rows[i]);
printf("\n");       
MPI_Barrier(MPI_COMM_WORLD);    
MPI_Finalize();
return 0;
}

The output is:

Tipar datele initiale

Matr_Init[0,0]= 1.80 Matr_Init[0,1]= 0.85 Matr_Init[0,2]= 1.68 Matr_Init[0,3]= 1.71 
Matr_Init[1,0]= 1.96 Matr_Init[1,1]= 0.42 Matr_Init[1,2]= 0.72 Matr_Init[1,3]= 1.65 
Matr_Init[2,0]= 0.60 Matr_Init[2,1]= 1.19 Matr_Init[2,2]= 1.03 Matr_Init[2,3]= 1.35 
Matr_Init[3,0]= 0.78 Matr_Init[3,1]= 1.10 Matr_Init[3,2]= 2.04 Matr_Init[3,3]= 1.97 

Resultatele f-tiei MPI_Scatter pentru procesul cu rankul 0 
Rows[0]= 1.96 Rows[1]= 0.42 Rows[2]= 0.72 Rows[3]= 1.65 Rows[4]= 0.00 Rows[5]= 0.00 Rows[6]= 0.00 Rows[7]= 0.00 
[compute-0-1.local:18772] *** An error occurred in MPI_Scatterv
[compute-0-1.local:18772] *** on communicator MPI_COMM_WORLD
[compute-0-1.local:18772] *** MPI_ERR_TRUNCATE: message truncated
[compute-0-1.local:18772] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 3 with PID 18774 on
node compute-0-1 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[hpc.usm.md:09043] 2 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[hpc.usm.md:09043] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

The question is: I really don't understand the input options of the MPI_Scatterv function. Could you please explain them to me? Thanks.

1 Answer:

Answer 0 (score: 0)

Nearly bug-free! You have probably just mixed up sendcount (the number of values sent to each rank) and displs (where each rank's chunk starts in the array): as written, the root sends (i+2)*numtask doubles to rank i while every rank only asks to receive numtask of them, which is exactly the MPI_ERR_TRUNCATE you see above. You want something like this:

 sendcount[i]=numtask;
 reccount[i]=sendcount[i];
 //if(i==0)
 displs[i]=i*numtask;
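
With these values rank i receives exactly one row: numtask doubles starting at element i*numtask of Matr_Init, which matches both the recvcount of numtask passed to MPI_Scatterv and the size of the Rows buffer.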

Also, even though only the root needs sendbuf, and MPI_Scatterv itself only reads sendcounts and displs on the root, your printing loop uses sendcount[myrank] on every rank, so the loop that fills these arrays has to run on all processes. I had to move that for loop out of the "if(myrank==root)" test.
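
Putting both fixes together, here is a minimal sketch of how the whole setup might look (my own reconstruction rather than your exact program; it keeps the one-row-per-rank layout from the snippet above and fills sendcount/displs on every rank):

// Minimal MPI_Scatterv sketch: the root scatters one row of a
// numtask x numtask matrix to each rank.
// Build with e.g. "mpicc sketch.c -o sketch", run with "mpirun -np 4 ./sketch".
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int myrank, numtask, root = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtask);

    int sendcount[numtask], displs[numtask];
    for (int i = 0; i < numtask; i++) {   // runs on every rank
        sendcount[i] = numtask;           // one row (numtask doubles) per rank
        displs[i]    = i * numtask;       // row i starts at element i*numtask
    }

    double *Matr_Init = NULL;             // only the root needs the full matrix
    if (myrank == root) {
        Matr_Init = malloc(numtask * numtask * sizeof(double));
        for (int i = 0; i < numtask * numtask; i++)
            Matr_Init[i] = rand() / 1000000000.0;
    }

    double Rows[numtask];                 // exactly one row fits here
    MPI_Scatterv(Matr_Init, sendcount, displs, MPI_DOUBLE,
                 Rows, numtask, MPI_DOUBLE, root, MPI_COMM_WORLD);

    for (int i = 0; i < numtask; i++)
        printf("rank %d: Rows[%d]=%5.2f\n", myrank, i, Rows[i]);

    if (myrank == root) free(Matr_Init);
    MPI_Finalize();
    return 0;
}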

MPI_Scatterv documentation: http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Scatterv.html
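
For reference, this is the prototype (the const qualifiers are the MPI-3 form; older implementations declare the same arguments without const) with the meaning of each argument:

int MPI_Scatterv(const void *sendbuf,     // data to distribute; only significant at root
                 const int sendcounts[],  // sendcounts[i] = number of elements sent to rank i; only significant at root
                 const int displs[],      // displs[i] = offset, in elements, of rank i's chunk in sendbuf; only significant at root
                 MPI_Datatype sendtype,   // type of the elements in sendbuf
                 void *recvbuf,           // buffer where each rank stores its own chunk
                 int recvcount,           // number of elements this rank expects to receive
                 MPI_Datatype recvtype,   // type of the elements in recvbuf
                 int root,                // rank that provides sendbuf
                 MPI_Comm comm);          // communicator, here MPI_COMM_WORLD

The chunk sent to rank i (sendcounts[i] elements) has to fit within the recvcount that rank i passes; sending more is what produces a truncation error.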

After that the code runs without any errors. You actually understand Scatterv's input options quite well... you just made a small mistake along the way!

Bye,

Francis