Segmentation fault on MPI_Gather with a 2D array

Date: 2016-06-23 13:34:51

Tags: c parallel-processing mpi

I am having a problem with some MPI code in C.

I think I have written a reasonable algorithm for the double loop over a 2D array. However, when I try to collect the data from the processes with MPI_Gather, I get a segmentation fault. Here is the code:

#include <mpi.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

#define NN 4096
#define NM 4096

double global[NN][NM];

void range(int n1, int n2, int nprocs, int irank, int *ista, int *iend){
    int iwork1;
    int iwork2;
    iwork1 = ( n2 - n1 + 1 ) / nprocs;
    iwork2 = ( ( n2 - n1 + 1 ) % nprocs );
    *ista = irank * iwork1 + n1 + fmin(irank, iwork2);
    *iend = *ista + iwork1 - 1;
    if ( iwork2 > irank ) 
        iend = iend + 1;
}

void runCalculation(int n, int m, int argc, char** argv)
{
    const int iter_max = 1000;

    const double tol = 1.0e-6;
    double error     = 1.0;

    int rank, size;
    int start, end;

    MPI_Init( &argc, &argv );

    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    if (size != 16) MPI_Abort( MPI_COMM_WORLD, 1 );

    memset(global, 0, n * m * sizeof(double));

    if(rank == 0){
        for (int j = 0; j < n; j++)
        {
            global[j][0] = 1.0;
        }
    }

    int iter = 0;

    while ( error > tol && iter < iter_max )
    {
        error = 0.0;

        MPI_Bcast(global, NN*NM, MPI_DOUBLE, 0, MPI_COMM_WORLD); 

        if(iter == 0)
            range(1, n, size, rank, &start, &end);

        int size = end - start;

        double local[size][NM];
        memset(local, 0, size * NM * sizeof(double));

        for( int j = 1; j < size - 1; j++)
        {   
            for( int i = 1; i < m - 1; i++ )
            {   
                local[j][i] = 0.25 * ( global[j][i+1] + global[j][i-1]
                                + global[j-1][i] + global[j+1][i]);
                error = fmax( error, fabs(local[j][i] - global[j][i]));
            }
        }

        MPI_Gather(&local[0][0], size*NM, MPI_DOUBLE, &global[0][0], NN*NM, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        printf("%d\n", iter);

        if(iter % 100 == 0) 
            printf("%5d, %0.6f\n", iter, error);

        iter++;
    }

    MPI_Finalize();

}

I run it with 4096x4096 arrays. With process rank 0 it produces the segmentation fault at the MPI_Gather line. I checked that the sizes of the local arrays are correct, and I think that part works fine.

Edit: added the local initialization line. New segmentation fault:

*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: 0x10602000
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 19216 on machine_name exited on signal 11 (Segmentation fault).

1 Answer:

Answer 0 (score: 0):

The recvcount parameter of MPI_Gather specifies the number of items received from each process, not the total number of items received. So

MPI_Gather(&local[0][0], size*NM, MPI_DOUBLE, &global[0][0], NN*NM, MPI_DOUBLE, 0, MPI_COMM_WORLD);

should be:

MPI_Gather(&local[0][0], size*NM, MPI_DOUBLE, &global[0][0], size*NM, MPI_DOUBLE, 0, MPI_COMM_WORLD);
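
For reference, here is a minimal, self-contained sketch of the same gather pattern (not the poster's code; the sizes COLS and ROWS_PER_RANK and the fill values are made up for illustration). It shows that both sendcount and recvcount are the per-rank element count, while the root's receive buffer must hold size times that many elements:

/* Minimal sketch: each rank fills a block of ROWS_PER_RANK rows and
 * rank 0 gathers the blocks, in rank order, into one contiguous array. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COLS 8            /* illustrative sizes, not from the question */
#define ROWS_PER_RANK 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank's local block, marked with its rank number. */
    double local[ROWS_PER_RANK][COLS];
    for (int j = 0; j < ROWS_PER_RANK; j++)
        for (int i = 0; i < COLS; i++)
            local[j][i] = rank;

    /* Only the root needs room for size * ROWS_PER_RANK rows. */
    double (*gathered)[COLS] = NULL;
    if (rank == 0)
        gathered = malloc(sizeof(double) * size * ROWS_PER_RANK * COLS);

    /* recvcount = ROWS_PER_RANK * COLS: the count received from EACH rank. */
    MPI_Gather(&local[0][0], ROWS_PER_RANK * COLS, MPI_DOUBLE,
               gathered ? &gathered[0][0] : NULL, ROWS_PER_RANK * COLS,
               MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int j = 0; j < size * ROWS_PER_RANK; j++) {
            for (int i = 0; i < COLS; i++)
                printf("%3.0f", gathered[j][i]);
            printf("\n");
        }
        free(gathered);
    }

    MPI_Finalize();
    return 0;
}

If the per-rank block sizes were not all equal, MPI_Gatherv with explicit counts and displacements would be needed instead of MPI_Gather.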