MPI_Scatterv / Gatherv with C++ and "large" 2D matrices throws MPI errors

Time: 2017-05-17 08:30:20

Tags: c++ multidimensional-array mpi scatter

I implemented some MPI_Scatterv and MPI_Gatherv routines for a parallel matrix-matrix multiplication. Everything works fine for small matrices up to N = 180; as soon as I exceed that size, e.g. N = 184, MPI throws errors while executing MPI_Scatterv.

For the 2D scatter I use constructs built from MPI_Type_create_subarray and MPI_Type_create_resized. An explanation of these constructs can be found in this question.

The minimal example code I wrote fills a matrix A with some values, scatters it to the local processes, writes each process's rank into its local copy of the scattered A, and then gathers the local copies back to the master process.

#include "mpi.h"

#define N 184 // grid size
#define procN 2  // size of process grid

int main(int argc, char **argv) {
    double* gA = nullptr; // pointer to array
    int rank, size;       // rank of current process and no. of processes

    // mpi initialization
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // force to use correct number of processes
    if (size != procN * procN) {
        if (rank == 0) fprintf(stderr,"%s: Only works with np = %d.\n", argv[0], procN *  procN);
        MPI_Abort(MPI_COMM_WORLD,1);
    }

    // allocate and print global A at master process
    if (rank == 0) {
        gA = new double[N * N];
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                gA[j * N + i] = j * N + i;
            }
        }

        printf("A is:\n");
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                printf("%f ", gA[j * N + i]);
            }
            printf("\n");
        }
    }

    // create local A on every process which we'll process
    double* lA = new double[N / procN * N / procN];

    // create a datatype to describe the subarrays of the gA array
    int sizes[2]    = {N, N}; // gA size
    int subsizes[2] = {N / procN, N / procN}; // lA size
    int starts[2]   = {0,0}; // where this one starts
    MPI_Datatype type, subarrtype;
    MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, MPI_DOUBLE, &type);
    MPI_Type_create_resized(type, 0, N / procN * sizeof(double), &subarrtype);
    MPI_Type_commit(&subarrtype);

    // compute number of send blocks
    // compute distance between the send blocks
    int sendcounts[procN * procN];
    int displs[procN * procN];
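    // note: each sendcount is one subarrtype block; displs are measured in units
    // of its resized extent (N/procN doubles), so the block at process-grid
    // position (i,j) starts at extent offset i*N + j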

    if (rank == 0) {
        for (int i = 0; i < procN * procN; i++) {
            sendcounts[i] = 1;
        }
        int disp = 0;
        for (int i = 0; i < procN; i++) {
            for (int j = 0; j < procN; j++) {
                displs[i * procN + j] = disp;
                disp += 1;
            }
            disp += ((N / procN) - 1) * procN;
        }
    }

    // scatter global A to all processes
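    // (the root sends one subarrtype block per rank; each rank receives
    //  N*N/(procN*procN) plain MPI_DOUBLEs into its lA)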
    MPI_Scatterv(gA, sendcounts, displs, subarrtype, lA,
                 N*N/(procN*procN), MPI_DOUBLE,
                 0, MPI_COMM_WORLD);

    // print local A's on every process
    for (int p = 0; p < size; p++) {
        if (rank == p) {
            printf("la on rank %d:\n", rank);
            for (int i = 0; i < N / procN; i++) {
                for (int j = 0; j < N / procN; j++) {
                    printf("%f ", lA[j * N / procN + i]);
                }
                printf("\n");
            }
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }
    MPI_Barrier(MPI_COMM_WORLD);

    // write new values in local A's
    for (int i = 0; i < N / procN; i++) {
        for (int j = 0; j < N / procN; j++) {
            lA[j * N / procN + i] = rank;
        }
    }

    // gather all back to master process
    MPI_Gatherv(lA, N*N/(procN*procN), MPI_DOUBLE,
                gA, sendcounts, displs, subarrtype,
                0, MPI_COMM_WORLD);

    // print processed global A of process 0
    if (rank == 0) {
        printf("Processed gA is:\n");
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                printf("%f ", gA[j * N + i]);
            }
            printf("\n");
        }
    }

    MPI_Type_free(&subarrtype);

    if (rank == 0) {
        delete[] gA;
    }

    delete[] lA;

    MPI_Finalize();

    return 0;
}

It can be compiled and run with

mpicxx -std=c++11 -o test test.cpp
mpirun -np 4 ./test

For small N = 4, ..., 180 everything works fine:

A is:
0.000000 6.000000 12.000000 18.000000 24.000000 30.000000 
1.000000 7.000000 13.000000 19.000000 25.000000 31.000000 
2.000000 8.000000 14.000000 20.000000 26.000000 32.000000 
3.000000 9.000000 15.000000 21.000000 27.000000 33.000000 
4.000000 10.000000 16.000000 22.000000 28.000000 34.000000 
5.000000 11.000000 17.000000 23.000000 29.000000 35.000000 
la on rank 0:
0.000000 6.000000 12.000000 
1.000000 7.000000 13.000000 
2.000000 8.000000 14.000000 
la on rank 1:
3.000000 9.000000 15.000000 
4.000000 10.000000 16.000000 
5.000000 11.000000 17.000000 
la on rank 2:
18.000000 24.000000 30.000000 
19.000000 25.000000 31.000000 
20.000000 26.000000 32.000000 
la on rank 3:
21.000000 27.000000 33.000000 
22.000000 28.000000 34.000000 
23.000000 29.000000 35.000000 
Processed gA is:
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 

Here are the errors I get when using N = 184:

Fatal error in PMPI_Scatterv: Other MPI error, error stack:
PMPI_Scatterv(655)..............: MPI_Scatterv(sbuf=(nil), scnts=0x7ffee066bad0, displs=0x7ffee066bae0, dtype=USER<resized>, rbuf=0xe9e590, rcount=8464, MPI_DOUBLE, root=0, MPI_COMM_WORLD) failed
MPIR_Scatterv_impl(205).........: fail failed
I_MPIR_Scatterv_intra(265)......: Failure during collective
I_MPIR_Scatterv_intra(259)......: fail failed
MPIR_Scatterv(141)..............: fail failed
MPIC_Recv(418)..................: fail failed
MPIC_Wait(269)..................: fail failed
PMPIDI_CH3I_Progress(623).......: fail failed
pkt_RTS_handler(317)............: fail failed
do_cts(662).....................: fail failed
MPID_nem_lmt_dcp_start_recv(288): fail failed
dcp_recv(154)...................: Internal MPI error!  cannot read from remote process
Fatal error in PMPI_Scatterv: Other MPI error, error stack:
PMPI_Scatterv(655)..............: MPI_Scatterv(sbuf=(nil), scnts=0x7ffef0de9b50, displs=0x7ffef0de9b60, dtype=USER<resized>, rbuf=0x21a7610, rcount=8464, MPI_DOUBLE, root=0, MPI_COMM_WORLD) failed
MPIR_Scatterv_impl(205).........: fail failed
I_MPIR_Scatterv_intra(265)......: Failure during collective
I_MPIR_Scatterv_intra(259)......: fail failed
MPIR_Scatterv(141)..............: fail failed
MPIC_Recv(418)..................: fail failed
MPIC_Wait(269)..................: fail failed
PMPIDI_CH3I_Progress(623).......: fail failed
pkt_RTS_handler(317)............: fail failed
do_cts(662).....................: fail failed
MPID_nem_lmt_dcp_start_recv(288): fail failed
dcp_recv(154)...................: Internal MPI error!  cannot read from remote process

My guess is that something goes wrong with the subarrays, but then why does it work for N = 4, ..., 180? Another possibility is that for large sizes my array data is no longer contiguous in memory, so the scatter stops working. Could cache sizes become a problem? I just cannot believe that MPI is unable to scatter 2D arrays with N > 180...
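
Perhaps a check along these lines could help narrow it down (just a sketch, not something I have added to the code above): print the size and extent of the committed type and compare them with what one block and one block column should be.

// sketch: inspect the committed type; size should be N/procN * N/procN * sizeof(double)
// bytes, and extent should be N/procN * sizeof(double) bytes after the resize
int typeSize;
MPI_Aint lb, extent;
MPI_Type_size(subarrtype, &typeSize);
MPI_Type_get_extent(subarrtype, &lb, &extent);
if (rank == 0) {
    printf("subarrtype: size = %d bytes, lb = %ld, extent = %ld bytes\n",
           typeSize, (long)lb, (long)extent);
}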

I hope some of you can help me. Thanks a lot!

1 Answer:

Answer 0 (score: 1):

First of all, your code does not work for small N either. If I set N = 6 and initialise the matrix so that all entries are unique, i.e.

    gA[j * N + i] = j*N+i;

then you can see that there is a bug:

mpiexec -n 4 ./gathervorig
A is:
0.000000 6.000000 12.000000 18.000000 24.000000 30.000000 
1.000000 7.000000 13.000000 19.000000 25.000000 31.000000 
2.000000 8.000000 14.000000 20.000000 26.000000 32.000000 
3.000000 9.000000 15.000000 21.000000 27.000000 33.000000 
4.000000 10.000000 16.000000 22.000000 28.000000 34.000000 
5.000000 11.000000 17.000000 23.000000 29.000000 35.000000 
la on rank 0:
0.000000 2.000000 7.000000 
1.000000 6.000000 8.000000 
2.000000 7.000000 12.000000 
la on rank 1:
3.000000 5.000000 10.000000 
4.000000 9.000000 11.000000 
5.000000 10.000000 15.000000 
la on rank 2:
18.000000 20.000000 25.000000 
19.000000 24.000000 26.000000 
20.000000 25.000000 30.000000 
la on rank 3:
21.000000 23.000000 28.000000 
22.000000 27.000000 29.000000 
23.000000 28.000000 33.000000 

The error here is not in the scatter itself but in the print statement:

printf("%f ", lA[j * procN + i]);

which should be

printf("%f ", lA[j * N/procN + i]);

This now at least gives the correct answer for the scatter:

mpiexec -n 4 ./gathervorig
A is:
0.000000 6.000000 12.000000 18.000000 24.000000 30.000000 
1.000000 7.000000 13.000000 19.000000 25.000000 31.000000 
2.000000 8.000000 14.000000 20.000000 26.000000 32.000000 
3.000000 9.000000 15.000000 21.000000 27.000000 33.000000 
4.000000 10.000000 16.000000 22.000000 28.000000 34.000000 
5.000000 11.000000 17.000000 23.000000 29.000000 35.000000 
la on rank 0:
0.000000 6.000000 12.000000 
1.000000 7.000000 13.000000 
2.000000 8.000000 14.000000 
la on rank 1:
3.000000 9.000000 15.000000 
4.000000 10.000000 16.000000 
5.000000 11.000000 17.000000 
la on rank 2:
18.000000 24.000000 30.000000 
19.000000 25.000000 31.000000 
20.000000 26.000000 32.000000 
la on rank 3:
21.000000 27.000000 33.000000 
22.000000 28.000000 34.000000 
23.000000 29.000000 35.000000 

The gather then fails for a similar reason, namely the local initialisation:

  lA[j * procN + i] = rank;

which should be

  lA[j * N/procN + i] = rank;

After this change the gather works as well:

Processed gA is:
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
0.000000 0.000000 0.000000 2.000000 2.000000 2.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 
1.000000 1.000000 1.000000 3.000000 3.000000 3.000000 

I think the lesson here is always to use unique test data: with the matrix initialised to i*j, many entries coincide (e.g. the (1,2) and (2,1) entries are both 2), which makes the original errors hard to spot even on a small system.

In fact, the real problem is that you chose N = 4, so that procN = N/procN = 2. I always try to use sizes that give odd/unequal numbers, e.g. N = 6 gives N/procN = 3, so it cannot be confused with procN = 2.
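
To make that concrete, a guard along these lines (just a sketch, not taken from your code) makes the decomposition assumptions explicit and flags the degenerate case where N/procN equals procN:

// sketch: place after MPI_Comm_rank; static_assert assumes C++11 (matches the -std=c++11 build)
static_assert(N % procN == 0, "N must be divisible by procN");
if (rank == 0 && N / procN == procN) {
    // when N/procN == procN, bugs that swap the two factors produce no visible error
    fprintf(stderr, "note: N/procN == procN, indexing mistakes may be masked\n");
}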