Does MPI_Reduce need an existing pointer for the receive buffer?

Date: 2019-01-09 16:40:10

Tags: c pointers malloc mpi

The MPI documentation asserts that the address of the receive buffer (recvbuf) is only significant at the root. This means that the memory may not need to be allocated in the other processes, as confirmed by this question.

int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
               MPI_Op op, int root, MPI_Comm comm)

At first, I thought that recvbuf did not even have to exist: that the memory for recvbuf itself did not have to be allocated (e.g. by dynamic allocation). Unfortunately (it took me a lot of time to understand my mistake!), it seems that even if the memory it points to is invalid, the pointer itself must exist.

See below for the code I came up with, with a version that gives a segfault and a version that does not.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    // MPI initialization
    int world_rank, world_size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int n1 = 3, n2 = 10; // Sizes of the 2d arrays

    long **observables = (long **) malloc(n1 * sizeof(long *));
    for (int k = 0 ; k < n1 ; ++k) {
        observables[k] = (long *) calloc(n2, sizeof(long));
        for (long i = 0 ; i < n2 ; ++i) {
            observables[k][i] = k * i * world_rank; // Whatever
        }
    }

    long **obs_sum; // This will hold the sum on process 0
#ifdef OLD  // Version that gives a segfault
    if (world_rank == 0) {
        obs_sum = (long **) malloc(n1 * sizeof(long *));
        for (int k = 0 ; k < n1 ; ++k) {
            obs_sum[k] = (long *) calloc(n2, sizeof(long));
        }
    }
#else // Correct version
    // We define all the pointers in all the processes.
    obs_sum = (long **) malloc(n1 * sizeof(long *));
    if (world_rank == 0) {
        for (int k = 0 ; k < n1 ; ++k) {
            obs_sum[k] = (long *) calloc(n2, sizeof(long));
        }
    }
#endif

    for (int k = 0 ; k < n1 ; ++k) {
        // This is the line that results in a segfault if OLD is defined
        MPI_Reduce(observables[k], obs_sum[k], n2, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    // You may free memory here

    return 0;
}

Did I interpret this correctly? What is the reason behind this behavior?

1 answer:

Answer 0 (score: 2)

The problem is not MPI, but the fact that you are passing obs_sum[k], which you never defined or allocated at all.

for (int k = 0 ; k < n1 ; ++k) {
    // This is the line that results in a segfault if OLD is defined
    MPI_Reduce(observables[k], obs_sum[k], n2, MPI_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);
}

Even though MPI_Reduce() does not use the value on non-root ranks, the generated code still has to evaluate obs_sum[k] to build the argument: it fetches obs_sum (which is undefined and unallocated), adds k to it, and reads through this pointer to get the value passed to MPI_Reduce(). Hence the segfault.

For example, allocating just the array of line pointers should be enough to make the call work:

#else // Correct version
      // We define all the pointers in all the processes.
      obs_sum = (long **) malloc(n1 * sizeof(long *));
      // try commenting out the following lines
      // if (world_rank == 0) {
      //   for (int k = 0 ; k < n1 ; ++k) {
      //     obs_sum[k] = (long *) calloc(n2, sizeof(long));
      //   }
      // }
#endif

I would allocate the 2D array as a flat array, as I really dislike this array-of-pointers representation. Wouldn't this be better?

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    // MPI initialization
    int world_rank, world_size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int n1 = 3, n2 = 10; // Sizes of the 2d arrays

    long *observables = (long *) malloc(n1*n2*sizeof(long));
    for (int k = 0 ; k < n1 ; ++k) {
        for (long i = 0 ; i < n2 ; ++i) {
            observables[k*n2+i] = k * i * world_rank; // Whatever
        }
    }

    long *obs_sum = NULL; // This will hold the sum on process 0
    if (world_rank == 0) {
        obs_sum = (long *) malloc(n1*n2*sizeof(long));
    }

    MPI_Reduce(observables, obs_sum, n1*n2, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    // You may free memory here

    return 0;
}