MPI Scatterv: how do I handle the root process?

Asked: 2016-08-23 17:37:09

Tags: mpi

One thing I am still unsure about is what happens to the root process in MPI_Scatter / MPI_Scatterv.

If I divide an array in my code, do I need to include the root process in the number of receivers (so the send-counts array has size nproc), or should it be excluded?

In my matrix multiplication code I still get an error where one process behaves abnormally and terminates the program prematurely:

void readMatrix();

double StartTime;
int rank, nproc, proc;
//double matrix_A[N_ROWS][N_COLS];
double **matrix_A;
//double matrix_B[N_ROWS][N_COLS];
double **matrix_B;
//double matrix_C[N_ROWS][N_COLS];
double **matrix_C;
int low_bound = 0; //low bound of the number of rows of each process
int upper_bound = 0; //upper bound of the number of rows of [A] of each process
int portion = 0; //portion of the number of rows of [A] of each process


int main (int argc, char *argv[]) {

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    matrix_A = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_A[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_B = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_B[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_C = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_C[i] = (double *)malloc(N_COLS * sizeof(double));

    int *counts = new int[nproc](); // array to hold number of items to be sent to each process

    // -------------------> If we have more than one process, we can distribute the work through scatterv
    if (nproc > 1) {

        // -------------------> Process 0 initializes matrices and scatters the portions of the [A] Matrix
        if (rank==0) {
            readMatrix();
        }
        StartTime = MPI_Wtime();
        int counter = 0;
        for (int proc = 0; proc < nproc; proc++) {
            counts[proc] = N_ROWS / nproc ;
            counter += N_ROWS / nproc ;
        }
        counter = N_ROWS - counter;
        counts[nproc-1] = counter;
        //set bounds for each process
        low_bound = rank*(N_ROWS/nproc);
        portion = counts[rank];
        upper_bound = low_bound + portion;
        printf("I am process %i and my lower bound is %i and my portion is %i and my upper bound is %i \n",rank,low_bound, portion,upper_bound);
        //scatter the work among the processes
        int *displs = new int[nproc]();
        displs[0] = 0;
        for (int proc = 1; proc < nproc; proc++) displs[proc] = displs[proc-1] + (N_ROWS/nproc);
        MPI_Scatterv(matrix_A, counts, displs, MPI_DOUBLE, &matrix_A[low_bound][0], portion, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        //broadcast [B] to all the slaves
        MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);


        // -------------------> Everybody does their work
        for (int i = low_bound; i < upper_bound; i++) {//iterate through a given set of rows of [A]
            for (int j = 0; j < N_COLS; j++) {//iterate through columns of [B]
                for (int k = 0; k < N_ROWS; k++) {//iterate through rows of [B]
                    matrix_C[i][j] += (matrix_A[i][k] * matrix_B[k][j]);
                }
            }
        }

        // -------------------> Process 0 gathers the work
        MPI_Gatherv(&matrix_C[low_bound][0],portion,MPI_DOUBLE,matrix_C,counts,displs,MPI_DOUBLE,0,MPI_COMM_WORLD);
    }
...

1 Answer:

Answer 0 (score: 1)

The root process also takes part on the receiving side. If you are not interested in that, just set sendcounts[root] = 0.
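
For example, a minimal sketch (the buffer and size names such as sendbuf, recvbuf and elems_per_proc are illustrative, not taken from the question's code) in which the root already owns the full array and therefore receives nothing from the scatter:

// Sketch: rank 0 already holds the full array, so it is sent 0 elements.
int *sendcounts = (int*) calloc(nproc, sizeof(int));
int *displs = (int*) calloc(nproc, sizeof(int));
for (int p = 0; p < nproc; p++) {
    sendcounts[p] = elems_per_proc;      // hypothetical per-process element count
    displs[p] = p * elems_per_proc;
}
sendcounts[0] = 0;                       // exclude the root from the scatter

MPI_Scatterv(sendbuf, sendcounts, displs, MPI_DOUBLE,
             recvbuf, sendcounts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

Every rank still passes a matching recvcount of sendcounts[rank]; on the root that count is simply 0.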

See MPI_Scatterv for specifics on exactly which values you have to pass.

However, be careful about what you are doing here. I strongly suggest that you change the way you allocate your matrices: allocate each one as a one-dimensional array with a single malloc, like this:

double* matrix = (double*) malloc( N_ROWS * N_COLS * sizeof(double) );
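
With this layout the whole matrix is one contiguous block, and element (i, j) is simply matrix[i * N_COLS + j], so MPI_DOUBLE counts in MPI_Scatterv describe the data correctly.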

If you still want to use a 2D array, you may need to define your type as an MPI derived datatype.
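
As an illustration (not from the answer), assuming the matrices are contiguous 2D arrays such as the commented-out double matrix_A[N_ROWS][N_COLS], a row type lets counts and displacements be expressed in whole rows:

// Sketch, assuming matrix_A is a contiguous 2D array (e.g. double matrix_A[N_ROWS][N_COLS]).
MPI_Datatype row_type;
MPI_Type_contiguous(N_COLS, MPI_DOUBLE, &row_type);   // one row = N_COLS contiguous doubles
MPI_Type_commit(&row_type);

// counts[] and displs[] are now measured in rows rather than doubles
double *local_A = (double*) malloc(counts[rank] * N_COLS * sizeof(double));
MPI_Scatterv(&matrix_A[0][0], counts, displs, row_type,
             local_A, counts[rank], row_type,
             0, MPI_COMM_WORLD);

MPI_Type_free(&row_type);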

The datatype you are passing is not valid if you want to send more than one row in a single MPI transfer.

With MPI_DOUBLE you are telling MPI that the buffer contains a contiguous array of count values of type MPI_DOUBLE.

Since you are allocating your 2D arrays with multiple malloc calls, your data is not contiguous.
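
Putting the two points together, here is a minimal sketch (names such as local_rows and local_A are illustrative) of the flat, contiguous layout that makes the Scatterv call legal, with counts and displacements measured in doubles:

// Sketch: single contiguous allocation per matrix, so MPI_DOUBLE counts describe the data.
double *A = (double*) malloc(N_ROWS * N_COLS * sizeof(double));
double *B = (double*) malloc(N_ROWS * N_COLS * sizeof(double));

for (int p = 0; p < nproc; p++) {
    int rows = N_ROWS / nproc + (p == nproc - 1 ? N_ROWS % nproc : 0);
    counts[p] = rows * N_COLS;                     // whole rows, counted in doubles
    displs[p] = p * (N_ROWS / nproc) * N_COLS;
}

int local_rows = counts[rank] / N_COLS;
double *local_A = (double*) malloc(counts[rank] * sizeof(double));
MPI_Scatterv(A, counts, displs, MPI_DOUBLE,
             local_A, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);
MPI_Bcast(B, N_ROWS * N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);   // pass B itself, not &B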