MPI - no speedup with an increasing number of processes

Time: 2015-05-28 16:59:11

Tags: c++ performance mpi

I am writing a program that tests whether numbers are prime. At the start I calculate how many numbers to assign to each process and send that count to every process. Next, the computations are carried out and the data is sent back to process 0, which saves the results. The code below works, but when I increase the number of processes my program does not get any faster. It seems to me that my program is not actually working in parallel. What is wrong? This is my first MPI project, so any advice is welcome.

I am using MPICH2 and I test my program on an Intel Core i7-950.

In main.cpp:

if (rank == 0) {
    int workers = (size-1);
    readFromFile(path);
    int elements_per_proc = (N + (workers-1)) / workers;
    int rest = N % elements_per_proc;

    for (int i=1; i <= workers; i++) {
        if((i == workers) && (rest != 0))
            MPI_Send(&rest, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        else
            MPI_Send(&elements_per_proc, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
    }

    int it = 1;
    for (int i=0; i < N; i++) {
        if((i != 0) && ((i % elements_per_proc) == 0))
            it++;
        MPI_Isend(&input[i], 1, MPI_INT, it, 0, MPI_COMM_WORLD, &send_request);
    }
}

if (rank != 0) {
    int count;
    MPI_Recv(&count, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (int j=0; j < count; j++) {
        MPI_Recv(&number, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = test(number, k);
        send_array[0] = number;
        send_array[1] = result;
        MPI_Send(send_array, 2, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
}   

if (rank == 0) {
    for (int i=0; i < N; i++) {
        MPI_Recv(rec_array, 2, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        //  save results
    }
}

1 Answer:

Answer 0 (score: 3):

Your implementation probably does not scale well to many processes, because you communicate in every single step. You are currently sending every single input number, and every single result, in its own message, which adds a lot of latency overhead. Instead, you should think about passing the input in bulk (i.e., with a single message).
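
As an illustration only, here is a minimal sketch of that batching idea using the same point-to-point calls as in the question. It reuses the question's names (input, N, size, test, k) but its own simplified chunk bookkeeping, so treat it as a sketch rather than a drop-in replacement:

if (rank == 0) {
    int workers = size - 1;
    int offset = 0;
    for (int i = 1; i <= workers; i++) {
        // give the first (N % workers) workers one extra element
        int count = N / workers + (i <= N % workers ? 1 : 0);
        MPI_Send(&count, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        // one message carrying the whole chunk instead of one per number
        MPI_Send(&input[offset], count, MPI_INT, i, 0, MPI_COMM_WORLD);
        offset += count;
    }
}

if (rank != 0) {
    int count;
    MPI_Recv(&count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    int* local_input   = (int*) malloc(sizeof(int) * count);
    int* local_results = (int*) malloc(sizeof(int) * count);
    MPI_Recv(local_input, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (int j = 0; j < count; j++)
        local_results[j] = test(local_input[j], k);
    // one message carrying all results back to rank 0
    MPI_Send(local_results, count, MPI_INT, 0, 0, MPI_COMM_WORLD);
    free(local_input);
    free(local_results);
}

Rank 0 then receives one result array per worker instead of one message per number. Even so, the collective version below is both simpler and faster.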

In addition, using MPI collective operations (MPI_Scatter / MPI_Gather) instead of loops of MPI_Send / MPI_Recv will likely improve your performance further.

Furthermore, you can let the master process work on a chunk of the input as well.

A more scalable implementation could then look as follows:

// tell everybody how many elements there are in total
MPI_Bcast(&N, 1, MPI_INT, 0, MPI_COMM_WORLD);

// everybody determines how many elements it will work on
// (include the master process)
int num_local_elements = N / size + (rank < N % size ? 1 : 0);
// allocate local size
int* local_input = (int*) malloc(sizeof(int)*num_local_elements);

// distribute the input from master to everybody using MPI_Scatterv
int* counts; int* displs;
if (rank == 0) {
    counts = (int*)malloc(sizeof(int) * size);
    displs = (int*)malloc(sizeof(int) * size);
    displs[0] = 0;
    for (int i = 0; i < size; i++) {
        counts[i] = N / size + (i < N % size ? 1 : 0);
        if (i > 0)
            displs[i] = displs[i-1] + counts[i-1];
    }
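    // (illustration: with N = 10 and size = 4 this gives counts = {3, 3, 2, 2}
    //  and displs = {0, 3, 6, 8}, so the chunks cover all N inputs exactly once)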
    // scatter from master
    MPI_Scatterv(input, counts, displs, MPI_INT, local_input, num_local_elements, MPI_INT, 0, MPI_COMM_WORLD);
} else {
    // receive scattered numbers
    MPI_Scatterv(NULL, NULL, NULL, MPI_DATATYPE_NULL, local_input, num_local_elements, MPI_INT, 0, MPI_COMM_WORLD);
}

// perform prime testing
int* local_results = (int*) malloc(sizeof(int)*num_local_elements);
for (int i = 0; i < num_local_elements; ++i) {
    local_results[i] = test(local_input[i], k);
}

// gather results back to master process
int* results;
if (rank == 0) {
    results = (int*)malloc(sizeof(int)*N);
    MPI_Gatherv(local_results, num_local_elements, MPI_INT, results, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);
    // TODO: save results on master process
} else {
    MPI_Gatherv(local_results, num_local_elements, MPI_INT, NULL, NULL, NULL, MPI_INT, 0, MPI_COMM_WORLD);
}