Given a 10x10 grid like this:
0,1,2,3,4, 5,6,7,8,9,
10,11,12,13,14, 15,16,17,18,19,
20,21,22,23,24, 25,26,27,28,29,
30,31,32,33,34, 35,36,37,38,39,
40,41,42,43,44, 45,46,47,48,49,
50,51,52,53,54, 55,56,57,58,59,
60,61,62,63,64, 65,66,67,68,69,
70,71,72,73,74, 75,76,77,78,79,
80,81,82,83,84, 85,86,87,88,89,
90,91,92,93,94, 95,96,97,98,99
How can I scatter the visually separated blocks to four processes? I tried the Scatterv approach from here (or even here) and it works, but I remember a project that had exactly the same problem and did not use resize or Scatterv to solve it.
Here is my minimal code example:
#include <stdio.h>
#include <mpi.h>
#include <stdlib.h>
#include <assert.h>
#include <memory.h>
#include <unistd.h>

void print_array(int *arr, int width, int height)
{
    int i;
    for (i = 0; i < width * height; i++) {
        if ((i != 0) && (i % width == 0))
            printf("\n");
        printf("%4d ", arr[i]);
    }
    putchar('\n');
}

int main() {
    int board[100] = {
         0,  1,  2,  3,  4,   5,  6,  7,  8,  9,
        10, 11, 12, 13, 14,  15, 16, 17, 18, 19,
        20, 21, 22, 23, 24,  25, 26, 27, 28, 29,
        30, 31, 32, 33, 34,  35, 36, 37, 38, 39,
        40, 41, 42, 43, 44,  45, 46, 47, 48, 49,
        50, 51, 52, 53, 54,  55, 56, 57, 58, 59,
        60, 61, 62, 63, 64,  65, 66, 67, 68, 69,
        70, 71, 72, 73, 74,  75, 76, 77, 78, 79,
        80, 81, 82, 83, 84,  85, 86, 87, 88, 89,
        90, 91, 92, 93, 94,  95, 96, 97, 98, 99
    };
    int numprocs, rank;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *block = calloc(25, sizeof(int));
    assert(block != NULL);

    MPI_Datatype sent_block_t, resized_sent_block_t;
    MPI_Type_vector(5, 5, 10, MPI_INT, &sent_block_t);
    MPI_Type_create_resized(sent_block_t, 0, 5 * sizeof(int), &resized_sent_block_t);
    MPI_Type_commit(&sent_block_t);
    MPI_Type_commit(&resized_sent_block_t);

    if (rank == 0) {
        print_array(board, 10, 10);
        MPI_Scatter(&(board[0]), 1, resized_sent_block_t,
                    &(block[0]), 25, MPI_INT,
                    0, MPI_COMM_WORLD);
    }
    else {
        MPI_Scatter(NULL, 0, resized_sent_block_t,
                    &(block[0]), 25, MPI_INT,
                    0, MPI_COMM_WORLD);
    }

    for (int i = 0; i < numprocs; i++) {
        MPI_Barrier(MPI_COMM_WORLD);
        sleep(1);
        if (i == rank) {
            printf("\nRank: %d\n", rank);
            print_array(block, 5, 5);
        }
    }

    MPI_Finalize();
    free(block);
}
Running it with 4 processes I get:
0 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69
70 71 72 73 74 75 76 77 78 79
80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99
Rank: 0
0 1 2 3 4
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34
40 41 42 43 44
Rank: 1
5 6 7 8 9
15 16 17 18 19
25 26 27 28 29
35 36 37 38 39
45 46 47 48 49
Rank: 2
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34
40 41 42 43 44
50 51 52 53 54
Rank: 3
15 16 17 18 19
25 26 27 28 29
35 36 37 38 39
45 46 47 48 49
55 56 57 58 59
This scatter is wrong; note the contents of ranks 2 and 3. The correct result would be:
Rank 0: Rank 1:
0,1,2,3,4, 5,6,7,8,9,
10,11,12,13,14, 15,16,17,18,19,
20,21,22,23,24, 25,26,27,28,29,
30,31,32,33,34, 35,36,37,38,39,
40,41,42,43,44, 45,46,47,48,49,
Rank 2: Rank 3:
50,51,52,53,54, 55,56,57,58,59,
60,61,62,63,64, 65,66,67,68,69,
70,71,72,73,74, 75,76,77,78,79,
80,81,82,83,84, 85,86,87,88,89,
90,91,92,93,94, 95,96,97,98,99
Question
Is there a way to scatter equal-sized blocks of the grid without using Scatterv?
Answer (score: 1)
I don't think this can be done with MPI_Scatter, because the displacements between the blocks are not constant: within a row of blocks the displacement is 5 integers (or a count of 1 in units of your resized type), but the jump to the next row of blocks is a displacement of 50 integers (or a count of 10).
With 4 processes, using:
int counts[4] = {1, 1, 1, 1};
int disps[4]  = {0, 1, 10, 11};
...
if (rank == 0) print_array(board, 10, 10);
MPI_Scatterv(&(board[0]), counts, disps, resized_sent_block_t,
             &(block[0]), 25, MPI_INT,
             0, MPI_COMM_WORLD);
seems to work fine. Note that you do not need separate scatter calls for root and non-root ranks: MPI guarantees that arguments which are significant only at the root are referenced only at the root.