I think the easiest way to describe the problem is with some simple code. On each processor I dynamically allocate a "2D array" (achieved via the new*[rows], new[cols] form; see the code below for clarification). Rightly or wrongly, I'm trying to use a committed MPI_Datatype to help me do an MPI_Gatherv() that gathers all the arrays into one 2D array on the root processor.

Here is the code, and below it I highlight its salient points (it should be easy to follow if compiled and run - it asks for the dimensions of the array you want):
#include <iostream>
#include <string>
#include <cmath>
#include <cstdlib>
#include <time.h>
#include <unistd.h> // for sleep()
#include "mpi.h"

using namespace std;

// A function that prints out the 2D arrays to the terminal.
void print_2Darray(int **array_in, int dim_rows, int dim_cols) {
    cout << endl;
    for (int i=0; i<dim_rows; i++) {
        for (int j=0; j<dim_cols; j++) {
            cout << array_in[i][j] << " ";
            if (j==(dim_cols-1)) {
                cout << endl;
            }
        }
    }
    cout << endl;
}

int main(int argc, char *argv[]) {

    MPI::Init(argc, argv);

    // Typical MPI incantations...
    int size, rank;
    size = MPI::COMM_WORLD.Get_size();
    rank = MPI::COMM_WORLD.Get_rank();
    cout << "size = " << size << endl;
    cout << "rank = " << rank << endl;
    sleep(1);

    // Dynamically allocate a 2D square array of user-defined size 'dim'.
    int dim;
    if (rank == 0) {
        cout << "Please enter dimensions of 2D array ( dim x dim array ): ";
        cin >> dim;
        cout << "dim = " << dim << endl;
    }
    MPI_Bcast(&dim,1,MPI_INT,0,MPI_COMM_WORLD);

    int **array2D;
    array2D = new int*[dim];
    for (int i=0; i<dim; i++) {
        array2D[i] = new int[dim](); // the extra '()' initializes to zero.
    }

    // Fill the arrays with i*j+rank where i and j are the indices.
    for (int i=0; i<dim; i++) {
        for (int j=0; j<dim; j++) {
            array2D[i][j] = i*j + rank;
        }
    }

    // Print out the arrays.
    print_2Darray(array2D,dim,dim);

    // Commit a MPI_Datatype for these arrays.
    MPI_Datatype MPI_ARRAYROW;
    MPI_Type_contiguous(dim, MPI_INT, &MPI_ARRAYROW);
    MPI_Type_commit(&MPI_ARRAYROW);

    // Declare 'all_array2D[][]' which will contain array2D[][] from all procs.
    int **all_array2D;
    all_array2D = new int*[size*dim];
    for (int i=0; i<size*dim; i++) {
        all_array2D[i] = new int[dim](); // the extra '()' initializes to zero.
    }

    // Print out the arrays.
    print_2Darray(all_array2D,size*dim,dim);

    // Displacement vector for MPI_Gatherv() call.
    int *displace;
    displace = (int *)calloc(size,sizeof(int));
    int *dim_list;
    dim_list = (int *)calloc(size,sizeof(int));
    int j = 0;
    for (int i=0; i<size; i++) {
        displace[i] = j;
        cout << "displace[" << i << "] = " << displace[i] << endl;
        j += dim;
        dim_list[i] = dim;
    }

    // MPI_Gatherv call.
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Gatherv(array2D,dim,MPI_ARRAYROW,all_array2D,&dim_list[rank],&displace[rank],MPI_ARRAYROW,0,MPI_COMM_WORLD);

    // Print out the arrays.
    print_2Darray(all_array2D,size*dim,dim);

    MPI::Finalize();

    return 0;
}
The code compiles, but hits a segmentation fault (I compile with 'mpic++' and run on 2 processors with 'mpirun -np 2'):
[unknown-78-ca-39-b4-09-4f:02306] *** Process received signal ***
[unknown-78-ca-39-b4-09-4f:02306] Signal: Segmentation fault (11)
[unknown-78-ca-39-b4-09-4f:02306] Signal code: Address not mapped (1)
[unknown-78-ca-39-b4-09-4f:02306] Failing at address: 0x0
[unknown-78-ca-39-b4-09-4f:02306] [ 0] 2 libSystem.B.dylib 0x00007fff844021ba _sigtramp + 26
[unknown-78-ca-39-b4-09-4f:02306] [ 1] 3 ??? 0x0000000000000001 0x0 + 1
[unknown-78-ca-39-b4-09-4f:02306] [ 2] 4 gatherv2Darrays.x 0x00000001000010c2 main + 1106
[unknown-78-ca-39-b4-09-4f:02306] [ 3] 5 gatherv2Darrays.x 0x0000000100000a98 start + 52
[unknown-78-ca-39-b4-09-4f:02306] *** End of error message ***
mpirun noticed that job rank 0 with PID 2306 on node unknown-78-ca-39-b4-09-4f.home exited on signal 11 (Segmentation fault).
1 additional process aborted (not shown)
The segmentation fault occurs when the 'print_2Darray(all_array2D,size*dim,dim)' call near the end of the code is executed, where 'all_array2D' *should* contain the gathered arrays. More specifically, the code seems to print 'all_array2D' fine for the bit gathered from the master processor, but gives the seg fault when print_2Darray() starts working on the bits from the other processors.

Salient code points:

I guess it boils down to me not knowing how MPI_Gatherv() properly handles dynamically allocated arrays... should I even be using MPI_Datatypes here? It's quite important to me that the arrays are dynamically allocated.

I'd greatly appreciate any help/suggestions! I'm pretty much out of ideas!
Answer 0 (score: 4)
MPI_Gatherv, MPI_Scatterv, and in fact all other MPI communication calls that take array arguments expect that the array elements are laid out consecutively in memory. This means that in the call MPI_Gatherv(array2D, dim, MPI_ARRAYROW, ...), MPI expects the first element of type MPI_ARRAYROW to start at the memory location that array2D points to, the second element to start at (BYTE*)array2D + extent_of(MPI_ARRAYROW), the third at (BYTE*)array2D + 2*extent_of(MPI_ARRAYROW), and so on. Here extent_of() is the extent of the MPI_ARRAYROW type, which can be obtained by calling MPI_Type_get_extent.
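As a quick illustration (a sketch of mine, not part of the original answer), the extent can be queried like this; for a contiguous type of dim MPI_INTs it should equal dim*sizeof(int):

    MPI_Aint lb, extent;
    MPI_Type_get_extent(MPI_ARRAYROW, &lb, &extent);
    // MPI steps 'extent' bytes between consecutive MPI_ARRAYROW elements,
    // so the dim rows must form one contiguous dim*dim block of ints.
    cout << "extent of MPI_ARRAYROW = " << extent << " bytes, "
         << "expected " << dim*sizeof(int) << endl;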
Obviously the rows of your 2D array are not consecutive in memory, since each of them is allocated by a separate invocation of the new operator. Also, array2D is not a pointer to the data, but rather a pointer to the vector of pointers to each row. This doesn't work in MPI, and there are countless other questions on StackOverflow where this fact is discussed - just search for MPI 2D and see for yourself.

The solution is to use one big chunk of singly allocated memory with an accompanying dope vector - see this question and the arralloc() function mentioned in its answer.
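For illustration, here is a minimal sketch of that "one block plus dope vector" idea (the helper names alloc2d/free2d are mine, not the arralloc() from the linked question):

    // Allocate rows*cols ints in ONE contiguous block, plus a dope vector
    // of row pointers so array[i][j] indexing still works.
    int **alloc2d(int rows, int cols) {
        int **array = new int*[rows];      // dope vector: one pointer per row
        array[0] = new int[rows*cols]();   // single contiguous, zeroed block
        for (int i = 1; i < rows; i++)
            array[i] = array[0] + i*cols;
        return array;
    }

    void free2d(int **array) {
        delete[] array[0];  // the data block
        delete[] array;     // the dope vector
    }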
Answer 1 (score: 2)
This problem, involving array allocation, comes up all the time when dealing with C/C++ and MPI. This:
int **array2D;
array2D = new int*[dim];
for (int i=0; i<dim; i++) {
    array2D[i] = new int[dim](); // the extra '()' initializes to zero.
}
allocates dim 1d arrays, each dim ints in size. However, there's no reason at all why these should sit next to each other - the dim arrays are likely scattered across memory. So even sending dim*dim ints starting from array2D[0] won't work. The same goes for all_array2D; you're creating size*dim arrays, each of size dim, but where they sit relative to one another is anyone's guess, which makes your displacements likely wrong.
To make the arrays contiguous in memory, you need to do something like

int **array2D;
array2D = new int*[dim];
array2D[0] = new int[dim*dim];
for (int i=1; i<dim; i++) {
    array2D[i] = &(array2D[0][dim*i]);
}

and similarly for all_array2D. Only then can you start reasoning about the memory layout.
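As a quick sanity check (my addition, assuming the contiguous layout just shown), each row pointer now sits exactly dim ints after the previous one, so the whole array can be described to MPI as dim*dim ints starting at array2D[0]:

    // Requires #include <cassert>. Verify that row i starts exactly dim
    // ints after row i-1, i.e. the dim*dim ints form one contiguous block.
    for (int i = 1; i < dim; i++) {
        assert(array2D[i] == array2D[i-1] + dim);
    }
    // With this layout, MPI_Send(array2D[0], dim*dim, MPI_INT, dest, tag, comm)
    // would describe the entire array correctly.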
Answer 2 (score: 1)
I just wanted to summarize the solution which @HristoIliev and @JonathanDursi helped me reach.

1. MPI commands like MPI_Gatherv() work on contiguously allocated blocks of memory, so building the 2D array with a separate 'new' per row and feeding it to the MPI command won't work, since separate allocations are not guaranteed to be contiguous. Instead, allocate the data as one single block and point a vector of row pointers into it (the code below does this with 'calloc'; see it as an example).

2. The MPI command expects a pointer to the first element of MPI_ARRAYROW type, not a pointer to the row-pointer vector. Dereferencing the 2D array by one level, e.g. array2D[0], achieves this (again, see the modified working code below).

The final working code is below:
#include <iostream>
#include <string>
#include <cmath>
#include <cstdlib>
#include <time.h>
#include <unistd.h> // for sleep()
#include "mpi.h"

using namespace std;

void print_2Darray(int **array_in, int dim_rows, int dim_cols) {
    cout << endl;
    for (int i=0; i<dim_rows; i++) {
        for (int j=0; j<dim_cols; j++) {
            cout << array_in[i][j] << " ";
            if (j==(dim_cols-1)) {
                cout << endl;
            }
        }
    }
    cout << endl;
}

int main(int argc, char *argv[]) {

    MPI::Init(argc, argv);

    // Typical MPI incantations...
    int size, rank;
    size = MPI::COMM_WORLD.Get_size();
    rank = MPI::COMM_WORLD.Get_rank();
    cout << "size = " << size << endl;
    cout << "rank = " << rank << endl;
    sleep(1);

    // Dynamically allocate a 2D square array of user-defined size 'dim'.
    int dim;
    if (rank == 0) {
        cout << "Please enter dimensions of 2D array ( dim x dim array ): ";
        cin >> dim;
        cout << "dim = " << dim << endl;
    }
    MPI_Bcast(&dim,1,MPI_INT,0,MPI_COMM_WORLD);

    // Declare the 2D array so that the data is contiguous in memory:
    // one block of dim*dim ints, plus a vector of row pointers into it.
    int **array2D;
    array2D = (int **) calloc(dim,sizeof(int *));
    array2D[0] = (int *) calloc(dim*dim,sizeof(int));
    for (int i=1; i<dim; i++) {
        array2D[i] = array2D[0] + i*dim;
    }

    // Fill the arrays with i*j+rank where i and j are the indices.
    for (int i=0; i<dim; i++) {
        for (int j=0; j<dim; j++) {
            array2D[i][j] = i*j + rank;
        }
    }

    // Print out the arrays.
    print_2Darray(array2D,dim,dim);

    // Commit a MPI_Datatype for one row of these arrays.
    MPI_Datatype MPI_ARRAYROW;
    MPI_Type_contiguous(dim, MPI_INT, &MPI_ARRAYROW);
    MPI_Type_commit(&MPI_ARRAYROW);

    // Same contiguous layout for the gathered array: note the data block
    // must hold size*dim*dim ints (dim*dim from each of the 'size' procs).
    int **all_array2D;
    all_array2D = (int **) calloc(size*dim,sizeof(int *));
    all_array2D[0] = (int *) calloc(size*dim*dim,sizeof(int));
    for (int i=1; i<size*dim; i++) {
        all_array2D[i] = all_array2D[0] + i*dim;
    }

    // Print out the arrays.
    print_2Darray(all_array2D,size*dim,dim);

    // Receive-count and displacement vectors for the MPI_Gatherv() call,
    // both in units of MPI_ARRAYROW: each proc contributes dim rows.
    int *displace;
    displace = (int *)calloc(size,sizeof(int));
    int *dim_list;
    dim_list = (int *)calloc(size,sizeof(int));
    int j = 0;
    for (int i=0; i<size; i++) {
        displace[i] = j;
        cout << "displace[" << i << "] = " << displace[i] << endl;
        j += dim;
        dim_list[i] = dim;
        cout << "dim_list[" << i << "] = " << dim_list[i] << endl;
    }

    // MPI_Gatherv call. The buffers are array2D[0] and all_array2D[0]
    // (pointers to the actual data), and the full dim_list/displace arrays
    // are passed - they are only significant at the root.
    MPI_Barrier(MPI_COMM_WORLD);
    cout << "array2D[0] = " << array2D[0] << endl;
    MPI_Gatherv(array2D[0],dim,MPI_ARRAYROW,all_array2D[0],dim_list,displace,MPI_ARRAYROW,0,MPI_COMM_WORLD);

    // Print out the arrays.
    print_2Darray(all_array2D,size*dim,dim);

    MPI::Finalize();

    return 0;
}
Compile with mpic++.
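For example, assuming the source file is saved as gatherv2Darrays.cpp (the executable name below matches the one in the backtrace above):

    mpic++ gatherv2Darrays.cpp -o gatherv2Darrays.x
    mpirun -np 2 ./gatherv2Darrays.x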