I am trying to render a fractal that is computed with MPI. I used the answer to the following question as a reference: sending blocks of 2D array in C using MPI
My problem is that the MPI_Gatherv merge of the data computed by all the processes does not seem to work properly, since my master process always renders a black screen.
I have defined the following struct:
typedef struct Point {
    float r, g, b, x, y;
} Point;
In my main I try to create an MPI_Datatype for the struct:
// The struct is five floats, so describe it as one block of 5 MPI_FLOATs at offset 0
MPI_Datatype struct_type;
MPI_Datatype struct_members[1] = {MPI_FLOAT};
MPI_Aint offsets[1] = {0};
int struct_blengths[1] = {5};
int struct_items = 1;
MPI_Type_create_struct(struct_items, struct_blengths, offsets, struct_members, &struct_type);
MPI_Type_commit(&struct_type);
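(For reference: since Point is just five contiguous floats, I assume MPI_Type_contiguous would build an equivalent datatype; the point_type name below is only for illustration:)

// Hypothetical equivalent of the struct type above: five contiguous floats
MPI_Datatype point_type;
MPI_Type_contiguous(5, MPI_FLOAT, &point_type);
MPI_Type_commit(&point_type);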
I have a global variable that holds the result of the computation:
Point **mandelbrot;
The variable is allocated before each frame is recomputed:
if (proc_id == root) {
    // Just a check if this is the first frame that is being rendered
    if (s > 0) {
        free(&(mandelbrot[0][0]));
        free(mandelbrot);
    }
    s = W;
    // One contiguous block of W*H Points, with a pointer per column into it
    Point *p = (Point *) malloc(W * H * sizeof(Point));
    mandelbrot = (Point **) malloc(W * sizeof(Point *));
    for (int i = 0; i < W; i++) {
        mandelbrot[i] = &(p[i * H]);
    }
}
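(To convince myself that the allocation really is one contiguous block, which receiving into &(mandelbrot[0][0]) relies on, a quick sanity check like this could be added; it is only a debugging aid:)

// Needs <assert.h>: adjacent columns must sit back to back in memory
assert(&mandelbrot[0][H] == &mandelbrot[1][0]);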
Here I try to create an array subtype from the Point struct (following the referenced answer as closely as I could):
//Width of the fractal to render
W = width;
//Height of the fractal
H = height;
//Chunk of width each process is responsible for [width / number of processes]
int segmentSize = (int) W / ntasks;
MPI_Datatype type, resizedtype;
int sizes[2] = {W,H}; /* size of global array */
int subsizes[2] = {segmentSize, H}; /* size of sub-region */
int starts[2] = {0,0};
MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, struct_type, &type);
MPI_Type_create_resized(type, 0, H*sizeof(Point), &resizedtype);
MPI_Type_commit(&resizedtype);
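(To check that the resize does what I expect, the extent of the committed type can be printed; if I understand MPI_Type_create_resized correctly, it should report H*sizeof(Point) here. Again, just a debugging aid:)

// Needs <stdio.h>: expect lb == 0 and extent == H * sizeof(Point), i.e. one column
MPI_Aint lb, extent;
MPI_Type_get_extent(resizedtype, &lb, &extent);
printf("lb = %ld, extent = %ld (expected %zu)\n", (long) lb, (long) extent, H * sizeof(Point));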
Then I compute the displacements and counts of the chunks to send, and allocate memory for each process's subarray:
int sendcounts[segmentSize * H];
int displs[segmentSize * H];
if (proc_id == root) {
    for (int i = 0; i < segmentSize * H; i++) sendcounts[i] = 1;
    int disp = 0;
    for (int i = 0; i < segmentSize; i++) {
        for (int j = 0; j < H; j++) {
            displs[i * H + j] = disp;
            disp += 1;
        }
        disp += ((W / segmentSize) - 1) * H;
    }
}
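(For comparison, my reading of the referenced answer is that sendcounts and displs have one entry per rank rather than per element, with displacements counted in units of the resized extent, i.e. one column. I may be misreading it, but that version would look roughly like this:)

// One entry per rank: each rank contributes one resizedtype (segmentSize columns)
int sendcounts[ntasks];
int displs[ntasks];
for (int i = 0; i < ntasks; i++) {
    sendcounts[i] = 1;
    displs[i] = i * segmentSize; // in units of the resized extent
}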
Point *p = (Point *) malloc(segmentSize * H * sizeof(Point));
Point **segment;
segment = (Point **) malloc(segmentSize * sizeof(Point *));
for (int i = 0; i < segmentSize; i++) {
    segment[i] = &(p[i * H]);
}
Then I compute the color of the Mandelbrot set for each point in the chunk:
int i, x, y;
float c[3], dX, dY;
for (x = 0; x < segmentSize; x++) {
    for (y = 0; y < H; y++) {
        // Iterate over the point
        i = iterateMandelbrot(rM + x * dR, iM - y * dI);
        // Get decimal coordinates for rendering <0,1>
        dX = (x + segmentSize * proc_id) / W;
        dY = y / H;
        // Calculate color using Bernoulli Polynomials
        makeColor(i, maxIterations, c);
        segment[x][y].x = (float) dX;
        segment[x][y].y = (float) dY;
        segment[x][y].r = (float) c[0];
        segment[x][y].g = (float) c[1];
        segment[x][y].b = (float) c[2];
    }
}
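(To rule out the compute step, one sample per rank could be printed before the gather, just to check that the local data is not all black:)

// Needs <stdio.h>: quick per-rank sanity check before gathering
printf("rank %d: segment[0][0] = (r=%f, g=%f, b=%f)\n",
       proc_id, segment[0][0].r, segment[0][0].g, segment[0][0].b);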
Finally, I try to gather the chunks into the mandelbrot variable so the root process can render them:
int buffsize = (int) segmentSize * H;
MPI_Gatherv(&(segment[0][0]), W*H/(buffsize), struct_type,
&(mandelbrot[0][0]), sendcounts, displs, resizedtype,
root, MPI_COMM_WORLD);
MPI_Type_free(&resizedtype);
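(One more debugging idea: MPI aborts on errors by default, but with the error handler set to return, the return code of the collective can be inspected; a minimal sketch, assuming the handler is installed once after MPI_Init:)

// Make MPI return error codes instead of aborting
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

int err = MPI_Gatherv(&(segment[0][0]), W*H/(buffsize), struct_type,
                      &(mandelbrot[0][0]), sendcounts, displs, resizedtype,
                      root, MPI_COMM_WORLD);
if (err != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(err, msg, &len);
    fprintf(stderr, "rank %d: MPI_Gatherv failed: %s\n", proc_id, msg);
}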
So the problem is that no data seems to get written into the mandelbrot variable, since my master process renders a black screen. The code works without MPI, so the problem is somewhere in the MPI_Gatherv call, or perhaps in the way I allocate the arrays. I realize there are probably memory leaks around the mandelbrot and local segment arrays, but that is not my main concern right now. Can you see what I am doing wrong here? Any help is appreciated!