Sending partial MPI messages

Date: 2016-04-13 00:41:28

Tags: network-programming mpi cluster-computing distributed-computing

To avoid allocating an intermediate buffer, it makes sense in my application for MPI_Recv to receive one single big array. On the sending side, however, the data is non-contiguous, and I would like to hand each piece to the network interface as soon as it has been organized. Something like this:

MPI_Request reqs[N];
for(/* each one of my N chunks */) {
    partial_send(chunk, &reqs[chunk->idx]);
}

MPI_Waitall(N, reqs, MPI_STATUSES_IGNORE);

Or, even better for me, something along the lines of POSIX's writev function:

/* Precalculate this. */
struct iovec iov[N];
for(/* each one of my N chunks */) {
    iov[chunk->idx].iov_base = chunk->ptr;
    iov[chunk->idx].iov_len = chunk->len;
}

/* Done every time I need to send. */
MPI_Request req;
chunked_send(iov, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);

Is this possible with MPI?

1 answer:

Answer 0 (score: 0):

I'd like to simply comment but can't, as I am new to Stack Overflow and don't have sufficient reputation ...

If all your chunks are aligned on regular boundaries (e.g. they're pointers into some larger contiguous array), then you should use MPI_Type_indexed, where the displacements and counts are all measured in multiples of the basic type (here it's MPI_DOUBLE, I guess). However, if the chunks have, for example, been individually malloc'd and there is no guarantee of alignment, then you'll need the more general MPI_Type_create_struct, which specifies displacements in bytes (and also allows a different type for each block, which you don't require).
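To illustrate the first case, here is a minimal sketch of MPI_Type_indexed describing the chunks and driving a single send. The names base, chunks[i].off, chunks[i].len, dest and tag are assumptions standing in for whatever the question's real data structures look like, with offsets and lengths counted in doubles:

/* Hypothetical layout: all chunks live inside one contiguous double array
 * "base"; off and len are counted in doubles, not bytes. */
int          blocklens[N];
int          displs[N];              /* in multiples of MPI_DOUBLE */
MPI_Datatype chunked_type;

for (int i = 0; i < N; i++) {
    blocklens[i] = chunks[i].len;
    displs[i]    = chunks[i].off;    /* element offset from base */
}

MPI_Type_indexed(N, blocklens, displs, MPI_DOUBLE, &chunked_type);
MPI_Type_commit(&chunked_type);

/* One non-blocking send describes all chunks at once; the receiver can
 * post a single plain MPI_Recv for the total number of doubles. */
MPI_Request req;
MPI_Isend(base, 1, chunked_type, dest, tag, MPI_COMM_WORLD, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);

MPI_Type_free(&chunked_type);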

I was worried that you might have to do some sorting to ensure that you scan linearly through memory so the displacements never go backwards (i.e. they are "monotonically nondecreasing"). However, I believe this is only a constraint if you are going to use the types for file IO with MPI-IO rather than for point-to-point send/recv.
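For the general case, here is a sketch of MPI_Type_create_struct reusing the iovec array from the question. It uses absolute addresses (via MPI_Get_address) as the byte displacements, so the send buffer is MPI_BOTTOM; dest and tag are again placeholders, and the chunks are assumed to hold whole doubles. As noted above, for a point-to-point send the chunks do not need to be sorted by address:

int          blocklens[N];
MPI_Aint     displs[N];
MPI_Datatype types[N];
MPI_Datatype chunked_type;

for (int i = 0; i < N; i++) {
    /* iov_len is in bytes; convert to a count of doubles so the type
     * signature still matches a plain MPI_DOUBLE receive. */
    blocklens[i] = (int)(iov[i].iov_len / sizeof(double));
    types[i]     = MPI_DOUBLE;
    MPI_Get_address(iov[i].iov_base, &displs[i]);
}

MPI_Type_create_struct(N, blocklens, displs, types, &chunked_type);
MPI_Type_commit(&chunked_type);

MPI_Request req;
MPI_Isend(MPI_BOTTOM, 1, chunked_type, dest, tag, MPI_COMM_WORLD, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);

MPI_Type_free(&chunked_type);

Note that a datatype built from absolute addresses is tied to those particular buffers, so it has to be rebuilt whenever the chunks are reallocated.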