I'm working on a project with several slave nodes and one master node. At some point I need to collect data from the different slave nodes (the master can also be treated as a slave) onto the master node. The data can be of any type, but let's assume it is unsigned int. This is how the data looks on the slave nodes:
node0:| chunk01 | chunk02 | chunk03 | chunk04 | ....
node1:| chunk11 | chunk12 | chunk13 | chunk14 | ....
...
noden:| chunkn1 | chunkn2 | chunkn3 | chunkn4 | ....
The data should all be gathered onto node0 and look like this:
node0:| chunk01 | chunk11 | chunk21 | .... | chunkn1 | chunk02 | chunk12 | ... | chunkn2 | ... | chunknm |
That is, we concatenate the first chunk from every node, then the second chunk from every node, and so on.
I don't know how to implement this with MPI_Gatherv, because each chunkij has a different size, and each node only knows its own chunk sizes and start indices, not those of the other nodes.
I'm not very familiar with MPI, so I'm wondering: is there any API for gathering data of different sizes from different nodes onto a single node?
Answer 0 (score: 1)
Here is an example you can edit from. It's almost certainly not the best way to solve the problem - I'd need more details of your code to comment on that. I haven't checked that it compiles, but if you fix any typos I'm happy to try to fix any outstanding bugs.
I also don't know how important efficiency is to you - will this operation happen hundreds of times per second or once a day? If the latter, this code is probably fine. I'm also assuming C/C++.
// Rank of this process (fill in on each node with MPI_Comm_rank).
int myRank;
MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
// Number of processes (fill in on each node with MPI_Comm_size).
int P;
MPI_Comm_size(MPI_COMM_WORLD, &P);
// Num chunks per core.
const int M = 4;
// I'm assuming 0 is the master.
int masterNodeRank = 0;
// Populate this: sizeOfEachChunkOnEachRank[m][r] is the size of chunk m on rank r,
// so each of the M pointers should point at an int[P].
// It only needs to have meaningful data on the master node.
// If the master node doesn't have the data, fill it with MPI_Gather
// (see the sketch just below).
int* sizeOfEachChunkOnEachRank[M];
// Populate this.
// It needs to exist on every 'slave' node.
int sizeOfMyChunks[M];
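// A minimal sketch of the MPI_Gather mentioned above, for the case where only
// each rank knows its own chunk sizes: every rank sends its M sizes to the
// master, which then transposes them into the per-chunk arrays. It assumes
// each sizeOfEachChunkOnEachRank[m] has already been allocated as new int[P]
// on the master; allChunkSizes is just a temporary name used here.
int* allChunkSizes = new int[M * P];   // allChunkSizes[r*M + m] = size of chunk m on rank r
MPI_Gather(sizeOfMyChunks, M, MPI_INT,   // every rank sends its M sizes
           allChunkSizes, M, MPI_INT,    // master receives M ints per rank
           masterNodeRank, MPI_COMM_WORLD);
if (myRank == masterNodeRank)
{
    for (int m = 0; m < M; ++m)
        for (int r = 0; r < P; ++r)
            sizeOfEachChunkOnEachRank[m][r] = allChunkSizes[r * M + m];
}
delete[] allChunkSizes;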
// Assuming you already have this array
// it should be the contiguous store of each core's data.
unsigned* myData;
// Populate this on the master node: the total number of unsigneds to gather,
// i.e. the sum of all chunk sizes over all ranks.
int totalDataSize;
// This is what we'll gather all the data into; it's only read on the master node.
unsigned* gatheredData = new unsigned[totalDataSize];
// This array will keep all of the displacements from each sending node.
int* displacements = new int[P];
// This keeps track of how many unsigneds we've received so far.
int totalCountSoFar = 0;
// We'll work through all the first chunks on each node at once, then all
// the second chunks, etc.
for(int localChunkNum = 0; localChunkNum < M; ++localChunkNum)
{
    // On the receiving node we need to calculate all the displacements
    // for the received data to go into the array
    if (myRank == masterNodeRank)
    {
        displacements[0] = 0;
        for(int otherCore = 1; otherCore < P; ++otherCore)
        {
            displacements[otherCore] = displacements[otherCore-1] + sizeOfEachChunkOnEachRank[localChunkNum][otherCore-1];
        }
    }
    // On all cores, we'll need to calculate how far into our local array
    // to start the sending from.
    int myFirstIndex = 0;
    for(int previousChunk=0; previousChunk < localChunkNum; previousChunk++)
    {
        myFirstIndex += sizeOfMyChunks[previousChunk];
    }
    // Do the variable gather
    MPI_Gatherv(&myData[myFirstIndex],                      // Start address to send from
                sizeOfMyChunks[localChunkNum],              // Number to send
                MPI_UNSIGNED,                               // Type to send
                &gatheredData[totalCountSoFar],             // Start address to receive into
                sizeOfEachChunkOnEachRank[localChunkNum],   // Number expected from each core
                displacements,                              // Displacements to receive into from each core
                MPI_UNSIGNED,                               // Type to receive
                masterNodeRank,                             // Receiving core rank
                MPI_COMM_WORLD);                            // MPI communicator.
    // If this is the receiving rank, update the count we've received so far
    // so that we don't overwrite data the next time we do the gather.
    // Note that the total received is the displacement to the receive from the
    // last core + the total received from that core.
    if(myRank == masterNodeRank)
    {
        totalCountSoFar += displacements[P-1] + sizeOfEachChunkOnEachRank[localChunkNum][P-1];
    }
}
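// At this point, on the master rank, gatheredData holds chunk 1 from every
// rank, then chunk 2 from every rank, and so on (totalCountSoFar unsigneds in
// total). Use or copy the result here, before the buffers are freed.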
delete[] displacements;
delete[] gatheredData;
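In case it helps, this is roughly how I'd expect the snippet above to be embedded - just the usual MPI boilerplate, with your own setup of myData and sizeOfMyChunks in the middle (a minimal sketch, assuming everything lives in main):
#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    // ... set up myData and sizeOfMyChunks on each rank,
    //     then run the gather code above ...

    MPI_Finalize();
    return 0;
}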