PETSc - combining a distributed vector into a local vector

Date: 2018-01-18 05:07:29

Tags: c petsc

I am using PETSc and I want to combine a distributed Vec so that every process ends up with a full copy of it. I have a minimal example that starts from a data array, constructs an MPI Vec from it, and then tries to use a VecScatter to combine the vector pieces from the different processes. When I do this, the local vector only receives the values stored on process 0; it does not receive the entries from the other processes. How do I combine a distributed vector to produce a complete local vector?

#include <petscvec.h>

double primes[] = {2,3,5,7,11,13,17};
int nprimes = 7;

int main(int argc,char **argv)
{
    PetscInitialize(&argc,&argv, NULL,NULL);

    MPI_Comm       comm=MPI_COMM_WORLD;
    Vec xpar,xseq;
    PetscInt low,high;
    IS index_set_global, index_set_local;
    const PetscInt *indices;
    VecScatter vc;
    PetscErrorCode ierr;

    //Set up parallel vector
    ierr = VecCreateMPI(comm, PETSC_DETERMINE, nprimes, &xpar); CHKERRQ(ierr);
    ierr = VecGetOwnershipRange(xpar, &low, &high); CHKERRQ(ierr);
    ierr = ISCreateStride(comm, high - low, low, 1, &index_set_global); CHKERRQ(ierr);
    ierr = ISGetIndices(index_set_global, &indices); CHKERRQ(ierr);
    ierr = ISView(index_set_global, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
    ierr = VecSetValues(xpar, high - low, indices, primes + low, INSERT_VALUES);CHKERRQ(ierr);
    ierr = VecAssemblyBegin(xpar); CHKERRQ(ierr);
    ierr = VecAssemblyEnd(xpar); CHKERRQ(ierr);
    ierr = VecView(xpar, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

    //Scatter parallel vector so all processes have full vector
    ierr = VecCreateSeq(PETSC_COMM_SELF, nprimes, &xseq); CHKERRQ(ierr);
    //ierr = VecCreateMPI(comm, high - low, nprimes, &xseq); CHKERRQ(ierr);
    ierr = ISCreateStride(comm, high - low, 0, 1, &index_set_local); CHKERRQ(ierr);
    ierr = VecScatterCreate(xpar, index_set_local, xseq, index_set_global, &vc); CHKERRQ(ierr);
    ierr = VecScatterBegin(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
    ierr = VecScatterEnd(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_SELF, "\nPrinting out scattered vector\n"); CHKERRQ(ierr);
    ierr = VecView(xseq, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

    PetscFinalize();
}

Output:

mpiexec -n 2 ./test

IS Object: 2 MPI processes
type: stride
[0] Index set is permutation
[0] Number of indices in (stride) set 4
[0] 0 0
[0] 1 1
[0] 2 2
[0] 3 3
[1] Number of indices in (stride) set 3
[1] 0 4
[1] 1 5
[1] 2 6
Vec Object: 2 MPI processes
type: mpi
Process [0]
2.
3.
5.
7.
Process [1]
11.
13.
17.

Printing out scattered vector

Printing out scattered vector
Vec Object: 1 MPI processes
type: seq
2.
3.
5.
7.
0.
0.
0.

1 Answer:

Answer 0 (score: 1)

VecScatterCreateToAll() is exactly what you need:


    Creates a vector and a scatter context that copies all vector values to each processor.

It is used in ksp/.../ex49.c, and it is ultimately implemented in vecmpitoseq.c.

The naming convention is presumably inspired by MPI functions such as MPI_Allgather(), which distributes the gathered data to all processes, whereas MPI_Gather() only collects the data on a designated root process.
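For reference, here is a minimal sketch of the question's example rewritten around VecScatterCreateToAll(). It is not a drop-in replacement verified against the asker's build, but a sketch of the intended usage: VecScatterCreateToAll() allocates both the sequential output vector and the scatter context itself, so the hand-built index sets from the question are no longer needed (error checking via CHKERRQ is elided for brevity):

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
    PetscInitialize(&argc, &argv, NULL, NULL);

    double primes[] = {2, 3, 5, 7, 11, 13, 17};
    PetscInt nprimes = 7, low, high, i;
    Vec xpar, xseq;
    VecScatter ctx;

    /* Distributed vector, filled with each rank's owned slice,
       as in the question */
    VecCreateMPI(MPI_COMM_WORLD, PETSC_DETERMINE, nprimes, &xpar);
    VecGetOwnershipRange(xpar, &low, &high);
    for (i = low; i < high; i++)
        VecSetValue(xpar, i, primes[i], INSERT_VALUES);
    VecAssemblyBegin(xpar);
    VecAssemblyEnd(xpar);

    /* VecScatterCreateToAll creates both xseq (a sequential Vec of
       length nprimes on every rank) and the scatter context ctx */
    VecScatterCreateToAll(xpar, &ctx, &xseq);
    VecScatterBegin(ctx, xpar, xseq, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(ctx, xpar, xseq, INSERT_VALUES, SCATTER_FORWARD);

    /* Every rank now holds the full vector 2. 3. 5. ... 17. in xseq */
    VecView(xseq, PETSC_VIEWER_STDOUT_SELF);

    VecScatterDestroy(&ctx);
    VecDestroy(&xseq);
    VecDestroy(&xpar);
    PetscFinalize();
    return 0;
}
```

Note that the scatter uses INSERT_VALUES rather than ADD_VALUES, since the goal is to copy the distributed entries into the local vector, not to accumulate them.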