How do I share data between processes in Python using MPI?

Asked: 2012-12-14 14:45:41

Tags: python mpi

I am trying to parallelize a script I wrote. Each process needs to do a calculation and store its data in a specific part of an array (a list of lists). Each process calculates and stores its data correctly, but I can't figure out how to get the data from the non-root processes back to the root process so that it can print the data out to a file. I have made a minimal working example of my script; for simplicity, this one is designed to run on 2 cores:

from mpi4py import MPI 
import pdb 
import os

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

# Declare the array that will store all the temp results
temps = [[0 for x in xrange(5)] for x in xrange(4)]

# Loop over all directories
if rank==0:
   counter = 0 
   for i in range(2):
      for j in range(5):
         temps[i][j] = counter
         counter = counter + 1

else:
   counter = 20
   for i in range(2,4):
      for j in range(5):
         temps[i][j] = counter
         counter = counter + 1 

temps = comm.bcast(temps,root=0)

if rank==0:

   print temps

I execute the script using:

mpiexec -n 2 python mne.py

At the end of the run, the output is:

[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]

So you can see that the data sharing is not working properly. Can someone tell me the correct way to get the data back to the root process?

1 Answer:

Answer 0 (score: 4)

The code is working fine; it just isn't doing what you want it to.

This line

temps = comm.bcast(temps,root=0)

broadcasts processor 0's temps variable to all processors (including rank 0), which of course gives the result above. What you want is a gather (or an allgather, if you want all processors to have the answer). That would look something like this:

from mpi4py import MPI
import pdb
import os

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

assert size == 2

# Declare the array that will store all the temp results
temps = [[0 for x in xrange(5)] for x in xrange(4)]

# declare the array that holds the local results
locals =[[0 for x in xrange(5)] for x in xrange(2)]

# Loop over all directories
if rank==0:
   counter = 0
   for i in range(2):
      for j in range(5):
         locals[i][j] = counter
         counter = counter + 1

else:
   counter = 20
   for i in range(2):
      for j in range(5):
         locals[i][j] = counter
         counter = counter + 1

# On the root, gather returns a list with one entry (the local block) per rank
temps = comm.gather(locals,root=0)

if rank==0:
   print temps
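
For the allgather variant mentioned above, here is a minimal sketch that reuses the locals list from the example; the flattening step is just one way of stitching the per-rank blocks back into a single 4x5 list of lists:

# Every rank receives the list of per-rank blocks
all_blocks = comm.allgather(locals)

# Flatten the two 2x5 blocks back into one 4x5 list of lists
temps = [row for block in all_blocks for row in block]

print temps   # identical on every rank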

If you really want to do the collection in place, and you know (say) that all of the real data will be greater than the zeros you initialized the array with, you can use a reduction operation instead; but this is easier with numpy arrays:

from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

assert size == 2

# Declare the array that will store all the temp results
temps = numpy.zeros((4,5))

# Loop over all directories
if rank==0:
   counter = 0
   for i in range(2):
      for j in range(5):
         temps[i,j] = counter
         counter = counter + 1

else:
   counter = 20
   for i in range(2,4):
      for j in range(5):
         temps[i,j] = counter
         counter = counter + 1

# Element-wise max across ranks: the real values overwrite the initial zeros
comm.Allreduce(MPI.IN_PLACE,temps,op=MPI.MAX)

if rank==0:
   print temps
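
A side note on the capitalization: in mpi4py, the lowercase methods (gather, bcast, allgather) communicate arbitrary Python objects by pickling them, while the capitalized methods (Gather, Allreduce) operate directly on buffer-like objects such as NumPy arrays, which is faster for numeric data. Since the blocks here are equal-sized, the same collection can also be done with the buffer-based Gather; a minimal sketch along those lines (variable names are illustrative):

from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

assert size == 2

# Each rank fills only its own 2x5 block, with the same values as above
local = numpy.arange(10, dtype='d').reshape(2,5) + 20*rank

# The receive buffer only needs to exist on the root
temps = numpy.zeros((4,5)) if rank==0 else None

# Buffer-based gather: concatenates the per-rank blocks into temps on the root
comm.Gather(local, temps, root=0)

if rank==0:
   print temps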