I have an MPI (mpi4py) script, running on a single node, that works with a very large object. To make the object available to all processes, I distribute it via comm.bcast(). This copies the object into every process and consumes a lot of memory, especially during the copy itself. So instead of the object, I would like to share something like a pointer to it. I found that some features of memoryview are useful for working with the object within a process. The object's actual memory address is also accessible through the string representation of the memoryview object and can be distributed like this:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank:
    content_pointer = comm.bcast(None, root=0)
    print(rank, content_pointer)
else:
    content = ''.join(['a' for i in range(100000000)]).encode()
    mv = memoryview(content)
    print(mv)
    comm.bcast(str(mv).split()[-1][:-1], root=0)
This prints:
<memory at 0x7f362a405048>
1 0x7f362a405048
2 0x7f362a405048
...
That is why I think there must be a way to reconstruct the object in another process. However, I cannot find any clue in the documentation on how to do this. In short, my question is: is it possible to share an object between processes on the same node with mpi4py?
Answer 0 (score: 2)
Here is a simple example of using MPI's shared memory, slightly modified from https://groups.google.com/d/msg/mpi4py/Fme1n9niNwQ/lk3VJ54WAQAJ
You can run it with: mpirun -n 2 python3 shared_memory_test.py
(assuming you saved it as shared_memory_test.py)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# create a shared array of size 1000 elements of type double
size = 1000
itemsize = MPI.DOUBLE.Get_size()
if comm.Get_rank() == 0:
    nbytes = size * itemsize
else:
    nbytes = 0

# on rank 0, create the shared block
# on rank 1 get a handle to it (known as a window in MPI speak)
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=comm)

# create a numpy array whose data points to the shared mem
buf, itemsize = win.Shared_query(0)
assert itemsize == MPI.DOUBLE.Get_size()
ary = np.ndarray(buffer=buf, dtype='d', shape=(size,))

# in process rank 1:
# write the numbers 0.0,1.0,..,4.0 to the first 5 elements of the array
if comm.rank == 1:
    ary[:5] = np.arange(5)

# wait in process rank 0 until process 1 has written to the array
comm.Barrier()

# check that the array is actually shared and process 0 can see
# the changes made in the array by process 1
if comm.rank == 0:
    print(ary[:10])
This should output (printed from process rank 0):
[0. 1. 2. 3. 4. 0. 0. 0. 0. 0.]