As an experiment, I converted the ring_c.c code from the Open MPI examples to Python using mpi4py. Here is my code.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

next_proc = (rank + 1) % size
prev_proc = (rank + size - 1) % size
tag = 2
message = 10

if 0 == rank:
    comm.send(message, dest=next_proc, tag=tag)

while(1):
    message = comm.recv(message, source=prev_proc, tag=tag)
    if 0 == rank:
        message = message - 1
        print "Process %d decremented value: %d\n" %(rank, message)
    comm.send(message, dest=next_proc, tag=tag)
    if 0 == message:
        print "Process %d exiting\n" %(rank)
        break

if 0 == rank:
    message = comm.recv(message, source=prev_proc, tag=tag)
When I run it through mpiexec with any number of processes, e.g.
mpiexec -n 10 python ring_py.py
it gives the following output and error:
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Traceback (most recent call last):
File "ring_py.py", line 20, in <module>
message = comm.recv(message, source=prev_proc, tag=tag)
File "MPI/Comm.pyx", line 1192, in mpi4py.MPI.Comm.recv (src/mpi4py.MPI.c:106889)
File "MPI/msgpickle.pxi", line 287, in mpi4py.MPI.PyMPI_recv (src/mpi4py.MPI.c:42965)
mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated
A few observations:
Some details about my system:
Could someone help me understand what is happening in my code?
Thank you, Jayant
Answer 0 (score: 0)
I tried your code and I got an error on the statement
message = comm.recv(message, source=prev_proc, tag=tag)
TypeError: expected a writable buffer object
Following the tutorial of mpi4py or MPI4Py causes error on send/recv, I tried this successfully:
message = comm.recv(source=prev_proc, tag=tag)
Answer 1 (score: 0)
Thanks to Francis, I was able to solve the mystery. I know Python is case-sensitive, yet I still missed that there are two distinct sets of functions for sending and receiving messages: Send/Recv work with NumPy arrays (buffer-like objects), while send/recv use pickle.
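The difference between the two code paths can be sketched without MPI at all. This is only an illustration of the underlying mechanics, not mpi4py internals: lowercase send/recv serialize the object with pickle and return a brand-new object on receive, while uppercase Send/Recv copy raw bytes into a writable buffer the receiver supplies up front.

```python
import pickle
import numpy as np

# Lowercase send/recv path: the object is pickled on send and unpickled on
# recv, so any picklable Python object works and recv RETURNS the result.
obj = 10
restored = pickle.loads(pickle.dumps(obj))
assert restored == 10

# Uppercase Send/Recv path: MPI copies raw bytes into an existing writable
# buffer, so the receiver must pre-allocate one (e.g. a NumPy array).
buf = np.zeros(1, dtype='i')                    # pre-allocated receive buffer
incoming = np.array([10], dtype='i').tobytes()  # bytes "on the wire"
buf[:] = np.frombuffer(incoming, dtype='i')     # byte copy, no serialization
assert buf[0] == 10
```

This is also why passing `message` as the first argument of recv fails: that slot is for a writable receive buffer, and a plain int is not one.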
So the first version, i.e. the NumPy version, could be:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

next_proc = (rank + 1) % size
prev_proc = (rank + size - 1) % size
tag = 2
message = np.array([10], dtype='i')  # dtype 'i' matches MPI.INT

if 0 == rank:
    print "Process %d sending %d to %d, tag %d (%d processes in ring)\n" %(rank, message[0], next_proc, tag, size)
    comm.Send([message, MPI.INT], dest=next_proc, tag=tag)

while(1):
    comm.Recv([message, MPI.INT], source=prev_proc, tag=tag)
    if 0 == rank:
        message[0] = message[0] - 1
        print "Process %d decremented value: %d\n" %(rank, message[0])
    comm.Send([message, MPI.INT], dest=next_proc, tag=tag)
    if 0 == message[0]:
        print "Process %d exiting\n" %(rank)
        break
The second version, i.e. the pickle version, could be:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

next_proc = (rank + 1) % size
prev_proc = (rank + size - 1) % size
tag = 2
message = 10

if 0 == rank:
    print "Process %d sending %d to %d, tag %d (%d processes in ring)\n" %(rank, message, next_proc, tag, size)
    comm.send(message, dest=next_proc, tag=tag)

while(1):
    message = comm.recv(source=prev_proc, tag=tag)
    if 0 == rank:
        message = message - 1
        print "Process %d decremented value: %d\n" %(rank, message)
    comm.send(message, dest=next_proc, tag=tag)
    if 0 == message:
        print "Process %d exiting\n" %(rank)
        break
Both versions produce the same output. The difference is in execution time: as the mpi4py tutorial notes, the NumPy version is faster.
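A rough way to see where that time goes, again without any MPI (a sketch using only pickle and NumPy): every lowercase send pickles the whole object, including class and dtype metadata, while uppercase Send hands MPI the array's existing memory.

```python
import pickle
import numpy as np

message = np.array([10], dtype='i')  # same 4-byte payload as the ring example

# send/recv path: each hop pickles the object (type info, dtype, shape, data).
blob = pickle.dumps(message)

# Send/Recv path: MPI reads the 4 raw bytes straight out of the array's buffer.
raw = message.tobytes()

# The pickle blob is many times larger than the raw payload, and the
# serialize/deserialize work is repeated on every hop around the ring.
print(len(blob), len(raw))
```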