I am trying to spawn a couple of processes using PyTorch's multiprocessing module within an OpenMPI distributed backend. What I have is the following code:
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank_local, rank, world_size, maingp):
    print("I WAS SPAWNED ", rank_local, " OF ", rank)
    tensor = torch.zeros(1)
    tensor += 1
    if rank == 0:
        tensor += 100
        dist.send(tensor, dst=1)
    else:
        print("I am spawn: ", rank, "and my tensor value before receive: ", tensor[0])
        dist.recv(tensor, src=0)
        print("I am spawn: ", rank, "and my tensor value after receive: ", tensor[0])

if __name__ == '__main__':
    # Initialize the default process group over MPI
    dist.init_process_group(backend="mpi", group_name="main")
    maingp = None  # torch.distributed.new_group([0,1])
    mp.set_start_method('spawn')
    # Get current process information
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    # Establish local rank and spawn one child process on this node
    mp.spawn(run, args=(rank, world_size, maingp), nprocs=1)
I run this code with OpenMPI as follows:
mpirun -n 2 python code.py
So my understanding is that mpirun creates two processes with ranks [0, 1], and each of these processes spawns a new process with local rank 0. Now, when I try to communicate between these two child processes of the main processes, I get a traceback ending in the following error:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/usama/code/test/code.py", line 19, in run
    dist.send(tensor, dst=1)
  File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 666, in send
    _check_default_pg()
  File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 191, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
My question is: how do I enable these child processes to communicate, i.e., how can the [0, 0] process send something to the [1, 0] process? Any ideas?
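My suspicion is that the default process group initialized over MPI in the parent is simply not visible inside the child created by mp.spawn, since a freshly spawned process starts with no process group at all. One workaround I have been sketching is to let each child initialize its own group over TCP with the gloo backend, deriving a global rank from the parent's MPI rank. The rendezvous address/port, the NPROCS_PER_PARENT constant, and the rank arithmetic below are my own assumptions and untested:

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

NPROCS_PER_PARENT = 1  # matches nprocs=1 passed to mp.spawn

def run(local_rank, parent_rank, world_size):
    # Each spawned child builds its own process group instead of
    # relying on the parent's MPI group (assumption: a gloo group in
    # the children can coexist with the parents' mpi backend).
    global_rank = parent_rank * NPROCS_PER_PARENT + local_rank
    gloo_world_size = world_size * NPROCS_PER_PARENT  # total children
    dist.init_process_group(
        backend="gloo",                       # gloo, not mpi, in the children
        init_method="tcp://127.0.0.1:29500",  # assumed rendezvous address/port
        rank=global_rank,
        world_size=gloo_world_size,
    )
    tensor = torch.zeros(1)
    if global_rank == 0:
        tensor += 101
        dist.send(tensor, dst=1)  # child [0, 0] sends to child [1, 0]
    else:
        dist.recv(tensor, src=0)
    print("child", global_rank, "has tensor", tensor[0].item())

if __name__ == '__main__':
    dist.init_process_group(backend="mpi", group_name="main")
    world_size = dist.get_world_size()  # 2 under "mpirun -n 2"
    rank = dist.get_rank()
    mp.set_start_method('spawn')
    mp.spawn(run, args=(rank, world_size), nprocs=NPROCS_PER_PARENT)

I don't know whether mixing the mpi backend in the parents with a gloo group in the children like this is sound, or whether there is a cleaner, more idiomatic way to do this, so pointers would be appreciated.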