Arguments are located on different GPUs when using nn.DataParallel(model)

Asked: 2018-10-06 09:59:25

Tags: python pytorch multi-gpu

PyTorch 0.4.1

Python 2.7.12

I am adapting the NMP QC code (with some compatibility issues ironed out) to use multiple GPUs, because a single GPU cannot handle the workload (it crashes after running out of VRAM).

I'm new to PyTorch, but I found a tutorial on using nn.DataParallel(model) to enable multi-GPU usage.

I modified main.py to use nn.DataParallel(model); the changed regions are marked with "#NEW".
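
For reference, the pattern from that tutorial looks roughly like this (MyModel and its single layer are placeholders, not the actual NMP QC classes):

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the actual model; not the NMP QC code.
    class MyModel(nn.Module):
        def __init__(self):
            super(MyModel, self).__init__()
            self.fc = nn.Linear(10, 10)

        def forward(self, x):
            return self.fc(x)

    model = MyModel()
    if torch.cuda.device_count() > 1:
        # Splits each input batch along dim 0 and runs one replica per GPU
        model = nn.DataParallel(model)
    model = model.cuda()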

The code runs fine on a single GPU, even in multi-GPU mode, but when running on 2 or more GPUs it fails with an "arguments are located on different GPUs" error:

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs3
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs2
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/QKSBQ5PAFDDC3OMBEELQQETALQ:/var/lib/docker/overlay2/l/WWYI3IDQPNXGON7AHODBPSTVXL:/var/lib/docker/overlay2/l/Q54I2HYS4TKH4LDJKBTVTGWWO6:/var/lib/docker/overlay2/l/IUV2LFPNMPOS3MREOTT52TKL54:/var/lib/docker/overlay2/l/DB5GBUCI3DCBPX6TJG3O337YVB:/var/lib/docker/overlay2/l/DNYKXCZJH5FMFNJLNGYJJ2ITPI:/var/lib/docker/overlay2/l/7DZCTDVNSTPJISGW65UG7U3F75:/var/lib/docker/overlay2/l/VOEQO652VS63NLDLZZ4TCIJLO6:/var/lib/docker/overlay2/l/4SI6ZCRUIORG5'
Traceback (most recent call last):
  File "main.py", line 332, in <module>
    main()
  File "main.py", line 190, in main
    train(train_loader, model, criterion, optimizer, epoch, evaluation, logger)
  File "main.py", line 251, in train
    output = model(g, h, e)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
    raise output
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:236
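
For what it's worth, this error in torch 0.4.x typically means some tensor touched inside forward() stayed pinned to one GPU while a replica ran on another. A minimal, entirely hypothetical reproduction (not the NMP QC code): a plain tensor attribute, unlike an nn.Parameter or a registered buffer, is not moved when nn.DataParallel replicates the module.

    import torch
    import torch.nn as nn

    class Pinned(nn.Module):
        def __init__(self):
            super(Pinned, self).__init__()
            # Plain attribute, not an nn.Parameter or registered buffer:
            # every DataParallel replica keeps this tensor on GPU 0.
            self.weight = torch.randn(4, 4).cuda(0)

        def forward(self, x):
            # On the replica running on GPU 1, x lives on device 1 while
            # self.weight is still on device 0, so this blas call raises
            # "arguments are located on different GPUs".
            return torch.mm(x, self.weight)

    model = nn.DataParallel(Pinned().cuda())
    out = model(torch.randn(8, 4).cuda())  # fails with 2+ GPUs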

Since I send the inputs one at a time instead of all together like in the tutorial, I checked with .get_device(), which confirmed that all 4 arguments being sent (g, h, e, target) are on the same device (device 0).
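
The check looked something like this (the loop itself is my reconstruction; g, h, e, and target are the names from the question):

    # In torch 0.4.x, Tensor.get_device() returns the CUDA device index.
    # Run just before output = model(g, h, e) in train():
    for name, tensor in [('g', g), ('h', h), ('e', e), ('target', target)]:
        print(name, tensor.get_device())  # all four print 0 (device 0)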

0 Answers:

There are no answers yet.