How to use PyTorch multiprocessing?

Asked: 2018-02-16 08:05:10

Tags: python computer-vision multiprocessing pytorch

I am trying to use Python's multiprocessing `Pool` in pytorch to process an image. Here is the code:


from multiprocessing import Process, Pool
import torch
from torch.autograd import Variable
import numpy as np
from scipy.ndimage import zoom

def get_pred(args):

  img = args[0]
  scale = args[1]
  scales = args[2]
  img_scale = zoom(img.numpy(),
                     (1., 1., scale, scale),
                     order=1,
                     prefilter=False,
                     mode='nearest')

  # feed input data
  input_img = Variable(torch.from_numpy(img_scale),
                     volatile=True).cuda()
  return input_img

scales = [1,2,3,4,5]
scale_list = []
for scale in scales: 
    scale_list.append([img,scale,scales])
multi_pool = Pool(processes=5)
predictions = multi_pool.map(get_pred,scale_list)
multi_pool.close() 
multi_pool.join()

I get this error:

`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`

on this line:

`predictions = multi_pool.map(get_pred,scale_list)`

Can anyone tell me what I am doing wrong?

2 Answers:

Answer 0 (score: 4):

As stated in the pytorch documentation, the best practice for handling multiprocessing is to use `torch.multiprocessing` instead of `multiprocessing`.

Be aware that sharing CUDA tensors between processes is supported only in Python 3, with either `spawn` or `forkserver` as the start method.

Without touching your code, a workaround for the error you got is to replace

from multiprocessing import Process, Pool

with:

from torch.multiprocessing import Pool, Process, set_start_method
try:
    set_start_method('spawn')
except RuntimeError:
    pass
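
For reference, here is one way the question's snippet could be restructured on top of that fix. This is a minimal sketch, not the asker's actual pipeline: the dummy `img` tensor and its shape are placeholders, doing the `zoom` in the workers while moving results to the GPU only in the parent is an assumed design choice, and the deprecated `Variable(..., volatile=True)` wrapper is dropped.

import torch
from scipy.ndimage import zoom
from torch.multiprocessing import Pool, set_start_method

def get_pred(args):
    img, scale, scales = args
    # Do the CPU-side rescaling in the worker; keep GPU work in the parent
    # so each worker does not need its own CUDA context.
    img_scale = zoom(img.numpy(),
                     (1., 1., scale, scale),
                     order=1,
                     prefilter=False,
                     mode='nearest')
    return torch.from_numpy(img_scale)

if __name__ == '__main__':
    try:
        set_start_method('spawn')
    except RuntimeError:
        pass

    img = torch.randn(1, 3, 64, 64)  # placeholder input, not from the question
    scales = [1, 2, 3, 4, 5]
    scale_list = [[img, scale, scales] for scale in scales]

    with Pool(processes=5) as multi_pool:
        outputs = multi_pool.map(get_pred, scale_list)

    # Move the collected results to the GPU in the parent process, if available.
    if torch.cuda.is_available():
        outputs = [o.cuda() for o in outputs]

Note the `if __name__ == '__main__':` guard: with the `spawn` start method the main module is re-imported in every child process, so the pool setup must not run at import time.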

Answer 1 (score: 0):

I recommend you read the documentation for the multiprocessing module, especially this section. You have to change the way subprocesses are created by calling `set_start_method`. Taken from the quoted documentation:

import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
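
The same idea carries over to the `Pool` used in the question: call `set_start_method('spawn')` inside the `if __name__ == '__main__':` guard, before the pool is created. A minimal sketch with plain `multiprocessing` and a placeholder `square` worker (both illustrative, not from the question):

import multiprocessing as mp

def square(x):
    # Trivial stand-in for the real per-item work.
    return x * x

if __name__ == '__main__':
    # The start method must be set before any worker processes are created.
    mp.set_start_method('spawn')
    with mp.Pool(processes=5) as pool:
        print(pool.map(square, [1, 2, 3, 4, 5]))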