Python multiprocessing loop always stops after a given number of processes

Date: 2018-07-11 22:56:20

Tags: python, multiprocessing

I want to speed up loading and interpolating my data by using multiprocessing. So far my code seems to work. The only problem is that it only ever processes as many files as there are processes. Say I have 100 files: if I run the script with 24 cores, only 24 files are loaded, but I need it to continue on to all 100 files. What am I missing here?

    print ('Number of overall VTK Files: ' + str(len(vtkArray)))
    def dataoperator(N,i,vtkArray,field):
        #for i in range(N-100,N):
        print ("Loading data " +str(vtkArray[i,1] + ' index :' + str(i)))
        points, cells, point_data, cell_data, field_data = meshio.read(ID +str(vtkArray[i,1]))
        x,y,z=np.transpose(points)
        print ("\t Interpolating data " +str(vtkArray[i,1] + ' index :' + str(i)))
        from scipy.interpolate import griddata
        if (scalar=='p'):
            p = np.transpose(point_data['p'])
            pi= griddata((x, y, z), p, (xx, yy, zz), method='nearest')
            field[i,:,:,:]= pi
        else:
            u,v,w = np.transpose(point_data['subtract(U,U_bf)'])
            if (scalar=='u'):
                ui= griddata((x, y, z), u, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= ui
            elif (scalar=='v'):
                vi= griddata((x, y, z), v, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= vi
            else:
                wi = griddata((x, y, z), w, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= wi
        del points, cells, point_data, cell_data, field_data


    import multiprocessing
    jobs = []
    for i in range(0, procs):
        process = multiprocessing.Process(target=dataoperator,args=(N, i, vtkArray,field))
        jobs.append(process)

    # Start the processes
    for j in jobs:
        j.start()

    # Ensure all of the processes have finished
    for j in jobs:
        j.join()

2 Answers:

Answer 0 (score: 1)

In your code, procs processes are created, each of which calls dataoperator exactly once, with the arguments (N, i, vtkArray, field). The index i therefore never becomes larger than procs.
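To see the flaw in isolation, here is a minimal, self-contained reproduction (illustrative only, with a dummy work function standing in for dataoperator):

# The loop runs over range(procs), so only the first procs indices
# are ever handed to a process.
import multiprocessing

def work(i):
    print("processing file index", i)    # prints only i = 0..procs-1

if __name__ == "__main__":
    procs = 4                            # stand-in for your 24 cores
    jobs = [multiprocessing.Process(target=work, args=(i,))
            for i in range(procs)]       # 100 files would need i = 0..99
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()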

Rewrite your code so that it uses a queue (I assume here that vtkArray contains 100 different cases):

# Assuming Python 3.
import multiprocessing
from queue import Empty as EmptyException

resultQueue = multiprocessing.Queue() # to return data from your workers
taskQueue = multiprocessing.Queue()
processes = []

def _worker(taskQueue, resultQueue, N, vtkArray, field):
    while True:
        try:
            # Block maximally 0.1s before EmptyException is thrown.
            i = taskQueue.get(timeout=0.1)
            ret = dataoperator(N, i, vtkArray, field)
            resultQueue.put(ret)
        except EmptyException:
            # Stop the loop.
            break

for i in range(len(vtkArray)):
    taskQueue.put(i)

for i in range(procs):
    process = multiprocessing.Process(target=_worker,
                                      args=(taskQueue, resultQueue, N, vtkArray, field),
                                      name="process%03d" % i)
    process.start()
    processes.append(process)

for p in processes:
    p.join()

try:
    rets = []
    while True:
        rets.append(resultQueue.get(block=False))
except EmptyException:
    pass

I haven't tested this code; use it only as a starting point. I strongly recommend reading the documentation of the multiprocessing module. (I'd also suggest using contexts, but that's another story.)
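For completeness, here is a minimal sketch of those two suggestions, using an explicit context together with multiprocessing.Pool instead of the hand-rolled queue (illustrative only; load_one stands in for dataoperator):

import multiprocessing

def load_one(i):
    return i, i * i        # stand-in for loading/interpolating file i

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")   # an explicit "context"
    with ctx.Pool(processes=4) as pool:
        # Pool feeds all 100 indices to the 4 workers, queueing internally,
        # which is exactly the scheduling the taskQueue above emulates.
        results = pool.map(load_one, range(100))
    print(len(results))    # 100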

Answer 1 (score: 0)

Thanks to normanius's help I was able to set up loading and interpolating multiple files correctly. However, I still cannot collect the data with resultQueue.get(): the list always stays empty, even though the print statement for ret inside the _worker function does produce results:

To clarify my intent once more:

1) Set up the multidimensional target array field = np.zeros([N,nx,ny,nz]), where N = len(vtkArray); vtkArray is simply the list of files to process (up to 100), and nx, ny, nz are the numbers of interpolation nodes.

2) dataoperator() is the function that loads and interpolates the data, now with multiprocessing support; the index i corresponds to the processor index and also determines the row of the multidimensional array field into which the loaded data is placed.

3) The code normanius suggested seems to work, since the print statement for ret in the _worker function shows the correct result for each process.

4) Nevertheless, I still cannot receive the fully "filled" multidimensional array via resultQueue.get(). Is it possible to return the multidimensional array directly, or do I have to collect the results into the list rets = []? (One possible pattern is sketched after the code below.)

Thanks in advance for your help,

    ###################################################
    # Loading all vtk files and saving to multidim array
    ###################################################
    def dataoperator(i,vtkArray,nx,ny,nz,x0,x1,y0,y1,z0,z1,scalar):
        xi = np.linspace(x0,x1,nx,endpoint=True)
        yi = np.linspace(y0,y1,ny,endpoint=True)
        zi = np.linspace(z0,z1,nz,endpoint=True)
        yy,xx,zz=np.meshgrid(yi,xi,zi)

        #Generate MultiDimensional array
        vtkArrayString=[str(x) for x in vtkArray[:,1]]
        print ("Loading data " +str(prefix) + str(vtkArrayString[i])+' with index ' + str(i))
        points, cells, point_data, cell_data, field_data = meshio.read('./VTK/' +str(vtkArrayString[i]))
        #points, point_data=loaddata(vtkArrayString,i)

        x,y,z=np.transpose(points)
        print ("\t Interpolating data " +str(vtkArrayString[i]))
        from scipy.interpolate import griddata
        if (scalar=='p'):
            p = np.transpose(point_data['p'])
            pi= griddata((x, y, z), p, (xx, yy, zz), method='nearest')
            field[i,:,:,:]= pi
        else:
            u,v,w = np.transpose(point_data['subtract(U,U_bf)'])
            if (scalar=='u'):
                ui= griddata((x, y, z), u, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= ui
            elif (scalar=='v'):
                vi= griddata((x, y, z), v, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= vi
            else:
                wi = griddata((x, y, z), w, (xx, yy, zz), method='nearest')
                field[i,:,:,:]= wi
        #return field, vtkArray
        print ("\t Finished Interpolating data " +str(vtkArrayString[i]))
        return field 

    import multiprocessing
    from queue import Empty as EmptyException


    resultQueue = multiprocessing.Queue() # to return data from your workers
    taskQueue = multiprocessing.Queue()
    processes = []

    def _worker(taskQueue, resultQueue, i, vtkArray):
        try:
            # Block maximally 0.1s before EmptyException is thrown.
            i = taskQueue.get(timeout=0.1)
            ret  = dataoperator(i,vtkArray,nx,ny,nz,x0,x1,y0,y1,z0,z1,scalar)
            print (ret)
            resultQueue.put(ret)
        except EmptyException:
            # Idle action, if needed.
            pass

    for i in range(steps):
        taskQueue.put(i)

    for i in range(procs):
        process = multiprocessing.Process(target=_worker,
                                          args=(taskQueue, resultQueue, i, vtkArray),
                                          name="process%03d" % i)
        process.start()
        processes.append(process)

    #ret2=np.zeros([N,nx,ny,nz])
    for p in processes:
        p.join()

    try:
        rets = []
        while True:
            rets.append(resultQueue.get(block=False))
            print (rets)
    except EmptyException:
        pass
    print(rets)
    field=np.array(rets)
    print(field)
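Regarding point 4: each worker process operates on its own copy of field, so assignments like field[i,:,:,:] = pi inside a worker never reach the parent's array, and returning the entire N x nx x ny x nz array per task is a heavy payload for the queue. One common pattern, sketched below with made-up shapes rather than the original data, is to return small (index, slab) pairs and let the parent reassemble field, draining the result queue before join() as the multiprocessing programming guidelines advise for processes that put items on a queue:

    import multiprocessing as mp
    import numpy as np

    def worker(task_q, result_q, shape):
        while True:
            i = task_q.get()
            if i is None:                 # sentinel: no more work
                break
            # stand-in for the interpolated pi/ui/vi/wi of file i
            slab = np.full(shape, i, dtype=float)
            result_q.put((i, slab))       # tag each result with its index

    if __name__ == "__main__":
        N, shape, procs = 10, (4, 4, 4), 3
        task_q, result_q = mp.Queue(), mp.Queue()
        for i in range(N):
            task_q.put(i)
        for _ in range(procs):
            task_q.put(None)              # one sentinel per worker
        workers = [mp.Process(target=worker, args=(task_q, result_q, shape))
                   for _ in range(procs)]
        for p in workers:
            p.start()
        field = np.zeros((N,) + shape)
        for _ in range(N):                # drain results BEFORE join()
            i, slab = result_q.get()
            field[i] = slab
        for p in workers:
            p.join()
        print(field[:, 0, 0, 0])          # row i holds the data for file i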