Python multiprocessing: losing counts after pool.join()?

Time: 2014-03-14 21:44:35

Tags: python multiprocessing

I am trying to solve a problem that involves storing the positions and counts of substrings of a given length. Since the strings can be very long (genomic sequences), I am trying to use multiple processes to speed things up. While the program runs, the variables that store these objects appear to lose all their information once the processes finish.

import numpy
import multiprocessing
from multiprocessing.managers import BaseManager, DictProxy
from collections import defaultdict, namedtuple, Counter
from functools import partial
import ctypes as c

class MyManager(BaseManager):
    pass

MyManager.register('defaultdict', defaultdict, DictProxy)

def gc_count(seq):
    return int(100 * ((seq.upper().count('G') + seq.upper().count('C') + 0.0) / len(seq)))

def getreads(length, table, counts, genome):
    genome_len = len(genome)
    for start in range(0, genome_len):
        gc = gc_count(genome[start:start+length])
        table[(length, gc)].append(start)
        counts[length, gc] += 1

if __name__ == "__main__":
    g = 'ACTACGACTACGACTACGCATCAGCACATACGCATACGCATCAACGACTACGCATACGACCATCAGATCACGACATCAGCATCAGCATCACAGCATCAGCATCAGCACTACAGCATCAGCATCAGCATCAG'
    genome_len = len(g)

    mgr = MyManager()
    mgr.start()
    m = mgr.defaultdict(list)
    mp_arr = multiprocessing.Array(c.c_double, 10*101)
    arr = numpy.frombuffer(mp_arr.get_obj())
    count = arr.reshape(10, 101)

    pool = multiprocessing.Pool(9)
    partial_getreads = partial(getreads, table=m, counts=count, genome=g)
    pool.map(partial_getreads, range(1, 10))
    pool.close()
    pool.join()

    for i in range(1, 10):
        for j in range(0, 101):
            print count[i, j]
    for i in range(1, 10):
        for j in range(0, 101):
            print len(m[(i, j)])

The final loops only print 0.0 for every element of count and 0 for the length of every list in m, so somehow I am losing all the counts. If I print the counts inside the getreads(...) function, I can see the values increasing. Conversely, printing len(table[(length, gc)]) in the body of getreads(...), or len(m[(i,j)]) in the main body, only ever yields 0.

1 Answer:

Answer 0 (score: 1):

You could also phrase the problem as a map-reduce job, which lets you avoid sharing data between the processes altogether (and I suspect it would speed up the computation). You would simply return the resulting table and counts from the function (the map step) and merge the results from all the processes (the reduce step).
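A minimal sketch of that map-reduce shape, reusing gc_count from the question (count_reads and merge_results are illustrative names, not part of the original code, and g stands for the genome string defined in the question):

from collections import defaultdict
from functools import partial
import multiprocessing

def gc_count(seq):
    # GC percentage of a sequence, as in the question.
    return int(100.0 * (seq.upper().count('G') + seq.upper().count('C')) / len(seq))

def count_reads(length, genome):
    # Map step: each process builds purely local containers,
    # so nothing has to be shared or proxied.
    table = defaultdict(list)
    counts = defaultdict(int)
    for start in range(len(genome)):
        gc = gc_count(genome[start:start+length])
        table[(length, gc)].append(start)
        counts[(length, gc)] += 1
    return table, counts

def merge_results(results):
    # Reduce step: fold the per-process tables and counts together.
    table = defaultdict(list)
    counts = defaultdict(int)
    for t, c in results:
        for key in t:
            table[key].extend(t[key])
        for key in c:
            counts[key] += c[key]
    return table, counts

if __name__ == "__main__":
    # g: the genome string from the question.
    pool = multiprocessing.Pool(9)
    results = pool.map(partial(count_reads, genome=g), range(1, 10))
    pool.close()
    pool.join()
    table, counts = merge_results(results)

The returned defaultdicts are pickled back to the parent process by pool.map, so no manager or shared Array is needed at all.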

Back to your original problem...

There is a relevant note at the bottom of the Managers documentation about modifying mutable values or items in dict and list proxies. Basically, you need to re-assign the modified object to the container proxy:

l = table[(length, gc)]
l.append(start)
table[(length, gc)] = l
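The re-assignment matters because indexing the proxy returns a plain local copy of the list: appending to that copy never reaches the manager process, while assigning it back through the proxy does.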

There is also a related Stack Overflow post about combining pool map with Array.
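The usual pattern there is to hand the shared Array to each worker through the pool's initializer rather than as a mapped argument, since multiprocessing.Array objects can only be shared by inheritance, not pickled through Pool.map. A rough sketch of that pattern (init_worker and shared_arr are hypothetical names, not from the original answer):

import multiprocessing
import ctypes as c

shared_arr = None

def init_worker(arr):
    # Runs once in each pool process; stores the inherited Array
    # where the worker functions can find it.
    global shared_arr
    shared_arr = arr

if __name__ == "__main__":
    mp_arr = multiprocessing.Array(c.c_double, 10 * 101)
    pool = multiprocessing.Pool(9, initializer=init_worker, initargs=(mp_arr,))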

Taking both of these into account, you could do the following:

def getreads(length, table, genome):
    genome_len = len(genome)

    # Re-wrap the shared Array inside the worker process; mp_arr is
    # inherited as a module-level global when the pool forks.
    arr = numpy.frombuffer(mp_arr.get_obj())
    counts = arr.reshape(10, 101)

    for start in range(0, genome_len):
        gc = gc_count(genome[start:start+length])
        # Re-assign the modified list so the change reaches the proxy.
        l = table[(length, gc)]
        l.append(start)
        table[(length, gc)] = l
        counts[length, gc] += 1


if __name__ == "__main__":
    g = 'ACTACGACTACGACTACGCATCAGCACATACGCATACGCATCAACGACTACGCATACGACCATCAGATCACGACATCAGCATCAGCATCACAGCATCAGCATCAGCACTACAGCATCAGCATCAGCATCAG'
    genome_len = len(g)

    mgr = MyManager()
    mgr.start()
    m = mgr.defaultdict(list)
    mp_arr = multiprocessing.Array(c.c_double, 10*101)
    arr = numpy.frombuffer(mp_arr.get_obj())
    count = arr.reshape(10, 101)

    pool = multiprocessing.Pool(9)
    partial_getreads = partial(getreads, table=m, genome=g)

    pool.map(partial_getreads, range(1, 10))
    pool.close()
    pool.join()

    # Re-read the shared Array after the workers have finished.
    arr = numpy.frombuffer(mp_arr.get_obj())
    count = arr.reshape(10, 101)
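As a quick sanity check (not in the original answer), you could append the question's final loops after the re-wrap, restricted to populated cells; both the counts and the proxied lists should now be non-zero:

    # Print only the (length, gc) cells that were actually populated.
    for i in range(1, 10):
        for j in range(0, 101):
            if count[i, j] > 0:
                print i, j, count[i, j], len(m[(i, j)])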