How do I multiprocess a large text file in Python?

Asked: 2011-09-12 16:05:31

Tags: text csv io python-3.x multiprocessing

I am trying to digest the rows of a DictReader object after reading a 60 MB CSV file. I asked about this here: how to chunk a csv (dict)reader object in python 3.2?. (The code is repeated below.)

However, I now realize that chunking the raw text file would also do the trick (with the DictReader parsing and line-by-line digesting happening afterwards). What I have not found is an io tool that multiprocessing.Pool can consume directly.
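One way to chunk the raw text file for a Pool, sketched below under some assumptions (the helper names `chunk_lines`, `parse_block`, and `parallel_parse` are mine, not from the question; the `Pool` context manager needs Python 3.3+): read all lines once, split the body into blocks, and prepend the header to each block so every worker can run its own `csv.DictReader`.

```python
import csv
import io
from multiprocessing import Pool

def chunk_lines(lines, n_chunks):
    """Split a list of lines into roughly n_chunks equal blocks."""
    size = max(1, len(lines) // n_chunks)
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def parse_block(args):
    """Re-attach the header so each worker can build its own DictReader."""
    header, block = args
    reader = csv.DictReader(io.StringIO(header + ''.join(block)))
    return [dict(row) for row in reader]

def parallel_parse(path, processes=4):
    """Read the raw file once, chunk its lines, and parse the blocks in parallel."""
    with open(path) as f:
        header, *body = f.readlines()
    blocks = chunk_lines(body, processes * 4)
    with Pool(processes=processes) as pool:
        parts = pool.map(parse_block, [(header, b) for b in blocks])
    # Flatten the per-block row lists back into one list of dicts.
    return [row for part in parts for row in part]
```

The key point is that raw line blocks pickle cheaply across process boundaries, whereas a live DictReader (tied to an open file handle) does not.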

Thanks for any ideas!

import csv
import time
from multiprocessing import Pool
import networkx as nx

source = open('/scratch/data.txt', 'r')

def csv2nodes(r):
    # Local aliases avoid repeated attribute lookups in the hot loop.
    strptime = time.strptime
    mktime = time.mktime
    l = []
    ppl = set()
    for row in r:
        cell = int(row['cell'])
        id = int(row['seq_ei'])
        st = mktime(strptime(row['dat_deb_occupation'], '%d/%m/%Y'))
        ed = mktime(strptime(row['dat_fin_occupation'], '%d/%m/%Y'))
        # Collect one (node, node, attrs) edge tuple per row.
        l.append((id, cell, {1: st, 2: ed}))
        # Collect the distinct person ids in a separate set.
        ppl.add(id)
    return (l, ppl)
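The strptime/mktime pair in `csv2nodes` turns the `'%d/%m/%Y'` date strings into epoch seconds; a minimal standalone check (the `to_epoch` wrapper is mine, and `mktime` interprets the struct in local time):

```python
import time

def to_epoch(datestr):
    """Convert a 'dd/mm/yyyy' string to seconds since the epoch (local time)."""
    return time.mktime(time.strptime(datestr, '%d/%m/%Y'))

start = to_epoch('12/09/2011')
end = to_epoch('13/09/2011')
assert end > start  # later date maps to a larger timestamp
```

Storing floats like these as edge attributes is much cheaper than keeping the original strings, which is the point of the conversion.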


def csv2graph(source):
    r = csv.DictReader(source, delimiter=',')
    MG = nx.MultiGraph()
    ppl = set()
    # Remember that I use integers for edge attributes, to save space! Dict above.
    # start: 1
    # end: 2
    p = Pool(processes=4)
    # A DictReader has no len(), so materialise the rows before chunking.
    rows = list(r)
    node_divisor = len(p._pool) * 4
    # chunks() is the helper from the linked question.
    node_chunks = list(chunks(rows, int(len(rows) / node_divisor)))
    pedgelists = p.map(csv2nodes, node_chunks)
    ll = []
    for l in pedgelists:
        ll.extend(l[0])   # flatten the per-chunk edge lists
        ppl.update(l[1])  # union the per-chunk id sets
    MG.add_edges_from(ll)
    return (MG, ppl)
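`p.map` returns one `(edge_list, id_set)` pair per chunk, so merging the results is just list extension plus set union. A small sketch of that reduction step (the `merge_results` name is mine):

```python
def merge_results(pairs):
    """Flatten per-chunk (edge_list, id_set) pairs into one edge list and one id set."""
    edges = []
    people = set()
    for edge_list, id_set in pairs:
        edges.extend(edge_list)   # concatenate, don't nest
        people.update(id_set)     # sets deduplicate ids across chunks
    return edges, people
```

Using `extend` rather than `append` here matters: `add_edges_from` expects a flat iterable of edge tuples, and appending whole sub-lists would nest them one level too deep.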

0 Answers:

No answers yet