Python multiprocessing: parsing, fixing, and writing a long series of CSV files

Date: 2015-03-17 16:07:23

Tags: python-2.7 csv multiprocessing

I have a series of similar CSV files (14 GB in total). I need to open each file, replace certain characters, and write the fixed version to a new file. I would like to use the processing power of my multi-core computer. I tried using mp.Pool as well as mp.Process/mp.Queue. The Pool version works, but the Queue approach produces this error:

IOError: [Errno 22] invalid mode ('r') or filename: '<multiprocessing.queues.Queue object at 0x0000000002775A90>'

Here is a simplified version of my Pool code:

import os
import pandas as pd
import multiprocessing as mp
def fixer(a_file):
    lines = []
    opened_file = open(a_file)
    for each_line in opened_file:
        lines.append(each_line.replace('mad', 'rational'))
    opened_file.close()
    df = pd.DataFrame(lines)
    #some pandas magics here
    df.to_csv(a_file[:-4] + '_fixed.csv')
if __name__ == "__main__":
    my_path = os.getcwd()
    my_files = list(os.walk(my_path))[0][2] #I just get the list of file names here
    processors = mp.cpu_count()
    pool = mp.Pool(processes = processors) # I set as many processes as processors my computer has.
    pool.map(fixer, my_files)
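As an aside, the os.walk expression above picks up every file in the directory, not just the CSVs, and fixer assumes a four-character extension. A possible variant of that file-listing step (the helper name is illustrative, not from the original post) uses a glob pattern so only .csv files reach the pool:

```python
import glob
import os

def list_csv_files(directory):
    # Match only files ending in .csv, rather than taking every
    # entry that os.walk returns for the directory.
    return sorted(glob.glob(os.path.join(directory, '*.csv')))
```

The result of list_csv_files(os.getcwd()) could then be passed to pool.map in place of my_files.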

And here is the Queue version:

import os
import pandas as pd
import multiprocessing as mp
def fixer(a_file):
    lines = []
    opened_file = open(a_file)
    for each_line in opened_file:
        lines.append(each_line.replace('mad', 'rational'))
    opened_file.close()
    df = pd.DataFrame(lines)
    #some pandas magics here
    df.to_csv(a_file[:-4] + '_fixed.csv')
if __name__ == "__main__":
    my_path = os.getcwd()
    my_files = list(os.walk(my_path))[0][2] #I just get the list of file names here
    processors = mp.cpu_count()
    queue = mp.Queue()
    for each_file in my_files:
        queue.put(each_file)
    processes = [mp.Process(target = fixer, args=(queue,)) for core in range(processors)]
    for process in processes:
        process.start()
    for process in processes:
        process.join()

I would appreciate an example that makes the Queue version work. In a second processing step, before the files are written, I need the workers to return intermediate results and do some computations on them. That is why I need the queues.

1 Answer:

Answer 0 (score: 0):

The problem in the Queue script was that I was not getting the next element from the queue, but passing the whole queue to the fixer function. This is solved by assigning the value of queue.get() to a variable inside the fixer function:

import os
import pandas as pd
import multiprocessing as mp
def fixer(a_queue):
    a_file = a_queue.get()
    lines = []
    opened_file = open(a_file)
    for each_line in opened_file:
        lines.append(each_line.replace('mad', 'rational'))
    opened_file.close()
    df = pd.DataFrame(lines)
    #some pandas magics here
    df.to_csv(a_file[:-4] + '_fixed.csv')
if __name__ == "__main__":
    my_path = os.getcwd()
    my_files = list(os.walk(my_path))[0][2] #I just get the list of file names here
    processors = mp.cpu_count()
    queue = mp.Queue()
    for each_file in my_files:
        queue.put(each_file)
    processes = [mp.Process(target = fixer, args=(queue,)) for core in range(processors)]
    for process in processes:
        process.start()
    for process in processes:
        process.join()