multiprocessing.Process causes OSError: [Errno 12] Cannot allocate memory, even when I run only 1 process

Date: 2019-03-10 12:15:52

Tags: python multithreading process multiprocessing

I am trying to process a very large text file (~11 GB) on a remote server (AWS). The processing that needs to be done on the file is quite complex; with a regular Python program, the total runtime is about a month. To reduce that runtime, I am trying to divide the work on the file among several processes. Computer specs: [screenshot in original post]

Code:

import logging
import multiprocessing


def initiate_workers(works, num_workers, output_path):
    """
    :param works: Iterable of lists of strings (the work to be processed, divided into num_workers pieces)
    :param num_workers: Number of workers
    :param output_path: Prefix for each worker's output file
    :return: A list of Process objects, each ready to process its share.
    """
    res = []
    for i in range(num_workers):
        # process_batch is the processing function
        res.append(multiprocessing.Process(target=process_batch, args=(output_path + str(i), works[i])))
    return res
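
process_batch itself is not shown in the question. For illustration, here is a minimal sketch of what such a worker function might look like, assuming one result line is written per input string; the body is a placeholder, not the asker's actual processing:

def process_batch(output_path, batch):
    """Hypothetical worker body: process each string in batch and write
    the results to this worker's own output file."""
    with open(output_path, "w") as out:
        for line in batch:
            result = line.strip().upper()  # placeholder for the real, complex processing
            out.write(result + "\n")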



def run_workers(workers):
    """
    Run the workers and wait for them to finish.
    :param workers: Iterable of Process objects
    """
    logging.info("Starting multiprocessing...")
    for i, worker in enumerate(workers):
        worker.start()
        logging.info("Started worker %d", i)
    for worker in workers:
        worker.join()
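
For context, a driver that wires these two functions together might look like the sketch below; the input file name, the chunking scheme, and num_workers = 6 are assumptions. Note that readlines() pulls the entire ~11 GB file into memory up front, which turns out to matter for the error:

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    num_workers = 6  # assumed; the asker tried 1, 6, and 14
    with open("big_file.txt") as f:   # hypothetical input file
        lines = f.readlines()         # loads the whole file into RAM
    chunk_size = len(lines) // num_workers + 1
    works = [lines[i * chunk_size:(i + 1) * chunk_size] for i in range(num_workers)]
    workers = initiate_workers(works, num_workers, "output_")
    run_workers(workers)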

I get the following traceback:

Traceback (most recent call last):
  File "w2v_process.py", line 93, in <module>
    run_workers(workers)
  File "w2v_process.py", line 58, in run_workers
    workers[i].start()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 66, in _launch
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

It makes no difference whether num_workers = 1, 6, or 14.

What am I doing wrong?

Thanks!

EDIT

Found the problem. I saw somewhere on SO that fork (the last line of the traceback) can effectively double the RAM requirement: the child gets a copy-on-write copy of the parent's address space, and the kernel's memory accounting may refuse the fork if it cannot commit that much. While processing the file I was loading it into memory, which filled up to ~18 GB, and since the machine has 30 GB of RAM in total, there really was a memory allocation error. I split the big file into smaller files (one per worker) and gave each Process object the path to its file. That way each process reads its data lazily, and everything works fine!
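
A minimal sketch of that fix, assuming the big file is split round-robin into per-worker files and that each Process now receives a path instead of an in-memory list; the file names and the split helper are illustrative, not the asker's actual code:

def split_file(input_path, num_workers, prefix):
    """Split input_path into num_workers smaller files (prefix0, prefix1, ...)
    without ever holding the whole file in memory."""
    outs = [open(prefix + str(i), "w") for i in range(num_workers)]
    with open(input_path) as f:
        for lineno, line in enumerate(f):       # iterating a file object is lazy
            outs[lineno % num_workers].write(line)
    for out in outs:
        out.close()


def process_batch(output_path, input_path):
    """Worker body after the fix: reads its share lazily from disk,
    so the parent process stays small and fork has little to copy."""
    with open(input_path) as f, open(output_path, "w") as out:
        for line in f:                          # constant-memory streaming read
            out.write(line.strip().upper() + "\n")  # placeholder processing

With this change, each Process is created with args=(output_path + str(i), prefix + str(i)), so the parent never materializes works in memory and os.fork() succeeds.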

1 answer:

Answer 0 (score: 0)

Found the problem. I saw somewhere on SO that fork (the last line of the traceback) can effectively double the RAM requirement: the child gets a copy-on-write copy of the parent's address space, and the kernel's memory accounting may refuse the fork if it cannot commit that much. While processing the file I was loading it into memory, which filled up to ~18 GB, and since the machine has 30 GB of RAM in total, there really was a memory allocation error. I split the big file into smaller files (one per worker) and gave each Process object the path to its file. That way each process reads its data lazily, and everything works fine!