How do I write a consumer for a large generator so that it doesn't leak memory?

Asked: 2018-12-27 21:22:11

Tags: python garbage-collection

TL/DR: ThreadPoolExecutor was the cause. See: Memory usage with concurrent.futures.ThreadPoolExecutor in Python3

Here is a Python script (greatly simplified) that runs all the routing algorithms and exhausts all available memory in the process.

I understand that the problem is that the main function never returns, so the objects created inside it are never reclaimed by the garbage collector.

My main question: can I write a consumer for the returned generator so that the data gets cleaned up as it is consumed? Or should I just call the garbage collector utilities explicitly?

from concurrent.futures import ThreadPoolExecutor, as_completed

import argh
import fiona
import geopandas as gpd
import pandas as pd

# thread-pool executor, following the Python documentation example
def table_process(callable, total, threads=10):
    with ThreadPoolExecutor(max_workers=threads) as e:
        # map each submitted future back to its input index
        future_map = {
            e.submit(callable, i): i
            for i in range(total)
        }

        for future in as_completed(future_map):
            if future.exception() is None:
                yield future.result()
            else:
                raise future.exception()

@argh.dispatch_command
def main():
    threads = 10
    data = pd.DataFrame(...)  # about 12K rows

    # this function routes only one slice of sources/destinations
    def _process_chunk(x: int) -> gpd.GeoDataFrame:
        # slicing is more complex, but simplified here for presentation
        # do cross-product and an http request to process the result
        result_df = _do_process(grid[x], grid)
        return result_df

    # writing to geopackage
    with fiona.open('/tmp/some_file.gpkg', 'w', driver='GPKG', schema=...) as f:
        for results_df in table_process(_process_chunk, len(data), threads):
            aggregated_df = results_df.groupby('...').aggregate({...})
            f.writerecords(aggregated_df)
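
For completeness, the second option from my question (explicit garbage-collector calls in the consumer loop) would look roughly like this. A minimal sketch, where handle() is a hypothetical stand-in for the aggregate-and-write step above; per the TL/DR, this alone does not fix the leak, because the executor itself keeps references:

import gc

# sketch of the "call the garbage collector" option;
# handle() is a hypothetical stand-in, not part of the real script
for results_df in table_process(_process_chunk, len(data), threads):
    handle(results_df)
    del results_df   # drop the consumer's reference to the chunk
    gc.collect()     # force a collection pass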

1 Answer:

Answer 0 (score: 0):

It turned out that the ThreadPoolExecutor was holding on to the workers and not releasing the memory.

The solution is here: Memory usage with concurrent.futures.ThreadPoolExecutor in Python3
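
As I understand the linked answer, the core issue is that future_map keeps a reference to every Future for the whole run, and each completed Future retains its result internally, so all ~12K result frames pile up in memory before the generator finishes. A minimal sketch of one way around it (my adaptation, not the linked code verbatim): keep only a bounded window of futures in flight, and drop each Future as soon as its result is yielded.

from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def table_process_bounded(fn, total, threads=10, window=20):
    # yields results while keeping at most `window` futures alive,
    # so completed results can be garbage-collected promptly
    with ThreadPoolExecutor(max_workers=threads) as e:
        pending = set()
        items = iter(range(total))
        while True:
            # top the in-flight window back up
            for i in items:
                pending.add(e.submit(fn, i))
                if len(pending) >= window:
                    break
            if not pending:
                break
            # block until at least one future finishes, then release it
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                if future.exception() is not None:
                    raise future.exception()
                yield future.result()

The consumer loop stays the same; only the generator changes, and peak memory is bounded by the window size rather than by the total number of chunks.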