Triggering Dask workers to release memory

Time: 2019-04-30 15:03:02

Tags: dask dask-distributed

I'm using Dask to distribute the computation of some functions. My general layout looks like this:


    from dask.distributed import Client, LocalCluster, as_completed

    cluster = LocalCluster(processes=config.use_dask_local_processes,
                           n_workers=1,
                           threads_per_worker=1,
                           )
    client = Client(cluster)
    cluster.scale(config.dask_local_worker_instances)

    work_futures = []

    # For each group do work
    for group in groups:
        work_futures.append(client.submit(_work, group))

    # Wait till the work is done
    for done_work in as_completed(work_futures, with_results=False):
        try:
            result = done_work.result()
        except Exception as error:
            log.exception(error)

My problem is that with a large number of jobs I tend to hit the memory limit. I see a lot of:

    distributed.worker - WARNING - Memory use is high but worker has no data to store to disk.  Perhaps some other process is leaking memory?  Process memory: 1.15 GB -- Worker memory limit: 1.43 GB

It seems that the futures aren't releasing their memory. How can I trigger that? I'm using dask==1.2.0 on Python 2.7.

1 Answer:

Answer 0 (score: 0)

Results are held by the scheduler for as long as a client holds a future pointing to them. Memory is released when (or shortly after) the last future is garbage-collected by Python. In your case you keep all of your futures in a list for the whole duration of the computation. You could try modifying your loop:

    for done_work in as_completed(work_futures, with_results=False):
        try:
            result = done_work.result()
        except Exception as error:
            log.exception(error)
        # Explicitly drop this client's reference to the finished future
        # so the scheduler is free to forget its result.
        done_work.release()

Or replace the as_completed loop with something that explicitly removes the futures from the list as they finish.
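
A minimal sketch of that alternative, reusing the `client`, `_work`, and `groups` names from the question: keep the futures in a set and discard each one as soon as it completes, so that after an iteration ends nothing in your own code still references the finished future.

    work_futures = {client.submit(_work, group) for group in groups}

    for done_work in as_completed(list(work_futures), with_results=False):
        # Discard our reference first, so once this iteration ends the
        # future can be garbage-collected and its result released.
        work_futures.discard(done_work)
        try:
            result = done_work.result()
        except Exception as error:
            log.exception(error)

Either way the principle is the same: the scheduler only frees a result once no client-side future refers to it.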