Re-using intermediate results in Dask (mixing delayed and dask.dataframe)

Time: 2016-09-09 12:17:20

Tags: python dask

Following the answer I received to an earlier question, I wrote an ETL procedure that looks as follows:

import pandas as pd
import dask
from dask import delayed
from dask import dataframe as dd

def preprocess_files(filename):
    """Reads file, collects metadata and identifies lines not containing data.
    """
    ...
    return filename, metadata, skiprows

def load_file(filename, skiprows):
    """Loads the file into a pandas dataframe, skipping lines not containing data."""
    ...
    return df

def process_errors(filename, skiplines):
    """Calculates error metrics based on the information 
    collected in the pre-processing step
    """
    ...

def process_metadata(filename, metadata):
    """Analyses metadata collected in the pre-processing step."""
    ...

values = [delayed(preprocess_files)(fn) for fn in file_names]
filenames = [value[0] for value in values]
metadata = [value[1] for value in values]
skiprows = [value[2] for value in values]

error_results = [delayed(process_errors)(arg[0], arg[1]) 
                 for arg in zip(filenames, skiprows)]
meta_results = [delayed(process_metadata)(arg[0], arg[1]) 
                for arg in zip(filenames, metadata)]

dfs = [delayed(load_file)(arg[0], arg[1]) 
       for arg in zip(filenames, skiprows)]
... # several delayed transformations defined on individual dataframes

# finally: categorize several dataframe columns and write them to HDF5
dfs = dd.from_delayed(dfs, meta=metaframe)
dfs = dfs.categorize(columns=[...])  # I would like to delay this
dfs.to_hdf(hdf_file_name, '/data',...)  # I would also like to delay this

all_operations = error_results + meta_results # + delayed operations on dask dataframe
# trigger all computation at once, 
# allow re-using of data collected in the pre-processing step.
dask.compute(*all_operations)

The ETL process goes through several steps:

  1. Pre-process the files, identifying lines that do not contain any relevant data and parsing the metadata.
  2. Using the collected information, process the error information and the metadata, and load the data into pandas dataframes in parallel (re-using the results of the pre-processing step). The operations (process_metadata, process_errors, load_file) have a shared data dependency, since they all use information collected in the pre-processing step. Ideally the pre-processing step would run only once and its results would be shared across processes.
  3. Finally, collect the pandas dataframes into a dask dataframe, categorize them and write them to HDF.

    The problem I have is that categorize and to_hdf trigger computation immediately, discarding the metadata and error data that would otherwise be processed further by process_errors and process_metadata.

    I have been told that delaying the operations on the dask dataframe causes problems, which is why I would be very interested to know whether it is possible to trigger the whole computation (processing metadata, processing errors, loading the dataframes, transforming the dataframes and storing them in HDF format) while allowing the different processes to share the data collected in the pre-processing phase.

1 Answer:

Answer 0 (score: 4):

There are two ways to solve your problem:

  1. Delay everything
  2. Compute in stages

    Delay everything

    The to_hdf call accepts a compute= keyword argument, which you can set to False. If it is False, to_hdf returns a dask.delayed value that you can compute whenever you like.
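
    As a small illustration, here is one way that route could be wired up (an assumption reusing the names from the question's code, not code from the original post):

    import dask
    from dask import dataframe as dd

    dfs = dd.from_delayed(dfs, meta=metaframe)
    # note: a categorize() call would still compute immediately (see below)
    write_task = dfs.to_hdf(hdf_file_name, '/data', compute=False)  # returns a dask.delayed value

    # a single compute call lets every branch share the pre-processed data
    dask.compute(write_task, *error_results, *meta_results)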

    However, if you want to keep using dask.dataframe, you will need to compute the categorize call immediately. We are unable to create a consistent dask.dataframe of categoricals without looking at the data first. Recent improvements in Pandas around unioning categoricals will let us change this in the future, but for now you are stuck. If this is a blocker for you, then you will have to switch to dask.delayed and handle that part manually with df.to_delayed()
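
    If you do go the manual route, a rough sketch could look like this (categorize_partition and the column name are hypothetical; note that categories built per partition are not guaranteed to be consistent across partitions, which is exactly the limitation described above):

    from dask import delayed

    partitions = dfs.to_delayed()  # one delayed pandas DataFrame per partition

    def categorize_partition(df, columns):
        # hypothetical helper: cast the given columns to the pandas category dtype
        return df.assign(**{col: df[col].astype('category') for col in columns})

    categorized = [delayed(categorize_partition)(part, ['some_column'])
                   for part in partitions]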

    Compute in stages

    If you use the distributed scheduler, you can stage your computation with the .persist method.

    from dask.distributed import Executor
    e = Executor()  # make a local "cluster" on your laptop
    
    delayed_values = e.persist(*delayed_values)
    
    ... define further computations on delayed values ...
    
    results = dask.compute(results)  # compute as normal
    

    This lets you trigger part of the computation while still continuing to define further computations on top of it. The values you persist will stay in memory.
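
    Applied to the pipeline in the question, that could look roughly like this (a sketch assuming the distributed scheduler and reusing the names from the question's code):

    values = [delayed(preprocess_files)(fn) for fn in file_names]
    values = e.persist(values)  # pre-processing runs once; the results stay in memory

    # everything defined below reuses the persisted results instead of recomputing them
    filenames = [v[0] for v in values]
    metadata = [v[1] for v in values]
    skiprows = [v[2] for v in values]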