How do I let Dask know that an index is sorted?

Date: 2017-06-23 15:02:04

Tags: dask

According to this answer, a Dask dataframe can perform smart indexing if Dask knows that the dataframe's index is sorted.

How do I let Dask know that the index is sorted?

In my particular case, I'm doing something like this:

import dask.dataframe

# pdf/ddf avoid shadowing the conventional pd/dd module aliases
for source in sources:
    # This df has a DatetimeIndex that I know to be sorted
    pdf = load_pandas_df_from_some_source(source)
    ddf = dask.dataframe.from_pandas(pdf, chunksize=foo)
    ddf.to_hdf(some_unique_filename, '/data')

However, when I do something like this, indexing is very slow:

    ddf = dask.dataframe.read_hdf(some_glob, '/data')
    print(ddf.loc['2001-1-1':'2001-1-2'])

I suspect that Dask does not know that my dataframe is already sorted. How can I let it know?

2 Answers:

Answer 0 (score: 2)

When loading from HDF, the data values of the index within each partition are not necessarily known. These values are used to construct the dataframe's divisions attribute, which is what speeds up lookups.
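You can check whether Dask has this information by inspecting the known_divisions and divisions attributes (a quick sketch, assuming ddf is the dataframe returned by read_hdf above):

    # True only if Dask has concrete division values for this dataframe
    print(ddf.known_divisions)
    # The partition boundary values themselves (a tuple of Nones if unknown)
    print(ddf.divisions)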

For a dataset like yours, you should be able to pass sorted_index=True and get the behavior you want.
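For example, a minimal sketch under the question's setup (some_glob as above):

    import dask.dataframe

    # sorted_index=True tells read_hdf that each file's index is already
    # sorted, so Dask can record divisions and use them for fast .loc slicing
    ddf = dask.dataframe.read_hdf(some_glob, '/data', sorted_index=True)
    print(ddf.loc['2001-1-1':'2001-1-2'])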

As @kuanb suggested, you may want to try storing the data in the Parquet format, which is purpose-built for tabular data. Whether it offers better performance depends on the nature of the data (HDF was written primarily for numeric data) and the use case, ymmv; Parquet does, however, generally do a good job of keeping metadata statistics on the data values within each partition.
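A sketch of that alternative, assuming a Parquet engine such as fastparquet or pyarrow is installed (the file name is illustrative):

    import dask.dataframe

    # Write the dataframe to Parquet; per-partition statistics on the
    # index are stored alongside the data
    ddf.to_parquet('data.parquet')

    # Read it back; the sorted index and its divisions can be recovered
    ddf2 = dask.dataframe.read_parquet('data.parquet')
    print(ddf2.loc['2001-1-1':'2001-1-2'])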

Answer 1 (score: 0)

As @mdurant suggests, using the sorted_index= keyword to the read_hdf function is the ideal choice here.

More generally, you can use the set_index function to set the index on any dataframe, even one that was created by other means. This function has keywords that can make it efficient if your new index column is already sorted and you already know the separating values between partitions. Here is the current docstring. The last example may be of interest to you:

"""Set the DataFrame index (row labels) using an existing column

This realigns the dataset to be sorted by a new column.  This can have a
significant impact on performance, because joins, groupbys, lookups, etc.
are all much faster on that column.  However, this performance increase
comes with a cost: sorting a parallel dataset requires expensive shuffles.
Often we ``set_index`` once directly after data ingest and filtering and
then perform many cheap computations off of the sorted dataset.

This function operates exactly like ``pandas.set_index`` except with
different performance costs (it is much more expensive).  Under normal
operation this function does an initial pass over the index column to
compute approximate quantiles to serve as future divisions.  It then passes
over the data a second time, splitting up each input partition into several
pieces and sharing those pieces to all of the output partitions now in
sorted order.

In some cases we can alleviate those costs, for example if your dataset is
sorted already then we can avoid making many small pieces or if you know
good values to split the new index column then we can avoid the initial
pass over the data.  For example if your new index is a datetime index and
your data is already sorted by day then this entire operation can be done
for free.  You can control these options with the following parameters.

Parameters
----------
df: Dask DataFrame
index: string or Dask Series
npartitions: int, None, or 'auto'
    The ideal number of output partitions.   If None use the same as
    the input.  If 'auto' then decide by memory use.
shuffle: string, optional
    Either ``'disk'`` for single-node operation or ``'tasks'`` for
    distributed operation.  Will be inferred by your current scheduler.
sorted: bool, optional
    If the index column is already sorted in increasing order.
    Defaults to False
divisions: list, optional
    Known values on which to separate index values of the partitions.
    See http://dask.pydata.org/en/latest/dataframe-design.html#partitions
    Defaults to computing this with a single pass over the data. Note
    that if ``sorted=True``, specified divisions are assumed to match
    the existing partitions in the data. If this is untrue, you should
    leave divisions empty and call ``repartition`` after ``set_index``.
compute: bool
    Whether or not to trigger an immediate computation. Defaults to False.

Examples
--------
>>> df2 = df.set_index('x')  # doctest: +SKIP
>>> df2 = df.set_index(df.x)  # doctest: +SKIP
>>> df2 = df.set_index(df.timestamp, sorted=True)  # doctest: +SKIP

A common case is when we have a datetime column that we know to be
sorted and is cleanly divided by day.  We can set this index for free
by specifying both that the column is pre-sorted and the particular
divisions along which it is separated:

>>> import pandas as pd
>>> divisions = pd.date_range('2000', '2010', freq='1D')
>>> df2 = df.set_index('timestamp', sorted=True, divisions=divisions)  # doctest: +SKIP
    """