Properly shifting an irregular pandas time series

Asked: 2016-07-02 02:45:19

Tags: python pandas

What is the correct way to shift this time series and re-align the data to the same index? For example: how would I generate a data frame with the same index values as "data", but where the value at each point is the last value seen as of 0.4 seconds after the index timestamp?

I would expect this to be a fairly common operation for anyone working with irregular and mixed-frequency time series ("what was the last value as of an arbitrary time offset from now?"), so I would expect (hope for?) this functionality to exist already...

Suppose I have the following data frame:

>>> import pandas as pd
>>> import numpy as np
>>> import time
>>> 
>>> x = np.arange(10)
>>> #t = time.time() + x + np.random.randn(10)
... t = np.array([1467421851418745856, 1467421852687532544, 1467421853288187136,
...        1467421854838806528, 1467421855148979456, 1467421856415879424,
...        1467421857259467264, 1467421858375025408, 1467421859019387904,
...        1467421860235784448])
>>> data = pd.DataFrame({"x": x})
>>> data.index = pd.to_datetime(t)
>>> data["orig_time"] = data.index
>>> data
                               x                     orig_time
2016-07-02 01:10:51.418745856  0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544  1 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136  2 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528  3 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.148979456  4 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424  5 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264  6 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408  7 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904  8 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448  9 2016-07-02 01:11:00.235784448

I can write the following function:

def time_shift(df, delta):
    """Shift a DataFrame object such that each row contains the last known
    value as of the time `df.index + delta`."""
    lookup_index = df.index + delta
    mapped_indices = np.searchsorted(df.index, lookup_index, side='left')
    # Clamp bounds to allow us to index into the original DataFrame
    cleaned_indices = np.clip(mapped_indices, 0,
                              len(mapped_indices) - 1)
    # Since searchsorted gives us an insertion point, we'll generally
    # have to shift back by one to get the last value prior to the
    # insertion point. I choose to keep contemporaneous values,
    # rather than looking back one, but that's a matter of personal
    # preference.
    lookback = np.where(lookup_index < df.index[cleaned_indices], 1, 0)
    # And remember to re-clip to avoid index errors...
    cleaned_indices = np.clip(cleaned_indices - lookback, 0,
                              len(mapped_indices) - 1)

    # Copy so the fill-ins below don't modify a view of the original frame
    new_df = df.iloc[cleaned_indices].copy()
    # We don't know what the value was before the beginning...
    new_df.iloc[lookup_index < df.index[0]] = np.nan
    # We don't know what the value was after the end...
    new_df.iloc[mapped_indices >= len(mapped_indices)] = np.nan
    new_df.index = df.index

    return new_df

which produces the desired behavior:

>>> time_shift(data, pd.Timedelta('0.4s'))
                                 x                     orig_time
2016-07-02 01:10:51.418745856  0.0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544  1.0 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136  2.0 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528  4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:55.148979456  4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424  5.0 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264  6.0 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408  7.0 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904  8.0 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448  NaN                           NaT

As you can see, getting this computation exactly right is a bit tricky, so I would prefer a supported implementation over rolling my own.

This doesn't work: it truncates the shift argument and moves every row by 0 positions:

>>> data.shift(0.4)
                                 x                     orig_time
2016-07-02 01:10:51.418745856  0.0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544  1.0 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136  2.0 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528  3.0 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.148979456  4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424  5.0 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264  6.0 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408  7.0 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904  8.0 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448  9.0 2016-07-02 01:11:00.235784448

This just adds an offset to data.index...:

>>> data.shift(1, pd.Timedelta("0.4s"))
                               x                     orig_time
2016-07-02 01:10:51.818745856  0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:53.087532544  1 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.688187136  2 2016-07-02 01:10:53.288187136
2016-07-02 01:10:55.238806528  3 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.548979456  4 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.815879424  5 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.659467264  6 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.775025408  7 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.419387904  8 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.635784448  9 2016-07-02 01:11:00.235784448

And this results in NaN at every point in time, since none of the shifted timestamps exactly matches an entry in the original index and reindex only does exact lookups unless a fill method is given:

>>> data.shift(1, pd.Timedelta("0.4s")).reindex(data.index)
                                x orig_time
2016-07-02 01:10:51.418745856 NaN       NaT
2016-07-02 01:10:52.687532544 NaN       NaT
2016-07-02 01:10:53.288187136 NaN       NaT
2016-07-02 01:10:54.838806528 NaN       NaT
2016-07-02 01:10:55.148979456 NaN       NaT
2016-07-02 01:10:56.415879424 NaN       NaT
2016-07-02 01:10:57.259467264 NaN       NaT
2016-07-02 01:10:58.375025408 NaN       NaT
2016-07-02 01:10:59.019387904 NaN       NaT
2016-07-02 01:11:00.235784448 NaN       NaT

2 answers:

Answer 0 (score: 1):

As in this question, what you are asking for is an asof join. Fortunately, the next release of pandas (coming out soon) will have it built in! Until then, you can use a pandas Series to determine the value you want.

The original DataFrame:

In [44]: data
Out[44]: 
                               x
2016-07-02 13:27:05.249071616  0
2016-07-02 13:27:07.280549376  1
2016-07-02 13:27:08.666985984  2
2016-07-02 13:27:08.410521856  3
2016-07-02 13:27:09.896294912  4
2016-07-02 13:27:10.159203328  5
2016-07-02 13:27:10.492438784  6
2016-07-02 13:27:13.790925312  7
2016-07-02 13:27:13.896483072  8
2016-07-02 13:27:13.598456064  9

Converted to a Series:

In [45]: ser = pd.Series(data.x, data.index)

In [46]: ser
Out[46]: 
2016-07-02 13:27:05.249071616    0
2016-07-02 13:27:07.280549376    1
2016-07-02 13:27:08.666985984    2
2016-07-02 13:27:08.410521856    3
2016-07-02 13:27:09.896294912    4
2016-07-02 13:27:10.159203328    5
2016-07-02 13:27:10.492438784    6
2016-07-02 13:27:13.790925312    7
2016-07-02 13:27:13.896483072    8
2016-07-02 13:27:13.598456064    9
Name: x, dtype: int64

Using the asof function:

In [47]: ser.asof(ser.index + pd.Timedelta('4s'))
Out[47]: 
2016-07-02 13:27:09.249071616    3
2016-07-02 13:27:11.280549376    6
2016-07-02 13:27:12.666985984    6
2016-07-02 13:27:12.410521856    6
2016-07-02 13:27:13.896294912    7
2016-07-02 13:27:14.159203328    9
2016-07-02 13:27:14.492438784    9
2016-07-02 13:27:17.790925312    9
2016-07-02 13:27:17.896483072    9
2016-07-02 13:27:17.598456064    9
Name: x, dtype: int64

(I used 4 seconds to make the example easier to read.)
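
For reference, and not part of the original answer: once pd.merge_asof is available (it shipped with pandas 0.19), the same lookup can be written as an explicit asof join against the question's original "data" frame. This is only a sketch; the column names "lookup_time" and "obs_time" are invented for the example.

# Left table: the original timestamps, pushed forward by the offset we want to look up.
left = pd.DataFrame({"lookup_time": data.index + pd.Timedelta("0.4s")})
# Right table: the original observations, with the index turned into an ordinary column.
right = data.reset_index().rename(columns={"index": "obs_time"})
# For each lookup_time, take the last row whose obs_time is <= lookup_time
# (backward search with exact matches allowed is merge_asof's default).
joined = pd.merge_asof(left, right, left_on="lookup_time", right_on="obs_time")
# Re-attach the original index so each row lines up with `data`.
joined.index = data.index

One caveat: merge_asof returns NaN for lookup times before the first observation but, unlike the time_shift function above, it carries the last value forward past the end of the data rather than producing NaN there.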

Answer 1 (score: 0):

Using chrisaycock's answer: your data points are mostly spaced further apart than the 0.4 s offset, so your result was already correct. A 1 s offset shows that it works:

pd.Series(x, data.index).asof(data.index + pd.Timedelta('1s'))

#     2016-07-02 01:10:52.418745856    0
#     2016-07-02 01:10:53.687532544    2
#     2016-07-02 01:10:54.288187136    2
#     2016-07-02 01:10:55.838806528    4
#     2016-07-02 01:10:56.148979456    4
#     2016-07-02 01:10:57.415879424    6
#     2016-07-02 01:10:58.259467264    6
#     2016-07-02 01:10:59.375025408    8
#     2016-07-02 01:11:00.019387904    8
#     2016-07-02 01:11:01.235784448    9
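
To get back exactly the shape the question asked for (the same index as "data", with the shifted values), the asof result can be re-wrapped with the original index. A minimal sketch, not from the original answer, reusing the question's "data" frame and the 1 s offset:

lookup = data.index + pd.Timedelta('1s')
# Series.asof returns the last known value as of each lookup time,
# indexed by those lookup times; strip that index and re-attach the original one.
shifted = pd.Series(data['x'].asof(lookup).values, index=data.index, name='x')
# asof carries the last value forward past the end of the data;
# mask those rows if, like time_shift above, you want NaN there instead.
shifted = shifted.where(lookup <= data.index[-1])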