Pandas: df.ffill, adding two columns of different shapes

Posted: 2014-03-07 07:46:44

Tags: python pandas

I have a csv file with entries like these:

Timestamp       Spread
34200.405839234 0.18
34201.908794218 0.17
...

The CSV file is available here.

I imported the csv file as follows:

df = pd.read_csv('stock1.csv', index_col=None, usecols=['Timestamp','Spread'], header=0, dtype=np.float)

df=DataFrame(df)

Then I reformatted the Timestamp column as follows:

df['Time'] = (df.Timestamp * 1e9).astype('timedelta64[ns]')+ pd.to_datetime(date)

So the Time column of my dataframe df looks like this:

815816   2011-01-10 15:59:59.970055123
815815   2011-01-10 15:59:59.945755073
815814   2011-01-10 15:59:59.914206190
815813   2011-01-10 15:59:59.913996055
815812   2011-01-10 15:59:59.889747847
815811   2011-01-10 15:59:59.883946409
815810   2011-01-10 15:59:59.881460044
Name: Time, Length: 110, dtype: datetime64[ns]
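To make that conversion concrete, here is a minimal runnable sketch on made-up values (the date string and the two sample rows are illustrative only; `pd.to_timedelta` is used here as the modern equivalent of the `astype('timedelta64[ns]')` cast above):

```python
import pandas as pd

# Illustrative values only; 'date' stands in for the trading date.
date = '2011-01-10'
df = pd.DataFrame({'Timestamp': [34200.405839234, 34201.908794218],
                   'Spread': [0.18, 0.17]})

# Timestamp holds seconds since midnight: turn it into a timedelta
# and add it to the date to get an absolute datetime.
df['Time'] = pd.to_timedelta(df['Timestamp'], unit='s') + pd.to_datetime(date)
print(df['Time'])
```

Since 34200 seconds is 9.5 hours, the first value lands at 09:30:00, the market open.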

I also have another column, in a different dataframe, constructed as follows:

start = pd.Timestamp(date+'T09:30:00')
end = pd.Timestamp(date+'T16:00:00')
x=pd.date_range(start,end,freq='S')
x=pd.DataFrame(x)

print x

4993 2011-01-10 10:53:13
4994 2011-01-10 10:53:14
4995 2011-01-10 10:53:15
4996 2011-01-10 10:53:16
4997 2011-01-10 10:53:17
4998 2011-01-10 10:53:18
4999 2011-01-10 10:53:19
[23401 rows x 1 columns]

I would like to do the following:

data = df.reindex(df.Time + x)
data = data.ffill()

and I get

ValueError: operands could not be broadcast together with shapes (2574110) (110)

which of course has to do with the length of x. How can I "reshape" x so that I can merge the two? I looked online for ways to modify the length, but without success.

1 Answer:

Answer 0 (score: 2)

You just need to set the index first; otherwise what you are doing is correct. You cannot directly add a Series of datetimes (e.g. df.Time) and an index range. You want a union (so you can either be explicit and use .union, or convert to an Index; by default, '+' between two indexes is a union).
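Note that in current pandas the '+' shorthand between two indexes no longer performs a set union, so the union should be explicit. A minimal sketch on toy times:

```python
import pandas as pd

# A regular one-second grid and two irregular trade times; the union
# keeps every distinct timestamp from both, sorted.
grid = pd.date_range('2014-01-01 00:00:00', '2014-01-01 00:00:05', freq='s')
trades = pd.DatetimeIndex(['2014-01-01 00:00:00.946',
                           '2014-01-01 00:00:01.127'])
new_range = grid.union(trades)
print(len(new_range))  # 6 grid points + 2 trade times = 8
```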

In [35]: intervals = np.random.randint(0,1000,size=100).cumsum()

In [36]: df = DataFrame({'time' : [ Timestamp('20140101')+pd.offsets.Milli(i) for i in intervals ],
                         'value' : np.random.randn(len(intervals))})

In [37]: df.head()
Out[37]: 
                        time     value
0 2014-01-01 00:00:00.946000 -0.322091
1 2014-01-01 00:00:01.127000  0.887412
2 2014-01-01 00:00:01.690000  0.537789
3 2014-01-01 00:00:02.332000  0.311556
4 2014-01-01 00:00:02.335000  0.273509

[5 rows x 2 columns]

In [40]: date_range('20140101 00:00:00','20140101 01:00:00',freq='s')
Out[40]: 
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3601, Freq: S, Timezone: None

In [38]: new_range = date_range('20140101 00:00:00','20140101 01:00:00',freq='s') + Index(df.time)

In [39]: new_range
Out[39]: 
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3701, Freq: None, Timezone: None

In [42]: df.set_index('time').reindex(new_range).head()
Out[42]: 
                               value
2014-01-01 00:00:00              NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01              NaN
2014-01-01 00:00:01.127000  0.887412
2014-01-01 00:00:01.690000  0.537789

[5 rows x 1 columns]

In [44]: df.set_index('time').reindex(new_range).ffill().head(10)
Out[44]: 
                               value
2014-01-01 00:00:00              NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01        -0.322091
2014-01-01 00:00:01.127000  0.887412
2014-01-01 00:00:01.690000  0.537789
2014-01-01 00:00:02         0.537789
2014-01-01 00:00:02.332000  0.311556
2014-01-01 00:00:02.335000  0.273509
2014-01-01 00:00:03         0.273509
2014-01-01 00:00:03.245000 -1.034595

[10 rows x 1 columns]

From the provided csv file (which, FYI, is named 'stocksA.csv'): you don't need to do df=DataFrame(df), because read_csv already returns a frame, and you also don't need to specify the dtype.
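The import can therefore be trimmed down; a sketch using StringIO in place of the file on disk (the filename and column names follow the question):

```python
import io
import pandas as pd

# StringIO stands in for 'stocksA.csv'; read_csv already returns a
# DataFrame and infers float64 for both columns, so neither
# DataFrame(df) nor an explicit dtype is needed.
csv_text = """Timestamp,Spread
34200.405839234,0.18
34201.908794218,0.17
"""
df = pd.read_csv(io.StringIO(csv_text), usecols=['Timestamp', 'Spread'])
print(df.dtypes)
```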

You have duplicates in the Time column:

In [34]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp    25954 non-null float64
Spread       25954 non-null float64
dtypes: float64(2)

In [35]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).ffill().info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp    49354 non-null float64
Spread       49354 non-null float64
dtypes: float64(2)

In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 45782 entries, 0 to 45781
Data columns (total 3 columns):
Timestamp    45782 non-null float64
Spread       45782 non-null int64
Time         45782 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)

In [37]: df.drop_duplicates(['Time','Spread']).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 26171 entries, 0 to 45780
Data columns (total 3 columns):
Timestamp    26171 non-null float64
Spread       26171 non-null int64
Time         26171 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)

So the simplest thing is to simply drop them and reindex to the new times that you want. If you want to keep the Time/Spread duplicates, this becomes a much more complicated problem; you would have to use a multi-index and loop over the duplicates, or, better, just resample the data (say to 'S').
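The resampling route could look like this (a sketch on made-up duplicate data: collapse everything onto a one-second grid by averaging, then forward-fill the empty seconds):

```python
import pandas as pd

# Made-up ticks: two at the same sub-second instant, one later in the
# same second, and one after a one-second gap.
times = pd.to_datetime(['2011-01-10 09:30:00.1', '2011-01-10 09:30:00.1',
                        '2011-01-10 09:30:00.9', '2011-01-10 09:30:02.5'])
spread = pd.Series([0.18, 0.20, 0.16, 0.10], index=times, name='Spread')

# Mean Spread per second; the empty second 09:30:01 is forward-filled.
per_second = spread.resample('1s').mean().ffill()
print(per_second)
```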

Here is how to deal with the duplicate data: groupby the duplicated column and perform an operation (here mean). You should do this before the reindexing step.

In [13]: df.groupby('Time')['Spread'].mean()
Out[13]: 
Time
2011-01-10 09:29:59.999400       2800
2011-01-10 09:30:00.000940       3800
2011-01-10 09:30:00.010130       1100
2011-01-10 09:30:00.018500       1100
2011-01-10 09:30:00.020060       1100
2011-01-10 09:30:00.020980       1100
2011-01-10 09:30:00.024570        100
2011-01-10 09:30:00.024769999     100
2011-01-10 09:30:00.028210       1100
2011-01-10 09:30:00.037950       1100
2011-01-10 09:30:00.038880       1100
2011-01-10 09:30:00.039140       1100
2011-01-10 09:30:00.040410       1100
2011-01-10 09:30:00.041510        100
2011-01-10 09:30:00.042530        100
...
2011-01-10 09:40:32.850540       300
2011-01-10 09:40:32.862300       300
2011-01-10 09:40:32.937410       300
2011-01-10 09:40:33.001750       300
2011-01-10 09:40:33.129500       300
2011-01-10 09:40:33.129650       300
2011-01-10 09:40:33.131560       300
2011-01-10 09:40:33.136100       200
2011-01-10 09:40:33.136310       200
2011-01-10 09:40:33.136560       200
2011-01-10 09:40:33.137590       200
2011-01-10 09:40:33.137640       200
2011-01-10 09:40:33.137850       200
2011-01-10 09:40:33.138840       200
2011-01-10 09:40:33.154219999    200
Name: Spread, Length: 25954
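Putting the pieces together, the whole recipe can be sketched end-to-end on toy data (column names follow the question; the values are illustrative):

```python
import pandas as pd

# Toy frame with a duplicated time, as in the question's data.
times = pd.to_datetime(['2011-01-10 09:30:00.5', '2011-01-10 09:30:00.5',
                        '2011-01-10 09:30:01.2'])
df = pd.DataFrame({'Time': times, 'Spread': [0.18, 0.20, 0.16]})

# 1) resolve duplicate times by averaging Spread,
# 2) build the union of a one-second grid and the trade times,
# 3) reindex onto that union and forward-fill.
spread = df.groupby('Time')['Spread'].mean()
grid = pd.date_range('2011-01-10 09:30:00', '2011-01-10 09:30:02', freq='s')
data = spread.reindex(grid.union(spread.index)).ffill()
print(data)
```

The grid point before the first trade stays NaN, since there is nothing earlier to fill from; every later grid point carries the last observed (averaged) Spread forward.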