How do I filter a 5-minute DataFrame down to a 15-minute DataFrame?

Asked: 2018-03-13 11:58:30

Tags: python pandas

I have Bitcoin prices in a csv file. The file is updated every 5 minutes, and the Timestamp column is stored as a Unix epoch. I have loaded this data into a pandas DataFrame.

What I want to do is convert this data from 5-minute bars to 15-, 30-, 60-, etc. minute bars. My idea is to repeatedly add x seconds to the starting row's timestamp and read the matching rows of the file into a new DataFrame.

To be clear, I need the exact data at those timestamps. For example:

1006.54999992,1483304400,1006.54999997,1004.00000002,1005.84692623,2686.70823136
1002.69522396,1483305300,1003.50000156,1002.03739724,1003.50000156,1066.56568909
1004.9,1483306200,1004.9,1003.50000155,1003.50000155,4978.96836354

Here is the sample data:

Close,Timestamp,High,Low,Open,Volume
1006.54999992,1483304400,1006.54999997,1004.00000002,1005.84692623,2686.70823136
1005.40527258,1483304700,1006.54999994,1004.00000001,1006.50831019,11553.13463685
1003.50000156,1483305000,1006.54999994,1002.42767301,1005.40527258,24319.95180383
1002.69522396,1483305300,1003.50000156,1002.03739724,1003.50000156,1066.56568909
1001.97782306,1483305600,1002.69522396,1001.97782306,1002.69522396,2074.17726448
1003.50000155,1483305900,1003.50000155,1001.84692611,1001.84692612,3281.67078015
1004.9,1483306200,1004.9,1003.50000155,1003.50000155,4978.96836354
1006.49999618,1483306500,1006.5499955,1003.50000164,1003.50000347,6070.86273057
1006.99999998,1483306800,1007.0,1004.30668523,1004.30668523,723.41389783
1007.98333891,1483307100,1008.151,1006.54999328,1006.99999999,1357.21576969
1008.23099997,1483307400,1008.54999326,1007.0,1007.0,459.99976456
1005.99999956,1483307700,1008.231,1004.33924087,1007.391,6139.66580632
1007.18578657,1483308000,1007.4,1004.79999999,1005.99999939,11867.90775651
1003.9999994,1483308300,1007.18578594,1001.84692611,1007.18578594,27285.53584028
1001.00000001,1483308600,1003.99999997,1000.2,1003.9999991,11068.8150516
1005.99669899,1483308900,1007.40360648,1001.84692611,1001.84692611,13223.84822808
1004.99999988,1483309200,1005.99669893,1003.00000001,1003.14143239,3069.76051701
1004.00000001,1483309500,1005.99669899,1004.00000001,1004.00000001,616.35942426
1004.99999989,1483309800,1005.99669893,1002.55436881,1003.80404142,1519.48804831
1005.0,1483310100,1006.14142953,1003.05841976,1003.05841976,8158.1735214
1004.99999997,1483310400,1005.0,1004.9999999,1005.0,3497.33824251
1004.99999999,1483310700,1005.0,1002.55399997,1004.99999991,7791.517061
1004.99999969,1483311000,1006.99669898,1004.99999968,1004.99999999,8604.25057064
1005.99999949,1483311300,1007.39313634,1004.99999999,1007.39313634,162.26831131
1004.44444427,1483311600,1005.99999991,1001.84362417,1004.99999999,3803.79028496
1004.99999992,1483311900,1005.99999985,1003.85858574,1003.85858574,69939.19414843
1001.00000001,1483312200,1004.99999993,1001.0,1004.99999992,96461.36606918
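The row-selection idea described in the question (add x seconds to the starting timestamp and keep the matching rows) can be sketched like this. It is a minimal sketch using a few rows of the sample data above, and it assumes the Timestamp column is already numeric:

```python
import pandas as pd

# A few 5-minute rows taken from the sample data above
df = pd.DataFrame({
    'Timestamp': [1483304400, 1483304700, 1483305000, 1483305300],
    'Close': [1006.54999992, 1005.40527258, 1003.50000156, 1002.69522396],
})

# Keep only rows whose timestamp is a whole multiple of
# 15 minutes (900 seconds) after the first row's timestamp
start = df['Timestamp'].iloc[0]
df15 = df[(df['Timestamp'] - start) % (15 * 60) == 0]
print(df15['Timestamp'].tolist())  # [1483304400, 1483305300]
```

Changing `15 * 60` to `30 * 60` or `60 * 60` gives the other granularities, as long as a row actually exists at each of those timestamps.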

2 answers:

Answer 0 (score: 2)

You need resample with first, but to keep the values exactly as they appear in the file you must read every column as a string by passing dtype=str to read_csv:

import pandas as pd
import numpy as np

df = pd.read_csv(file, dtype=str)

df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')

df = (df.set_index('Timestamp')
        .resample('15T').first()
        .reset_index()
        .reindex(columns=df.columns))
df['Timestamp'] = df['Timestamp'].astype(np.int64) // 10**9
print(df)
           Close   Timestamp           High            Low           Open  \
0  1006.54999992  1483304400  1006.54999997  1004.00000002  1005.84692623   
1  1002.69522396  1483305300  1003.50000156  1002.03739724  1003.50000156   
2         1004.9  1483306200         1004.9  1003.50000155  1003.50000155   
3  1007.98333891  1483307100       1008.151  1006.54999328  1006.99999999   
4  1007.18578657  1483308000         1007.4  1004.79999999  1005.99999939   
5  1005.99669899  1483308900  1007.40360648  1001.84692611  1001.84692611   
6  1004.99999989  1483309800  1005.99669893  1002.55436881  1003.80404142   
7  1004.99999999  1483310700         1005.0  1002.55399997  1004.99999991   
8  1004.44444427  1483311600  1005.99999991  1001.84362417  1004.99999999   

           Volume  
0   2686.70823136  
1   1066.56568909  
2   4978.96836354  
3   1357.21576969  
4  11867.90775651  
5  13223.84822808  
6   1519.48804831  
7     7791.517061  
8   3803.79028496  
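If the file is guaranteed to contain exactly one row every 5 minutes with no gaps (an assumption the question does not state), a purely positional alternative is to take every 3rd row, which keeps the original string values untouched without any resampling:

```python
import pandas as pd

# Hypothetical 5-minute data read with dtype=str, as above
df = pd.DataFrame({
    'Timestamp': ['1483304400', '1483304700', '1483305000',
                  '1483305300', '1483305600', '1483305900'],
    'Close': ['1006.54999992', '1005.40527258', '1003.50000156',
              '1002.69522396', '1001.97782306', '1003.50000155'],
})

# Every 3rd row: 3 x 5 minutes = 15 minutes
df15 = df.iloc[::3].reset_index(drop=True)
print(df15['Timestamp'].tolist())  # ['1483304400', '1483305300']
```

Unlike resample, this breaks silently if any 5-minute row is missing, so the time-based approach above is safer for real data.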

Answer 1 (score: 1)

When deciding how to aggregate during resampling, you have to consider the underlying data.

Fundamentally, when moving OHLC data from 5-minute to 15-minute granularity, resampling every column with one standard aggregation (first / last / mean / max) is wrong, because it changes the meaning of the columns and makes them incorrect, imo.

I think we should use .last() for Close, .first() for Open, .max() for High and .min() for Low. We should also sum Volume so it reflects the total traded over each 15-minute period.

import pandas as pd
# Load the DataFrame (numeric dtypes are needed so max/min/sum work)
df = pd.read_csv(file)
# Convert the Timestamp column to datetime
df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
# Index by time to allow us to use .resample()
df.set_index('Timestamp', inplace=True)

# Resample and aggregate appropriately
df = (df.resample('15T')
        .agg({'Open': 'first', 'Close': 'last',
              'High': 'max', 'Low': 'min',
              'Volume': 'sum'})
      )

This resamples the data to 15 minutes, which necessarily means there are 3 'ticks' to aggregate into each index entry. Notice that to build an Open we want the first open, to build a Close we want the last close, and so on.

The .agg() function lets us pass it a dictionary, which allows a different aggregation function for each column.

I think you have to apply this logic to OHLC data to downsample it accurately.
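Since the question also asks about 30- and 60-minute bars, the same aggregation can be wrapped in a small helper that takes the bar size as a parameter. This is a hypothetical function, not part of the original answer, built on the same .resample()/.agg() calls:

```python
import pandas as pd

# One aggregation per OHLCV column, as explained above
OHLCV_AGG = {'Open': 'first', 'Close': 'last',
             'High': 'max', 'Low': 'min', 'Volume': 'sum'}

def downsample(df, minutes):
    """Resample 5-minute OHLCV bars to a coarser bar size."""
    out = (df.set_index(pd.to_datetime(df['Timestamp'], unit='s'))
             .resample(f'{minutes}T')
             .agg(OHLCV_AGG))
    # Convert the datetime index back to epoch seconds
    out['Timestamp'] = out.index.astype('int64') // 10**9
    return out.reset_index(drop=True)
```

With this, downsample(df, 15), downsample(df, 30) and downsample(df, 60) produce each of the granularities the question mentions.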