I have a dataset in the following format (dropbox download, 23kb csv).
The sampling rate of the data varies from second to second: in some cases it drops to 0 Hz, and the highest rate in the provided dataset is about 50 samples per second.
When samples are taken, they are always evenly spread across the second:
time x
2012-12-06 21:12:40 128.75909883327378
2012-12-06 21:12:40 32.799224301545976
2012-12-06 21:12:40 98.932953779777989
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 65.71691352191452
2012-12-06 21:12:44 117.1350194748169
2012-12-06 21:12:45 13.095622561808861
2012-12-06 21:12:47 61.295242676059246
2012-12-06 21:12:48 94.774064119961352
2012-12-06 21:12:49 80.169378222553533
2012-12-06 21:12:49 80.291142695702533
2012-12-06 21:12:49 136.55650749231367
2012-12-06 21:12:49 127.29790925838365
This should become:
time x
2012-12-06 21:12:40 000ms 128.75909883327378
2012-12-06 21:12:40 333ms 32.799224301545976
2012-12-06 21:12:40 666ms 98.932953779777989
2012-12-06 21:12:43 000ms 132.07033814856786
2012-12-06 21:12:43 333ms 132.07033814856786
2012-12-06 21:12:43 666ms 65.71691352191452
2012-12-06 21:12:44 000ms 117.1350194748169
2012-12-06 21:12:45 000ms 13.095622561808861
2012-12-06 21:12:47 000ms 61.295242676059246
2012-12-06 21:12:48 000ms 94.774064119961352
2012-12-06 21:12:49 000ms 80.169378222553533
2012-12-06 21:12:49 250ms 80.291142695702533
2012-12-06 21:12:49 500ms 136.55650749231367
2012-12-06 21:12:49 750ms 127.29790925838365
Is there an easy way to do this using the pandas time-series resampling functionality, or is there something built into numpy or scipy that would work?
Answer 0: (score: 4)
I don't think there is a built-in pandas or numpy method/function to do this.
However, I would favour using a Python generator:
def repeats(lst):
    i_0 = None
    n = -1  # will still work if lst starts with None
    for i in lst:
        if i == i_0:
            n += 1
        else:
            n = 0
        yield n
        i_0 = i
# list(repeats([1,1,1,2,2,3])) == [0,1,2,0,1,0]
Then you can put this generator into a numpy array:
import numpy as np
df['rep'] = np.array(list(repeats(df['time'])))
Count the repetitions:
from collections import Counter
count = Counter(df['time'])
df['count'] = df['time'].apply(lambda x: count[x])
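Taken together, the two helper columns can be sanity-checked on a toy frame (the time strings here are invented stand-ins for the real timestamps):

```python
import numpy as np
import pandas as pd
from collections import Counter

def repeats(lst):
    i_0 = None
    n = -1
    for i in lst:
        if i == i_0:
            n += 1
        else:
            n = 0
        yield n
        i_0 = i

# Toy data: three samples in one second, one in another
df = pd.DataFrame({'time': ['21:12:40', '21:12:40', '21:12:40', '21:12:44']})
df['rep'] = np.array(list(repeats(df['time'])))       # position within each run
count = Counter(df['time'])
df['count'] = df['time'].apply(lambda x: count[x])    # size of each run
print(df['rep'].tolist())    # -> [0, 1, 2, 0]
print(df['count'].tolist())  # -> [3, 3, 3, 1]
```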
And do the calculation (this is the most expensive part of the computation):
import datetime

df['time2'] = df.apply(lambda row: (row['time']
                                    + datetime.timedelta(0, 1)  # 1 s
                                    * row['rep']
                                    / row['count']),
                       axis=1)
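As an aside (not part of this answer), the same rep/count/offset bookkeeping can be sketched without the generator, the Counter, or the row-wise apply, using groupby's cumcount and transform; the toy data below is invented:

```python
import pandas as pd

# Three samples in one second, a lone sample four seconds later
df = pd.DataFrame({'time': pd.to_datetime(['2012-12-06 21:12:40'] * 3
                                          + ['2012-12-06 21:12:44'])})
rep = df.groupby('time').cumcount()               # position within each duplicate run
count = df.groupby('time')['time'].transform('size')  # size of each run
df['time2'] = df['time'] + pd.to_timedelta(rep / count, unit='s')
print(df['time2'].tolist())
```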
Note: to remove the helper columns, use del df['rep'] and del df['count'].
One "built-in" way to achieve this might be to use shift twice, but I think that would be somewhat messy...
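For what it's worth, here is my guess at what that shift-based variant could look like (a sketch, not from the answer itself): comparing time against its shifted self marks the start of each run of equal values, and a cumulative sum over those marks labels the runs.

```python
import pandas as pd

t = pd.Series(['a', 'a', 'a', 'b', 'b', 'c'])
new_run = (t != t.shift())          # True where a new run of equal values begins
run_id = new_run.cumsum()           # label each run with an integer
rep = t.groupby(run_id).cumcount()  # position within the run, like repeats()
print(rep.tolist())  # -> [0, 1, 2, 0, 1, 0]
```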
Answer 1: (score: 2)
I found this to be a nice use case for the pandas groupby mechanism, so I wanted to provide a solution for it as well. I find it somewhat easier to read than Andy's solution, but it is actually not that much shorter.
# First, get your data into a dataframe after having copied
# it with the mouse into a multi-line string:
import pandas as pd
from StringIO import StringIO
s = """2012-12-06 21:12:40 128.75909883327378
2012-12-06 21:12:40 32.799224301545976
2012-12-06 21:12:40 98.932953779777989
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 65.71691352191452
2012-12-06 21:12:44 117.1350194748169
2012-12-06 21:12:45 13.095622561808861
2012-12-06 21:12:47 61.295242676059246
2012-12-06 21:12:48 94.774064119961352
2012-12-06 21:12:49 80.169378222553533
2012-12-06 21:12:49 80.291142695702533
2012-12-06 21:12:49 136.55650749231367
2012-12-06 21:12:49 127.29790925838365"""
sio = StringIO(s)
df = pd.io.parsers.read_csv(sio, parse_dates=[[0,1]], sep='\s*', header=None)
df = df.set_index('0_1')
df.index.name = 'time'
df.columns = ['x']
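On Python 3, StringIO lives in the io module, and the nested parse_dates form for combining columns has been deprecated in recent pandas versions; a minimal Python-3 sketch of the same preparation (my adaptation, with shortened data) might be:

```python
import pandas as pd
from io import StringIO

s = """2012-12-06 21:12:40 128.759
2012-12-06 21:12:40 32.799
2012-12-06 21:12:43 132.070"""

# Parse date and clock as separate columns, then combine them ourselves
df = pd.read_csv(StringIO(s), sep=r'\s+', header=None,
                 names=['date', 'clock', 'x'])
df.index = pd.to_datetime(df['date'] + ' ' + df['clock'])
df.index.name = 'time'
df = df[['x']]
print(df)
```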
Everything up to here has just been data preparation, so if you want to compare the lengths of the two solutions, start counting from now! ;)
# Now, groupby the same time indices:
grouped = df.groupby(df.index)
# Create yourself a second object
from datetime import timedelta
second = timedelta(seconds=1)
# loop over group elements, catch new index parts in list
l = []
for _, group in grouped:
    size = len(group)
    if size == 1:
        # go to pydatetime for later addition, so that list is all in 1 format
        l.append(group.index.to_pydatetime())
    else:
        offsets = [i * second / size for i in range(size)]
        l.append(group.index.to_pydatetime() + offsets)
# exchange index for new index
import numpy as np
df.index = pd.DatetimeIndex(np.concatenate(l))
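The loop above can be checked end-to-end on a small invented frame (three samples in one second followed by a lone sample four seconds later):

```python
import numpy as np
import pandas as pd
from datetime import timedelta

df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]},
                  index=pd.to_datetime(['2012-12-06 21:12:40'] * 3
                                       + ['2012-12-06 21:12:44']))
second = timedelta(seconds=1)
parts = []
for _, group in df.groupby(df.index):
    size = len(group)
    if size == 1:
        parts.append(group.index.to_pydatetime())
    else:
        # spread the duplicates evenly across their second
        offsets = [i * second / size for i in range(size)]
        parts.append(group.index.to_pydatetime() + offsets)
df.index = pd.DatetimeIndex(np.concatenate(parts))
print(df.index)
```

The duplicated second is spread into thirds, while the lone sample keeps its original timestamp.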