I have a pandas DataFrame like this:
timestamp status
2019-01-01 09:00:00 FAILED
2019-01-01 09:00:00 FAILED
2019-01-01 09:00:00 UNKNOWN
2019-01-01 09:00:00 PASSED
2019-01-01 09:00:00 PASSED
2019-01-01 09:01:00 PASSED
2019-01-01 09:01:00 FAILED
How can I group the rows by minute and count the number of each status per minute, to get this DataFrame:
timestamp PASSED FAILED UNKNOWN
2019-01-01 09:00:00 2 2 1
2019-01-01 09:01:00 1 1 0
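For anyone following along, the sample DataFrame above can be reconstructed like this (a minimal sketch; the exact dtypes in the original data are assumptions):

```python
import pandas as pd

# Rebuild the example data: five rows at 09:00 and two at 09:01
df = pd.DataFrame({
    'timestamp': pd.to_datetime(
        ['2019-01-01 09:00:00'] * 5 + ['2019-01-01 09:01:00'] * 2
    ),
    'status': ['FAILED', 'FAILED', 'UNKNOWN', 'PASSED', 'PASSED',
               'PASSED', 'FAILED'],
})
print(df)
```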
Answer 0 (score: 2)
Method 1:
pd.crosstab(df['timestamp'],df['status'])
status FAILED PASSED UNKNOWN
timestamp
2019-01-01 09:00:00 2 2 1
2019-01-01 09:01:00 1 1 0
If you want the timestamp as a regular column:
pd.crosstab(df['timestamp'],df['status'],colnames=[None]).reset_index()
timestamp FAILED PASSED UNKNOWN
0 2019-01-01 09:00:00 2 2 1
1 2019-01-01 09:01:00 1 1 0
Method 2:
df.groupby(['timestamp','status']).size().unstack(fill_value=0)
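Here is a runnable sketch of Method 2, assuming the sample DataFrame from the question is named `df`:

```python
import pandas as pd

df = pd.DataFrame({
    'timestamp': pd.to_datetime(
        ['2019-01-01 09:00:00'] * 5 + ['2019-01-01 09:01:00'] * 2
    ),
    'status': ['FAILED', 'FAILED', 'UNKNOWN', 'PASSED', 'PASSED',
               'PASSED', 'FAILED'],
})

# Count (timestamp, status) pairs, then pivot status into columns;
# fill_value=0 writes a 0 where a status never occurs in a given minute
counts = df.groupby(['timestamp', 'status']).size().unstack(fill_value=0)
print(counts)
```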
Timing comparison:
Method 2 appears to be the fastest.
%%timeit
new_df=pd.crosstab(df['timestamp'],df['status'])
21 ms ± 759 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
new_df=df.groupby(['timestamp','status']).size().unstack(fill_value=0)
4.65 ms ± 290 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
df2 = (
df
.groupby(df['timestamp'].map(lambda x: x.replace(second=0)))['status']
.value_counts()
.unstack()
.fillna(0)
.astype(int)
.reset_index()
)
8.5 ms ± 1.52 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Answer 1 (score: 2)
If the timestamps carry seconds, you can strip those first so that the grouping is by whole minutes.
df2 = (
df
.groupby(df['timestamp'].map(lambda x: x.replace(second=0)))['status']
.value_counts()
.unstack(fill_value=0)
.reset_index()
)
>>> df2
status timestamp FAILED PASSED UNKNOWN
0 2019-01-01 09:00:00 2 2 1
1 2019-01-01 09:01:00 1 1 0
You may also want to fill in every minute of the range. Use the same code as above, but do not reset the index at the end. Then:
df2 = df2.reindex(pd.date_range(df2.index[0], df2.index[-1], freq='1min'), fill_value=0)
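As a sketch of the gap-filling step, using hypothetical log rows with nothing in the 09:01 minute (the input data here is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'timestamp': pd.to_datetime(['2019-01-01 09:00:05',
                                 '2019-01-01 09:02:10']),
    'status': ['PASSED', 'FAILED'],
})

# Group by whole minutes and keep the timestamp index (no reset_index)
df2 = (
    df
    .groupby(df['timestamp'].map(lambda x: x.replace(second=0)))['status']
    .value_counts()
    .unstack(fill_value=0)
)

# 09:01 has no rows, so reindexing over the full minute range
# inserts it as an all-zero row
df2 = df2.reindex(
    pd.date_range(df2.index[0], df2.index[-1], freq='1min'), fill_value=0
)
print(df2)
```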
Timings
Timings will certainly vary by dataset (small vs. large, heterogeneous vs. homogeneous data, etc.). Since this dataset is essentially a log, one would expect lots of data with widely varying timestamps. To create more suitable test data, make the sample DataFrame 100,000 times larger and make the timestamps unique (one per minute):
df_ = pd.concat([df] * 100000)
df_['timestamp'] = pd.date_range(df_.timestamp.iat[0], periods=len(df_), freq='1min')
Here are the new timings:
%timeit pd.crosstab(df_['timestamp'], df_['status'])
# 4.27 s ± 150 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df_.groupby(['timestamp','status']).size().unstack(fill_value=0)
# 567 ms ± 34.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
(
df_
.groupby(['timestamp', 'status'])
.size()
.unstack(fill_value=0)
.reset_index()
)
# 614 ms ± 27.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
(
df_
    .groupby(df_['timestamp'].map(lambda x: x.replace(second=0)))['status']
.value_counts()
.unstack(fill_value=0)
.reset_index()
)
# 147 ms ± 6.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Answer 2 (score: 0)
This will work:
df.groupby(['timestamp', 'status']).size().unstack(level=1)
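For completeness: `level=1` names the `status` level explicitly, which for a two-level index is the same as the default `unstack()`. A sketch, assuming the question's sample data; note that without `fill_value=0`, combinations that never occur come back as NaN (and the counts become floats) rather than 0:

```python
import pandas as pd

df = pd.DataFrame({
    'timestamp': pd.to_datetime(
        ['2019-01-01 09:00:00'] * 5 + ['2019-01-01 09:01:00'] * 2
    ),
    'status': ['FAILED', 'FAILED', 'UNKNOWN', 'PASSED', 'PASSED',
               'PASSED', 'FAILED'],
})

counts = df.groupby(['timestamp', 'status']).size().unstack(level=1)
# UNKNOWN never occurs at 09:01, so that cell is NaN rather than 0
print(counts)
```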