Here is an example of the dataframe I'm working with:
import pandas as pd

d = {
'item_number':['bdsm1000', 'bdsm1000', 'bdsm1000', 'ZZRWB18','ZZRWB18', 'ZZRWB18', 'ZZRWB18', 'ZZHP1427BLK', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1427', 'ZZHP1414', 'ZZHP1414', 'ZZHP1414', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115WNTR', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE', 'WRM115SCFRE'],
'Comp_ID':[2454, 2454, 2454, 1395, 1395, 1395, 1395, 3378, 1266941, 660867, 43978, 1266941, 660867, 43978, 1266941, 660867, 43978, 1266941, 660867, 43978, 43978, 43978, 43978, 1197347907, 70745, 4737, 1197347907, 4737, 1197347907, 70745, 4737, 1197347907, 70745, 4737, 1197347907, 4737, 1197487704, 1197347907, 70745, 23872, 4737, 1197347907, 4737, 1197487704, 1197347907, 23872, 4737, 1197487704, 1197347907, 70745],
'date':['2016-11-22', '2016-11-20', '2016-11-19', '2016-11-22', '2016-11-20', '2016-11-19', '2016-11-18', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-20', '2016-11-20', '2016-11-20', '2016-11-19', '2016-11-19', '2016-11-19', '2016-11-18', '2016-11-18', '2016-11-18', '2016-11-22', '2016-11-20', '2016-11-19', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-21', '2016-11-21', '2016-11-20', '2016-11-20', '2016-11-20', '2016-11-19', '2016-11-19', '2016-11-19', '2016-11-18', '2016-11-18', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-22', '2016-11-21', '2016-11-21', '2016-11-20', '2016-11-20', '2016-11-20', '2016-11-20', '2016-11-19', '2016-11-19', '2016-11-19']}
df = pd.DataFrame(data=d)
df.date = pd.to_datetime(df.date)
I want to count consecutive observations going back from 2016-11-22, grouped by Comp_ID and item_number.
Basically, what I'm trying to do is count, for each Comp_ID and item_number, how many consecutive days there has been an observation, counting back from today. (This example was put together on November 22nd.) Runs of consecutive observations that ended a day or more before today are irrelevant; only a sequence like today... yesterday... the day before... and so on counts.
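To make the intended count concrete, here is a tiny standalone sketch of what I want for one seller/item (illustration only, not my actual attempt; count_streak is just a throwaway helper name):
import datetime

def count_streak(dates, today):
    # count how many consecutive days, walking backwards from `today`,
    # appear in `dates` (a set of datetime.date objects)
    tally = 0
    day = today
    while day in dates:
        tally += 1
        day -= datetime.timedelta(days=1)
    return tally

# e.g. the KIN005 dates from the small sample below give 3:
kin005 = {datetime.date(2016, 11, d) for d in (22, 21, 20, 14, 13)}
print(count_streak(kin005, datetime.date(2016, 11, 22)))  # 3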
I got this working on a smaller sample, but it seems to trip up on the larger dataset.
Below is the code for the smaller sample. I need to find the consecutive dates for thousands of sellers/items, and for some reason the code below does not work on the larger dataset.
d = {'item_number':['KIN005','KIN005','KIN005','KIN005','KIN005','A789B','A789B','A789B','G123H','G123H','G123H'],
'Comp_ID':['1395','1395','1395','1395','1395','7787','7787','7787','1395','1395','1395'],
'date':['2016-11-22','2016-11-21','2016-11-20','2016-11-14','2016-11-13','2016-11-22','2016-11-21','2016-11-12','2016-11-22','2016-11-21','2016-11-08']}
df = pd.DataFrame(data=d)
df.date = pd.to_datetime(df.date)
d = pd.Timedelta(1, 'D')
df = df.sort_values(['item_number','date','Comp_ID'],ascending=False)
g = df.groupby(['Comp_ID','item_number'])
sequence = g['date'].apply(lambda x: x.diff().fillna(0).abs().le(d)).reset_index()
sequence.set_index('index',inplace=True)
test = df.join(sequence)
test.columns = ['Comp_ID','date','item_number','consecutive']
g = test.groupby(['Comp_ID','item_number'])
g['consecutive'].apply(lambda x: x.idxmin() - x.idxmax() )
This gets the desired result for the smaller dataset:
Comp_ID item_number
1395 G123H 2
KIN005 3
7787 KIN005 2
Name: consecutive, dtype: int64
Answer 0 (score: 4)
You can do it this way:
today = pd.to_datetime('2016-11-22')
# sort DF by `date` (descending)
x = df.sort_values('date', ascending=0)
g = x.groupby(['Comp_ID','item_number'])
# compare the # of days to `today` with a consecutive day# in each group
x[(today - x['date']).dt.days == g.cumcount()].groupby(['Comp_ID','item_number']).size()
Result:
Comp_ID item_number
1395 G123H 2
KIN005 3
7787 A789B 2
dtype: int64
PS Thanks to @DataSwede for the faster diff calculation!
Explanation:
In [124]: x[(today - x['date']).dt.days == g.cumcount()] \
              .sort_values(['Comp_ID','item_number','date'], ascending=[1,1,0])
Out[124]:
Comp_ID date item_number
8 1395 2016-11-22 G123H
9 1395 2016-11-21 G123H
0 1395 2016-11-22 KIN005
1 1395 2016-11-21 KIN005
2 1395 2016-11-20 KIN005
5 7787 2016-11-22 A789B
6 7787 2016-11-21 A789B
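The same two steps should carry over unchanged to the larger dataframe from the question (a sketch, assuming df is the bigger frame defined at the top; streaks is just a name used here):
today = pd.to_datetime('2016-11-22')

x = df.sort_values('date', ascending=0)
g = x.groupby(['Comp_ID', 'item_number'])

# keep only the rows whose distance from `today` (in days) equals their
# position inside the date-descending group, i.e. the unbroken run,
# then count how many rows survive per (Comp_ID, item_number)
streaks = (x[(today - x['date']).dt.days == g.cumcount()]
             .groupby(['Comp_ID', 'item_number'])
             .size())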
Answer 1 (score: 2)
First, I suggest we generate a series of dates, each one day earlier than the one before it...
import datetime
import pandas as pd
def gen_prior_date(start_date):
    yield start_date
    while True:
        start_date -= datetime.timedelta(days=1)
        yield start_date
...
>>> start_date = datetime.date(2016, 11, 22)
>>> back_in_time = gen_prior_date(start_date)
>>> next(back_in_time)
datetime.date(2016, 11, 22)
>>> next(back_in_time)
datetime.date(2016, 11, 21)
Now we need a function that can be applied to each group...
def count_consec_dates(dates, start_date):
    dates = pd.to_datetime(dates.values).date
    dates_set = set(dates)  # O(1) vs O(n) lookup times
    back_in_time = gen_prior_date(start_date)
    tally = 0
    while next(back_in_time) in dates_set:  # jump out on first miss
        tally += 1
    return tally
The rest is easy...
>>> small_data = {'item_number': ['KIN005','KIN005','KIN005','KIN005','KIN005','A789B','A789B','A789B','G123H','G123H','G123H'],
... 'Comp_ID': ['1395','1395','1395','1395','1395','7787','7787','7787','1395','1395','1395'],
... 'date': ['2016-11-22','2016-11-21','2016-11-20','2016-11-14','2016-11-13','2016-11-22','2016-11-21','2016-11-12','2016-11-22','2016-11-21','2016-11-08']}
>>> small_df = pd.DataFrame(data=small_data)
>>> start_date = datetime.date(2016, 11, 22)
>>> groups = small_df.groupby(['Comp_ID', 'item_number']).date
>>> groups.apply(lambda x: count_consec_dates(x, start_date))
Comp_ID item_number
1395 G123H 2
KIN005 3
7787 A789B 2
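The same apply should also work on the larger dataframe from the question (a sketch, not verified against the full dataset; it assumes df is the bigger DataFrame defined at the top of the question):
start_date = datetime.date(2016, 11, 22)
groups = df.groupby(['Comp_ID', 'item_number']).date
result = groups.apply(lambda x: count_consec_dates(x, start_date))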