Pandas: how do I merge two dataframes on offset dates?

Asked: 2015-05-27 17:51:59

Tags: date join pandas merge offset

I want to merge two dataframes, df1 & df2, based on whether a df2 row falls within a date range 3-6 months after a df1 row. For example:

df1 (I have quarterly data for each company):

    company DATADATE
0   012345  2005-06-30
1   012345  2005-09-30
2   012345  2005-12-31
3   012345  2006-03-31
4   123456  2005-01-31
5   123456  2005-03-31
6   123456  2005-06-30
7   123456  2005-09-30

df2 (for each company, I have event dates that can fall on any day):

    company EventDate
0   012345  2005-07-28 <-- won't get merged b/c not within date range
1   012345  2005-10-12
2   123456  2005-05-15
3   123456  2005-05-17
4   123456  2005-05-25
5   123456  2005-05-30
6   123456  2005-08-08
7   123456  2005-11-29
8   abcxyz  2005-12-31 <-- won't be merged because company not in df1

The ideal merged df - rows where the EventDates in df2 fall 3-6 months (i.e. 1 quarter) after the DATADATE of a df1 row:

    company DATADATE    EventDate
0   012345  2005-06-30  2005-10-12
1   012345  2005-09-30  NaN   <-- nan because no EventDates fell in this range
2   012345  2005-12-31  NaN
3   012345  2006-03-31  NaN
4   123456  2005-01-31  2005-05-15
5   123456  2005-01-31  2005-05-17
6   123456  2005-01-31  2005-05-25
7   123456  2005-01-31  2005-05-30
8   123456  2005-03-31  2005-08-08
9   123456  2005-06-30  2005-11-29
10  123456  2005-09-30  NaN

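For reference, the example frames above can be built with something like the following (a minimal sketch; the dates are parsed to datetimes so that month-end offsets behave as expected):

import pandas as pd

df1 = pd.DataFrame({
    'company': ['012345'] * 4 + ['123456'] * 4,
    'DATADATE': pd.to_datetime(
        ['2005-06-30', '2005-09-30', '2005-12-31', '2006-03-31',
         '2005-01-31', '2005-03-31', '2005-06-30', '2005-09-30']),
})

df2 = pd.DataFrame({
    'company': ['012345', '012345', '123456', '123456', '123456',
                '123456', '123456', '123456', 'abcxyz'],
    'EventDate': pd.to_datetime(
        ['2005-07-28', '2005-10-12', '2005-05-15', '2005-05-17',
         '2005-05-25', '2005-05-30', '2005-08-08', '2005-11-29',
         '2005-12-31']),
})
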
I tried applying the approach from this related topic [Merge pandas DataFrames based on irregular time intervals] by adding start_time and end_time columns to df1 to mark 3 months (start_time) and 6 months (end_time) after DATADATE, then using np.searchsorted(), but this case is a bit trickier because I want to do the merge on a company-by-company basis.
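
For concreteness, this is roughly the shape of what I was attempting (a sketch only, assuming df1/df2 as built above and using lo/hi as illustrative names):

import numpy as np
import pandas as pd

#offset columns marking the 3- and 6-month-end boundaries after DATADATE
df1['start_time'] = df1['DATADATE'] + pd.offsets.MonthEnd(3)
df1['end_time'] = df1['DATADATE'] + pd.offsets.MonthEnd(6)

for company, grp in df1.groupby('company'):
    events = df2.loc[df2['company'] == company, 'EventDate'].sort_values()
    #positions of the first event after start_time / at-or-after end_time
    lo = np.searchsorted(events.values, grp['start_time'].values, side='right')
    hi = np.searchsorted(events.values, grp['end_time'].values, side='left')
    #events.iloc[lo[i]:hi[i]] are the EventDates matching row i of grp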

2 answers:

Answer 0 (score: 2)

This is actually one of those rare problems where the algorithmic complexity can differ significantly between solutions. You may want to think about that rather than reaching for a one-liner snippet.

Algorithmically:

  • sort the larger dataframe by date
  • for each date in the smaller dataframe, use the bisect module to find the relevant rows in the larger dataframe

For dataframes of lengths m and n respectively (m < n), the complexity should be O(m log(n)).
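
A minimal sketch of that idea for a single company, using the 123456 event dates from the question and the window that follows DATADATE 2005-01-31 (the variable names here are illustrative):

import bisect

import pandas as pd

events = pd.to_datetime(
    ['2005-05-15', '2005-05-17', '2005-05-25', '2005-05-30',
     '2005-08-08', '2005-11-29']).tolist()   #company 123456, already sorted
start = pd.Timestamp('2005-04-30')   #DATADATE 2005-01-31 + 3 month-ends
end = pd.Timestamp('2005-07-31')     #DATADATE 2005-01-31 + 6 month-ends

lo = bisect.bisect_right(events, start)  #first event strictly after start
hi = bisect.bisect_left(events, end)     #first event at or after end
matches = events[lo:hi]                  #the four May 2005 dates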

Answer 1 (score: 2)

Here is my solution, which depends on the algorithm proposed by Ami Tavory in the other answer:

import bisect

import pandas as pd

#find the date offsets to define date ranges
start_time = df1['DATADATE'] + pd.offsets.MonthEnd(3)
end_time = df1['DATADATE'] + pd.offsets.MonthEnd(6)

#make these extra columns
df1['start_time'] = start_time
df1['end_time'] = end_time

#find unique company names in both dfs
unique_companies_df1 = df1.company.unique()
unique_companies_df2 = df2.company.unique()

#sort df1 by company and DATADATE, so we can iterate in a sensible order
sorted_df1 = df1.sort_values(['company', 'DATADATE']).reset_index(drop=True)

#collect the merged pieces here and concatenate them at the end
pieces = []

#iterate through each company in df1, find 
#that company in sorted df2, then for each 
#DATADATE quarter of df1, bisect df2 in the 
#correct locations (i.e. start_time to end_time)

for cmpny in unique_companies_df1:

    if cmpny in unique_companies_df2: #if this company is in both dfs, take the relevant rows that are associated with this company 
        selected_df2 = df2[df2.company==cmpny].sort_values('EventDate').reset_index(drop=True)
        selected_df1 = sorted_df1[sorted_df1.company==cmpny].reset_index(drop=True)

        for quarter in range(len(selected_df1.DATADATE)): #iterate through each DATADATE quarter in df1
            lo = bisect.bisect_right(selected_df2.EventDate, selected_df1.start_time[quarter]) #bisect_right so dates at or before start_time are excluded
            hi = bisect.bisect_left(selected_df2.EventDate, selected_df1.end_time[quarter]) #bisect_left so dates at or after end_time are excluded
            df_right = selected_df2.iloc[lo:hi].copy() #half-open iloc slice grabs exactly the rows with EventDates inside the date range
            df_left = pd.DataFrame(selected_df1.loc[quarter]).transpose()

            if len(df_right) == 0: #if no EventDates fall within range, create a row with cmpny in the 'company' column and NaN in the EventDate column to merge
                df_right.loc[0, 'company'] = cmpny

            temp = pd.merge(df_left, df_right, how='inner', on='company') #merge the df1 company quarter with all the df2 rows that fell within the date range
            pieces.append(temp)

df3 = pd.concat(pieces, ignore_index=True)
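
With the example frames built as in the sketch near the top of the question, a quick check of the result (column order as in the desired output) would be:

print(df3[['company', 'DATADATE', 'EventDate']].reset_index(drop=True))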