I am trying to implement an incremental data import with pandas.
I have two DataFrames: df_old (the original data, loaded earlier) and df_new (new data, to be merged with df_old).
The data in df_old / df_new is unique on several columns (for simplicity, let's say just two: key1 and key2). The remaining columns hold the data to be merged; let's say there are also only two of them: val1 and val2.
Besides these, there is one more column to take care of: change_id - every new entry overwrites the old one. The import logic is quite simple:
1) if a key pair from df_new does not exist in df_old, the new row should be appended
2) if the key pair exists in df_old, then:
2a) if the corresponding values in df_old and df_new are identical, the old values should be kept
2b) if the corresponding values in df_old and df_new differ, the values from df_new should replace the old values in df_old
There is no need to care about data deletion (data that exists in df_old but not in df_new).
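To make these rules concrete, here is a minimal sketch of the intended semantics; upsert_sketch is just a hypothetical helper name, not one of the solutions timed below:

import pandas as pd

def upsert_sketch(df_old, df_new):
    # key pairs present in both frames (rule 2)
    common = df_old.index.intersection(df_new.index)
    # of those, the rows whose values actually changed (rule 2b)
    changed_mask = (df_old.loc[common, ['val1', 'val2']]
                    != df_new.loc[common, ['val1', 'val2']]).any(axis=1)
    changed = common[changed_mask.to_numpy()]
    result = df_old.copy()
    # overwrite changed rows with the new values and the new change_id
    result.loc[changed, :] = df_new.loc[changed, :]
    # key pairs present only in df_new are appended as-is (rule 1)
    only_new = df_new.index.difference(df_old.index)
    return pd.concat([result, df_new.loc[only_new]])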
>>> df_old = pd.DataFrame([['A1','B2',1,2,1],['A1','A2',1,3,1],['B1','A2',1,3,1],['B1','B2',1,4,1],], columns=['key1','key2','val1','val2','change_id'])
>>> df_old.set_index(['key1','key2'], inplace=True)
>>> df_old
           val1  val2  change_id
key1 key2
A1   B2       1     2          1
     A2       1     3          1
B1   A2       1     3          1
     B2       1     4          1
>>> df_new = pd.DataFrame([['A1','B2',2,1,2],['A1','A2',1,3,2],['C1','B2',2,1,2]], columns=['key1','key2','val1','val2','change_id'])
>>> df_new.set_index(['key1','key2'], inplace=True)
>>> df_new
           val1  val2  change_id
key1 key2
A1   B2       2     1          2
     A2       1     3          2
C1   B2       2     1          2
Solution 1
# This solution concatenates the old and new data, groups the rows by the keys,
# and for each group checks whether the new row differs from the old one.
def merge_new(x):
    if x.shape[0] == 1:
        return x.iloc[0]
    else:
        if x.iloc[0].loc[['val1', 'val2']].equals(x.iloc[1].loc[['val1', 'val2']]):
            return x.iloc[0]
        else:
            return x.iloc[1]

def solution1(df_old, df_new):
    merged = pd.concat([df_old, df_new])
    return merged.groupby(level=['key1', 'key2']).apply(merge_new).reset_index()
Solution 2
# This solution uses pd.merge to join the data, plus additional logic to compare
# the merged rows and select the new values where they differ.
def solution2(df_old, df_new):
    merged = pd.merge(df_old, df_new, left_index=True, right_index=True,
                      how='outer', suffixes=('_old', '_new'), indicator='ind')
    merged['isold'] = (merged.loc[merged['ind'] == 'both', ['val1_old', 'val2_old']].rename(columns=lambda x: x[:-4])
                       == merged.loc[merged['ind'] == 'both', ['val1_new', 'val2_new']].rename(columns=lambda x: x[:-4])).all(axis=1)
    merged.loc[merged['ind'] == 'right_only', 'isold'] = False
    merged['isold'] = merged['isold'].fillna(True)
    return pd.concat([
        merged[merged['isold'] == True][['val1_old', 'val2_old', 'change_id_old']].rename(columns=lambda x: x[:-4]),
        merged[merged['isold'] == False][['val1_new', 'val2_new', 'change_id_new']].rename(columns=lambda x: x[:-4]),
    ])
>>> solution1(df_old, df_new)
  key1 key2  val1  val2  change_id
0   A1   A2     1     3          1
1   A1   B2     2     1          2
2   B1   A2     1     3          1
3   B1   B2     1     4          1
4   C1   B2     2     1          2
>>> solution2(df_old, df_new)
           val1  val2  change_id
key1 key2
A1   A2     1.0   3.0        1.0
B1   A2     1.0   3.0        1.0
     B2     1.0   4.0        1.0
A1   B2     2.0   1.0        2.0
C1   B2     2.0   1.0        2.0
Both of these work, but I am still quite disappointed with the performance on huge DataFrames.
The question is: is there a better way to do this? Any hints for a decent speedup would be welcome...
>>> %timeit solution1(df_old, df_new)
100 loops, best of 3: 10.6 ms per loop
>>> %timeit solution2(df_old, df_new)
100 loops, best of 3: 14.7 ms per loop
Answer 0 (score: 2)
Here is a pretty fast approach:
merged = pd.concat([df_old.reset_index(), df_new.reset_index()])
merged = merged.drop_duplicates(["key1", "key2", "val1", "val2"]).drop_duplicates(["key1", "key2"], keep="last")
# 100 loops, best of 3: 1.69 ms per loop
#   key1 key2  val1  val2  change_id
# 1   A1   A2     1     3          1
# 2   B1   A2     1     3          1
# 3   B1   B2     1     4          1
# 0   A1   B2     2     1          2
# 2   C1   B2     2     1          2
The rationale here is to concatenate all of the rows and simply call drop_duplicates twice, instead of relying on join logic to pick out the desired rows. The first call to drop_duplicates removes the rows originating from df_new that match on both the key and value columns, because the default behavior of this method is to keep the first of the duplicate rows (in this case, the row from df_old). The second call drops duplicates that match on the key columns only, but specifies that the last row of each set of duplicates should be kept.
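To see the two steps separately on the example data (illustration only; step1 and step2 are just intermediate names):

concat = pd.concat([df_old.reset_index(), df_new.reset_index()])
# step 1: the df_new copy of A1/A2 is dropped, because its key and value columns match df_old
step1 = concat.drop_duplicates(['key1', 'key2', 'val1', 'val2'])
# step 2: for A1/B2 the values differ, so the last duplicate (the df_new row, change_id 2) survives
step2 = step1.drop_duplicates(['key1', 'key2'], keep='last')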
This approach assumes the rows are sorted by change_id; given the order in which the example DataFrames are concatenated, that is a safe assumption. If that assumption is wrong for your real data, however, simply call .sort_values('change_id') on merged before dropping the duplicates.
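A minimal sketch of that variant, assuming change_id increases with every newer import:

merged = pd.concat([df_old.reset_index(), df_new.reset_index()])
result = (merged.sort_values('change_id')                           # oldest rows first
                .drop_duplicates(['key1', 'key2', 'val1', 'val2'])  # identical rows: keep the old one
                .drop_duplicates(['key1', 'key2'], keep='last'))    # changed rows: keep the new one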