Concat python dataframes based on unique rows

Time: 2017-05-29 07:34:57

Tags: python pandas pandas-groupby

My dataframes are as follows:

DF1

user_id    username firstname lastname 
 123         abc      abc       abc
 456         def      def       def 
 789         ghi      ghi       ghi

DF2

user_id    username firstname lastname
 111         xyz      xyz       xyz
 456         def      def       def
 234         mnp      mnp       mnp

Now I want an output dataframe like this:

 user_id    username firstname lastname 
 123         abc      abc       abc
 456         def      def       def 
 789         ghi      ghi       ghi
 111         xyz      xyz       xyz
 234         mnp      mnp       mnp

Since user_id 456 is common to both dataframes, it should appear only once in the output. I tried groupby on user_id, i.e. groupby(['user_id']), but it seems groupby has to be followed by some aggregation, which I don't want here.
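
For reference, a minimal sketch to reconstruct the sample frames (the answers below assume pandas is imported as pd and numpy as np; the integer dtype of user_id is an assumption):

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'user_id': [123, 456, 789],
                    'username': ['abc', 'def', 'ghi'],
                    'firstname': ['abc', 'def', 'ghi'],
                    'lastname': ['abc', 'def', 'ghi']})

df2 = pd.DataFrame({'user_id': [111, 456, 234],
                    'username': ['xyz', 'def', 'mnp'],
                    'firstname': ['xyz', 'def', 'mnp'],
                    'lastname': ['xyz', 'def', 'mnp']})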

4 Answers:

Answer 0 (score: 0)

Use concat + drop_duplicates:

df = pd.concat([df1, df2]).drop_duplicates('user_id').reset_index(drop=True)
print (df)
   user_id username firstname lastname
0      123      abc       abc      abc
1      456      def       def      def
2      789      ghi       ghi      ghi
3      111      xyz       xyz      xyz
4      234      mnp       mnp      mnp

A solution with groupby and aggregating with first is slower:

df = pd.concat([df1, df2]).groupby('user_id', as_index=False, sort=False).first()
print (df)
   user_id username firstname lastname
0      123      abc       abc      abc
1      456      def       def      def
2      789      ghi       ghi      ghi
3      111      xyz       xyz      xyz
4      234      mnp       mnp      mnp

EDIT:

Another solution with boolean indexing and numpy.in1d:

df = pd.concat([df1, df2[~np.in1d(df2['user_id'], df1['user_id'])]], ignore_index=True)
print (df)
   user_id username firstname lastname
0      123      abc       abc      abc
1      456      def       def      def
2      789      ghi       ghi      ghi
3      111      xyz       xyz      xyz
4      234      mnp       mnp      mnp

Answer 1 (score: 0)

One approach with masking -

def app1(df1,df2):
    df20 = df2[~df2.user_id.isin(df1.user_id)]
    return pd.concat([df1, df20],axis=0)

A few more approaches that work on the underlying array data, using np.in1d and np.searchsorted to get the mask of matching rows, then stacking the two pieces and constructing the output dataframe from the stacked array data -

def app2(df1,df2):    
    # keep only the rows of df2 whose user_id does not appear in df1
    df20_arr = df2.values[~np.in1d(df2.user_id.values, df1.user_id.values)]
    arr = np.vstack(( df1.values, df20_arr ))
    df_out = pd.DataFrame(arr, columns= df1.columns)
    return df_out

def app3(df1,df2):
    a = df1.values
    b = df2.values

    # rows of df2 whose user_id is not present in df1
    df20_arr = b[~np.in1d(b[:,0], a[:,0])]
    arr = np.vstack(( a, df20_arr ))
    df_out = pd.DataFrame(arr, columns= df1.columns)
    return df_out

def app4(df1,df2):
    a = df1.values
    b = df2.values

    b0 = b[:,0].astype(int)
    as0 = np.sort(a[:,0].astype(int))
    # searchsorted gives each df2 id's insertion position in the sorted df1 ids;
    # clip so ids larger than every df1 id don't index past the end of as0
    idx = np.minimum(np.searchsorted(as0, b0), len(as0) - 1)
    df20_arr = b[as0[idx] != b0]
    arr = np.vstack(( a, df20_arr ))
    df_out = pd.DataFrame(arr, columns= df1.columns)
    return df_out

Timings on the given sample -

In [49]: %timeit app1(df1,df2)
    ...: %timeit app2(df1,df2)
    ...: %timeit app3(df1,df2)
    ...: %timeit app4(df1,df2)
    ...: 
1000 loops, best of 3: 753 µs per loop
10000 loops, best of 3: 192 µs per loop
10000 loops, best of 3: 181 µs per loop
10000 loops, best of 3: 171 µs per loop

# @jezrael's edited solution
In [85]: %timeit pd.concat([df1, df2[~np.in1d(df2['user_id'], df1['user_id'])]], ignore_index=True)
1000 loops, best of 3: 614 µs per loop

It would be interesting to see how these fare on a large dataset.
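
For example, a minimal sketch of how larger random frames could be built for such a timing run (the sizes and the degree of user_id overlap are arbitrary assumptions):

N = 100000
rng = np.random.RandomState(0)
ids1 = rng.choice(np.arange(2 * N), N, replace=False)
ids2 = rng.choice(np.arange(2 * N), N, replace=False)

# scalar string columns are broadcast to the length of user_id
big1 = pd.DataFrame({'user_id': ids1, 'username': 'abc',
                     'firstname': 'abc', 'lastname': 'abc'})
big2 = pd.DataFrame({'user_id': ids2, 'username': 'xyz',
                     'firstname': 'xyz', 'lastname': 'xyz'})

# then, e.g. in IPython:
# %timeit app1(big1, big2)
# %timeit app4(big1, big2)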

Answer 2 (score: 0)

Another approach is to use np.setdiff1d to keep only the user_ids that appear in df2 but not in df1 before concatenating.

pd.concat([df1,df2[df2.user_id.isin(np.setdiff1d(df2.user_id,df1.user_id))]])

Or use a set to get the unique rows from the stacked records of df1 and df2. This one appears to be several times faster (note that it does not preserve the original row order).

pd.DataFrame(data=np.vstack({tuple(row) for row in np.r_[df1.values,df2.values]}),columns=df1.columns)

Timings:

%timeit pd.concat([df1,df2[df2.user_id.isin(np.setdiff1d(df2.user_id,df1.user_id))]])
1000 loops, best of 3: 2.48 ms per loop

%timeit pd.DataFrame(data=np.vstack({tuple(row) for row in np.r_[df1.values,df2.values]}),columns=df1.columns)

1000 loops, best of 3: 632 µs per loop

Answer 3 (score: 0)

One can also use append + drop_duplicates:

# append returns a new DataFrame, so capture the result before dropping duplicates
df = df1.append(df2)
df.drop_duplicates(inplace=True)
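
Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on newer versions the pd.concat form from the first answer gives the same result:

df = pd.concat([df1, df2]).drop_duplicates('user_id').reset_index(drop=True)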