I have two dataframes, df_1 and df_2. df_1 is my main dataframe and df_2 is the lookup dataframe.
I want to test whether the values in df_1['col_c1'] contain any of the values in df_2['col_a2'].
If so (there can be multiple matches!):
append df_2['col_b2'] to df_1['col_d1']
append df_2['col_c2'] to df_1['col_e1']
How can I achieve this?
I honestly don't know how, so I can't share any code for it.
Example df_1
col_a1 | col_b1 | col_c1 | col_d1 | col_e1
----------------------------------------------------
1_001 | aaaaaa | bbbbccccdddd | |
1_002 | zzzzz | ggggjjjjjkkkkk | |
1_003 | pppp | qqqqffffgggg | |
1_004 | sss | wwwcccyyy | |
1_005 | eeeeee | eecccffffll | |
1_006 | tttt | hhggeeuuuuu | |
Example df_2
col_a2 | col_b2 | col_c2
------------------------------
ccc | 2_001 | some_data_c
jjj | 2_002 | some_data_j
fff | 2_003 | some_data_f
Desired output df_1
col_a1 | col_b1 | col_c1 | col_d1 | col_e1
------------------------------------------------------------------------------
1_001 | aaaaaa | bbbbccccdddd | 2_001 | some_data_c
1_002 | zzzzz | ggggjjjjjkkkkk | 2_002 | some_data_j
1_003 | pppp | qqqqffffgggg | 2_003 | some_data_f
1_004 | sss | wwwcccyyy | 2_001 | some_data_c
1_005 | eeeeee | eecccffffll | 2_001;2_003 | some_data_c; some_data_f
1_006 | tttt | hhggeeuuuuu | |
df_1 has about 45,000 rows, and df_2 about 16,000 rows. (Non-matching rows were added as well.)
I've been struggling with this for hours, but I really don't know how to do it.
I don't think a merge is an option, since there are no exact matches.
Your help is greatly appreciated.
Answer 0 (score: 1)
This will do it:
df['col_d1'] = df.apply(lambda x: ','.join([df2['col_b2'][i] for i in range(len(df2)) if df2['col_a2'][i] in x.col_c1]), axis=1)
df['col_e1'] = df.apply(lambda x: ','.join([df2['col_c2'][i] for i in range(len(df2)) if df2['col_a2'][i] in x.col_c1]), axis=1)
Output
col_a1 col_b1 col_c1 col_d1 \
0 1_001 aaaaaa bbbbccccdddd 2_001
1 1_002 zzzzz ggggjjjjjkkkkk 2_002
2 1_003 pppp qqqqffffgggg 2_003
3 1_004 sss wwwcccyyy 2_001
4 1_005 eeeeee eecccffffll 2_001,2_003
col_e1
0 some_data_c
1 some_data_j
2 some_data_f
3 some_data_c
4 some_data_c,some_data_f
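For reference, a self-contained sketch of this approach using the sample frames from the question (same logic as the answer above, with its `df`/`df2` naming; `df` plays the role of df_1 and `df2` of df_2):

```python
import pandas as pd

# Sample frames taken from the question
df = pd.DataFrame({
    'col_a1': ['1_001', '1_002', '1_003', '1_004', '1_005', '1_006'],
    'col_b1': ['aaaaaa', 'zzzzz', 'pppp', 'sss', 'eeeeee', 'tttt'],
    'col_c1': ['bbbbccccdddd', 'ggggjjjjjkkkkk', 'qqqqffffgggg',
               'wwwcccyyy', 'eecccffffll', 'hhggeeuuuuu'],
})
df2 = pd.DataFrame({
    'col_a2': ['ccc', 'jjj', 'fff'],
    'col_b2': ['2_001', '2_002', '2_003'],
    'col_c2': ['some_data_c', 'some_data_j', 'some_data_f'],
})

# For each row of df, collect every df2 row whose col_a2 value occurs
# as a substring of col_c1, joining multiple matches with a comma
df['col_d1'] = df.apply(lambda x: ','.join(
    df2['col_b2'][i] for i in range(len(df2))
    if df2['col_a2'][i] in x.col_c1), axis=1)
df['col_e1'] = df.apply(lambda x: ','.join(
    df2['col_c2'][i] for i in range(len(df2))
    if df2['col_a2'][i] in x.col_c1), axis=1)

print(df[['col_a1', 'col_d1', 'col_e1']])
```

Rows without any match (such as 1_006) simply get an empty string. Note that this does a Python-level scan of df_2 for every row of df_1, so with ~45,000 × ~16,000 rows it may be slow compared to a vectorized approach.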
Answer 1 (score: 1)
Use:
# extract values matched from df_2["col_a2"] into a new column
s = (df_1['col_c1'].str.extractall(f'({"|".join(df_2["col_a2"])})')[0].rename('new')
       .reset_index(level=1, drop=True))
# repeat rows that have more than one match
df_1 = df_1.join(s)
# add the new columns by mapping the match back to df_2
df_1['col_d1'] = df_1['new'].map(df_2.set_index('col_a2')['col_b2'])
df_1['col_e1'] = df_1['new'].map(df_2.set_index('col_a2')['col_c2'])
# fill non-matching rows so the string join below does not fail on NaN
df_1[['col_d1', 'col_e1']] = df_1[['col_d1', 'col_e1']].fillna('')
# aggregate: join multiple matches back into one row per original row
cols = df_1.columns.difference(['new', 'col_d1', 'col_e1']).tolist()
df = df_1.drop('new', axis=1).groupby(cols).agg(','.join).reset_index()
print(df)
col_a1 col_b1 col_c1 col_d1 col_e1
0 1_001 aaaaaa bbbbccccdddd 2_001 some_data_c
1 1_002 zzzzz ggggjjjjjkkkkk 2_002 some_data_j
2 1_003 pppp qqqqffffgggg 2_003 some_data_f
3 1_004 sss wwwcccyyy 2_001 some_data_c
4 1_005 eeeeee eecccffffll 2_001,2_003 some_data_c,some_data_f
5 1_006 tttt hhggeeuuuuu
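One caveat worth noting (my addition, not part of the original answer): `str.extractall` interprets the joined lookup values as a regular expression, so lookup strings containing characters such as `.`, `+`, or `(` would be treated as regex syntax rather than literal text. Escaping each value with `re.escape` first keeps the matching literal; a minimal sketch on a cut-down example:

```python
import re
import pandas as pd

df_1 = pd.DataFrame({'col_c1': ['bbbbccccdddd', 'eecccffffll']})
df_2 = pd.DataFrame({'col_a2': ['ccc', 'fff']})

# Escape each lookup value so it matches literally, not as a regex
pattern = '|'.join(re.escape(v) for v in df_2['col_a2'])

# Same extraction step as in the answer, now regex-safe
s = (df_1['col_c1'].str.extractall(f'({pattern})')[0]
       .rename('new')
       .reset_index(level=1, drop=True))
print(s)
```

With plain alphabetic lookup values like the sample data the escaping changes nothing, but it makes the pattern robust to arbitrary lookup strings.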