Create a conditional column in pandas by selecting multiple columns from two different DataFrames

Asked: 2018-11-29 14:30:00

Tags: python pandas dataframe pyspark pyspark-sql

Question: I have two DataFrames:

  1. df1 has coil_id, sample_factor, SEQ. Each coil_id has 449 records (SEQ ranging 1–449), and there are about 1000 unique coil_ids.
  2. df2 has coil_id, SAMPLE, GAUGE. Each coil_id has roughly 500 records (SAMPLE ranging 10–5000; it can be fewer), with the same 1000 unique coil_ids as df1.

df1:

+-------+-------------+---+
|coil_id|sample_factor|SEQ|
+-------+-------------+---+
|E101634|      10.4066|  1|
|E101634|      20.8132|  2|
|E101634|      31.2198|  3|
|E101634|      41.6264|  4|
|E101634|     5220.033|449|
+-------+-------------+---+

df2:

+-------+------+------+
|coil_id|SAMPLE|GAUGE |
+-------+------+------+
|E101634|    10|0.0565|
|E101634|    20|0.0569|
|E101634|    30|0.0567|
|E101634|    40|0.0561|
|E101634|  5000| 0.055|
+-------+------+------+

Because the record counts differ, I can't join the two tables; if I did, my SAMPLE and GAUGE values would get misaligned, so a join is out. Instead, I need to check whether df1.sample_factor lies between a df2.SAMPLE value and the next SAMPLE value, and then compute a prorated gauge. For example: since 10.4 lies between 10 and 20, then 0.0565 + (((0.0569 - 0.0565) / 10) * (10.4 - 10)) essentially prorates the gauge.
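As a quick sanity check on that proration formula (all numbers taken from the example above):

```python
# Worked example of the proration from the question:
# sample_factor 10.4 falls between SAMPLE 10 (gauge 0.0565) and SAMPLE 20 (gauge 0.0569).
lower_sample = 10
lower_gauge, upper_gauge = 0.0565, 0.0569
sample_factor = 10.4

# gauge at the lower sample, plus the per-unit gauge change times the offset past it
new_gauge = lower_gauge + ((upper_gauge - lower_gauge) / 10) * (sample_factor - lower_sample)
print(round(new_gauge, 6))  # 0.056516
```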

I want to iterate over each row of sample_factor in df1, check whether it lies between sample[i] and sample[i+1] in df2, perform the proration, and add the result to df1.

What I've tried:

def new_gauge(row):
    # find the df2 rows bracketing this sample_factor (sketch of the idea; single coil only)
    below = df2[df2['SAMPLE'] <= row['sample_factor']].iloc[-1]
    above = df2[df2['SAMPLE'] > row['sample_factor']].iloc[0]
    return below['GAUGE'] + ((above['GAUGE'] - below['GAUGE']) / 10) * (row['sample_factor'] - below['SAMPLE'])

df1['new_gauge'] = df1.apply(new_gauge, axis=1)

This is only a rough sketch to show the idea of what I want.

Any help is appreciated. Thanks :)

Expected output: (screenshot omitted)

1 Answer:

Answer 0 (score: 0)

Here is starting sample data that matches your expected output:

df1

   coil_id  sample_factor  SEQ
0  E101634        10.4066    1
1  E101634        20.8132    2
2  E101634        31.2198    3
3  E101634        41.6264    4
4  E101634        52.0330    5
5  E101634        62.4396    6
6  E101634      5220.0330  449

df2

   coil_id  SAMPLE   GAUGE
0  E101634      10  0.0550
1  E101634      20  0.0568
2  E101634      30  0.0543
3  E101634      40  0.0531
4  E101634      50  0.0529
5  E101634      60  0.0519
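For anyone following along, the two sample frames above can be rebuilt in a few lines (values copied from the printouts), so the code below is reproducible:

```python
import pandas as pd

# df1: sample factors with sequence numbers (last row mimics the tail record)
df1 = pd.DataFrame({
    'coil_id': ['E101634'] * 7,
    'sample_factor': [10.4066, 20.8132, 31.2198, 41.6264, 52.0330, 62.4396, 5220.0330],
    'SEQ': [1, 2, 3, 4, 5, 6, 449],
})

# df2: measured gauges at each sample position
df2 = pd.DataFrame({
    'coil_id': ['E101634'] * 6,
    'SAMPLE': [10, 20, 30, 40, 50, 60],
    'GAUGE': [0.0550, 0.0568, 0.0543, 0.0531, 0.0529, 0.0519],
})
```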

The first step is a merge_asof, which attaches to each SAMPLE the nearest sample_factor at or above it. Then we compute a new_gauge for every row, but we only actually assign a value where sample_factor lies between the current row's SAMPLE and the next row's SAMPLE, and the next row has the same coil_id.

import pandas as pd

merged = pd.merge_asof(df2.assign(SAMPLE = df2.SAMPLE.astype('float')).sort_values('SAMPLE'), 
                       df1.sort_values('sample_factor'),
                       by='coil_id',
                       left_on='SAMPLE',
                       right_on='sample_factor',
                       direction='forward')
print(merged)
#   coil_id  SAMPLE   GAUGE  sample_factor  SEQ
#0  E101634    10.0  0.0550        10.4066    1
#1  E101634    20.0  0.0568        20.8132    2
#2  E101634    30.0  0.0543        31.2198    3
#3  E101634    40.0  0.0531        41.6264    4
#4  E101634    50.0  0.0529        52.0330    5
#5  E101634    60.0  0.0519        62.4396    6

# Now perform your calculation:
new_gauge = (merged.GAUGE.shift(1) 
             + ((merged.GAUGE - merged.GAUGE.shift(1))/10 
                 * (merged.sample_factor - merged.SAMPLE.shift(1))))

# Assign it only where it makes sense
# Assumes df2 was sorted on ['coil_id',  'SAMPLE']
mask = (merged.sample_factor.between(merged.SAMPLE, merged.SAMPLE.shift(-1)) 
        & (merged.coil_id == merged.coil_id.shift(-1)))

merged.loc[mask, 'new_gauge'] = new_gauge[mask] 

Output: merged

   coil_id  SAMPLE   GAUGE  sample_factor  SEQ  new_gauge
0  E101634    10.0  0.0550        10.4066    1        NaN
1  E101634    20.0  0.0568        20.8132    2   0.056946
2  E101634    30.0  0.0543        31.2198    3   0.053995
3  E101634    40.0  0.0531        41.6264    4   0.052905
4  E101634    50.0  0.0529        52.0330    5   0.052859
5  E101634    60.0  0.0519        62.4396    6        NaN

Here the last row is not assigned, because there is no SAMPLE > 60 in the subset you provided.
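One caveat: the `/10` in the calculation above hardcodes a sample spacing of 10. If the spacing can vary, a per-coil linear interpolation with numpy.interp is an alternative sketch (column names as above; note that np.interp clamps to the edge gauges instead of leaving NaN outside the sampled range):

```python
import numpy as np
import pandas as pd

# small reproduction of the frames above (truncated for brevity)
df1 = pd.DataFrame({
    'coil_id': ['E101634'] * 3,
    'sample_factor': [10.4066, 20.8132, 31.2198],
})
df2 = pd.DataFrame({
    'coil_id': ['E101634'] * 4,
    'SAMPLE': [10, 20, 30, 40],
    'GAUGE': [0.0550, 0.0568, 0.0543, 0.0531],
})

# interpolate each coil's gauge curve at that coil's sample_factor positions
parts = []
for coil, grp in df1.groupby('coil_id'):
    ref = df2.loc[df2['coil_id'] == coil].sort_values('SAMPLE')
    parts.append(pd.Series(
        np.interp(grp['sample_factor'], ref['SAMPLE'], ref['GAUGE']),
        index=grp.index,
    ))
df1['new_gauge'] = pd.concat(parts)
print(df1)
```

This handles uneven spacing automatically because np.interp uses the actual distance between neighbouring SAMPLE values; with spacing of exactly 10 it reproduces the proration formula from the question.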