Pandas: the apply function I am using gives me wrong results

Date: 2016-02-19 05:58:19

Tags: python pandas group-by apply

I have a dataset that looks like this:

     a_id b_received brand_id c_consumed type_received       date  output
0    sam       soap     bill        oil       edibles 2011-01-01       1   
1    sam        oil    chris        NaN       utility 2011-01-02       1   
2    sam      brush      dan       soap       grocery 2011-01-03       0   
3  harry        oil      sam      shoes      clothing 2011-01-04       1   
4  harry      shoes     bill        oil       edibles 2011-01-05       1   
5  alice       beer      sam       eggs     breakfast 2011-01-06       0   
6  alice      brush    chris      brush      cleaning 2011-01-07       1   
7  alice       eggs      NaN        NaN       edibles 2011-01-08       1   

I am using the following code:

def probability(x):
    y = []
    for i in range(len(x)):
        y.append(float(x[i]) / float(len(x)))
    return y

df2['prob'] = (df2.groupby('a_id')
                  .apply(probability(['output']))
                  .reset_index(level='a_id', drop=True))

The desired result is a new column with the following values:

    prob  
 0  0.333334  
 1  0.333334  
 2  0.0  
 3  0.5  
 4  0.5  
 5  0     
 6  0.333334     
 7  0.333334     

But I am getting the error:

y.append(float(x[i])/float(len(x)))
ValueError: could not convert string to float: output

The output column is of int type. I don't understand why I am getting this error.

I am trying to calculate, for each person, the probability that a product they received was consumed, which is given by the output column. For example, if sam received soap and soap also appears in sam's c_consumed column, then it was "consumed", so the output is 1; otherwise the output is 0.

Now, since sam received 3 products of which he consumed 2, the probability of consumption for each product is 1/3. So the probability should be 0.333334 wherever the output is 1, and 0 wherever the output is 0.
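In case it helps, here is roughly how an output flag like this can be computed (a sketch assuming the consumption rule above, not the exact code I used): a received product is flagged as consumed when it appears anywhere in the same person's c_consumed values.

import pandas as pd

# Hypothetical reconstruction: 1 if the received product appears anywhere
# in the same person's c_consumed column, else 0.
df2['output'] = (df2.groupby('a_id', group_keys=False)
                    .apply(lambda g: g['b_received']
                                      .isin(g['c_consumed'])
                                      .astype(int)))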

How do I achieve the desired result?

1 Answer:

Answer 0 (score: 1)

The error occurs because probability(['output']) calls the function immediately with the literal list ['output'], so float('output') fails on the string 'output'; apply expects to be passed the function itself. I think you can simply select the output column in the groupby, .groupby('a_id')['output'], so the function is applied to an already grouped GroupBy object, and then use a function probability that just returns the column output divided by its len:

def probability(x):
    # x is the output Series of one a_id group
    return x / len(x)

df2['prob'] = (df2.groupby('a_id')['output']
                  .apply(probability)
                  .reset_index(level='a_id', drop=True))

Or the same with a lambda:

df2['prob'] = (df2.groupby('a_id')['output']
                  .apply(lambda x: x / len(x))
                  .reset_index(level='a_id', drop=True))

A simpler and faster solution is to use transform, which broadcasts the per-group count back to the original rows:

df2['prob'] = df2['output'] / df2.groupby('a_id')['output'].transform('count')
print df2
    a_id b_received brand_id c_consumed type_received        date  output  \
0    sam       soap     bill        oil       edibles  2011-01-01       1   
1    sam        oil    chris        NaN       utility  2011-01-02       1   
2    sam      brush      dan       soap       grocery  2011-01-03       0   
3  harry        oil      sam      shoes      clothing  2011-01-04       1   
4  harry      shoes     bill        oil       edibles  2011-01-05       1   
5  alice       beer      sam       eggs     breakfast  2011-01-06       0   
6  alice      brush    chris      brush      cleaning  2011-01-07       1   
7  alice       eggs      NaN        NaN       edibles  2011-01-08       1   

       prob  
0  0.333333  
1  0.333333  
2  0.000000  
3  0.500000  
4  0.500000  
5  0.000000  
6  0.333333  
7  0.333333  
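One caveat, not specific to this question but standard pandas behavior: count() skips NaN values, so the division above uses the number of non-null output entries per group. If output could contain NaN and you want to divide by the full group size instead, a sketch like this should work:

# Assumption: every row of the group should be counted, NaN or not.
# len(group) returns the full group size, broadcast back by transform.
df2['prob'] = df2['output'] / df2.groupby('a_id')['output'].transform(len)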

Timings:

In [505]: %timeit (df2.groupby('a_id')['output'].apply(lambda x: x / len(x) ).reset_index(level='a_id', drop=True))
The slowest run took 10.99 times longer than the fastest. This could mean that an intermediate result is being cached 
100 loops, best of 3: 1.73 ms per loop

In [506]: %timeit df2['output'] / df2.groupby('a_id')['output'].transform('count')
The slowest run took 5.03 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 449 µs per loop