Here is a snippet showing scikit-learn's precision-recall curve computation.
>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, thresholds = precision_recall_curve(
... y_true, y_scores)
>>> precision
array([ 0.66..., 0.5 , 1. , 1. ])
>>> recall
array([ 1. , 0.5, 0.5, 0. ])
>>> thresholds
array([ 0.35, 0.4 , 0.8 ])
Question:
Why are there only 3 thresholds while precision and recall each contain 4 values? As one can see, the threshold 0.1 is ignored and the computation starts at threshold 0.35.
Answer (score: 1):
The thresholds only go low enough to reach 100% recall. The idea is that you would not normally set a lower threshold, because it would only introduce unnecessary false positives.
The relevant code in the scikit-learn source (https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/metrics/ranking.py):
# stop when full recall attained
# and reverse the outputs so recall is decreasing
last_ind = tps.searchsorted(tps[-1])
sl = slice(last_ind, None, -1)
return np.r_[precision[sl], 1], np.r_[recall[sl], 0], thresholds[sl]
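To make this concrete, here is a minimal sketch (not the library's implementation) that recomputes precision and recall by hand, treating each distinct score in the example above as a candidate threshold:

import numpy as np

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

# For each candidate threshold, predict positive when score >= threshold
# and count true positives, false positives, and false negatives.
for t in np.unique(y_scores):
    y_pred = (y_scores >= t).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    print(f"threshold={t}: precision={tp / (tp + fp):.2f}, recall={tp / (tp + fn):.2f}")

# Expected output:
# threshold=0.1: precision=0.50, recall=1.00
# threshold=0.35: precision=0.67, recall=1.00
# threshold=0.4: precision=0.50, recall=0.50
# threshold=0.8: precision=1.00, recall=0.50

Recall already reaches 1.0 at threshold 0.35; lowering the threshold further to 0.1 only adds a false positive and drops precision from 0.67 to 0.50, so that point is cut off by the `last_ind` slice shown above. The fourth (precision, recall) pair, (1, 0), is the endpoint appended by `np.r_` and has no corresponding threshold, which is why `thresholds` is one element shorter than `precision` and `recall`.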