How do I print precision, recall and fscore using Python?

Time: 2020-04-01 08:56:22

Tags: python machine-learning scikit-learn deep-learning nlp

I want to calculate and print precision, recall, fscore and support using sklearn.metrics in Python. I am doing NLP, so my y_test and y_pred are basically words before the vectorization step.

Here is some information that may help:

y_test:  [0 0 0 1 1 0 1 1 1 0]
y_pred [0.86 0.14 1.   0.   1.   0.   0.04 0.96 0.01 0.99 1.   0.   0.01 0.99
 0.41 0.59 0.02 0.98 1.   0.  ]

x_train 50
y_train 50
x_test 10
y_test 10
x_valid 6
y_valid 6

y_pred dimension:  (20,)
y_test dimension:  (10,)

Full traceback of the error:

  Traceback (most recent call last):
  File "C:\Users\iduboc\Documents\asd-dev\train.py", line 324, in <module>
    precision, recall, fscore, support = score(y_test, y_pred)
  File "C:\Users\iduboc\Python1\envs\asd-v3-1\lib\site-packages\sklearn\metrics\classification.py", line 1415, in precision_recall_fscore_support
    pos_label)
  File "C:\Users\iduboc\Python1\envs\asd-v3-1\lib\site-packages\sklearn\metrics\classification.py", line 1239, in _check_set_wise_labels
    y_type, y_true, y_pred = _check_targets(y_true, y_pred)
  File "C:\Users\iduboc\Python1\envs\asd-v3-1\lib\site-packages\sklearn\metrics\classification.py", line 71, in _check_targets
    check_consistent_length(y_true, y_pred)
  File "C:\Users\iduboc\Python1\envs\asd-v3-1\lib\site-packages\sklearn\utils\validation.py", line 205, in check_consistent_length
    " samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [10, 20]

My code:

from sklearn.metrics import precision_recall_fscore_support as score

precision, recall, fscore, support = score(y_test, y_pred)
print('precision: {}'.format(precision))
print('recall: {}'.format(recall))
print('fscore: {}'.format(fscore))
print('support: {}'.format(support))

My code to predict the values:

elif clf == 'rndforest':

    # No validation data in rnd forest
    x_train = np.concatenate((x_train, x_valid))
    y_train = np.concatenate((y_train, y_valid))

    model = RandomForestClassifier(n_estimators=int(clf_params['n_estimators']),
                                   max_features=clf_params['max_features'])
    model.fit(pipe_vect.transform(x_train), y_train)

    datetoday = datetime.today().strftime('%d-%b-%Y-%H_%M')
    model_name_save = abspath(os.path.join("models", dataset, name_file + '-' +
                                           vect + reduction + '-rndforest' +
                                           datetoday + '.pickle'))
    print("Model d'enregistrement : ", model_name_save)

    x_test_vect = pipe_vect.transform(x_test)

    y_pred = model.predict_proba(x_test_vect)  

1 Answer:

Answer 0 (score: 0)

The error occurs because the prediction vector and the ground-truth vector have different sizes. The function precision_recall_fscore_support only works when those two vectors have the same length.

See the documentation:

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
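Judging by your code, y_pred comes from predict_proba, which for a binary classifier returns one probability per class, so 10 test samples yield 20 values. Here is a minimal, self-contained sketch (toy data, not your dataset) showing that shape and how to keep a single score per sample:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for the vectorized features and labels from the question.
X = np.random.RandomState(0).rand(10, 5)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=10).fit(X, y)

# predict_proba returns one probability per class: shape (n_samples, 2).
# Flattened, that is 2 * 10 = 20 values, matching the "[10, 20]" mismatch
# reported in the traceback.
proba = model.predict_proba(X)
print(proba.shape)        # (10, 2)

# Keep only the probability of the positive class (column 1),
# so there is exactly one score per sample.
y_scores = proba[:, 1]
print(y_scores.shape)     # (10,)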

In addition, this function expects discrete class labels, not continuous values. If you pass it a list of floats between 0 and 1 (like your y_pred list), you get the following error:

ValueError: Classification metrics can't handle a mix of binary and continuous targets

Example code that produces this error:

y_test =  [0., 0., 0., 1., 1.]
y_pred = [0.86, 0.14, 1., 0., 1.]

from sklearn.metrics import precision_recall_fscore_support as score

precision, recall, fscore, support = score(y_test, y_pred)
print('precision: {}'.format(precision))
print('recall: {}'.format(recall))
print('fscore: {}'.format(fscore))
print('support: {}'.format(support))

So, if you want to compute these metrics, you must somehow decide which values of the prediction vector count as 1 (a positive prediction) and which count as 0 (a negative prediction). For example, you can use a single threshold (such as 0.5), try several thresholds and pick the best one, or plot curves of the different metrics at different threshold levels (e.g. 0.1, 0.2, 0.3, and so on).
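For instance, here is a minimal sketch of the threshold approach. It assumes the 20 numbers in your y_pred are (class 0, class 1) probability pairs from predict_proba; if that is not the case, just replace y_scores with your own positive-class probabilities:

import numpy as np
from sklearn.metrics import precision_recall_fscore_support as score

y_test = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0])

# The 20 values from the question, read as (class 0, class 1) probability
# pairs; keeping the second column gives one positive-class score per sample.
proba = np.array([0.86, 0.14, 1., 0., 1., 0., 0.04, 0.96, 0.01, 0.99,
                  1., 0., 0.01, 0.99, 0.41, 0.59, 0.02, 0.98, 1., 0.]).reshape(-1, 2)
y_scores = proba[:, 1]

# Turn the continuous scores into hard 0/1 labels with a 0.5 threshold.
y_pred = (y_scores >= 0.5).astype(int)

precision, recall, fscore, support = score(y_test, y_pred)
print('precision: {}'.format(precision))
print('recall: {}'.format(recall))
print('fscore: {}'.format(fscore))
print('support: {}'.format(support))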