How to maximize recall in a multi-label setting?

Date: 2018-03-18 20:15:17

Tags: python scikit-learn classification text-classification

I have a text classification problem in which I want to assign one of three labels (-1, 0, 1) to each text document. The most important metric is recall: I care that every text that should be labelled "-1" really is labelled "-1". Precision, i.e. that everything labelled "-1" actually belongs to "-1", is less important.

So far I am using a pipeline with logistic regression in scikit-learn. The hyperparameters are tuned with GridSearchCV, but so far it is accuracy that gets maximized.

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score

steps = [('vect', CountVectorizer()),
         ('tfidf', TfidfTransformer()),
         ('clf', LogisticRegression())]

parameters = {'vect__ngram_range': [(1, 1), (1, 2), (1, 3), (1, 4)],
              'tfidf__use_idf': (True, False),
              'clf__C': [0.001, 0.01, 0.1, 1, 10]}

pipeline = Pipeline(steps)
# GridSearchCV's default scoring for classifiers is accuracy
text_clf = GridSearchCV(pipeline, parameters, cv=5)

text_clf.fit(X_train, y_train)
y_pred = text_clf.predict(X_test)

scores = cross_val_score(text_clf, X_test, y_test, cv=5)

Changing this to

text_clf = GridSearchCV(pipeline, parameters, scoring='recall', cv=5)

does not work, because this is a multiclass setting. Does anyone know how to reformulate this so that recall is maximized?

1 Answer:

Answer 0 (score: 1)

The grid search works fine as long as the metric yields a single number that GridSearchCV can use to rank the results.

In a multilabel (or multiclass) setting you have to decide what kind of averaging over the different labels applies. You can use one of the following alternatives (a usage sketch follows the list):

scoring = 'recall_micro'
scoring = 'recall_macro'
scoring = 'recall_weighted'
scoring = 'recall_samples'
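
For example, a minimal sketch that reuses the pipeline from the question and optimizes macro-averaged recall, where each class counts equally regardless of how often it occurs:

# Rank grid-search candidates by the unweighted mean of per-class recall
text_clf = GridSearchCV(pipeline, parameters, scoring='recall_macro', cv=5)
text_clf.fit(X_train, y_train)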

For an explanation of these options, see the documentation of recall_score:

average : string, [None, ‘binary’ (default), ‘micro’, ‘macro’, ‘samples’, ‘weighted’]

    This parameter is required for multiclass/multilabel targets. 
    If None, the scores for each class are returned. Otherwise, this
    determines the type of averaging performed on the data:

    'binary':
        Only report results for the class specified by pos_label. 
        This is applicable only if targets (y_{true,pred}) are binary.

    'micro':
        Calculate metrics globally by counting the total true positives, 
        false negatives and false positives.

    'macro':
        Calculate metrics for each label, and find their unweighted mean. 
        This does not take label imbalance into account.

    'weighted':
        Calculate metrics for each label, and find their average, weighted 
        by support (the number of true instances for each label).
        This alters ‘macro’ to account for label imbalance; it can result in
        an F-score that is not between precision and recall.

    'samples':
        Calculate metrics for each instance, and find their average 
        (only meaningful for multilabel classification where this
        differs from accuracy_score).
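
If what matters is specifically the recall of the "-1" class rather than an average over all three classes, you can also build a custom scorer. A minimal sketch using make_scorer, assuming the class labels are literally -1, 0 and 1 as in the question:

from sklearn.metrics import make_scorer, recall_score

# Recall restricted to the -1 class: with a single entry in `labels`,
# the macro average reduces to that one class's recall.
recall_minus_one = make_scorer(recall_score, labels=[-1], average='macro')

text_clf = GridSearchCV(pipeline, parameters, scoring=recall_minus_one, cv=5)

Note that optimizing only this number can push the model toward predicting "-1" everywhere, so it is worth keeping an eye on precision (or the macro-averaged scores) as well.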