Struggling with scikit-learn and make_scorer

Posted: 2018-11-19 17:14:38

Tags: python machine-learning scikit-learn data-science

I have to implement a classification algorithm on a medical dataset, so I think it is important to have good recall for the disease classes, and I would like to build a scorer around that.

However, I then want to use this scorer in GridSearchCV, so that it is applied to each KFold split. That is where I get stuck: I don't know how to initialize the scorer, because it expects y_true and y_pred to be passed to it right away.

How can I solve this? Do I have to write my own hyperparameter tuning from scratch?
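For context, make_scorer exists precisely for this situation: you hand it the metric function itself rather than y_true/y_pred, and GridSearchCV calls the resulting scorer on every fold. A minimal sketch, restricting recall to two hypothetical disease classes 0 and 1:

```python
from sklearn.metrics import make_scorer, recall_score

# make_scorer turns metric(y_true, y_pred, **kwargs) into
# scorer(estimator, X, y); the keyword arguments are forwarded
# to the metric on every cross-validation fold.
recall_scorer = make_scorer(recall_score, labels=[0, 1], average='macro')
```

The result can then be passed directly as `scoring=recall_scorer` to GridSearchCV.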

1 answer:

Answer 0 (score: 1)

Based on your comment, computing recall for only two of the classes during Scikit-learn's cross-validation iterations is doable.

Consider the following dataset example:

[image: dataset example]
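The file dataset_example.csv presumably looks something like this; only the TSH/T4 feature columns and the diagnosis labels are taken from the getDataset() call further down, while the numeric values here are made up purely for illustration:

```python
import pandas as pd

# Hypothetical contents of dataset_example.csv; only the column
# names and class labels come from the answer's code below.
df = pd.DataFrame({
    'TSH': [1.2, 4.8, 0.3, 2.1],
    'T4': [8.0, 5.5, 12.4, 7.3],
    'diagnosis': ['normal', 'compensated_hypothyroid',
                  'hyperthyroid', 'primary_hypothyroid'],
})
```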


You can use the make_scorer function to grab that metadata during cross-validation, like this:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, make_scorer
from sklearn.model_selection import GridSearchCV, StratifiedKFold, StratifiedShuffleSplit
import numpy as np


def getDataset(path, x_attr, y_attr, mapping):
    """
    Extract dataset from CSV file
    :param path: location of csv file
    :param x_attr: list of Features Names
    :param y_attr: Y header name in CSV file
    :param mapping: dictionary of the classes integers
    :return: tuple, (X, Y)
    """
    df = pd.read_csv(path)
    df.replace(mapping, inplace=True)
    X = np.array(df[x_attr]).reshape(len(df), len(x_attr))
    Y = np.array(df[y_attr])
    return X, Y


def custom_recall_score(y_true, y_pred):
    """
    Workaround for the recall score
    :param y_true: Ground Truth during iterations
    :param y_pred: Y predicted during iterations
    :return: float, recall
    """
    wanted_labels = [0, 1]
    assert set(wanted_labels).issubset(y_true)
    wanted_indices = [y_true.tolist().index(x) for x in wanted_labels]
    wanted_y_true = [y_true[x] for x in wanted_indices]
    wanted_y_pred = [y_pred[x] for x in wanted_indices]
    recall_ = recall_score(wanted_y_true, wanted_y_pred,
                           labels=wanted_labels, average='macro')
    print("Wanted Indices: {}".format(wanted_indices))
    print("Wanted y_true: {}".format(wanted_y_true))
    print("Wanted y_pred: {}".format(wanted_y_pred))
    print("Recall during cross validation: {}".format(recall_))
    return recall_


def run(X_data, Y_data):
    sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_index, test_index = next(sss.split(X_data, Y_data))
    X_train, X_test = X_data[train_index], X_data[test_index]
    Y_train, Y_test = Y_data[train_index], Y_data[test_index]
    param_grid = {'C': [0.1, 1]} # or whatever parameter you want
    # I am using LR just for example
    model = LogisticRegression(solver='saga', random_state=0)
    clf = GridSearchCV(model, param_grid,
                       cv=StratifiedKFold(n_splits=2),
                       return_train_score=True,
                       scoring=make_scorer(custom_recall_score))
    clf.fit(X_train, Y_train)
    print(clf.cv_results_)


X_data, Y_data = getDataset("dataset_example.csv", ['TSH', 'T4'], 'diagnosis',
                            {'compensated_hypothyroid': 0, 'primary_hypothyroid': 1,
                             'hyperthyroid': 2, 'normal': 3})
run(X_data, Y_data)

Sample of the results:

Wanted Indices: [3, 5]
Wanted y_true: [0, 1]
Wanted y_pred: [3, 3]
Recall during cross validation: 0.0
...
...
Wanted Indices: [0, 4]
Wanted y_true: [0, 1]
Wanted y_pred: [1, 1]
Recall during cross validation: 0.5
...
...
{'param_C': masked_array(data=[0.1, 1], mask=[False, False],
  fill_value='?', dtype=object), 
  'mean_score_time': array([0.00094521, 0.00086224]), 
  'mean_fit_time': array([0.00298035, 0.0023526 ]), 
  'std_score_time': array([7.02142715e-05, 1.78813934e-06]), 
  'mean_test_score': array([0.21428571, 0.5       ]), 
  'std_test_score': array([0.24743583, 0.        ]), 
  'params': [{'C': 0.1}, {'C': 1}], 
  'mean_train_score': array([0.25, 0.5 ]), 
  'std_train_score': array([0.25, 0.  ]), 
  ....
  ....}

Warning

You must use StratifiedShuffleSplit / StratifiedKFold and have balanced classes to guarantee a stratified distribution of the classes during the iterations, otherwise the assertion above may fail!
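To see why stratification matters here, a small sketch (the class labels 0-3 mirror the mapping used above; the data itself is made up): with balanced classes, StratifiedKFold puts at least one sample of every class into each held-out fold, so the issubset assertion in custom_recall_score cannot fail.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# two samples per class, so each of the 2 stratified folds
# holds exactly one sample of every class
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])
X = np.zeros((len(y), 1))  # features are irrelevant for the split itself

for _, test_idx in StratifiedKFold(n_splits=2).split(X, y):
    assert {0, 1}.issubset(y[test_idx])  # wanted labels always present
```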