I want to run a GridSearchCV over a RandomForestClassifier in scikit-learn, and I have a custom scoring function I would like to use. The scoring function only works when it is given probabilities, i.e. it must call rfc.predict_proba(...) rather than rfc.predict(...). How can I tell GridSearchCV to use predict_proba() instead of predict()?
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def my_custom_loss_func(ground_truth, predictions):
    # predictions must be probabilities - e.g. model.predict_proba()
    # example code here:
    diff = np.abs(ground_truth - predictions).max()
    return np.log(1 + diff)

param_grid = {'min_samples_leaf': [1, 2, 5, 10, 20, 50, 100], 'n_estimators': [100, 200, 300]}
grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=my_custom_loss_func)
Answer 0 (score: 4)
See the documentation here: the scoring callable should take (estimator, X, y) as its arguments, so you can call estimator.predict_proba(X) inside its definition.
Alternatively, you can use make_scorer with needs_proba=True.
A complete code example:
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer
import pandas as pd
import numpy as np

X, y = make_classification()

def my_custom_loss_func_est(estimator, X, y):
    # scoring callable with the (estimator, X, y) signature;
    # it calls predict_proba itself and returns the negated loss
    # (so that greater is better, as GridSearchCV expects)
    diff = np.abs(y - estimator.predict_proba(X)[:, 1]).max()
    return -np.log(1 + diff)

def my_custom_loss_func(ground_truth, predictions):
    # plain loss function: predictions are probabilities,
    # e.g. the output of model.predict_proba()
    diff = np.abs(ground_truth - predictions[:, 1]).max()
    return np.log(1 + diff)

custom_scorer = make_scorer(my_custom_loss_func,
                            greater_is_better=False,
                            needs_proba=True)
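As a quick sanity check (a minimal sketch; the fitted rfc below is only for illustration), both callables can be evaluated by hand, since a scorer is just a callable with the signature (estimator, X, y):

rfc = RandomForestClassifier(n_estimators=100).fit(X, y)
# the estimator-based function is already a valid scorer
print(my_custom_loss_func_est(rfc, X, y))
# the make_scorer object has the same signature; because greater_is_better=False
# it returns the negated loss, so the two values should agree
print(custom_scorer(rfc, X, y))
# note: some newer scikit-learn versions pass only the positive-class probability
# column to a needs_proba scorer for binary targets, in which case the [:, 1]
# indexing inside my_custom_loss_func would have to be dropped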
Using the scorer object:
param_grid = {'min_samples_leaf': [10, 50], 'n_estimators': [100, 200]}
grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=custom_scorer, return_train_score=True)
grid.fit(X, y)

pd.DataFrame(grid.cv_results_)[['mean_test_score',
                                'mean_train_score',
                                'param_min_samples_leaf',
                                'param_n_estimators']]
   mean_test_score  mean_train_score param_min_samples_leaf param_n_estimators
0        -0.505201         -0.495011                     10                100
1        -0.509190         -0.498283                     10                200
2        -0.406279         -0.406292                     50                100
3        -0.406826         -0.406862                     50                200
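Because greater_is_better=False, make_scorer negates the loss before reporting it, which is why the scores above are negative (values closer to zero are better). The same scorer object can be passed anywhere scikit-learn accepts a scoring argument, for example cross_val_score (a minimal sketch, with parameters picked from the grid above):

from sklearn.model_selection import cross_val_score

scores = cross_val_score(RandomForestClassifier(min_samples_leaf=50, n_estimators=100),
                         X, y, scoring=custom_scorer, cv=5)
print(scores.mean())  # negated loss, comparable to mean_test_score above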
Using the (estimator, X, y) function directly as the scorer is just as easy:
grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=my_custom_loss_func_est, return_train_score=True)
grid.fit(X, y)

pd.DataFrame(grid.cv_results_)[['mean_test_score',
                                'mean_train_score',
                                'param_min_samples_leaf',
                                'param_n_estimators']]
   mean_test_score  mean_train_score param_min_samples_leaf param_n_estimators
0        -0.509098         -0.491462                     10                100
1        -0.497693         -0.490936                     10                200
2        -0.409025         -0.408957                     50                100
3        -0.409525         -0.409500                     50                200
The results of the two runs differ because the cv folds are different (I assume — I'm too lazy right now to set a seed and edit again). (Also, is there a better way to paste code here without indenting everything by hand?)
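To make the two runs directly comparable, one option (a sketch; the seeds below are arbitrary) is to give both grid searches the same pre-seeded CV splitter and a fixed random_state for the forest:

from sklearn.model_selection import StratifiedKFold

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
grid_a = GridSearchCV(RandomForestClassifier(random_state=0), param_grid=param_grid,
                      scoring=custom_scorer, cv=cv, return_train_score=True).fit(X, y)
grid_b = GridSearchCV(RandomForestClassifier(random_state=0), param_grid=param_grid,
                      scoring=my_custom_loss_func_est, cv=cv, return_train_score=True).fit(X, y)
# with identical folds and identically seeded forests, the cv_results_ scores should match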