I am trying to see the parameters currently in use inside a custom scoring function for GridSearchCV while the grid search is running. Ideally it would look something like this:
Edit: To clarify, I want to use the parameters from the grid search, so I need to be able to access them inside the function.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def fit(X, y):
    grid = {'max_features': [0.8, 'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]}
    clf = GradientBoostingClassifier()
    score_func = make_scorer(make_custom_score, needs_proba=True)
    model = GridSearchCV(estimator=clf,
                         param_grid=grid,
                         scoring=score_func,
                         cv=5)
def make_custom_score(y_true, y_score):
    '''
    y_true: array-like, shape = [n_samples] Ground truth (true relevance labels)
    y_score: array-like, shape = [n_samples] Predicted scores
    '''
    print(parameters_used_in_current_gridsearch)
    …
    return score
I know I can get the parameters after the run has finished, but I am trying to get them while the code is executing.
Answer 0 (score: 0)
Not sure whether this satisfies your use case, but there is a verbose parameter available for this kind of thing:
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import SGDRegressor

estimator = SGDRegressor()
gscv = GridSearchCV(estimator, {
    'alpha': [0.001, 0.0001], 'average': [True, False],
    'shuffle': [True, False], 'max_iter': [5], 'tol': [None]
}, cv=3, verbose=2)
gscv.fit([[1, 1, 1], [2, 2, 2], [3, 3, 3]], [1, 2, 3])
This prints the following to stdout:
Fitting 3 folds for each of 8 candidates, totalling 24 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
[CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
[CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
[CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
[CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
[Parallel(n_jobs=1)]: Done 24 out of 24 | elapsed: 0.0s finished
You can refer to the documentation, but higher values can also be specified for greater verbosity.
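For example, depending on your scikit-learn version, a higher value such as verbose=3 typically also prints the score for each fit. A minimal sketch reusing the toy data above:

# Higher verbosity: candidate parameters plus per-fit scores and timings
# (the exact output format depends on the scikit-learn version).
gscv = GridSearchCV(estimator, {'alpha': [0.001, 0.0001]}, cv=3, verbose=3)
gscv.fit([[1, 1, 1], [2, 2, 2], [3, 3, 3]], [1, 2, 3])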
Answer 1 (score: 0)
If you actually need to do something between grid search steps, you will need to write your own routine using some of the lower-level scikit-learn functionality. GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain the combinations of parameter values.
The basic loop looks something like this:
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer
from sklearn.model_selection import ParameterGrid, KFold

clf = GradientBoostingClassifier()
grid = {
    'max_features': [0.8, 'sqrt'],
    'subsample': [1, 0.7],
    'min_samples_split': [2, 3],
    'min_samples_leaf': [1, 3],
    'learning_rate': [0.01, 0.1],
    'max_depth': [3, 8, 15],
    'n_estimators': [10, 20, 50]
}
scorer = make_scorer(make_custom_score, needs_proba=True)

sampler = ParameterGrid(grid)
cv = KFold(5)
for params in sampler:
    for ix_train, ix_test in cv.split(X, y):
        # apply the sampled parameters to a fresh copy of the estimator
        clf_fitted = clone(clf).set_params(**params).fit(X[ix_train], y[ix_train])
        score = scorer(clf_fitted, X[ix_test], y[ix_test])
        # do something with the results
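Building on that loop, you could also collect the parameters and their scores as you go and then pick the best combination yourself. The aggregation below is a sketch and not part of the original answer; it reuses clf, sampler, cv and scorer from the snippet above:

import numpy as np

results = []
for params in sampler:
    fold_scores = []
    for ix_train, ix_test in cv.split(X, y):
        clf_fitted = clone(clf).set_params(**params).fit(X[ix_train], y[ix_train])
        fold_scores.append(scorer(clf_fitted, X[ix_test], y[ix_test]))
    # keep the parameter combination together with its mean CV score
    results.append((params, np.mean(fold_scores)))

# the combination with the highest mean score
best_params, best_mean_score = max(results, key=lambda r: r[1])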
Answer 2 (score: 0)
Instead of wrapping your score function with make_scorer(), you can write your own scorer (note the difference between a "custom score" and a "scorer"!). A scorer takes three arguments, with the signature (estimator, X_test, y_test). See the documentation for more details.
Inside this function you have access to the estimator object that the grid search has already fitted on the training data, so you can easily read all of that estimator's parameters. Just make sure you return a float value as the score.
Something like this:
def make_custom_scorer(estimator, X_test, y_test):
    '''
    estimator: scikit-learn estimator, fitted on train data
    X_test: array-like, shape = [n_samples, n_features] Data for prediction
    y_test: array-like, shape = [n_samples] Ground truth (true relevance labels)
    '''
    # Here all_params is a dict of all the parameters in use
    all_params = estimator.get_params()

    # You need to do some filtering to get the parameters you want,
    # but that should be easy (just specify the keys you want)
    parameters_used_in_current_gridsearch = {k: v for k, v in all_params.items()
                                             if k in ['max_features', 'subsample', ..., 'n_estimators']}
    print(parameters_used_in_current_gridsearch)

    y_score = estimator.predict(X_test)

    # Use whichever metric you want here
    score = scoring_function(y_test, y_score)
    return score
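Since GridSearchCV also accepts a callable with this (estimator, X, y) signature directly as its scoring argument, you could then plug the scorer in without make_scorer(). A rough sketch based on the grid from the question:

model = GridSearchCV(estimator=GradientBoostingClassifier(),
                     param_grid=grid,
                     scoring=make_custom_scorer,  # the callable itself, no make_scorer() needed
                     cv=5)
model.fit(X, y)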