Is there a way to run a grid search over parameter values that optimizes the score (e.g. 'f1') for a selected class, instead of the default score over all classes?
[EDIT] The idea is that such a grid search should return the set of parameters that maximizes the score (e.g. 'f1', 'accuracy', 'recall') for the selected class only, rather than the overall score across all classes. This approach seems useful, for example, with highly imbalanced datasets, when trying to build a classifier that does a reasonable job on a class with few instances.
Example of GridSearchCV using the default scoring method (here: 'f1' over all classes):
from __future__ import print_function
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4, 1e-5],
                     'C': [1, 50, 100, 500, 1000, 5000]},
                    {'kernel': ['linear'], 'C': [1, 100, 500, 1000, 5000]}]
clf = GridSearchCV(SVC(), tuned_parameters, cv=4, scoring='f1', n_jobs=-1)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_estimator_)
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
How can I optimize the parameters for the best performance on a selected class, or include a range of class_weight values to test in GridSearchCV?
Answer 0 (score: 3)
Yes, you'll want to use the scoring parameter in GridSearchCV(). There are a handful of pre-built scoring functions you can reference via string (such as 'f1'); the full list can be found here: http://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values. Alternatively, you can make your own custom scoring function with sklearn.metrics.make_scorer.
If that isn't enough detail for you, post a reproducible example and we can plug this into some actual code.
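For example, here is a minimal sketch (my own illustration, not part of the original answer) of such a custom scorer that evaluates the f1 score of a single class, assuming the class of interest is labelled 1 and reusing SVC and the tuned_parameters grid from the question:
from sklearn.metrics import f1_score, make_scorer
# Restrict the f1 computation to the class labelled 1: with labels=[1] and
# average='macro', the average is taken over that single class, so the scorer
# returns the f1 of class 1 alone.
f1_class1 = make_scorer(f1_score, labels=[1], average='macro')
clf = GridSearchCV(SVC(), tuned_parameters, cv=4, scoring=f1_class1, n_jobs=-1)
clf.fit(X_train, y_train)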
Answer 1 (score: 2)
Scoring metrics that require additional parameters are not among the pre-built scoring functions available to grid search.
In this case, the additional parameter needed is the class that is to be scored.
You need to import make_scorer and fbeta_score from sklearn.metrics. make_scorer converts a metric into a callable that can be used for model evaluation.
The F-beta score is the weighted harmonic mean of precision and recall, reaching its best value at 1 and its worst value at 0.
Parameters of fbeta_score:
beta: beta < 1 lends more weight to precision, while beta > 1 favors recall; beta -> 0 considers only precision, and beta -> inf only recall.
pos_label: specifies the class that is to be scored (str or int, default 1).
A code example follows:
from sklearn.metrics import make_scorer, fbeta_score
f2_score = make_scorer(fbeta_score, beta=2, pos_label=1)
clf = GridSearchCV(SVC(), tuned_parameters, cv=4, scoring=f2_score, n_jobs=-1)
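To also cover the class_weight part of the question, one possible sketch (an assumption on my part, not taken from the answer) is to add class_weight to the parameter grid so that GridSearchCV tests it together with the other hyperparameters; with SVC, class_weight can be a dict mapping a class label to a weight, or the string 'balanced' ('auto' in older scikit-learn releases):
# Hypothetical extension of the question's grid: also try different class weights.
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4, 1e-5],
                     'C': [1, 50, 100, 500, 1000, 5000],
                     'class_weight': ['balanced', {1: 5}, {1: 10}]},
                    {'kernel': ['linear'], 'C': [1, 100, 500, 1000, 5000],
                     'class_weight': ['balanced', {1: 5}, {1: 10}]}]
clf = GridSearchCV(SVC(), tuned_parameters, cv=4, scoring=f2_score, n_jobs=-1)
clf.fit(X_train, y_train)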