Printing parameters and scores from GridSearchCV

Posted: 2020-08-25 17:38:26

Tags: python scikit-learn printf grid-search

I can't figure out how to print the scores for all of the different parameter combinations tried by GridSearchCV.

Code:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

pipe_svm = Pipeline([
    ('sc', StandardScaler()),
    ('SVM', SVC())
    ])

params_svm = {'SVM__C': np.logspace(-2, 10, 13),
              'SVM__kernel': ['rbf', 'poly', 'sigmoid']}

search_svm = GridSearchCV(estimator=pipe_svm,
                          param_grid=params_svm,
                          cv=5,
                          return_train_score=True)

search_svm.fit(X_train, y_train)
print(search_svm.best_score_)
print(search_svm.best_params_)

Output:

0.9004240532229588
{'SVM__C': 1.0, 'SVM__kernel': 'rbf'}

That's fine, but I want to print all of the different scores together with the parameters that produced them (to compare them against the best parameters). Below is what I tried; it is missing most of the parameter combinations and their respective scores.

Code:

scores_svm = search_svm.cv_results_['mean_test_score']
for score, C, kernel in zip(scores_svm, np.logspace(-2, 10, 13), ['rbf', 'poly', 'sigmoid']):
    print(f"{C, kernel}: {score:.10f}")

Output:

0.01, rbf: 0.8500203678
0.1, poly: 0.6785667684
1.0, sigmoid: 0.8364788196

The desired output would include every C value in np.logspace(-2, 10, 13) combined with each of the kernels, together with the corresponding score. Something like this:

0.01, rbf: corresponding score
0.1, rbf: corresponding score
1.0, rbf: corresponding score
10.0, rbf: corresponding score
.
.
.

and so on.

1 answer:

Answer 0 (score: 0)

It should be something like this:

kernels = ['rbf', 'poly', 'sigmoid']
C = np.logspace(-2, 10, 13)
# cv_results_ lists the combinations with the parameter names sorted
# ('SVM__C' before 'SVM__kernel'), so the kernel varies fastest: each C value
# owns one block of len(kernels) consecutive scores.
for idx, c in enumerate(C):
    for score, kernel in zip(scores_svm[idx * len(kernels):(idx + 1) * len(kernels)], kernels):
        print(f"{c, kernel}: {score:.10f}")
        

In fact, len(scores_svm) is 13 * 3 = 39 (one mean test score per parameter combination), whereas len(np.logspace(-2, 10, 13)) is 13 and len(['rbf', 'poly', 'sigmoid']) is 3.

zip stops at the shortest of its arguments, so zipping iterables of different lengths only pairs up as many items as the shortest one has, which is 3 here. That is why you only get three of the scores.
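
If you would rather not rely on the ordering of the grid at all, cv_results_['params'] holds one dict of parameter settings per candidate, aligned index-for-index with cv_results_['mean_test_score']. A minimal sketch, reusing the fitted search_svm from above:

for params, score in zip(search_svm.cv_results_['params'],
                         search_svm.cv_results_['mean_test_score']):
    # each entry of 'params' looks like {'SVM__C': 0.01, 'SVM__kernel': 'rbf'}
    print(f"{params['SVM__C'], params['SVM__kernel']}: {score:.10f}")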

Sample code:

from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.model_selection import GridSearchCV

X_train = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1], [3, 1], [3, 2], [2, 3]])
y_train = np.array([1, 1, 2, 2, 1, 2, 1])
pipe_svm = Pipeline([
    ('sc', StandardScaler()),
    ('SVM', SVC())
    ])

params_svm = {'SVM__C': np.logspace(-2, 10, 13),
              'SVM__kernel': ['rbf', 'poly', 'sigmoid']}

search_svm = GridSearchCV(estimator=pipe_svm,
                          param_grid=params_svm,
                          cv=2,
                          return_train_score=True)

search_svm.fit(X_train, y_train)
print(search_svm.best_score_)
print(search_svm.best_params_)

# 0.41666666666666663
# {'SVM__C': 0.01, 'SVM__kernel': 'rbf'}

scores_svm = search_svm.cv_results_['mean_test_score']
kernels = ['rbf', 'poly', 'sigmoid']
C = np.logspace(-2, 10, 13)
for idx, c in enumerate(C):
    for score, kernel in zip(scores_svm[idx * len(kernels):(idx + 1) * len(kernels)], kernels):
        print(f"{c, kernel}: {score:.10f}")

(0.01, 'rbf'): 0.4166666667
(0.01, 'poly'): 0.4166666667
(0.01, 'sigmoid'): 0.4166666667
(0.1, 'rbf'): 0.4166666667
(0.1, 'poly'): 0.4166666667
(0.1, 'sigmoid'): 0.4166666667
(1.0, 'rbf'): 0.4166666667
(1.0, 'poly'): 0.4166666667
(1.0, 'sigmoid'): 0.4166666667
(10.0, 'rbf'): 0.4166666667
(10.0, 'poly'): 0.4166666667
(10.0, 'sigmoid'): 0.4166666667
(100.0, 'rbf'): 0.4166666667
(100.0, 'poly'): 0.4166666667
(100.0, 'sigmoid'): 0.4166666667
(1000.0, 'rbf'): 0.4166666667
.
.
.
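
If pandas is available, the whole cv_results_ dict can also be turned into a DataFrame to inspect every combination and its score in a single table. A minimal sketch, assuming pandas is installed (the param_SVM__C / param_SVM__kernel columns follow scikit-learn's param_<name> naming in cv_results_):

import pandas as pd

results = pd.DataFrame(search_svm.cv_results_)
# one row per parameter combination, highest mean test score first
print(results[['param_SVM__C', 'param_SVM__kernel', 'mean_test_score']]
      .sort_values('mean_test_score', ascending=False)
      .to_string(index=False))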