Grid search and cross-validation for SVM

Posted: 2018-11-10 20:35:27

Tags: python

I am fitting an SVM with the grid-searched best parameters under 10-fold cross-validation, and I need to understand why the predictions differ between the two accuracy figures I obtained on the training set. Note that I need the best-parameter model's predictions on the training set for further analysis. The code and the resulting output are shown below. Any explanation?

from __future__ import print_function

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn import metrics
from time import time

# datascaled is assumed to be a pre-scaled pandas DataFrame loaded earlier
X = datascaled.iloc[:, 0:13]
y = datascaled['num']

np.random.seed(1)
# Split the dataset into a 70/30 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Parameter grids to be searched by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-2, 1e-3, 1e-4, 1e-5],
                     'C': [0.001, 0.1, 10, 25, 50, 100, 1000]},
                    {'kernel': ['sigmoid'], 'gamma': [1e-2, 1e-3, 1e-4, 1e-5],
                     'C': [0.001, 0.1, 10, 25, 50, 100, 1000]},
                    {'kernel': ['linear'],
                     'C': [0.001, 0.1, 10, 25, 50, 100, 1000]}]

# 10-fold cross-validated grid search on the training set
clf = GridSearchCV(SVC(), tuned_parameters, cv=10,
                   scoring='accuracy')
t0 = time()
clf.fit(X_train, y_train)
t = time() - t0
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print('Training accuracy')
print(clf.best_score_)
print(clf.best_estimator_)
print()
print()
print('****Results****')
svm_pred=clf.predict(X_train)
#print("\t\taccuracytrainkfold: {}".format(metrics.accuracy_score(y_train, svm_pred)))
print("=" * 52)
print("time cost: {}".format(t))
print()
print("confusion matrix\n", metrics.confusion_matrix(y_train, svm_pred))
print()
print("\t\taccuracy: {}".format(metrics.accuracy_score(y_train, svm_pred)))
print("\t\troc_auc_score: {}".format(metrics.roc_auc_score(y_train, svm_pred)))
print("\t\tcohen_kappa_score: {}".format(metrics.cohen_kappa_score(y_train, svm_pred)))
print()
print("\t\tclassification report")
print("-" * 52)
print(metrics.classification_report(y_train, svm_pred)) 

Best parameters set found on development set:

{'C': 1000, 'gamma': 0.01, 'kernel': 'rbf'}

Training accuracy
0.9254658385093167


****Results****
====================================================
time cost: 7.728448867797852

confusion matrix
 [[77  2]
 [ 4 78]]

        accuracy: 0.9627329192546584
        roc_auc_score: 0.9629515282494597
        cohen_kappa_score: 0.9254744638173121

        classification report
----------------------------------------------------
             precision    recall  f1-score   support

          0       0.95      0.97      0.96        79
          1       0.97      0.95      0.96        82

avg / total       0.96      0.96      0.96       161

1 Answer:

Answer 0 (score: 1)

You are training with 10-fold cross-validation and want the prediction accuracy to be computed after each fold. I suggest you do the following.
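
As a side note on where the two figures in the question come from: clf.best_score_ is the mean accuracy over the 10 held-out validation folds, while metrics.accuracy_score(y_train, clf.predict(X_train)) evaluates the refit best estimator on the very data it was trained on, which is usually higher. A minimal sketch that makes the contrast visible (it reuses the fitted clf, X_train and y_train from the question):

    from sklearn.model_selection import cross_val_score

    # Mean accuracy over 10 held-out folds -- comparable to clf.best_score_
    cv_scores = cross_val_score(clf.best_estimator_, X_train, y_train,
                                cv=10, scoring='accuracy')
    print("mean 10-fold CV accuracy :", cv_scores.mean())

    # Accuracy of the refit model on its own training data -- the higher figure
    print("accuracy on training data:",
          metrics.accuracy_score(y_train, clf.predict(X_train)))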

Use sklearn.model_selection.KFold to split the data into 10 folds and create a loop that iterates over each fold, like this:

from sklearn.model_selection import KFold

kf = KFold(n_splits=10)
for train_index, test_index in kf.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]  # .iloc: X and y are pandas objects
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

Inside that loop, reuse your earlier code (repeated below) to build and train the model, but use a smaller cv inside GridSearchCV() than the cv=10 used before. Note that GridSearchCV requires at least two folds, so cv=2 is the smallest valid value:

    clf = GridSearchCV(SVC(), tuned_parameters, cv=2, scoring='accuracy')  # cv must be >= 2
    clf.fit(X_train, y_train)

After training the model on one fold's training data, predict on that same data and measure the accuracy, using the following lines from your code:

    svm_pred=clf.predict(X_train)
    print("\t\taccuracy: {}".format(metrics.accuracy_score(y_train, svm_pred)))

The complete code then looks like this:

from sklearn.model_selection import KFold

kf = KFold(n_splits=10)
for train_index, test_index in kf.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    clf = GridSearchCV(SVC(), tuned_parameters, cv=2, scoring='accuracy')  # cv must be >= 2
    clf.fit(X_train, y_train)

    svm_pred = clf.predict(X_train)
    print("\t\taccuracy: {}".format(metrics.accuracy_score(y_train, svm_pred)))

Hope this helps :)