I am trying to fit a model to a dataset with the following structure:
# Import stuff and generate dataset.
import sklearn as skl
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn import preprocessing
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn import metrics
from tempfile import mkdtemp
from shutil import rmtree
from joblib import Memory  # sklearn.externals.joblib was removed in scikit-learn 0.23
X, y = skl.datasets.make_classification(n_samples=1400, n_features=11, n_informative=5, n_classes=2, weights=[0.94, 0.06], flip_y=0.05, random_state=42)
X_train, X_test, y_train, y_test = skl.model_selection.train_test_split(X, y, test_size=0.3, random_state=42)
# 1. Instantiate a scaler.
#normer = preprocessing.Normalizer()
normer = preprocessing.StandardScaler()
# 2. Instantiate a Linear Support Vector Classifier.
svm1 = svm.SVC(probability=True, class_weight={1: 10})
# 3. Forge normalizer and classifier into a pipeline. Make sure the pipeline steps can be cached during the grid search.
cached = mkdtemp()
memory = Memory(location=cached, verbose=1)  # joblib renamed 'cachedir' to 'location'
pipe_1 = Pipeline(steps=[('normalization', normer), ('svm', svm1)], memory=memory)
# 4. Instantiate Cross Validation
cv = skl.model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# 5. Instantiate the Grid Search for Hyperparameter Tuning
params = [{"svm__kernel": ["linear"], "svm__C": [1, 10, 100, 1000]},
          {"svm__kernel": ["rbf"], "svm__C": [1, 10, 100, 1000], "svm__gamma": [0.001, 0.0001]}]
grd = GridSearchCV(pipe_1, params, scoring='roc_auc', cv=cv)
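(Aside: the `memory` argument above is what lets the grid search reuse a fitted transformer instead of re-fitting it for every parameter combination. A minimal self-contained sketch of that caching behavior, using a toy dataset and modern imports, would look like this:)

```python
from tempfile import mkdtemp
from shutil import rmtree

from joblib import Memory
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data just for the sketch.
X, y = make_classification(n_samples=200, random_state=0)

cached = mkdtemp()
memory = Memory(location=cached, verbose=0)
pipe = Pipeline(steps=[('normalization', StandardScaler()),
                       ('svm', SVC())], memory=memory)

pipe.fit(X, y)  # first fit: the scaler is fitted and written to the cache
pipe.fit(X, y)  # second fit: the fitted scaler is loaded from the cache

rmtree(cached)  # remove the cache directory when done
```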
When I call

y_pred = grd3.fit(X_train, y_train).predict_proba(X_test)[:, 1]

the program freezes in my Jupyter notebook. I aborted it after 20 minutes. When I use preprocessing.Normalizer() instead of StandardScaler, .fit() finishes within two or three minutes.
What could be the problem here?
Edit: here is the output of GridSearchCV():
GridSearchCV(cv=KFold(n_splits=5, random_state=2, shuffle=True), error_score='raise',
       estimator=Pipeline(memory=None,
            steps=[('normalization', StandardScaler(copy=True, with_mean=True, with_std=True)),
                   ('svm', SVC(C=1.0, cache_size=200, class_weight={1: 10}, coef0=0.0,
                               decision_function_shape='ovr', degree=3, gamma='auto',
                               kernel='rbf', max_iter=-1, probability=True, random_state=None,
                               shrinking=True, tol=0.001, verbose=False))]),
       fit_params=None, iid=True, n_jobs=1,
       param_grid=[{'svm__kernel': ['linear'], 'svm__C': [1, 10, 100, 1000]},
                   {'svm__kernel': ['rbf'], 'svm__C': [1, 10, 100, 1000], 'svm__gamma': [0.001, 0.0001]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score=True, scoring='roc_auc', verbose=0)
Answer 0 (score: 1)
Thanks for replying to my comment (I had not seen your data-generation code, my mistake).
There is a typo in your code; it should be:

y_pred = grd.fit(X_train, y_train).predict_proba(X_test)[:, 1]

not

y_pred = grd3.fit(X_train, y_train).predict_proba(X_test)[:, 1]

Judging from the log, though, it does not actually freeze; it just gets VERRRRY slow when the grid search tests C=1000.
Does C really need to be that high?
Testing on my machine (for the linear kernel; RBF may take even longer):

svm__C = [10, 100, 1000] takes [1.8 s, 16 s, 127 s] respectively.

So I would suggest testing C only up to 200 or 500, unless you plan to let the multi-fold CV grid search run overnight.
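As a sketch of that suggestion (the parameter values are illustrative, not tuned, and I use a smaller toy dataset so it runs quickly), the grid can be capped and run with `verbose` set so you see per-fit progress instead of an apparent freeze:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy imbalanced data standing in for the real set.
X, y = make_classification(n_samples=300, n_features=11, n_informative=5,
                           weights=[0.9, 0.1], random_state=42)

pipe = Pipeline([('normalization', StandardScaler()),
                 ('svm', SVC(probability=True, class_weight={1: 10}))])

# C capped at 200: each tenfold increase in C roughly multiplies the fit time.
params = [{"svm__kernel": ["linear"], "svm__C": [1, 10, 100, 200]},
          {"svm__kernel": ["rbf"], "svm__C": [1, 10, 100, 200],
           "svm__gamma": [0.001, 0.0001]}]

cv = KFold(n_splits=5, shuffle=True, random_state=42)
grd = GridSearchCV(pipe, params, scoring='roc_auc', cv=cv,
                   verbose=2, n_jobs=-1)  # verbose=2 logs each fit as it runs
grd.fit(X, y)
print(grd.best_params_)
```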
More generally, both the grid search's fit and its predict_proba call take a lot of time.
I would suggest splitting them into two steps, which makes it easier to tell where the time is going:
grd.fit(X_train, y_train)
y_pred = grd.predict_proba(X_test)[:, 1]