I'm building a classifier using an SVM and want to perform a grid search to help automatically find the best model. Here's the code:
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
X.shape # (22343, 323)
y.shape # (22343, 1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0
)
tuned_parameters = [
    {
        'estimator__kernel': ['rbf'],
        'estimator__gamma': [1e-3, 1e-4],
        'estimator__C': [1, 10, 100, 1000]
    },
    {
        'estimator__kernel': ['linear'],
        'estimator__C': [1, 10, 100, 1000]
    }
]
model_to_set = OneVsRestClassifier(SVC(), n_jobs=-1)
clf = GridSearchCV(model_to_set, tuned_parameters)
clf.fit(X_train, y_train)
I get the following error message (this isn't the whole stack trace, just the last 3 calls):
----------------------------------------------------
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in split(self, X, y, groups)
88 X, y, groups = indexable(X, y, groups)
89 indices = np.arange(_num_samples(X))
---> 90 for test_index in self._iter_test_masks(X, y, groups):
91 train_index = indices[np.logical_not(test_index)]
92 test_index = indices[test_index]
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in _iter_test_masks(self, X, y, groups)
606
607 def _iter_test_masks(self, X, y=None, groups=None):
--> 608 test_folds = self._make_test_folds(X, y)
609 for i in range(self.n_splits):
610 yield test_folds == i
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in _make_test_folds(self, X, y, groups)
593 for test_fold_indices, per_cls_splits in enumerate(zip(*per_cls_cvs)):
594 for cls, (_, test_split) in zip(unique_y, per_cls_splits):
--> 595 cls_test_folds = test_folds[y == cls]
596 # the test split can be too big because we used
597 # KFold(...).split(X[:max(c, n_splits)]) when data is not 100%
IndexError: too many indices for array
Also, when I try reshaping the array so that y is (22343,), I find that GridSearch never finishes, even if I set tuned_parameters to only default values.
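For reference, that reshape is a one-liner with numpy's ravel (a minimal sketch; y is the array from the code above). scikit-learn's stratified CV expects a 1-D label array, which is likely why the (22343, 1) shape triggers the IndexError:

import numpy as np

# Flatten the (22343, 1) column vector into a 1-D array of shape (22343,)
y = np.ravel(y)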
Here are the versions of all the packages, in case it helps:
Python: 3.5.2
scikit-learn: 0.18
pandas: 0.19.0
Answer 0 (score: 3)
There doesn't seem to be anything wrong with your implementation.
However, as mentioned in the sklearn documentation, SVC's "fit time complexity is more than quadratic with the number of samples, which makes it hard to scale to datasets with more than a couple of 10000 samples". See documentation here.
In your case you have 22343 samples, which can lead to computation/memory problems. That's why it takes so long when you use the default CV. Try reducing your training set to 10000 samples or fewer.
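A minimal sketch of that suggestion, assuming X_train and y_train are the arrays from the question (with y_train already flattened to 1-D as above); the train_size value here is just an illustration:

from sklearn.model_selection import train_test_split

# Subsample the training set down to 10000 examples. stratify keeps the
# class proportions roughly the same as in the full training set.
X_small, _, y_small, _ = train_test_split(
    X_train, y_train, train_size=10000, stratify=y_train, random_state=0
)

clf.fit(X_small, y_small)

Stratifying the subsample is optional, but it avoids accidentally dropping rare classes when you shrink the data.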