Sklearn SVM classifier cross validation takes forever

Asked: 2018-05-07 14:37:38

Tags: machine-learning scikit-learn svm cross-validation

I am trying to compare multiple classifiers on a data set that I have. To get accurate accuracy scores for the classifiers, I am now performing 10-fold cross validation for each of them. This goes smoothly for all of them except for the SVM (with both linear and rbf kernels). The data is loaded like this:

import pandas as pd

# Load the tab-separated annotation file (no header row)
dataset = pd.read_csv("data/distance_annotated_indels.txt", delimiter="\t", header=None)

# Feature columns and the class label column
X = dataset.iloc[:, [5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]].values
y = dataset.iloc[:, 4].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)  # fit the scaler on the training set only
X_test = sc.transform(X_test)        # apply the same scaling to the test set

Cross validation for, e.g., the Random Forest works fine:

import time
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.metrics import classification_report

start = time.time()
classifier = RandomForestClassifier(n_estimators = 100, criterion = 'entropy')
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
cv = ShuffleSplit(n_splits=10, test_size=0.2)  # defined but not passed below; cv=10 uses k-fold splitting
scores = cross_val_score(classifier, X, y, cv=10)
print(classification_report(y_test, y_pred))
print("Random Forest accuracy after 10 fold CV: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2) + ", " + str(round(time.time() - start, 3)) + "s")

Output:

             precision    recall  f1-score   support

          0       0.97      0.95      0.96      3427
          1       0.95      0.97      0.96      3417

avg / total       0.96      0.96      0.96      6844

Random Forest accuracy after 10 fold CV: 0.92 (+/- 0.06), 90.842s

For the SVM, however, this step takes ages (I waited two hours and it still had not finished). The sklearn website does not make me any wiser. Is there something I should be doing differently for the SVM classifiers? The SVM code is as follows:

from sklearn.svm import SVC

start = time.time()
classifier = SVC(kernel = 'linear')
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
scores = cross_val_score(classifier, X, y, cv=10)  # this is the call that never finishes
print(classification_report(y_test, y_pred))
print("Linear SVM accuracy after 10 fold CV: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2) + ", " + str(round(time.time() - start, 3)) + "s")

2 Answers:

Answer 0 (score: 1)

If you have a lot of samples, the computational complexity of the problem gets in the way; see Training complexity of Linear SVM. SVC is based on libsvm, whose fit time scales at least quadratically with the number of samples, so 10-fold cross validation on a large data set can take a very long time.

Consider using the verbose flag of cross_val_score to see more logs about progress. Also, setting n_jobs to a value > 1 (or, if memory allows, even using all CPUs with n_jobs set to -1) can speed up the computation through parallelization. http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html can be useful to evaluate these options.

If that does not work out, I would consider lowering the value of cv (see https://stats.stackexchange.com/questions/27730/choice-of-k-in-k-fold-cross-validation for a discussion on this).
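A minimal sketch of how these suggestions could be combined for the question's linear SVM; the particular values for cv, n_jobs, and verbose are illustrative assumptions, not part of the original answer:

from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

classifier = SVC(kernel='linear')

# Fewer folds (cv=3 instead of 10), all CPU cores (n_jobs=-1),
# and progress logging (verbose=2) -- all values chosen for illustration only.
scores = cross_val_score(classifier, X, y, cv=3, n_jobs=-1, verbose=2)
print("Linear SVM accuracy after 3 fold CV: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))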

Answer 1 (score: 0)

You can also control the running time by changing max_iter. If it is set to -1 it can run forever, depending on the solution space. Set some integer value, say 10000, as a stopping criterion.
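A minimal sketch of this suggestion applied to the question's linear SVM (10000 is the example cap from this answer; if the solver stops at the iteration limit, the model may not have fully converged, so treat the resulting scores with care):

from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Cap the number of solver iterations so a single fit cannot run indefinitely
# (SVC's default max_iter=-1 means no limit).
classifier = SVC(kernel='linear', max_iter=10000)
scores = cross_val_score(classifier, X, y, cv=10)
print("Linear SVM accuracy after 10 fold CV: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))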