Python, sklearn: order of operations in a pipeline using MinMaxScaler and SVC

Date: 2016-04-12 22:03:48

Tags: python machine-learning scikit-learn svm pipeline

I have a dataset on which I want to run sklearn's SVC model. Some feature values span the range [0, 1e+7]. I tried SVC without preprocessing and got either unacceptably long computation times or 0 true-positive predictions. So I'm trying to add a preprocessing step, specifically MinMaxScaler.

My code so far:

from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedShuffleSplit  # sklearn.model_selection in >= 0.18
from sklearn.grid_search import GridSearchCV                 # sklearn.model_selection in >= 0.18

selection_KBest = SelectKBest()
selection_PCA = PCA()
combined_features = FeatureUnion([("pca", selection_PCA), 
                                  ("univ_select", selection_KBest)])
# feature_min and feature_max are defined elsewhere
param_grid = dict(features__pca__n_components = range(feature_min, feature_max),
                  features__univ_select__k = range(feature_min, feature_max))
svm = SVC()            
pipeline = Pipeline([("features", combined_features), 
                     ("scale", MinMaxScaler(feature_range=(0, 1))),
                     ("svm", svm)])
param_grid["svm__C"] = [0.1, 1, 10]
cv = StratifiedShuffleSplit(y = labels_train, 
                            n_iter = 10, 
                            test_size = 0.1, 
                            random_state = 42)
grid_search = GridSearchCV(pipeline,
                           param_grid = param_grid, 
                           verbose = 1,
                           cv = cv)
grid_search.fit(features_train, labels_train)
print("(grid_search.best_estimator_): ", (grid_search.best_estimator_))

My question is specific to these lines:

pipeline = Pipeline([("features", combined_features), 
                     ("scale", MinMaxScaler(feature_range=(0, 1))),
                     ("svm", svm)])

I'd like to know the best logic for my program, in particular the order of features, scale, and svm in the pipeline. Specifically, I can't decide whether features and scale should be swapped from where they are now.

Note 1: I want to use grid_search.best_estimator_ as my classifier model for prediction.

Note 2: My concern is formulating the pipeline correctly, so that at the prediction step the features are selected and scaled in the same way as was done during the training step.
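That consistency is exactly what Pipeline is designed to guarantee: fit stores each transformer's fitted state, and predict replays the same transforms on new data. A minimal self-contained sketch on synthetic data (the uniform features and random labels below are illustrative assumptions, not the question's dataset):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.RandomState(42)
# Features with the huge range from the question, [0, 1e7]
X_train = rng.uniform(0, 1e7, size=(100, 3))
y_train = rng.randint(0, 2, size=100)

pipe = Pipeline([("scale", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)

# At predict time the scaler reuses the min/max learned from X_train,
# so new samples are transformed exactly as the training data was.
X_new = rng.uniform(0, 1e7, size=(5, 3))
print(pipe.predict(X_new))
print(pipe.named_steps["scale"].data_max_)  # per-feature max learned at fit time
```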

Note 3: I noticed that svm is not shown in the grid_search.best_estimator_ output. Does this mean it isn't being called correctly?

Here are some results suggesting the order may matter:

pipeline = Pipeline([("scale", MinMaxScaler(feature_range=(0, 1))),
                     ("features", combined_features), 
                     ("svm", svm)]):

Pipeline(steps=[('scale', MinMaxScaler(copy=True, feature_range=(0, 1)))
('features', FeatureUnion(n_jobs=1, transformer_list=[('pca', PCA(copy=True, 
n_components=11, whiten=False)), ('univ_select', SelectKBest(k=2, 
score_func=<function f_classif at 0x000000001ED61208>))], 
transformer_weights=...f', max_iter=-1, probability=False, 
random_state=None, shrinking=True, tol=0.001, verbose=False))])

Accuracy: 0.86247   Precision: 0.38947  Recall: 0.05550 
F1: 0.09716 F2: 0.06699 Total predictions: 15000    
True positives:  111    False positives:  174   
False negatives: 1889   True negatives: 12826


pipeline = Pipeline([("features", combined_features),
                     ("scale", MinMaxScaler(feature_range=(0, 1))), 
                     ("svm", svm)]):

Pipeline(steps=[('features', FeatureUnion(n_jobs=1,
transformer_list=[('pca', PCA(copy=True, n_components=1, whiten=False)), 
('univ_select', SelectKBest(k=1, score_func=<function f_classif at   
0x000000001ED61208>))],
transformer_weights=None)), ('scale', MinMaxScaler(copy=True, feature_range=
(0,...f', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False))])

Accuracy: 0.86680   Precision: 0.50463  Recall: 0.05450 
F1: 0.09838 F2: 0.06633 Total predictions: 15000    
True positives:  109    False positives:  107   
False negatives: 1891   True negatives: 12893
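One reason the order can matter: PCA picks directions of maximal variance, so an unscaled feature with a huge range dominates its components, while scaling first puts all features on an equal footing. A quick sketch with synthetic data (two independent uniform features are an assumption for illustration, not the question's actual data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
# Feature 0 spans [0, 1e7], feature 1 spans [0, 1], as in the question.
X = np.column_stack([rng.uniform(0, 1e7, 100), rng.uniform(0, 1, 100)])

# PCA on the raw data: the first component is dominated by feature 0
# (its first entry has magnitude close to 1, the second close to 0).
pca_raw = PCA(n_components=1).fit(X)
print(pca_raw.components_)

# PCA after MinMax scaling: both features can now contribute.
pca_scaled = PCA(n_components=1).fit(MinMaxScaler().fit_transform(X))
print(pca_scaled.components_)
```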

Edit 1 (16041310): Note 3 resolved. Use grid_search.best_estimator_.steps to get the complete list of steps.

1 answer:

Answer 0 (score: 1)

GridSearchCV has a refit parameter (True by default), which means the best estimator will be refit on the entire dataset; you then access this estimator either through best_estimator_ or simply by calling predict on the fitted GridSearchCV object.

best_estimator_ will be the full pipeline, and if you call predict on it, you will get the same preprocessing steps that were applied during the training phase.

If you want to print out all the steps, you can do:

print(grid_search.best_estimator_.steps)

# each entry in .steps is a (name, estimator) tuple
for name, step in grid_search.best_estimator_.steps:
    print(type(step))
    print(step.get_params())
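The refit behavior described above can be sketched end to end. This is a minimal illustration on synthetic data, using the modern sklearn.model_selection import path (in the 2016-era versions from the question, GridSearchCV lived in sklearn.grid_search):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in < 0.18
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=42)
pipe = Pipeline([("scale", MinMaxScaler()), ("svm", SVC())])
grid = GridSearchCV(pipe, param_grid={"svm__C": [0.1, 1, 10]}, cv=3)
grid.fit(X, y)

# Two equivalent ways to predict with the refit best pipeline:
p1 = grid.predict(X)                  # delegates to best_estimator_
p2 = grid.best_estimator_.predict(X)  # the refit Pipeline itself
print((p1 == p2).all())
print(grid.best_params_)
```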