Putting together an sklearn pipeline + nested cross-validation for KNN regression

Asked: 2017-07-17 17:53:41

Tags: python scikit-learn pipeline feature-selection hyperparameters

I'm trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that includes:

  • normalize features
  • feature selection (best subset of the 20 numeric features, no specific total)
  • cross-validate the hyperparameter K in the range 1 to 20
  • cross-validate the model
  • use RMSE as the error metric

There are so many different options in scikit-learn that I'm a bit overwhelmed trying to decide which classes I need.

Besides sklearn.neighbors.KNeighborsRegressor, I think I need:

sklearn.pipeline.Pipeline  
sklearn.preprocessing.Normalizer
sklearn.model_selection.GridSearchCV
sklearn.model_selection.cross_val_score

sklearn.feature_selection.SelectKBest
OR
sklearn.feature_selection.SelectFromModel

Would someone please show me what defining this pipeline/workflow could look like? I think it should be something like this:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV

# build regression pipeline
pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# outer cross-validation on model, inner cross-validation on hyperparameters
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10), 
                         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2)

# scores are negative MSE values, so take the absolute value before the square root
rmses = np.sqrt(np.abs(scores))
avg_rmse = np.mean(rmses)
print(avg_rmse)

It seems to run without errors, but a few of my concerns are:

  • Did I perform the nested cross-validation correctly, so that my RMSE estimate is unbiased?
  • If I want the final model selected by best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
  • Is SelectKBest with f_classif the best choice for selecting features for the KNeighborsRegressor model?
  • How can I see:
    • which feature subset was selected as best
    • which K was selected as best

Any help is greatly appreciated!

1 Answer:

Answer 0 (score: 4)

Your code seems fine to me.

As for using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV, I would do the same thing to make sure things run consistently, but the only way to test this is to remove one of the two and see if the results change.
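
If you would rather have the search optimize RMSE directly instead of negative MSE, one option (my own sketch, not part of the original answer) is to wrap an RMSE function with make_scorer; with greater_is_better=False, scikit-learn negates the value internally so that higher scores still mean better models:

import numpy as np
from sklearn.metrics import make_scorer, mean_squared_error

# RMSE as a custom scorer; greater_is_better=False tells scikit-learn
# to negate the value, since a lower RMSE is better
def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

rmse_scorer = make_scorer(rmse, greater_is_better=False)

# pass the same scorer to both loops, e.g.
# GridSearchCV(pipeline, parameters, scoring=rmse_scorer, cv=10)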

SelectKBest is a good approach, but you could also use SelectFromModel or even other methods that you can find here.
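
As a rough sketch of the SelectFromModel route (my own illustration, not from the original answer; the Lasso selector and alpha=0.01 are arbitrary placeholders to tune for your data), the selection step in the pipeline could be swapped like this:

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# keep the features whose Lasso coefficients survive the default threshold;
# alpha=0.01 is an arbitrary placeholder
pipeline = Pipeline([('normalize', Normalizer()),
                     ('select', SelectFromModel(Lasso(alpha=0.01))),
                     ('regressor', KNeighborsRegressor())])

Since the target here is continuous, f_regression may also be worth a look as the score function for SelectKBest; f_classif is designed for classification targets.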

Finally, to get the best parameters and the feature scores, I modified your code a bit, as follows:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# changes here

grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error")

grid.fit(X, y)

# get the best parameters and the best estimator
print("the best estimator is \n {} ".format(grid.best_estimator_))
print("the best parameters are \n {}".format(grid.best_params_))

# get the feature scores, rounded to 2 decimals
pip_steps = grid.best_estimator_.named_steps['kbest']

features_scores = ['%.2f' % elem for elem in pip_steps.scores_ ]
print("the features scores are \n {}".format(features_scores))

feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_]
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues))

# build (feature name, score, p-value) tuples for the selected features

featurelist = ['age', 'weight']

features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i])
                           for i in pip_steps.get_support(indices=True)]

# sort the tuples by score, in descending order
features_selected_tuple = sorted(features_selected_tuple,
                                 key=lambda feature: float(feature[1]), reverse=True)

# print the selected features with their scores and p-values
print('Selected Features, Scores, P-Values')
print(features_selected_tuple)
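
Note that grid.fit(X, y) above refits the search on all of the data, so it tells you the features and K for the final model, not what each outer fold of the nested cross-validation picked. If you want the per-fold winners as well, one possibility (my own sketch, assuming scikit-learn 0.20+ where cross_validate accepts return_estimator=True) is:

from sklearn.model_selection import cross_validate

# keep the fitted GridSearchCV object from every outer fold
outer = cross_validate(GridSearchCV(pipeline, parameters, cv=10,
                                    scoring="neg_mean_squared_error"),
                       X, y, cv=10, scoring="neg_mean_squared_error",
                       return_estimator=True)

# inspect the winning hyperparameters fold by fold
for fold_grid in outer['estimator']:
    print(fold_grid.best_params_)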