I use a pipeline with grid_search to select the best parameters and then use those parameters to fit the best pipeline (`best_pipe`). However, since the feature selection step (SelectKBest) sits inside the pipeline, I never get a fitted SelectKBest to inspect directly.
I need to know the feature names of the `k` selected features. Any ideas how to retrieve them? Thanks in advance.
from sklearn import (cross_validation, feature_selection, pipeline,
                     preprocessing, linear_model, grid_search)

folds = 5
split = cross_validation.StratifiedKFold(target, n_folds=folds, shuffle=False, random_state=0)

scores = []

for k, (train, test) in enumerate(split):
    X_train, X_test, y_train, y_test = X.ix[train], X.ix[test], y.ix[train], y.ix[test]

    top_feat = feature_selection.SelectKBest()

    pipe = pipeline.Pipeline([('scaler', preprocessing.StandardScaler()),
                              ('feat', top_feat),
                              ('clf', linear_model.LogisticRegression())])

    K = [40, 60, 80, 100]
    C = [1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001]
    penalty = ['l1', 'l2']

    param_grid = [{'feat__k': K,
                   'clf__C': C,
                   'clf__penalty': penalty}]

    scoring = 'precision'

    gs = grid_search.GridSearchCV(estimator=pipe, param_grid=param_grid, scoring=scoring)
    gs.fit(X_train, y_train)

    best_score = gs.best_score_
    scores.append(best_score)

    print "Fold: {} {} {:.4f}".format(k + 1, scoring, best_score)
    print gs.best_params_

best_pipe = pipeline.Pipeline([('scale', preprocessing.StandardScaler()),
                               ('feat', feature_selection.SelectKBest(k=80)),
                               ('clf', linear_model.LogisticRegression(C=.0001, penalty='l2'))])

best_pipe.fit(X_train, y_train)
best_pipe.predict(X_test)
Answer 0 (score: 6)
You can access the feature selection step by name in best_pipe:
features = best_pipe.named_steps['feat']
Then you can call transform() on an array of column indices to get the names of the selected columns:
X.columns[features.transform(np.arange(len(X.columns)))]
The output here will be the 80 column names selected by the pipeline.
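A minimal sketch of the same idea, assuming X is a pandas DataFrame and best_pipe has already been fitted; note that newer scikit-learn versions expect a 2D array in transform(), so the index vector is reshaped here:

import numpy as np

# assumes best_pipe has been fitted and X is a pandas DataFrame
selector = best_pipe.named_steps['feat']

# push the column indices through the fitted selector; the indices that
# survive identify the k selected columns (reshape to 2D for newer sklearn)
selected_idx = selector.transform(np.arange(len(X.columns)).reshape(1, -1))

selected_names = X.columns[selected_idx.ravel()]
print(selected_names)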
Answer 1 (score: 4)
This might be an instructive alternative: I had a need similar to what the OP asked. If you want to get the indices of the k best features directly from GridSearchCV:
finalFeatureIndices = gs.best_estimator_.named_steps["feat"].get_support(indices=True)
With a bit of index manipulation, you can then get finalFeatureList:
finalFeatureList = [initialFeatureList[i] for i in finalFeatureIndices]
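As a rough end-to-end sketch of this approach, assuming gs has been fitted as in the question and initialFeatureList is simply the list of training column names (an assumption, e.g. list(X_train.columns)):

# assumes gs has been fitted and X_train is a pandas DataFrame
initialFeatureList = list(X_train.columns)

# indices of the k columns kept by the SelectKBest step of the best estimator
finalFeatureIndices = gs.best_estimator_.named_steps["feat"].get_support(indices=True)

# map the selected indices back to the original column names
finalFeatureList = [initialFeatureList[i] for i in finalFeatureIndices]
print(finalFeatureList)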
Answer 2 (score: 4)
X.columns[features.get_support()]
This gives me the same result as Jack's answer. You can read more about it in the docs, but get_support returns an array of true/false values indicating whether each column was used. Also, it's worth noting that X must have the same shape as the training data used to fit the feature selector.
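A short usage sketch of this boolean-mask variant, assuming features is the fitted SelectKBest step from best_pipe and X is the DataFrame whose columns the pipeline was trained on:

features = best_pipe.named_steps['feat']

# boolean mask with one entry per original column: True if the column was kept
mask = features.get_support()

# the mask length must match the number of columns the selector was fitted on
assert len(mask) == X.shape[1]

selected_columns = X.columns[mask]
print(selected_columns)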