Plotting recursive feature elimination (RFE) with cross-validation using a decision tree in scikit-learn

Date: 2013-10-04 08:11:21

Tags: python scikit-learn decision-tree rfe

I want to plot "Recursive feature elimination with cross-validation" using a decision tree and kNN in scikit-learn, as documented here.

I want to implement this with the classifier I am already using, outputting both results at the same time. However, it keeps giving me an error.

Here is the code I adapted for the DT:

from collections import defaultdict

import numpy as np
from sklearn.cross_validation import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import RFECV
from sklearn.metrics import zero_one_loss


from scipy.sparse import csr_matrix

lemma2feat = defaultdict(lambda: defaultdict(float))  # { lemma: {feat : weight}}
lemma2cat = dict()
features = set()


with open("input.csv","rb") as infile:
    for line in infile:
        lemma, feature, weight, tClass = line.split()
        lemma2feat[lemma][feature] = float(weight)
        lemma2cat[lemma] = int(tClass)
        features.add(feature)


sorted_rows = sorted(lemma2feat.keys())
col2index = dict()
for colIdx, col in enumerate(sorted(list(features))):
    col2index[col] = colIdx

dMat = np.zeros((len(sorted_rows), len(col2index.keys())), dtype = float)


# populate matrix
for vIdx, vector in enumerate(sorted_rows):
    for feature in lemma2feat[vector].keys():
        dMat[vIdx][col2index[feature]] = lemma2feat[vector][feature]


# sort targ. results.


res = []
for lem in sorted_rows:
    res.append(lemma2cat[lem])


clf = DecisionTreeClassifier(random_state=0)
rfecv = RFECV(estimator=DecisionTreeClassifier, step1, cv=10, 
              scoring='accuracy')
rfecv.fit(dMat)

print("Optimal number of features : %d" % rfecv.n_features_)

# Plot number of features VS. cross-validation scores
import pylab as pl
pl.figure()
pl.xlabel("Number of features selected")
pl.ylabel("Cross validation score (nb of misclassifications)")
pl.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
pl.show()

print "Acc:"
print cross_val_score(clf, dMat, np.asarray(res), cv=10, scoring = "accuracy")

The error starts at line 56, more specifically at rfecv = RFECV(estimator=DecisionTreeClassifier, step1, cv=10, which raises SyntaxError: non-keyword arg after keyword arg.

Can anyone offer insight into how to correct my code so that this works, at least with the DT?

The answer from ogrisel below seems to fix the argument problem, but it then raises the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/anaconda/python.app/Contents/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
    execfile(filename, namespace)
  File "input.py", line 58, in <module>
    rfecv.fit(col_index, rows)
  File "/anaconda/python.app/Contents/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 321, in fit
    X, y = check_arrays(X, y, sparse_format="csr")
  File "/anaconda/python.app/Contents/lib/python2.7/site-packages/sklearn/utils/validation.py", line 211, in check_arrays
    % (size, n_samples))
ValueError: Found array with dim 267. Expected 16

It seems that RFE is reading the input file format the other way around (my input contains 16 features with 267 targets). How do I feed the dimensions into the code correctly?

Thanks.

1 Answer:

Answer 0 (score: 1)

SyntaxError: non-keyword arg after keyword arg is quite explicit: you cannot pass a non-keyword argument (step1 here) after a keyword argument (such as estimator=DecisionTreeClassifier).

The correct syntax in this case is therefore to drop the estimator= prefix from the first argument:

rfecv = RFECV(DecisionTreeClassifier, step=1, cv=10, 
              scoring='accuracy')

Now you will get a different kind of error: RFECV expects an instance of a model rather than the class as its first argument. To use the default decision tree parameters, just use:

rfecv = RFECV(DecisionTreeClassifier(), step=1, cv=10, 
              scoring='accuracy')
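
For reference, here is a minimal end-to-end sketch (not part of the original answer) of how the selector is typically fitted, assuming, as in the question's code, that dMat is the sample-by-feature matrix of shape (267, 16) and res is the list of 267 target labels:

clf = DecisionTreeClassifier(random_state=0)
# Pass an estimator instance and keyword arguments only.
rfecv = RFECV(estimator=clf, step=1, cv=10, scoring='accuracy')
# fit expects X of shape (n_samples, n_features) and y with n_samples labels,
# so the matrix and the target list built above go in here, not the index helpers.
rfecv.fit(dMat, np.asarray(res))

print("Optimal number of features : %d" % rfecv.n_features_)

The ValueError reported above ("Found array with dim 267. Expected 16") is scikit-learn complaining that X and y have different numbers of samples. With dMat (267 rows, one per lemma) as X and the 267 labels as y, the shapes line up, and rfecv.grid_scores_ can then be plotted exactly as in the question's code.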