Logistic regression and cross-validation in Python (with sklearn)

Asked: 2017-02-17 19:22:54

Tags: python machine-learning scikit-learn classification cross-validation

I am trying to solve a classification problem on a given dataset with logistic regression (and that is not the problem). To avoid overfitting, I am trying to implement it through cross-validation (and here is the problem): there is something missing for me to complete the program. My purpose is to determine the accuracy.

But let me be specific. This is what I did:

  1. I split the dataset into a training set and a test set
  2. I defined the logistic regression prediction model to use
  3. I used the cross_val_predict method (from sklearn.cross_validation) to make the prediction
  4. Finally, I measured the accuracy

Here is the code:

    import pandas as pd
    import numpy as np
    import seaborn as sns
    from sklearn.cross_validation import train_test_split
    from sklearn import metrics, cross_validation
    from sklearn.linear_model import LogisticRegression
    
    # read training data in pandas dataframe
    data = pd.read_csv("./dataset.csv", delimiter=';')
    # last column is target, store in array t
    t = data['TARGET']
    # list of features, including target
    features = data.columns
    # item feature matrix in X
    X = data[features[:-1]].as_matrix()
    # remove first column because it is not necessary in the analysis
    X = np.delete(X,0,axis=1)
    # divide in training and test set
    X_train, X_test, t_train, t_test = train_test_split(X, t, test_size=0.2, random_state=0)
    
    # define method
    logreg=LogisticRegression()
    
    # cross validation prediction
    predicted = cross_validation.cross_val_predict(logreg, X_train, t_train, cv=10)
    print(metrics.accuracy_score(t_train, predicted)) 
    

My questions:

• From what I understand, the test set should not be touched until the very end, and cross-validation should be performed on the training set. That is why I passed X_train and t_train to the cross_val_predict method. However, I get an error saying:

      ValueError: Found input variables with inconsistent numbers of samples: [6016, 4812]

  where 6016 is the number of samples in the whole dataset and 4812 is the number of samples in the training set after the split.

• After this, I do not know what to do. I mean: when do X_test and t_test come into play? I do not understand how I should use them after cross-validation, or how to obtain the final accuracy.

Bonus question: I would also like to perform scaling and dimensionality reduction (via feature selection or PCA) inside each cross-validation step. How can I do that? I have seen that defining a pipeline can help with scaling, but I do not know how to apply it to the second problem.

I would really appreciate any help :-)

2 answers:

Answer 0 (score: 3)

Here is working code, tested on an example dataframe. The first problem in your code is that the target array is not an np.array. You also should not have target data in your features. Below I show how to split the training and test data manually using train_test_split, and also how to use the wrapper cross_val_score to split, fit, and score automatically.

import string
import numpy as np
import pandas as pd
from sklearn import linear_model, model_selection

np.random.seed(42)  # seed NumPy's RNG, since the data below comes from np.random
# Create example df with alphabetic col names.
alphabet_cols = list(string.ascii_uppercase)[:26]
df = pd.DataFrame(np.random.randint(1000, size=(1000, 26)),
                  columns=alphabet_cols)
df['Target'] = df['A']
df.drop(['A'], axis=1, inplace=True)
print(df.head())
y = df.Target.values  # df['Target'] is not an np.array.
feature_cols = [i for i in list(df.columns) if i != 'Target']
X = df.loc[:, feature_cols].values
# Illustrated here for manual splitting of training and testing data.
X_train, X_test, y_train, y_test = \
    model_selection.train_test_split(X, y, test_size=0.2, random_state=0)

# Initialize model.
logreg = linear_model.LinearRegression()

# Use cross_val_score to automatically split, fit, and score.
scores = model_selection.cross_val_score(logreg, X, y, cv=10)
print(scores)
print('average score: {}'.format(scores.mean()))

Output

     B    C    D    E    F    G    H    I    J    K   ...    Target
0   20   33  451    0  420  657  954  156  200  935   ...    253
1  427  533  801  183  894  822  303  623  455  668   ...    421
2  148  681  339  450  376  482  834   90   82  684   ...    903
3  289  612  472  105  515  845  752  389  532  306   ...    639
4  556  103  132  823  149  974  161  632  153  782   ...    347

[5 rows x 26 columns]
[-0.0367 -0.0874 -0.0094 -0.0469 -0.0279 -0.0694 -0.1002 -0.0399  0.0328
 -0.0409]
average score: -0.04258093018969249
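
Note that the scores printed above are negative because linear_model.LinearRegression was used, so cross_val_score fell back to the regressor's default R² scorer. For the actual classification problem in the question, a LogisticRegression estimator makes cross_val_score report accuracies in [0, 1]. A minimal sketch on synthetic data (the make_classification call and its parameters are illustrative, not part of the original answer):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection

# Illustrative binary classification data.
X_demo, y_demo = make_classification(n_samples=1000, n_features=25, random_state=42)

clf = LogisticRegression()
# With a classifier, cross_val_score defaults to accuracy.
acc = model_selection.cross_val_score(clf, X_demo, y_demo, cv=10)
print(acc)
print('average accuracy: {}'.format(acc.mean()))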


Answer 1 (score: 3)

Please have a look at the documentation of cross-validation at scikit-learn to understand it more.

You are also using cross_val_predict incorrectly. What it does is internally call the cv you supplied (cv=10) to split the supplied data (i.e. X_train, t_train in your case) again into train and test, fit the estimator on the train part, and predict on the data that is still in the test part.

Now, about the use of X_test and t_test: you should first fit your estimator on the train data (cross_val_predict will not fit it), then use it to predict on the test data, and then calculate the accuracy.

A simple code snippet to describe the above (borrowing from your code) (please read the comments and ask if anything is unclear):

# item feature matrix in X
X = data[features[:-1]].as_matrix()
# remove first column because it is not necessary in the analysis
X = np.delete(X,0,axis=1)
# divide in training and test set
X_train, X_test, t_train, t_test = train_test_split(X, t, test_size=0.2, random_state=0)

# Until here everything is good
# You keep away 20% of data for testing (test_size=0.2)
# This test data should be unseen by any of the below methods

# define method
logreg=LogisticRegression()

# Ideally what you are doing here should be correct, unless you did something wrong in the dataframe operations (which apparently has been solved)
# cross validation prediction
# This cross validation prediction will produce the predicted values for 't_train'
predicted = cross_validation.cross_val_predict(logreg, X_train, t_train, cv=10)
# internal working of cross_val_predict:
  #1. Get the data and estimator (logreg, X_train, t_train)
  #2. From here on, we will use X_train as X_cv and t_train as t_cv (because cross_val_predict doesn't know that this is our training data)
  #3. Split X_cv, t_cv into X_cv_train, X_cv_test, t_cv_train, t_cv_test by using its internal cv
  #4. Use X_cv_train, t_cv_train for fitting 'logreg' 
  #5. Predict on X_cv_test (No use of t_cv_test)
  #6. Repeat steps 3 to 5 repeatedly for cv=10 iterations, each time using different data for training and different data for testing.

# So here you are correctly comparing 'predicted' and 't_train'
print(metrics.accuracy_score(t_train, predicted)) 

# The above metrics will show you how our estimator 'logreg' works on 'X_train' data. If the accuracies are very high it may be because of overfitting.

# Now what to do about the X_test and t_test above.
# Actually, the metric that matters in the end is the one computed on X_test and t_test
# If you are satisfied by the accuracies on the training data then you should fit the entire training data to the estimator and then predict on X_test

logreg.fit(X_train, t_train)
t_pred = logreg.predict(X_test)

# Here is the final accuracy
print(metrics.accuracy_score(t_test, t_pred)) 
# If this accuracy is good, then your model is good.
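
To make steps 1 to 6 in the comments above concrete, here is a rough manual equivalent of cross_val_predict built with KFold. This is only a sketch: it assumes X_train is a NumPy array, and the real cross_val_predict differs in details (for classifiers it uses stratified folds, for example):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# Manual sketch of cross_val_predict(logreg, X_train, t_train, cv=10).
t_arr = np.asarray(t_train)
manual_predicted = np.empty_like(t_arr)

kf = KFold(n_splits=10)
for cv_train_idx, cv_test_idx in kf.split(X_train):
    fold_model = LogisticRegression()
    # Steps 3-4: fit on the nine training folds.
    fold_model.fit(X_train[cv_train_idx], t_arr[cv_train_idx])
    # Step 5: predict the held-out fold, so every sample is predicted exactly once.
    manual_predicted[cv_test_idx] = fold_model.predict(X_train[cv_test_idx])

# Same comparison as in the snippet above.
print(metrics.accuracy_score(t_arr, manual_predicted))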

If you have little data, or do not want to split the data into training and testing sets, then you should use the approach suggested by @fuzzyhedge:
# Use cross_val_score on all your data
scores = model_selection.cross_val_score(logreg, X, y, cv=10)

# 'cross_val_score' works almost the same as steps 1 to 4 above
  #5. t_cv_pred = logreg.predict(X_cv_test) and calculate accuracy with t_cv_test. 
  #6. Repeat steps 1 to 5 for cv_iterations = 10
  #7. Return array of accuracies calculated in step 5.

# Find out average of returned accuracies to see the model performance
scores = scores.mean()

Note: in addition, cross-validation is best used together with grid search to find the estimator parameters that work best for the given data. For example, LogisticRegression defines many parameters. But if you use

logreg = LogisticRegression() 

the model will be initialized with the default parameters only. It may be that different parameter values, such as

logreg = LogisticRegression(penalty='l1', solver='liblinear') 

work better with your data. This search for better parameters is grid search, as in the sketch below.
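
For example, a small grid search over logistic regression parameters could look like this (the parameter grid is only an illustration; pick values that make sense for your data):

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

# Illustrative grid of candidate parameter values.
# The liblinear solver supports both 'l1' and 'l2' penalties.
param_grid = {'penalty': ['l1', 'l2'], 'C': [0.01, 0.1, 1, 10]}

# GridSearchCV cross-validates every combination on the training data
# and keeps the best one.
grid = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid, cv=10)
grid.fit(X_train, t_train)
print(grid.best_params_)
print(grid.best_score_)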

Now for the second part, about scaling, dimensionality reduction, etc. using pipelines: you can refer to the documentation of pipeline and to the example sketched below.
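
A sketch of how scaling and PCA can be made part of every cross-validation step with a Pipeline, reusing X_train and t_train from the question (the step names and the number of PCA components are arbitrary choices for illustration, not a recommendation):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection

# Inside cross_val_score, each fold re-fits the scaler and the PCA on that
# fold's training part only, so nothing leaks from the validation part.
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=10)),  # illustrative number of components
    ('logreg', LogisticRegression()),
])

scores = model_selection.cross_val_score(pipe, X_train, t_train, cv=10)
print(scores.mean())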

Feel free to contact me if you need any help.