How do I prepare one-hot encoding for multiclass logistic regression in scikit-learn?

Asked: 2020-04-02 13:22:07

Tags: scikit-learn one-hot-encoding multiclass-classification

I am trying to classify the 4 classes in the following DataFrame using one-hot encoding in scikit-learn:

          K   T_STAR                 REGIME
15   90.929  0.95524  BoilingInducedBreakup
9   117.483  0.89386                 Splash
16   97.764  1.17972  BoilingInducedBreakup
13   76.917  0.91399  BoilingInducedBreakup
6    44.889  0.95725  BoilingInducedBreakup
20  151.662  0.56287                 Splash
12   67.155  1.22842     ReboundWithBreakup
7   114.747  0.47618                 Splash
17  121.731  0.52956                 Splash
12   29.397  0.88702             Deposition
14   31.733  0.69154             Deposition
13  119.433  0.39422                 Splash
21   97.913  1.21309     ReboundWithBreakup
10  117.544  0.18538                 Splash
27   76.957  0.52879             Deposition
22  155.842  0.17559                 Splash
3    25.620  0.18680             Deposition
30  151.773  1.23027     ReboundWithBreakup
34   91.146  0.90138             Deposition
19   58.095  0.46110             Deposition
14   85.596  0.97520  BoilingInducedBreakup
41   97.783  0.16985             Deposition
0    16.683  0.99355             Deposition
28  122.022  1.22977     ReboundWithBreakup
0    25.570  1.24686     ReboundWithBreakup
3   113.315  0.48886                 Splash
7    31.873  1.30497     ReboundWithBreakup
0   108.488  0.73423                 Splash
2    25.725  1.29953     ReboundWithBreakup
37   97.695  0.50930             Deposition

Here is the same sample in CSV format:

,K,T_STAR,REGIME
15,90.929,0.95524,BoilingInducedBreakup
9,117.483,0.89386,Splash
16,97.764,1.17972,BoilingInducedBreakup
13,76.917,0.91399,BoilingInducedBreakup
6,44.889,0.95725,BoilingInducedBreakup
20,151.662,0.56287,Splash
12,67.155,1.22842,ReboundWithBreakup
7,114.747,0.47618,Splash
17,121.731,0.52956,Splash
12,29.397,0.88702,Deposition
14,31.733,0.69154,Deposition
13,119.433,0.39422,Splash
21,97.913,1.21309,ReboundWithBreakup
10,117.544,0.18538,Splash
27,76.957,0.52879,Deposition
22,155.842,0.17559,Splash
3,25.62,0.1868,Deposition
30,151.773,1.23027,ReboundWithBreakup
34,91.146,0.90138,Deposition
19,58.095,0.4611,Deposition
14,85.596,0.9752,BoilingInducedBreakup
41,97.783,0.16985,Deposition
0,16.683,0.99355,Deposition
28,122.022,1.22977,ReboundWithBreakup
0,25.57,1.24686,ReboundWithBreakup
3,113.315,0.48886,Splash
7,31.873,1.30497,ReboundWithBreakup
0,108.488,0.73423,Splash
2,25.725,1.29953,ReboundWithBreakup
37,97.695,0.5093,Deposition

The feature vectors are two-dimensional, (K, T_STAR). REGIME is the class label; the classes are not ordered in any way.

Here is what I have done so far for the one-hot encoding and scaling:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import OneHotEncoder

num_attribs = ["K", "T_STAR"]  # numeric feature columns
cat_attribs = ["REGIME"]       # categorical column (the class labels)

# Min-max scale the numeric columns and one-hot encode the categorical column
preproc_pipeline = ColumnTransformer([("num", MinMaxScaler(), num_attribs),
                                      ("cat", OneHotEncoder(),  cat_attribs)])
regimes_df_prepared = preproc_pipeline.fit_transform(regimes_df)

However, when I print the first few rows of regimes_df_prepared, I get

array([[0.73836403, 0.19766192, 0.        , 0.        , 0.        ,
        1.        ],
       [0.43284301, 0.65556065, 1.        , 0.        , 0.        ,
        0.        ],
       [0.97076007, 0.93419198, 0.        , 0.        , 1.        ,
        0.        ],
       [0.96996242, 0.34623652, 0.        , 0.        , 0.        ,
        1.        ],
       [0.10915571, 1.        , 0.        , 0.        , 1.        ,
        0.        ]])

So the one-hot encoding seems to have worked, but the problem is that the feature vectors are packed together with the encoded labels in this array.
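
To see exactly what ended up in that array, you can ask the ColumnTransformer for its output column names (just a sanity check; get_feature_names_out assumes scikit-learn 1.0 or later):

# The transformed array holds the 2 scaled features followed by
# 4 one-hot columns built from the REGIME labels (output shown is approximate)
print(regimes_df_prepared.shape)                 # (n_samples, 6)
print(preproc_pipeline.get_feature_names_out())
# ['num__K' 'num__T_STAR' 'cat__REGIME_BoilingInducedBreakup'
#  'cat__REGIME_Deposition' 'cat__REGIME_ReboundWithBreakup' 'cat__REGIME_Splash']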

If I try to train the model like this:

from sklearn.linear_model import LogisticRegression

logreg_ovr = LogisticRegression(solver='lbfgs', max_iter=10000, multi_class='ovr')
logreg_ovr.fit(regimes_df_prepared, regimes_df["REGIME"])
print("Model training score : %.3f" % logreg_ovr.score(regimes_df_prepared, regimes_df["REGIME"]))

The training score is 1.0, which cannot be right (overfitting?).

Now I want the model to predict the class for a (K, T_STAR) pair:

logreg_ovr.predict([[40,0.6]])

but I get an error:

ValueError: X has 2 features per sample; expecting 6

I suspect that the model is treating entire rows of regimes_df_prepared as feature vectors. How can I avoid this?
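
A quick shape check makes the mismatch visible (purely illustrative):

print(regimes_df_prepared.shape)   # (n_samples, 6): 2 scaled features + 4 one-hot REGIME columns
print(logreg_ovr.coef_.shape)      # (4, 6): the fitted model expects 6 features per sample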

1 Answer:

Answer 0 (score: 1)

Target labels should not be one-hot encoded; sklearn provides LabelEncoder for that. In your case, working code for the data preprocessing would look something like this:

from sklearn.preprocessing import LabelEncoder
# Use only the two numeric columns as features and integer-encode the string labels
X, y = regimes_df[num_attribs].values, regimes_df["REGIME"].values
y = LabelEncoder().fit_transform(y)
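
Building on that, here is a minimal sketch for training and predicting (my own illustration, assuming you still want the MinMaxScaler from your original pipeline applied to the two numeric features; make_pipeline bundles the scaling with the classifier, so a raw (K, T_STAR) pair can be passed straight to predict):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Scale the 2 numeric features, then fit the one-vs-rest multiclass model on the encoded labels
model = make_pipeline(MinMaxScaler(), LogisticRegression(max_iter=10000, multi_class='ovr'))
model.fit(X, y)
model.predict([[40, 0.6]])   # returns an integer label produced by the LabelEncoder

If you keep a reference to the fitted LabelEncoder (e.g. le = LabelEncoder(); y = le.fit_transform(y)), then le.inverse_transform(model.predict([[40, 0.6]])) maps the predicted integer back to the regime name.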

I also noticed that you are computing the score on the same data you used to train the model, which naturally gives an overly optimistic (overfitted) result. Please use something like train_test_split or cross_val_score to evaluate the model's performance properly.
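
For example, reusing X, y and the model pipeline from the sketch above (a rough sketch; exact scores depend on your data and on the split):

from sklearn.model_selection import cross_val_score, train_test_split

# Hold out a test set for an honest score
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
model.fit(X_train, y_train)
print("Test score: %.3f" % model.score(X_test, y_test))

# Or estimate performance with 5-fold cross-validation on all of the data
print("CV scores:", cross_val_score(model, X, y, cv=5))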
