Why does over-sampling in a pipeline explode the number of model coefficients?

Asked: 2019-02-04 11:49:44

Tags: python machine-learning scikit-learn logistic-regression

I have a model pipeline like this:

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# define preprocessor
preprocess = make_column_transformer(
    (StandardScaler(), ['attr1', 'attr2', 'attr3', 'attr4', 'attr5', 
                        'attr6', 'attr7', 'attr8', 'attr9']),
    (OneHotEncoder(categories='auto'), ['attrcat1', 'attrcat2'])
)

# define train and test datasets
X_train, X_test, y_train, y_test = \
    train_test_split(features, target, test_size=0.3, random_state=0)

When I run the pipeline without over-sampling, I get:

# don't do over-sampling in this case
os_X_train = X_train
os_y_train = y_train

print('Training data is type %s and shape %s' % (type(os_X_train), os_X_train.shape))
logreg = LogisticRegression(penalty='l2',solver='lbfgs',max_iter=1000)
model = make_pipeline(preprocess, logreg)
model.fit(os_X_train, np.ravel(os_y_train))
print("The coefficients shape is: %s" % logreg.coef_.shape)
print("Model coefficients: ", logreg.intercept_, logreg.coef_)
print("Logistic Regression score: %f" % model.score(X_test, y_test))

The output is:

Training data is type <class 'pandas.core.frame.DataFrame'> and shape (87145, 11)
The coefficients shape is: (1, 47)
Model coefficients:  [-7.51822124] [[ 0.10011794  0.10313989 ... -0.14138371  0.01612046  0.12064405]]
Logistic Regression score: 0.999116

That is, for a training set of 87,145 samples I get 47 model coefficients, which makes sense given the preprocessing defined above: OneHotEncoder is applied to attrcat1 and attrcat2, which have 31 and 7 categories respectively, adding 38 columns; together with the 9 numeric columns I already had, that gives 47 features in total.
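The expanded feature count can be verified by fitting the ColumnTransformer on its own and checking the transformed width. A minimal sketch with a toy frame (the column names and sizes here are made up; the same check applies to the question's attr/attrcat columns):

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Toy frame: 2 numeric columns plus one categorical with 3 levels
df = pd.DataFrame({
    'num1': [1.0, 2.0, 3.0, 4.0],
    'num2': [0.5, 1.5, 2.5, 3.5],
    'cat1': [0, 1, 2, 1],
})

ct = make_column_transformer(
    (StandardScaler(), ['num1', 'num2']),
    (OneHotEncoder(categories='auto'), ['cat1']),
)

# 2 scaled numeric columns + 3 one-hot columns = 5 output features
n_features = ct.fit_transform(df).shape[1]
print(n_features)  # 5
```

In the question's setup the same check would print 47: 9 numeric columns plus 31 + 7 one-hot columns.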

Now, if I do the same thing but this time over-sample the training data with SMOTE, like this:

from imblearn.over_sampling import SMOTE
# balance the classes by oversampling the training data
os = SMOTE(random_state=0)
os_X_train, os_y_train = os.fit_sample(X_train, y_train.ravel())
os_X_train = pd.DataFrame(data=os_X_train, columns=X_train.columns)
os_y_train = pd.DataFrame(data=os_y_train, columns=['response'])

The output becomes:

Training data is type <class 'pandas.core.frame.DataFrame'> and shape (174146, 11)
The coefficients shape is: (1, 153024)
Model coefficients:  [12.02830778] [[ 0.42926969  0.14192505 -1.89354062 ...  0.008847    0.00884372 -8.15123962]]
Logistic Regression score: 0.997938

In this case I get roughly twice the training sample size, which balances the response classes as intended, but my logistic regression model explodes to 153,024 coefficients. That makes no sense... any idea why?

1 Answer:

Answer 0 (score: 0)

OK, I found the culprit. The problem is that SMOTE converts all feature columns to floats, including the two categorical ones. So when the ColumnTransformer applies OneHotEncoder to those float columns, the number of columns blows up toward the number of samples: every distinct float value is treated as its own category.
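A small sketch illustrates the effect: OneHotEncoder creates one output column per distinct value, so the arbitrary floats produced by SMOTE's interpolation inflate the category count (toy data, not from the question):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Integer-coded categorical column: 3 distinct codes -> 3 one-hot columns
cat_int = np.array([[0], [1], [2], [1], [0]])
n_int = OneHotEncoder(categories='auto').fit_transform(cat_int).shape[1]
print(n_int)  # 3

# After SMOTE the same column holds interpolated floats; every distinct
# float value becomes its own category
cat_float = np.array([[0.0], [0.37], [1.84], [1.0], [0.0]])
n_float = OneHotEncoder(categories='auto').fit_transform(cat_float).shape[1]
print(n_float)  # 4
```

With ~174k synthetic rows, nearly every interpolated value is unique, which is how the coefficient count climbed toward the sample count.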

The solution is simply to convert those categorical columns back to int before running the pipeline:

# balance the classes by over-sampling the training data
os = SMOTE(random_state=0)
os_X_train, os_y_train = os.fit_sample(X_train, y_train.ravel())
os_X_train = pd.DataFrame(data=os_X_train, columns=X_train.columns)
# critically important to have the categorical variables from float back to int
os_X_train['attrcat1'] = os_X_train['attrcat1'].astype(int)
os_X_train['attrcat2'] = os_X_train['attrcat2'].astype(int)
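One caveat with this fix (my observation, not part of the original answer): `astype(int)` truncates toward zero, while SMOTE's interpolated codes can land anywhere between two original codes, so rounding first may map a synthetic sample to the nearest real category instead of the lower one:

```python
import numpy as np
import pandas as pd

# A synthetic category code of 1.7 sits between original codes 1 and 2
s = pd.Series([0.0, 1.7, 2.0])

truncated = s.astype(int).tolist()          # truncation: 1.7 -> 1
rounded = np.rint(s).astype(int).tolist()   # rounding:   1.7 -> 2

print(truncated)  # [0, 1, 2]
print(rounded)    # [0, 2, 2]
```

Alternatively, imbalanced-learn provides SMOTENC, a SMOTE variant designed for datasets with categorical features, which avoids interpolating category codes in the first place.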