How to use cross-validation with a Keras classifier

Date: 2020-10-01 17:36:22

Tags: python pandas tensorflow keras scikit-learn

I am running a Keras classification on imbalanced data. I followed the official example:

https://keras.io/examples/structured_data/imbalanced_classification/

and used the scikit-learn API for cross-validation. I have tried the model with different parameters, but one of the three folds always comes out as 0.

For example:

results [0.99242424 0.99236641 0.        ]

What am I doing wrong? How can I get all three validation recall values on the order of 0.8?

MWE

%%time
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold

import os
import random
SEED = 100
os.environ['PYTHONHASHSEED'] = str(SEED)
np.random.seed(SEED)
random.seed(SEED)
tf.random.set_seed(SEED)

# load the data
ifile = "https://github.com/bhishanpdl/Datasets/blob/master/Projects/Fraud_detection/raw/creditcard.csv.zip?raw=true"
df = pd.read_csv(ifile,compression='zip')

# train test split
target = 'Class'
Xtrain,Xtest,ytrain,ytest = train_test_split(df.drop([target],axis=1),
    df[target],test_size=0.2,stratify=df[target],random_state=SEED)

print(f"Xtrain shape: {Xtrain.shape}")
print(f"ytrain shape: {ytrain.shape}")


# build the model
def build_fn(n_feats):
    model = keras.models.Sequential()
    model.add(keras.layers.Dense(256, activation="relu", input_shape=(n_feats,)))
    model.add(keras.layers.Dense(256, activation="relu"))
    model.add(keras.layers.Dropout(0.3))
    model.add(keras.layers.Dense(256, activation="relu"))
    model.add(keras.layers.Dropout(0.3))

    # last layer is dense 1 for binary sigmoid
    model.add(keras.layers.Dense(1, activation="sigmoid"))

    # compile
    model.compile(loss='binary_crossentropy',
                optimizer=keras.optimizers.Adam(1e-2),
                metrics=['Recall'])

    return model

# fitting the model
n_feats      = Xtrain.shape[-1]
counts = np.bincount(ytrain)
weight_for_0 = 1.0 / counts[0]
weight_for_1 = 1.0 / counts[1]
class_weight = {0: weight_for_0, 1: weight_for_1}
FIT_PARAMS   = {'class_weight' : class_weight}

clf_keras = KerasClassifier(build_fn=build_fn,
                            n_feats=n_feats, # custom argument
                            epochs=30,
                            batch_size=2048,
                            verbose=2)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=SEED)
results = cross_val_score(clf_keras, Xtrain, ytrain,
                          cv=skf,
                          scoring='recall',
                          fit_params = FIT_PARAMS,
                          n_jobs = -1,
                          error_score='raise'
                          )

print('results', results)

Result

Xtrain shape: (227845, 30)
ytrain shape: (227845,)
results [0.99242424 0.99236641 0.        ]
CPU times: user 3.62 s, sys: 117 ms, total: 3.74 s
Wall time: 5min 15s

Question

I get a recall of 0 for the third fold, where I expected a value around 0.8. How can I make sure all three values are 0.8 or higher?

1 answer:

Answer 0 (score: 1)

MilkyWay001,

You have chosen to use the sklearn wrapper for your model. It has its benefits, but it hides the model training process. Instead, I trained the model with a separate validation dataset added. The code is:

clf_1 = KerasClassifier(build_fn=build_fn,
                       n_feats=n_feats)

clf_1.fit(Xtrain, ytrain, class_weight=class_weight,
          validation_data=(Xtest, ytest),
          epochs=30,batch_size=2048,
          verbose=1)
     

In the Model.fit() output, you can clearly see that while the loss metric decreases, recall is not stable. As you can see, this leads to poor CV performance, reflected as a zero in the CV results.
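
To watch the per-epoch behavior described above, you can inspect the History object that fit() returns. A minimal sketch on a tiny synthetic dataset (the shapes and sizes here are made up, not the credit-card data):

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
# Tiny hypothetical imbalanced dataset, just big enough to produce a History
X = rng.normal(size=(200, 5)).astype("float32")
y = (rng.random(200) < 0.1).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(5,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy",
              optimizer=keras.optimizers.Adam(1e-4),
              metrics=["Recall"])

# history.history maps each metric name to a list with one value per epoch
history = model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(sorted(history.history.keys()))
```

Plotting the loss curve against the recall curve (the exact key name depends on the Keras version, typically "recall") makes the instability easy to spot.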

I solved this by reducing the learning rate to 0.0001. That is 100 times lower than yours, and it reaches 98% train recall and 100% (or close to it) test recall in just 10 epochs.

Your code needs only one fix to get stable results: change the LR to a lower value, e.g. 0.0001:

optimizer=keras.optimizers.Adam(1e-4),

You can experiment with LRs in the range below 0.001. For reference, with LR 0.0001 I got:

results [0.99242424 0.97709924 1.        ]
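
If you want to see which fold fails rather than a single aggregate, you can also run the folds by hand and record recall per fold. A minimal sketch, with a LogisticRegression stand-in so it runs quickly on hypothetical data; the KerasClassifier would slot into the same loop:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import StratifiedKFold

SEED = 100
# Hypothetical imbalanced dataset standing in for the credit-card data
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=SEED)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=SEED)
fold_recalls = []
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    # class_weight='balanced' plays the same role as the MWE's class_weight dict
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X[train_idx], y[train_idx])
    recall = recall_score(y[val_idx], clf.predict(X[val_idx]))
    fold_recalls.append(recall)
    print(f"fold {fold}: recall = {recall:.3f}")
```

A per-fold printout like this makes a single failing fold immediately visible, which cross_val_score's final array only reveals after the fact.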

Good luck!

PS: Thanks for providing a compact and complete MWE.