Using a Sequential model as the estimator in the learning_curve function

Date: 2018-07-11 18:20:23

Tags: python scikit-learn keras rnn sequential

**Update: I have updated the code and removed the callbacks. I am currently working through the error: "TypeError: can't pickle _thread.lock objects".**

Using Python, I built an RNN with Keras via the following function:

from keras.models import Sequential
from keras.layers import Masking, GRU, TimeDistributed, Dense
from keras.optimizers import RMSprop

# note: weighted_loss (used in compile below) is a custom loss function defined elsewhere
def create_rnn_stateless(X, y, seq_length=20, batch_size=25):

    nscans = X.shape[0]
    ninputs = X.shape[2]
    if len(y.shape) > 2:
        noutputs = y.shape[2]
    else:
        noutputs = y.shape[1]

    n_hidden = 32

    model = Sequential()
    model.add(Masking(mask_value=-9., input_shape=(seq_length, ninputs)))
    model.add(GRU(n_hidden, batch_size=batch_size, input_shape=(seq_length, ninputs),
                  return_sequences=True, stateful=False, activation='tanh', recurrent_activation='tanh',
                  bias_initializer='ones',
                  ))
    model.add(TimeDistributed(Dense(noutputs, use_bias=True, activation='sigmoid')))
    model.compile(loss=weighted_loss, optimizer=RMSprop(lr=0.0023806748), metrics=None)

    model.summary()

    return model

Then I call the function and fit the model:

rnn_model = create_rnn_stateless(X, y, seq_length=seq_length, batch_size=batch_size)

history = rnn_model.fit(X_train, y_train, validation_data=(X_test, y_test), 
                        batch_size=batch_size, epochs=n_epoch, verbose=1, shuffle=True,
                        callbacks=None)

Train on 19200 samples, validate on 4920 samples
Epoch 1/50
19200/19200 [==============================] - 51s - loss: 0.0360 - val_loss: 0.0240
....
Epoch 50/50
19200/19200 [==============================] - 39s - loss: 0.0014 - val_loss: 0.0166

losses = history.history['loss'][:]
val_losses = history.history['val_loss'][:]
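
Those per-epoch values give the usual loss-versus-epoch learning curve; a minimal sketch of how I plot them (assuming matplotlib, nothing beyond the history values above):

import matplotlib.pyplot as plt

# plot the conventional per-epoch learning curve from the History object
plt.plot(losses, label='training loss')
plt.plot(val_losses, label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()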

Everything runs fine and the results are decent. I have now been asked to determine whether additional training data would help improve the model, which is where I started looking into learning curves. For a Sequential model like this it seems more common to look at the learning curve as a function of epochs; however, I need to study how the loss changes as the number of training instances grows. scikit-learn has the built-in function learning_curve, which I thought should help. Where I get stuck is how to use a Sequential model as the estimator in learning_curve, and specifically how to define the scoring method or use the sklearn wrapper API. I have been trying to adapt the code example from the Plotting Learning Curves documentation:

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import make_scorer
from keras.wrappers.scikit_learn import KerasRegressor


def plot_learning_curve(estimator, title, X, y, scoring, ylim=None, cv=None, 
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    """
    Generate a simple plot of the test and training learning curve.

    Parameters
    ----------
    estimator : object type that implements the "fit" and "predict" methods
        An object of that type which is cloned for each validation.

    title : string
        Title for the chart.

    X : array-like, shape (n_samples, n_features)
        Training vector, where n_samples is the number of samples and
        n_features is the number of features.

    y : array-like, shape (n_samples) or (n_samples, n_features), optional
        Target relative to X for classification or regression;
        None for unsupervised learning.

    ylim : tuple, shape (ymin, ymax), optional
        Defines the minimum and maximum y-values plotted.

    cv : int, cross-validation generator or an iterable, optional
        Determines the cross-validation splitting strategy.
        Possible inputs for cv are:
          - None, to use the default 3-fold cross-validation,
          - integer, to specify the number of folds.
          - An object to be used as a cross-validation generator.
          - An iterable yielding train/test splits.

        For integer/None inputs, if ``y`` is binary or multiclass,
        :class:`StratifiedKFold` is used. If the estimator is not a classifier
        or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.

        Refer to the :ref:`User Guide <cross_validation>` for the various
        cross-validators that can be used here.

    scoring : string, callable or None, optional, default: None
        A string (see model evaluation documentation) or a scorer callable object / function with 
        signature scorer(estimator, X, y).

    n_jobs : integer, optional
        Number of jobs to run in parallel (default 1).
    """
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, scoring=scoring, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

title = "Learning Curves"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = KerasRegressor(
    build_fn=rnn_model, 
    batch_size=batch_size, 
    nb_epoch=n_epoch)
scoring = make_scorer(rnn_model.evaluate(x=X, y=y), greater_is_better=False)
plot_learning_curve(estimator, title, X, y, scoring, ylim=(0.7, 1.01), cv=cv, n_jobs=4)

plt.show()

This produces the following error:

TypeError: can't pickle _thread.lock objects

I have come across a lot of discussion about this error but have not been able to resolve it, which is forcing me to rethink how I am trying to create the learning curves. I am looking for guidance on how to create them, in particular how to use a Sequential model as the estimator and how to create a scoring function to pass to learning_curve.
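
What I am leaning towards now, and would appreciate a sanity check on, is roughly the sketch below: hand KerasRegressor the builder function itself instead of the already-compiled model (it is the compiled model that carries the un-picklable _thread.lock objects), and let learning_curve fall back on the wrapper's own score method rather than wrapping the result of evaluate() in make_scorer. The helper build_rnn, the n_jobs=1 setting, and the choice of train_sizes are my own untested guesses:

import numpy as np
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import learning_curve, ShuffleSplit

# untested sketch: build_fn is a callable that returns a freshly compiled model,
# so scikit-learn can clone the estimator instead of pickling a live model
def build_rnn():
    return create_rnn_stateless(X, y, seq_length=seq_length, batch_size=batch_size)

estimator = KerasRegressor(build_fn=build_rnn, batch_size=batch_size,
                           epochs=n_epoch, verbose=0)

cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)

# scoring=None falls back to KerasRegressor.score (the negative of the compiled
# loss); n_jobs=1 keeps everything in one process so nothing has to be pickled
train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, y, cv=cv, scoring=None, n_jobs=1,
    train_sizes=np.linspace(0.1, 1.0, 5))

Is that the intended way to use a Sequential model with learning_curve, or is a hand-rolled loop over increasing training-set sizes the more robust approach here?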

Thanks in advance.

0 Answers