How to return loss plots from a function using Keras and print them as subplots?

Asked: 2019-06-05 18:31:59

Tags: python matplotlib keras subplot

I would like to know how, after training two models (an RNN and an LSTM), I can return the hist object that holds each model's training history from the function below and plot their loss curves as subplots:

def train_model(model_type):
    '''
    This code is parallelised and runs on each process
    It trains a model with different layer sizes (hyperparameters)
    It saves the model and returns the score (error)
    '''
    import time

    import numpy as np
    import pandas as pd
    import multiprocessing
    import matplotlib.pyplot as plt

    from keras.layers import LSTM, SimpleRNN, Dense, Activation
    from keras.models import Sequential
    from keras.callbacks import EarlyStopping, ReduceLROnPlateau
    from keras.layers.normalization import BatchNormalization
    from sklearn.metrics import mean_squared_error  # needed for the MSE scores below

    print(f'Training a model: {model_type}')

    callbacks = [
        EarlyStopping(patience=10, verbose=1),
        ReduceLROnPlateau(factor=0.1, patience=3, min_lr=0.00001, verbose=1),
    ]

    model = Sequential()

    if model_type == 'rnn':
        model.add(SimpleRNN(units=1440, input_shape=(trainX.shape[1], trainX.shape[2])))
    elif model_type == 'lstm':
        model.add(LSTM(units=1440, input_shape=(trainX.shape[1], trainX.shape[2])))

    model.add(Dense(480))
    model.add(BatchNormalization())
    model.add(Activation('tanh'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(
        trainX,
        trainY,
        epochs=50,
        batch_size=20,
        validation_data=(testX, testY),
        verbose=1,
        callbacks=callbacks,
    )

    # predict
    Y_Train_pred = model.predict(trainX)
    Y_Test_pred = model.predict(testX)

    train_MSE = mean_squared_error(trainY, Y_Train_pred)
    test_MSE = mean_squared_error(testY, Y_Test_pred)

    # you can also return values eg. the eval score
    return {'type': model_type, 'train_MSE': train_MSE, 'test_MSE': test_MSE}

I tried the following code:

def train_model(model_type):

    ...
    hist = model.fit(...)

    # Return values eg. the eval score or the training history
    return {..., 'hist': hist}

num_workers = 2
model_types = ['rnn', 'lstm']
# guard in the main module to avoid creating subprocesses recursively.
if __name__ == "__main__":
    pool = multiprocessing.Pool(num_workers, init_worker)

    scores = pool.map(train_model, model_types)
    for s in scores:
        # plot losses for RNN + LSTM
        f, ax = plt.subplots(figsize=(20, 15))
        plt.subplot(1, 2, 1)
        ax = plt.plot(s['hist'].history['loss'], label='Train loss')
        #ax = plt.plot(hist_RNN.history['loss'], label='Train loss')

        plt.subplot(1, 2, 2)
        #ax = plt.plot(hist_LSTM.history['loss'], label='Train loss')
        ax = plt.plot(s['hist'].history['loss'], label='Train loss')

        plt.subplots_adjust(top=0.80, bottom=0.38, left=0.12, right=0.90, hspace=0.37, wspace=0.28)
        plt.savefig('_All_Losses_history_.png')
        plt.show()

    print(scores)

Normally I would give each model's history its own name, e.g. plt.plot(hist_RNN...) and plt.plot(hist_LSTM...), so that I could call/pass them independently. But since the RNN and LSTM models have the same design, I want to avoid duplicating code, and I am looking for an elegant way to return these histories and finally plot them in the appropriate subplot positions. Any help would be appreciated.
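Something along these lines is what I am after. This is only a rough sketch under assumptions I made up: train_model would have to be changed to return hist.history under a 'history' key (the History object itself keeps a reference to the model and may not pickle cleanly when sent back through multiprocessing.Pool, while hist.history is a plain dict of lists), and plot_histories is just a placeholder name:

import matplotlib.pyplot as plt

def plot_histories(scores):
    # scores is the list returned by pool.map(train_model, model_types);
    # each entry is assumed to carry a plain dict of lists under 'history'
    fig, axes = plt.subplots(1, len(scores), figsize=(20, 8))
    for ax, s in zip(axes, scores):
        ax.plot(s['history']['loss'], label='Train loss')
        if 'val_loss' in s['history']:
            ax.plot(s['history']['val_loss'], label='Validation loss')
        ax.set_title(s['type'].upper())
        ax.set_xlabel('Epoch')
        ax.set_ylabel('MSE')
        ax.legend()
    fig.subplots_adjust(wspace=0.28)
    fig.savefig('_All_Losses_history_.png')
    plt.show()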

1 Answer:

Answer 0 (score: -1)

print(history.history.keys())
# plot whichever metrics were tracked; 'acc' / 'val_acc' only exist if an accuracy
# metric was compiled, while this model only records 'loss' and 'val_loss'
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])

You can assign things like history.history['loss'] to other variables and play around with them.
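For example, a small sketch (hist_rnn and hist_lstm below are placeholders for whatever History objects your two fit calls return):

loss_rnn = hist_rnn.history['loss']    # plain Python list, one value per epoch
loss_lstm = hist_lstm.history['loss']

plt.subplot(1, 2, 1)
plt.plot(loss_rnn, label='RNN train loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss_lstm, label='LSTM train loss')
plt.legend()

plt.show()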