How do I tune parameters with hyperopt and compute the loss in TensorFlow Keras?

Asked: 2019-06-24 20:10:03

Tags: python python-3.x tensorflow keras

I am new to deep learning. I was looking for a way to tune the parameters of an MNIST tensorflow-keras model and came across hyperopt, but it seems a bit complicated to understand.

After going through a few kernels on Kaggle, the code below is my attempt at putting it together.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from hyperopt import space_eval, Trials, hp, fmin, STATUS_OK, tpe

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = np.reshape(x_train, (x_train.shape[0], 784)) / 255.0
x_test  = np.reshape(x_test, (x_test.shape[0], 784)) / 255.0

y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test  = tf.keras.utils.to_categorical(y_test, 10)

space = {
    'dense_units': hp.choice('Dense Units', [512, 256, 128, 64, 32]),
    'dropout_p': hp.choice('Dropout Percentage', np.arange(0., 1., .1)),
    'activations': hp.choice('Activations', ['relu', 'sigmoid']),
    'kernel_init': hp.choice('Kernel Init', ['glorot_uniform', 'glorot_normal', 
                                             'he_normal', 'he_uniform']),
    'optimizers': hp.choice('Optimizers', ['Adam', 'RMSprop', 'SGD']),
    'batch_size': hp.choice('Batch Size', [16, 32, 64, 128, 256])
}

def objective(params, epochs=100, verbose=1):
    # architecture
    model = Sequential([
        # layer 1
        Dense(params['dense_units'], activation=params['activations'],
              input_shape=(784,),
              kernel_initializer=params['kernel_init']),
        BatchNormalization(),
        Dropout(params['dropout_p']),
        # layer 2
        Dense(params['dense_units'], activation=params['activations'],
              kernel_initializer=params['kernel_init']),
        BatchNormalization(),
        Dropout(params['dropout_p']),
        # layer 3
        Dense(params['dense_units'], activation=params['activations'],
              kernel_initializer=params['kernel_init']),
        BatchNormalization(),
        Dropout(params['dropout_p']),
        # output
        Dense(10, activation='softmax')])

    # model compilation
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], 
                  optimizer=params['optimizers'])
    # callbacks
    e = EarlyStopping(monitor='val_loss', patience=10, mode='min', verbose=verbose)
    m = ModelCheckpoint('best_weights.hdf5', monitor='val_loss', save_best_only=True, 
                         mode='min', verbose=verbose)    
    # fitting the model
    result = model.fit(x_train, y_train, batch_size=params['batch_size'], epochs=epochs,
                       verbose=verbose, validation_split=0.2, callbacks=[e, m])
    # loss
    val_loss = np.amin(result.history['val_loss'])
    return {'loss': val_loss, 'status': STATUS_OK, 'model': model}

result = fmin(objective, space, algo=tpe.suggest, trials=Trials(), max_evals=5)

I just want to know whether I am computing the loss correctly, and whether everything in the objective function is done right? Is there anything I could improve?

Also, is it possible to view the parameters that were tuned?

0 Answers:

There are no answers yet