Problem accessing the model output in Keras

Date: 2018-08-11 17:00:18

Tags: python keras

I have created a function in Keras (function may not be the right word) that builds a deep neural network. It looks like this:

import keras
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation, Dropout
from keras.regularizers import l2

# Note: this function relies on the globals l2_reg, momentum and params
# (activation, dropout, last_activation, losses, lr, batch_size, epochs)
# being defined beforehand.
def create_model(x_train, y_train, x_val, y_val, layers=[20, 20, 4],
                 kernel_init='he_uniform', bias_init='he_uniform',
                 batch_norm=True, dropout=True):

    model = Sequential()

    # layer 1
    model.add(Dense(layers[0], input_dim=x_train.shape[1],
                    kernel_regularizer=l2(l2_reg),  # formerly W_regularizer (Keras 1 name)
                    kernel_initializer=kernel_init,
                    bias_initializer=bias_init))

    if batch_norm == True:
        model.add(BatchNormalization(axis=-1, momentum=momentum, center=True))

    model.add(Activation(params['activation']))

    if dropout == True:
        model.add(Dropout(params['dropout']))

    # layer 2+    
    for layer in range(0, len(layers)-1):

        model.add(Dense(layers[layer+1], kernel_regularizer=l2(l2_reg),
                        kernel_initializer=kernel_init,
                        bias_initializer=bias_init))

        if batch_norm == True:
            model.add(BatchNormalization(axis=-1, momentum=momentum, center=True))

        model.add(Activation(params['activation']))

        if dropout == True:
            model.add(Dropout(params['dropout']))

    # Last layer
    model.add(Dense(layers[-1], activation=params['last_activation'],
                    kernel_initializer=kernel_init,
                    bias_initializer=bias_init))

    model.compile(loss=params['losses'],
                  optimizer=keras.optimizers.Adam(lr=params['lr']),
                  metrics=['accuracy'])

    history = model.fit(x_train, y_train, 
                        validation_data=[x_val, y_val],
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=1)

    history_dict = history.history

    model_output = {'model':model}

    return model_output
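
For completeness: the function above reads a few globals that are not shown in the question (params, plus l2_reg and momentum, which only appear further down). A hypothetical example of what they might look like; the keys are taken from the function body, but the values are placeholders, not the OP's actual settings:

l2_reg = 0.4
momentum = 0.99

# Hypothetical params dict; only the keys are known from the function body.
params = {
    'activation': 'relu',
    'dropout': 0.3,
    'last_activation': 'softmax',
    'losses': 'categorical_crossentropy',
    'lr': 0.001,
    'batch_size': 32,
    'epochs': 500,
}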

Now, if I run this code without wrapping it in a function (i.e. not under the def create_model above), I can do things like

model.summary()

or I can write hist = model.fit(...) and then use hist.history to get the losses and so on.

However, if I run the code above, I cannot do any of these things, even though I put the values I need after the return.

I have tried returning different things, for example:

return model
return model, history
return {'model':model, 'history':history}

The output I get from the code above (after running some other code first) is:

l2_reg = 0.4
momentum = 0.99
seed = 5

create_model(x_train, y_train, x_val, y_val, layers=[30, 20, 4], 
                 kernel_init ='he_uniform', bias_init ='he_uniform',
                 batch_norm=True, dropout=True)


Epoch 499/500
614/614 [==============================] - 0s 135us/step - loss: 0.9233 - acc: 0.6515 - val_loss: 1.3652 - val_acc: 0.4470
Epoch 500/500
614/614 [==============================] - 0s 135us/step - loss: 0.9401 - acc: 0.6564 - val_loss: 1.3660 - val_acc: 0.4470
{'model': <keras.engine.sequential.Sequential at 0x7f4f3e140b00>}

But accessing the model output is still a problem:

model_output['model'].summary()

Output:

NameError                                 Traceback (most recent call last)
<ipython-input-24-50b8bc82940b> in <module>()
----> 1 model_output['model'].summary()

NameError: name 'model_output' is not defined
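
The NameError occurs because model_output is a local variable inside create_model; returning it does not create a variable of that name in the calling scope, and the {'model': ...} line above is just the notebook echoing the discarded return value. A minimal sketch of the same issue, unrelated to Keras:

def make_thing():
    thing = {'value': 42}   # local to the function
    return thing

make_thing()                # return value is discarded; 'thing' does not exist here
# print(thing)              # -> NameError, just like model_output above

result = make_thing()       # capture the return value instead
print(result['value'])      # 42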

Edit / solution: thanks to Joel Berkeley

l2_reg = 0.4
momentum = 0.99
seed = 5

m = create_model(x_train, y_train, x_val, y_val, layers=[30, 20, 4], 
                 kernel_init ='he_uniform', bias_init ='he_uniform',
                 batch_norm=True, dropout=True)


Epoch 499/500
614/614 [==============================] - 0s 135us/step - loss: 0.9233 - acc: 0.6515 - val_loss: 1.3652 - val_acc: 0.4470
Epoch 500/500
614/614 [==============================] - 0s 135us/step - loss: 0.9401 - acc: 0.6564 - val_loss: 1.3660 - val_acc: 0.4470


m['model'].summary()

Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 30)                1410      
_________________________________________________________________
batch_normalization_1 (Batch (None, 30)                120       
_________________________________________________________________
activation_1 (Activation)    (None, 30)                0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 30)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 20)                620       
_________________________________________________________________
batch_normalization_2 (Batch (None, 20)                80        
_________________________________________________________________
activation_2 (Activation)    (None, 20)                0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 4)                 84        
_________________________________________________________________
batch_normalization_3 (Batch (None, 4)                 16        
_________________________________________________________________
activation_3 (Activation)    (None, 4)                 0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 4)                 0         
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 20        
=================================================================
Total params: 2,350
Trainable params: 2,242
Non-trainable params: 108
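
Since the question also mentions wanting hist.history, the same pattern works if create_model is extended to return the History object as well. A minimal sketch, assuming the dict at the end of the function is changed like this (not what the original code does):

# Inside create_model, return both objects:
model_output = {'model': model, 'history': history}
return model_output

# Caller side:
out = create_model(x_train, y_train, x_val, y_val, layers=[30, 20, 4])
out['model'].summary()                           # architecture, as above
train_loss = out['history'].history['loss']      # per-epoch training loss
val_loss = out['history'].history['val_loss']    # per-epoch validation loss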

0 Answers