Keras: extract the best val loss

Date: 2018-09-04 03:58:12

Tags: python keras deep-learning

The network looks like this:

from keras.layers import Input, GRU, Dense, Dropout
from keras.models import Model
from keras.optimizers import RMSprop
from keras.callbacks import EarlyStopping

inp = Input((1, 12))
gru0 = GRU(200, activation='relu', recurrent_dropout=0.2, return_sequences=True)(inp)
drop0 = Dropout(0.3)(gru0)
gru1 = GRU(200, activation='relu', recurrent_dropout=0.2)(drop0)
drop1 = Dropout(0.3)(gru1)
dense1 = Dense(200, activation='relu')(drop1)
drop2 = Dropout(0.3)(dense1)
dense2 = Dense(200, activation='relu')(drop2)
drop3 = Dropout(0.3)(dense2)
dense3 = Dense(100, activation='relu')(drop3)
drop4 = Dropout(0.3)(dense3)
out = Dense(6, activation='relu')(drop4)

md = Model(inputs=inp, outputs=out)
# md.summary()
opt = RMSprop(lr=0.000005)
md.compile(opt, loss='mean_squared_error')
esp = EarlyStopping(patience=90, verbose=1, mode='auto')
# the second GRU drops the time axis, so the targets are 2-D
md.fit(x_train.reshape((8105, 1, 12)), y_train.reshape((8105, 6)),
       batch_size=2048, epochs=1500, callbacks=[esp], validation_split=0.2)

Output:

    Epoch 549/1500
    6484/6484 [==============================] - 0s 13us/step - loss: 0.0589 - val_loss: 0.0100
    Epoch 550/1500
    6484/6484 [==============================] - 0s 10us/step - loss: 0.0587 - val_loss: 0.0099
    Epoch 551/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0584 - val_loss: 0.0100
    Epoch 552/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0593 - val_loss: 0.0100
    Epoch 553/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0584 - val_loss: 0.0100
    Epoch 554/1500
    6484/6484 [==============================] - 0s 15us/step - loss: 0.0587 - val_loss: 0.0101
    Epoch 555/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0583 - val_loss: 0.0100
    Epoch 556/1500
    6484/6484 [==============================] - 0s 13us/step - loss: 0.0578 - val_loss: 0.0101
    Epoch 557/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0578 - val_loss: 0.0101
    Epoch 558/1500
    6484/6484 [==============================] - 0s 14us/step - loss: 0.0578 - val_loss: 0.0100
    Epoch 559/1500
    6484/6484 [==============================] - 0s 13us/step - loss: 0.0573 - val_loss: 0.0099
    Epoch 560/1500
    6484/6484 [==============================] - 0s 13us/step - loss: 0.0577 - val_loss: 0.0099
    Epoch 561/1500
    6484/6484 [==============================] - 0s 14us/step - loss: 0.0570 - val_loss: 0.0100
    Epoch 562/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0567 - val_loss: 0.0100
    Epoch 563/1500
    6484/6484 [==============================] - 0s 15us/step - loss: 0.0575 - val_loss: 0.0100
    Epoch 00563: early stopping

The best epoch was here:

    Epoch 473/1500
    6484/6484 [==============================] - 0s 12us/step - loss: 0.0698 - val_loss: 0.0096

How can I extract that score, 0.0096, e.g. as the objective value for Bayesian optimization or SMAC? (That is, given md, I want something like md.min_val_loss().) I tried:

    print(md.history.keys())
    Traceback (most recent call last):

      File "<ipython-input-100-d1e5bda1287c>", line 1, in <module>
        print(md.history.keys())

    AttributeError: 'History' object has no attribute 'keys'

    print(md.history['val_loss'])
    Traceback (most recent call last):

      File "<ipython-input-101-37ce8f0572c5>", line 1, in <module>
        print(md.history['val_loss'])

    TypeError: 'History' object is not subscriptable

Obviously neither works. By the way, can I predict with the weights from the best epoch? Something like md.predict(new_data, iteration=473)?

1 Answer:

Answer 0 (score: 3):

Keras lets you save only the best model, which may be exactly what you are looking for.
You simply pass another callback to your .fit() call, like so:

from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint("keras_model.pt", monitor='val_loss', save_best_only=True)
model.fit(...., callbacks=[..., checkpoint])  # attach callback to training!
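To make the effect of save_best_only concrete, here is a small pure-Python sketch of the bookkeeping ModelCheckpoint performs each epoch (the val_losses list is hypothetical, not the run above): only the epoch with the lowest monitored value ends up written to disk.

```python
val_losses = [0.0105, 0.0096, 0.0100]  # hypothetical per-epoch val_loss values

best = float('inf')
saved_epoch = None
for epoch, vl in enumerate(val_losses):
    if vl < best:                       # compare against the best seen so far
        best, saved_epoch = vl, epoch   # "overwrite the checkpoint file"

print(saved_epoch, best)  # 1 0.0096
```

So after training, the file on disk holds the weights from the best epoch, not the last one.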

This way you can reload the model after training finishes and predict from there. As for how to retrieve the values afterwards, have a look at this post on SO. Namely, just create your own History() (I laughed after re-reading that sentence).

from keras.callbacks import History

history = History()
hist = model.fit(..., callbacks=[..., history])
print(hist.history)

To access the loss at a particular epoch, use hist.history['val_loss'][<epoch>].
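From there, the best validation loss the question asks for (and the epoch it occurred in) is a one-liner over that dict; the `history` dict below is a hypothetical stand-in for `hist.history` after training:

```python
# Hypothetical stand-in for hist.history after training
history = {'val_loss': [0.0105, 0.0096, 0.0100],
           'loss':     [0.0750, 0.0698, 0.0690]}

best_val_loss = min(history['val_loss'])               # objective value for Bayesian optimization / SMAC
best_epoch = history['val_loss'].index(best_val_loss)  # 0-based epoch index

print(best_val_loss, best_epoch)  # 0.0096 1
```

best_val_loss is exactly the scalar you would return to a tuner such as SMAC.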