How do I save a Keras model's weights when calling fit multiple times?

Asked: 2019-04-07 23:31:25

Tags: python keras

I want to save my model's weights at regular intervals.

I have:

checkpoint = ModelCheckpoint('models/' + self._model_name + '.h5', period=10,
                             monitor='loss', verbose=1, save_best_only=True,
                             save_weights_only=True, mode='auto')
return self._model.fit(X, Y, epochs=50, verbose=0, callbacks=[checkpoint])

I call this function multiple times. It lives in a class, so self._model persists between the different calls, as sketched below.
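Roughly, the structure is the following (a minimal sketch; the Trainer and train names are just placeholders, only self._model, self._model_name and the snippet above come from my code):

from keras.callbacks import ModelCheckpoint

class Trainer:
    def __init__(self, model, model_name):
        self._model = model            # persists between calls to train()
        self._model_name = model_name

    def train(self, X, Y):
        # A fresh ModelCheckpoint is constructed on every call
        checkpoint = ModelCheckpoint('models/' + self._model_name + '.h5', period=10,
                                     monitor='loss', verbose=1, save_best_only=True,
                                     save_weights_only=True, mode='auto')
        return self._model.fit(X, Y, epochs=50, verbose=0, callbacks=[checkpoint])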

I run it once and get this output:

Epoch 00010: loss improved from inf to 9.95919, saving model to models/2019-04-07-23-02-16.h5

Epoch 00020: loss improved from 9.95919 to 7.46431, saving model to models/2019-04-07-23-02-16.h5

Epoch 00030: loss improved from 7.46431 to 5.46186, saving model to models/2019-04-07-23-02-16.h5

Epoch 00040: loss improved from 5.46186 to 4.57174, saving model to models/2019-04-07-23-02-16.h5

Epoch 00050: loss improved from 4.57174 to 3.75795, saving model to models/2019-04-07-23-02-16.h5

But the next time I call it, I get:

Epoch 00010: loss improved from inf to 20.38285, saving model to models/2019-04-07-23-02-16.h5

Epoch 00020: loss improved from 20.38285 to 11.98181, saving model to models/2019-04-07-23-02-16.h5

Epoch 00030: loss did not improve from 11.98181

Epoch 00040: loss improved from 11.98181 to 10.54640, saving model to models/2019-04-07-23-02-16.h5

Epoch 00050: loss improved from 10.54640 to 6.20022, saving model to models/2019-04-07-23-02-16.h5

So why does it go back to inf? Shouldn't it keep 3.75795 as the lowest loss and keep using that as the threshold for checkpointing?

What am I doing wrong?

1 Answer:

Answer 0 (score: 2):

You are initializing the checkpoint inside every method call, so each call creates a brand-new checkpoint whose best value starts at inf. I know this may look like a simplistic solution, but I used a for loop: I needed to evaluate my model with some metrics I had developed, so I saved the weights and then evaluated the model with the weights produced in each iteration.

from keras.callbacks import ModelCheckpoint

# Create the checkpoint once, outside the loop, so its best value persists
checkpointer = ModelCheckpoint(filepath="w1.h5", monitor='val_loss', verbose=1,
                               save_best_only=True, mode='min')
for i in range(0, 3):
    if i > 0:
        # Resume from the weights saved at the end of the previous iteration
        model.load_weights('model' + str(i - 1) + '.h5')

    model.fit(inputX, outputY, validation_data=(inputTestX, outputTest),
              batch_size=None, epochs=3, steps_per_epoch=200,
              validation_steps=200, callbacks=[checkpointer])
    model.save_weights('model' + str(i) + '.h5')
    evaluate(i)

This works and produces logs like the ones below. As you can see, the best value does not go back to inf; training keeps improving from where it left off.

Epoch 1/3
98/98 [==============================] - 14s 145ms/step - loss: 14.2190 - acc: 0.0110 - val_loss: 13.9000 - val_acc: 0.0000e+00
Epoch 00001: val_loss improved from inf to 13.89997, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 2/3
98/98 [==============================] - 5s 46ms/step - loss: 13.8863 - acc: 0.0128 - val_loss: 13.5243 - val_acc: 0.0000e+00
Epoch 00002: val_loss improved from 13.89997 to 13.52433, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 3/3
98/98 [==============================] - 4s 39ms/step - loss: 13.5929 - acc: 0.0135 - val_loss: 13.2898 - val_acc: 0.0000e+00
Epoch 00003: val_loss improved from 13.52433 to 13.28980, saving model to oldData/main/result/GCN-fullgraph-w1.h5
0.6165177671418206
0.6264390563241374

Epoch 1/3
98/98 [==============================] - 6s 58ms/step - loss: 13.2707 - acc: 0.0156 - val_loss: 12.9703 - val_acc: 0.0027
Epoch 00001: val_loss improved from 13.28980 to 12.97031, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 2/3
98/98 [==============================] - 7s 72ms/step - loss: 12.8552 - acc: 0.0175 - val_loss: 12.6153 - val_acc: 0.0035
Epoch 00002: val_loss improved from 12.97031 to 12.61535, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 3/3
98/98 [==============================] - 5s 55ms/step - loss: 12.5612 - acc: 0.0194 - val_loss: 12.2473 - val_acc: 0.0049
Epoch 00003: val_loss improved from 12.61535 to 12.24730, saving model to oldData/main/result/GCN-fullgraph-w1.h5
0.638404356344817
0.6429751200231312

If you instead construct the checkpoint inside the for loop, you get the following output, where the best value starts from inf again on each iteration:

Epoch 1/3
98/98 [==============================] - 14s 145ms/step - loss: 14.2190 - acc: 0.0110 - val_loss: 13.9000 - val_acc: 0.0000e+00
Epoch 00001: val_loss improved from inf to 13.54957, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 2/3
98/98 [==============================] - 5s 46ms/step - loss: 13.8863 - acc: 0.0128 - val_loss: 13.5243 - val_acc: 0.0000e+00
Epoch 00002: val_loss improved from 13.54957 to 13.22187, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 3/3
98/98 [==============================] - 4s 39ms/step - loss: 13.5929 - acc: 0.0135 - val_loss: 13.2898 - val_acc: 0.0000e+00
Epoch 00003: val_loss improved from 13.22187 to 13.105615, saving model to oldData/main/result/GCN-fullgraph-w1.h5
0.6165177671418206
0.6264390563241374

Epoch 1/3
98/98 [==============================] - 6s 58ms/step - loss: 13.2707 - acc: 0.0156 - val_loss: 12.9703 - val_acc: 0.0027
Epoch 00001: val_loss improved from inf to 13.97031, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 2/3
98/98 [==============================] - 7s 72ms/step - loss: 12.8552 - acc: 0.0175 - val_loss: 12.6153 - val_acc: 0.0035
Epoch 00002: val_loss improved from 13.97031 to 12.86802, saving model to oldData/main/result/GCN-fullgraph-w1.h5
Epoch 3/3
98/98 [==============================] - 5s 55ms/step - loss: 12.5612 - acc: 0.0194 - val_loss: 12.2473 - val_acc: 0.0049
Epoch 00003: val_loss improved from 12.86802 to 12.23080, saving model to oldData/main/result/GCN-fullgraph-w1.h5
0.638404356344817
0.6429751200231312
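Applied to the code in the question, the same idea is to construct the ModelCheckpoint once (for example in the class constructor) and reuse that single instance in every fit call, so that its internal best value carries over between calls. A minimal sketch, assuming a hypothetical Trainer class like the one sketched in the question:

from keras.callbacks import ModelCheckpoint

class Trainer:
    def __init__(self, model, model_name):
        self._model = model
        self._model_name = model_name
        # Built once, so its internal best value survives repeated fit calls
        self._checkpoint = ModelCheckpoint('models/' + self._model_name + '.h5',
                                           period=10, monitor='loss', verbose=1,
                                           save_best_only=True, save_weights_only=True,
                                           mode='auto')

    def train(self, X, Y):
        # Reusing the same callback object, the checkpoint no longer resets to inf
        return self._model.fit(X, Y, epochs=50, verbose=0,
                               callbacks=[self._checkpoint])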