I have trained a NN with tf.keras and saved the whole model with ModelCheckpoint to an .h5 file. However, when I restore it with models.load_model and then train it again with the fit method, it only returns a History object and does nothing else. Here is a minimal example of the training:
import numpy as np
import tensorflow as tf
# Creates dummy data
train_x = np.random.randint(10,size=40).reshape(-1,1)
train_y = np.random.randint(2,size=40).reshape(-1,1)
train_set = (train_x,train_y)
val_x = np.random.randint(10,size=20).reshape(-1,1)
val_y = np.random.randint(2,size=20).reshape(-1,1)
val_set = (val_x,val_y)
# Set Learning Rate Decay
import math
def step_decay(epoch):
    print('--- Epoch:', epoch)
    print(tf.keras.callbacks.History())
    init_lr = 0.001
    drop = 0.9
    epochs_drop = 1.0
    lr = init_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay)
# Saves the whole model
cp_callback = tf.keras.callbacks.ModelCheckpoint('model.h5',
                                                 save_weights_only=False,
                                                 verbose=True)
# Creates the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1,activation='relu',use_bias=False,input_dim=(1)))
model.add(tf.keras.layers.Dense(100,activation='relu',use_bias=False))
model.add(tf.keras.layers.Dense(1,activation='relu',use_bias=False))
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
# Train the model
model.fit(x=train_set[0], y=train_set[1], epochs=2, steps_per_epoch=40,
          validation_data=val_set, validation_steps=20,
          callbacks=[lr_callback, cp_callback])
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
My current code for loading the model again is the following:
import numpy as np
import tensorflow as tf
# Creates dummy data
train_x = np.random.randint(10,size=40).reshape(-1,1)
train_y = np.random.randint(2,size=40).reshape(-1,1)
train_set = (train_x,train_y)
val_x = np.random.randint(10,size=20).reshape(-1,1)
val_y = np.random.randint(2,size=20).reshape(-1,1)
val_set = (val_x,val_y)
# Set Learning Rate Decay
import math
def step_decay(epoch):
    print('--- Epoch:', epoch)
    print(tf.keras.callbacks.History())
    init_lr = 0.001
    drop = 0.9
    epochs_drop = 1.0
    lr = init_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay)
# Saves the whole model
cp_callback = tf.keras.callbacks.ModelCheckpoint('model.h5',
                                                 save_weights_only=False,
                                                 verbose=True)
# Load model
model = tf.keras.models.load_model('model.h5')
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
model.fit(x=train_set[0], y=train_set[1], epochs=2, steps_per_epoch=40,
          validation_data=val_set, validation_steps=20, initial_epoch=3,
          callbacks=[lr_callback, cp_callback])
As you can observe when running it, the learning rate is restored, and therefore the whole model should be restored as well, or at least that is what I believe. However, after running model.fit(...) it does nothing except return <tensorflow.python.keras.callbacks.History object at 0x7f11c81cb940>. Any idea how to make it train again?
EDIT: I also tried compiling it by setting the compile argument of load_model to True.
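For reference, a minimal sketch of what that attempt looks like; note that compile already defaults to True in load_model, so this should behave the same as the plain call above:

# Load the model and ask Keras to compile it from the saved config
model = tf.keras.models.load_model('model.h5', compile=True)
print('Learning Rate: ', tf.keras.backend.eval(model.optimizer.lr))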
Answer 0 (score: 0)
Did you try compiling it after loading?
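A minimal sketch of that suggestion, assuming the same loss, optimizer, and metrics as in the question (note that re-compiling creates a fresh Adam optimizer, so any optimizer state saved in the .h5 file is discarded):

# Load the architecture and weights, then compile again manually
model = tf.keras.models.load_model('model.h5')
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])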