I have trained a model (built with tf.keras) using plain TensorFlow. I saved the model with model.save('model_150.h5') (since it is a Keras model).
Here is my model:
conv_1_1 = Conv2D(filters=64, kernel_size=3, activation='relu', padding='same')(input_img)
conv_1_1_bn = BatchNormalization()(conv_1_1)
conv_1_1_do = Dropout(droprate)(conv_1_1_bn)
pool_1 = MaxPooling2D(pool_size=2, strides=2)(conv_1_1_do)
conv_4_1 = SeparableConv2D(filters=512, kernel_size=3, activation='relu', padding='same')(pool_1)
conv_4_1_bn = BatchNormalization()(conv_4_1)
conv_4_1_do = Dropout(droprate)(conv_4_1_bn)
pool_4 = MaxPooling2D(pool_size=2, strides=2)(conv_4_1_do)
conv_5_1 = SeparableConv2D(filters=1024, kernel_size=3, activation='relu', padding='same')(pool_4)
conv_5_1_bn = BatchNormalization()(conv_5_1)
conv_5_1_do = Dropout(droprate)(conv_5_1_bn)
upconv_1 = upconv_concat(conv_5_1_do, conv_4_1_do, n_filter=512, pool_size=2, stride=2)
conv_6_1 = SeparableConv2D(filters=512, kernel_size=3, activation='relu', padding='same')(upconv_1)
conv_6_1_bn = BatchNormalization()(conv_6_1)
conv_6_1_do = Dropout(droprate)(conv_6_1_bn)
upconv_2 = upconv_concat(conv_6_1_do, conv_1_1_do, n_filter=64, pool_size=2, stride=2)
conv_9_1 = SeparableConv2D(filters=64, kernel_size=3, activation='relu', padding='same')(upconv_2)
conv_9_1_bn = BatchNormalization()(conv_9_1)
conv_9_1_do = Dropout(droprate)(conv_9_1_bn)
ae_output = Conv2D(num_classes, kernel_size=1, strides=(1, 1), activation='softmax')(conv_9_1_do)
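(upconv_concat above is a custom helper whose body is not shown. A minimal sketch of what it presumably does, given the calls above, is learned upsampling via Conv2DTranspose followed by a channel-wise concatenation with the matching encoder feature map; the function name and parameters come from the code above, the body is my assumption:)

```python
import tensorflow as tf

def upconv_concat(inputs, skip, n_filter, pool_size, stride):
    """Hypothetical reconstruction of the helper used above (assumption:
    learned upsampling via Conv2DTranspose, then a skip connection)."""
    # Upsample the decoder feature map so its spatial size matches `skip`
    up = tf.keras.layers.Conv2DTranspose(
        filters=n_filter, kernel_size=pool_size,
        strides=stride, padding='same')(inputs)
    # Concatenate along the channel axis (channels_last layout assumed)
    return tf.keras.layers.concatenate([up, skip], axis=-1)
```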
I originally defined the model like this:
e_model = Model(input_img, ae_output)
Now I need some custom training, so I trained the model with plain TensorFlow like this:
This is my loss function:
def cut_loss(original_image):
    ypred = e_model(original_image)
    ...
    ...
    # do some computations and calculate some custom loss
    ...
    return loss
This is my optimizer:
#optimizer
loss = cut_loss(original_image)
opt = tf.train.AdamOptimizer(learning_rate=e_lr).minimize(loss)
This is my training loop:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        print("epoch:", epoch)
        count = 0
        batch_start_index = 0
        while count != num_batches:
            # send a batch of input images of shape (batch_size, 224, 224, 1)
            X_train_batch = X_train[batch_start_index : batch_start_index + batch_size]
            _, train_loss = sess.run([opt, loss], feed_dict={original_image: X_train_batch})
            batch_start_index += batch_size
            count += 1
        print("Train loss after epoch", str(epoch), "is", str(train_loss))
After training, I saved the model and then restarted the Jupyter kernel. When I try to load the model from the h5 file:
from tensorflow.keras.models import load_model
e_model = load_model('model_150.h5')
I run into the following warning:
UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually
How do I get rid of this warning?
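(For context: my understanding is that the warning appears because the h5 file only contains the architecture and weights when the model was never compiled through the Keras API, so there is no training configuration to restore. A minimal self-contained reproduction with a tiny stand-in model, not my actual network, where passing compile=False acknowledges this and avoids the warning:)

```python
import tensorflow as tf

# Tiny stand-in model (hypothetical, far smaller than the network above)
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                             tf.keras.layers.Dense(2)])
model.save('tiny_model.h5')  # saved without ever calling compile()

# compile=False tells load_model not to look for a training configuration,
# so the "No training configuration found" UserWarning is not raised
reloaded = tf.keras.models.load_model('tiny_model.h5', compile=False)

# Compile manually afterwards if fit()/evaluate() are needed again
reloaded.compile(optimizer='adam', loss='mse')
```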
Is it a bad idea to train a model with plain TensorFlow and then save it with the model.save() function from tf.keras?
Please let me know if any other details are needed. Thanks!