How to save a custom trained model without the fully connected layers, like MobileNetV2 with include_top=False

Posted: 2019-09-03 09:28:17

Tags: tensorflow

I want to save my trained model to .h5 without its last two layers, so that I can use my custom model for transfer learning later, just like MobileNetV2 with include_top=False. Can anyone help me? Thanks!

base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(
    alpha=1.0,
    input_shape=IMG_SHAPE,
    include_top=False,
    weights='imagenet')

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(255, activation=tf.nn.softmax)
])

The trained model looks like this:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
mobilenetv2_1.00_224 (Model) (None, 2, 2, 1280)        2257984
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280)              0
_________________________________________________________________
dense (Dense)                (None, 205)               262605
=================================================================
Total params: 2,520,589
Trainable params: 2,486,477
Non-trainable params: 34,112
_________________________________________________________________

When I try to use it for transfer learning:

keras_model = tf.keras.models.load_model(keras_model_path)
keras_model.summary()

input = keras_model.input
hidden = tf.keras.layers.GlobalMaxPooling2D()(keras_model.layers[-3].output)
out = tf.keras.layers.Dense(128, activation=tf.nn.softmax)(hidden)
model2 = tf.keras.Model(input, out)
model2.summary()

This error occurs:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 64, 64, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
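
A note on why this happens, as far as I can tell: keras_model.layers[-3] is the nested MobileNetV2 sub-model, and its .output tensor belongs to that sub-model's own internal graph rather than to the loaded outer model's graph, so Keras cannot trace a path back to keras_model.input. One common workaround is to ask the nested layer for the output of its most recent call instead. A rough sketch, assuming the same layer indices as above:

base = keras_model.layers[-3]                      # the nested MobileNetV2 sub-model
features = base.get_output_at(-1)                  # its 4D output in the outer model's graph
hidden = tf.keras.layers.GlobalMaxPooling2D()(features)
out = tf.keras.layers.Dense(128, activation=tf.nn.softmax)(hidden)
model2 = tf.keras.Model(keras_model.input, out)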

1 Answer:

Answer 0 (score: 0)

"I want to save my trained model to .h5 without its last two layers..."

Why not save the complete model with model.save(), and when you reload it for transfer learning, simply remove the layers with:

model.layers.pop()
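
A minimal sketch of that workflow, assuming the Sequential model from the question (the file name and layer index are illustrative). Note that in tf.keras, layers.pop() may not actually rewire the model's output tensor, so the sketch rebuilds the headless part with the functional API, which achieves the same effect:

import tensorflow as tf

# 1. Save the complete trained model (architecture + weights) to one file.
model.save('trained_full.h5')

# 2. Later, reload it for transfer learning.
reloaded = tf.keras.models.load_model('trained_full.h5')

# 3. Drop the old classifier head by taking the pooling layer's output
#    (layers[-2] in the summary above) and attaching a new head.
features = reloaded.layers[-2].output
new_out = tf.keras.layers.Dense(128, activation=tf.nn.softmax)(features)
transfer_model = tf.keras.Model(reloaded.input, new_out)
transfer_model.summary()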

You could also remove the layers before saving the model, but I wouldn't do that.
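
For completeness, one rough sketch of that alternative (file name illustrative): since the base_model object from the question shares its weights with the trained Sequential model, saving it directly after training already gives you a headless MobileNetV2, much like include_top=False.

# Saves only the convolutional base, without the pooling/Dense head.
base_model.save('mobilenetv2_headless.h5')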