Freezing TensorFlow 2 layers

Date: 2020-08-16 12:49:14

Tags: neural-network tensorflow2.0

I have a LeNet-300-100 dense neural network for the MNIST dataset, and I want to freeze the first two hidden layers, which contain 300 and 100 hidden neurons respectively. I only want to train the output layer. The code I have so far is:

import tensorflow as tf
from tensorflow import keras

inner_model = keras.Sequential(
    [
        keras.Input(shape=(1024,)),
        keras.layers.Dense(300, activation="relu", kernel_initializer = tf.initializers.GlorotNormal()),
        keras.layers.Dense(100, activation="relu", kernel_initializer = tf.initializers.GlorotNormal()),
    ]
)

model_mnist = keras.Sequential(
    [keras.Input(shape=(1024,)), inner_model, keras.layers.Dense(10, activation="softmax"),]
)

# model_mnist.trainable = True  # The outer model stays trainable (default)
# Freeze the inner model-
inner_model.trainable = False


# Sanity check-
inner_model.trainable, model_mnist.trainable
# (False, True)

# Compile NN-
model_mnist.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    # optimizer='adam',
    optimizer=tf.keras.optimizers.Adam(lr = 0.0012),
    metrics=['accuracy'])
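
A further sanity check, assuming the model above was built and frozen before compiling, is to count the weights the optimizer will actually update; only the output layer's kernel and bias should be listed as trainable:

# Frozen inner model contributes 4 non-trainable weights; output layer contributes 2 trainable ones
print(len(model_mnist.trainable_weights))      # expected: 2
print(len(model_mnist.non_trainable_weights))  # expected: 4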
    

However, this code does not seem to freeze the first two hidden layers; they are still being trained. What am I doing wrong?

Thanks!

1 Answer:

Answer 0 (score: 0)

Solution: use the 'trainable' argument when defining the layers of the neural network model to freeze the desired layers, as shown below.

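A minimal sketch of that approach, mirroring the model from the question (the two hidden layers are created with trainable=False, so only the final Dense(10) layer is updated during training):

import tensorflow as tf
from tensorflow import keras

model_mnist = keras.Sequential(
    [
        keras.Input(shape=(1024,)),
        # Frozen hidden layers: their weights are excluded from training
        keras.layers.Dense(300, activation="relu",
                           kernel_initializer=tf.initializers.GlorotNormal(),
                           trainable=False),
        keras.layers.Dense(100, activation="relu",
                           kernel_initializer=tf.initializers.GlorotNormal(),
                           trainable=False),
        # Output layer: the only part that will be trained
        keras.layers.Dense(10, activation="softmax"),
    ]
)

model_mnist.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0012),
    metrics=['accuracy'])

Passing trainable=False at construction time has the same effect as setting the attribute afterwards, as long as it happens before compile(); if you change trainable on an already compiled model, you need to call compile() again for the change to take effect.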