Hi everyone,

I have a question about how to modify a pretrained VGG16 network in Keras. I want to remove the max-pooling layers at the end of the last three convolutional blocks and add a batch-normalization layer after each convolutional layer, while keeping the pretrained parameters. This means the whole modification involves not only removing some intermediate layers and adding new ones, but also reconnecting the modified layers to the rest of the network.
I am still quite new to Keras. The only approach I could find is the one shown in Removing then Inserting a New Middle Layer in a Keras Model, so I edited the code as follows:
from keras import applications
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.normalization import BatchNormalization

vgg_model = applications.VGG16(weights='imagenet',
                               include_top=False,
                               input_shape=(160, 80, 3))

# Disassemble layers
layers = [l for l in vgg_model.layers]

# Defining new convolutional layers.
# Important: the number of filters should be the same!
# Note: the receptive field of two 3x3 convolutions is 5x5.
layer_dict = dict([(layer.name, layer) for layer in vgg_model.layers])
x = layer_dict['block3_conv3'].output

# Re-apply the block4 conv layers (indices 11-13), skipping block3_pool.
for i in range(11, len(layers) - 5):
    # layers[i].trainable = False
    x = layers[i](x)

# Re-apply the block5 conv layers (indices 15-17), skipping block4_pool
# and block5_pool.
for j in range(15, len(layers) - 1):
    # layers[j].trainable = False
    x = layers[j](x)

x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Flatten()(x)
x = Dense(50, activation='softmax')(x)

custom_model = Model(inputs=vgg_model.input, outputs=x)
for layer in custom_model.layers[:16]:
    layer.trainable = False

custom_model.summary()
However, the output shapes of the convolutional layers in block 4 and block 5 are shown as 'multiple'. I tried to correct this by adding a MaxPool2D(batch_size=(1, 1), stride=None) layer, but the output shapes are still 'multiple'. Here is the summary:
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 160, 80, 3)        0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 160, 80, 64)       1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 160, 80, 64)       36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 80, 40, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 80, 40, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 80, 40, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 40, 20, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 40, 20, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 40, 20, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 40, 20, 256)       590080
_________________________________________________________________
block4_conv1 (Conv2D)        multiple                  1180160
_________________________________________________________________
block4_conv2 (Conv2D)        multiple                  2359808
_________________________________________________________________
block4_conv3 (Conv2D)        multiple                  2359808
_________________________________________________________________
block5_conv1 (Conv2D)        multiple                  2359808
_________________________________________________________________
block5_conv2 (Conv2D)        multiple                  2359808
_________________________________________________________________
block5_conv3 (Conv2D)        multiple                  2359808
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 40, 20, 128)       65664
_________________________________________________________________
batch_normalization_1 (Batch (None, 40, 20, 128)       512
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 40, 20, 128)       16512
_________________________________________________________________
batch_normalization_2 (Batch (None, 40, 20, 128)       512
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 40, 20, 128)       16512
_________________________________________________________________
batch_normalization_3 (Batch (None, 40, 20, 128)       512
_________________________________________________________________
flatten_1 (Flatten)          (None, 102400)            0
_________________________________________________________________
dense_1 (Dense)              (None, 50)                5120050
=================================================================
Total params: 19,934,962
Trainable params: 5,219,506
Non-trainable params: 14,715,456
_________________________________________________________________
Can anyone offer advice on how to achieve this?

Thank you very much.
Answer 0 (score: 0)
The 'multiple' output shapes appear because those layers were called twice (once when vgg_model was built, and a second time in your loops), so they have two output shapes. You can see here that if calling layer.output_shape raises an AttributeError, the printed output shape will be 'multiple'.
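Roughly, the summary printer does something like the following for each row (a paraphrase of the Keras layer_utils logic, not the exact source):

# Paraphrase of how model.summary() picks the output-shape column:
try:
    output_shape = layer.output_shape
except AttributeError:
    output_shape = 'multiple'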
If you call custom_model.layers[10].output_shape, you will get this error:
AttributeError: The layer "block4_conv1 has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use `get_output_shape_at(node_index)` instead.
Then, if you call custom_model.layers[10].get_output_shape_at(0), you will get the output shape corresponding to the initial network, and for custom_model.layers[10].get_output_shape_at(1), you will get the output shape you are expecting.
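For instance, a minimal sketch of checking both shapes (assuming the custom_model built in the question, where layer index 10 is block4_conv1):

# block4_conv1 was called twice, so it has one output shape per call.
layer = custom_model.layers[10]
print(layer.get_output_shape_at(0))  # from the original VGG16 graph; should be (None, 20, 10, 512)
print(layer.get_output_shape_at(1))  # from the second call in your loop; should be (None, 40, 20, 512)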
I would also like to express my doubts about the intent of this modification: if you remove a MaxPooling layer and apply the next layer (number 11) to the output that used to come before that MaxPooling layer, the learned filters will be 'expecting' an image whose resolution has been halved, so they will probably not work properly.
Imagine a filter that is 'looking for' eyes, and that eyes are usually 10 pixels wide: you would now need 20-pixel-wide eyes to trigger the same activation in that layer.
My example is obviously oversimplified and not precise, but it shows that the original idea is flawed; you should either retrain the top of the model, keep the MaxPooling layers, or define a completely new model on top of block3_conv3, as sketched below.
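Here is a minimal sketch of that last option, assuming the same 160x80x3 input and 50-class head as in the question; the filter counts in the new head are illustrative, not a recommendation:

from keras import applications
from keras.models import Model
from keras.layers import Conv2D, Flatten, Dense
from keras.layers.normalization import BatchNormalization

vgg = applications.VGG16(weights='imagenet', include_top=False,
                         input_shape=(160, 80, 3))
# Freeze all pretrained filters; only the new head will be trained.
for layer in vgg.layers:
    layer.trainable = False

# Build a brand-new trainable top at the block3_conv3 resolution, so no
# pretrained filter ever sees an input resolution it was not trained for.
x = vgg.get_layer('block3_conv3').output
x = Conv2D(128, (3, 3), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(128, (3, 3), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Flatten()(x)
out = Dense(50, activation='softmax')(x)

new_model = Model(inputs=vgg.input, outputs=out)
new_model.summary()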