I have a fully convolutional, pretrained model in an .h5 file. Now I want to change the input resolution and train again. My current approach is to iterate over all layers, create a new layer, and assign the pretrained weights. Here is a minimal sample:
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.models import Model

# this would be the pretrained model
input_layer = Input((10, 10, 3))
conv = Conv2D(16, 3)(input_layer)
bnorm = BatchNormalization()(conv)
model = Model(inputs=input_layer, outputs=bnorm)

# now I want to create a new model with the same architecture but different sizes
new_input = Input((100, 100, 3))
prev_layer = new_input
for old_layer in model.layers[1:]:
    weights = old_layer.weights
    if type(old_layer) == Conv2D:
        filters = old_layer.filters
        kernel_size = old_layer.kernel_size
        conv_layer = Conv2D(filters=filters,
                            kernel_size=kernel_size)(prev_layer)
        prev_layer = conv_layer
    elif type(old_layer) == BatchNormalization:
        bn_layer = BatchNormalization(weights=weights)
        prev_layer = bn_layer(prev_layer)
The BatchNormalization code fails. The error message is quite long; the key part seems to be:

ValueError: Shapes must be equal rank, but are 1 and 0 for 'batch_normalization_3/Assign' (op: 'Assign') with input shapes: [16], [].

The full error message is on pastebin: https://pastebin.com/NVWs4tq2

If I remove the weights argument from the BatchNormalization constructor, the code works fine. I have compared the weights I tried to supply in the constructor with the weights that get assigned when none are provided. The shapes are exactly the same:
[<tf.Variable 'batch_normalization_1/gamma:0' shape=(16,) dtype=float32_ref>,
<tf.Variable 'batch_normalization_1/beta:0' shape=(16,) dtype=float32_ref>,
<tf.Variable 'batch_normalization_1/moving_mean:0' shape=(16,) dtype=float32_ref>,
<tf.Variable 'batch_normalization_1/moving_variance:0' shape=(16,) dtype=float32_ref>]
How can I load the weights into BatchNormalization?
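For reference, one workaround that seems to sidestep the error is to skip the weights constructor argument entirely: build each new layer first, then copy the weights as numpy arrays via get_weights()/set_weights(). A minimal sketch (same toy model as above; I would still like to understand why the constructor argument fails):

```python
import numpy as np
from keras.layers import Input, Conv2D, BatchNormalization
from keras.models import Model

# original small-resolution model (stands in for the pretrained .h5 model)
input_layer = Input((10, 10, 3))
conv = Conv2D(16, 3)(input_layer)
bnorm = BatchNormalization()(conv)
model = Model(inputs=input_layer, outputs=bnorm)

# rebuild with a larger input resolution
new_input = Input((100, 100, 3))
prev_layer = new_input
for old_layer in model.layers[1:]:
    if isinstance(old_layer, Conv2D):
        new_layer = Conv2D(filters=old_layer.filters,
                           kernel_size=old_layer.kernel_size)
    elif isinstance(old_layer, BatchNormalization):
        new_layer = BatchNormalization()
    prev_layer = new_layer(prev_layer)              # build the layer first ...
    new_layer.set_weights(old_layer.get_weights())  # ... then copy numpy weights

new_model = Model(inputs=new_input, outputs=prev_layer)
```

The difference is that get_weights() returns plain numpy arrays, whereas the weights attribute returns tf.Variable objects, which may be what the constructor path chokes on.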