I combined two VGG networks in Keras for a classification task. When I run the program, it raises the error:

RuntimeError: The name "predictions" is used 2 times in the model. All layer names should be unique.

I am confused, because I only use the prediction layer once in my code:
from keras.layers import Dense
import keras
from keras.models import Model

model1 = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                        input_tensor=None, input_shape=None,
                                        pooling=None,
                                        classes=1000)
model1.layers.pop()

model2 = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                        input_tensor=None, input_shape=None,
                                        pooling=None,
                                        classes=1000)
model2.layers.pop()
for layer in model2.layers:
    layer.name = layer.name + str("two")

model1.summary()
model2.summary()

featureLayer1 = model1.output
featureLayer2 = model2.output
combineFeatureLayer = keras.layers.concatenate([featureLayer1, featureLayer2])

prediction = Dense(1, activation='sigmoid', name='main_output')(combineFeatureLayer)

model = Model(inputs=[model1.input, model2.input], outputs=prediction)
model.summary()
Thanks to @putonspectacles' help, I followed his instructions and found an interesting detail. If you use model2.layers.pop() and combine the last layers of the two models with keras.layers.concatenate([model1.output, model2.output]), you will find that model.summary() still shows the popped layer's information, even though it no longer exists in the structure. Instead, you can use keras.layers.concatenate([model1.layers[-1].output, model2.layers[-1].output]). It looks tricky, but it works. I think it is a problem of synchronization between the log and the structure.
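In other words, in this version of Keras, layers.pop() edits the layer list but not the output tensor cached on the model, so the two views disagree. A minimal check (a sketch, using model1 from the code above; the exact tensor name depends on the backend):

# After pop(), the layer list ends at fc2, but model1.output still
# points at the tensor produced by the popped 'predictions' layer.
print(model1.layers[-1].name)  # 'fc2'
print(model1.output.name)      # e.g. 'predictions/Softmax:0'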
Answer 0 (score: 6)
First of all, based on the code you posted, you have no layer whose name attribute is 'predictions', so this error has nothing to do with your Dense layer prediction, i.e.:

prediction = Dense(1, activation='sigmoid',
                   name='main_output')(combineFeatureLayer)

The VGG16 model has a Dense layer with the name predictions. In particular, this line:

x = Dense(classes, activation='softmax', name='predictions')(x)

Since you are using two of these models, the layer names are duplicated.
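A quick way to see the clash (a sketch, run on two freshly loaded VGG16 instances, before any popping or renaming):

from collections import Counter

# Every layer name from the two untouched copies appears twice,
# including 'predictions', which is the one the RuntimeError reports.
names = [l.name for l in model1.layers] + [l.name for l in model2.layers]
print([n for n, c in Counter(names).items() if c > 1])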
What you can do is rename the layer in the second model to something other than predictions, maybe predictions_1, like so:

model2 = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                        input_tensor=None, input_shape=None,
                                        pooling=None,
                                        classes=1000)

# now change the name of the layer inplace.
model2.get_layer(name='predictions').name = 'predictions_1'
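Note that for the question's full two-branch model, every name in model2 must be unique, not only predictions. A minimal sketch under that assumption (it reuses the question's Dense/Model imports and reads the fc2 features directly instead of popping layers):

# Rename every layer of the fresh model2 so nothing collides with model1;
# this also covers the classifier layer without needing layers.pop().
for layer in model2.layers:
    layer.name = layer.name + 'two'

# Take the penultimate fc2 features from each branch explicitly.
feat1 = model1.get_layer('fc2').output
feat2 = model2.get_layer('fc2two').output
combined = keras.layers.concatenate([feat1, feat2])
prediction = Dense(1, activation='sigmoid', name='main_output')(combined)
model = Model(inputs=[model1.input, model2.input], outputs=prediction)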
Answer 1 (score: 2)
You can change the name of a layer in keras; just don't use 'tensorflow.python.keras' (there, layer.name is a read-only property, so the assignment below fails). Here is my sample code:
from keras.layers import Dense, concatenate
from keras.applications import vgg16
from keras.models import Model  # needed for Model below

num_classes = 10

model = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None,
                    input_shape=(64, 64, 3), pooling='avg')
inp = model.input
out = model.output

model2 = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None,
                     input_shape=(64, 64, 3), pooling='avg')
for layer in model2.layers:
    layer.name = layer.name + str("_2")
inp2 = model2.input
out2 = model2.output

merged = concatenate([out, out2])
merged = Dense(1024, activation='relu')(merged)
merged = Dense(num_classes, activation='softmax')(merged)

model_fusion = Model([inp, inp2], merged)
model_fusion.summary()
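A quick smoke test of the fused model (a sketch; the random batch is just a stand-in for real data):

import numpy as np

# Two input branches, so predict() takes a list of two batches.
x = np.random.rand(2, 64, 64, 3).astype('float32')
preds = model_fusion.predict([x, x])
print(preds.shape)  # (2, 10), i.e. (batch, num_classes)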
Answer 2 (score: 0)
Example:
from keras.applications import MobileNet
from keras.layers import Input, MaxPooling2D, add
from keras.models import Model
from keras.optimizers import Adadelta
# `config` and `mae_loss_masked` are defined elsewhere by the author.

# Network for affine transform estimation
affine_transform_estimator = MobileNet(
    input_tensor=None,
    input_shape=(config.IMAGE_H // 2, config.IMAGE_W // 2, config.N_CHANNELS),
    alpha=1.0,
    depth_multiplier=1,
    include_top=False,
    weights='imagenet'
)
affine_transform_estimator.name = 'affine_transform_estimator'
for layer in affine_transform_estimator.layers:
    layer.name = layer.name + str("_1")

# Network for landmarks regression
landmarks_regressor = MobileNet(
    input_tensor=None,
    input_shape=(config.IMAGE_H // 2, config.IMAGE_W // 2, config.N_CHANNELS),
    alpha=1.0,
    depth_multiplier=1,
    include_top=False,
    weights='imagenet'
)
landmarks_regressor.name = 'landmarks_regressor'
for layer in landmarks_regressor.layers:
    layer.name = layer.name + str("_2")

input_image = Input(shape=(config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
downsampled_image = MaxPooling2D(pool_size=(2, 2))(input_image)
x1 = affine_transform_estimator(downsampled_image)
x2 = landmarks_regressor(downsampled_image)
x3 = add([x1, x2])

model = Model(inputs=input_image, outputs=x3)
optimizer = Adadelta()
model.compile(optimizer=optimizer, loss=mae_loss_masked)
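config and mae_loss_masked above are the author's own and are not shown; hypothetical stand-ins like the following would make the snippet self-contained:

import keras.backend as K

class config:
    # hypothetical values; the author's actual settings are not shown
    IMAGE_H, IMAGE_W, N_CHANNELS = 224, 224, 3

def mae_loss_masked(y_true, y_pred):
    # plain MAE as a placeholder for the author's masked variant
    return K.mean(K.abs(y_true - y_pred), axis=-1)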