How can I feed multiple images at once into a VGG16 CNN?

Date: 2020-04-05 00:09:11

Tags: python keras deep-learning transfer-learning vgg-net

I am trying to implement a VGG-16-based feature extractor that takes two inputs: the first is the whole image, and the second is the image split into patches (N local region sub-images). First, I define two models: a global model that operates on the whole image and a local model that operates on the local image regions. The idea is to add some kind of channel-wise pooling to the local model so that, out of the N local patches, a single resulting patch descriptor is produced; the global features and this resulting local descriptor then need to be concatenated.

Could you help me implement this VGG-16-based feature extractor? The figure (VGG-16 fusion scheme) illustrates the idea behind this approach. The code is below:


My current attempt looks like this:


# Imports needed by the snippet (not shown in the original)
from keras import backend as K
from keras.applications import VGG16
from keras.layers import Input, Lambda, Dense, Concatenate
from keras.models import Model

def ChannelPool(x):
    # max over the patch axis, keeping a singleton dimension
    return K.max(x, axis=0, keepdims=True)

def ConcatLayer(x):
    tensor_1 = x[0]
    tensor_2 = x[1]
    return K.concatenate([tensor_1, tensor_2], axis=1)

N_patches = 9          # local image regions
total_features = 1024  # FC width (assumed; not defined in the original snippet)
input_shape_global = Input(shape=(224, 224, 3))
input_shape_local = Input(shape=(N_patches, 50, 50, 3))  # I struggle with this part

Global_Model = VGG16(include_top=False, weights='imagenet', input_tensor=None,
                     input_shape=(224, 224, 3), pooling='avg')
Local_Model = VGG16(include_top=False, weights='imagenet', input_tensor=input_shape_local[0],
                    input_shape=(50, 50, 3), pooling='avg')

# Rename layers to avoid name clashes between the two VGG instances
for layer_g in Global_Model.layers:
    layer_g.name = layer_g.name + "_1"

for layer_l in Local_Model.layers:
    layer_l.name = layer_l.name + "_2"

inp1 = Global_Model.input
out1 = Global_Model.output

inp2 = Local_Model.input
out2 = Local_Model.output

image_features = Global_Model(inp1)
patch_features = Local_Model(inp2)

patch_feature = Lambda(ChannelPool, name="Channel_Pool_Layer")(patch_features)
# These backend reshapes return plain tensors rather than Keras layer
# outputs, which appears to break the graph when Model() is built:
image_features = K.reshape(image_features, (1, 512))
patch_feature = K.reshape(patch_feature, (1, 512))

merged = Concatenate(axis=1)([image_features, patch_feature])

merged = Dense(total_features, activation='softmax', name='Fc1')(merged)
merged = Dense(total_features, activation='relu', name='Fc2')(merged)

final_model = Model(inputs=[inp1, inp2], outputs=merged)

AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
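For reference, here is a minimal sketch of how such a two-stream fusion model can be wired so that every operation stays inside the Keras graph: the backend max is wrapped in a `Lambda` layer, the local VGG is applied to all N patches with `TimeDistributed`, and the two 512-d descriptors are concatenated. The layer names, the FC head (relu followed by softmax), and `total_features = 1024` are my assumptions, not from the question; `weights=None` is used only to keep the sketch self-contained (use `weights='imagenet'` in practice).

```python
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Concatenate, Dense, Input, Lambda,
                                     TimeDistributed)
from tensorflow.keras.models import Model

N_patches = 9          # local image regions
total_features = 1024  # assumed FC width (512 global + 512 local)

global_input = Input(shape=(224, 224, 3), name="global_image")
local_input = Input(shape=(N_patches, 50, 50, 3), name="local_patches")

# weights=None keeps the sketch offline; both backbones end in global
# average pooling, so each produces a 512-d descriptor
global_vgg = VGG16(include_top=False, weights=None,
                   input_shape=(224, 224, 3), pooling='avg')
local_vgg = VGG16(include_top=False, weights=None,
                  input_shape=(50, 50, 3), pooling='avg')
# give the two sub-models distinct names to avoid a duplicate-name error
global_vgg._name = "global_vgg16"
local_vgg._name = "local_vgg16"

image_features = global_vgg(global_input)                  # (batch, 512)
# run the local VGG over each of the N patches independently
patch_features = TimeDistributed(local_vgg)(local_input)   # (batch, N, 512)

# channel-wise max pooling across the N patches, wrapped in a Lambda
# layer so the backend op stays part of the Keras graph
patch_feature = Lambda(lambda t: K.max(t, axis=1),
                       name="Channel_Pool_Layer")(patch_features)  # (batch, 512)

merged = Concatenate(axis=1)([image_features, patch_feature])      # (batch, 1024)
merged = Dense(total_features, activation='relu', name='Fc1')(merged)
merged = Dense(total_features, activation='softmax', name='Fc2')(merged)

final_model = Model(inputs=[global_input, local_input], outputs=merged)

# quick shape check on a dummy batch of 2 samples
out = final_model.predict([np.zeros((2, 224, 224, 3), dtype=np.float32),
                           np.zeros((2, N_patches, 50, 50, 3), dtype=np.float32)])
print(out.shape)  # (2, 1024)
```

No raw `K.reshape` calls are needed here: `pooling='avg'` already yields flat `(batch, 512)` descriptors, so the tensors can be concatenated directly and the `_inbound_nodes` error does not arise.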

Thanks.

0 Answers:

No answers yet