I am implementing the network architecture proposed in this paper, and I saw the answer on this question.
There is one more layer, the fusion layer, that I am having difficulty implementing, because the output shape of mid_level_network is not fixed, (None, H/8, W/8, 256), while the output shape of global_network is fixed, (None, 256).
I implemented it with the following code.
def fusion_layer(mid_level_output, global_output):
    repeat_time = mid_level_output.shape[1] * mid_level_output.shape[2]
    global_output = RepeatVector(repeat_time)(global_output)
    target_shape = (mid_level_output.shape[1], mid_level_output.shape[2], global_output.shape[2])
    global_output = Reshape(target_shape)(global_output)
    fusion_output = Concatenate()([mid_level_output, global_output])
    return fusion_output
But when I call this function, it raises the following error.
TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType'
def low_level_feature_network(low_level_input):
    X = Conv2D(64, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(low_level_input)
    X = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    X = Conv2D(128, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(X)
    X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    X = Conv2D(256, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(X)
    X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    return X
def mid_level_feature_network(mid_level_input):
    X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(mid_level_input)
    X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    return X
def global_feature_network(global_input):
    X = Conv2D(512, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(global_input)
    X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    X = Conv2D(512, kernel_size=(3, 3), strides=(2, 2), padding='same', activation='relu')(X)
    X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(X)
    X = Flatten()(X)
    X = Dense(1024, activation='relu')(X)
    X = Dense(512, activation='relu')(X)
    X = Dense(256, activation='relu')(X)
    return X
# dynamic output shape: mid_level_output
low_level_no_scaling_input = Input((None, None, 1))
low_level_no_scaling_output = low_level_feature_network(low_level_no_scaling_input)
mid_level_output = mid_level_feature_network(low_level_no_scaling_output)
# fixed output shape: global_output
low_level_scaling_input = Input((224, 224, 1))
low_level_scaling_output = low_level_feature_network(low_level_scaling_input)
global_output = global_feature_network(low_level_scaling_output)
# fusing the two layers raises the error above
fusion_output = fusion_layer(mid_level_output, global_output)
How can I fuse these two layers with different kinds of output shapes together?
Thank you very much.
Answer 0: (score: 0)
For dynamic shapes, it is better to use https://www.tensorflow.org/api_docs/python/tf/keras/backend/shape?version=stable.
So mid_level_output.shape[1] should be K.shape(mid_level_output)[1], where K is keras.backend.
Also, a Keras layer is more than just a function: to implement fusion_layer, use a Lambda layer or subclass tf.keras.layers.Layer. Finally, it looks like you may have forgotten the batch size (dimension 0).
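A minimal sketch of that fix, assuming TensorFlow 2.x with tf.keras; the helper name fuse is illustrative, and the 256-channel global vector comes from the shapes in the question:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda

def fuse(inputs):
    mid_level_output, global_output = inputs
    # The spatial dimensions are only known at run time, so read them as tensors.
    shape = K.shape(mid_level_output)
    h, w = shape[1], shape[2]
    # Broadcast the (batch, 256) global vector to (batch, 1, 1, 256), then
    # tile it over the spatial grid to (batch, h, w, 256).
    global_output = K.reshape(global_output, (-1, 1, 1, 256))
    global_output = K.tile(global_output, K.stack([1, h, w, 1]))
    # Concatenate along the channel axis: (batch, h, w, 512).
    return K.concatenate([mid_level_output, global_output], axis=-1)

fusion_output = Lambda(fuse)([mid_level_output, global_output])

The same body also works as the call() method of a tf.keras.layers.Layer subclass; either way the ops are traced symbolically, so nothing tries to multiply two None dimensions at graph-build time.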