Keras layer names in the model do not follow the TF name_scope prefix

Asked: 2019-11-19 16:46:05

Tags: python tensorflow keras

I am using Keras's Functional API in TensorFlow 1.15. My model is complex and has a nested structure, so I thought tf.name_scope might let me create a nice modular structure, with each block adding its own unique prefix to the layers inside that block. However, I can't seem to get it to work. Here is an example:

#!/usr/bin/env python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense


if __name__ == '__main__':
    inputs = Input((10,))
    with tf.name_scope('block_1'):
        x = Dense(32)(inputs)
        x = Dense(32)(x)
    with tf.name_scope('block_2'):
        x = Dense(32)(x)
        outputs = Dense(32)(x)
    model = Model(inputs=inputs, outputs=outputs)

    print("\nLayer Names:")
    for layer in model.layers:
        print(layer.name)

    print("\nModel Summary:")
    print(model.summary())

    print("\nOutputs:", outputs.name)

The output I get is:

Layer Names:
input_1
dense
dense_1
dense_2
dense_3

Model Summary:
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 10)]              0         
_________________________________________________________________
dense (Dense)                (None, 32)                352       
_________________________________________________________________
dense_1 (Dense)              (None, 32)                1056      
_________________________________________________________________
dense_2 (Dense)              (None, 32)                1056      
_________________________________________________________________
dense_3 (Dense)              (None, 32)                1056      
=================================================================
Total params: 3,520
Trainable params: 3,520
Non-trainable params: 0
_________________________________________________________________
None

Outputs: block_2/dense_3/BiasAdd:0

As you can see from the last line, if I print just the name of the output tensor, it appears to pick up the name_scope prefix, but if I print the layer names retrieved from the model, it does not work. I would expect the layer names to look like

input_1
block_1/dense
block_1/dense_1
block_2/dense_2
block_2/dense_3

or something similar. Any ideas on how to achieve this, or is there some other mechanism better suited for this than tf.name_scope that I should know about?

2 Answers:

Answer 0 (score: 0)

tf.name_scope puts tensor names into the name scope. If you print x.name at each step, you will see that the scope name is applied correctly, because x is a tensor. Keras layers, on the other hand, are not tensors, so they do not respect name scopes (should they? maybe. why don't they? I don't know).

You can name Keras layers explicitly, e.g. Dense(32, name='scope_1/layer_1'). I am not aware of another option.
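To avoid writing every prefixed name by hand, the explicit-naming idea can be wrapped in a small helper that hands out unique, block-prefixed names. This is a minimal sketch; the BlockNamer class and its name method are hypothetical, not part of Keras:

```python
import itertools
from collections import defaultdict


class BlockNamer:
    """Hands out unique 'block/base_N' layer names, mimicking for
    layer names what tf.name_scope does for tensor names."""

    def __init__(self):
        # one independent counter per (block, base) pair
        self._counters = defaultdict(itertools.count)

    def name(self, block, base):
        return f'{block}/{base}_{next(self._counters[(block, base)])}'


namer = BlockNamer()
# Use as: Dense(32, name=namer.name('block_1', 'dense'))
print(namer.name('block_1', 'dense'))  # block_1/dense_0
print(namer.name('block_1', 'dense'))  # block_1/dense_1
print(namer.name('block_2', 'dense'))  # block_2/dense_0
```

Whether '/' is accepted inside a layer name depends on the Keras version; if it is rejected, a separator like '.' or '_' works the same way.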

Answer 1 (score: 0)

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.utils import plot_model


def vgg_block(layer_in, n_filters, n_conv, i):
    # name every layer explicitly with a block prefix
    for j in range(n_conv):
        layer_in = Conv2D(n_filters, (3, 3), padding='same', activation='relu',
                          name=f"block{i}_conv{i+j}")(layer_in)
    layer_in = MaxPooling2D((2, 2), strides=(2, 2), name=f"block{i}_pool")(layer_in)
    return layer_in

# define model input
visible = Input(shape=(256, 256, 3))
# add vgg blocks
layer = vgg_block(visible, 64, 2, 1)
layer = vgg_block(layer, 128, 2, 2)
layer = vgg_block(layer, 256, 4, 3)
# create model
model = Model(inputs=visible, outputs=layer)
# summarize model
model.summary()
# plot model architecture (show_dtype requires a newer TF than 1.15, so it is omitted)
plot_model(model, to_file='1-multiple_vgg_blocks.png', show_shapes=True, show_layer_names=True)
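As a quick sanity check, the names that the vgg_block helper above assigns can be computed ahead of time without building the model. This is a plain-Python sketch (the vgg_layer_names function is hypothetical, introduced here only for illustration):

```python
def vgg_layer_names(block_convs):
    """Reproduce the layer names assigned by vgg_block above,
    given the number of conv layers in each block."""
    names = []
    for i, n_conv in enumerate(block_convs, start=1):
        # Conv2D layers are named block{i}_conv{i+j}
        names += [f"block{i}_conv{i+j}" for j in range(n_conv)]
        # each block ends with a single pooling layer
        names.append(f"block{i}_pool")
    return names


# the three blocks above use 2, 2, and 4 conv layers
print(vgg_layer_names([2, 2, 4]))
```

Note that with this scheme the prefix lives inside the layer name itself (block1_conv1, block1_pool, ...), so it shows up in model.summary() with no reliance on tf.name_scope.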
