Getting the output of an intermediate layer (Functional API) and using it in the Subclassed API

Date: 2021-03-07 06:27:11

Tags: python tensorflow keras deep-learning

In the Keras docs, it says that if we want to pick the output of an intermediate layer of a model (Sequential or Functional), all we need to do is the following:

model = ...  # create the original model

layer_name = 'my_layer'
intermediate_layer_model = keras.Model(inputs=model.input,
                                       outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model(data)
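
For concreteness, here is a runnable toy version of that doc snippet; the tiny Sequential model is my own stand-in for "the original model", while the layer name my_layer comes from the docs:

import tensorflow as tf

# a small stand-in model (illustrative only)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", name="my_layer", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

layer_name = 'my_layer'
intermediate_layer_model = tf.keras.Model(inputs=model.input,
                                          outputs=model.get_layer(layer_name).output)

data = tf.random.normal((4, 8))
intermediate_output = intermediate_layer_model(data)
print(intermediate_output.shape)  # (4, 16)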

So here we end up with two models: intermediate_layer_model is a sub-model of its parent model, and the two are independent. Likewise, if we take the output feature maps of an intermediate layer of the parent (or base) model, run some operation on them, and get some output feature maps from that operation, we can also merge those output feature maps back into the parent model.


from tensorflow.keras.layers import Add

# `size` and `SomeOperationLayer` are placeholders for this example
input = tf.keras.Input(shape=(size, size, 3))
model = tf.keras.applications.DenseNet121(input_tensor=input)

layer_name = "conv1_block1"  # for example
output_feat_maps = SomeOperationLayer()(model.get_layer(layer_name).output)

# assume they're able to add up
base = Add()([model.output, output_feat_maps])

# bind all
imputed_model = tf.keras.Model(inputs=[model.input], outputs=base)

This way we have a modified model. It's easy to do with the Functional API, and all the Keras ImageNet models are written with the Functional API (mostly). We can use those models inside the model subclassing API too. My concern here is: what if we need the intermediate output feature maps of such a Functional API model inside the call function?

class Subclass(tf.keras.Model):
    def __init__(self, dim):
        super(Subclass, self).__init__()
        self.dim = dim
        self.base = DenseNet121(input_shape=self.dim)

        # building new model with the desired output layer of base model
        self.mid_layer_model = tf.keras.Model(self.base.inputs,
                                    self.base.get_layer(layer_name).output)

    def call(self, inputs):
        # forward with base model
        x = self.base(inputs)

        # forward with mid_layer_model
        mid_feat = self.mid_layer_model(inputs)

        # do some op with it
        mid_x = SomeOperationLayer()(mid_feat)

        # assume they're able to add up
        out = tf.keras.layers.add([x, mid_x])

        return out

The thing is, we technically end up with two models running jointly. But unlike building such a model, here we only need a forward pass of the base model up to its intermediate output feature maps (from some input), so we can use them somewhere else and get some output. Like this:

mid_x = SomeOperationLayer()(self.base.get_layer(layer_name).output)

But it gives ValueError: Graph disconnected. So, currently, we have to build a new model from the base model based on the desired intermediate layer. In the init method we define or create the new self.mid_layer_model, which gives the output feature maps we need, as in mid_feat = self.mid_layer_model(inputs). Next, we take mid_feat, perform some operations to get some output, and finally add them up with tf.keras.layers.add([x, mid_x]). So creating a new model with the desired intermediate output works, but at the same time we repeat the same computation twice: once in the base model and once in its sub-model. Maybe I'm missing something obvious, please fill me in. Is this just how it is, or is there some strategy we can adopt? I have already asked in the forum here, with no reply yet.
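
For reference, here is a minimal sketch (my own, not from the original post) that reproduces the error: the symbolic tensor returned by base.get_layer(...).output belongs to the base model's own graph, so a fresh Input cannot reach it.

import tensorflow as tf
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights=None, input_shape=(32, 32, 3))
inputs = tf.keras.Input(shape=(32, 32, 3))
x = base(inputs)  # forward through the base model as a whole

# base.get_layer(...).output traces back to base's own Input layer,
# not to `inputs`, so Keras cannot connect the two graphs.
try:
    tf.keras.Model(inputs, base.get_layer("conv2_block1_0_relu").output)
except ValueError as e:
    print(e)  # Graph disconnected: ...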


Update

Here is a working example. Let's say we have a custom layer like this:

import tensorflow as tf
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Add
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten

class ConvBlock(tf.keras.layers.Layer):
    def __init__(self, kernel_num=32, kernel_size=(3,3), strides=(1,1), padding='same'):
        super(ConvBlock, self).__init__()
        # conv layer
        self.conv = tf.keras.layers.Conv2D(kernel_num, 
                        kernel_size=kernel_size, 
                        strides=strides, padding=padding)
        # batch norm layer
        self.bn = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv(input_tensor)
        x = self.bn(x, training=training)
        return tf.nn.relu(x)

We want to plug this layer into an ImageNet model and build a model like this:

input = tf.keras.Input(shape=(32, 32, 3))
base = DenseNet121(weights=None, input_tensor = input)

# get output feature maps at a certain layer, i.e. conv2_block1_0_relu
cb = ConvBlock()(base.get_layer("conv2_block1_0_relu").output)
flat = Flatten()(cb)
dense = Dense(1000)(flat)

# adding up
adding = Add()([base.output, dense])
model = tf.keras.Model(inputs=[base.input], outputs=adding)

from tensorflow.keras.utils import plot_model 
plot_model(model,
           show_shapes=True, show_dtype=True, 
           show_layer_names=True,expand_nested=False)

[plot of the functional model]

Here the computation from the input up to the layer conv2_block1_0_relu is done only once. Next, if we want to convert this Functional API model to the subclassing API, we first have to build a model from the base model's input to the layer conv2_block1_0_relu, like so:

class ModelWithMidLayer(tf.keras.Model):
    def __init__(self, dim=(32, 32, 3)):
        super().__init__()
        self.dim = dim
        self.base = DenseNet121(input_shape=self.dim, weights=None)
        
        # building sub-model from self.base which gives 
        # desired output feature maps: ie. conv2_block1_0_relu
        self.mid_layer = tf.keras.Model(self.base.inputs,
                                        self.base.get_layer("conv2_block1_0_relu").output)
        
        self.flat = Flatten()
        self.dense = Dense(1000)
        self.add = Add()
        self.cb = ConvBlock()
    
    def call(self, x):
        # forward with base model
        bx = self.base(x)

        # forward with mid layer
        mx = self.mid_layer(x)

        # run the custom block, then make same shape or do whatever
        mx = self.dense(self.flat(self.cb(mx)))
        
        # combine
        out = self.add([bx, mx])
        return out
    
    def build_graph(self):
        x = tf.keras.layers.Input(shape=(self.dim))
        return tf.keras.Model(inputs=[x], outputs=self.call(x))

mwml = ModelWithMidLayer()
plot_model(mwml.build_graph(),
           show_shapes=True, show_dtype=True, 
           show_layer_names=True,expand_nested=False)

[plot of the subclassed model]

Here model_1 is actually a sub-model of DenseNet, which probably causes the whole model (ModelWithMidLayer) to compute the same operations twice. If this observation is correct, that is the concern.
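
A rough eager-mode check (my own sketch, not from the original post) supports this: wrap a shared layer with a Python-side call counter and it increments once per model per forward pass.

import tensorflow as tf

class CountingLayer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.calls = 0  # Python-side counter, only meaningful in eager mode
    def call(self, x):
        self.calls += 1
        return x

inp = tf.keras.Input(shape=(8,))
counted = CountingLayer()
out = tf.keras.layers.Dense(4)(counted(inp))
base = tf.keras.Model(inp, out)
mid = tf.keras.Model(base.inputs, base.get_layer(index=1).output)

counted.calls = 0  # reset the call made during graph construction
x = tf.random.normal((1, 8))
_ = base(x)
_ = mid(x)
print(counted.calls)  # 2: the shared prefix ran once per model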

1 Answer:

Answer 0 (score: 0)

I thought it might be quite complicated, but it's actually very simple. We just need to build a model with the desired output layers in the __init__ method and use it normally in the call method.

import tensorflow as tf
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Add
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten

class ConvBlock(tf.keras.layers.Layer):
    def __init__(self, kernel_num=32, kernel_size=(3,3), strides=(1,1), padding='same'):
        super(ConvBlock, self).__init__()
        # conv layer
        self.conv = tf.keras.layers.Conv2D(kernel_num, 
                        kernel_size=kernel_size, 
                        strides=strides, padding=padding)
        # batch norm layer
        self.bn = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv(input_tensor)
        x = self.bn(x, training=training)
        return tf.nn.relu(x)

class ModelWithMidLayer(tf.keras.Model):
    def __init__(self, dim=(32, 32, 3)):
        super().__init__()
        self.dim = dim
        self.base = DenseNet121(input_shape=self.dim, weights=None)
        
        # building sub-model from self.base which gives 
        # desired output feature maps: ie. conv2_block1_0_relu
        self.mid_layer = tf.keras.Model(
            inputs=self.base.inputs,
            outputs=[
                self.base.get_layer("conv2_block1_0_relu").output,
                self.base.output])
        self.flat = Flatten()
        self.dense = Dense(1000)
        self.add = Add()
        self.cb = ConvBlock()
    
    def call(self, x):
        # a single forward pass returns both outputs at once
        # mx: base.get_layer("conv2_block1_0_relu").output
        # bx: self.base.output
        mx, bx = self.mid_layer(x)

        # run the custom block, then make same shape or do whatever
        mx = self.dense(self.flat(self.cb(mx)))

        # combine
        out = self.add([bx, mx])
        return out
    
    def build_graph(self):
        x = tf.keras.layers.Input(shape=(self.dim))
        return tf.keras.Model(inputs=[x], outputs=self.call(x))

mwml = ModelWithMidLayer()
tf.keras.utils.plot_model(mwml.build_graph(),
                          show_shapes=True, show_dtype=True, 
                          show_layer_names=True,expand_nested=False)

[plot of the combined model]
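
As a quick sanity check (my own addition, not part of the answer), a single batch of random data confirms the shapes line up:

x = tf.random.normal((2, 32, 32, 3))
y = mwml(x)
print(y.shape)  # (2, 1000): DenseNet121's logits added to the mid-branch output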
