I'm looking for the inverse operations of BatchNormalization, LeakyReLU, Lambda and Reshape to build a visualization of my CNN

Time: 2019-03-27 18:00:59

Tags: conv-neural-network yolo

I am trying to implement a DeconvNet to visualize a CNN, i.e. to see what features the different layers of the network are looking for. To do this, I need the inverse of the operations used in the network (e.g. ReLU, batch normalization).

You can look at this paper to see what I am trying to do: https://arxiv.org/abs/1311.2901

Here is the deconvolution code I found on the internet:

import numpy as np
import tensorflow as tf
import keras
from keras import backend as K


class DConvolution2D(object):

    def __init__(self, layer):
        self.layer = layer

        weights = layer.get_weights()
        W = weights[0]  # kernel: (rows, cols, input_depth, filters)
        b = weights[1]  # bias: (filters,)

        # Forward ("up") function: a copy of the original convolution
        filters = W.shape[3]
        up_row = W.shape[0]
        up_col = W.shape[1]
        input_img = keras.layers.Input(shape=layer.input_shape[1:])
        output = keras.layers.Conv2D(filters, (up_row, up_col),
                                     kernel_initializer=tf.constant_initializer(W),
                                     bias_initializer=tf.constant_initializer(b),
                                     padding='same')(input_img)
        self.up_func = K.function([input_img, K.learning_phase()], [output])

        # Deconv filter (exchange the number of filters and the depth of each filter)
        W = np.transpose(W, (0, 1, 3, 2))
        # Reverse rows and columns (180-degree rotation of each filter)
        W = W[::-1, ::-1, :, :]
        down_filters = W.shape[3]
        down_row = W.shape[0]
        down_col = W.shape[1]
        b = np.zeros(down_filters)
        input_d = keras.layers.Input(shape=layer.output_shape[1:])
        output = keras.layers.Conv2D(down_filters, (down_row, down_col),
                                     kernel_initializer=tf.constant_initializer(W),
                                     bias_initializer=tf.constant_initializer(b),
                                     padding='same')(input_d)
        self.down_func = K.function([input_d, K.learning_phase()], [output])

    def up(self, data, learning_phase=0):
        # Forward pass
        self.up_data = self.up_func([data, learning_phase])
        self.up_data = np.squeeze(self.up_data, axis=0)
        self.up_data = np.expand_dims(self.up_data, axis=0)
        return self.up_data

    def down(self, data, learning_phase=0):
        # Backward (deconvolution) pass
        self.down_data = self.down_func([data, learning_phase])
        self.down_data = np.squeeze(self.down_data, axis=0)
        self.down_data = np.expand_dims(self.down_data, axis=0)
        return self.down_data
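The key trick in `__init__` above is how the deconvolution kernel is built from the forward kernel: the input-depth and filter axes are swapped, and each filter is flipped along both spatial axes. A minimal NumPy check of that shape bookkeeping (the 3×3×16×32 kernel shape is just an illustrative example, not taken from the code above):

```python
import numpy as np

# A hypothetical forward-conv kernel: (rows, cols, input_depth, filters)
W = np.random.rand(3, 3, 16, 32)

# Swap the input-depth and filter axes, then flip both spatial axes
W_deconv = np.transpose(W, (0, 1, 3, 2))[::-1, ::-1, :, :]

# The deconv layer now maps 32 channels back to 16
print(W_deconv.shape)  # → (3, 3, 32, 16)

# Applying the same construction again recovers the original kernel,
# which is why the operation is its own (approximate) inverse
W_back = np.transpose(W_deconv, (0, 1, 3, 2))[::-1, ::-1, :, :]
print(np.array_equal(W, W_back))  # → True
```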

So I would like to do the same for the other operations used in the YOLO architecture.
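For the layers asked about, DeconvNet-style inverses can be sketched directly in NumPy. This is only a sketch under the usual assumptions (per-channel BatchNormalization with known gamma/beta/moving mean/variance, LeakyReLU with a fixed alpha, and Reshape just moving values around); a Lambda layer has no generic inverse, since you have to invert whatever function you wrapped in it by hand:

```python
import numpy as np

def inverse_leaky_relu(y, alpha=0.1):
    """LeakyReLU, y = x if x > 0 else alpha * x, is invertible for
    alpha > 0: divide the negative part by alpha."""
    return np.where(y > 0, y, y / alpha)

def inverse_batch_norm(y, gamma, beta, mean, var, eps=1e-3):
    """Undo y = gamma * (x - mean) / sqrt(var + eps) + beta."""
    return (y - beta) / gamma * np.sqrt(var + eps) + mean

def inverse_reshape(y, original_shape):
    """Reshape moves no values, so its inverse is a reshape back
    to the layer's input shape."""
    return y.reshape(original_shape)
```

One caveat: in the Zeiler & Fergus paper, the "inverse" of ReLU in the backward pass is not an exact inverse but simply ReLU again (rectification, to keep reconstructed feature maps positive), so for LeakyReLU you may prefer to reapply LeakyReLU itself rather than the exact inverse above.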

Thanks for your help, and sorry for my English if I was not clear enough.

0 Answers:

No answers yet