Visualizing the outputs of Keras convolutional layers

Date: 2017-04-02 04:46:59

Tags: tensorflow keras conv-neural-network keras-layer

I have written the following code for this problem. It has two convolutional layers (Conv1 and Conv2 for short), and I want to plot all the outputs of each layer (the code is self-contained). Everything works fine for Conv1, but I am missing something about Conv2.

I feed a 1x1x25x25 image (num images, num channels, height, width; my own convention, not the TF or Theano one) into Conv1, which has four 5x5 filters. This means its output shape is 4x1x1x25x25 (num filters, num images, num channels, height, width), giving 4 plots.

This output is then fed into Conv2, which has SIX 3x3 filters. The output of Conv2 should therefore be 6x(4x1x1x25x25), but it is not! It is just 6x1x1x25x25. That means there are only 6 plots instead of 6x4, but why? The function below also prints the shape of each output:

(1, 1, 25, 25, 4)
-------------------
(1, 1, 25, 25, 6)
-------------------

But shouldn't it be

(1, 1, 25, 25, 4)
-------------------
(1, 4, 25, 25, 6)
-------------------

Right?

import numpy as np
#%matplotlib inline #for Jupyter ONLY
import matplotlib.pyplot as plt

from   keras.models     import Sequential
from   keras.layers     import Conv2D
from   keras            import backend as K

model = Sequential()

# Conv1
conv1_filter_size = 5
model.add(Conv2D(nb_filter=4, nb_row=conv1_filter_size, nb_col=conv1_filter_size, 
                 activation='relu', 
                 border_mode='same',
                 input_shape=(25, 25, 1)))

# Conv2
conv2_filter_size = 3
model.add(Conv2D(nb_filter=6, nb_row=conv2_filter_size, nb_col=conv2_filter_size, 
                 activation='relu', 
                 border_mode='same'))

# The image to be sent through the model
img = np.array([
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[0.],[1.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[1.],[0.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]]])

def get_layer_outputs(image):
    '''This function extracts the numerical output of each layer.'''
    outputs    = [layer.output for layer in model.layers]
    comp_graph = [K.function([model.input] + [K.learning_phase()], [output]) for output in outputs]

    # Feeding the image
    layer_outputs_list = [op([[image]]) for op in comp_graph]

    layer_outputs = []
    for layer_output in layer_outputs_list:
        print(np.array(layer_output).shape, end='\n-------------------\n')
        layer_outputs.append(layer_output[0][0])

    return layer_outputs

def plot_layer_outputs(image, layer_number):
    '''This function handles plotting of the layers.'''
    layer_outputs = get_layer_outputs(image)

    x_max = layer_outputs[layer_number].shape[0]
    y_max = layer_outputs[layer_number].shape[1]
    n     = layer_outputs[layer_number].shape[2]

    L = []
    for i in range(n):
        L.append(np.zeros((x_max, y_max)))

    for i in range(n):
        for x in range(x_max):
            for y in range(y_max):
                L[i][x][y] = layer_outputs[layer_number][x][y][i]


    for img in L:
        plt.figure()
        plt.imshow(img, interpolation='nearest')

plot_layer_outputs(img, 1)
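
As a cross-check of the shapes printed by get_layer_outputs, Keras itself reports each layer's static output shape; a minimal sketch, assuming the model above has already been built (the layer names shown in the comments are illustrative):

for layer in model.layers:
    # With the TensorFlow dimension ordering, shapes are (batch, height, width, channels)
    print(layer.name, layer.output_shape)
# convolution2d_1 (None, 25, 25, 4)
# convolution2d_2 (None, 25, 25, 6)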

1 Answer:

Answer 0 (score: 0):

The output of a convolutional layer is bundled into a single image with multiple channels. You can think of these as feature channels, by analogy with color channels. For example, if a convolutional layer has F filters, it outputs an image with F channels, no matter how many (color or feature) channels the input image has. This is why Conv2 produces 6 feature maps rather than 6x4.

In more detail, each convolution filter convolves over all of its input channels, and a linear combination of those per-channel convolutions is fed into its activation function.
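
To make the channel summation concrete, here is a minimal NumPy/SciPy sketch of how a single Conv2 output map is formed (scipy.signal.correlate2d and the random weights are stand-ins used only for illustration; 'same' padding and zero bias are assumed):

import numpy as np
from scipy.signal import correlate2d

in_channels, num_filters = 4, 6                                 # Conv1 yields 4 feature channels; Conv2 has 6 filters
feature_maps = np.random.rand(25, 25, in_channels)              # stand-in for Conv1's output
kernels      = np.random.rand(3, 3, in_channels, num_filters)   # each filter spans ALL input channels

output = np.zeros((25, 25, num_filters))
for f in range(num_filters):
    for c in range(in_channels):
        # Convolve each input channel with the filter's kernel for that channel, then sum:
        # one map per filter, not one per (filter, channel) pair.
        output[:, :, f] += correlate2d(feature_maps[:, :, c], kernels[:, :, c, f], mode='same')
output = np.maximum(output, 0.0)                                # ReLU is applied to the combined sum

print(output.shape)   # (25, 25, 6) -- six feature maps, not 6x4

The same structure is visible in the layer's weights: with the TensorFlow dimension ordering used in the question, model.layers[1].get_weights()[0] has shape (3, 3, 4, 6), i.e. each of the 6 filters carries a 3x3 kernel for every one of the 4 input channels.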