Visualizing convolutional layers in Keras

Date: 2018-12-16 16:00:08

Tags: tensorflow keras neural-network deep-learning conv-neural-network

I want to visualize the images (feature maps) in the convolutional layers of a deep learning model, and I found code for this at the link below:

https://github.com/yashk2810/Visualization-of-Convolutional-Layers/blob/master/Visualizing%20Filters%20Python3%20Theano%20Backend.ipynb

I applied the same code, but the resulting images are blank.

I am using flow_from_directory to read the images.
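For context, the images are read roughly like this (the directory path and generator parameters are only placeholders; my actual setup differs):

    from keras.preprocessing.image import ImageDataGenerator

    # Sketch of the data loading; 'data/train' stands in for the real directory
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = datagen.flow_from_directory(
        'data/train',
        target_size=(224, 224),
        batch_size=32,
        class_mode='categorical')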


Please help me find a solution.

Here is the code:

    from keras.preprocessing import image
    from keras import backend as K
    import numpy as np
    import matplotlib.pyplot as plt

    # vgg16_face_model and convout2 (the convolutional layer to inspect) are defined earlier

    img_to_visualize = image.load_img('img.jpg', target_size=(224, 224))
    img_to_visualize = np.expand_dims(img_to_visualize, axis=0)

    def layer_to_visualize(layer):
        inputs = [K.learning_phase()] + vgg16_face_model.inputs
        _convout1_f = K.function(inputs, [layer.output])

        def convout1_f(X):
            # The [0] is to disable the training phase flag
            return _convout1_f([0] + [X])

        convolutions = convout1_f(img_to_visualize)
        convolutions = np.squeeze(convolutions)

        print('Shape of conv:', convolutions.shape)

        n = convolutions.shape[0]
        n = int(np.ceil(np.sqrt(n)))

        # Visualization of each filter of the layer
        fig = plt.figure(figsize=(12, 8))
        for i in range(len(convolutions)):
            ax = fig.add_subplot(n, n, i + 1)
            ax.imshow(convolutions[i], cmap='viridis')

    # Specify the layer you want to visualize
    layer_to_visualize(convout2)

1 Answer:

Answer 0 (score: 1)

Since you got the output 'Shape of conv: (14, 14, 512)' and tagged the question with 'tensorflow', I assume you are not using the Theano backend and that your "image_data_format" is "channels_last". I have not used Theano myself, but from what I found, the Theano backend probably defaults to "channels_first". So when you loop over the layer output:

for i in range(len(convolutions)):
    ax = fig.add_subplot(n,n,i+1)
    ax.imshow(convolutions[i], cmap='viridis')

you are actually plotting 14 images of size 14x512 instead of 512 images of size 14x14 (which I believe is what you want).
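With "channels_last" data you should instead iterate over the last axis. A minimal sketch of the changed loop, keeping the rest of your function as it is:

# convolutions has shape (14, 14, 512): height, width, channels
n_filters = convolutions.shape[-1]
n = int(np.ceil(np.sqrt(n_filters)))

fig = plt.figure(figsize=(12, 8))
for i in range(n_filters):
    ax = fig.add_subplot(n, n, i + 1)
    # plot the i-th channel (one 14x14 filter response) instead of the i-th row
    ax.imshow(convolutions[:, :, i], cmap='viridis')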

Alternatively, a simple fix (meaning you can keep the function exactly as you have already written it) is to set "image_data_format" to "channels_first" by adding K.set_image_data_format('channels_first') at the top of your notebook/script.
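As a sketch, that one-liner goes before the model is built or loaded:

from keras import backend as K

# Tell Keras to use Theano-style (channels, height, width) ordering
K.set_image_data_format('channels_first')
print(K.image_data_format())  # should now report 'channels_first'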

However, this fix may conflict with the rest of your code. In that case, you can rewrite the layer-visualization function instead. Here is an example that works with https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py:
import matplotlib.pyplot as plt
import numpy as np
from keras import backend as K

def visualize_layer(model, layer, input, train_mode=False):
    # Map the model input (plus the learning-phase flag) to the requested layer's output
    get_layer_output = K.function([model.input, K.learning_phase()],
                                  [layer.output])
    layer_output = get_layer_output([input, int(train_mode)])[0]
    print('Shape of {} layer output: {}'.format(layer, layer_output.shape))
    # One figure per input sample; each subplot shows one channel (filter response)
    for i, sample in enumerate(layer_output):
        n_img = sample.shape[-1]
        img_row = int(np.ceil(np.sqrt(n_img)))
        fig = plt.figure()
        for j in range(n_img):
            ax = fig.add_subplot(img_row, img_row, j+1)
            ax.imshow(sample[:, :, j], cmap='gray')
        fig.savefig('sample_{}.png'.format(i))

visualize_layer(model, model.layers[1], [x_train[0]])
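
To adapt it to your case, you would call it with your own model and input along these lines (the layer name 'block5_conv2' is only a guess at one of the VGG conv layers; pick the layer you actually want by name or index):

# Hypothetical call for the question's setup
layer = vgg16_face_model.get_layer('block5_conv2')  # or vgg16_face_model.layers[<index>]
visualize_layer(vgg16_face_model, layer, img_to_visualize)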