How to extract an image feature vector when using a CNN in Keras

Date: 2017-12-29 11:19:16

Tags: keras feature-extraction

I am working on a binary classification problem, and my model architecture is as follows:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers import Flatten, Dense, Activation, BatchNormalization

def CNN_model(height, width, depth):
    input_shape = (height, width, depth)

    model = Sequential()
    # Block 1
    model.add(Conv2D(64, kernel_size=(3, 3), strides=1, activation='relu', input_shape=input_shape, padding='valid'))
    model.add(Conv2D(64, kernel_size=(3, 3), strides=1, activation='relu', padding='valid'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    # Block 2
    model.add(Conv2D(128, kernel_size=(3, 3), strides=1, activation='relu', padding='valid'))
    model.add(Conv2D(128, kernel_size=(3, 3), strides=1, activation='relu', padding='valid'))
    model.add(AveragePooling2D(pool_size=(19, 19)))

    # set of FC => RELU layers
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(BatchNormalization())
    # num_classes is defined elsewhere (2 for this binary classification problem)
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss=keras.losses.binary_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])
    return model

For each image in the test set, I need to get the 128-D feature vector collected from the FC layer, i.e. from model.add(Dense(128)), so I can use it for SVM classification. Can you tell me how to do this? Thanks!

1 Answer:

Answer 0 (score: 7)

The easiest way to do this here is to remove the dense layers.
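If the model above has already been trained, you can also apply this idea without rebuilding or retraining anything, by truncating it at the Flatten layer with the Functional API. A minimal sketch, assuming model is the trained model returned by CNN_model and X_test is a placeholder for your preprocessed test images:

from keras.models import Model

# In the question's architecture, model.layers[6] is the Flatten layer,
# so this sub-model effectively drops the dense layers.
feature_model = Model(inputs=model.input, outputs=model.layers[6].output)

# X_test is a placeholder: a preprocessed batch of shape (num_images, height, width, depth)
flatten_features = feature_model.predict(X_test)
print(flatten_features.shape)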


I will answer with an example that uses similar layers but a different input_shape:

from keras.layers import *
from keras.models import Model, Sequential
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt

model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), input_shape=(530, 700, 3), padding='valid'))
model.add(Conv2D(64, kernel_size=(3, 3), padding='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Block 2
model.add(Conv2D(128, kernel_size=(3, 3), strides=1, activation='relu', padding='valid'))
model.add(Conv2D(128, kernel_size=(3, 3), strides=1, activation='relu', padding='valid'))
model.add(AveragePooling2D(pool_size=(19, 19)))

# set of FC => RELU layers
model.add(Flatten())

#getting the summary of the model (architecture)
model.summary()

img_path = '/home/sb0709/Desktop/dqn/DQN/data/data/2016_11_01-2017_11_01.png'
img = image.load_img(img_path, target_size=(530, 700))
img_data = image.img_to_array(img)
img_data = np.expand_dims(img_data, axis=0)
img_data = preprocess_input(img_data)

vgg_feature = model.predict(img_data)
#print the shape of the output (from your architecture it is clear this will be (1, 128))
print(vgg_feature.shape)

#print the numpy array output of the flatten layer
print(vgg_feature)

Here is the output of the model architecture with all the layers: model summary


Here the feature vector is listed: feature vector size of (1,128) (numpy array)


Image used in the example:

Image for the example
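Since the stated goal is to feed these vectors to an SVM, here is a minimal sketch of that last step, assuming scikit-learn is available; train_paths, train_labels and test_paths are placeholders for your own data, while model and preprocess_input are the ones defined above:

import numpy as np
from keras.preprocessing import image
from sklearn.svm import SVC

def extract_features(img_paths):
    # img_paths: placeholder list of image file paths
    feats = []
    for p in img_paths:
        img = image.load_img(p, target_size=(530, 700))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        feats.append(model.predict(x)[0])   # one feature vector per image
    return np.vstack(feats)

X_train = extract_features(train_paths)     # features for the training images
X_test = extract_features(test_paths)       # features for the test images

svm = SVC(kernel='linear')
svm.fit(X_train, train_labels)
predictions = svm.predict(X_test)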

The second method, for when you use the Functional API instead of Sequential(), is to use How can I obtain the output of an intermediate layer?

from keras import backend as K
# with a Sequential model: model.layers[6] is the Flatten layer of the
# architecture above, and x is a preprocessed input batch (e.g. img_data)
get_6th_layer_output = K.function([model.layers[0].input],
                                  [model.layers[6].output])
layer_output = get_6th_layer_output([x])[0]

#print shape
print(layer_output.shape)

#print the numpy array output of the flatten layer
print(layer_output)
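Applied to the model from the question, the same pattern gives you exactly the 128-D vector you asked about: there, Dense(128) is model.layers[7] and the ReLU activation that follows it is model.layers[8]. A minimal sketch, assuming model is the trained model returned by CNN_model and x is a preprocessed image batch:

from keras import backend as K

# Output of the Dense(128) layer after its ReLU activation (layer index 8
# in the question's architecture); use index 7 for the pre-activation values.
get_fc_output = K.function([model.layers[0].input],
                           [model.layers[8].output])

fc_features = get_fc_output([x])[0]   # x: preprocessed batch of test images
print(fc_features.shape)              # (num_images, 128)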

Another useful step is visualizing the features. I bet a lot of people want to see what the computer "sees"; this will only illustrate the "Flatten" layer output (better said, the network):

def visualize_stock(img_data):
    plt.figure(1, figsize=(25, 25))
    stock = np.squeeze(img_data, axis=0)   # drop the batch dimension
    print(stock.shape)
    plt.imshow(stock)

And the magic:

visualize_stock(img_data)

feature map  Note: changed from input_shape=(530, 800, 3) to input_shape=(84, 800, 3) for better visibility in the post.
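If you also want to inspect the extracted feature vector itself rather than the input image, a quick sketch that plots the values of the vgg_feature array computed above:

# Plot the flattened feature vector, one value per dimension
plt.figure(figsize=(12, 3))
plt.plot(vgg_feature[0])
plt.title('Flatten layer output')
plt.show()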

P.S.: I decided to post this so that anyone with this type of question can benefit (I ran into the same type of problem recently).