How to extract feature vectors from a fine-tuned network in Keras

Time: 2017-07-03 02:45:02

Tags: python tensorflow deep-learning keras

I am trying to extract feature vectors from the added Dense layer after fine-tuning the Inception V3 CNN in Keras on new data. Basically, I load the network architecture and its weights, add two Dense layers (my data is a two-class problem) and update the weights of only some parts of the network, as shown in the code below:

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# let's add a fully-connected layer
x = Dense(64, activation='relu')(x)

# and a logistic layer -- I have 2 classes only
predictions = Dense(2, activation='softmax')(x)

# this is the model to train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers

for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

#load new training data
x_train, x_test, y_train, y_test = load_data(train_data, test_data, train_labels, test_labels)

datagen = ImageDataGenerator()      
datagen.fit(x_train)

epochs=1
batch_size=32

# train the model on the new data for a few epochs
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))

# at this point, the top layers are well trained and 
#I can start fine-tuning convolutional layers from inception V3. 
#I will freeze the bottom N layers and train the remaining top layers. 
#I chose to train the top 2 inception blocks, i.e. I will freeze the 
#first 249 layers and unfreeze the rest:

for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# I need to recompile the model for these modifications to take effect
# I use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['binary_accuracy'])

# I train our model again (this time fine-tuning the top 2 inception blocks alongside the top Dense layers)
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))

This code runs fine; that is not my problem.

My problem is that, after fine-tuning this network, I want the output of the last added layer for both the training and the test data, because I want to use this new network as a feature extractor. I want the output of the part of the network that you can see in the code above:

x = Dense(64, activation='relu')(x)

I tried the following code, but it does not work:

 from keras import backend as K
 inputs = [K.learning_phase()] + model.inputs
 _convout1_f = K.function(inputs, model.get_layer(dense_1).output)

The error is the following:

 _convout1_f = K.function(inputs, model.get_layer(dense_1).output)
 NameError: global name 'dense_1' is not defined

How can I extract features from the newly added layers after fine-tuning the pre-trained network on my new data? What am I doing wrong here?
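For reference, model.get_layer expects the layer's name as a string (or an index via its index argument), so the NameError above comes from dense_1 being used as an undefined Python variable instead of the string 'dense_1'. A minimal sketch of the string form, assuming Keras auto-named the added 64-unit layer 'dense_1' (check layer.name or model.summary() to be sure):

from keras import backend as K

# 'dense_1' is the name Keras usually auto-generates for the first added
# Dense layer; verify it with model.summary() before relying on it
inputs = [K.learning_phase()] + model.inputs
_convout1_f = K.function(inputs, [model.get_layer('dense_1').output])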

1 Answer:

Answer 0: (score: 1)

I solved my own problem. I hope it works for you too.

First, the K.function to extract the features is this one:

_convout1_f = K.function([model.layers[0].input, K.learning_phase()],[model.layers[312].output])

where 312 is the index of the layer from which I want to extract the features.
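The index 312 is specific to this fine-tuned InceptionV3 model, so one way to double-check which index (or name) corresponds to the added Dense(64) layer is to enumerate model.layers; a small sketch, assuming the model built in the question:

# print every layer's index and name; the added Dense(64) layer should be
# near the end, just before the final 2-unit softmax layer
for i, layer in enumerate(model.layers):
    print(i, layer.name)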

Then I pass this _convout1_f as a parameter to a function like this:

    features_train, features_test=feature_vectors_generator(x_train,x_test,_convout1_f)

The function that extracts those features looks like this:

import numpy

def feature_vectors_generator(x_train, x_test, _convout1_f):
    # Note: Python 2 code (xrange, integer division with /)

    print('Generating Training Feature Vectors...')

    batch_size = 100
    index = 0
    if x_train.shape[0] % batch_size == 0:
        max_iterations = x_train.shape[0] / batch_size
    else:
        max_iterations = (x_train.shape[0] / batch_size) + 1

    for i in xrange(0, max_iterations):

        if i == 0:
            # learning phase 1 = training mode
            features = _convout1_f([x_train[index:batch_size], 1])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_train = features
        else:
            if i == max_iterations - 1:
                # last (possibly smaller) batch
                features = _convout1_f([x_train[index:x_train.shape[0], :], 1])[0]
                features = numpy.squeeze(features)
                features_train = numpy.append(features_train, features, axis=0)
            else:
                features = _convout1_f([x_train[index:index + batch_size, :], 1])[0]
                index = index + batch_size
                features = numpy.squeeze(features)
                features_train = numpy.append(features_train, features, axis=0)

    print('Generating Testing Feature Vectors...')

    batch_size = 100
    index = 0
    if x_test.shape[0] % batch_size == 0:
        max_iterations = x_test.shape[0] / batch_size
    else:
        max_iterations = (x_test.shape[0] / batch_size) + 1

    for i in xrange(0, max_iterations):

        if i == 0:
            # learning phase 0 = test mode
            features = _convout1_f([x_test[index:batch_size], 0])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_test = features
        else:
            if i == max_iterations - 1:
                # last (possibly smaller) batch
                features = _convout1_f([x_test[index:x_test.shape[0], :], 0])[0]
                features = numpy.squeeze(features)
                features_test = numpy.append(features_test, features, axis=0)
            else:
                features = _convout1_f([x_test[index:index + batch_size, :], 0])[0]
                index = index + batch_size
                features = numpy.squeeze(features)
                features_test = numpy.append(features_test, features, axis=0)

    return (features_train, features_test)
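As a side note, an alternative sketch that avoids the manual batching above would be to build a second Model truncated at the added Dense layer and call predict on it. This assumes the fine-tuned model from the question and that the Dense(64) layer sits at index 312; also note that predict always runs in test mode (learning phase 0), whereas the code above uses phase 1 for the training data:

from keras.models import Model

# hypothetical feature-extractor model that ends at the added Dense(64) layer
feature_model = Model(inputs=model.input, outputs=model.layers[312].output)

# predict handles the batching internally; each row is a 64-dimensional feature vector
features_train = feature_model.predict(x_train, batch_size=100)
features_test = feature_model.predict(x_test, batch_size=100)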