How do I use the models in keras.applications for transfer learning?

Asked: 2016-12-29 11:07:29

Tags: python deep-learning keras

I want to take the pre-trained VGG16 model in Keras, remove its output layer, add a new output layer with the number of classes for my problem, and then fit it on new data. For that I am trying to use the model from https://keras.io/applications/#vgg16, but since it is not Sequential I cannot simply pop layers from it; popping from model.layers and adding a new layer does not work either, because predict still expects the old output shape. What should I do? Is there a way to convert this type of model to Sequential?

2 Answers:

Answer 0 (score: 44):

You can pop() layers off model.layers and then build a new layer on top of model.layers[-1].output.

Example:

from keras.models import Model
from keras.layers import Dense, Flatten
from keras.applications import vgg16

# Load VGG16 including its fully-connected classifier
model = vgg16.VGG16(weights='imagenet', include_top=True)

model.summary(line_length=150)

# pop() only removes entries from the layers list and does not rebuild the
# graph; that is fine, because below we only take the output tensor of the
# last remaining layer.
model.layers.pop()   # drop 'predictions'
model.layers.pop()   # drop 'fc2'

model.summary(line_length=150)

# New output layer sized for the new problem (10 classes here)
new_layer = Dense(10, activation='softmax', name='my_dense')

inp = model.input
out = new_layer(model.layers[-1].output)

model2 = Model(inp, out)
model2.summary(line_length=150)
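
Before fitting model2 on new data (which is what the question asks for), the pre-trained layers are usually frozen and the model compiled. Below is a minimal sketch of that step, assuming the model2 built above; the optimizer choice and the x_train / y_train arrays are placeholders and not part of the original answer:

import numpy as np

# Freeze every pre-trained layer so that only the new 'my_dense' layer
# is updated during training; this must happen before compiling.
for layer in model2.layers[:-1]:
    layer.trainable = False

model2.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])

# Placeholder data purely for illustration: 8 RGB images of 224x224
# and one-hot labels for 10 classes.
x_train = np.random.rand(8, 224, 224, 3)
y_train = np.eye(10)[np.random.randint(0, 10, size=8)]

model2.fit(x_train, y_train, epochs=1, batch_size=4)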

Alternatively, you can use the include_top=False option of these models. In that case you also have to pass input_shape if you want to attach a Flatten layer on top.

# Load only the convolutional base; with include_top=False an explicit
# input_shape is required so that the Flatten layer knows its output size.
model3 = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model3.summary(line_length=150)

flatten = Flatten()
new_layer2 = Dense(10, activation='softmax', name='my_dense_2')

inp2 = model3.input
out2 = new_layer2(flatten(model3.output))

model4 = Model(inp2, out2)
model4.summary(line_length=150)
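
Because model3 ends with the last pooling layer, it can also be used directly as a feature extractor before any new classifier is trained. A minimal sketch, assuming the model3 built above; the image_batch array is a placeholder, and the only Keras-specific piece is VGG16's own preprocess_input:

import numpy as np
from keras.applications.vgg16 import preprocess_input

# Placeholder batch of 4 images with 0-255 RGB values, purely for illustration.
image_batch = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype('float32')

# VGG16 expects its own preprocessing (mean subtraction, channel reordering).
features = model3.predict(preprocess_input(image_batch))
print(features.shape)   # (4, 7, 7, 512)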

Answer 1 (score: 3):

We can convert the VGG model to a Sequential model:

import keras
from keras.models import Sequential
from keras.layers import Dense

# Create the pre-trained VGG model
vgg_model = keras.applications.vgg16.VGG16(weights='imagenet')


# The created model is of type Model
type(vgg_model)
>> keras.engine.training.Model


# Convert it to Sequential by re-adding its layers one by one
model = Sequential()
for layer in vgg_model.layers:
    model.add(layer)


# Now check the model type: it's Sequential!
type(model)
>> keras.models.Sequential


# Verify the model details
model.summary()
>> 
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_15 (InputLayer)        (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________


# Now that the model is Sequential, we can use the usual Sequential operations.
# pop() removes the last layer (the old 1000-way 'predictions' layer).
model.pop()


# Freeze the pre-trained layers so that only the new classifier is trained
for layer in model.layers:
    layer.trainable = False


# Add a new 2-class 'softmax' layer in place of the earlier 'predictions' layer.
model.add(Dense(2, activation='softmax'))


# Check the summary; the new layer has been added.
model.summary()

>> 
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_15 (InputLayer)        (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
dense_4 (Dense)              (None, 2)                 2002      
=================================================================
Total params: 134,262,546
Trainable params: 2,002
Non-trainable params: 134,260,544
_________________________________________________________________
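
With the layers frozen and the new 2-class output added, the model still has to be compiled before training. A possible training setup is sketched below; it is not part of the original answer, and the 'data/train' directory, generator settings, and step counts are hypothetical:

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Hypothetical directory with one sub-folder per class (2 classes here);
# images are resized to the 224x224 input that VGG16 expects.
train_gen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_flow = train_gen.flow_from_directory('data/train',
                                           target_size=(224, 224),
                                           batch_size=16,
                                           class_mode='categorical')

model.fit_generator(train_flow, steps_per_epoch=100, epochs=3)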