I am trying to fine-tune an existing model in Keras to classify my own dataset. So far I have tried the following code (taken from the Keras docs: https://keras.io/applications/), in which Inception V3 is fine-tuned on a new set of classes.
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False
# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# train the model on the new data for a few epochs
model.fit_generator(...)
# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from inception V3. We will freeze the bottom N layers
# and train the remaining top layers.
# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)
# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 172 layers and unfreeze the rest:
for layer in model.layers[:172]:
    layer.trainable = False
for layer in model.layers[172:]:
    layer.trainable = True
# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')
# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(...)
Can anyone guide me on what changes I should make to the code above in order to fine-tune a ResNet50 model in Keras?
Thanks in advance.
Answer 0 (score: 4)
It's hard to make out a specific question here. Have you tried anything beyond just copying the code without any changes?

That said, there are a number of problems in the code: it is a straight copy/paste from keras.io, it is not functional as-is, and it needs some adaptation before it will work at all (regardless of whether you use ResNet50 or InceptionV3):

1) You need to define the input_shape when loading InceptionV3; specifically, replace base_model = InceptionV3(weights='imagenet', include_top=False) with base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299,299,3))

2) You also need to adapt the number of classes in the last added layer, e.g. if you have only 2 classes: predictions = Dense(2, activation='softmax')(x)

3) Change the loss when compiling the model from categorical_crossentropy to sparse_categorical_crossentropy

4) Most importantly, you need to define a generator before calling model.fit_generator() and add steps_per_epoch. If your training images are in ./data/train with each class in a different subfolder, this can be done e.g. like this:
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
"./data/train",
target_size=(299, 299),
batch_size=50,
class_mode='binary')
model.fit_generator(train_generator, steps_per_epoch=100)
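A side note on the snippet above: class_mode='binary' yields 0/1 labels and therefore only fits a two-class setup. If you have more than two classes and compile with sparse_categorical_crossentropy as in point 3, class_mode='sparse' (integer labels) is the matching choice. A minimal sketch under that assumption, reusing the same hypothetical ./data/train layout:
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    "./data/train",            # one subfolder per class
    target_size=(299, 299),
    batch_size=50,
    class_mode='sparse')       # integer class labels, matching sparse_categorical_crossentropy
model.fit_generator(train_generator, steps_per_epoch=100)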
This of course only does basic training; you will, for example, need to define save calls to hold on to the trained weights. Only once you have the code working for InceptionV3 with the changes above would I suggest moving on to implementing this for ResNet50: as a start you can replace InceptionV3() with ResNet50() (of course only after from keras.applications.resnet50 import ResNet50), and change the input_shape to (224,224,3) and the target_size to (224,224).
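To make that substitution concrete, here is a minimal sketch of the first training stage for ResNet50 with the changes from points 1-4 folded in; the number of classes, the directory layout and steps_per_epoch are assumptions for illustration:
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# ResNet50 with ImageNet weights expects 224x224 RGB inputs
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # assuming 2 classes

model = Model(inputs=base_model.input, outputs=predictions)

# first stage: freeze the convolutional base and train only the new top layers
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy')

train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    "./data/train",            # assumed layout: one subfolder per class
    target_size=(224, 224),    # 224 instead of 299
    batch_size=50,
    class_mode='binary')

model.fit_generator(train_generator, steps_per_epoch=100)
# for the second stage you would unfreeze part of the base model and recompile
# with a low-learning-rate SGD, exactly as in the InceptionV3 example above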
The code changes described above should run on Python 3.5.3 / Keras 2.0 with the TensorFlow backend.
Answer 1 (score: 0)
In addition to the important points mentioned in the answer above, for ResNet50 (if your image shape is similar to the (224, 224) shape the original Keras code was written for, i.e. not rectangular) you can replace:
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
with:
from keras.layers import Flatten
x = base_model.output
x = Flatten()(x)
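Note that Flatten only works if the spatial dimensions of the feature map are known, so the base model has to be built with a fixed input_shape. A minimal sketch of the Flatten-based head, assuming ResNet50 at 224x224 and 2 classes:
from keras.applications.resnet50 import ResNet50
from keras.layers import Flatten, Dense
from keras.models import Model

# Flatten needs a fully defined feature-map shape, hence the explicit input_shape
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

x = base_model.output
x = Flatten()(x)                                  # flatten the final convolutional feature map
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)   # assumed 2 classes

model = Model(inputs=base_model.input, outputs=predictions)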
EDIT: Please read @Yu-Yang's comment below.
Answer 2 (score: 0)
I think I ran into the same problem. It seems to be a complicated issue, and there is a decent thread about it on GitHub (https://github.com/keras-team/keras/issues/9214). The problem is with batch normalization in the unfrozen blocks of the network. You have two solutions:
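One workaround discussed in that GitHub thread, shown here only as an illustrative sketch of the batch-normalization issue and not necessarily one of the two solutions the author had in mind, is to keep the BatchNormalization layers frozen even inside the blocks you otherwise unfreeze, so their moving mean/variance statistics are not updated on the small new dataset:
from keras.layers import BatchNormalization

# unfreeze the top of the network (172 is the cutoff used in the Inception example)
# but keep every BatchNormalization layer frozen
for layer in model.layers[172:]:
    layer.trainable = not isinstance(layer, BatchNormalization)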