I am trying to do transfer learning for an image classification task with 13 classes, using the pretrained VGG16 model and retraining its last 4 layers.
I am also using Keras' ImageDataGenerator, as described here.
With this approach I cannot figure out how to use the vgg16 preprocess_input method (from keras.applications.vgg16 import preprocess_input) together with the ImageDataGenerator.
Whenever I run the code, I get an error message saying 'JpegImageFile' object is not subscriptable.
from keras.applications import VGG16
from keras import layers
from keras import optimizers
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import ImageDataGenerator
train_dir = ''
validation_dir = ''
vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False

for layer in vgg_conv.layers:
    print(layer, layer.trainable)
model = Sequential()
# Add the vgg convolutional base model
model.add(vgg_conv)
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(13, activation='softmax'))
model.summary()
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    preprocessing_function=preprocess_input)
validation_datagen = ImageDataGenerator(rescale=1. / 255)
train_batchsize = 100
val_batchsize = 20
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(224, 224),
    batch_size=train_batchsize,
    class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(224, 224),
    batch_size=val_batchsize,
    class_mode='categorical',
    shuffle=False)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=550,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=430)
model.save('small_last4.h5')
As suggested in various places, I have also tried a custom preprocessing function. That did not work either.
import numpy as np

vgg_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((3, 1, 1))

def vgg_preprocess(x):
    """
    Subtracts the mean RGB value, and transposes RGB to BGR.
    The mean RGB was computed on the image set used to train the VGG model.
    Args:
        x: Image array (height x width x channels)
    Returns:
        Image array (height x width x transposed_channels)
    """
    x = x - vgg_mean
    return x[:, ::-1]  # reverse axis rgb->bgr
Interestingly, this problem only occurs with Keras 2.1.5; in 2.1.4 it works fine. The drawback I ran into when downgrading Keras is that my training time increased dramatically.
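For reference, a minimal channels-last sketch of the same mean subtraction, written defensively so it also accepts a PIL image; the img_to_array guard, the function name, and the (height x width x 3) layout are my assumptions, not part of the original question:

import numpy as np
from keras.preprocessing.image import img_to_array

vgg_mean_rgb = np.array([123.68, 116.779, 103.939], dtype=np.float32)  # RGB means, shape (3,)

def vgg_preprocess_channels_last(x):
    # Guard against receiving a PIL image instead of a numpy array
    # (the "'JpegImageFile' object is not subscriptable" case).
    if not isinstance(x, np.ndarray):
        x = img_to_array(x)
    x = x.astype(np.float32) - vgg_mean_rgb  # broadcasts over the channel axis
    return x[..., ::-1]  # RGB -> BGR on the last axis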
Answer 0 (score: 0)
You can add a Lambda layer before adding vgg_conv, like this:
from keras.applications.inception_v3 import preprocess_input
from keras.layers import Lambda

model = Sequential()
model.add(Lambda(preprocess_input, name='preprocessing', input_shape=(224, 224, 3)))
model.add(vgg_conv)
...
Unfortunately, using preprocess_input from keras.applications.vgg16 did not seem to work for me, but you can try importing it from inception_v3. Hopefully we are talking about the same preprocessing, but I am not entirely sure.
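In case it helps, a minimal sketch of the generator side under that approach; the assumption here (not stated in the answer) is that, with the Lambda layer doing the preprocessing, the ImageDataGenerator should no longer rescale or call preprocess_input itself:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')  # no rescale and no preprocessing_function here
validation_datagen = ImageDataGenerator()  # likewise, no rescale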