I want to use pre-trained models such as Xception, VGG16, and ResNet50 in my deep-learning image-recognition project, so that I can quickly train a model on my training set with high accuracy. I have not been able to find the exact code to implement my model. First, as the VGG16 model requires, I changed the input shape of my training data from (256, 256, 3) to (224, 224, 3). I am working in the Keras programming environment. My model code is as follows:
train_x = np.expand_dims(train_X, axis=2)
train_y = np.expand_dims(train_Y, axis=2)
print(train_X.shape) # output - (670, 224, 224, 3)
print(train_Y.shape) # output - (670, 224, 224, 1)
print(train_x.shape) # output - (670, 224, 1, 224, 3)
print(train_y.shape) # output - (670, 224, 1, 224, 1)
def vgg16_(IMG_WIDTH=224,IMG_HEIGHT=224,IMG_CHANNELS=3):
    inputs = Input(shape=(len(train_x[0]), 1))
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    x = Flatten()(x)
    x = Dropout(0.2)(x)
    x = Dense(100, activation='tanh')(x)
    x = Reshape([len(train_x[0]), 1])(x)
    model = Model(inputs, reshape)
    model.compile(loss='mse', optimizer='rmsprop')
    return model
Unfortunately, when I fit this model on the training data I get this error: ValueError: Input 0 is incompatible with layer block1_conv1: expected ndim=4, found ndim=3. What should I do to get the correct output?
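For reference, here is a minimal check of what my current Input definition produces (a sketch using the standalone keras package; the 224 is taken from the shapes printed above):

from keras.layers import Input
from keras import backend as K

# same per-sample shape as Input(shape=(len(train_x[0]), 1))
probe = Input(shape=(224, 1))
print(K.int_shape(probe))  # (None, 224, 1) -- a 3-D tensor, while Conv2D expects 4-D batches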
In addition, I tried running the following code, changing only the output layer. I get this error: ValueError: Error when checking target: expected predictions to have 2 dimensions, but got array with shape (670, 224, 224, 1).
model_vgg16_conv = VGG16(input_shape=(IMG_WIDTH,IMG_HEIGHT,3),weights='imagenet', include_top=False,pooling=max)
model_vgg16_conv.summary()
#print("ss")
#Create your own input format
input = Input(shape=(IMG_WIDTH,IMG_HEIGHT,3),name = 'image_input')
#print("ss2")
#Use the generated model
output_vgg16_conv = model_vgg16_conv(input)
print("ss3")
#Add the fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(512, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
x = Dense(1, activation='sigmoid', name='predictions')(x)
#Create your own model
my_model = Model(input=input, output=x)
#In the summary, weights and layers from VGG part will be hidden, but they will be fit during the training
my_model.summary()
my_model.compile(loss='categorical_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy'])
I am stuck at this point. Can anyone help me? Thank you very much in advance.
Answer 0 (score: 0)
I think your input layer definition is wrong. It should be:
inputs = Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))
Your inputs are images of size (224, 224, 3), yet you set the input layer's shape to (len(train_x[0]), 1).
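For illustration, a minimal sketch of that fix applied to the top of the vgg16_ function (only the input line changes; the rest is as in the question):

def vgg16_(IMG_WIDTH=224, IMG_HEIGHT=224, IMG_CHANNELS=3):
    # shape is the per-sample shape; Keras adds the batch dimension itself
    inputs = Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
    # ... remaining blocks and dense head unchanged ...

The model would then presumably be fit on train_X directly, which already has the expected (670, 224, 224, 3) shape, rather than on the expand_dims version train_x.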
Answer 1 (score: 0)
It looks like an error in the shape of the Input tensor:

inputs = Input(shape=(len(train_x[0]), 1))

len(train_x[0]) will be 224, because len only takes the size along the first axis. Instead, it should be:

inputs = Input(shape=train_x[0].shape)
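A quick illustration of the difference between len and .shape, using a dummy array as a stand-in for one image:

import numpy as np

sample = np.zeros((224, 224, 3))  # stand-in for a single image
print(len(sample))                # 224 -- only the size along the first axis
print(sample.shape)               # (224, 224, 3) -- the full per-sample shape to pass to Input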