I am trying to classify MNIST digits with VGG16 using Keras. The error produced is:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-17-fd172601765f> in <module>()
1 # Train the the model
----> 2 history=model.fit(train_features, train_labels, batch_size=128, epochs=100,callbacks=callback, verbose=0, validation_split=0.2)
~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
~\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the `sample_weight` and
~\Anaconda3\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
126 ': expected ' + names[i] + ' to have ' +
127 str(len(shape)) + ' dimensions, but got array '
--> 128 'with shape ' + str(data_shape))
129 if not check_batch_axis:
130 data_shape = data_shape[1:]
ValueError: Error when checking target: expected block5_pool to have 4 dimensions, but got array with shape (60000, 10)
Here is the code with all of the preprocessing and resizing I have done. To turn the 28x28 single-channel images into 48x48 three-channel images, I simply stacked each image three times along the channel axis and then resized it. Since I am new to this field, I cannot figure out where I went wrong.
train_features=np.stack([train_features]*3,axis = -1)
test_features=np.stack([test_features]*3,axis = -1)
# Reshape images as per the tensor format required by tensorflow
train_features = train_features.reshape(-1, 28,28,3)
test_features = test_features.reshape(-1,28,28,3)
# Resize the images to 48x48 as required by VGG16
from keras.preprocessing.image import img_to_array, array_to_img
train_features = np.asarray([img_to_array(array_to_img(im, scale=False).resize((48,48))) for im in train_features])
test_features = np.asarray([img_to_array(array_to_img(im, scale=False).resize((48,48))) for im in test_features])
train_features.shape, test_features.shape
#normalising the training and testing features
train_features = train_features.astype('float32')
test_features = test_features.astype('float32')
train_features /= 255
test_features /= 255
# Converting Labels to one hot encoded format
test_labels = to_categorical(test_labels,10)
train_labels = to_categorical(train_labels,10)
# Preprocessing the input
train_features = preprocess_input(train_features)
test_features = preprocess_input(test_features)
model = VGG16(weights=None, include_top=False)
input = Input(shape=(48,48,3),name = 'image_input')
#Use the generated model
output = model(input)
#Add the fully-connected layers
x = Flatten(name='flatten')(output)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(10, activation='softmax', name='predictions')(x)
#Create your own model
vgg16_model = Model(input=input, output=x)
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
history=model.fit(train_features, train_labels, batch_size=128, epochs=100,callbacks=callback, verbose=0, validation_split=0.2)
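For reference, the shape mismatch the traceback reports can be inspected directly; a minimal sketch, assuming the `model`, `vgg16_model`, and `train_labels` defined above:

# The model being compiled and fit above is the bare VGG16 base (include_top=False),
# so its last layer is block5_pool and it outputs a 4-D feature map, while the
# one-hot labels are a 2-D array; this is the mismatch the ValueError describes.
print(model.output_shape)        # (None, None, None, 512)
print(vgg16_model.output_shape)  # (None, 10)
print(train_labels.shape)        # (60000, 10)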
The model summary is as follows:
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, None, None, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, None, None, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, None, None, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, None, None, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, None, None, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, None, None, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, None, None, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, None, None, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, None, None, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, None, None, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, None, None, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
image_input (InputLayer) (None, 48, 48, 3) 0
_________________________________________________________________
vgg16 (Model) multiple 14714688
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 2101248
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
predictions (Dense) (None, 10) 40970
=================================================================
Total params: 33,638,218
Trainable params: 33,638,218
Non-trainable params: 0
_________________________________________________________________
Any help with this would be greatly appreciated.
Answer 0 (score: 0)
Keras is complaining about the target because the model's output shape is wrong: the model being trained has no classification (Dense) layers. Try something like the following:
from keras.models import Sequential
model = Sequential()
# Wrap the VGG16 base, then add a Flatten plus the classification (Dense) layers
model.add(VGG16(weights=None, include_top=False, input_shape=(48,48,3)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
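A minimal usage sketch for the compiled model, assuming the preprocessed `train_features` and one-hot `train_labels` from the question (the epoch count here is arbitrary):

history = model.fit(train_features, train_labels,
                    batch_size=128, epochs=10,
                    validation_split=0.2, verbose=1)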