ValueError: Input 0 of layer sequential_1 is incompatible with the layer

Posted: 2020-11-07 16:07:19

Tags: python validation machine-learning keras prediction

I wrote the following model in Keras, but I get a ValueError when making a prediction. I looked at other questions on StackOverflow, but could not relate them directly to my code.

My training code is:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

# Build the CNN model
cnn = Sequential()

kernelSize = (3, 3)
ip_activation = 'relu'
ip_conv_0 = Conv2D(filters=32, kernel_size=kernelSize, input_shape=im_shape, activation=ip_activation)
cnn.add(ip_conv_0)

# Add the next Convolutional+Activation layer
ip_conv_0_1 = Conv2D(filters=64, kernel_size=kernelSize, activation='relu')
cnn.add(ip_conv_0_1)
# Add the Pooling layer
pool_0 = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding="same")
cnn.add(pool_0)

ip_conv_1 = Conv2D(filters=64, kernel_size=kernelSize, activation='relu')
cnn.add(ip_conv_1)
ip_conv_1_1 = Conv2D(filters=64, kernel_size=kernelSize, activation='relu')
cnn.add(ip_conv_1_1)
pool_1 = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding="same")
cnn.add(pool_1)

# Let's deactivate around 20% of neurons randomly for training
drop_layer_0 = Dropout(0.2)
cnn.add(drop_layer_0)


flat_layer_0 = Flatten()
cnn.add(flat_layer_0)

# Now add the Dense layers
h_dense_0 = Dense(units=128, activation='relu', kernel_initializer='uniform')
cnn.add(h_dense_0)
# Let's add one more before proceeding to the output layer
h_dense_1 = Dense(units=64, activation='relu', kernel_initializer='uniform')
cnn.add(h_dense_1)

op_activation = 'softmax'
output_layer = Dense(units=n_classes, activation=op_activation, kernel_initializer='uniform')
cnn.add(output_layer)

opt = 'adam'
loss = 'categorical_crossentropy'
metrics = ['accuracy']
# Compile the classifier using the configuration we want
cnn.compile(optimizer=opt, loss=loss, metrics=metrics)

cnn_summary = cnn.summary()

history = cnn.fit(x_train, y_train,
                  batch_size=40, epochs=20,
                  validation_data=(x_test, y_test)
                  )
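
(The prediction script below loads the model from trained_model.h5, so the training script presumably ends by saving it; the exact call is not shown in the question and is assumed here.)

# Presumed saving step (not shown in the question): write the trained model to disk
# so the separate prediction script can call load_model('trained_model.h5')
cnn.save('trained_model.h5')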

I tried to make a prediction in another .py file with the following code:

import numpy as np
from keras.preprocessing import image 

from keras.models import load_model
model=load_model('trained_model.h5')

test_image = image.load_img('131.png', target_size=(32,32))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
pre = model.predict(test_image)

The problem is that I get the following ValueError:

ValueError: Input 0 of layer sequential_1 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape [None, 32, 32, 3]

Can anyone help me resolve this error?

1 Answer:

Answer 0: (score: 0)

It basically says that your first layer expects input of shape (32, 32, 1):

ip_conv_0 = Conv2D(filters=32, kernel_size=kernelSize, input_shape=im_shape, activation=ip_activation)

so here im_shape=(32, 32, 1), but at prediction time the layer receives a 3-channel image of shape (32, 32, 3).

I think you trained the network on grayscale images and are now trying to run inference on a color (RGB) image, which does not fit the model you built. What you can do is either train the model on images of shape (32, 32, 3), which I assume is not an option, or convert the RGB (color) image to grayscale so that it has shape (32, 32, 1); then the model can run inference on it.
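
For example, a minimal sketch of the grayscale route (assuming the model was saved as trained_model.h5 and 131.png is the image you want to classify, as in the question):

import numpy as np
from keras.preprocessing import image
from keras.models import load_model

model = load_model('trained_model.h5')

# Load the image as a single grayscale channel so it matches im_shape=(32, 32, 1)
test_image = image.load_img('131.png', target_size=(32, 32), color_mode='grayscale')
test_image = image.img_to_array(test_image)      # shape: (32, 32, 1)
test_image = np.expand_dims(test_image, axis=0)  # shape: (1, 32, 32, 1)

pre = model.predict(test_image)

If your training data was also rescaled (for example divided by 255), apply the same preprocessing to test_image before calling predict.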