Error when predicting an image with a Keras .hdf5 model

Asked: 2020-03-18 01:10:12

Tags: python tensorflow keras deep-learning siamese-network

I'm trying to predict an image with a Siamese neural network; my model is saved in .hdf5 format. First I load the image I want to predict, then I load the model weights, and finally I call .predict on the image. This is the code I tried:

img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG')
siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")
siamese_model1.predict(img)

and I got this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-65-789026f30db8> in <module>
      1 img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG')
      2 siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")
----> 3 siamese_model1.predict(img)

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1439 
   1440         # Case 2: Symbolic tensors or Numpy array-like.
-> 1441         x, _, _ = self._standardize_user_data(x)
   1442         if self.stateful:
   1443             if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    577             feed_input_shapes,
    578             check_batch_axis=False,  # Don't enforce the batch size.
--> 579             exception_prefix='input')
    580 
    581         if y is not None:

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    107                 'Expected to see ' + str(len(names)) + ' array(s), '
    108                 'but instead got the following list of ' +
--> 109                 str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
    110         elif len(names) > 1:
    111             raise ValueError(

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
...

How can I fix this? Is there any way to solve it?

The model summary is:

Model: "model_2"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 105, 105, 1)  0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 105, 105, 1)  0                                            
__________________________________________________________________________________________________
model_1 (Model)                 (None, 4096)         38947648    input_1[0][0]                    
                                                                 input_2[0][0]                    
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 4096)         0           model_1[1][0]                    
                                                                 model_1[2][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            4097        lambda_1[0][0]                   
==================================================================================================
Total params: 38,951,745
Trainable params: 38,951,745
Non-trainable params: 0
__________________________________________________________________________________________________

and the Siamese network is built like this:

# Siamese Network
def build_network(conv_model):
    # Build two networks
    input_shape = (105, 105, 1)
    input1 = Input(input_shape)
    input2 = Input(input_shape)

    model = conv_model(input_shape)

    model_output_left = model(input1)
    model_output_right = model(input2)

    def l1_distance(x): 
        return K.abs(x[0] - x[1])

    def l1_distance_shape(x): 
        print(x)
        return x[0]
    merged_model = keras.layers.Lambda(l1_distance)([model_output_left, model_output_right])
    #merged_model = merge([model_output_left, model_output_right], mode=l1_distance, output_shape=l1_distance_shape)
    output = Dense(1, activation='sigmoid')(merged_model)
    siamese_model = Model([input1, input2], output)
    return siamese_model
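
For context, here is a minimal usage sketch of this builder. The convolutional backbone below is hypothetical: the question never shows conv_model, so the placeholder only matches the shapes known from the summary ((105, 105, 1) input, 4096-dimensional embedding).

# Imports needed by build_network and by this sketch.
import keras
from keras import backend as K
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model

# Placeholder backbone; the real conv_model is not shown in the question.
def conv_model(input_shape):
    inp = Input(input_shape)
    x = Conv2D(64, (10, 10), activation='relu')(inp)
    x = MaxPooling2D()(x)
    x = Flatten()(x)
    x = Dense(4096, activation='sigmoid')(x)  # 4096-dim embedding, as in the summary
    return Model(inp, x)

siamese_model1 = build_network(conv_model)
siamese_model1.summary()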

1 Answer:

Answer 0 (score: 0)

My guess is that the shape of your input doesn't match what the model expects. Re-check the model input by running img.shape, and make sure your image has shape (105, 105, 1), which is what the model's input layers expect.

In addition, since siamese_model.predict() takes batched input, an input of shape (105, 105, 1) is not compatible on its own. Make sure to reshape the image to (1, 105, 105, 1), which is equivalent to predicting with a batch size of 1.

TL;DR: run the following code: img = img.reshape(1, 105, 105, 1)
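
For concreteness, a minimal preprocessing sketch along those lines (the path is the one from the question; the grayscale load and resize are assumptions, needed because cv2.imread otherwise returns a 3-channel image at the file's native size). Note also that the traceback and the model summary show two inputs (input_1 and input_2), so predict expects a list of two arrays:

import cv2

# Load as a single channel so the array can match the (105, 105, 1) input.
img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG',
                 cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (105, 105))                     # in case it is not already 105x105
img = img.reshape(1, 105, 105, 1).astype('float32')   # add the batch dimension

siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")

# The Siamese model takes a pair of images; the same image is passed twice
# here purely for illustration.
prediction = siamese_model1.predict([img, img])
print(prediction)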