InvalidArgumentError: Incompatible shapes: [15,3] vs. [100,3]

Time: 2021-03-13 10:35:53

Tags: machine-learning keras deep-learning neural-network conv-neural-network

I have a dataset of over 4,000 images in 3 classes. I am reusing capsule neural network code written for 10 classes, which I modified for 3 classes. When I run the model, I get the following error near the end of the first epoch (batch 44/45):


Training output:

   Epoch 1/16
   44/45 [============================>.] - ETA: 28s - loss: 0.2304 - capsnet_loss: 0.2303 - decoder_loss: 0.2104 - capsnet_accuracy: 0.6598 - decoder_accuracy: 0.5781
    InvalidArgumentError:  Incompatible shapes: [15,3] vs. [100,3]
         [[node gradient_tape/margin_loss/mul/Mul (defined at <ipython-input-22-9d913bd0e1fd>:11) ]] [Op:__inference_train_function_6157]

Function call stack:
train_function

The model is:

m = 100
epochs = 16
# Use EarlyStopping: stop training when val_capsnet_accuracy has not improved for 2 consecutive epochs
early_stopping = keras.callbacks.EarlyStopping(monitor='val_capsnet_accuracy', mode='max',
                                    patience=2, restore_best_weights=True)

# Use ReduceLROnPlateau: halve the learning rate when val_capsnet_accuracy has not improved for 4 consecutive epochs
lr_scheduler = keras.callbacks.ReduceLROnPlateau(monitor='val_capsnet_accuracy', mode='max', factor=0.5, patience=4)
train_model.compile(optimizer=keras.optimizers.Adam(lr=0.001),loss=[margin_loss,'mse'],loss_weights = [1. ,0.0005],metrics=['accuracy'])
train_model.fit([x_train, y_train],[y_train,x_train], batch_size = m, epochs = epochs, validation_data = ([x_test, y_test],[y_test,x_test]),callbacks=[early_stopping,lr_scheduler])

Input layer, convolutional layers, and primary capsules:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(100, 28, 28, 1)]   0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (100, 27, 27, 256)   1280        input_1[0][0]                    
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (100, 27, 27, 256)   0           conv2d[0][0]                     
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (100, 19, 19, 128)   2654336     max_pooling2d[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (100, 6, 6, 128)     1327232     conv2d_1[0][0]                   
__________________________________________________________________________________________________
reshape (Reshape)               (100, 576, 8)        0           conv2d_2[0][0]                   
__________________________________________________________________________________________________
lambda (Lambda)                 (100, 576, 8)        0           reshape[0][0]                    
__________________________________________________________________________________________________
digitcaps (CapsuleLayer)        (100, 3, 16)         221184      lambda[0][0]                     
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 3)]          0                                            
__________________________________________________________________________________________________
mask (Mask)                     (100, 48)            0           digitcaps[0][0]                  
                                                                 input_2[0][0]                    
__________________________________________________________________________________________________
capsnet (Length)                (100, 3)             0           digitcaps[0][0]                  
__________________________________________________________________________________________________
decoder (Sequential)            (None, 28, 28, 1)    1354000     mask[0][0]                       
==================================================================================================
Total params: 5,558,032
Trainable params: 5,558,032
Non-trainable params: 0  

Code source

x_train.shape --> (4415, 28, 28, 1)

y_train.shape --> (4415, 3)

x_test.shape --> (1104, 28, 28, 1)

y_test.shape --> (1104, 3)
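The mismatched shapes in the error follow directly from these sizes: the model's Input layer hard-codes a batch size of 100, but 4415 training samples do not divide evenly by 100, so the final batch of each epoch holds only the remainder. A quick check in plain Python (no assumptions beyond the shapes listed above):

```python
# The model is built with a fixed batch size of 100 (see the Input layer
# shape (100, 28, 28, 1)), but 4415 training samples do not divide evenly.
batch_size = 100
n_train = 4415

full_batches, remainder = divmod(n_train, batch_size)
print(full_batches)  # 44 full batches (matching the "44/45" in the log)
print(remainder)     # 15 leftover samples -> the [15,3] in the error
```

The same issue applies to the validation set: 1104 leaves a remainder of 4 when divided by 100.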

My code here

1 Answer:

Answer 0: (score: 0)

Try shaping your X set so that the batch size divides the number of samples evenly. With 4415 samples and a batch size of 100, the last batch holds only the 15 leftover samples, which clashes with the batch size of 100 hard-coded into the model.

For example: trim the dataset to a multiple of 100.
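A minimal sketch of the suggested fix, assuming NumPy arrays named as in the question; the helper `trim_to_batch` is hypothetical (not from the original code) and simply drops the trailing samples so every batch is exactly `batch_size` long:

```python
import numpy as np

def trim_to_batch(x, y, batch_size=100):
    """Drop trailing samples so len(x) is an exact multiple of batch_size."""
    n = (len(x) // batch_size) * batch_size
    return x[:n], y[:n]

# Illustration with dummy data shaped like the question's dataset.
x_train = np.zeros((4415, 28, 28, 1), dtype=np.float32)
y_train = np.zeros((4415, 3), dtype=np.float32)

x_train, y_train = trim_to_batch(x_train, y_train, batch_size=100)
print(x_train.shape)  # (4400, 28, 28, 1)
print(y_train.shape)  # (4400, 3)
```

The same trim would apply to `x_test`/`y_test` (1104 → 1100). Alternatively, feeding the data through a `tf.data` pipeline with `batch(100, drop_remainder=True)` discards the partial batch at batching time instead of up front.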