Keras shape mismatch on UpSampling with variable-sized images

Posted: 2019-08-13 04:24:43

Tags: python keras neural-network autoencoder

I'm working on a convolutional neural network to process variable-sized images:

Essentially this same question (following the same tutorial: Keras autoencoder), except that the input size is variable and I can't use 'padding' or 'cropping'.
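For reference, here is a sketch of roughly what the model looks like, reconstructed from the summary below rather than copied from my exact code; the layer names and filter counts come from the summary, while the activations, padding choices, and the compile call are assumptions based on the tutorial:

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

input_img = Input(shape=(None, None, 3))  # variable-sized RGB input

# Encoder
x = Conv2D(16, (3, 3), activation='relu', padding='same', name='FirstConv')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# Decoder
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)  # default 'valid' padding here, as in the tutorial's decoder
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same', name='LastConv')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')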

This is what my model summary looks like; if I pass an image of dimensions 1382x1439 through it, I get the error shown right after the summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, None, None, 3)     0         
_________________________________________________________________
FirstConv (Conv2D)           (None, None, None, 16)    448       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, None, None, 16)    0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, None, None, 8)     1160      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, None, None, 8)     0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, None, None, 8)     584       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, None, None, 8)     0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, None, None, 8)     584       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, None, None, 8)     0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, None, None, 8)     584       
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, None, None, 8)     0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, None, None, 16)    1168      
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, None, None, 16)    0         
_________________________________________________________________
LastConv (Conv2D)            (None, None, None, 3)     435       
_________________________________________________________________

tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [1,1382,1439,3] vs. [1,1380,1436,3]
	 [[{{node training/Adadelta/gradients/loss/fix_layer_1_loss/mul_grad/BroadcastGradientArgs}}]]

even though the network is supposed to produce an output whose size matches the input's dimensions.
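For what it's worth, tracing the spatial dimensions through the layers by hand reproduces the two shapes in the error, assuming the MaxPooling2D layers use padding='same' and conv2d_5 keeps the tutorial's default 'valid' padding:

import math

def trace_spatial(dim):
    # Encoder: three MaxPooling2D((2, 2), padding='same') layers, each giving ceil(dim / 2)
    for _ in range(3):
        dim = math.ceil(dim / 2)
    # Decoder: the 'same' convolutions don't change the spatial size
    dim = dim * 2        # up_sampling2d_1
    dim = dim * 2        # up_sampling2d_2
    dim = (dim - 2) * 2  # conv2d_5 with 'valid' 3x3 loses 2 pixels, then up_sampling2d_3
    return dim

print(trace_spatial(1382))  # 1380
print(trace_spatial(1439))  # 1436

The padding assumptions here are mine (taken from the tutorial's code); with 'valid' pooling the numbers would come out slightly different but still wouldn't line up with the input.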

I can't use padding or cropping, because the padding is done relative to the layer's size, and the size of the input isn't known at compile time either.
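To make that concrete, here is a small illustration (just keras.backend calls, not code from my model) of why there is no constant amount I could hand to ZeroPadding2D or Cropping2D when the model is built:

import keras.backend as K
from keras.layers import Input

x = Input(shape=(None, None, 3))
print(K.int_shape(x))  # (None, None, None, 3): the static height/width are unknown at build time
# K.shape(x) does hold the real sizes, but only as a tensor evaluated at run time,
# and ZeroPadding2D / Cropping2D only accept plain Python integers.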

I tried making a custom layer, but since the output size is computed correctly it made no difference.

I've also tried different optimizers, so I'm not sure what's going on here.

Any ideas?

I'd like to avoid resizing the images, since that really messes up my dataset.

0 Answers:

There are no answers to this question yet.