# imports needed for this snippet (standalone Keras here; use tensorflow.keras if that is your setup)
from keras.models import Model
from keras.layers import Conv2D, Conv2DTranspose, MaxPooling2D, Dropout, concatenate

# conv2d_block is a small helper defined elsewhere in train.py (not shown here)
def get_unet(input_img, n_filters=16, dropout=0.5, batchnorm=True):
    # contracting path
    c1 = conv2d_block(input_img, n_filters=n_filters * 1, kernel_size=3, batchnorm=batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    p1 = Dropout(dropout * 0.5)(p1)

    c2 = conv2d_block(p1, n_filters=n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    p2 = Dropout(dropout)(p2)

    c3 = conv2d_block(p2, n_filters=n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    p3 = Dropout(dropout)(p3)

    c4 = conv2d_block(p3, n_filters=n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    p4 = MaxPooling2D(pool_size=(2, 2))(c4)
    p4 = Dropout(dropout)(p4)

    c5 = conv2d_block(p4, n_filters=n_filters * 16, kernel_size=3, batchnorm=batchnorm)

    # expansive path
    u6 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(2, 2), padding='same')(c5)
    u6 = concatenate([u6, c4])
    u6 = Dropout(dropout)(u6)
    c6 = conv2d_block(u6, n_filters=n_filters * 8, kernel_size=3, batchnorm=batchnorm)

    u7 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(2, 2), padding='same')(c6)
    u7 = concatenate([u7, c3])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters=n_filters * 4, kernel_size=3, batchnorm=batchnorm)

    u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2), padding='same')(c7)
    u8 = concatenate([u8, c2])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters=n_filters * 2, kernel_size=3, batchnorm=batchnorm)

    u9 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(2, 2), padding='same')(c8)
    u9 = concatenate([u9, c1], axis=3)
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters=n_filters * 1, kernel_size=3, batchnorm=batchnorm)

    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c9)
    model = Model(inputs=[input_img], outputs=[outputs])
    return model
I got this Keras model from here, and I seem to be getting this error:
File "train.py", line 87, in get_unet
u8 = concatenate([u8, c2])
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 256, 184, 32), (None, 256, 185, 32)]
So I printed out each of the tensors and got:
c1: Tensor("activation_2/Relu:0", shape=(?, 512, 370, 16), dtype=float32)
c2: Tensor("activation_4/Relu:0", shape=(?, 256, 185, 32), dtype=float32)
c3: Tensor("activation_6/Relu:0", shape=(?, 128, 92, 64), dtype=float32)
c4: Tensor("activation_8/Relu:0", shape=(?, 64, 46, 128), dtype=float32)
c5: Tensor("activation_10/Relu:0", shape=(?, 32, 23, 256), dtype=float32)
u6: Tensor("dropout_5/cond/Merge:0", shape=(?, 64, 46, 256), dtype=float32)
u7: Tensor("dropout_6/cond/Merge:0", shape=(?, 128, 92, 128), dtype=float32)
u8: Tensor("conv2d_transpose_3/BiasAdd:0", shape=(?, ?, ?, 32), dtype=float32)
What is going on? Why is the second dimension of u8 184, while the second dimension of c2 is 185? And c3's second dimension looks like the result of max-pooling 185 by a factor of 2, floored from 92.5 down to 92 (probably due to a floor operation).
How should I deal with this? Do I have to change the size of the images I feed in, or do I need to do something special in the transposed convolutions? Do I need to interpolate the extra pixel somehow?
Answer 0 (score: 1)
This happens because your second dimension is odd by the time it gets divided by 2 at the c2 stage: you max-pool 185 by 2, which gives 92.5 and is floored down to 92. But when you go back up the other way, you upsample 92 by a factor of 2 and get 184.
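For illustration, here is a minimal sketch of that arithmetic (this is not part of the original code; it assumes tf.keras and a toy tensor with the same 185-pixel width as the printed c2):

import tensorflow as tf
from tensorflow.keras.layers import Input, MaxPooling2D, Conv2DTranspose

x = Input(shape=(256, 185, 32))   # same shape as the printed c2: (None, 256, 185, 32)
p = MaxPooling2D((2, 2))(x)       # 185 / 2 = 92.5 -> floored to 92
u = Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same')(p)  # 92 * 2 = 184
print(p.shape)   # (None, 128, 92, 32)
print(u.shape)   # (None, 256, 184, 32) -- one pixel short of 185, hence the concatenate error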
To avoid this, you can simply zero-pad u8 so that it is compatible with c2, like this:
u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2), padding='same')(c7)
u8 = ZeroPadding2D(padding=((0, 0), (0, 1)))(u8)
u8 = concatenate([u8, c2])
With that padding in place, u8 matches c2 at width 185; c8 then also comes out 185 wide, so u9 upsamples back to 370 and lines up with c1 without any further padding. If you don't want to zero-pad at all, you can instead resize your input images so that the dimensions are powers of 2, or at least can be halved enough times without ever producing an odd number, for example 224 (which can be halved five times before reaching 7).
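As a rough sketch of that second option (just an illustration, not from the original post: it assumes plain NumPy and the 512x370 images implied by the printed shapes, and pads the width up to 384 so it stays even through all four poolings):

import numpy as np

def pad_to_multiple(img, multiple=16):
    # hypothetical helper: zero-pad height/width up to the next multiple of `multiple`
    h, w = img.shape[:2]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    return np.pad(img, ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2))

img = np.zeros((512, 370, 1), dtype=np.float32)   # stand-in for one input image
print(pad_to_multiple(img).shape)                  # (512, 384, 1); 384 halves cleanly: 192, 96, 48, 24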
Hope this helps!