How can I build the recurrent convolutional layers from R2U-Net in Keras?

Time: 2020-05-01 21:41:43

Tags: python tensorflow keras tensorflow2.0

I am trying to rebuild the R2U-Net from this paper. I found two GitHub repositories that implement the network (1, 2). I tried to write out a small part of the network in Keras. The problem I run into is that when I use the same number of filters as those networks, I end up with a different number of trainable parameters.

I have three questions about this:

  1. Did I make a mistake in the Unfolded_Recurrent_Convolutional_layer? (And if not, what explains the difference?)

  2. The paper says you can use either concatenation or addition, but that changes the number of trainable parameters, right?

  3. Do I need batch normalization between the conv + add steps?
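For question 2, the difference between add and concatenate can be checked with simple arithmetic on the Conv2D parameter count (a minimal sketch; the channel counts match block 1 below, where each layer has 8 filters):

```python
# Trainable parameters of a Conv2D layer:
# kernel_h * kernel_w * in_channels * out_channels + out_channels (bias)
def conv2d_params(kh, kw, in_ch, out_ch):
    return kh * kw * in_ch * out_ch + out_ch

# With add(): the residual sum keeps 8 channels, so the next 3x3 conv sees 8 inputs.
params_add = conv2d_params(3, 3, 8, 8)      # 584
# With concatenate(): the channels stack to 16, so the next conv sees 16 inputs.
params_concat = conv2d_params(3, 3, 16, 8)  # 1160
print(params_add, params_concat)
```

So yes, swapping addition for concatenation doubles the input channels of every following conv and therefore changes the parameter count, which would explain at least part of the mismatch against the reference repositories.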

The input is a normalized grayscale image of size (832, 576, 1). I am using tf version 2.2.0 with Python 3.7.7. The small part of the network looks like this:

import tensorflow as tf
inputs = tf.keras.layers.Input((832, 576, 1))

# block 1 ----------------------------------------------------------------------------------------
one = tf.keras.layers.Conv2D(8, (1, 1), padding="same")(inputs)
c1 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(one)
addition = tf.keras.layers.add([c1, one])   # Do I need to concatenate?
# do I need a batch normalization here?
c2 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition)
addition2 = tf.keras.layers.add([c2, one])  # Do I need to concatenate?
# do I need a batch normalization here?
c3 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition2)
# Unfolded_Recurrent_Convolutional_layer  --------------------------------------------------------
c4 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(c3)
addition3 = tf.keras.layers.add([c4, addition2])  # Do I need to concatenate?
c5 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition3)
addition4 = tf.keras.layers.add([c5, addition2])  # Do I need to concatenate?
c6 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition4)
# Unfolded_Recurrent_Convolutional_layer2 --------------------------------------------------------
addition5 = tf.keras.layers.add([c6, one])
p1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(addition5)


# block 2 ----------------------------------------------------------------------------------------
two = tf.keras.layers.Conv2D(16, (1, 1), padding="same")(p1)
c7 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(two)
addition6 = tf.keras.layers.add([c7, two])
c8 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(addition6)
addition7 = tf.keras.layers.add([c8, two])
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(addition7)
# Unfolded_Recurrent_Convolutional_layer  --------------------------------------------------------
c10 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(c9)
addition8 = tf.keras.layers.add([c10, addition7])
c11 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(addition8)
addition9 = tf.keras.layers.add([c11, addition7])
c12 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding="same")(addition9)
# Unfolded_Recurrent_Convolutional_layer  --------------------------------------------------------
addition10 = tf.keras.layers.add([c12, two])

# decoder ---------------------------------------------------------------------------------------
u1 = tf.keras.layers.Conv2DTranspose(8, (2, 2), strides=(2, 2), padding="same")(addition10)
# tf.keras.layers.UpSampling2D could also be used; it is less computationally expensive
u2 = tf.keras.layers.concatenate([u1, addition5])

# block 6 ---------------------------------------------------------------------------------------
six = tf.keras.layers.Conv2D(8, (1, 1), padding="same")(u2)
c31 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(six)
addition26 = tf.keras.layers.add([c31, six])
c32 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition26)
addition27 = tf.keras.layers.add([c32, six])
c33 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition27)
# Unfolded_Recurrent_Convolutional_layer  -------------------------------------------------------
c34 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(c33)
addition28 = tf.keras.layers.add([c34, addition27])
c35 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition28)
addition29 = tf.keras.layers.add([c35, addition27])
c36 = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding="same")(addition29)
# Unfolded_Recurrent_Convolutional_layer  -------------------------------------------------------
p6 = tf.keras.layers.add([c36, six])

c55 = tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid", padding='same')(p6)
model = tf.keras.models.Model(inputs=inputs, outputs=c55)


model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.summary()
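For comparison, the repeated conv + add pattern I wrote out by hand could be factored into a reusable block with a loop. This is only a sketch of my reading of the recurrent layer (the function name, the choice of t = 2 recurrence steps, and using addition rather than concatenation are my assumptions, not something the paper pins down for me):

```python
import tensorflow as tf

def recurrent_conv_block(x, filters, t=2):
    """Unfolded recurrent convolutional layer (sketch).

    A 1x1 conv first sets the channel count so the residual add is
    shape-compatible, then the 3x3 conv + add step repeats t+1 times.
    """
    one = tf.keras.layers.Conv2D(filters, (1, 1), padding="same")(x)
    rec = one
    for _ in range(t + 1):  # first pass plus t recurrent passes
        rec = tf.keras.layers.Conv2D(filters, (3, 3), activation="relu",
                                     padding="same")(rec)
        rec = tf.keras.layers.add([rec, one])
    return rec

# tiny smoke test on a small input
inputs = tf.keras.layers.Input((64, 64, 1))
out = recurrent_conv_block(inputs, 8, t=2)
model = tf.keras.models.Model(inputs, out)
model.summary()
```

With a block like this, each encoder/decoder stage becomes one call instead of a dozen hand-written lines, which also makes it easier to keep the parameter count consistent between stages.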

This post about the network is very good: link

I have added some pictures of the network:

Unfolded_Recurrent_Convolutional_layer:

Unfolded_Recurrent_Convolutional_layer

The network r2_unet:

The network r2_unet

PS. This is my first post, so I hope I have done it correctly :0, and I know it would be better to put this in a loop, but first I want to understand it fully ;)

0 Answers:

There are no answers yet.