Conv2D in Keras and understanding what the lower layers receive

Asked: 2019-05-09 04:17:45

Tags: python tensorflow keras

I have the following code. It has five convolutional layers: the first has 128 (3x3) filters, and the filter count is then halved at each step to 64, 32, 16, and 8. I want to understand the outputs of the first and second layers and how to visualize them. Also, please guide me: when we move to the next layer, the output of the first layer is much larger than the input of the second layer, so how is this managed?
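For reference, the halving pattern described above is just integer division by successive powers of ds = 2 (a quick sketch in plain Python):

```python
ds = 2
# Filter counts for the five layers: 128 divided by 1, 2, 4, 8, 16.
filters = [128 // (ds ** i) for i in range(5)]
print(filters)  # [128, 64, 32, 16, 8]
```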

In the last line, when we concatenate, in which order is the result stored (is the output 8x8)? Since I am new to Python, please help me with this. The code is as follows.

from tensorflow.keras.layers import Input, Conv2D, concatenate

ds = 2

t1 = Input((None, None, 1))  # t1 is a 35x35 grayscale image
conv1a = Conv2D(128, (3, 3), activation='relu', padding='same')(t1)
conv2a = Conv2D(128 // ds, (5, 5), activation='relu', padding='same')(conv1a)
conv3a = Conv2D(128 // (ds * 2), (3, 3), activation='relu', padding='same')(conv2a)
conv4a = Conv2D(128 // (ds ** 3), (5, 5), activation='relu', padding='same')(conv3a)
conv5a = Conv2D(128 // (ds ** 4), (3, 3), activation='relu', padding='same')(conv4a)

fl = Input((None, None, 1))  # fl is also a 35x35 grayscale image
conv1b = Conv2D(128, (3, 3), activation='relu', padding='same')(fl)
conv2b = Conv2D(128 // ds, (5, 5), activation='relu', padding='same')(conv1b)
conv3b = Conv2D(128 // (ds * 2), (3, 3), activation='relu', padding='same')(conv2b)
conv4b = Conv2D(128 // (ds ** 3), (5, 5), activation='relu', padding='same')(conv3b)
conv5b = Conv2D(128 // (ds ** 4), (3, 3), activation='relu', padding='same')(conv4b)

concat = concatenate([conv5a, conv5b], axis=-1)
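A sketch of how the intermediate outputs can be inspected (assuming tensorflow.keras; the probe model and the stand-in tensors below are mine, not from the question). It also illustrates both points asked about: each Conv2D filter spans all channels of its input, which is how a 64-filter layer consumes a 128-channel tensor, and concatenate along axis=-1 stacks channels, first input first.

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

# Rebuild the first two layers of one branch.
t1 = Input((None, None, 1))
conv1a = Conv2D(128, (3, 3), activation='relu', padding='same')(t1)
conv2a = Conv2D(64, (5, 5), activation='relu', padding='same')(conv1a)

# A "probe" model that exposes both intermediate activations as outputs.
probe = Model(inputs=t1, outputs=[conv1a, conv2a])

x = np.random.rand(1, 35, 35, 1).astype('float32')  # one 35x35 grayscale image
a1, a2 = probe.predict(x)
print(a1.shape)  # (1, 35, 35, 128): 'same' padding keeps the 35x35 spatial size
print(a2.shape)  # (1, 35, 35, 64)
# a1[0, :, :, k] is the k-th feature map of the first layer; it can be shown
# with matplotlib's imshow. Each filter in conv2a has shape (5, 5, 128): it
# spans all 128 input channels, so the second layer's input is not "too large".

# Channel order after concatenating on axis=-1: first input's channels first.
ta = np.zeros((1, 35, 35, 8), dtype='float32')  # stands in for conv5a's output
tb = np.ones((1, 35, 35, 8), dtype='float32')   # stands in for conv5b's output
c = np.concatenate([ta, tb], axis=-1)
print(c.shape)     # (1, 35, 35, 16), not 8x8: the spatial size is unchanged
print(c[0, 0, 0])  # first 8 channels are ta's (0.0), last 8 are tb's (1.0)
```

Note that numpy's concatenate is used here only to show the channel ordering; the Keras concatenate layer in the code above behaves the same way on axis=-1.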

0 Answers:

No answers yet.