Shapes in pretrained Keras Inception are independent of the shapes required after training

Asked: 2018-05-09 11:44:31

Tags: python tensorflow keras deep-learning

I am trying to perform feature visualization on a Keras InceptionV3() model using the Lucid toolkit (https://github.com/tensorflow/lucid).

When I inspect the inner layers of the network after training, they have the following shapes:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 300, 400, 3)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 149, 199, 32) 864         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 149, 199, 32) 96          conv2d_1[0][0]                   
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 149, 199, 32) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 147, 197, 32) 9216        activation_1[0][0]               

...
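For reference, here is a minimal sketch of how a model with fixed shapes like the above is presumably created; the weights=None, include_top=False, and input_shape=(300, 400, 3) arguments are assumptions inferred from the summary:

from tensorflow.keras.applications import InceptionV3

# Assumed setup: passing a fixed input_shape pins the spatial dimensions
# of every layer, which is why the summary prints concrete sizes.
model = InceptionV3(weights=None, include_top=False, input_shape=(300, 400, 3))
model.summary()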

In contrast, the model with pretrained ImageNet weights has no such restriction:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, None, None, 3) 0                                           
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, None, None, 32) 864        input_1[0][0]                   
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 32) 96         conv2d_1[0][0]                  
__________________________________________________________________________________________________
activation_1 (Activation)       (None, None, None, 32) 0          batch_normalization_1[0][0]     
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, None, None, 32) 9216       activation_1[0][0]              
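
For comparison, a sketch of how the pretrained variant is typically loaded; when no input_shape is passed, Keras builds the graph with dynamic spatial dimensions, which print as None:

from tensorflow.keras.applications import InceptionV3

# No input_shape given: the spatial dimensions stay dynamic (None).
pretrained = InceptionV3(weights='imagenet', include_top=False)
pretrained.summary()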

So the problem is that the visualization works when I use the pretrained network, but not with my own trained one.

Does anyone know why there is no restriction on the layer shapes here? At the very least, the number of filters in each conv layer should be fixed.
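Note that the filter count is still recorded on each layer object even when the printed output shape is all None; a minimal check, assuming the pretrained model from the sketch above:

from tensorflow.keras.layers import Conv2D

# Conv2D layers store their filter count independently of the (dynamic) output shape.
for layer in pretrained.layers[:5]:
    if isinstance(layer, Conv2D):
        print(layer.name, layer.filters)  # e.g. "conv2d_1 32"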

Thanks for your help,

0 Answers:

There are no answers yet.