I am trying to connect the conv4_3 layer of the VGG16 network, instead of conv5_3, to the RPN of Faster R-CNN. Here is the Python code for the VGG16 network. I changed these lines:
def _image_to_head(self, is_training, reuse=False):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
        net = slim.repeat(self._image, 2, slim.conv2d, 64, [3, 3],
                          trainable=False, scope='conv1')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3],
                          trainable=False, scope='conv2')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3],
                          trainable=is_training, scope='conv3')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                          trainable=is_training, scope='conv4')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                          trainable=is_training, scope='conv5')
    self._act_summaries.append(net)
    self._layers['head'] = net
    return net
to:
def _image_to_head(self, is_training, reuse=False):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
        net = slim.repeat(self._image, 2, slim.conv2d, 64, [3, 3],
                          trainable=False, scope='conv1')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3],
                          trainable=False, scope='conv2')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3],
                          trainable=is_training, scope='conv3')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                          trainable=is_training, scope='conv4')
    self._act_summaries.append(net)
    self._layers['head'] = net
    return net
As shown above, I removed the conv5 and pool4 layers. Since my objects are small, I expected better results this way, but the results got worse. Do I need to add a deconv layer at the end of conv4, or is there another way?
Thanks
Answer 0 (score: 1)
There are also other methods for reducing the size of the bottleneck features. Instead of adding a deconv layer, you could reconsider the pooling layers:
Average pooling: depending on the window size, it returns the average of that window. So a (2, 2) window over the values [3, 2, 4, 3] produces a single value: 3.
MaxPool: depending on the window size, it returns the maximum of that window. So a (2, 2) window over the values [3, 2, 4, 3] produces a single value: 4 (see the sketch below).
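To make the arithmetic concrete, here is a minimal runnable sketch (plain TensorFlow 1.x, consistent with the slim code above, not part of the original repo) of both poolings over that single (2, 2) window:

import tensorflow as tf

# One 2x2 window with values [3, 2, 4, 3], shaped NHWC: [1, 2, 2, 1].
x = tf.constant([[[[3.0], [2.0]],
                  [[4.0], [3.0]]]])

avg = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                     padding='VALID')  # mean of the window -> 3.0
mx = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                    padding='VALID')   # max of the window -> 4.0

with tf.Session() as sess:
    print(sess.run([avg, mx]))  # both outputs have shape (1, 1, 1, 1)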
See pooling layers here.
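If you do still want to try the deconv idea from the question, here is a minimal sketch of how it could slot into the modified _image_to_head (TF-slim, matching the code above; the scope name 'upconv4' and the kernel/stride choice are hypothetical, not from the original repo):

def _image_to_head(self, is_training, reuse=False):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
        # ... conv1 through pool3 exactly as in the modified code above ...
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                          trainable=is_training, scope='conv4')
        # Hypothetical addition: a learned 2x upsampling of the conv4_3
        # output. A [4, 4] kernel with stride 2 doubles height and width.
        net = slim.conv2d_transpose(net, 512, [4, 4], stride=2,
                                    trainable=is_training, scope='upconv4')
    self._act_summaries.append(net)
    self._layers['head'] = net
    return net

Note that without pool4, conv4_3 already sits at an effective stride of 8 (versus 16 for conv5_3), and the 2x deconv would bring that down to 4. If your Faster R-CNN implementation hard-codes the feature stride for anchor generation (many do, e.g. as a _feat_stride setting), it must be updated to match, or the RPN anchors will be misplaced.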