What does * (asterisk) do when applied to a TensorFlow layer?

Time: 2017-01-15 02:20:35

Tags: python tensorflow deep-learning

I'm currently reading a Python implementation of Inception-ResNet to help build the model in a different language (Deeplearning4j). This implementation is Inception-ResNet-v1, and I'm trying to figure out how it implements the ResNet-style residual shortcut.

The relevant line in the code block below is `net += scale * up`:

# Inception-Resnet-A
def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
    """Builds the 35x35 resnet block."""
    with tf.variable_scope(scope, 'Block35', [net], reuse=reuse):
        with tf.variable_scope('Branch_0'):
            tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1')
        with tf.variable_scope('Branch_1'):
            tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
            tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3')
        with tf.variable_scope('Branch_2'):
            tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
            tower_conv2_1 = slim.conv2d(tower_conv2_0, 32, 3, scope='Conv2d_0b_3x3')
            tower_conv2_2 = slim.conv2d(tower_conv2_1, 32, 3, scope='Conv2d_0c_3x3')
        mixed = tf.concat(3, [tower_conv, tower_conv1_1, tower_conv2_2])
        up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
                         activation_fn=None, scope='Conv2d_1x1')
        net += scale * up
        if activation_fn:
            net = activation_fn(net)
    return net

`scale` is a double between 0 and 1. `up` is a stack of layers, the last of which is a convolutional layer.

What exactly happens in `scale * up`?

1 answer:

Answer 0 (score: 1)

Every element of the tensor `up` is multiplied by the scalar value `scale` (the scalar is broadcast across the whole tensor). `net` is then redefined as the element-wise sum `net + scale * up`. For that addition to work, `net` must have the same dimensions as `up`, which is why the preceding 1x1 convolution projects `mixed` back to `net.get_shape()[3]` channels.
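The scalar-times-tensor behavior can be sketched in NumPy, whose broadcasting rules for scalars match TensorFlow's (small example arrays chosen for illustration, not taken from the model):

```python
import numpy as np

# `net` stands in for the shortcut branch, `up` for the residual branch.
net = np.array([[1.0, 2.0], [3.0, 4.0]])
up = np.array([[10.0, 20.0], [30.0, 40.0]])
scale = 0.5

# `scale * up` broadcasts the scalar over every element of `up`;
# the `+` then adds the two tensors element-wise, so shapes must match.
net = net + scale * up
print(net)  # [[ 6. 12.] [18. 24.]]
```

The same `+` and `*` operators are overloaded on TensorFlow tensors, so `net += scale * up` inside the block builds the corresponding scalar-multiply and element-wise-add ops in the graph.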